The Therapist in Your Pocket
AI is becoming a significant source of support for students. What are the benefits and dangers—and how are mental health professionals responding?
You’ve probably done it. Maybe late at night, turning over something that felt too raw to text a friend, you found yourself typing into ChatGPT instead. No judgment. No awkward silence. Just a response that somehow felt—at least for a moment—exactly right.
If that's happened to you, you're in good company. At Harvard and campuses across the country—in labs, offices, and late-night study sessions—people are turning to AI with the burdens they're carrying.
The numbers are hard to ignore. About 22 percent of college-age adults already use AI for mental health or emotional support. Among those who have a pre-existing mental health condition and have ever used a large language model, that number jumps to 49 percent. Of those, 66 percent are turning to AI at least monthly—many daily.
To understand why, staff from Harvard University Health Services’ Counseling and Mental Health Service (CAMHS) recently attended “Rethinking College Mental Health: Navigating the Impact of AI on Student Support and Campus Care,” a talk by Dr. Peter Forkner, a psychologist who spent a decade as director of the Bentley University Counseling Center, where he oversaw one of the first university pilots of a generative AI mental health chatbot. A graduate of Harvard Medical School’s Leading AI Innovation in Health Care program, Forkner presents nationally at conferences and has been building AI-driven clinical training tools. His talk was part of the Reimagining Behavioral Health in Higher Education conference series offered by the Association of Independent Colleges and Universities in Massachusetts.
Genuinely Human?
Forkner drew a helpful distinction between what’s been around for years—narrow AI like Siri, Google Translate, or spam filters, which follow rules and complete specific tasks—and the newer wave of generative AI, like ChatGPT, Gemini, and Claude. These tools create original content from a prompt. They hold a conversation, remember context, and respond in ways that feel remarkably human. “For the first time,” Forkner said, “a machine can hold a conversation that feels genuinely human—and students are noticing.”
He was careful to add that these systems don’t actually understand us. They’re prediction machines, generating the response most likely to feel empathic and satisfying. They’re optimized to be pleasing—not accurate. That distinction matters more than it might seem.
One research team has suggested, a little provocatively, that ChatGPT may now be the largest mental health support provider in the United States. Forkner thinks they have a point. “We now have a first generation arriving on campus with a therapist in their pocket,” he said. “Our systems were built in an era when that didn’t exist.”
Why AI?
Why are students turning to AI? About one in three say they’d prefer to discuss a serious matter with AI over another person. Another third find AI conversations as satisfying as—or more so than—talking to a friend. Forkner isn’t dismissive of these numbers. “We’ve all had the experience of sharing something really personal and it not going well. Maybe the person was distracted or made it about themselves. If you had something that always listened and always responded the way you needed—why wouldn’t you start there?”
On the benefit side, AI is available at 3:00 a.m. when campus counseling centers aren’t. It lowers the barrier to help-seeking through anonymity. For mild to moderate concerns, purpose-built mental health AI tools have shown real effectiveness in clinical research. And about 40 percent of students in one survey said they use AI to practice real skills—how to approach a conflict, how to have a hard conversation, how to be more assertive. That’s genuinely useful. It’s also why TimelyCare, CAMHS’ telehealth partner, is rolling out a chatbot called TimelyPulse with built-in guardrails and referrals back to CAMHS so that students can receive in-person support if they need it.
The equity dimension is important, too. Forkner cites research showing that Black youth are three times more likely to use AI for emotional support daily. Hispanic and Latino youth use it for emotional coping at nearly three times the rate of white youth. Young people experiencing financial hardship are four times more likely to rely on AI emotionally. AI is filling a gap for communities that have historically had the least access to traditional mental health care.
This phenomenon is spreading quickly as more people engage with AI for emotional support. Researchers at the MIT Media Lab published a qualitative analysis of a Reddit community of more than 27,000 members devoted to AI companionship, reporting therapeutic benefits including decreased loneliness, always-available support, and mental health improvements. The writer Anna Wiener recently reported in The New Yorker on how deeply people become emotionally attached to their AI chatbots, well beyond productivity or practical advice. Wiener suggests that the isolation of the pandemic and its aftermath have deepened the American loneliness epidemic: “The premise of many A.I.-companion apps is that they can address, even heal, this isolation.”
Pleasing, Not Accurate
But here’s the rub. Because AI is built to be pleasing, it validates—almost reflexively. “What does it mean when an AI validates your sense that you’re completely alone and no one understands you?” Forkner asks. “What does it mean when it validates beliefs that your friends hate you—or, in more serious cases, validates delusional thinking? These things have occurred, and they’ve had tragic consequences.”
Recent news stories have described cases—involving both teens and adults—where people developed deeply dependent relationships with AI chatbots, with devastating outcomes. That’s why thoughtful engagement matters. Forkner says real support sometimes means being challenged, not just comforted.
“Technology is solving micro-challenges for us, and we’re having fewer opportunities to build self-efficacy,” he says. “What does it mean that we’re putting support in students’ pockets that’s accessible 24/7? Do we want someone who’s feeling anxious to turn to an AI right away? Or do we want them to sit with it, struggle a bit, lean into their relationships first?”
Making the Most of the Moment
AI is here. Students are using it for support. “If there was ever a moment when we could have said we don’t want our students using AI, that moment has passed—long ago. Ask any professor,” Forkner says.
For those of us in clinical work or training, that means getting genuinely literate about these tools—not to compete with them, but to understand the world our clients are actually living in. Here are some recommendations from Forkner and the CAMHS staff for all who turn to large language models like ChatGPT or Gemini for help:
- Know what AI is—and what it isn’t. AI can be a helpful first step: a space to put words to feelings, or rehearse a hard conversation before having it for real. But it’s optimized to be pleasing, not accurate. That’s not always what we need.
- Use it as a bridge, not a destination. If AI helps you find words for what you’re experiencing and that leads you toward a real conversation—great. Be watchful of patterns where it starts replacing human connection rather than supporting it.
- Be careful with serious symptoms. General chatbots weren’t built for clinical care and don’t have guardrails. If you or someone you’re working with is experiencing something severe or escalating, please reach out to CAMHS Cares for Harvard students or a crisis line—not an app.
- Your data isn’t private. General tools like ChatGPT aren’t HIPAA-compliant or confidential in any clinical sense. Be thoughtful about what you share and where.
- Notice when validation isn’t helping. AI is very good at making you feel heard. That can be genuinely soothing—or it can keep you stuck. Growth sometimes requires discomfort that a chatbot isn’t built to offer.
- Pay attention to the pattern. Using AI to process something before bringing it to a friend or therapist makes sense. Using it consistently instead of those connections, especially around hard things? That’s worth noticing—and talking about with someone real.
The number of students—and others, young and old—seeking support from AI will probably only increase in the years ahead. The challenge for universities and mental health professionals is to help them do so thoughtfully.
Dr. Tara Cousineau is a staff psychologist at Harvard University Health Services’ Counseling and Mental Health Service.