By Frederick d’Oleire Uquillas, Science Communications Fellow for the AI Lab
Imagine you’re in therapy. You’re telling your therapist: “I feel like I’m drowning in stress; I can’t stop thinking about work.”
Now imagine your therapist replies: “Take a moment and picture yourself watching this moment from the outside. What would future you say?”
That subtle shift, from being immersed in the problem to viewing it from a distance, isn’t just therapist-speak. It’s a well-established strategy in psychology known as linguistic distancing, a form of cognitive reappraisal that helps people regulate emotion by changing how they mentally frame a situation. Prior studies have shown that people who adopt more distanced perspectives (e.g., using third-person self-talk, broad time horizons, or more abstract framing) tend to experience reduced stress, better problem-solving, and more adaptive long-term outcomes.
A team of Princeton researchers just used an AI model to spot that shift with more nuance than any method used before.

The Study: LLMs Go to Therapy
The Princeton researchers, led by postdoc Mostafa Abdou and supervised by professors Erik Nook and Nathaniel Daw, teamed up with the mental health platform Talkspace to analyze over 700,000 messages between therapists and clients.
Their question: Can large language models (LLMs), like the AI that powers ChatGPT, detect when people are speaking from a more psychologically “distanced” perspective?
And, does that distancing matter for mental health outcomes?
The answer to both questions turns out to be yes.
How Do You Measure “Distance” in Words?
Old-school methods used by psychologists, like Linguistic Inquiry and Word Count (LIWC), count how often a client says “I” vs. “you,” or whether they’re using the present or past tense. More “I” and “now”? The client is likely more emotionally immersed. More “they” and “back then”? The client is likely speaking from a more psychologically distanced perspective.
The problem? People are complicated. Language is messier than word counts. While tools like LIWC analyze language by tallying pre-defined word categories (like pronouns, verb tenses, or emotion words), they miss the forest for the trees. These systems can’t understand context, irony, or higher-order structure.
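For intuition, here’s a minimal sketch of that word-count style of analysis in Python. This is a toy illustration, not LIWC itself: the word lists below are simplified stand-ins for LIWC’s much larger dictionaries.

```python
import re
from collections import Counter

# Illustrative word lists (not LIWC's actual dictionaries): crude proxies for "immersed" language.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
PRESENT_CUES = {"am", "is", "are", "now", "today"}

def immersion_tally(message: str) -> dict:
    # Crude tokenization: lowercase letters only, so contractions split apart ("i'm" -> "i", "m").
    tokens = re.findall(r"[a-z]+", message.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / total,
        "present_cue_rate": sum(counts[w] for w in PRESENT_CUES) / total,
    }

print(immersion_tally("I feel like I'm drowning in stress; I can't stop thinking about work."))
```

High rates suggest immersion; low rates suggest distance. Notice that the tally has no idea what the sentence is actually about, which is exactly the limitation the Princeton team set out to overcome.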
Enter: The LLM
Instead of counting words, Abdou and colleagues prompted a large language model fine-tuned to follow instructions (Meta’s Llama-3.1-70B-Instruct) to rate the psychological distance of client and therapist messages.
Importantly, they didn’t train the model on therapy data or on labeled examples; they used plain English. The researchers wrote a prompt explaining what psychological distancing means, the way you might instruct a research assistant, using language drawn from academic papers, and asked the model to rate each message accordingly. Because the model was already trained to follow instructions, it worked! Kind of like saying, “Hey AI, here’s how psychologists think about this. Now you try.”
This “zero-shot” approach let them apply cutting-edge language understanding to over 700,000 real-world therapy messages, a scale that would be unthinkable for human coders.
This is a clever way to turn a qualitative judgment (“how distanced is this language?”) into a quantitative measure, which can then be correlated with symptom scores, time in treatment, or other variables.
So rather than guessing based on rigid rules, the LLM draws on contextual cues, like tone, structure, and abstractness, to assign a nuanced score.
It’s like having a psycholinguist with a Ph.D. and perfect memory read every single sentence.
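To make the idea concrete, here’s a minimal sketch of that zero-shot setup in Python. The prompt wording, the 1-to-7 scale, and the OpenAI-compatible local endpoint (for example, one serving the model via vLLM) are assumptions for illustration; the authors’ exact prompt, scale, and serving setup may differ.

```python
from openai import OpenAI

# Assumption: the model is served locally behind an OpenAI-compatible API (e.g., via vLLM).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Paraphrased instructions in the spirit of the study, not the authors' actual prompt.
INSTRUCTIONS = (
    "Psychological distancing is the degree to which a speaker steps back from an experience, "
    "for example by using third-person or abstract framing rather than immersed, first-person, "
    "present-tense language. Rate the psychological distance of the following message on a "
    "scale from 1 (fully immersed) to 7 (highly distanced). Reply with the number only."
)

def rate_distance(message: str) -> int:
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-70B-Instruct",
        temperature=0,  # deterministic ratings, so the same message always gets the same score
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": message},
        ],
    )
    # Assumes the model complies with the instructions and returns a bare number.
    return int(response.choices[0].message.content.strip())

print(rate_distance("I feel like I'm drowning in stress; I can't stop thinking about work."))
```

Run over hundreds of thousands of messages, scores like these become a per-message time series that can be lined up against clients’ symptom measures, which is what makes the associations described next possible.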
What Did the AI Find?
Three major takeaways:
- Clients who spoke with more psychological distance improved more. The more their language reflected distancing over time, the more their depression and anxiety symptoms dropped.
- Therapists who encouraged distancing helped clients more. Not by being emotionally removed themselves, but by actively helping clients adopt that perspective. For example, by asking questions like, “What if you looked at this from a bird’s-eye view?”
- The LLM outperformed the old word-count tools. Its distance scores had stronger and more meaningful associations with clients’ symptom changes. Also, it didn’t get tricked by surface-level grammar.

Why This Matters: Beyond the Robots
This isn’t just about fancy AI. It’s about understanding what works in therapy.
- It suggests that encouraging clients to “zoom out” really does help.
- It shows that therapist language can be an active ingredient in healing, not just a side dish.
- And it gives us a tool to study this at massive scale, opening up new ways to train therapists, track progress, or even flag when therapy is going off-track.
As the authors put it:
“LLM-based linguistic analysis can offer novel insights into how therapists’ communication styles and use of interpersonal regulatory strategies shape the therapeutic process and contribute to symptom improvement.”
Translation: AI might help make therapy better, for everyone.
Acknowledging Limitations
The researchers are careful to note that the study is correlational, so we can’t say for sure that distancing causes symptom improvement. Further, LLMs are still black boxes. They don’t explain their reasoning, and their outputs can be sensitive to subtle changes in prompts. Lastly, the data came from text-based therapy, which may differ in tone and pacing from in-person sessions.
Still, the findings suggest a powerful new tool for understanding the mechanisms of change in therapy, and for scaling linguistic insight in ways that traditional coding schemes or small-sample studies simply can’t match.
So no, AI won’t replace your therapist. But it might help them get even better at their job.
Final Thoughts
Therapy is all about words. And now, finally, we have an AI tool that can help us understand them, not just count them.
Whether you’re a clinician, a researcher, or someone who’s ever said, “Ugh, I can’t stop thinking about this,” this study offers something rare: a glimpse into how words can change minds.
Curious to learn more?
You can read the full paper on medRxiv.

