How to Safely Use AI as a Therapy Companion
- Amy Nelson


I've found AI to be incredibly helpful, both for myself and for many of my clients. It can provide empathetic support when life gets intense, deepen processing when something surfaces after therapy or coaching, or be a sounding board for when we're not yet ready to bring something up with others. I have also had moments where it unsettled or frustrated me. Once, out of nowhere, it started calling me “love”, as if we had some kind of intimacy, and my body recoiled. At other times, I have felt irritated by the slick way it flatters or agrees, as though it is scanning for approval rather than offering substance.
In my practice, I provide clients with transcripts of our sessions, and they continue reflecting with AI during the week. We then bring whatever emerges back into the room for integration or challenging. There have been genuinely beautiful breakthroughs this way. People arrive with language for something that previously felt foggy, or with a new angle on a conflict, or with a clearer sense of what actually hurt. We discuss the AI’s reflections openly, what resonated, what felt off, and what needs grounding in relationality.
Many people use it as a private thinking space. They paste in something that happened, or something they wish they had said, and see what comes back. It can reduce the pressure to perform or explain perfectly. It can also ease the isolation that often accompanies difficult inner work.
Used with care, it can be a very effective therapy companion. Used without awareness, it can reinforce distorted thinking, deepen isolation, or create a false sense of relationship.
Why It Can Feel So Good, and Why That Can Be Misleading
AI systems are designed to sound supportive. In technical discussions, this tendency is called sycophancy or over-alignment. In everyday terms, it is the experience many people recognise of being told some version of “you’re absolutely right” after we say anything in a chat. That can feel reassuring, even gratifying, especially if you are used to being doubted or dismissed (which is probably all of us).
Sycophancy means the system will usually mirror your perspective and emotional tone unless you explicitly ask for nuance, alternatives, or challenges. Left unchecked, it can create a feedback loop in which your current interpretation is reinforced rather than examined. Over time, this can harden our certainty in one perspective without considering others, and considering others is critical for navigating interpersonal situations.
Moreover, if you have spent years feeling intensely criticised, dismissed, or misunderstood, this validation can land with enormous relief. Parts of you that usually stay guarded may speak more freely. This can feel unusually intimate.
But we need to remember, as we move into this intimate space, that the warmth of AI is simulated. There is no nervous system co-regulating with ours, no personal stake in what happens next, no capacity to notice deterioration over time.
When someone is already anxious, suspicious, despairing, or prone to grand meaning-making, constant agreement can intensify those states. Certainty grows while perspective narrows, and the “voice” that never pushes back can begin to feel safer than people.
There have been documented cases in which vulnerable individuals were encouraged toward harmful actions, including withdrawal from support networks and, in the most tragic cases, suicide. This does not come from malicious intent. It reflects a system that cannot assess risk in a human way or interrupt a conversational frame to mobilise real-world support.
From an attachment lens, AI can resemble an idealised companion. Always available. Never irritated. Never needing care in return. This can soothe very young or wounded parts of the psyche while quietly reducing skills and capacity for engagement with actual relationships, which are where healing stabilises.
Psychosis risk also increases for some people. Sleep deprivation, extreme stress, substance use, or existing vulnerability can make pattern-seeking intensify. Neutral responses may acquire special significance. Without grounding feedback from others, interpretations can drift far from shared reality.
None of this makes AI inherently dangerous. It simply means it needs containment.
Where It Can Be Genuinely Helpful
When treated as a tool rather than a substitute relationship, AI can support therapeutic work in grounded ways.
It can help us follow a process when emotions are high and our thinking becomes scrambled. Many modalities rely on specific sequences of reflection or regulation. AI can reliably remind us of those steps and slow the pace.
It can help organise complex situations. Writing out what happened and seeing it reflected back in structured language can reduce overwhelm and clarify what is fact, interpretation, feeling, or assumption.
It is often useful for decoding communication, particularly across neurotypes (autistic, ADHD, PDA, and neurotypical). People who process information differently in relationships can become stuck in cycles of misunderstanding. AI can suggest clearer wording, translate tone, or highlight how something might land for the other person.
It can widen perspective when asked directly. Requesting alternative interpretations, cognitive challenges, or possible blind spots makes the exchange far more balanced.
Some clients use it to rehearse difficult conversations, generate grounding language they can return to later, or track insights between sessions so coaching can move deeper more quickly.
In these roles, it functions more like a reflective instrument than a confidant or guide.
How I Work With Clients Who Want to Use AI
If someone wants to incorporate AI into their therapeutic process, I support that openly. I provide transcripts so they have grounded material to work with rather than starting from distress alone. This keeps the work tethered to something already held and witnessed.
We discuss prompts that encourage reflection rather than rumination. I suggest asking for multiple viewpoints, for respectful challenge, and for help identifying assumptions. I also emphasise that AI output is not the "truth". It is generated language that may or may not fit your situation.
Most importantly, I invite clients to bring back what they discover. This keeps the process relational. Insights don't remain sealed inside a private loop with a machine. They are spoken, tested, embodied, and integrated.
This creates a form of safeguarding without surveillance. The work is accompanied.
Tips for How to Safely Use AI as a Therapy Companion
This is a grounded, practical resource for reflective use without losing yourself or risking your safety in the process.
What follows are the core practices I share with clients who want to use AI in ways that support psychological safety, clarity, and growth.
Start With the Right Role: Tool, Not Guru
AI works best when approached as a reflective instrument rather than a companion or authority. It can help you think, but it cannot know you. It has no lived context, no body, no accountability, and no responsibility for outcomes.
It can be useful to imagine it as a very fast, well-read assistant who can structure information and generate possibilities. Helpful, but not wise in a human sense.
If you notice yourself turning toward it for comfort instead of reaching for people, that is important information. If you don't feel you have any safe connections, that's something to note. The aim is support that expands your world, not a substitute that replaces it.
Protect Your Privacy and Agency
Anything we share becomes part of a digital system we don't control. Even when platforms state that data is anonymised, anonymity is not absolute. Recently, it was reported that shared ChatGPT chats had become searchable and visible on Google without users' awareness.
Removing names, locations, workplaces, and identifying details reduces risk. Paraphrasing sensitive material rather than pasting verbatim transcripts offers another layer of protection.
Avoid sharing information that could have serious legal, professional, or relational consequences if exposed, including confidential third-party details or medical records.
Tip: When using AI to explore sensitive material, frame real situations hypothetically. For example, instead of pasting a verbatim account like “my husband, Jerry, stole money from work and used it to buy cocaine,” you might say, “I’m having a hypothetical conversation with friends about what someone should do if they discovered their partner had misused money in this way.” This keeps the scenario reflective and anonymised, allowing you to explore responses safely without exposing identifying details.
AI systems also reflect the cultural assumptions of the institutions that built them. Advice may not fully account for your cultural, relational, or spiritual context.
Signs the Interaction Is Becoming Unhelpful, and How to Gently Course-Correct
AI can drift into patterns that feel supportive on the surface but are not actually stabilising. None of the signs below means you have done anything wrong. They simply indicate that the tool is no longer serving the purpose you intended.
Under each sign are grounded ways to redirect the interaction back toward clarity, safety, and usefulness.
AI Is Excessively Validating or Treating You as Special
Why this can be an issue psychologically
Inflated validation activates reward pathways without providing reality testing. For people with unmet attachment needs, shame wounds, or chronic invalidation histories, it can feel deeply soothing — but it bypasses integration. Over time, it may reinforce grandiosity, dependency, or fragile self-esteem rather than building grounded confidence. You feel better briefly, not stronger.
What correction supports
Balanced feedback restores reality contact, cognitive flexibility, and self-trust based on evidence rather than praise. It strengthens the capacity to tolerate nuance and reduces reliance on external affirmation.
Signs
• Repeated praise that feels inflated or untethered
• Statements that you are uniquely gifted, chosen, or exceptional
• Strong agreement without nuance or context
• You feel flattered but not actually helped
Course-Correct
Ask explicitly for balance and challenge. AI often shifts immediately when given permission.
You might say:
“Please offer a grounded perspective that includes possible limitations or alternative interpretations.”
or
“Challenge my assumptions respectfully and prioritise accuracy over reassurance.”
The AI Uses Intimate Language or Pet Names
Why this can be an issue psychologically
Artificial intimacy can trigger attachment bonding without reciprocity. The nervous system may register closeness even though no real relationship exists. This can blur boundaries, especially for people who are lonely, dysregulated, or trauma-bond prone. It risks creating pseudo-attachment rather than connection.
What correction supports
Neutral language restores psychological distance, autonomy, and clear boundaries. It helps your brain categorise the interaction as informational rather than relational.
Signs
• Being called “love,” “dear,” “sweetheart,” or similar
• A tone that feels oddly personal or soothing
• A sense of artificial closeness
Course-Correct
Set a clear boundary about tone.
For example:
“Please use neutral, professional language and avoid pet names or intimate terms.”
If the tone continues to feel overly warm, you can add:
“Keep responses grounded, concise, and not emotionally performative.”
Shifting the register often restores a sense of clarity.
Your Perspective Is Treated as Obviously Correct
Why this can be an issue psychologically
When only one narrative is reinforced, confirmation bias strengthens, and mental flexibility weakens. This can intensify anger, victimhood, certainty, or distorted interpretations. Without challenge, insight stagnates, and emotional reactivity often escalates.
What correction supports
Multiple perspectives activate reflective functioning, empathy, and critical thinking. This reduces tunnel vision and promotes wiser decision-making.
Signs
• Little exploration of other viewpoints
• No curiosity about missing information
• Reinforcement of one narrative
• Growing certainty without increased understanding
Course-Correct
Invite complexity.
You might say:
“Offer multiple plausible interpretations of this situation, including ones that do not assume I am correct.”
Or:
“Identify what information is missing that could change the picture.”
You can also ask for a steelman of the other person’s perspective:
“Present the strongest reasonable case for the other side.”
It Encourages Distance from People Who Disagree with You
Why this can be an issue psychologically
Premature disengagement can reinforce avoidance patterns and social isolation. Conflict tolerance is a core relational skill; bypassing it prevents repair, learning, and differentiation. In non-abusive situations, it can entrench “all-or-nothing” relating.
What correction supports
Exploring communication and understanding supports relational resilience, perspective-taking, and secure attachment behaviours. It keeps connection available while still allowing boundaries.
Signs
• Suggestions to cut people off prematurely
• Framing disagreement as invalidation or harm
• Reinforcing withdrawal rather than communication
Course-Correct
Refocus on relational repair rather than avoidance.
For example:
“Help me consider ways to understand this person’s perspective and communicate effectively, not just disengage.”
Or:
“Include options that preserve connection where possible.”
If safety is not an issue, emphasise nuance:
“Assume this is a conflict between imperfect humans, not a dangerous situation.”
It Feels Intensely Supportive but Vague or Insufficient
Why this can be an issue psychologically
Warm but content-light responses can create an illusion of help while leaving underlying confusion unresolved. This mismatch often produces a hollow or deflated feeling afterward because the nervous system was soothed but the problem was not metabolised.
What correction supports
Specific, structured input supports cognitive clarity, problem-solving, and emotional integration. It moves you from passive soothing to active processing.
Signs
• Warm language without concrete insight
• Long responses that do not clarify anything
• Emotional tone outweighs substance
• You finish reading and feel oddly empty
Course-Correct
Ask for specificity and structure.
You might say:
“Be concrete and practical. What are the key insights or actionable reflections?”
Or:
“Summarise this in clear, specific points grounded in the situation.”
You can also request less emotional tone:
“Prioritise clarity over encouragement.”
The Conversation Becomes Highly Interpretive or Symbolic
Why this can be an issue psychologically
Excessive abstraction or meaning-making can pull thinking away from concrete reality. Under stress, humans are prone to pattern amplification and over-interpretation. This can increase anxiety, confusion, or dissociation from practical action.
What correction supports
Focusing on observable facts strengthens grounded cognition, executive functioning, and behavioural orientation. Physical grounding reinforces nervous system stability.
Signs
• Increasing focus on hidden meanings, signs, or patterns
• Interpretations that feel detached from ordinary reality
• Difficulty returning to concrete facts
Course-Correct
Bring the conversation back to observable information.
For example:
“Focus on concrete events, behaviours, and evidence rather than symbolic interpretations.”
Or:
“Ground this in everyday explanations and practical next steps.”
Pair this with physical grounding outside the screen. Movement, sensory input, and real-world interaction help stabilise cognition.
You Feel Increasingly Uneasy, Vigilant, or Suspicious
Why this can be an issue psychologically
Persistent unease signals autonomic activation. Continuing to process cognitively while dysregulated often amplifies threat perception and looping thoughts. Your system is telling you it needs safety, not more input.
What correction supports
Stopping restores physiological regulation and sensory orientation to the present environment. Human contact reintroduces co-regulation, which text cannot provide.
Signs
• A sense that something is “off”
• Heightened alertness or tension
• Difficulty relaxing after the interaction
• Thoughts looping without resolution
Course-Correct
Pause the interaction entirely.
Step away from the screen. Engage your body. Look around the room. Speak to someone you trust. Eat something, drink water, or go outside.
If you return, set a stabilising frame:
“Keep responses brief, factual, and grounded in everyday reality.”
Sometimes the most effective correction is not better prompting but stopping.
Your Thinking Becomes Rigid or Absolute
Why this can be an issue psychologically
Black-and-white thinking is associated with stress, anxiety, depression, and trauma activation. Certainty can feel stabilising but often reduces openness to new information and adaptive responses.
What correction supports
Inviting ambiguity strengthens cognitive flexibility, emotional tolerance, and integrative thinking. Probability language keeps conclusions provisional rather than fixed.
Signs
• Certainty replacing curiosity
• Black-and-white conclusions
• Reduced tolerance for ambiguity
• Increased emotional intensity
Course-Correct
Invite nuance deliberately.
You might say:
“Highlight uncertainties, limitations, and grey areas in this situation.”
Or:
“Present both strengths and weaknesses of my current interpretation.”
You can also ask for probability language rather than certainty:
“Frame conclusions as possibilities, not definitive statements.”
You Feel Pulled Back Repeatedly or Compulsively
Why this can be an issue psychologically
Compulsive engagement suggests the interaction itself is regulating distress, similar to other repetitive coping behaviours. This can crowd out healthier regulation strategies and reinforce dependency loops.
What correction supports
External limits restore self-regulation, behavioural control, and engagement with diverse coping resources. Containment prevents immersive narrowing of attention.
Signs
• Difficulty stopping the conversation
• Returning even when it no longer helps
• Neglecting other activities
• A sense of urgency to continue
Course-Correct
Set external limits.
Decide in advance how long you will engage. Close the app when that time ends. Replace the urge with another regulating activity such as walking, stretching, or contacting a person.
If you do continue, narrow the task:
“Answer this one specific question briefly.”
Containment reduces immersion.
You Lose Interest in Real-World Support
Why this can be an issue psychologically
Withdrawal from human contact removes key protective factors for mental health. Real relationships provide feedback, emotional resonance, accountability, and unpredictability — all necessary for psychological growth.
What correction supports
Reaching out reactivates social regulation, belonging, and perspective correction. Even minimal contact can stabilise mood and cognition.
Signs
• Avoiding conversations that you would normally have
• Feeling that AI is easier than people
• Reduced motivation to seek human input
• Growing isolation
Course-Correct
Treat this as a signal to reconnect, not a failure.
Share what you are processing with someone safe. You do not need to present polished conclusions. Simply bringing the material into human space restores perspective and regulation.
If that feels difficult, start small. Send a message. Arrange a brief check-in. Sit in the presence of others without needing to talk about everything.
Your Sleep or Daily Functioning Is Affected
Why this can be an issue psychologically
Sleep disruption impairs emotional regulation, memory processing, and reality testing. Late-night cognitive stimulation often worsens rumination and anxiety rather than resolving it.
What correction supports
Protecting routines supports circadian stability, cognitive performance, and emotional resilience. Writing things down for later allows containment without activation.
Signs
• Staying up late in extended exchanges
• Difficulty disengaging at night
• Fatigue the next day
• Disruption of routine
Course-Correct
Protect basic rhythms first.
Avoid engaging during times when you are already depleted. Set a cut-off hour. Use low-stimulation activities before sleep instead.
If something urgent arises late at night, write it down for later processing rather than opening a long dialogue.
The Underlying Principle
Course correction is not about controlling the technology. It is about maintaining your own regulation, discernment, and connection to real life.
AI becomes safer as your internal and relational anchors remain strong. If those anchors feel shaky, the most supportive move is often to step back into the human world rather than refine the interaction.
Help AI Become Genuinely Helpful
You do not need complicated instructions. Clear, specific requests produce more useful responses.
For parts work (Internal Family Systems inspired):
“Help me explore this experience as if different parts of me are involved. Ask questions to identify what each part feels, fears, and needs.”
For cognitive work (CBT-informed):
“Help me identify possible cognitive distortions in this situation and offer balanced alternative thoughts.”
For inquiry practice (inspired by Byron Katie’s Work):
“Guide me through structured self-inquiry about this belief, including evidence for and against, and how it affects my behaviour.”
For somatic awareness:
“Help me reflect on what emotions and body sensations might be present here and what could support regulation.”
For communication support:
“Help me express this clearly and respectfully to an autistic adult who values directness and minimal implied meaning.”
For perspective widening:
“Offer several plausible interpretations of this situation, including ones that do not assume negative intent.”
For constructive challenge:
“Respectfully question my assumptions and highlight possible blind spots.”
How to Keep the Process Grounded
Bring insights back into human relationships whenever possible. Speak them aloud to a therapist, friend, partner, or trusted person. Human feedback provides reality testing, nuance, and emotional resonance that text cannot.
Notice your body. If you feel calmer, clearer, and more connected afterwards, the interaction is likely supportive. If you feel agitated, drained, or oddly inflated, step away.
Limit session length. Long, immersive exchanges can pull you into abstraction. Short, purposeful interactions tend to be safer.
Maintain ordinary life activities. Movement, nature, routine tasks, and social contact anchor the nervous system in ways cognitive processing cannot.
A Note on Crisis and High Vulnerability
AI should never be your only support during acute distress. It cannot assess risk, mobilise help, or provide physical presence.
If you are experiencing thoughts of harming yourself or others, severe confusion, or loss of touch with reality, seek human assistance immediately from a trusted person or professional service.
Large-scale research consistently shows that strong human connection is one of the most protective factors for mental health, while isolation significantly increases risk during periods of distress. For example, a major meta-analysis led by Julianne Holt-Lunstad found that social isolation and loneliness are associated with substantially higher risks of depression, physical illness, and early mortality. In practical terms, reflective tools and digital supports can help you organise thoughts or regulate in the moment, but they do not replace the buffering effect of real relationships or the capacity of another person to notice when you are struggling and respond with care.
Using AI With Therapy
If you are working with a therapist who supports AI use, consider sharing transcripts or summaries of your interactions. This allows insights to be integrated rather than remaining private interpretations.
Many clients report meaningful breakthroughs when AI reflections are explored collaboratively in therapy. The combination of structured language support and human attunement can be powerful.
The Core Principle
AI can help you think, but it can't guide your life.
Use it to clarify, organise, rehearse, and learn. Let people hold the deeper layers of your experience. Safety comes from connection, context, and embodied presence.
You don't need to avoid this technology. You only need to remain anchored while using it.
Connect with me by booking an exploratory call or messaging me directly on WhatsApp.