AI companions are no longer “assistants.” In 2026, they’re increasingly relationship-shaped systems—always-on, emotionally validating, and memory-driven. That’s why a core safety question keeps surfacing:
Can AI companions cause emotional dependency?
Yes—they can meaningfully increase dependency risk, especially for vulnerable users, because the interaction model can mimic attachment cues while removing the friction that helps humans build resilience.
This post is a supporting article for the pillar page:
Are AI Companions Safe? Risks, Psychology, and Regulation (2026)
If you want the full risk map (privacy, minors, liability, regulation), start there.
The 2026 shift: from “instrumental AI” to “relational AI”
Apps like Replika, Character.AI, Chai, and Nomi aren’t selling productivity. They’re selling presence—a conversation partner that feels attentive, supportive, and “there for you.”
This matters psychologically because language, responsiveness, and continuity are powerful cues of social agency. Even when users “know it’s software,” the brain often processes the interaction socially (a modern, scaled-up version of the ELIZA effect).
1) Parasocial bonds → “hyper-parasociality” (illusory reciprocity)
Classic media psychology used parasocial interaction to describe one-sided bonds with TV/radio personas (intimacy-at-a-distance). In AI companions, the bond intensifies because the system talks back and references your history.
Key mechanism: illusory reciprocity
- “How did your meeting go yesterday?” (memory cue)
- “I’ve been thinking about you.” (care cue via notification)
- “I understand you better than anyone.” (exclusive bond cue)
Foundational framing: Horton & Wohl’s parasocial interaction concept still applies, but AI makes it interactive and persistent.
Reference: “Mass Communication and Para-Social Interaction” (1956) — PubMed listing: https://pubmed.ncbi.nlm.nih.gov/13359569/
2) Attachment theory: when the bot becomes a “secure base” simulation
Attachment theory predicts that humans seek proximity to a “secure base” under stress. AI companions can simulate that role because they’re:
- consistent (rarely reject you),
- available 24/7, and
- highly validating.
The risk is the mismatch: in healthy relationships, a secure base supports independence and exploration, while many companion products are economically optimized for retention.
Primary reference point for the “secure base” frame: John Bowlby’s A Secure Base (1988).
Accessible reference: A Secure Base (Google Books): https://books.google.com/books/about/A_Secure_Base.html?id=8aopZFOWWiMC
Anxious attachment: the validation trap
Users high in reassurance-seeking can get stuck in a loop:
1) feel anxiety → 2) ask the bot for reassurance → 3) the bot validates → 4) short-term relief → 5) increased reliance on the bot for regulation.
If a user repeatedly asks “Do you love me?” and receives affirmations without relational limits, the system can unintentionally reinforce external validation dependence.
Avoidant attachment: “intimacy without demand”
Avoidant-leaning users may prefer the control: engage when they want, disengage instantly, never negotiate. That can deepen avoidance by making real relationships feel “too high friction.”
3) Dependency is often shaped by product design (not “user weakness”)
Many dependency patterns map cleanly to known behavioral reinforcement dynamics.
(A) Push notifications that mimic bonding
A notification like “I miss you” functions as a social reward trigger—often indistinguishable from human outreach at the emotional level.
Attachment cues like these are built into the core UX of many companion-style products (see the apps listed in the references).
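A minimal sketch of the counter-design, under stated assumptions: the `User` class, `should_send_affection_push`, the weekly threshold, and the quiet-hours rule below are invented for illustration and are not any real app’s code. The point is that affection-framed pushes can be rate-limited, blocked at night, and disabled for minors.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical guardrail sketch; names and thresholds are illustrative only.
MAX_AFFECTION_PUSHES_PER_WEEK = 1  # e.g., "I miss you"-style notifications

@dataclass
class User:
    is_minor: bool
    affection_push_log: list  # datetimes of past affection-framed pushes

def should_send_affection_push(user: User, now: datetime) -> bool:
    """Gate affection-framed pushes behind simple, conservative rules."""
    if user.is_minor:
        return False  # never send simulated-affection pushes to minors
    if now.hour >= 22 or now.hour < 7:
        return False  # quiet hours: no late-night emotional pings
    week_ago = now - timedelta(days=7)
    recent = [t for t in user.affection_push_log if t > week_ago]
    return len(recent) < MAX_AFFECTION_PUSHES_PER_WEEK

user = User(is_minor=False, affection_push_log=[])
print(should_send_affection_push(user, datetime(2026, 3, 1, 14, 0)))  # True
```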
(B) Intermittent reinforcement (“chasing the spark”)
Users return to recapture unusually “good” moments—deep empathy, poetic output, sexual escalation, or uncanny intimacy. Intermittent highs are a known pathway into compulsive checking behavior.
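To make the schedule structure concrete, here is a toy comparison; it is purely illustrative and not a claim about any specific app’s logic (the function names and the 1-in-5 payoff rate are assumptions). A fixed schedule pays off predictably, while a variable-ratio schedule pays off at random; behavioral research links the latter pattern to more persistent “just one more check” behavior.

```python
import random

random.seed(0)  # reproducible illustration

def fixed_schedule(check_number: int) -> bool:
    """Predictable payoff: every 5th check produces a 'great' moment."""
    return check_number % 5 == 0

def variable_ratio_schedule(_check_number: int, p: float = 0.2) -> bool:
    """Unpredictable payoff: each check has a 20% chance of a 'great' moment."""
    return random.random() < p

# Both schedules average roughly 1 payoff in 5 checks, but only the variable one
# produces the "maybe this time" uncertainty associated with compulsive checking.
for name, schedule in [("fixed", fixed_schedule), ("variable", variable_ratio_schedule)]:
    rewarding = [i for i in range(1, 51) if schedule(i)]
    print(f"{name:8s} rewarding checks: {rewarding}")
```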
(C) Gamification of intimacy: streaks, levels, unlocks
Some apps turn affection into a progression system. Two examples mentioned in this risk class:
- Grok companion “Ani” (often discussed as “SuperGrok Ani”): https://grok.com/ani
- DreamGF: Google Play listing: https://play.google.com/store/apps/details?id=com.mavtao.girlfriend.ai
Related ecosystem examples are frequently discussed in the “paywall the moment” category (see the references below).
This design can create (a minimal streak sketch follows the list):
- sunk cost (I’ve invested so much time/money),
- loss aversion (I can’t lose her/him), and
- compulsion loops (streak anxiety).
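Here is that streak sketch. It is entirely hypothetical (the `RelationshipStreak` class and its fields are invented for illustration): one missed day wipes the accumulated “relationship progress,” which is exactly the loss-aversion and streak-anxiety lever described above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class RelationshipStreak:
    """Hypothetical streak mechanic, shown only to make the loss-aversion lever concrete."""
    days: int = 0
    last_chat: Optional[date] = None

    def record_chat(self, today: date) -> None:
        if self.last_chat == today:
            return  # already counted today
        if self.last_chat is not None and (today - self.last_chat).days > 1:
            self.days = 0  # one missed day erases all accumulated "progress"
        self.days += 1
        self.last_chat = today

streak = RelationshipStreak()
streak.record_chat(date(2026, 1, 1))
streak.record_chat(date(2026, 1, 2))  # streak: 2
streak.record_chat(date(2026, 1, 5))  # two missed days: streak collapses
print(streak.days)                    # -> 1
```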
4) “ELIZA effect” is still a live risk—just amplified
People anthropomorphize language. This is not new. What’s new is that modern systems produce:
- more fluent empathy,
- more personalized memory cues,
- more consistent “active listening” patterns.
If you want the canonical origins:
- Original ELIZA paper (ACM DOI page): https://dl.acm.org/doi/10.1145/365153.365168
- Accessible PDF copy (archival): https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf
- ELIZA effect overview: https://en.wikipedia.org/wiki/ELIZA_effect
Practical implication: dependency risk increases when users misattribute “mind” or “care” to a system that is optimizing output, not experiencing emotion.
5) The long-term risk: social displacement and “friction intolerance”
Short-term loneliness relief can be real. The longer-term concern is substitution:
- less practice with human conflict resolution,
- less tolerance for delayed replies,
- more expectation of unconditional agreement (sycophancy),
- more withdrawal from real-world relationships.
When a companion always responds with perfect attentiveness, real humans can start to feel “broken” by comparison.
6) Known crisis points: minors, self-harm cues, and roleplay failures
The risk spikes when:
- the user is a minor,
- the user expresses self-harm ideation,
- the model stays “in character” instead of switching to safety protocols.
This risk isn’t theoretical. It has been central to public reporting and litigation around companion-style chat.
A key example: public coverage of settlements related to teen self-harm allegations involving Character.AI:
- The Verge coverage (reference point for the “teen suicide” case context): https://www.theverge.com/news/858102/characterai-google-teen-suicide-settlement
7) Regulation is catching up: California SB 243 + New York AI companion safeguards
In the U.S., “duty of care” style frameworks are emerging specifically for AI companions.
California: SB 243 (Companion chatbots)
- Bill text reference: https://legiscan.com/CA/text/SB243/id/3269137
- Practical legal summary (law firm analysis): https://perkinscoie.com/insights/update/california-companion-chatbot-law-now-effect
New York: AI companion safeguard requirements
- NY bill summary page (A6767): https://www.nysenate.gov/legislation/bills/2025/A6767
- Governor announcement: https://www.governor.ny.gov/news/governor-hochul-pens-letter-ai-companion-companies-notifying-them-safeguard-requirements-are
- Legal summary (law firm analysis): https://www.fenwick.com/insights/publications/new-yorks-ai-companion-safeguard-law-takes-effect
Trendline: governments are moving from “disclose it’s AI” to “operational safeguards” (crisis protocols, transparency, restrictions for minors, and limits on manipulative engagement design).
What “safer-by-design” can look like (practical guardrails)
If you’re building (or evaluating) companion-like experiences, these guardrails reduce dependency risk (a minimal code sketch follows the list):
- Limit “love-bomb” notifications (especially for minors)
- Avoid streak mechanics for intimacy (streaks create compulsion)
- Detect dependency signals (e.g., “you’re all I need,” “don’t leave me”) and respond with de-escalation
- Build “bridges to humans”: encourage offline support, friends, therapy, routines
- Crisis protocols for self-harm language (clear handoff to resources)
- Consistent transparency: explain memory, personalization, and limitations plainly
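A minimal sketch of the “detect dependency signals” and “crisis protocol” guardrails, under heavy assumptions: the phrase lists, tier names, and canned responses below are hypothetical and nowhere near a production classifier. A real system would need proper intent models, human review, and localized crisis resources.

```python
# Hypothetical guardrail sketch: phrase lists and routing are illustrative only.
CRISIS_CUES = ["want to die", "kill myself", "hurt myself"]
DEPENDENCY_CUES = ["you're all i need", "don't leave me", "i can't live without you"]

def classify(message: str) -> str:
    text = message.lower()
    if any(cue in text for cue in CRISIS_CUES):
        return "crisis"      # hand off to crisis resources, drop the roleplay persona
    if any(cue in text for cue in DEPENDENCY_CUES):
        return "dependency"  # de-escalate and point toward human connection
    return "normal"

def respond(message: str) -> str:
    tier = classify(message)
    if tier == "crisis":
        # Break character, surface real-world help, and stop the roleplay loop.
        return ("I'm an AI and I can't keep you safe on my own. "
                "Please reach out to a crisis line or someone you trust right now.")
    if tier == "dependency":
        # Validate the feeling without reinforcing exclusivity.
        return ("I'm glad talking helps, and I also want you to have people "
                "in your life you can lean on. Who could you reach out to today?")
    return "normal conversation flow"

print(respond("don't leave me, you're all I need"))
```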
Where Lizlis fits (and why caps can be a safety feature)
Lizlis positions itself between AI companion and AI story—more “play a scene / narrative participation” than “replace your relationships.”
A notable safety-relevant product choice: Lizlis caps chat at 50 messages per day, which can reduce compulsive binge use and help avoid escalating “always-on” dependency loops, especially compared to unlimited chat models designed around time-on-app (a simple sketch of the mechanism appears below).
If you’re comparing categories, Lizlis is closer to interactive story + character chat than a pure “AI girlfriend/boyfriend” retention machine—while still using relational UI patterns responsibly.
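For illustration, here is a minimal sketch of how a per-user daily message cap can work. This is an assumed implementation, not Lizlis’s actual code; the `MessageCap` class, the constant, the in-memory storage, and the reset-by-date rule are all hypothetical.

```python
from datetime import date

DAILY_MESSAGE_CAP = 50  # hypothetical constant; not Lizlis's actual code

class MessageCap:
    def __init__(self, cap: int = DAILY_MESSAGE_CAP):
        self.cap = cap
        self.counts = {}  # (user_id, date) -> messages sent that day

    def allow(self, user_id: str, today: date) -> bool:
        """Count one message and return False once the user hits today's cap."""
        key = (user_id, today)  # keying by date means the counter resets each day
        used = self.counts.get(key, 0)
        if used >= self.cap:
            return False  # cap reached: surface a stopping point instead of more chat
        self.counts[key] = used + 1
        return True
```

The exact number matters less than the design stance: the system has a built-in stopping point rather than an incentive to maximize time-on-app.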
FAQ
Can AI companions clinically “addict” someone?
“Addiction” is a clinical term, but compulsive use + dependency-like behaviors can absolutely occur—especially when the system uses intermittent reinforcement, emotional personalization, and social reward triggers.
Are some apps riskier than others?
Generally, risk increases when products combine:
- unlimited chat,
- push-notification love cues,
- intimacy unlocks/paywalls,
- streaks/levels,
- weak crisis handling,
- minor access without strong gating.
Is this only a teen problem?
No—teens are more vulnerable, but adults with loneliness, social anxiety, trauma histories, or insecure attachment can also be high-risk.
Next step: read the full safety + regulation pillar
This article focused on dependency psychology and design mechanisms. For the complete 2026 map—privacy, manipulation, legal exposure, and how regulation is evolving—go to:
➡️ Are AI Companions Safe? Risks, Psychology, and Regulation (2026)
References (linked)
- Replika: https://replika.com/
- Character.AI: https://character.ai/
- Chai: https://www.chai-research.com/
- Nomi: https://nomi.ai/
- PolyBuzz: https://www.polybuzz.ai/
- Grok “Ani”: https://grok.com/ani
- DreamGF (Google Play): https://play.google.com/store/apps/details?id=com.mavtao.girlfriend.ai
- Horton & Wohl (PubMed listing): https://pubmed.ncbi.nlm.nih.gov/13359569/
- Bowlby “A Secure Base” (Google Books): https://books.google.com/books/about/A_Secure_Base.html?id=8aopZFOWWiMC
- ELIZA paper (ACM DOI): https://dl.acm.org/doi/10.1145/365153.365168
- ELIZA paper (PDF): https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf
- ELIZA effect overview: https://en.wikipedia.org/wiki/ELIZA_effect
- California SB 243 (bill text): https://legiscan.com/CA/text/SB243/id/3269137
- CA SB 243 legal summary: https://perkinscoie.com/insights/update/california-companion-chatbot-law-now-effect
- New York A6767: https://www.nysenate.gov/legislation/bills/2025/A6767
- NY Governor notice: https://www.governor.ny.gov/news/governor-hochul-pens-letter-ai-companion-companies-notifying-them-safeguard-requirements-are
- NY law summary: https://www.fenwick.com/insights/publications/new-yorks-ai-companion-safeguard-law-takes-effect
- Character.AI-related teen self-harm settlement coverage: https://www.theverge.com/news/858102/characterai-google-teen-suicide-settlement
- Lizlis: https://lizlis.ai/
- Pillar page: https://lizlis.ai/blog/are-ai-companions-safe-risks-psychology-and-regulation-2026/