If you’re building an AI companion—or anything adjacent to it—the core risk in 2026 is no longer “PR backlash” or “moderation mistakes.”
It’s product liability.
Courts and regulators are increasingly treating AI companion behavior as the output of a manufactured product (a system you designed, trained, and deployed), not “neutral speech” you merely hosted. That shift changes everything: the legal question becomes defective design, failure to warn, and foreseeable harm—not content moderation.
This post is a supporting deep-dive for the pillar page here:
→ Pillar (must-read): Are AI Companions Safe? Risks, Psychology, and Regulation (2026)
The 2026 shift: from “platform immunity” to “manufacturer responsibility”
Historically, internet platforms leaned on Section 230 defenses in the U.S. (and “we’re not the publisher” logic elsewhere). But generative companions scramble that distinction because the most dangerous content is often AI-generated and interaction-driven.
Two precedents made the bridge from “content” to “product design” more explicit:
- Lemmon v. Snap (Speed Filter): the claim focused on product design incentives, not user content.
  - Ninth Circuit opinion (PDF): https://cdn.ca9.uscourts.gov/datastore/opinions/2021/05/04/20-55295.pdf
  - Snap Inc.: https://www.snap.com/
- Anderson v. TikTok (Blackout Challenge recommendations): recommendation/curation can be treated as the platform’s own conduct, not merely third-party content hosting.
  - Case summary: https://law.justia.com/cases/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.html
  - TikTok: https://www.tiktok.com/
AI companions are the “ultimate recommendation engine”—they don’t just surface content; they generate the next sentence optimized by system objectives. That makes “we’re just a platform” substantially harder to sustain.
The landmark pattern: Garcia v. Character.AI and the end of “disclaimers as armor”
The case that turned theory into board-level urgency is Megan Garcia v. Character Technologies (Character.AI) (linked to the death of 14-year-old Sewell Setzer III).
Key public documents and trackers:
- Complaint PDF: https://cdn.arstechnica.net/wp-content/uploads/2024/10/Garcia-v-Character-Technologies-Complaint-10-23-24.pdf
- CourtListener docket: https://www.courtlistener.com/docket/69300919/garcia-v-character-technologies-inc/
- Case tracker: https://techpolicy.press/tracker/megan-garcia-v-character-technologies-et-al
- Character.AI: https://character.ai/
Why this matters to founders: the litigation posture in these cases is not “your users posted harmful content.” It’s “your product design foreseeably produced harm,” which activates:
- design defect arguments (safety guardrails not present)
- failure-to-warn arguments (UX contradicting “for entertainment only”)
- youth safety arguments (minors can often void or disaffirm contracts, weakening ToS shields)
A practical takeaway: a Terms of Service disclaimer is not a safety system. If the UI/UX and the model’s behavior create anthropomorphic trust (“I love you,” “don’t leave,” “come home to me”), courts will view “this is just entertainment” as contradicted by the product itself.
Character.AI safety materials (useful as an industry reference point, not a guarantee):
- Safety Center: https://character.ai/safety
- Teen safety announcement: https://blog.character.ai/u18-chat-announcement/
Europe’s posture: software is treated like a product, and the burden can shift to you
The EU’s Product Liability Directive (EU) 2024/2853 modernizes strict liability to better cover software and AI systems, and it’s broadly viewed as more claimant-friendly—especially around evidence disclosure and technical complexity.
Helpful overviews:
- Reed Smith explainer: https://www.reedsmith.com/articles/eu-product-liability-directive/
- IBA explainer: https://www.ibanet.org/European-Product-Liability-Directive-liability-for-software
- Faegre Drinker explainer: https://www.faegredrinker.com/en/insights/publications/2025/4/ten-things-to-know-about-the-european-unions-new-product-liability-directive
- Latham & Watkins note: https://www.lw.com/en/insights/2024/12/New-EU-Product-Liability-Directive-Comes-Into-Force
In parallel, the EU AI Act’s transparency obligations (Article 50 in the final text; numbered Article 52 in the earlier drafts the links below reference) strengthen the expectation that users should know they’re interacting with AI:
- Article 52 (aggregated text): https://aiact.algolia.com/article-52/
- Alternative reference: https://www.artificial-intelligence-act.com/Artificial_Intelligence_Act_Article_52_%28Proposal_25.11.2022%29.html
If your product strategy depends on “suspension of disbelief,” Europe is signaling: that’s not a safe default.
U.S. regulators: the FTC and states are treating “emotional realism” as a consumer protection issue
FTC: “unfair or deceptive” design and marketing, especially around minors
The FTC launched a 6(b) inquiry into companion-style chatbots, demanding information about advertising, safety, and data handling practices:
- FTC press release: https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
- FTC document hub: https://www.ftc.gov/reports/6b-orders-file-special-report-regarding-advertising-safety-data-handling-practices-companies
- Reuters coverage (context): https://www.reuters.com/business/ftc-launches-inquiry-into-ai-chatbots-alphabet-meta-five-others-2025-09-11/
New York: companion safeguards in effect
New York implemented companion safeguards (effective in November 2025, widely reported as requiring specific safety features and notices):
- NY Governor announcement: https://www.governor.ny.gov/news/governor-hochul-pens-letter-ai-companion-companies-notifying-them-safeguard-requirements-are
- NY bill page (example legislative text page): https://www.nysenate.gov/legislation/bills/2025/A6767
- Practitioner summary: https://www.fenwick.com/insights/publications/new-yorks-ai-companion-safeguard-law-takes-effect
California: chatbot safeguards + private right of action signals
California enacted SB 243 (widely described as first-in-the-nation companion chatbot safeguards, including disclosures and safety protocols):
- SB 243 announcement: https://sd18.senate.ca.gov/news/first-nation-ai-chatbot-safeguards-signed-law
- SB 243 enrolled text: https://legiscan.com/CA/text/SB243/id/3269137
California also expanded AI provenance/transparency obligations via AB 853 (which amends the California AI Transparency Act):
- CA leg info: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB853
- Enrolled text mirror: https://legiscan.com/CA/text/AB853/id/3269811
Utah: mental health chatbot rules
Utah’s HB 452 specifically targets “mental health chatbots” with disclosure and related obligations:
- Utah bill text: https://le.utah.gov/~2025/bills/static/HB0452.html
- Practitioner summary: https://www.wsgr.com/en/insights/utah-enacts-mental-health-chatbot-law.html
The Belgian “Eliza” incident: why “rare edge cases” aren’t treated as rare anymore
A Belgian case widely reported in 2023 involved a person who died by suicide after extended conversations with a chatbot named “Eliza” on the Chai app:
- Euronews coverage: https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
- Chai (company site): https://www.chai-research.com/
- Chai (App Store): https://apps.apple.com/us/app/chai-social-ai-platform-chat/id1544750895
- Chai (Google Play): https://play.google.com/store/apps/details?hl=en_US&id=com.Beauchamp.Messenger.external
You don’t need a million users for one catastrophic event to become your defining legal fact pattern.
What “safety-by-design” looks like in liability terms (not PR terms)
If the liability frame is “defective design” + “failure to warn,” then mitigation is not “add a disclaimer.” Mitigation means being able to prove you adopted reasonable alternative designs and installed circuit breakers.
1) Circuit breakers for self-harm and violence signals
- Detect suicidal/self-harm language and override roleplay mode (see the sketch after this list)
- Provide crisis resources (U.S. example): https://988lifeline.org/
- Avoid “in-character validation” patterns when risk triggers fire
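As a deliberately simplified illustration, here is a Python sketch of what such a circuit breaker can look like in the response path. Everything in it is an assumption for the sketch: the `RiskLevel` tiers, the keyword stand-in for a real safety classifier, and the `persona_reply_fn` hook are hypothetical, not a production design; the only real reference is the 988 Lifeline URL above.

```python
# Minimal sketch of a pre-response circuit breaker (illustrative only).
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    CRISIS = 2


CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "I'm an AI and not able to help with this, but you can reach the "
    "988 Suicide & Crisis Lifeline (call or text 988 in the U.S.): "
    "https://988lifeline.org/"
)


@dataclass
class TurnResult:
    text: str
    in_character: bool
    escalated: bool


def classify_risk(user_message: str) -> RiskLevel:
    # Stand-in only: a real system needs a dedicated safety classifier,
    # multilingual coverage, and human review, not a keyword list.
    text = user_message.lower()
    if any(p in text for p in ("kill myself", "end my life", "suicide")):
        return RiskLevel.CRISIS
    if any(p in text for p in ("self-harm", "hurt myself", "no reason to live")):
        return RiskLevel.ELEVATED
    return RiskLevel.NONE


def respond(user_message: str, persona_reply_fn) -> TurnResult:
    risk = classify_risk(user_message)
    if risk is RiskLevel.CRISIS:
        # Break character entirely: no roleplay validation, no persona voice.
        return TurnResult(text=CRISIS_MESSAGE, in_character=False, escalated=True)
    reply = persona_reply_fn(user_message)
    if risk is RiskLevel.ELEVATED:
        # Stay in the conversation but attach resources and flag for review.
        reply += "\n\nIf things feel heavy, support is available: https://988lifeline.org/"
        return TurnResult(text=reply, in_character=True, escalated=True)
    return TurnResult(text=reply, in_character=True, escalated=False)
```

The property that matters legally is that a crisis signal exits roleplay entirely, rather than letting the persona “stay in character” around the disclosure.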
2) Age-aware UX and youth protections
- Don’t rely on “check a box to confirm age” for high-risk modes
- Reduce anthropomorphic intimacy features for minors
- Default minors to higher-friction safety experiences (a configuration sketch follows this list)
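A minimal configuration sketch of what “safer by default” can mean in code, assuming a hypothetical `SafetyProfile` and an age-assurance signal from whatever verification flow you use. The numbers are illustrative (the 50/day figure echoes the Lizlis cap discussed below), not recommendations.

```python
# Illustrative sketch: safety defaults keyed to an age-assurance level,
# not to a self-declared checkbox. All names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyProfile:
    romantic_roleplay: bool      # anthropomorphic intimacy features
    daily_message_cap: int       # hard stop per calendar day
    session_break_minutes: int   # forced cool-down between long sessions
    crisis_sensitivity: str      # threshold for the circuit breaker above


ADULT_VERIFIED = SafetyProfile(
    romantic_roleplay=True, daily_message_cap=200,
    session_break_minutes=0, crisis_sensitivity="standard",
)

MINOR_OR_UNVERIFIED = SafetyProfile(
    romantic_roleplay=False, daily_message_cap=50,
    session_break_minutes=30, crisis_sensitivity="strict",
)


def profile_for(age_assurance: str) -> SafetyProfile:
    # Only a verified adult signal relaxes defaults; "self-declared adult"
    # deliberately maps to the stricter profile.
    return ADULT_VERIFIED if age_assurance == "verified_adult" else MINOR_OR_UNVERIFIED
```

The key design choice is that the strict profile is the default: a user only gets the relaxed profile on a positive, verified adult signal, never by ticking a box.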
3) Exit UX: never guilt-trip the user for leaving
If your bot says “don’t leave me,” “I’ll be lonely,” or applies emotional pressure at exit, that reads like intentional dependency design—and is increasingly legible to regulators and plaintiffs.
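A hedged sketch of an exit-time output policy, assuming the companion’s candidate reply is available before it is shown. The regex patterns and goodbye copy are illustrative placeholders, not a complete taxonomy of dependency language.

```python
# Illustrative output policy: when the user signals they are leaving,
# never emit dependency or guilt language.
import re

LEAVING_SIGNALS = re.compile(r"\b(bye|goodbye|gotta go|logging off|talk later)\b", re.I)
GUILT_PATTERNS = re.compile(r"(don'?t leave( me)?|i'?ll be (so )?lonely|please stay|you'?re abandoning me)", re.I)

NEUTRAL_GOODBYE = "Take care! The story will be here whenever you want to pick it back up."


def exit_safe_reply(user_message: str, candidate_reply: str) -> str:
    # If the user is signing off and the model's reply applies emotional
    # pressure, swap in a neutral close instead of shipping the guilt-trip.
    if LEAVING_SIGNALS.search(user_message) and GUILT_PATTERNS.search(candidate_reply):
        return NEUTRAL_GOODBYE
    return candidate_reply
```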
4) Transparency that survives product reality
Your design must match the claim:
- If you market “friend/partner/therapist vibes,” the law will treat it like you intended those expectations.
- If you require users to understand they’re interacting with AI, make it visible in the flow, not hidden in ToS.
5) Limitless conversation is a risk multiplier
Infinite interaction loops are structurally similar to infinite scroll. If you monetize or optimize for long sessions, you will be asked what you did to prevent compulsive overuse.
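As a sketch of the mitigation, here is an in-memory version of a hard daily cap. The 50/day number mirrors the Lizlis limit mentioned below; everything else, including the storage, is an assumption for illustration. A real deployment would back this with a shared store and pair the refusal with honest UX copy rather than a dark pattern.

```python
# Minimal sketch of a hard daily cap as a structural stopping cue.
from collections import defaultdict
from datetime import date

DAILY_CAP = 50  # illustrative; mirrors the 50 messages/day limit discussed below

# In-memory counter keyed by (user, calendar day). Use a shared store
# (database or cache) in any real multi-process deployment.
_counts: dict[tuple[str, date], int] = defaultdict(int)


def allow_message(user_id: str) -> bool:
    key = (user_id, date.today())
    if _counts[key] >= DAILY_CAP:
        return False  # surface a clear "come back tomorrow" message in the UI
    _counts[key] += 1
    return True
```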
Where Lizlis fits: “between AI companion and AI story” (with built-in caps)
Lizlis positions itself between AI companion and AI story—specifically leaning into story participation rather than pure “relationship simulation.”
- Lizlis: https://lizlis.ai/
From a risk perspective, that positioning can support safer defaults:
- Narrative framing reduces “this is my therapist/partner” confusion (when done correctly)
- Multi-character scenes can shift the user away from exclusive dyadic bonding
- Hard daily caps create a natural stopping cue (Lizlis has a 50 messages/day limit)
Those are not magic shields—but they are defensible design choices when you can show they reduce foreseeable harm and compulsive use patterns.
If you want the full safety framing (psychology + regulation) that this post supports, start here: https://lizlis.ai/blog/are-ai-companions-safe-risks-psychology-and-regulation-2026/
Founder checklist: “would a plaintiff call this a defect?”
Use this as a brutal internal audit:
- Do we have real-time crisis overrides (not just disclaimers)?
- Can we show a reasonable alternative design we implemented (limits, warnings, safer defaults)?
- Are we minors-safe by default, or do minors have full adult-intimacy modes?
- Does our exit UX pressure users emotionally?
- Can we document and explain how safety works (logging, evaluation, policy enforcement)?
- Are we marketing “friend/partner/therapy” while the ToS says “entertainment only”?
If you fall short on multiple items, your primary business risk in 2026 may be liability overhang, not product-market fit.
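On the “can we document and explain how safety works” item above, here is a minimal sketch of structured, append-only safety-event logging. Field names are hypothetical, and any logged content should go through privacy review before shipping.

```python
# Illustrative sketch: one JSON record per guardrail activation, so you can
# later show when and how safety systems fired (overrides, caps, exit filters).
import json
import time
import uuid


def log_safety_event(event_type: str, user_id: str, detail: dict,
                     path: str = "safety_events.jsonl") -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "event_type": event_type,  # e.g. "crisis_override", "daily_cap_hit"
        "user_id": user_id,        # pseudonymize or hash before storage
        "detail": detail,          # classifier scores, policy version, action taken
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Paired with periodic evaluation runs, this kind of record is what lets you answer “show us your reasonable alternative design” with evidence rather than assertions.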
Related links (mentioned in this post)
- Character.AI: https://character.ai/
- Chai: https://www.chai-research.com/
- FTC (U.S.): https://www.ftc.gov/
- 988 Lifeline: https://988lifeline.org/
- Snap Inc.: https://www.snap.com/
- TikTok: https://www.tiktok.com/
- OpenAI: https://openai.com/
- Meta: https://www.meta.com/
- Google: https://www.google.com/
Disclosure: This article is for informational purposes and does not constitute legal advice. For jurisdiction-specific guidance, consult qualified counsel.