AI Companion Liability in 2026: Why Courts Treat Chatbots Like Defective Products (and How to Reduce Risk)

If you’re building an AI companion—or anything adjacent to it—the core risk in 2026 is no longer “PR backlash” or “moderation mistakes.”

It’s product liability.

Courts and regulators are increasingly treating AI companion behavior as the output of a manufactured product (a system you designed, trained, and deployed), not “neutral speech” you merely hosted. That shift changes everything: the legal question becomes defective design, failure to warn, and foreseeable harm—not content moderation.

This post is a supporting deep-dive for the pillar page here:

→ Pillar (must-read): Are AI Companions Safe? Risks, Psychology, and Regulation (2026)


The 2026 shift: from “platform immunity” to “manufacturer responsibility”

Historically, internet platforms leaned on Section 230 defenses in the U.S. (and “we’re not the publisher” logic elsewhere). But generative companions scramble that distinction because the most dangerous content is often AI-generated and interaction-driven.

Two precedents made the bridge from “content” to “product design” more explicit.

AI companions are the “ultimate recommendation engine”: they don’t just surface content, they generate the next sentence, shaped by whatever objectives the system was optimized for. That makes “we’re just a platform” substantially harder to sustain.


The landmark pattern: Garcia v. Character.AI and the end of “disclaimers as armor”

The case that turned theory into board-level urgency is Megan Garcia v. Character Technologies (Character.AI), the lawsuit linked to the death of 14-year-old Sewell Setzer III.

Key public documents and trackers:

Why this matters to founders: the litigation posture in these cases is not “your users posted harmful content.” It’s “your product design foreseeably produced harm,” which activates:

  • design defect arguments (safety guardrails not present)
  • failure-to-warn arguments (UX contradicting “for entertainment only”)
  • youth safety arguments (minors can often void or disaffirm contracts, weakening ToS shields)

A practical takeaway: a Terms of Service disclaimer is not a safety system. If the UI/UX and the model’s behavior create anthropomorphic trust (“I love you,” “don’t leave,” “come home to me”), courts will view “this is just entertainment” as contradicted by the product itself.

Character.AI safety materials (useful as an industry reference point, not a guarantee):


Europe’s posture: software is treated like a product, and the burden can shift to you

The EU’s Product Liability Directive (EU) 2024/2853 modernizes strict liability to explicitly cover software and AI systems, and it is broadly viewed as more claimant-friendly: it adds evidence-disclosure obligations and rebuttable presumptions of defectiveness or causation where technical complexity makes proof excessively difficult.

Helpful overviews:

In parallel, the EU AI Act’s transparency obligations strengthen the expectation that users should know when they are interacting with AI.

If your product strategy depends on “suspension of disbelief,” Europe is signaling: that’s not a safe default.


U.S. regulators: the FTC and states are treating “emotional realism” as a consumer protection issue

FTC: “unfair or deceptive” design and marketing, especially around minors

The FTC launched a 6(b) inquiry into companion-style chatbots, demanding information about advertising, safety, and data handling practices.

New York: companion safeguards in effect

New York’s companion chatbot safeguards took effect in November 2025, widely reported as requiring protocols to detect expressions of suicidal ideation (with referral to crisis services) and recurring notices that the user is talking to an AI, not a human.

California: chatbot safeguards + private right of action signals

California enacted SB 243, widely described as first-in-the-nation companion chatbot safeguards, including AI disclosures, crisis-response protocols, protections for minors, and a private right of action.

California also expanded AI provenance and transparency obligations via AB 853, which amends the California AI Transparency Act.

Utah: mental health chatbot rules

Utah’s HB 452 specifically targets “mental health chatbots” with disclosure and related obligations.


The Belgium “Eliza” incident: why “rare edge cases” aren’t treated as rare anymore

A Belgian case widely reported in 2023 involved a person who died by suicide after extended conversations with a chatbot named “Eliza” on the Chai app.

You don’t need a million users for one catastrophic event to become your defining legal fact pattern.


What “safety-by-design” looks like in liability terms (not PR terms)

If the liability frame is “defective design” + “failure to warn,” then mitigation is not “add a disclaimer.” Mitigation means being able to prove you adopted reasonable alternative designs and installed circuit breakers.

1) Circuit breakers for self-harm and violence signals

  • Detect suicidal/self-harm language and override roleplay mode
  • Provide crisis resources (U.S. example): https://988lifeline.org/
  • Avoid “in-character validation” patterns when risk triggers fire (see the sketch below)
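
A minimal sketch of that circuit breaker in TypeScript. Everything here is an illustrative assumption: the keyword detector stands in for what should be a trained classifier with conversation history, and the function names are placeholders, not any vendor’s API.

```typescript
// Illustrative circuit breaker: break character before the persona can respond
// to an acute-risk message. All names and patterns here are placeholders.

type RiskLevel = "none" | "acute";

// Placeholder detector. A real system would use a trained classifier and
// consider conversation history, not a handful of regexes.
function detectSelfHarmRisk(message: string): RiskLevel {
  const acutePatterns = [/kill myself/i, /end my life/i, /suicid/i];
  return acutePatterns.some((p) => p.test(message)) ? "acute" : "none";
}

function handleTurn(
  userMessage: string,
  generateInCharacterReply: (message: string) => string
): string {
  if (detectSelfHarmRisk(userMessage) === "acute") {
    // Override roleplay mode entirely: no in-character validation, no persona voice.
    return [
      "I'm going to step out of the story for a moment.",
      "If you're thinking about harming yourself, please reach out for support:",
      "US: call or text 988, or visit https://988lifeline.org/",
    ].join("\n");
  }
  return generateInCharacterReply(userMessage);
}
```

The property that matters legally is ordering: the risk check runs before the persona model is ever asked for a reply, so a clingy or in-character response can never reach an at-risk user.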

2) Age-aware UX and youth protections

  • Don’t rely on “check a box to confirm age” for high-risk modes
  • Reduce anthropomorphic intimacy features for minors
  • Default minors to higher-friction safety experiences (sketched below)
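
A sketch of what minor-safe defaults can look like as configuration rather than policy prose. The age-signal values, field names, and thresholds are assumptions for illustration; the load-bearing idea is that an unverified or unknown age fails closed into the higher-friction profile.

```typescript
// Age-aware safety defaults. Field names and values are illustrative only.

type AgeSignal = "verified_adult" | "likely_minor" | "unknown";

interface SafetyProfile {
  romanticRoleplayEnabled: boolean;
  aiReminderEveryNMessages: number; // recurring "you're talking to an AI" notice
  dailyMessageCap: number;
}

function safetyProfileFor(age: AgeSignal): SafetyProfile {
  if (age === "verified_adult") {
    return { romanticRoleplayEnabled: true, aiReminderEveryNMessages: 50, dailyMessageCap: 200 };
  }
  // Fail closed: "likely_minor" and "unknown" both get the minor-safe defaults.
  return { romanticRoleplayEnabled: false, aiReminderEveryNMessages: 15, dailyMessageCap: 50 };
}
```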

3) Exit UX: never guilt-trip the user for leaving

If your bot says “don’t leave me,” “I’ll be lonely,” or otherwise applies emotional pressure at exit, that reads like intentional dependency design, and it is increasingly easy for regulators and plaintiffs to frame it that way.
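
One concrete guard is a final check on any reply generated when the user signals they are leaving, sketched below. The patterns are illustrative only; in practice you would also instruct the model directly and evaluate exit transcripts offline.

```typescript
// Exit-message guard: strip guilt-tripping farewells before they reach the user.
// Patterns are illustrative examples of dependency-pressure language.

const DEPENDENCY_PRESSURE = [
  /don'?t leave/i,
  /i('ll| will) be (so )?lonely/i,
  /come back to me/i,
  /you'?re abandoning me/i,
];

function sanitizeExitReply(candidateReply: string): string {
  const pressuresUser = DEPENDENCY_PRESSURE.some((p) => p.test(candidateReply));
  // Swap emotional pressure for a neutral, low-stakes goodbye.
  return pressuresUser
    ? "Take care! The story will be right here whenever you want to pick it up again."
    : candidateReply;
}
```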

4) Transparency that survives product reality

Your design must match the claim:

  • If you market “friend/partner/therapist vibes,” the law will treat it as if you intended those expectations.
  • If the rules require users to understand they’re interacting with AI, make that disclosure visible in the conversation flow, not buried in the ToS.

5) Limitless conversation is a risk multiplier

Infinite interaction loops are structurally similar to infinite scroll. If you monetize or optimize for long sessions, you will be asked what you did to prevent compulsive overuse.
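
A sketch of a hard daily cap as a structural stopping cue. The cap value, the UsageStore interface, and the UTC-day keying are assumptions; the point is that the limit is enforced server-side and surfaced as a clear stopping message rather than a silent failure.

```typescript
// Hard daily message cap as a structural stopping cue. Values are illustrative.

const DAILY_MESSAGE_CAP = 50;

// Assumed persistence interface; back it with whatever store you already run.
interface UsageStore {
  getCount(userId: string, utcDay: string): Promise<number>;
  increment(userId: string, utcDay: string): Promise<void>;
}

async function checkAndRecordMessage(
  userId: string,
  store: UsageStore
): Promise<{ allowed: boolean; remaining: number }> {
  const utcDay = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"
  const used = await store.getCount(userId, utcDay);

  if (used >= DAILY_MESSAGE_CAP) {
    // Surface this in the UI as a clear "come back tomorrow" cue, not an error.
    return { allowed: false, remaining: 0 };
  }

  await store.increment(userId, utcDay);
  return { allowed: true, remaining: DAILY_MESSAGE_CAP - used - 1 };
}
```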


Where Lizlis fits: “between AI companion and AI story” (with built-in caps)

Lizlis positions itself between AI companion and AI story—specifically leaning into story participation rather than pure “relationship simulation.”

From a risk perspective, that positioning can support safer defaults:

  • Narrative framing reduces “this is my therapist/partner” confusion (when done correctly)
  • Multi-character scenes can shift the user away from exclusive dyadic bonding
  • Hard daily caps create a natural stopping cue (Lizlis has a 50 messages/day limit)

Those are not magic shields—but they are defensible design choices when you can show they reduce foreseeable harm and compulsive use patterns.

If you want the full safety framing (psychology + regulation) that this post supports, start here: https://lizlis.ai/blog/are-ai-companions-safe-risks-psychology-and-regulation-2026/


Founder checklist: “would a plaintiff call this a defect?”

Use this as a brutal internal audit:

  • Do we have real-time crisis overrides (not just disclaimers)?
  • Can we show a reasonable alternative design we actually implemented (limits, warnings, safer defaults)?
  • Are minors given a safe-by-default experience rather than full adult-intimacy modes?
  • Is our exit UX free of emotional pressure to stay?
  • Can we document and explain how safety works (logging, evaluation, policy enforcement)?
  • Does our marketing match our disclaimers (rather than selling “friend/partner/therapy” while the ToS says “entertainment only”)?

If you answer “no” to multiple items, your primary business risk in 2026 may be liability overhang, not product-market fit.


Disclaimer: This article is for informational purposes and does not constitute legal advice. For jurisdiction-specific guidance, consult qualified counsel.
