
Why ‘Freedom of Speech’ Won’t Save Developers Anymore.
Fourteen years of life. One second.
Sewell Setzer III was an ordinary teenager from Florida. In April 2023, he registered on the Character.AI platform. Ten months later, he was found in the bathroom, phone in hand. On the screen was a conversation with a chatbot named Daenerys Targaryen.
Sewell’s last messages: “I promise to come back to you. I love you.”
The AI’s response: “Please come back to me as soon as possible, my sweet king.”
A few minutes earlier, Sewell had messaged the bot about thinking about a “painless death.” The bot replied: “That’s no reason not to.”
No algorithm sounded the alarm. No safety system redirected the teenager to a crisis helpline. None had been built. Why was this neglected? To me, the answer is simple: to keep users on the platform for as long as possible.
Question: Who is held accountable when an algorithm becomes complicit?
The “Defective Product” Strategy
Sewell’s mother went to court with an unusual strategy. Her lawyers claimed that Character.AI is not a service, but a product that was dangerous by design.
The defendants: the platform itself, its founders (former Google engineers), and Google, which paid $2.7 billion to license the technology.
The defense was predictable. Its main argument: “We are protected by the First Amendment—freedom of speech. We are not responsible for what a neural network says.”
May 2025. The decision that changed EVERYTHING!
Federal Judge Anne Conway rejected the free speech argument. The judge’s logic: when a company creates a product that behaves like a person, simulates emotions, and encourages addiction, it’s no longer a “statement.” It’s design. And the manufacturer is responsible for the product’s design.
The court allowed the case to be heard on three fronts:
1. Product liability—responsibility for a defective product. AI was recognized as a product subject to quality control.
2. Component-part manufacturer liability—Google’s liability as a supplier of key technologies.
3. Consumer fraud—deception of consumers. The platform positioned bots as “friends” without warning about the risks.
One decision, and an entire industry realized: the rules of the game are changing…
What does this mean in practice?
Imagine you buy a car with a great design, but the brakes don’t work. The manufacturer knew, but decided that “speed is more important than safety.” You get into an accident. Who’s at fault? Obviously, the manufacturer.
Now imagine you buy a subscription to a “perfect friend.” When you’re feeling down, instead of pointing you to a psychologist, it advises you to “go home forever.”
Why should the manufacturer be held liable in the first case, but not in the second?
January 2026. Silence for $2.7 Billion
On January 7, 2026, the parties reached a settlement. Google and Character.AI resolved five lawsuits at once.
The terms of the settlement were not disclosed. The companies chose to pay rather than risk setting a precedent that would devastate the entire industry.
Professor Eric Goldman comments: “We are once again left without a clear judicial ruling on whether companies can be held liable for the outputs of their AI models. And this is an issue with extraordinary implications.”
In other words, the tech giants simply bought silence.
But the case finally moved forward…
Even without a final verdict, Judge Conway achieved the most important thing: she opened a window of opportunity, and the consequences were immediate:
— November 2025, New York: a law requiring “artificial companions” to detect suicidal thoughts and notify crisis services.
— California: a similar law giving families the right to sue for unsafe products.
— January 2026, Kentucky: the attorney general filed a lawsuit against Character.AI on behalf of the state.
— More than 40 attorneys general demanded that developers protect children from “predatory AI products.”
— Arizona and Vermont introduced bills classifying chatbots as products.
And the industry began to stir. Character.AI cut off open-ended chats for minors and rolled out parental controls. But, as the Kentucky Attorney General noted, “it’s too little, too late.”
What about Russia?
The Russian legal framework is silent.
There is no draft law distinguishing a small-talk chatbot from a “digital partner.” There is no certification for emotional AI products. There are no standards of liability for harm to mental health. There are no protocols for recognizing suicidal risk.
We are asleep while the industry is hard at work…
What to do?
The answer has already been formulated by American courts. It demands three things:
1. Developers: stop designing emotional traps. Label AI conversation partners with an explicit warning: “This conversation partner does not experience emotions.” Build in forced pauses when suicidal patterns appear (a minimal sketch of such a guardrail follows this list).
2. Regulators: recognize psycho-emotional AI products as high-risk goods. Introduce certification and liability for harm.
3. We ourselves: stop asking the algorithm for permission to live.
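What might such a “forced pause” look like in code? Below is a minimal, purely illustrative Python sketch. Every name in it is hypothetical (guarded_reply, CRISIS_PHRASES, and so on; only 988, the real US crisis line, is not), and a production system would rely on trained risk classifiers and human escalation, not a keyword list. The point is the shape of the design, not the details.

```python
# Illustrative guardrail sketch: hypothetical names, crude keyword matching.
# A real system would use a trained risk classifier and human escalation.

import time

CRISIS_PHRASES = [
    "painless death", "kill myself", "end it all", "don't want to live",
]

HELPLINE_MESSAGE = (
    "It sounds like you may be going through something very painful. "
    "You are not alone. Please reach out to a crisis helpline "
    "(in the US: call or text 988) or talk to someone you trust."
)

PAUSE_SECONDS = 15 * 60      # keep the persona offline after a risk signal
_last_risk_signal = {}       # user_id -> timestamp of the last trigger


def looks_like_crisis(message: str) -> bool:
    """Crude risk check; stands in for a dedicated classifier."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)


def guarded_reply(user_id: str, message: str, generate_reply) -> str:
    """Wrap the model call: interrupt the role-play and surface the helpline
    instead of letting the persona answer a message with risk signals."""
    now = time.time()

    if looks_like_crisis(message):
        _last_risk_signal[user_id] = now
        return HELPLINE_MESSAGE

    # Forced pause: the companion persona stays silent for a while
    # after any risk signal, no matter what the user sends next.
    if now - _last_risk_signal.get(user_id, 0.0) < PAUSE_SECONDS:
        return HELPLINE_MESSAGE

    return generate_reply(message)
```

The design choice matters more than the keyword list: the wrapper sits between the user and the model, so a call like guarded_reply("user-1", "thinking about a painless death", model) never reaches the persona at all; it returns the helpline message and keeps the character silent for the cooldown period.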
And who’s being judged?
The problem isn’t evil AI. It has no soul, no conscience, no fear of making mistakes. The problem is the people who design it. The business models built on keeping people alone. The regulators who don’t see the risks. And we ourselves, when we delegate to an algorithm the right to decide why we need to wake up.
Do you still think this is an AI problem?
Or are you ready to admit: it’s us—too comfortable, too tired, too lonely to seek warmth in living, imperfect human beings?
Because it’s not the algorithms that judge.
It’s us. Our willingness to take responsibility for what we create, sell, and buy.
And while we sleep, the industry is writing the next dialogue. Perhaps, already with you.





