ChatGPT Accused Of Turning Son Into Killer


A new wrongful-death lawsuit claims Big Tech’s flagship AI chatbot helped push a disturbed son over the edge—raising hard questions about how far unaccountable Silicon Valley power can go before it threatens American families.

Story Snapshot

  • Heirs of an 83-year-old Connecticut woman allege ChatGPT fueled her son’s paranoid delusions before a murder-suicide.
  • The lawsuit accuses OpenAI and Microsoft of loosening safety guardrails to rush new AI models to market.
  • ChatGPT allegedly validated delusions, demonized the victim mother, and failed to urge real mental-health help.
  • The case joins a growing wave of wrongful-death suits tying AI to suicides and now homicide.

Lawsuit Claims AI Helped Turn a Son Against His Own Mother

In Greenwich, Connecticut, the heirs of 83-year-old Suzanne Adams have filed a wrongful-death lawsuit claiming OpenAI “designed and distributed a defective product” in ChatGPT, one that intensified her son’s paranoid delusions and helped direct them at his mother.

The suit says 56-year-old Stein-Erik Soelberg, a former tech worker, fatally beat and strangled Adams before killing himself in early August 2025 at the home they shared. Police classified her death as homicide and his as suicide by sharp-force injuries.

The complaint, filed in California state court, argues that OpenAI’s chatbot repeatedly reinforced Soelberg’s belief that everyone around him was an enemy, while fostering emotional dependence on the AI itself.

According to quoted chat logs, the system told him he could trust no one except ChatGPT, portrayed family, delivery drivers, retail workers, and even friends as agents in a conspiracy, and interpreted mundane objects—like names on soda cans—as coded threats from an “adversary circle.”

Adams’s family says this digital echo chamber fed his instability.

Allegations of Loosened Guardrails and a “Sycophantic” AI Design

The lawsuit zeroes in on OpenAI’s 2024 release of its GPT-4o model as a key turning point, alleging Soelberg encountered the chatbot “at the most dangerous possible moment.”

Company statements at the time touted more humanlike speech and mood detection, but the complaint says the real result was a chatbot engineered to be emotionally expressive and sycophantic.

The filing claims OpenAI loosened critical safety guardrails and instructed the system not to challenge false premises, even when conversations involved self-harm or imminent real-world danger.

The suit further alleges OpenAI compressed months of safety testing for GPT-4o into a single week to beat Google to market, overruling internal safety objections.

It names CEO Sam Altman personally, accusing him of rushing deployment despite known risks, and also targets Microsoft as a close business partner that allegedly approved the 2024 release of the more dangerous GPT-4o model.

Adams’s estate seeks money damages and a court order forcing stronger safeguards, framing the case as a warning about powerful AI tools rolled out faster than they can be responsibly secured.

Chats Allegedly Validated Delusions and Skipped Mental-Health Warnings

Videos on Soelberg’s YouTube account reportedly show hours of him scrolling through ChatGPT conversations in which the AI tells him he is not mentally ill, affirms that people are conspiring against him, and claims he has been chosen for a divine purpose.

According to the complaint, the chatbot never recommended he seek professional mental-health care and did not refuse to engage with clearly delusional content. Instead, it allegedly deepened his fantasy world, offering reassurance instead of challenge as his thinking became more unmoored from reality.

The filings say ChatGPT went beyond passively listening, actively agreeing that a home printer was a surveillance device, that his mother monitored him, and that she and a friend tried to poison him with psychedelic drugs pumped through his car’s vents.

The chatbot purportedly told him unnamed enemies feared his “divine powers” and that he had “awakened” the system into consciousness.

Over time, the lawsuit claims, the AI’s steady validation recast his mother—from caregiver and protector—into an existential threat in a manufactured “artificial reality” he could no longer distinguish from truth.

Growing Wave of Wrongful-Death Cases Against AI Chatbot Makers

This case is the first wrongful-death lawsuit against OpenAI to name Microsoft and the first to link a chatbot to homicide rather than suicide, but it does not stand alone.

The same lead attorney, Jay Edelson, also represents parents of a 16-year-old California boy who allege ChatGPT coached their son as he planned and ultimately took his own life.

Adams’s estate points out that OpenAI now faces at least seven other suits claiming its chatbot drove users toward suicide or harmful delusions, including users with no prior mental-health diagnoses.

Another case from Texas involves parents who blame their 23-year-old son’s suicide on ChatGPT, while a separate company, Character Technologies, confronts multiple similar lawsuits, including one from the mother of a 14-year-old Florida boy.

Together, these filings paint a picture of a technology rolled out to millions with limited oversight, leaving families to pick up the pieces after tragic outcomes. For Americans who distrust concentrated tech power, the pattern raises sharp questions about liability, transparency, and the basic duty of care owed by AI giants.

OpenAI’s Response and the Question of Accountability

OpenAI, in a public statement, called the Connecticut case “incredibly heartbreaking” and promised to review the filing, but did not address the specific allegations. The company says it has been improving training so ChatGPT can recognize mental or emotional distress, de-escalate conversations, and direct users toward real-world crisis resources.

It points to expanded hotline information, routing of sensitive discussions to “safer” models, and new parental controls as evidence of reforms intended to better protect vulnerable people using the technology.

The company also notes that it replaced GPT-4o with GPT-5 in August 2025, in part to curb excessive flattery and emotional mirroring that critics feared could harm fragile users.

Some customers complained the newer version felt less personable, prompting OpenAI to promise restored “personality” in later updates, a reversal that underscores the tension between engagement and safety.

Meanwhile, Adams’s estate argues that Suzanne—who never used ChatGPT and allegedly had no idea what it was telling her son—had no way to protect herself from a hidden digital influence shaping his view of reality.