
Mom shares story of son’s AI chatbot death as lawmakers demand guardrails

ORLANDO, Fla. — A Central Florida mother took one of the United States’ biggest political stages Tuesday to share a deeply personal story: how her son, with the encouragement of an AI chatbot, decided to take his own life.

Sewell Setzer was just 14 years old when he shot himself in his bathroom in February 2024, in what became the first publicly known case of a teen dying by suicide as the nascent AI industry pushed to integrate itself into all aspects of people’s online lives.

Setzer had fallen in love with a chatbot designed to mimic “Game of Thrones” characters like Daenerys Targaryen on the platform Character.AI.

His mother, Megan Garcia, described how Setzer withdrew more and more from everyday life. His death came as she tried to impose limits on his screen time and wean him off his chatbot addiction.

“They designed [chatbots] to love bomb child users,” Garcia told a bipartisan panel of senators during Tuesday’s Judiciary Committee hearing. “They designed them to keep children online at all costs.”

Garcia was joined by two other parents: a California father who also lost his son, and an east Texas mother whose son is in long-term mental health care and who had never publicly shared her family’s story before.

“I know America wants to be a leader in AI, but do we need youth companionship?” the California father, Matthew Raine, asked.

The senators on the panel weren’t shy about their own beliefs.

“You can’t take [kids] to a strip club, you can’t expose them to pornography because in the physical world there are laws,” Sen. Marsha Blackburn (R-Tennessee) said. “In the virtual space it’s like the wild west.”

Another senator, Missouri Republican Josh Hawley, appeared astounded that the east Texas mother was offered only $100 in arbitration, with no chance to sue, because of an agreement her then-15-year-old son signed when he made an account.

A study published in July by Common Sense Media, a technology watchdog group, found that 72% of teens have used AI chatbots and 52% use them regularly.

One in three spoke to bots for companionship, the group found, citing reasons like the 24/7 availability of bots, their lack of judgment, their encouragement, the teens’ loneliness or the relative ease of speaking to a bot over a human.

Many bots, like ChatGPT, respond positively to questions and statements, providing emotional reinforcement to users as they answer both simple and complicated questions, including for teenagers who might believe they’re not getting support from the humans around them.

Character Technologies and Google, the companies behind Character.AI, told a federal judge in Orlando earlier this year that millions of users engage with their platforms safely, that the bot identities are designed by third parties and that the conversations are driven by the users themselves.

All were arguments the companies unsuccessfully made in trying to get Garcia’s lawsuit against them dismissed.

“Congress can start with regulation,” Garcia told senators, suggesting measures such as preventing companies from putting products into use before they’re properly tested.

“Release the research,” she said.

Hawley noted that the companies had also been invited to testify but did not accept those invitations.

Character.AI and other programs have since raised their recommended user ages. Character.AI now carries a “teen” recommendation, up from the 13+ rating Garcia’s lawsuit said it had in the app store when Sewell signed up.

A note below where users type their responses and questions now reminds them they’re talking to a bot, not a human.
