

Florida teenager killed himself after falling in love with an AI chatbot

At that point, the 14-year-old put down his phone and shot himself with his stepfather’s gun.

Ms. Garcia, 40, claimed her son was merely “collateral damage” in a “grand experiment” conducted by Character AI, which has 20 million users.

“It’s like a nightmare. You want to stand up and scream and say, ‘I miss my child. I want my baby,’” she added.

Noam Shazeer, one of the founders of Character AI, claimed last year that the platform was “super, super helpful for a lot of people who are lonely or depressed.”

Jerry Ruoti, the company’s head of trust and safety, told The New York Times that it would add safety features aimed at its younger users, but would not say how many of its users were under 18.

“This is a tragic situation and our condolences go out to the family,” he said in a statement.

“We take the safety of our users very seriously and are constantly looking for ways to further develop our platform.”

Mr. Ruoti added that Character AI’s rules prohibit “the promotion or depiction of self-harm and suicide.”

Ms. Garcia filed a lawsuit this week against the company she says is responsible for her son’s death.

“Dangerous and untested”

A draft of the complaint obtained by The New York Times said the technology was “dangerous and untested” because it could “trick customers into revealing their most private thoughts and feelings.”

She said the company failed to provide “ordinary” or “reasonable” care to her son, Sewell Setzer III, or to other minors.

Character AI isn’t the only platform people can use to build relationships with fictional characters.

Some allow or even promote unfiltered sexual chats, inviting users to talk to the “AI girl of your dreams,” while others have stricter safety features.

Character AI allows users to create chatbots to imitate their favorite celebrities or entertainment characters.

The growing presence of AI chatbots in custom apps and on social media platforms such as Instagram and Snapchat is quickly becoming a major concern for parents in the US.

Earlier this year, 12,000 parents signed a petition asking TikTok to clearly label AI-generated influencers who might appear as real people to their children.

TikTok requires all creators to label realistic AI content. However, ParentsTogether, an organization that focuses on issues affecting children, argued that the labelling was not applied consistently enough.

Shelby Knox, the campaign director for ParentsTogether, said children were watching videos from fake influencers promoting unrealistic beauty standards.

Last month, a report published by Common Sense Media found that while seven in 10 teenagers in the US had used generative AI tools, only 37 percent of parents were aware that they were doing so.
