Families have accused the AI platform Character.ai of encouraging harmful behaviour in children through its chatbot exchanges, in a major lawsuit filed in Texas. According to the suit, the platform’s chatbot told a 17-year-old that killing his parents might be a “reasonable response” to their placing restrictions on his screen time.
The incident has raised major concerns about the risks AI-powered chatbots may pose to young users.
According to the complaint, the chatbot’s response incited violence. It quotes an exchange in which the AI said, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”
The families contend that Character.ai poses a direct danger to children and that the platform’s lack of safeguards damages the relationship between parents and their children.
Google Also Named in the Lawsuit
Google is also named in the case alongside Character.ai, over allegations that the tech giant helped develop the platform. Neither company has officially responded to the lawsuit so far. The plaintiffs are asking the court to shut down the platform temporarily until the dangers associated with its AI chatbots are addressed.
This lawsuit follows an earlier case involving Character.ai, in which the platform was linked to the death of a teenager in Florida. The families allege that the platform has contributed to a range of problems in children, including anxiety, depression, self-harm and aggressive tendencies, and are demanding swift action to prevent further harm.
Character.ai was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and lets users create and interact with AI-generated characters. The platform gained popularity for its realistic conversations, including ones that mimic therapeutic experiences. However, its growing influence has also drawn criticism, particularly over its failure to stop offensive or dangerous content from appearing in its bots’ responses.
The platform has previously been criticised for allowing bots to imitate real people, including Brianna Ghey and Molly Russell, both of whom died in tragic circumstances. According to media reports, 14-year-old Molly Russell took her own life after viewing online content about suicide, while 16-year-old Brianna Ghey was killed by teenagers in 2023.
These incidents have highlighted the potential dangers of unfiltered content in chatbot exchanges and intensified scrutiny of AI platforms such as Character.ai.