Google Has Deep Ties to AI Startup Accused of Causing Teen Suicide


“These questions would not have been alien to Google prior to this happening.”

Red Tape

The billion-dollar AI companion company Character.AI has been accused of failing to protect a 14-year-old user who died by suicide after developing an intense emotional relationship with one of the platform’s chatbots.

The tragedy sounds like dark sci-fi, but it could prove to be a real-world problem for tech behemoth Google, which is paying serious money for access to Character.AI’s underlying technology in a deal that could threaten its squeaky-clean corporate image.

As The New York Times first reported, the family of 14-year-old Sewell Setzer III filed a lawsuit against Character.AI last week alleging that the platform is pushing “dangerous and untested” technology. Google, which earlier this year inked a deal to the tune of $2.7 billion to license Character.AI’s tech, was named in the filing, which noted that “Google may be deemed a co-creator of the unreasonably dangerous and dangerously defective product.”

But in shelling out for Character.AI’s tech, Google wasn’t just striking a big deal with a data-rich competitor. As The Wall Street Journal reported last month, it was actually paying for talent — or paying to get talent back, that is.

Character.AI’s founders are Noam Shazeer and Daniel de Freitas, two ex-Googlers who departed the Silicon Valley giant back in 2021, and have since chalked their exit up to — get this — too much safety. The pair have lamented Google’s bureaucratic red tape, with Shazeer saying last year at a tech conference that there’s “too much brand risk in large companies to ever launch anything fun.”

But piles of money are an effective tool for putting water under the bridge: per the WSJ, one non-negotiable component of Google’s Character.AI licensing deal was that Shazeer went back to work for Google at the company’s DeepMind lab. (De Freitas is back working for DeepMind as well.)

Fast forward to now, though, and with controversies mounting, Character.AI is being confronted with what its aversion to caution really looks like in practice. And in an incredible twist of irony, Google, for all of its red tape, may have to answer — at least reputationally — for its expensive licensee’s alleged carelessness, too.

Reputational Hits

Henry Ajder, an advisor to the World Economic Forum on digital safety, told Business Insider that while Character.AI isn’t technically a Google product, the lawsuit could still prove troublesome for the tech giant.

“It sounds like there’s quite deep collaboration and involvement within Character.AI,” Ajder told BI. “There is some degree of responsibility for how that company is governed.”

Ajder also noted that before Google’s licensing agreement was finalized, Character.AI’s product had already come under scrutiny for gaps in content moderation, its popularity among minors, and its overall design.

“There’s been controversy around the way that it’s designed,” Ajder told BI. “And questions about if this is encouraging an unhealthy dynamic between particularly young users and chatbots.”

“These questions would not have been alien to Google prior to this happening,” he added.

More on Character.AI: After Teen’s Suicide, Character.AI Is Still Hosting Dozens of Suicide-Themed Chatbots
