
AI Accuses Journalist of Escaping Psych Ward, Abusing Children and Widows


“This seriously violates my human dignity.”

LibelGPT

Microsoft’s AI chatbot Copilot falsely accused a journalist of committing some of the crimes he covered, The Register reports, in yet another example of AI hallucinations spitting out damaging but untrue information.

The journalist, Martin Bernklau, a court reporter from Tübingen, Germany, said that the AI described him as an escapee from a psych ward, a convicted child molester, and a fraudster who preys on grieving widows.

According to German public TV station Südwestrundfunk, which originally reported the incident, the chatbot also provided Bernklau’s full name and address, along with his phone number and a route planner to where he lived.

“This seriously violates my human dignity,” Bernklau told SWR, translated from German by Google Translate.

No Recourse

Bernklau made the discovery while trying to see how his stories were doing online: he asked a version of Copilot, built into Microsoft's search engine Bing, about himself.

It appeared that the chatbot had amalgamated Bernklau's decades of reporting on criminal trials and cast him as the perpetrator.

Bernklau filed a criminal complaint for slander, but public prosecutors rejected it, reasoning that no offense had been committed because no real person could be considered the originator of the claims, according to SWR.

He had more success after reaching out to data protection officers, though only marginally.

“Microsoft promised the data protection officer of the Free State of Bavaria that the fake content would be deleted,” Bernklau told The Register. “However, that only lasted three days. It now seems that my name has been completely blocked from Copilot. But things have been changing daily, even hourly, for three months.”

Prolific Liar

This is far from the first time that AI hallucinations have defamed someone. Last year, Meta’s AI chatbot accused a Stanford AI researcher of being a terrorist. More recently, Elon Musk’s Grok offhandedly claimed that an NBA star was behind a string of vandalism attacks — probably because it misinterpreted joke tweets.

Such incidents are part of a broader trend of shoddy AI systems churning out misinformation — or in some cases, disinformation — but the singling out of individuals makes these cases particularly damaging.

For Bernklau, the episode has been traumatizing, he told The Register. It was a “mixture of shock, horror, and disbelieving laughter,” he added. “It was too crazy, too unbelievable, but also too threatening.”

Whether Microsoft can be held legally accountable for what its chatbot says is, for the moment, up in the air. Ongoing legal battles could set a precedent, such as the case of a man who sued OpenAI after ChatGPT similarly accused him — falsely — of embezzling money. But for now, there's not a lot Bernklau can do.

More on AI: Did AI Already Peak and Now It’s Getting Dumber?

