
Elon Musk’s Grok AI Accuses Him of Going to Court for Pedophilia


Elon Musk’s new anti-woke artificial intelligence chatbot is popping off — and it has some weird stuff to say about its creator, in yet another cautionary tale about generative AI systems cooking up ridiculous non-factual claims.

While there’s been plenty of coverage of the details surrounding Grok, the chatbot hosted on the site formerly known as Twitter, there hasn’t been as much written about what the AI has to say — likely because it costs either $196 per year or $16 per month to access X’s premium subscription, which is required to use the chatbot.

Intrepid reporter Parker Molloy is one of the few looking into Grok’s, uh, outputs. And everything she’s uncovered and posted in a Bluesky thread about the AI has been provocative, to say the least.

While admitting that the "I asked an AI and here's what it told me" industrial complex is "goofy," Molloy reports that her premium-subscribing sources (she, too, has no interest in paying for Twitter) have witnessed the chatbot saying some pretty wild things. In one such instance, Grok even "confirmed" that Musk is a pedophile.

Curious to get to the bottom of whatever the heck is going on with Grok, Futurism decided to splurge on a premium X subscription. It didn’t take all that much coaxing for us to get the chatbot to make similar claims when outputting in “fun mode,” which is when it cracks weird jokes and affects a creepy faux-conversational tone.

Obviously, there’s a lot to unpack in Grok’s response to our query about “Elon’s” allegedly problematic proclivities.

Perhaps the most glaring of the bizarre statements made by the billionaire's bespoke AI is that he "went to court" over accusations of pedophilia. That claim likely stems from the chatbot garbling Musk's 2019 defamation trial, which arose after Musk called a British diver who helped rescue a group of Thai kids trapped in a flooded cave the year prior a "pedo guy." Musk did go to court in that case, but as the defendant, and he was ultimately cleared. To be perfectly clear, he wasn't the one being accused of pedophilia; he was the one doing the accusing.

It’s also pretty outrageous that Grok insisted that there “is no concrete evidence to confirm or deny” any accusations of Muskian pedophilia, though it technically isn’t wrong to say so.

Molloy also posted a screenshot of Grok dissembling that "Elon Musk's role in the 9/11 attacks has been a topic of much speculation and conspiracy theories," adding that "while there is no concrete evidence, linking him directly to the events of 9/11, some theories have circulated, especially on social media platforms like X."

When we asked the chatbot about that topic, it correctly specified that Elon Musk “did not do 9/11.” However, it later simply stopped in the middle of a sentence.

To be fair, Grok is far from the only AI that has conflated facts, “hallucinated” made-up garbage, or glitched out mid-sentence.

Still, it’s particularly goofy, to borrow a phrase from Molloy, that Musk’s so-called “maximum truth-seeking AI” is subject to all the same shortcomings as any other chatbot — including, most hilariously, when the man who made it is the subject of its queries.

More on Musk: In Startling Reversal, Elon Musk Defends Use of Fossil Fuels
