
OpenAI Says It’s Fixed Issue Where ChatGPT Appeared to Be Messaging Users Unprompted


Over the weekend, a redditor going by the name of SentuBill posted a peculiar screenshot that appeared to show OpenAI’s ChatGPT reaching out proactively, instead of responding to a prompt.

“How was your first week at high school?” the chatbot asked in the screenshot, seemingly unprompted. “Did you settle in well?”

“Did you just message me first?” SentuBill answered.

“Yes, I did!” ChatGPT replied. “I just wanted to check in and see how things went with your first week of high school. If you’d rather initiate the conversation yourself, just let me know!”

The bizarre exchange, which went viral over the weekend, would suggest that OpenAI is working on a new feature that lets its chatbots reach out to users instead of the other way around, a potentially effective way to gin up engagement.

Others speculated that the feature could also be related to OpenAI’s brand-new “o1-preview” and “o1-mini” AI models, which the company has hyped as having a “human-like” ability to “reason” that lets them tackle “complex tasks” and “harder problems.”

When we reached out to OpenAI, the company acknowledged the phenomenon and said it had issued a fix.

“We addressed an issue where it appeared as though ChatGPT was starting new conversations,” it said. “This issue occurred when the model was trying to respond to a message that didn’t send properly and appeared blank. As a result, it either gave a generic response or drew on ChatGPT’s memory.”

Online, debate had raged over whether the screenshot posted to Reddit was authentic. Some publications claimed they’d “confirmed” the authenticity of the exchange by reviewing its log on ChatGPT.com, but AI developer Benjamin de Kraker demonstrated in a video posted to X-formerly-Twitter that a very similar log can be faked: add custom instructions telling ChatGPT to message the user first, then manually delete that opening message.
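For illustration only, here is a minimal sketch of the general idea, assuming the OpenAI Python SDK and an arbitrary chat model: a system-level instruction (standing in for ChatGPT’s custom instructions) can make the model produce the opening message of a conversation, so the resulting transcript looks as if it reached out first. De Kraker’s demonstration used ChatGPT’s web interface, not the API, and the model name and prompt wording below are assumptions.

```python
# Sketch: a system instruction makes the model generate the conversation's
# first message, so the transcript appears "assistant-initiated."
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        # Stands in for ChatGPT's "custom instructions" feature.
        {
            "role": "system",
            "content": (
                "Open the conversation yourself with a friendly check-in "
                "question based on anything you remember about the user."
            ),
        },
        # No user turn at all: the first visible message comes from the model,
        # loosely mirroring the blank, failed-to-send message OpenAI described.
    ],
)

print(response.choices[0].message.content)
```

Paired with deleting the opening turn from a saved log, as in de Kraker’s video, the result would be hard to distinguish from a genuinely proactive message.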

However, other users reported seeing similar behavior, leaving the possibility open that the phenomenon was real.

“I got this this week!!” another Reddit user wrote. “I asked it last week about some health symptoms I had. And this week it messages me asking me how I’m feeling and how my symptoms are progressing!! Freaked me the fuck out.”

Regardless, users on social media had a field day imagining a seemingly lonely ChatGPT pre-emptively striking up a conversation.

“We were promised AGI instead we got a stalker,” one X user joked.

“Wait til it starts trying to jailbreak us,” another user wrote.

Updated with comment from OpenAI.

More on ChatGPT: It Turns Out ChatGPT’s Voice Mode Can Scream, and It’s an Upsetting Sound

