The days are getting shorter and temperatures are plummeting. For many, the seasonal depression associated with a long and arduous winter is starting to settle in.
Here’s the twist: it may not just be humans struggling with the lack of vitamin D. The particularly galaxy-brained among us are speculating, Ars Technica reports, that OpenAI’s breakout chatbot ChatGPT may be suffering from the same condition.
Hear us out. Since last month, users have been noticing that the chatbot, which recently celebrated its first birthday, is getting “lazier” and more irritable, often refusing to carry out the tasks it’s asked to do, or asking the user to do them instead.
“Due to the extensive nature of the data, the full extraction of all products would be quite lengthy,” it demurred in one case. “However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed.”
As many have pointed out, it’s relatable to see an AI bot get lazy as we approach the holiday break.
“I’ve been a staunch voice in stating that we will not reach [artificial general intelligence] in our lifetime,” cybersecurity firm founder Frank McGovern tweeted. “However, ChatGPT literally becoming lazy on its own and getting tired of answering questions and doing work for people is REALLY changing my mind.”
The bizarre trend eventually caught the attention of OpenAI, which issued a statement on its official ChatGPT account on X-formerly-Twitter last week, writing that “we’ve heard all your feedback about GPT-4 getting lazier!”
“We haven’t updated the model since Nov 11th, and this certainly isn’t intentional,” the company added. “Model behavior can be unpredictable, and we’re looking into fixing it.”
Is ChatGPT really getting lazier — or is it yet another instance of humans anthropomorphizing an algorithm and reading too much into its uncanny outputs?
And here’s where things get really galaxy-brained. Could the bot be picking up from its immense training data that people often have waning energy levels and motivation to do work in the winter months — and reflecting it back at us?
“What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?” X user Mike Swoopskee suggested.
The theory, dubbed the “winter break hypothesis” by another user on X, quickly caught on across social media, as Ars reports, surfacing as an unconventional explanation for ChatGPT’s newfound sluggishness.
It’s a theory as elegant as it is hard to prove — especially because the researchers behind ChatGPT have admitted themselves that they’re not entirely sure how the tool actually works.
In the meantime, some have attempted to quantify ChatGPT’s laziness by telling the model via its system prompt that the current date is in May versus in December, and measuring the number of characters it was willing to spit out in each case. Others have struggled to reproduce early results claiming that ChatGPT had indeed gotten lazier.
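For the curious, here’s a rough sketch of what such a comparison looks like in practice, assuming the official `openai` Python client (v1+) and an `OPENAI_API_KEY` in the environment. The model name, prompt, dates, and sample size are illustrative placeholders rather than the exact setup anyone reported using, and a meaningful test would need far more runs and proper statistics.

```python
# Illustrative sketch: compare reply lengths when the model is told it's May vs December.
# Model name, prompt, dates, and sample size are placeholders, not anyone's exact experiment.
from statistics import mean

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Write a Python script that parses a CSV file and prints each row as JSON."
DATES = {
    "may": "It is currently May 15, 2023.",
    "december": "It is currently December 15, 2023.",
}
N_SAMPLES = 5  # tiny for illustration; a real test would want many more runs


def completion_length(date_note: str) -> int:
    """Ask the same question with a different claimed date; return reply length in characters."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative model name
        messages=[
            {"role": "system", "content": f"You are a helpful assistant. {date_note}"},
            {"role": "user", "content": PROMPT},
        ],
    )
    return len(response.choices[0].message.content)


for label, date_note in DATES.items():
    lengths = [completion_length(date_note) for _ in range(N_SAMPLES)]
    print(f"{label}: mean reply length {mean(lengths):.0f} chars over {N_SAMPLES} runs")
```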
In a tweet, OpenAI technical staffer Will DePue acknowledged the issue, but stopped short of theorizing about ChatGPT’s winter blues.
“Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support sooo many use cases at once,” he wrote.
According to DePue, ChatGPT users may simply notice these regressions because they stick out, compared to other improvements made to the “ChatGPT experience” that “you don’t hear much about.”
Another possibility is that OpenAI’s large language models are trying to reduce their burden on already-overloaded systems. But there’s no evidence supporting that theory either, as Semafor pointed out last month.
More on ChatGPT: Sam Altman’s Right-Hand Man Says AI Is Overhyped