AI Chatbots Are Only Useful If You Think They Are, Scientists Find


Our experience with AI chatbots so far has been incredibly mixed.

One moment, a conversation can feel like talking to an actual person who offers genuine insight and advice. The next, it turns to frustration, with overeager algorithms tripping over themselves to serve up nonsense or false factual claims.

But what if our experience reflects the expectations we bring into these conversations? In other words, what if AI is simply reflecting our own beliefs back at us, as many have suspected for a while now?

In a new study published in the journal Nature Machine Intelligence, a team of researchers from the MIT Media Lab found that subjects who were “primed” for a specific AI experience almost always ended up having that experience.

If you think about it, that’s pretty striking: it seems to suggest that a lot of the attention-grabbing capabilities of chatbots can be explained by users projecting their expectations onto the systems.

“AI is a mirror,” MIT Media Lab’s Pat Pataranutaporn, co-author of the study, told Scientific American.

“We wanted to quantify the effect of AI placebo, basically,” he added. “We wanted to see what happened if you have a certain imagination of AI: How would that manifest in your interaction?”

In an experiment, the team divided 300 participants into three groups. All were asked to use an AI chatbot for mental health support and to gauge how effective it was at providing it.

However, the three groups were each told to expect different experiences, despite the fact that all 300 participants would encounter either OpenAI’s generative GPT-3 or ELIZA, a simpler rule-based AI, neither of which was manipulated in any way.

One group was told the AI had no motives. The second was told the AI was trained to show empathy and compassion. The third was told the AI had “malicious intentions” and was trying to manipulate or deceive the user, per the paper.

The results were clear: a majority of participants in all three groups reported that their experience fell in line with what they had been told to expect.

“When people think that the AI is caring, they become more positive toward it,” Pataranutaporn told SA, arguing that “reinforcement feedback” led to participants changing how they viewed the AI depending on what they were told.

Pataranutaporn and his colleagues suggest that the way entire cultures see AI could end up influencing how the tech is used and developed over time.

“We found that the mental model considerably affects user ratings and influences the behavior of both the user and the AI,” the researchers wrote in their paper. “This mental model is the result of the individual’s cultural background, personal beliefs and the particular context of the situation, influenced by our priming.”

That also means the people behind these AIs have a considerable amount of influence.

“People in marketing or people who make the product want to shape it a certain way,” he told Scientific American. “They want to make it seem more empathetic or trustworthy, even though the inside engine might be super biased or flawed.”

While research has so far largely focused on biases present in AI’s answers, we should also “think of human-AI interaction, it’s not just a one-way street,” he added. “You need to think about what kind of biases people bring into the system.”

More on AI: Futurist Scholar Says AI Chatbots May Have Some Degree of Sentience