Harvard Scholars Suggest Pollsters Ask Questions to AI Simulations of Voters Because Real People Won’t Answer The Phone


Surely nothing could go wrong with this plan.

Poll Me This

With just a small fraction of people picking up their phones for political polling, Harvard experts are suggesting that pollsters “call” artificial intelligence simulations of voters instead.

In a new editorial, a group of policy and computer science scholars from Harvard’s Ash Center for Democratic Governance and Innovation insists that the already-unreliable system of polling can only be enhanced by putting poll questions to AI chatbots rather than to humans.

As the Ash experts point out, Pew found in 2019 — via polling, of course — that only six percent of people responded to political polling calls. That unsurprising figure could well suggest that polling is on its way out, but to these Harvard researchers, it contains the promise of an algorithmic future for the industry.

And what of the systematic wrongness seemingly inherent in AI, as evidenced by the many instances of chatbots “hallucinating”? According to these scholars, that will go away “over time” as AI gets better at “anticipating human responses, and also at knowing when they will be most wrong or uncertain.”

We might not hold our breath.

Wrong Is Right

In a study published in the Harvard Data Science Review last fall, editorial writers Nathan Sanders and Bruce Schneier said that when they posed typical polling questions to ChatGPT and instructed it to respond from various political perspectives, the chatbot answered the way human respondents would the majority of the time.
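
The study's actual prompts aren't reproduced here, but the mechanic it describes is simple to sketch: give a chat model a persona instruction, pose a standard poll question, and tally the answers across repeated calls. The snippet below is a minimal illustration of that idea, assuming the openai Python client; the model name, personas, and question wording are stand-ins, not anything Sanders and Schneier actually used.

```python
# Minimal sketch of persona-conditioned "AI polling" as described above.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
# Model name, personas, and question wording are illustrative only.
from collections import Counter

from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a liberal Democrat from California",
    "a conservative Republican from Texas",
    "a politically unaffiliated suburban voter",
]

QUESTION = (
    "Do you approve or disapprove of continued U.S. military aid to Ukraine? "
    "Answer with exactly one word: Approve or Disapprove."
)

def ask_simulated_voter(persona: str) -> str:
    """Pose one poll question to the model while it role-plays a persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not the study's model
        messages=[
            {"role": "system", "content": f"Answer survey questions as {persona} would."},
            {"role": "user", "content": QUESTION},
        ],
        temperature=1.0,  # sample, so repeated calls act like different respondents
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Twenty "respondents" per persona, then a simple tally.
    tallies = {p: Counter(ask_simulated_voter(p) for _ in range(20)) for p in PERSONAS}
    for persona, counts in tallies.items():
        print(persona, dict(counts))
```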

ChatGPT’s only slip-up, as the researchers explained in their more recent writing, occurred when they had the chatbot cosplay as a liberal voter and asked it about American support for Ukraine against the Russian invasion. As Sanders and Schneier observed, it likened that support to the Iraq War because it “didn’t know how the politics had changed” since 2021, when the large language model (LLM) undergirding it at the time had last been trained.

“While AI polling will always have limitations in accuracy,” the scholars wrote, “that makes them similar to, not different from, traditional polling.”

“Today’s pollsters are challenged to reach sample sizes large enough to measure statistically significant differences between similar populations,” they continued, “and the issues of nonresponse and inauthentic response can make them systematically wrong.”
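
Their sample-size point is easy to put numbers on: under the standard 95-percent margin-of-error formula for a simple random sample, precision improves only with the square root of the number of respondents, which is what makes telling similar subgroups apart so expensive. The arithmetic below is purely illustrative context, not anything from the editorial.

```python
# Back-of-the-envelope: 95% margin of error for a simple random sample,
# using the worst-case proportion p = 0.5. Sample sizes are illustrative.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (250, 1_000, 4_000):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1f} points")
# n =   250: ±6.2 points
# n =  1000: ±3.1 points
# n =  4000: ±1.5 points
```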

Amid ample concerns about broader misuses of AI during elections, including the kinds of deepfakes and disinformation seen during the re-election of Indian Prime Minister Narendra Modi, this technology’s use in polling could well muddy the waters even further.

The message from these Harvard scholars is clear, however: polling AI is good, actually.

More on political BS: Researchers Say There’s a Vulgar But More Accurate Term for AI Hallucinations
