New polling from the research and advocacy nonprofit AI Policy Institute (AIPI) found that 80 percent of Americans think presenting AI-generated content as human-made should be illegal.
Specifically, here’s the question they asked:
Sports Illustrated recently acknowledged that it used AI to write stories and assigned them fake bylines. Do you believe this practice should be legal or illegal?
Yes, they’re referring to the story we broke about Sports Illustrated publishing articles bylined by fake authors with made-up names and AI-generated headshots.
It’s fascinating stuff, but we should point out that AIPI’s question contains a notable error. There’s no dispute that Sports Illustrated’s authors were fake and had AI-generated headshots, but the magazine and its owner have denied that AI was used to generate the actual content of the posts, saying the articles were provided by a third-party contractor called AdVon Commerce. AdVon, in turn, also denied that AI was used to write the articles themselves. (Our sources, though, allege that at least some of the articles were produced using AI.)
That caveat aside, the poll results feel like an interesting glimpse into the public’s perception of the media’s use of AI. An overwhelming majority of Americans clearly aren’t thrilled with the idea of publishers spoon-feeding them AI-generated content, let alone content bylined by AI-generated writers, without proper disclosure. And as we move further into an era that could well be defined by AI, that doesn’t seem like a data point publishers should ignore.
According to the survey results, the AIPI asked a diverse cohort of 1,222 Americans four questions concerning Sports Illustrated’s AI debacle: whether the use of AI to “write stories” (again, this has been disputed) and “assign them fake bylines” was “ethical”; whether such an operation should be straight-up “illegal”; whether companies should be required to “disclose and watermark content created by AI”; and whether political ads specifically should be made to “disclose and watermark content created by AI.”
The first two questions yielded the strongest responses, with 84 percent of participants agreeing that such a use of AI would be unethical and 80 percent agreeing that it should be fully illegal. These feelings even appeared to reach across divided party lines: on both questions, Democrats, Republicans, and independents all fell within nine points of each other.
The two watermarking questions yielded more equivocal responses, but that could be because they ask about a more specific possible solution. Someone could generally consider a lack of AI disclosure unethical while still being unsure how effective watermarking would actually be; they might also not fully understand what watermarking entails, or think another possible solution, like requiring publishers of AI content to denote its use through explicit and visible disclaimers, could be more effective overall.
Fascinating as these poll results are, though, they’re not entirely surprising. Disclosure of AI use is a basic consumer rights issue. People like to know what they’re consuming, and with good reason. Should a publisher choose to experiment with AI, the least it can do for readers is give them the opportunity to decide if and how they want to engage with it. And if a publisher fails to explicitly denote its use of AI? Whether the omission is intentional or not, it’s a failure of basic media ethics, and anyone in the business of providing information to the public needs to be better than that.
More on Sports Illustrated and AI: People Are Absolutely Roasting Sports Illustrated’s Ridiculous Excuse for Its AI-Generated Writers