
A new book tackles AI hype – and how to spot it



That’s one small example of how AI fails. Arvind Narayanan and Sayash Kapoor collect dozens of others in their new book, AI Snake Oil — many with consequences far more concerning than irking one science journalist. They write about AI tools that purport to predict academic success, the likelihood someone will commit a crime, disease risk, civil wars and welfare fraud (SN: 2/20/18). Along the way, the authors weave in many other issues with AI, covering misinformation, a lack of consent for images and other training data, false copyright claims, deepfakes, privacy and the reinforcement of social inequities (SN: 10/24/19). They address whether we should be afraid of AI, concluding: “We should be far more concerned about what people will do with AI than with what AI will do on its own.”

The authors acknowledge that the technology is advancing quickly. Some of the details may be out of date — or at least old news — by the time the book makes it into your hands. And clear discussions about AI must contend with a lack of consensus over how to define key terms, including the meaning of AI itself. Still, Narayanan and Kapoor squarely achieve their stated goal: to empower people to distinguish AI that works well from AI snake oil, which they define as “AI that does not and cannot work as advertised.”

Narayanan is a computer scientist at Princeton University, and Kapoor is a Ph.D. student there. The idea for the book was conceived when slides for a talk Narayanan gave in 2019 titled “How to recognize AI snake oil” went viral. He teamed up with Kapoor, who was taking a course that Narayanan was teaching with another professor on the limits of prediction in social settings.

The authors take direct aim at AI that can allegedly predict future events. “It is in this arena that most AI snake oil is concentrated,” they write. “Predictive AI not only does not work today, but will likely never work, because of the inherent difficulties in predicting human behavior.” They also devote a long chapter to the reasons AI cannot solve social media’s content moderation woes. (Kapoor had worked at Facebook helping to create AI for content moderation.) One challenge is that AI struggles with context and nuance. Social media also tends to encourage hateful and dangerous content.

The authors are a bit more generous with generative AI, recognizing its value if used smartly. But in a section titled “Automating bullshit,” the authors note: “ChatGPT is shockingly good at sounding convincing on any conceivable topic. But there is no source of truth during training.” It’s not just that the training data can contain falsehoods — the data are mostly internet text after all — but also that the program is optimized to sound natural, not necessarily to possess or verify knowledge. (That explains Enceladus.)

I’d add that an overreliance on generative AI can discourage critical thinking, the human quality at the very heart of this book.

When it comes to why these problems exist and how to change them, Narayanan and Kapoor bring a clear point of view: Society has been too deferential to the tech industry. Better regulation is essential. “We are not okay with leaving the future of AI up to the people currently in charge,” they write.

This book is a worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives — and a convincing plea to take care in how we interact with it.



