Book Review: AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Reviewed by Frank Cerwin

By Arvind Narayanan & Sayash Kapoor, Princeton University Press, 2024

The title may dissuade many AI advocates among you from picking up this book.  Yes, it is a cautionary tale of the risks inherent in artificial intelligence.  However, it is well grounded in the authors’ experience.  Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy.  Kapoor is a researcher who examines the societal impacts of artificial intelligence, with a focus on reproducibility, transparency, and accountability in AI systems.

This book explores the existential-risk concerns of AI threatening the future of humanity, AI doing AI research, and AI going rogue.  Spoiler alert!  It is not a story of doom and gloom; the authors debunk many of these myths.  I found the chapters on AI-driven content moderation especially interesting, as they dive deep into how context, intent, and slang affect what is considered objectionable.  Examples show how AI struggles when the content it must evaluate does not resemble the data it was trained on.  The use of “algospeak” on social media is also described as a method for avoiding penalties from content moderation algorithms.  Content moderation of intellectual property (IP) is addressed as well; despite AI’s capabilities, public-domain content and fair use currently present challenges.  Ultimately, policymaking is a human activity and remains a challenging aspect of content moderation.  You will appreciate that current AI models do not have all the answers and that developing future AI-driven solutions will be equally challenging.

The book further explores the culture and history of AI hype, especially when it comes to making predictions about the future.  Realize how rarely AI vendors’ claims are independently verified.  Understand why companies do not make their AI models publicly available for scrutiny, arguing that they are trade secrets.  Learn how vendors game the accuracy of AI predictions to attract investors and buyers.  Recognize that evaluating AI models on data they were trained on is a form of “teaching to the test.”  Additionally, learn how “priming bias” is introduced when past exposure to a concept leads to overemphasizing its importance in future decisions.

The book is not only about challenges and opportunities.  The authors present their ideas for addressing several issues with AI.  They introduce the concept of “partial lotteries” to alleviate AI bias.  They also contend that new regulations and laws are not necessarily needed, but rather enforcement of existing ones that remain relevant to AI.

There is a wealth of food for thought in this book, whether you are highly involved with AI in your job or, like me, simply interested in gaining perspective on the subject.  The book is available in many local libraries.