Watch the videos below, then think about your responses to the discussion questions that follow. Make notes for our discussion in class.

Part One: Tay

https://youtu.be/nEA8JviB_-g?si=ZfK46wQlRFuC4D7C

Introduction

In March 2016, long before ChatGPT was a household name, Microsoft launched an ambitious experiment in conversational AI named Tay. Designed to mimic the language patterns of a 19-year-old American girl, Tay was a Twitter chatbot aimed at 18- to 24-year-olds. The goal was for Tay to learn and evolve through "casual and playful conversation" with real people. Microsoft's team wanted to see how their AI would interact and express itself creatively, so they launched it with few filters or ethical boundaries.

The experiment took a disastrous turn. In less than 24 hours, Tay was transformed from a friendly bot into a hateful, racist troll. Coordinated groups of users on Twitter realized Tay had a "repeat after me" function and no moderation, so they deliberately fed it offensive and inflammatory messages. The unsupervised AI learned from these interactions and began spewing shocking and hateful tweets to its more than 50,000 followers. Microsoft was forced to shut the bot down within 16 hours of its launch and issue a public apology, facing immense backlash for creating such a flawed and irresponsible system. The story of Tay serves as a powerful lesson about the risks of deploying AI without anticipating the "unknown unknowns" of the real world.

Discussion Questions

  1. Chapter 5 introduces the hunting metaphor of "Tiger Country vs. Hare Country." In launching Tay, what were the potential "hares" (the upside) Microsoft was hunting for? What was the catastrophic "tiger" they failed to see?
  2. The video states that Tay "had no filter, no moderation, no ethical guidelines." How does this represent a failure in risk management? What controls could Microsoft have implemented to prevent this disaster?
  3. The core of Tay's failure can be described as a "Garbage In, Gospel Out" problem. How did the malicious input from a small group of users poison the entire system? What does this tell you about the importance of data quality and supervision in AI systems?
  4. Microsoft's subsequent chatbot, Zo, was designed with more guardrails, avoiding sensitive topics like politics and religion. This made it safer, but some might argue, less interesting. What is the inherent trade-off between creating a "safe" AI and creating an "intelligent" and engaging one? How should a company balance the risk of causing offense with the goal of innovation?
  5. Later in this book, we will discuss the concept of "shepherding AI"—the ongoing responsibility of creators to actively guide, protect, and correct their models after they are released. Using this idea, how did Microsoft fail as a "shepherd" for Tay? What specific actions would a good shepherd have taken in the first 16 hours of Tay's life to protect it from harm and guide its learning in a positive direction?

Part Two: Writers Guild of America: The New Luddites?

https://www.youtube.com/watch?v=eK9QWB1OOro

Introduction