https://youtu.be/bRiPBi-j2jQ?si=c4qavjQQGCBjSpFh

Introduction

Why does technology policy always seem to be playing catch-up? History is filled with examples of regulations being created only after a new technology has already revealed its potential harms, forcing governments to be perpetually reactive. With the dawn of the AI era, this challenge has become more acute than ever. As Roberto Viola of the European Commission explains, AI represents both one of our greatest hopes and one of our greatest fears. While it offers powerful solutions for existential challenges like climate change and disease, the World Economic Forum also lists AI-driven misinformation and other adverse outcomes as top global risks.

This "friend or foe" dilemma is what makes regulation so difficult, but AI presents a unique problem. Unlike a traditional piece of engineering like a nuclear plant, where the risks can be calculated and controlled with a predictable set of outcomes, the developers of advanced AI models often "don't know what the outputs can be." As Viola asks, "How are you going to regulate something that you don't know?" You cannot create proactive rules for a near-infinite number of unpredictable outcomes.

The European Union's landmark AI Act is a pragmatic answer to this challenge—an explicit embrace of reactive governance. The Act is risk-based, applying strict guardrails to high-risk applications like medical diagnostics or university admissions, where a biased algorithm could cause enormous harm. But for the most powerful and unpredictable generative models, the strategy is inspired by cybersecurity: regulation is ex-post, or after the fact. The EU is creating an "AI Office" with the power to listen to the scientific community, identify risks as they emerge in the real world, and then compel a developer to "fix it." This is a formal admission that for this new class of technology, policy must be reactive to be effective.

However, the EU's strategy is not purely defensive. While its governance is reactive, its investment in innovation is proactive. By funding public infrastructure, such as the world's largest supercomputing network made freely available to startups, and by creating massive, federated databases of cancer images to train diagnostic models, Europe is actively trying to steer innovation toward positive societal outcomes. This dual approach—reactive control paired with proactive enablement—represents one of the world's first comprehensive attempts to govern the future of AI.

Discussion Questions

  1. Roberto Viola explains that with AI, developers "don't know what the outputs can be." How does this fundamental uncertainty answer the case study's central question: "Why is technology policy for AI largely reactive?"
  2. The EU AI Act takes a risk-based approach, applying strict rules to high-risk areas. How does this connect to the "Tiger Country vs. Hare Country" framework from Chapter 5?
  3. The EU's strategy is twofold: reactive regulation (the AI Act) and proactive investment (public supercomputers, federated data). Why is this dual approach necessary? Why can't a government simply regulate and let the private sector handle all innovation?
  4. Viola mentions the debate around open algorithms, acknowledging they can be used by "bad actors" but also that a "concentration of power stifles innovation." From a societal perspective, what are the biggest risks and benefits of powerful open-source AI models?
  5. Viola gives the example that a radiologist working with AI is "even much better" than either alone. How does this support the arguments made throughout this playbook about the importance of framing AI as a "Co-Pilot" (Chapter 10) rather than a full replacement?