https://youtu.be/bRiPBi-j2jQ?si=c4qavjQQGCBjSpFh
Why does technology policy always seem to be playing catch-up? History is filled with examples of regulations being created only after a new technology has already revealed its potential harms, forcing governments to be perpetually reactive. With the dawn of the AI era, this challenge has become more acute than ever. As Roberto Viola of the European Commission explains, AI represents both one of our greatest hopes and one of our greatest fears. While it offers powerful solutions for existential challenges like climate change and disease, the World Economic Forum also lists AI-driven misinformation and other adverse outcomes as top global risks.
This "friend or foe" dilemma alone would make regulation difficult, but AI adds a problem no earlier technology has posed. With a traditional piece of engineering such as a nuclear plant, the risks can be calculated and controlled within a predictable set of outcomes; with advanced AI models, even the developers often "don't know what the outputs can be." As Viola asks, "How are you going to regulate something that you don't know?" You cannot write proactive rules for a near-infinite number of unpredictable outcomes.
The European Union's landmark AI Act is a pragmatic answer to this challenge, and an explicit embrace of reactive governance. The Act is risk-based, applying strict guardrails to high-risk applications such as medical diagnostics or university admissions, where a biased algorithm could cause enormous harm. But for the most powerful and unpredictable generative models, the strategy is inspired by cybersecurity: regulation is ex-post, or after the fact. The EU is creating an "AI Office" with the power to listen to the scientific community, identify risks as they emerge in the real world, and then compel a developer to "fix it." This is a formal admission that, for this new class of technology, policy must be reactive to be effective.
However, the EU's strategy is not purely defensive. While its governance is reactive, its investment in innovation is proactive. By funding public infrastructure, such as a supercomputing network made freely available to startups, and by creating massive federated databases of cancer images for training diagnostic models, Europe is actively trying to steer innovation toward positive societal outcomes. This dual approach, reactive control paired with proactive enablement, represents one of the world's first comprehensive attempts to govern the future of AI.