Risk Management for AI Initiatives

In the previous chapter, we introduced the idea of a project premortem—a mental time machine for anticipating failure before it happens. But simply imagining failure isn’t enough. One of the most important outcomes of a premortem is a concrete list of potential risks. And once you’ve got that list, it shouldn’t sit in a dusty file or be forgotten after a team meeting. It becomes the beginning of a risk register: a living document that tracks what might go wrong, who’s concerned about it, and what the team plans to do if that risk becomes real.
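To make that concrete, here is a minimal sketch of what one entry in such a register might capture. The field names and categories below are illustrative assumptions, not a standard, and Python is just one convenient way to write them down:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row in a living risk register (illustrative fields, not a standard)."""
    description: str   # what might go wrong
    raised_by: str     # who flagged the concern (stakeholder, team member)
    likelihood: str    # e.g. "low" / "medium" / "high"
    impact: str        # e.g. "minor" / "major" / "bet the business"
    mitigation: str    # what the team plans to do if the risk becomes real
    status: str = "open"  # open / mitigated / accepted / closed
    logged_on: date = field(default_factory=date.today)

register = [
    RiskEntry(
        description="Support chatbot fabricates answers under peak load",
        raised_by="Skeptical support engineer",
        likelihood="medium",
        impact="major",
        mitigation="Human review of all externally facing responses",
    ),
]
```

Whether you keep it in a spreadsheet, a wiki page, or code, the shape matters less than the habit: every concern gets a row, an owner, and a plan.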

Stakeholder analysis should naturally flow into this. If you’ve taken the time to identify the people your initiative impacts, then it makes sense to actually listen to them. Stakeholder concerns, objections, and even offhand comments can be early warning signs of risk. A skeptical engineer, a hesitant end-user, or a frustrated customer might be giving you clues. The sooner you document those concerns, the better prepared you’ll be to act.

But here’s the catch: humans are notoriously bad at dealing with risk. The teams responsible for delivering AI initiatives are no different.

We tend to think things will go better than they actually do—a psychological quirk known as optimism bias. It’s especially common in project teams, where enthusiasm can sometimes override caution. The result? Schedules that are too tight, budgets that are too thin, and promises that are too big. And when things go sideways, it’s usually harder to recover than we like to admit. Rebuilding broken systems is often more expensive than doing things right the first time. And when trust is lost—especially in AI systems—it can be very difficult to win back.

Overpromise and Underdeliver

In my consulting days, our CEO had a favorite mantra: underpromise and overdeliver. It wasn’t just a catchy phrase—it was how we built trust. We’d aim to be realistic with clients, and then exceed expectations in ways that earned respect and repeat business. Today, we’re seeing the opposite play out across the AI landscape. Companies are rushing to cash in on the excitement, promising the moon, hoping that no one asks too many hard questions before launch. It’s a dangerous game.

When you peel back the shiny demos and flashy promises, what matters most is whether AI can actually improve real processes without introducing bigger risks. This is where risk management becomes reality-grounding. If the new tool makes customer support faster but also spews incorrect responses under pressure, that’s not a win. If an AI model boosts predictions but no one can understand how or why, that’s not just a risk—it’s a liability. Cutting through the fluff means anchoring every AI conversation in the logic of process: What does this improve? What could go wrong? And how do we know we’re ready for it?

AI’s hype phase may be loud and exciting, but the real work is quiet, careful, and thoughtful. That’s the work worth doing.

AI Doesn’t Blush

Here’s another thing: AI doesn’t feel shame. But the humans relying on AI certainly can. Take the now-infamous case of the New York lawyers who submitted a legal brief written by ChatGPT—complete with citations to cases that didn’t exist. They were fined. Or the viral videos of people delivering AI-written speeches (and even prayers) and inadvertently reading aloud the trailing line “Would you like me to refine this or add a further illustration?” These aren’t just technical hiccups. They’re public failures that damage reputations, erode trust, and expose individuals and organizations to legal and regulatory risk.

This is why it’s important to think broadly about the risks that may result from the use of AI within your organization, whether that means employees simply using ChatGPT as part of their work or a custom-built, AI-enabled application running a core process. Once a risk is identified and analyzed, it becomes a known unknown—something you can prepare for. If it happens, it’s an issue, not a crisis. But unknown unknowns, the things you never saw coming, are where the real damage happens. One approach we’ll return to later is human-in-the-loop evaluation: systems that let people intervene when AI is likely to mess up.
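As a preview, here is one minimal shape such a check might take: route low-confidence outputs to a person instead of shipping them automatically. The model interface, queue, and threshold below are illustrative assumptions, not a prescription:

```python
import queue

CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff; tuned per use case

def handle_request(request, model, review_queue):
    """Route low-confidence AI output to a person instead of shipping it blind."""
    answer, confidence = model.predict(request)  # hypothetical model interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer  # the model is confident enough to respond directly
    review_queue.put((request, answer))  # a human checks before anything goes out
    return "This request has been escalated to a human reviewer."

review_queue = queue.Queue()  # staffed by the team that owns the process
```

The design choice is the point: the system fails toward a person, not toward a confident-sounding wrong answer.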

Understand the Cost of Being Wrong

First, Weigh Strategic Stakes

Not all decisions carry the same weight. Management scholars have long argued that a CEO’s time and attention should go to decisions that truly impact the business, not to trivial matters. The same logic applies to decisions about AI. Before adopting AI or launching an AI project, a team must soberly assess the potential cost of being wrong. Is this a minor experiment, or a “Bet the Business” decision where failure could have catastrophic consequences?

A useful framework for understanding this is a simple hunting analogy: When you are hunting in tiger country, you don’t care about hares. When you are hunting in hare country, you must still look out for tigers.

A crucial part of AI risk management is correctly identifying which country you are in before you start the hunt. Misjudging the environment—and treating a potential tiger as a hare—is how manageable issues escalate into full-blown crises. This framework helps teams prioritize their attention, focusing on the risks that matter most for the decision at hand.
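One rough way to tell tigers from hares before the hunt is a simple expected-cost calculation: multiply the likelihood of being wrong by the cost of being wrong. The figures below are invented purely for illustration:

```python
def expected_cost(p_failure: float, cost_if_wrong: float) -> float:
    """Crude expected cost of being wrong: probability times impact."""
    return p_failure * cost_if_wrong

# Invented figures, for illustration only.
hare = expected_cost(p_failure=0.50, cost_if_wrong=10_000)       # 5,000: a failed pilot
tiger = expected_cost(p_failure=0.05, cost_if_wrong=50_000_000)  # 2,500,000: a public crisis
```

Crude as it is, the arithmetic makes the analogy bite: a small chance of a catastrophic loss can dwarf a coin-flip chance of a cheap one, and that is the risk that deserves leadership attention.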