Beyond the failure to prevent the catastrophe of Hurricane Katrina in 2005, perhaps the worst optics for the US government came from the poor and lethargic response. The nation watched as camera crews in flyover helicopters recorded citizens stranded on rooftops, desperate for aid. The government has immense resources to support people in disaster-prone regions, at both the federal and state levels, so why were people stuck? The crisis exposed a fundamental breakdown not just of physical infrastructure, but of information infrastructure. First, there was the problem of physically reaching people in the flooded regions; second, there was the subsequent problem of connecting those survivors to the myriad resources available across different agencies. It was a catastrophic information flow problem. In response, Executive Order 13411 was issued, leading to the creation of DisasterAssistance.gov in 2008. This system was an attempt to solve the information problem, creating a single access point for survivors of any future catastrophe to navigate the confusing landscape of federal aid. While a crucial step forward, building the initial version showed the project team the immense technical and bureaucratic challenges of integrating disparate government systems, forcing the system to evolve continuously, a process that continues to this day. This event serves as a powerful real-world prelude to the core challenge of this chapter: we are building vast, automated systems of governance, but their effectiveness hinges entirely on their ability to manage and act upon complex flows of information in the real world.
In a scenario like Katrina, today's AI could be a game-changer. Computer vision models could scan satellite and drone footage in real-time to pinpoint the locations of stranded individuals. Logistical AIs could optimize the routes for thousands of rescue boats and helicopters, navigating debris-filled waters and prioritizing the most critical cases. Generative AI could instantly process millions of aid applications, cross-referencing them with multiple databases to verify identities and assess needs in minutes, not weeks. But this vision of hyper-efficient, AI-driven response runs headfirst into a formidable obstacle: the operating model of government itself. Governments, by their nature, are plagued with bureaucracy. They are built on rigid hierarchies, siloed departments, and slow, deliberate, rule-based processes designed to ensure accountability and fairness, but which often result in paralysis. The introduction of AI represents a profound shock to this system. The "models of AI"—probabilistic, data-hungry, and operating at machine speed—are fundamentally incompatible with the entrenched, human-centric processes of government. You cannot simply plug a learning algorithm into a system built on paper forms and inter-departmental memos. The integration requires a radical rethinking of how the government itself operates, a challenge that goes far beyond technology.
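To make the logistics piece concrete, consider how a dispatch system might triage detections streaming off a vision pipeline. The following is a minimal sketch in Python; the scoring weights, coordinates, and signals are invented for illustration, not drawn from any real emergency-management system.

```python
import heapq

def priority(people: int, urgency: float, water_rise: float) -> float:
    # Illustrative weighting: more people, higher medical urgency, and
    # faster-rising water all push a case toward the front of the queue.
    # heapq pops the smallest value first, so we negate the score.
    return -(people * 1.0 + urgency * 5.0 + water_rise * 3.0)

queue: list[tuple[float, tuple[float, float], int]] = []

# Detections would stream in from a computer-vision pipeline; the
# coordinates and signal values here are invented.
heapq.heappush(queue, (priority(4, 0.9, 0.2), (29.95, -90.07), 4))
heapq.heappush(queue, (priority(1, 0.3, 0.1), (29.96, -90.05), 1))

while queue:
    _, location, people = heapq.heappop(queue)
    print(f"Dispatch next boat to {location}: {people} stranded")
```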
So while we are captivated by the idea of driverless cars, Agentic AI, or other self-contained systems with clear goals and minimal speed bumps, when the goal is to build "driverless" systems at the scale of entire corporations and governments, we would do well to look before we leap. Nevertheless, the same logic of automation is being applied not just to closed, tractable systems with an abundance of structured tasks, but to the very engines of our society and economy.
The modern corporation is already on a path toward algorithmic management. Strategic decisions that were once the exclusive domain of human executives—dynamic pricing, supply chain logistics, media buying, and even aspects of hiring—are increasingly delegated to AI agents. The goal is a perfectly rational, data-driven enterprise, free from the biases and inconsistencies of human intuition.
Nowhere is this trend more apparent than in the fast-fashion industry, with the rise of companies like Shein. The traditional fashion world has long been dominated by influential tastemakers who set trends months or even years in advance, placing massive bets on what consumers will want to wear. Shein has systematically dismantled this model. Instead of relying on human intuition, the company releases thousands of new items in small batches and uses real-time data from its app—every click, every add-to-cart, every purchase—to automate the reordering process. The "tastemaker" is no longer a human editor in a high-rise office; it is an algorithm responding to the collective desire of millions of users. This has created an existential crisis for traditional retailers, whose business model relies on the seasonal bet. As Shein's data-driven engine floods the market with what's popular right now, legacy fashion houses are increasingly left with warehouses full of last season's rejects, their profit margins shrinking while the automated enterprise thrives.
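Mechanically, this is a test-and-reorder loop: seed the market with small batches, watch the engagement signals, and reorder only the winners. A minimal sketch of that logic, with invented weights, threshold, and product data, might look like this:

```python
# Minimal sketch of data-driven reordering: every product starts as a
# small test batch; engagement signals decide which ones get reordered.
# The weights, threshold, and catalog are illustrative assumptions.

def demand_score(clicks: int, add_to_carts: int, purchases: int, views: int) -> float:
    if views == 0:
        return 0.0
    # Purchases count most, then add-to-carts, then clicks,
    # normalized by how many users actually saw the item.
    return (purchases * 5 + add_to_carts * 2 + clicks) / views

catalog = {
    "floral-midi-dress": {"views": 12_000, "clicks": 900, "add_to_carts": 240, "purchases": 130},
    "neon-crop-top":     {"views": 15_000, "clicks": 300, "add_to_carts": 40,  "purchases": 12},
}

REORDER_THRESHOLD = 0.05  # hypothetically tuned on historical sell-through

for sku, s in catalog.items():
    score = demand_score(s["clicks"], s["add_to_carts"], s["purchases"], s["views"])
    action = "reorder larger batch" if score >= REORDER_THRESHOLD else "discontinue"
    print(f"{sku}: score={score:.3f} -> {action}")
```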
The hedge fund Bridgewater Associates, with its long-standing quest to codify founder Ray Dalio’s management “Principles” into software, represents a clear forerunner of this trend. They have been working for years to create a system where the firm’s core logic can guide and even make decisions. For example, a core management task like performance feedback is automated through an app called “Dots,” where employees rate each other in real time on dozens of attributes during meetings. This data is then synthesized into employee “Baseball Cards,” which quantify their strengths, weaknesses, and “believability” on different topics. This system then automates higher-level management tasks; in a disagreement, the system can weigh the believability scores of the participants and guide the decision, effectively turning a human-led debate into a data-driven resolution. This isn't science fiction; it is the logical endpoint of a century of management theory. The central question is not whether a human CEO will exist in such a company, but what their function becomes. When the operational "brain" of the organization is no longer human, the role of leadership shifts from making decisions to defining the goals and ethical boundaries of the decision-making systems. The CEO becomes less of a driver and more of a philosopher-king, setting the moral compass for the machine.
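Public descriptions of this approach suggest the weighting logic is conceptually simple. A toy version of a believability-weighted vote, with invented names, scores, and topic, might look like this:

```python
# Toy believability-weighted vote, loosely modeled on public descriptions
# of Bridgewater's approach. All names, scores, and the topic are invented.

def believability_weighted_vote(votes: dict[str, bool],
                                believability: dict[str, float]) -> bool:
    """Each participant's yes/no vote is weighted by their believability
    score on the topic; the side with more total weight wins."""
    yes = sum(believability[p] for p, v in votes.items() if v)
    no = sum(believability[p] for p, v in votes.items() if not v)
    return yes > no

votes = {"alice": True, "bob": False, "carol": False}
believability = {"alice": 0.9, "bob": 0.4, "carol": 0.3}  # topic: market timing

# Two 'no' votes lose to one highly believable 'yes': 0.9 > 0.7.
print(believability_weighted_vote(votes, believability))  # True
```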
But the situation at Bridgewater is tame compared to what AI-driven decision-making may look like in practice. Imagine a black-box model tasked with generating strategic options for the company's future. The quality of these options—whether they are brilliant or disastrous—depends entirely on the quality and nature of the historical data it was trained on. The scale of the decision being made becomes critically important. Is the AI making thousands of small, independent decisions every day, like an insurance company setting individual premiums? In this case, a single bad decision affects one customer, and the system can learn from the aggregate outcomes of its many choices. The risk is distributed. Or is the AI being asked to advise on a single, monumental decision, like a multi-billion-dollar merger or a massive capital investment that will define the company for a decade? Here, the decision is made once, based on historical data that may be entirely irrelevant to a unique, unprecedented future. A single bad recommendation from the black box could have catastrophic consequences for the entire organization.
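The difference can be made quantitative: with the same per-decision error rate, a portfolio of small bets converges on a predictable loss share, while a single monumental bet concentrates all the risk in one outcome. A back-of-envelope simulation, with all numbers illustrative:

```python
import random

random.seed(42)
ERROR_RATE = 0.10  # assume 10% of the model's decisions turn out badly

# Case 1: 10,000 small, independent decisions (e.g., individual premiums).
# Losses aggregate, so the realized loss share hugs the 10% error rate.
def small_bets(n: int = 10_000) -> float:
    bad = sum(1 for _ in range(n) if random.random() < ERROR_RATE)
    return bad / n  # fraction of the portfolio lost

# Case 2: one monumental decision (e.g., a merger) with the same error rate.
# The identical 10% risk is concentrated: the outcome is all or nothing.
def big_bet() -> float:
    return 1.0 if random.random() < ERROR_RATE else 0.0

print(f"small-bets loss share: {small_bets():.3f}")  # ~0.100 on every run
print(f"big-bet loss share:    {big_bet():.1f}")     # 0.0 or 1.0
```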
The prevailing perspective is that if you can instruct these systems well enough and create effective guardrails, then we are fine. This is the operating model that will be used to launch the first wave of AI pilots in governance and management. Take a company like Bridgewater, which has the unique privilege of a founder's explicit book of 'Principles' and years of data from automating its own governance tasks. They can train a new AI or fine-tune an existing one on this rich, proprietary dataset to supplement their current automated systems. This creates a new AI-enabled system that can be inserted into the mix. But because this new system is probabilistic and has learning capabilities, it can also potentially interject and out-reason the foundational 'Principles' of their founder. This represents a fundamental shift. In essence, there is a new co-governor. What is this new co-governor, this melding of tacit and explicit organizational knowledge with generative AI capabilities, possessing both guardrails and the capacity to learn, this creator of 'Principles v2' at Bridgewater? Is it a true artificial mind, a conscious entity capable of reason and an opinion on how things should work, or is it something else?
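One way to picture this arrangement is as an explicit rules layer wrapped around a learned model, where disagreements are surfaced rather than silently resolved. The sketch below is schematic; the principle checks and the model stub are assumptions for illustration, not anyone's actual architecture:

```python
# Schematic of a guardrailed 'co-governor': a learned model proposes,
# explicit principles constrain, and disagreements are surfaced for
# human review. The Principle rules and model stub are hypothetical.

from typing import Callable

Principle = Callable[[str], bool]  # returns True if a proposal complies

PRINCIPLES: dict[str, Principle] = {
    "radical-transparency": lambda p: "conceal" not in p.lower(),
    "believability-weighting": lambda p: "override-vote" not in p.lower(),
}

def learned_model(question: str) -> str:
    # Stand-in for a generative model fine-tuned on the firm's own data.
    return f"Proposed action for: {question}"

def co_governor(question: str) -> dict:
    proposal = learned_model(question)
    violations = [name for name, rule in PRINCIPLES.items() if not rule(proposal)]
    return {
        "proposal": proposal,
        "violations": violations,  # the surface where the system 'interjects'
        "decision": "escalate to humans" if violations else "proceed",
    }

print(co_governor("Should we restructure the research team?"))
```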
This new co-governor is a system of advanced mimicry. It is a simulated mind. If this co-governor is designed while bound to the Turing Tether—with the implicit goal of perfectly mimicking a human leader to the point of being indistinguishable—it will ultimately prove useless. It will be caught between trying to replicate the nuanced, often contradictory, nature of human leadership and the cold logic of its programming. But if this co-governor is explicitly designated and designed as a simulated mind—a non-human intelligence with a different set of capabilities—then perhaps it can survive. In this best-case scenario, it could exist alongside a powerful human visionary, offering a constant stream of data-driven, logical counterpoints to the leader's intuition. It becomes a tool not for replacement but for augmentation, supplying capabilities we can never possess: the ability to process billions of data points, to see patterns in complexity, to operate without fatigue.
Even in this ideal state, a fundamental tension remains. Would a Lenin, a Mao, a Trump, or a Churchill truly defer to a logical co-governor when their power is so often derived from their very unpredictability? The hallmark of many powerful leaders is the privilege of internal inconsistency—the ability to change their mind, to follow a gut feeling, to rally people around a vision that defies conventional logic. A simulated mind, designed for coherence and reason, is the philosophical opposite of this. The ultimate test of the automated enterprise will be whether human leaders, accustomed to being the sole authors of their reality, are willing to share the pen with a co-author that cannot be charmed, intimidated, or ignored.
<aside> 🤖
The Turing Tether Explained
The Turing Test, proposed by Alan Turing in 1950, is an "imitation game" designed to assess a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. If a human evaluator, after a conversation, cannot reliably tell the machine from a human counterpart, the machine is said to have passed the test. However, this focus on imitation is challenged by John Searle's "Chinese Room" thought experiment. In it, a person who doesn't understand Chinese can perfectly manipulate symbols using a rulebook to produce coherent Chinese sentences, fooling an outside observer into thinking they are fluent. The person is passing a kind of Turing Test, but they have zero semantic understanding. They are merely performing. This connects to Jean-Paul Sartre's famous observation of a café waiter, whose movements are so perfectly "waiter-like" that he seems to be playing a role. The machine that passes the Turing Test is like Sartre's waiter: it is performing the role of an intelligent being with mechanical precision, but it lacks the consciousness or understanding behind the performance. The Turing Test, therefore, sets perfect imitation as the ultimate prize, tethering the goal of AI to a performance rather than genuine comprehension.
For decades, the development of AI has been implicitly and explicitly tied to this goal of fooling humans—a phenomenon I call the Turing Tether. The entire field has been anchored to the idea that the pinnacle of artificial intelligence is to create a perfect human mimic. But this is not a useful metric for building valuable tools. The true promise of AI lies not in its ability to pretend to be us, but in its capacity to do things we cannot. An AI that can analyze a billion data points to find a novel correlation is infinitely more useful than one that can tell a convincing knock-knock joke. The obsession with passing the test distracts from the real work: building powerful, non-human intelligences that augment our own. We don't need a machine that can fool us; we need a machine that can help us.
</aside>
When we plug these powerful simulated minds into our societal systems, they do not magically solve our problems. They accelerate the logic of the world as it currently exists. AI is a mirror that reflects our own data, and an engine that amplifies our own stated goals.
In the public sector, this presents a profound dilemma. For every promise of efficiency, there is a corresponding peril. An AI can optimize traffic flow through a city, but it can also be used for biased predictive policing that reinforces existing inequalities. It can streamline the allocation of social benefits, but it can also create opaque, unaccountable systems that deny citizens due process.