<aside>
I spy is a guessing game where one player (the spy or it) chooses an object within sight and announces to the other players that "I spy with my little eye something beginning with...", naming the first letter of the object. Other players attempt to guess this object. It is often played as a car game. - Wikipedia
</aside>
At first glance, defining artificial intelligence may seem trivial. The term is used thousands of times a day, billions of dollars are invested in its promise, and people are even reconsidering their retirement plans based on its anticipated impact on the economy. Surely, this means AI is well-defined and universally understood. Unfortunately, that is not quite the case.
Some people—let’s call them Enders—view artificial intelligence as an end-state, the culmination of technological progress. They often see intelligence as an inherently human trait, arguing that lines of code or language models do not constitute intelligence in the same way human cognition does.
Others—let’s call them Growers—consider existing AI technologies as legitimate forms of artificial intelligence, though still evolving and improving. Growers are more likely to distinguish Artificial Intelligence from Artificial General Intelligence (AGI), framing AGI as the true "end-state" yet to be achieved.
This ongoing debate is not new. If someone woke up from a 70-year slumber—assuming they survived the shock of modern smartphones—they might recall a time when expert systems were commonly referred to as artificial intelligence. This individual may also remember being inundated with stories of the promise of artificial intelligence, without realizing that a few years after they went into their coma, investment in AI also hibernated during the period known as the first AI winter (from 1967 to 1977). This historical context lends weight to a cheeky retort from the Enders: that the Growers are simply moving the goalposts, redefining AI as needed to fit current advancements. “Was it AI then, or is it AI now?” “Large Language Models are essentially a very sophisticated form of auto-complete,” reads the blurb of a recent article on Big Think. From that standpoint, current technologies, as advanced as they appear, are only part of the eventual contraption that will be true AI.
<aside>
Expert Systems are the ‘AI of the 1960s and 70s’. They operated by connecting information stored in a database (which was a new idea at the time) with a set of ‘if-then’ rules used for inference or decision making. They primarily used pattern matching and flow charts to arrive at conclusions which, when done well, resembled or even exceeded what a human expert would conclude. Expert Systems predominantly relied on ‘symbolic knowledge’, getting computers to apply explicit logic to problem solving.
See more: https://en.wikipedia.org/wiki/Expert_system
</aside>
While this argument between Enders and Growers rages, companies are busy figuring out how to create the replacement for the modern smartphone: something that talks back to you constantly, like a digital roommate. Someone should tell our friend waking up from the coma not to sign up for a five-year payment plan on their first smartphone. You see, both the Enders and the Growers have valid points, but the world cannot and will not wait for a perfect definition. Technology moves forward regardless, shaped less by philosophical debates and more by market forces, user expectations, and the relentless pursuit of the next big thing.
However, to properly discuss the role that AI plays in business strategy, it is worth absorbing some of this debate between Enders and Growers without being weighed down by it. At a minimum, it helps to recognize what the ‘state of the art’ in artificial intelligence represents today, if for no other reason than to inoculate yourself against the AI-washing that is going on.
The first property by which AI can be defined is that it requires computing power to perform, making it inherently a computational achievement. Any system that uses some form of computation to accomplish tasks that were previously done manually is therefore a candidate for being called artificial intelligence. However, the fact that computation is being used to solve a task does not necessarily mean the task involves artificial intelligence; simply relying on computation doesn’t automatically make a system intelligent in the strict sense. What sets AI apart is its ability to handle complexity—intricate, multifaceted tasks that go beyond simple, rule-based computations. Traditional computational systems are rule-based: they follow predefined steps and provide solutions based on fixed instruction sets. AI systems, by contrast, are designed to process vast amounts of data, recognize patterns, and handle uncertainty in ways that are far more flexible and adaptive.
However, rule-based systems can handle complex tasks within a well-defined domain. They can be highly effective at solving specific problems where the rules and relationships are clearly understood and can be encoded by a programmer as a set of "if-then" statements. In fact, rule-based systems were quite successful in areas like medical diagnosis, legal advice, and equipment troubleshooting.
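To make that contrast concrete, here is a minimal sketch of the kind of hand-written ‘if-then’ reasoning a rule-based system performs. It is written in Python purely for illustration, and the symptoms, rules, and conclusions are invented rather than drawn from any real system.

```python
# A toy rule-based "expert system": every rule is written by hand,
# and the program can only reach conclusions its author anticipated.

RULES = [
    # (set of required findings, conclusion)
    ({"fever", "cough", "fatigue"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"fever", "stiff neck"}, "refer to a physician immediately"),
]

def diagnose(findings):
    """Fire every rule whose conditions are all present in the findings."""
    findings = set(findings)
    conclusions = [conclusion for required, conclusion in RULES
                   if required <= findings]
    return conclusions or ["no rule matched"]

print(diagnose(["fever", "cough", "fatigue"]))  # ['possible flu']
print(diagnose(["headache"]))                   # ['no rule matched']
```

Every behavior here was anticipated and typed in by its author; the program will never reach a conclusion that was not encoded in advance.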
The real distinction between AI and traditional computational systems therefore emerges when learning is introduced. AI systems are not just performing a fixed set of operations; they are capable of learning from data, improving over time, and algorithmically adapting their behavior based on ‘experience’. This learning ability allows AI to tackle problems that are not easily solvable with static programming: novel, dynamic challenges such as tasks that require understanding context, managing multiple variables, and making decisions with incomplete or ambiguous information.
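By contrast, here is an equally minimal sketch, again in Python and with made-up data, of a system whose behavior is not written out rule by rule but adjusted from labeled examples. It is a toy perceptron, one of the simplest learning algorithms, and the feature names are hypothetical:

```python
# A toy learning system: behavior is adjusted from labeled examples
# ("experience") rather than written out as hand-coded rules.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a simple linear classifier from (features, label) pairs."""
    n_features = len(examples[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred                      # 0 if correct, +1/-1 if wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical data: [time_on_site, pages_viewed] (scaled 0-1) -> made a purchase?
data = [([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.7, 0.9], 1), ([0.3, 0.2], 0)]
w, b = train_perceptron(data)
print(predict(w, b, [0.8, 0.7]))  # 1: the behavior was learned, not hand-coded
```

Feed it different examples and the same code learns a different behavior; that adaptation, not the mere use of computation, is what the distinction above turns on.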

Putting these core ideas together, AI refers to a computation-based technology that can handle complexity effectively because it is designed to learn from the data it encounters.
But defining AI isn't just about what the machine can do; it's also about how our relationship with the machine is changing. The very nature of Human-Computer Interaction (HCI) is evolving from a one-way command system to a two-way dialogue. As we will explore in our final chapter, this evolution is accelerating dramatically, ushering in an age of Human-Computer-AI Interaction (HCAII) that will fundamentally reshape our world.
Before we go further into this chapter, I want to take a step back and talk about the value of manual simplicity and the idea of models.
I imagine a thought experiment called the Xamber hypothesis. From the natural vantage point where you find yourself, you observe a line of people going into a tunnel, and at the far end of the tunnel, which is also visible from where you are standing, you see a number of people coming out. Let us assume you watch long enough to realize that some of the people entering the tunnel emerge on the other side after a few moments, while others appear not to emerge at all.

Most people, observing something like this, would immediately start to think about what their observation means. In other words, they start to build an explanation of what is going on in the scene they are observing. They may grow more curious about the input into the tunnel (the people going in), the output of the tunnel (the people coming out), or they may focus on the tunnel itself (the point of transformation in the system).

Assuming that something is actually going on inside the tunnel, what we have captured with this thought experiment is the notion of a model. It exists both in the real world, as a transformational effect (something goes in as phone calls made and comes out as sales), and inside the minds of the observers of the scene. In other words, as I try to figure out what is happening to the inputs in the tunnel that leads to the outputs, I am building a ‘mental model’ of the tunnel model: I am building a model of the Xamber (Xamber is pronounced chamber).
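For readers who prefer to see the idea in code, here is one possible sketch of ‘building a model of the Xamber’, assuming (purely for illustration) that what goes into the tunnel is phone calls made and what comes out is sales; the numbers are invented:

```python
# A minimal sketch of "building a model of the Xamber": we only see what
# goes into the tunnel (phone calls made) and what comes out (sales),
# and we infer a simple transformation that could connect the two.

observations = [  # (calls_in, sales_out) seen from our vantage point
    (100, 9), (250, 26), (80, 7), (400, 41),
]

# The simplest "mental model" of the tunnel: a single conversion rate.
total_in = sum(calls for calls, _ in observations)
total_out = sum(sales for _, sales in observations)
conversion_rate = total_out / total_in

def tunnel_model(calls):
    """Predict what comes out of the tunnel for a given input."""
    return conversion_rate * calls

print(round(conversion_rate, 3))   # 0.1: our guess at what the tunnel does
print(round(tunnel_model(300)))    # 30 predicted sales for 300 calls
```

The point is not the arithmetic but the act: from observed inputs and outputs we construct a simplified stand-in for whatever the tunnel actually does.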