The Evolution of Game AI
Paul Tozour, Ion Storm Austin
gehn29@yahoo.com
The field of game artificial intelligence (AI) has existed since the dawn of video
games in the 1970s. Its origins were humble, and the public perception of game
AI is deeply colored by the simplistic games of the 1970s and 1980s. Even today,
game AI is haunted by the ghosts of Pac-Man: Inky, Pinky, Blinky, and Clyde. Until
very recently, the video game industry itself has done all too little to change this
perception.
However, a revolution has been brewing. The past few years have witnessed game
AIs vastly richer and more entertaining than the simplistic AIs of the past. As 3D ren-
dering hardware improves and the skyrocketing quality of game graphics rapidly
approaches the point of diminishing returns, AI has increasingly become one of the crit-
ical factors in a game's success, deciding which games become bestsellers and determin-
ing the fate of more than a few game studios. In recent years, game AI has been quietly
transformed from the redheaded stepchild of gaming to the shining star of the industry.
The game AI revolution is at hand.
A Little Bit of History
At the dawn of gaming, AIs were designed primarily for coin-operated arcade games,
and were carefully designed to ensure that the player kept feeding quarters into the
machine. Seminal games such as Pong, Pac-Man, Space Invaders, Donkey Kong, and
Joust used a handful of very simple rules and scripted sequences of actions combined
with some random decision-making to make their behavior less predictable.
Chess has long been a mainstay of academic AI research, so it's no surprise that
chess games such as Chessmaster 2000 [Softool86] featured very impressive AI oppo-
nents. These approaches were invariably based on game tree search [Svarovsky00].
Strategy games were among the earliest pioneers in game AI. This isn't surprising,
as strategy games can't get very far on graphics alone and require good AI to even be
playable. Strategy game AI is particularly challenging, as it requires sophisticated unit-
level AI as well as extraordinarily complex tactical and strategic computer player AI. A
number of turn-based strategy games, most notably MicroProse's Civilization [Micro-
Prose91] and Civilization 2, come to mind as early standouts, despite their use of
cheating to assist the computer player at higher difficulty settings.
Even more impressive is the quality of many recent real-time strategy game AIs.
Warcraft II [Blizzard95] featured one of the first highly competent and entertaining
"RTS" AIs, and Age of Empires II: The Age of Kings [Ensemble99] features the most
challenging RTS AI opponents to date. Such excellent RTS AIs are particularly
impressive in light of the difficult real-time performance requirements that an RTS AI
faces, such as the need to perform pathfinding for potentially hundreds of units at a
time.
In the first-person shooter field, Valve Software's Half-Life [Valve98] has received
high praise for its excellent tactical AI. The bots of Epic Games' Unreal Tournament
[Epic99] are well known for their scalability and tactical excellence. Looking Glass
Studios' Thief: The Dark Project [LGS98], the seminal "first-person sneaker," stands
out for its careful modeling of AIs' sensory capabilities and its use of graduated alert
levels to give the player important feedback about the internal state of the AIs. Sierra
Studios' SWAT 3: Close Quarters Battle [Sierra99] did a remarkable job of demon-
strating humanlike animation and interaction, and took great advantage of random-
ized AI behavior parameters to ensure that the game is different each time you play it.
"Sim" games, such as the venerable SimCity [Maxis89], were the first to prove the
potential of artificial life ("A-Life") approaches. The Sims [Maxis00] is particularly
worth noting for the depth of personality of its AI agents. This spectacularly popular
game beautifully demonstrates the potential of fuzzy-state machines (FuSMs) and A-
Life technologies.
Another early contender in the A-Life category was the Creatures series, originat-
ing with Creatures in 1996 [CyberLife96]. Creatures goes to great lengths to simulate
the psychology and physiology of the "Norns" that populate the game, including
"Digital DNA" that is unique to each creature.
"God games" such as the seminal hits Populous [Bullfrog89] and Dungeon Keeper
[Bullfrog97] combined aspects of sim games and A-Life approaches with real-time
strategy elements. Their evolution is apparent in the ultimate god game, Lionhead
Studios' recent Black & White [Lionhead01]. Black & White features what is undoubt-
edly one of the most impressive game AIs to date-some of which is described in this
book. Although it's certainly not the first game to use machine learning technologies,
it's undoubtedly the most successful use of AI learning approaches yet seen in a com-
puter game.
It's important to note that Black & White was very carefully designed in a way
that allows the AI to shine-the game is entirely built around the concept of teaching
and training your "creature." This core paradigm effectively focuses the player's atten-
tion on the AI's development in a way that's impossible in most other games.
Behind the Revolution
A key factor in the success of recent game AI has been the simple fact that developers
are finally taking it seriously. Far too often, AI has been a last-minute rush job, imple-
mented in the final two or three months of development by overcaffeinated program-
mers with dark circles under their eyes and thousands of other high-priority tasks to
complete.
Hardware constraints have also been a big roadblock to game AI. Graphics ren-
dering has traditionally been a huge CPU hog, leaving little time or memory for the
AI. Some AI problems, such as pathfinding, can't be solved without significant
processor resources. Console games in particular have had a difficult time with AI,
given the painfully tight memory and performance requirements of console hardware
until recent years.
A number of the early failures and inadequacies of game AI also arose from an
insufficient appreciation of the nature of game AI on the part of the development
team itself-what is sometimes referred to as a "magic bullet" attitude. This usually
manifests itself in the form of an underappreciation of the challenges of AI develop-
ment-"we'll just use a scripting language"-or an inadequate understanding of how
to apply AI techniques to the task at hand-"we'll just use a big neural network."
Recent years have witnessed the rise of the dedicated AI programmer, solely
devoted to AI from Day One of the project. This has, by and large, been a smashing
success. In many cases, even programmers with no previous AI experience have been
able to produce high-quality game AI. AI development doesn't necessarily require a
lot of arcane knowledge or blazing insights into the nature of human cognition. Quite
often, all it takes is a down-to-earth attitude, a little creativity, and enough time to do
the job right.
Mainstream AI
The field of academic artificial intelligence consists of an enormous variety of differ-
ent fields and subdisciplines, many of them starkly ideologically opposed to one
another. To avoid any of the potentially negative connotations that some readers
might associate with the term "academic," we will refer to this field as mainstream AI.
We cannot hope to understand game AI without also understanding something
of the much broader field of artificial intelligence. A reasonable history of the evolu-
tion of the AI field is outside the scope of this article; nevertheless, this part of the
book enumerates a handful of mainstream AI techniques-specifically, those that we
consider most relevant to present and future game AI. See [AI95] for an introduction
to nearly all of these techniques.
Expert systems attempt to capture and exploit the knowledge of a human expert
within a given domain. An expert system represents the expert's expertise within
a knowledge base, and performs automated reasoning on the knowledge base in
response to a query. Such a system can produce similar answers to those that the
human expert would have provided.
Case-based reasoning techniques attempt to analyze a set of inputs by compar-
ing them to a database of known, possibly historical, sets of inputs and the most
advisable outputs in those situations. The approach was inspired by the human
tendency to apprehend novel situations by comparing them to the most similar
situations one has experienced in the past.
Finite-state machines are simple, rule-based systems in which a finite number of
"states" are connected in a directed graph by "transitions" between states. The
finite-state machine occupies exactly one state at any moment.
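As a concrete illustration, here is a minimal sketch of such a machine for a hypothetical guard agent; the states, events, and transition table are all invented for this example:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("patrol", "saw_player"):  "chase",
    ("chase",  "lost_player"): "search",
    ("chase",  "low_health"):  "flee",
    ("search", "saw_player"):  "chase",
    ("search", "gave_up"):     "patrol",
    ("flee",   "healed"):      "patrol",
}

class GuardFSM:
    def __init__(self):
        self.state = "patrol"    # the machine occupies exactly one state

    def handle(self, event):
        # Follow the matching transition; undefined events leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)

guard = GuardFSM()
guard.handle("saw_player")       # patrol -> chase
guard.handle("lost_player")      # chase -> search
```

The transition table is the directed graph: each key is an edge's source state and label, each value its destination.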
Production systems are composed of a database of rules. Each rule consists
of an arbitrarily complex conditional statement, plus some number of actions
that should be performed if the conditional statement is satisfied. Production
rule systems are essentially lists of "if-then" statements, with various conflict res-
olution mechanisms available in the event that more than one rule is satisfied
simultaneously.
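A minimal sketch of such a system, using priority as the conflict-resolution mechanism; the facts, rules, and priorities here are invented for illustration:

```python
# Production rules as (priority, condition, action). When several rules'
# conditions are satisfied at once, conflict resolution picks the highest priority.
rules = [
    (3, lambda f: f["health"] < 20,                     "retreat"),
    (2, lambda f: f["enemy_visible"] and f["has_ammo"], "attack"),
    (1, lambda f: f["enemy_visible"],                   "take_cover"),
    (0, lambda f: True,                                 "patrol"),    # default rule
]

def decide(facts):
    matched = [(p, action) for p, cond, action in rules if cond(facts)]
    return max(matched)[1]    # conflict resolution: highest priority wins
```

With visible enemy, full health, and ammo, three rules fire simultaneously and the priority-2 "attack" rule wins.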
Decision trees are similar to complex conditionals in "if-then" statements. DTs
make a decision based on a set of inputs by starting at the root of the tree and, at
each node, selecting a child node based on the value of one input. Algorithms
such as ID3 and C4.5 can automatically construct decision trees from sample
data.
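The following sketch shows a tiny hand-built tree of this kind; in practice an algorithm such as ID3 would induce it from sample data, and the attributes here are invented:

```python
# Internal nodes are (attribute, {value: child}); leaves are decision strings.
tree = ("enemy_visible",
        {True:  ("health_low",
                 {True:  "flee",
                  False: "attack"}),
         False: "patrol"})

def decide(node, inputs):
    if isinstance(node, str):          # leaf: return the final decision
        return node
    attribute, children = node
    # Select the child matching this input's value and recurse.
    return decide(children[inputs[attribute]], inputs)
```

Evaluation walks from the root, testing one input per node, until it reaches a leaf.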
Search methods are concerned with discovering a sequence of actions or states
within a graph that satisfy some goal-either reaching a specified "goal state" or
simply maximizing some value based on the reachable states.
Planning systems and scheduling systems are an extension of search methods
that emphasize the subproblem of finding the best (simplest) sequence of actions
that one can perform to achieve a particular result over time, given an initial state
of the world and a precise definition of the consequences of each possible action.
First-order logic extends propositional logic with several additional features to
allow it to reason about an AI agent within an environment. The world consists
of "objects" with individual identities and "properties" that distinguish them
from other objects, and various "relations" that hold between those objects and
properties.
The situation calculus employs first-order logic to calculate how an AI agent
should act in a given situation. The situation calculus uses automated reasoning
to determine the course of action that will produce the most desirable changes to
the world state.
Multi-agent systems approaches focus on how intelligent behavior can naturally
arise as an emergent property of the interaction between multiple competing and
cooperating agents.
Artificial life (or A-Life) refers to multi-agent systems that attempt to apply
some of the universal properties of living systems to AI agents in virtual worlds.
Flocking is a subcategory of A-Life that focuses on techniques for coordinated
movement such that AI agents maneuver in remarkably lifelike herds and flocks.
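A rough one-dimensional sketch of the three classic flocking rules (separation, alignment, cohesion); all weights and radii are invented, and real implementations use 2D or 3D vectors:

```python
def flock_step(agents, dt=0.1):
    """One update of Reynolds-style flocking, in one dimension for brevity.

    agents: list of [position, velocity]; all coefficients are illustrative."""
    updated = []
    for pos, vel in agents:
        others = [(p, v) for p, v in agents if (p, v) != (pos, vel)]
        center = sum(p for p, _ in others) / len(others)
        avg_vel = sum(v for _, v in others) / len(others)
        steer = 0.05 * (center - pos)        # cohesion: drift toward the group
        steer += 0.10 * (avg_vel - vel)      # alignment: match neighbors' velocity
        for p, _ in others:                  # separation: back away when crowded
            if abs(p - pos) < 1.0:
                steer -= 0.20 * (p - pos)
        updated.append([pos + vel * dt, vel + steer])
    return updated

agents = [[0.0, 1.0], [5.0, -1.0], [10.0, 0.0]]
agents = flock_step(agents)
```

Each agent reacts only to its neighbors, yet the group as a whole drifts together: emergent coordinated movement from purely local rules.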
Robotics deals with the problem of allowing machines to function interactively in
the real world. Robotics is one of the oldest, best-known, and most successful fields
of artificial intelligence, and has recently begun to undergo a renaissance because of
the explosion of available computing power. Robotics is generally divided into sep-
arate tasks of "control systems" (output) and "sensory systems" (input).
Genetic algorithms and genetic programming are undoubtedly some of the
most fascinating fields of AI (and it's great fun to bring them up whenever you
find yourself in an argument with creationists). These techniques attempt to imi-
tate the process of evolution directly, performing selection and interbreeding with
randomized crossover and mutation operations on populations of programs,
algorithms, or sets of parameters. Genetic algorithms and genetic programming
have achieved some truly remarkable results in recent years [Koza99], beautifully
disproving the ubiquitous public misconception that a computer "can only do
what we program it to do."
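As an illustration of the selection/crossover/mutation loop, here is a toy genetic algorithm that evolves a bit string toward an arbitrary target; the fitness function simply counts matching bits, and every parameter is invented:

```python
import random

random.seed(42)                       # fixed seed for reproducibility
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(genome):
    # Fitness: how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=60, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # selection: keep the fittest half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mutation_rate)  # mutation: flip bits
                     for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Bit-matching has no local optima, so the population converges quickly; a real fitness function (playing well, or being fun to play against) is the hard part.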
Neural networks are a class of machine learning techniques based on the archi-
tecture of neural interconnections in animal brains and nervous systems. Neural
networks operate by repeatedly adjusting the internal numeric parameters (or
weights) between interconnected components of the network, allowing them to
learn an optimal or near-optimal response for a wide variety of different classes of
learning tasks.
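The weight-adjustment loop can be illustrated with a single artificial neuron learning the AND function via the classic perceptron rule; this is a deliberately minimal example, not a full network:

```python
# Training data: inputs and target outputs for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1    # learning rate

for _ in range(25):                      # repeated passes over the training data
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out               # the error drives the weight adjustment
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

preds = [1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0 for (x1, x2), _ in samples]
```

Each wrong answer nudges the weights toward the target; for linearly separable data like AND, the rule is guaranteed to converge.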
Fuzzy logic uses real-valued numbers to represent degrees of membership in a
number of sets-as opposed to the Boolean (true or false) values of traditional
logic. Fuzzy logic techniques allow for more expressive reasoning and are capable
of much more richness and subtlety than traditional logic.
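A small sketch of fuzzy reasoning for a hypothetical combat agent; the min/max operators for fuzzy AND/OR follow common convention, but the membership functions, rules, and every number here are invented:

```python
def close(dist):
    # Membership in "close": 1.0 at distance 0, falling linearly to 0.0 by 20 units.
    return max(0.0, min(1.0, (20.0 - dist) / 20.0))

def far(dist):
    return 1.0 - close(dist)

def aggression(dist, health):
    healthy = health / 100.0
    fire = min(close(dist), healthy)        # fuzzy AND: take the minimum
    hold = max(far(dist), 1.0 - healthy)    # fuzzy OR: take the maximum
    # Defuzzify the two rule strengths into one aggression level in [0, 1].
    return fire / (fire + hold + 1e-9)

print(aggression(dist=5.0, health=90))     # nearby and healthy: aggressive
print(aggression(dist=18.0, health=40))    # distant and hurt: cautious
```

Instead of a hard close/far threshold, the agent's aggression varies smoothly with distance and health, which is exactly the expressive richness the technique buys.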
Belief networks, and the specific subfield of Bayesian inference, provide tools
for modeling the underlying causal relationships between different phenomena,
and use probability theory to deal with uncertainty and incomplete knowledge of
the world. They also provide tools for making inferences about the state of the
world and determining the likely effects of various possible actions.
Game AIs have taken advantage of nearly all of these techniques at one point or
another, with varying degrees of success.
Ironically, it is the simplest techniques-finite-state machines, decision trees, and
production rule systems-that have most often proven their worth. Faced with tight
schedules and minimal resources, the game AI community has eagerly embraced
rules-based systems as the easiest type of AI to create, understand, and debug.
Expert systems share some common ground with game AI in the sense that many
game AIs attempt to play the game as an expert human player would. Although a
game AI's knowledge base is usually not represented as formally as that of an expert
system, the end result is the same: an imitation of an expert player's style.
Many board game AIs, such as chess and backgammon, have used game trees and
game tree search with enormous success. Backgammon AIs now compete at the level
of the best human players [Snowie01]. Chess AI famously proved its prowess with the
bitter defeat of chess grandmaster Garry Kasparov by a massive supercomputer named
"Deep Blue" [IBM97]. Other games, such as Go, have not yet reached the level of
human masters, but are quickly narrowing the gap [Go01].
Unfortunately, the complexities of modern video game environments and game
mechanics make it impossible to use the brute-force game tree approach used by systems
such as Deep Blue. Other search techniques are commonly used for game AI navigation
and pathfinding, however. The A* search algorithm in particular deserves special men-
tion as the reigning king of AI pathfinding in every game genre (see [Stout00],
[Rabin00] for an excellent introduction to A*, as well as [Matthews02] in this book).
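For reference, here is a compact A* sketch on a 4-connected grid with a Manhattan-distance heuristic; it is an illustrative implementation, not taken from the cited articles:

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid; grid cells: 0 = open, 1 = wall."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)           # lowest f = g + h first
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_set, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None                                              # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The heuristic steers the search toward the goal while never overestimating the remaining cost, which is what guarantees an optimal path.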
Game AI also shares an enormous amount in common with robotics. The signif-
icant challenges that robots face in attempting to perceive and comprehend the "real
world" are dramatically different from the easily accessible virtual worlds that game AIs
inhabit, so the sensory side of robotics is not terribly applicable to game AI. However,
the control-side techniques are very useful for game AI agents that need to intelli-
gently maneuver around the environment and interact with the player, the game
world, and other AI agents. Game AI agent pathfinding and navigation share an enor-
mous amount in common with the navigation problems faced by mobile robots.
Artificial life techniques, multi-agent systems approaches, and flocking have all
found a welcome home in game AI. Games such as The Sims and SimCity have indis-
putably proven the usefulness and entertainment value of A-Life techniques, and a
number of successful titles use flocking techniques for some of their movement AI.
Planning techniques have also met with some success. The planning systems
developed in mainstream AI are designed for far more complex planning problems
than the situations that game AI agents face, but this will undoubtedly change as
modern game designs continue to evolve to ever higher levels of sophistication.
Fuzzy logic has proven a popular technique in many game AI circles. However,
formal first-order logic and the situation calculus have yet to find wide acceptance in
games. This is most likely due to the difficulty of using the situation calculus in the
performance-constrained environments of real-time games and the challenges of ade-
quately representing a game world in the language of logical formalisms.
Belief networks are not yet commonly used in games. However, they are particu-
larly well suited to a surprising number of game AI subproblems [Tozour02].
The Problem of Machine Learning
In light of this enormous body of academic research, it's understandable that the game
AI field sometimes seems to have a bit of an inferiority complex. Nowhere is this more
true than with regard to machine learning techniques.
It's beginning to sound like a worn record from year to year, but once again,
game developers at GDC 2001 described their game AI as not being in the
same province as academic technologies such as neural networks and ge-
netic algorithms. Game developers continue to use simple rules-based fi-
nite- and fuzzy-state machines for nearly all their AI needs [Woodcock01].
There are very good reasons for this apparent intransigence. Machine learning
approaches have had a decidedly mixed history in game AI. Many of the early
attempts at machine learning resulted in unplayable games or AIs that learned poorly,
if at all. For all its potential benefits, learning, particularly when applied inappropri-
ately, can be a disaster. There are several reasons for this:
Machine learning (ML) systems can learn the wrong lessons. If the AI learns from
the human player's play style, an incompetent player can easily miseducate the
AI.
ML techniques can be difficult to tune and tweak to achieve the desired results.
Learning systems require a "fitness function" to judge their success and tell them
how well they have learned. Creating a challenging and competent computer
player is one thing, but how do you program a fitness function for "fun?"
Some machine learning technologies-neural networks in particular-are
heinously difficult to modify, test, or debug.
Finally, there are many genres where in-game learning just doesn't make much
sense. In most action games and hack-and-slash role-playing games, the AI oppo-
nents seldom live long enough to look you in the eye, much less learn anything.
Besides these issues, it's fair to say that a large part of the problem has sprung
from the industry's own failure to apply learning approaches correctly. Developers
have often attempted to use learning to develop an AI to a basic level of competence
that it could have attained more quickly and easily with traditional rules-based AI
approaches. This is counterproductive. Machine learning approaches are most useful
and appropriate in situations where the AI entities actually need to learn something.
Recent games such as Black & White prove beyond any doubt that learning
approaches are useful and entertaining and can add significant value to a game's AI.
Learning approaches are powerful tools and can shine when used in the right context.
The key, as Black & White aptly demonstrates, is to use learning as one carefully con-
sidered component within a multilayered system that uses a number of other tech-
niques for the many AI subtasks that don't benefit from machine learning.
We Come in Pursuit of Fun
Much of the disparity between game AI and mainstream AI stems from a difference in
goals. Academic AI pursues extraordinarily difficult problems such as imitating
human cognition and understanding natural language. But game AI is all about fun.
At the end of the day, we are still a business. You have a customer who paid $40
for your game, and he or she expects to be entertained. In many game genres-action
games in particular-it's surprisingly easy to develop an AI that will consistently
trounce the player without breaking a sweat. This is "intelligent" from that AI's per-
spective, but it's not what our customers bought the game for. Deep Blue beat Kas-
parov in chess, but it never tried to entertain its opponent-an assertion Kasparov
himself can surely confirm.
In a sense, it's a shame that game AI is called "AI" at all. The term does as much
to obscure the nature of our endeavors as it does to illuminate it. If it weren't too late
to change it, a better name might be "agent design" or "behavioral modeling." The
word intelligence is so fraught with ambiguity that it might be better to avoid it
completely.
The misapprehension of intelligence has caused innumerable problems through-
out the evolution of game AI. Our field requires us to design agents that produce
appropriate behaviors in a given context, but the adaptability of humanlike "intelli-
gence" is not always necessary to produce the appropriate behaviors, nor is it always
desirable.
Intelligence Is Context-Dependent
The term "IQ" illustrates the problem beautifully. The human brain is an extraordi-
nary, massively interconnected network of both specialized and general-purpose cog-
nitive tools evolved over billions of years to advance the human species in a vast
number of different and challenging environments. To slap a single number on this
grand and sublime artifact of evolution can only blind us to its actual character.
The notion of IQ is eerily similar to the peculiar Western notion of the "Great
Chain of Being" [Descartes1641]. Rather than viewing life as a complex and multi-
faceted evolutionary phylogeny in which different organisms evolve within different
environments featuring different selective pressures, the "Great Chain of Being" col-
lapses all this into a linear ranking. All living organisms are sorted into a universal
pecking order according to their degree of "perfection," with God at the end of the
chain.
An article on IQ in a recent issue of Psychology Today [PT01] caught my eye:
In 1986, a colleague and I published a study of men who frequented the
racetracks daily. Some were excellent handicappers, while others were not.
What distinguished experts from nonexperts was the use of a complex
mental algorithm that converted racing data taken from the racing pro-
grams sold at the track. The use of the algorithm was unrelated to the
men's IQ scores, however. Some experts were dockworkers with IQs in the
low 80s, but they reasoned far more complexly at the track than all non-
experts-even those with IQs in the upper 120s.
In fact, experts were always better at reasoning complexly than nonex-
perts, regardless of their IQ scores. But the same experts who could reason
so well at the track were often abysmal at reasoning outside the track-
about, say, their retirement pensions or their social relationships.
This quote gives us a good perspective on what game AI is all about. What we
need is not a generalized "intelligence," but context-dependent expertise.
Game AI is a vast panorama of specialized subproblems, including pathfinding,
steering, flocking, unit deployment, tactical analysis, strategic planning, resource allo-
cation, weapon handling, target selection, group coordination, simulated perception,
situation analysis, spatial reasoning, and context-dependent animation, to name a few.