A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.
Decision Trees: AI In Most RTS Games
First, the way that AI systems in most games work is via giant decision trees (if A, then do X; if B, then do Y; and so on). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example, from pretty much every RTS game since 1998, is how they pathfind around walls: if you leave a small gap in your wall, the AI will almost always try to go through that hole. Human players learn to mass their units at that choke point, "tricking" the AI into funneling wave after wave through a hole that is actually a trap, dying every time.
Not only does that rules-based decision tree approach take forever to program, but it's also exploitable in many ways beyond the wall example above. Yet, to emulate how a human player might play, that sort of approach is generally needed. I started out using a decision tree, but soon realized that this was boring even at the basic conceptual level: if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender's Game. An AI that felt fresh and intelligent, but played with scary differences from how a human ever could, since our brains have different strengths and weaknesses than a CPU. There are countless examples of this in fiction and film, but not many in games.
The approach that I settled on, which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience: people from the future won't be coming back to kill me to prevent the coming AI apocalypse. The practical benefit is that I can get much more intelligent results with much less code and fewer agents.
There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn’t even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic — the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.
Here’s the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It’s kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that’s actually emergent sub-commander behavior that was never explicitly programmed. There’s nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.
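The core trick behind that emergence can be sketched in just a few lines. Here is a minimal, purely illustrative Python sketch (AI War is not written in Python, and the class and field names here are my own inventions, not the game's actual code): each unit scores targets for itself, but discounts targets that many allies have already committed to, so a group naturally splits across objectives with no explicit sub-commander orders.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    value: float        # how desirable the target is
    threat: float       # how dangerous it is to approach
    committed: int = 0  # how many friendly units have already chosen it

def pick_target(targets, crowd_penalty=0.5):
    """One unit's decision: take the best target for *me*, but treat
    crowded targets as less attractive. Run over a whole group, this
    spreads the units across several objectives."""
    best = max(targets, key=lambda t: t.value - t.threat
                                      - crowd_penalty * t.committed)
    best.committed += 1  # later units see this and may go elsewhere
    return best

targets = [Target("command station", 10, 4), Target("turret line", 6, 2)]
assignments = [pick_target(targets).name for _ in range(8)]
# The eight units split themselves between both targets.
```

Nothing in this sketch says "form two groups"; the split is a side effect of each unit reasoning about crowding, which is exactly the flavor of emergence described above.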
And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven't been any ways to trick the AI that I'm aware of since the alpha releases, though. The AI runs on a separate thread on the host computer only, which lets it do some really heavy data crunching (using LINQ, actually; my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which carried over to the AI here). Taking lots of variables into account means that it can make highly intelligent decisions without causing any lag on your average dual-core host.
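To make the threading model concrete (with the caveat that AI War's real implementation is C# with LINQ, and everything below is a rough Python analogue I have invented for illustration): the simulation posts read-only snapshots of game state to an AI worker thread, which crunches them at its own pace and posts decisions back, so the game loop never stalls on AI thinking.

```python
import queue
import threading

state_q, decision_q = queue.Queue(), queue.Queue()

def ai_worker():
    """Runs off the main thread: the heavy filtering and sorting
    (the LINQ-style queries) happen here, so the simulation and
    renderer never block on AI decision-making."""
    while True:
        ships = state_q.get()
        if ships is None:  # shutdown sentinel
            break
        # A LINQ-ish "where / order by / take" expressed in Python:
        threats = sorted((s for s in ships if s["hostile"]),
                         key=lambda s: -s["firepower"])[:10]
        decision_q.put([s["id"] for s in threats])

threading.Thread(target=ai_worker, daemon=True).start()
state_q.put([{"id": 1, "hostile": True,  "firepower": 9},
             {"id": 2, "hostile": False, "firepower": 99},
             {"id": 3, "hostile": True,  "firepower": 30}])
top_threat_ids = decision_q.get()  # the worker's decision: [3, 1]
state_q.put(None)                  # tell the worker to shut down
```

In the real game the main loop would poll the decision queue rather than block on it, but the separation of concerns is the same: snapshots in, decisions out.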
Fuzzy logic / randomization is another key component of our AI. A big part of making an unpredictable AI system is making it so that it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the "best" choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, then to counter it you would only need to figure out the best decision yourself (or create a false weakness in your forces, such as the hole-in-the-wall example), and then you could predict what the AI will do with a high degree of accuracy, approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I'd say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do… and usually it's way less predictable than even that.
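One common way to implement that idea, sketched here in Python, is to treat any option scoring within some slack of the best as a candidate, then pick among the candidates at random. The function name and the slack scheme are my own assumptions for illustration, not AI War's actual code, and this simple version assumes positive scores.

```python
import random

def fuzzy_best(options, score, slack=0.8, rng=random):
    """Pick a *good* option rather than always the best one.
    Any option scoring within `slack` of the top score is a
    candidate; one candidate is chosen at random. An opponent can
    no longer predict the AI by computing the single optimal move."""
    top = max(score(o) for o in options)
    candidates = [o for o in options if score(o) >= slack * top]
    return rng.choice(candidates)

moves = {"flank": 9.0, "rush": 8.0, "feint": 5.0}
rng = random.Random(0)  # seeded only to make this demo repeatable
picks = {fuzzy_best(list(moves), moves.get, rng=rng) for _ in range(200)}
# Both "flank" and "rush" get chosen over many decisions;
# "feint" scores too far below the best and never does.
```

Tightening `slack` toward 1.0 makes the AI sharper and more predictable; loosening it makes it sloppier and harder to read, which is also one natural knob for difficulty levels.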
Bear in mind that the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like real opponents, but much less fearsome ones. Figuring out ways to tone down the AI for the lower difficulties was actually one of the big challenges for me. Partly it boiled down to withholding the best tactics from the lower-level AIs, but I also had to seed some intentionally-less-than-ideal assumptions into their decision making at those levels.
Skipping The Economic Simulation
Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical and most of the strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. It's the same as if all the bad guys in every level of a Mario Bros. game attacked you at once: you'd die immediately (there would be nowhere to jump to). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you'd have no hope.
Think about your average FPS that simulates your involvement in military operations: the enemies are not all aware of what you and your allies are doing at any given moment, so even if they have overwhelming numbers against you, you can still win through limited engagements, striking key targets, and so on. I think the same is true of real wars in many cases, but it's not something you see in the skirmish modes of other RTS games.
This is a big topic that I'll touch on more deeply in a future article in this series, as it's likely to be the most controversial design decision I've made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for doing it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI uses the exact same rules that constrain the human players, and that's where the intelligence of our system really shines.
In AI War, to offer procedural campaigns that give a certain David-vs-Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI's economy works on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased by player actions. This lets the players somewhat set the pace of game advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It's a very asymmetrical system that you couldn't have in a PvP-style skirmish game with the AI acting as a human stand-in, but it works beautifully in a co-op-style game where the AI is always the enemy.
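A toy model makes the asymmetric economy easier to picture. This Python sketch is my own simplification (the numbers, field names, and formulas are invented for illustration, not taken from AI War): the AI accrues reinforcement points each tick at a rate scaled by the AI Progress number, and spends the accumulated points whenever a wave countdown expires, so aggressive player actions directly raise the pressure the AI can apply.

```python
from dataclasses import dataclass

@dataclass
class AIEconomy:
    progress: int = 10          # the visible AI Progress indicator
    points: float = 0.0         # internal reinforcement points
    wave_countdown: int = 300   # ticks until the next attack wave

    def on_player_action(self, progress_cost):
        # Capturing planets, destroying key structures, etc. raises
        # Progress: the players themselves set the pace of escalation.
        self.progress += progress_cost

    def tick(self):
        self.points += 0.1 * self.progress  # income scales with Progress
        self.wave_countdown -= 1
        if self.wave_countdown <= 0:
            wave_size, self.points = int(self.points), 0.0
            self.wave_countdown = 300
            return wave_size  # ships the AI may spend on this wave
        return 0

econ = AIEconomy()
econ.on_player_action(5)  # e.g. the players captured a planet
wave = 0
for _ in range(300):
    wave = max(wave, econ.tick())
# After 300 ticks at Progress 15, a 450-ship wave is launched.
```

No mining, no build queues: the AI's "economy" is just a pressure dial that the players' own choices turn up or down, which is the asymmetry the article is describing.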
This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don’t hesitate to ask! I’m not shy about talking about the inner workings of the AI system here, since this is something I’d really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I’d like to see stronger AI across the board.
AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable “perfect” decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the “tempo.”