Month: June 2009

AI War 1.008 Released (Free DLC & Player Suggestions)

Arcen Games is pleased to announce the release of AI War: Fleet
Command version 1.008. You can download a trial version
of the game, as well as purchase
a license key
to unlock the full version. If you already have the
game or demo installed, just hit “Check For Updates” inside the game to
get the latest patch.  

More Free DLC: The primary upgrade in this release is a set of
improvements to the AI and to how players can interact with it.
 Planets on the galaxy map now show up as red (as seen below left) when
the AI is likely to reinforce at them.  This helps the human players
avoid tipping their hand too soon and causing key AI planets to
reinforce too heavily.

Additionally, AI players will now reinforce their wormholes much like
the human players tend to do.  This makes deep raiding much more
difficult and interesting, and requires many new tactics.  In response
to this, the human scouts are now much upgraded — they all now include
Cloaking, and most also now have a Cloaking Booster ability (as seen
below right) that makes them even more effective in groups.

 

In addition to the above, this release also contains another large batch
of improvements, tweaks, and extensions based on player suggestions.
 Here are some of the highlights:

– Capturable ships are now only captured after a command station
has finished being built.

– Parasite costs have increased.

– Waypoints can now be queued in conjunction with wormhole-movement
commands.

– Ships can now move between planets in attack-move or free-roaming
defender modes.

– Repair Stations and Mark II Engineers have had a range upgrade.

– Cloaked ships will no longer auto-fire at enemy ships unless in
attack-move/free-roaming mode.  This helps them maintain invisibility on
longer raids.

– New Ship: Wormhole Command Posts are perma-cloaked ships used by
the AIs to reinforce their wormholes. Kill the command station to kill
these.

– Scouts and EyeBots have been heavily upgraded to deal with the
newer wormhole threats; they now all have cloaking, and most have a new
Cloaking Booster ability.

– Control group icons are now included in the planetary summary.

– Mark II Science Labs and Missile Silos are now less expensive.

– The range of all tachyon beam emitters is now higher.

– The way the AI reinforces is now tech-level dependent (higher
techs get fewer reinforcements).

– Lightning Missiles have been made a bit more durable.

– A new “AI Alert Level” is now shown on the galaxy map intel
summaries, showing at which planets the AI is likely to do
reinforcements.

– Fixed several fairly rare bugs, including a multiplayer desync
relating to attacking forcefields.

– Fixed Astro Trains just hanging out at their stations.

– Fixed a bug with AI players getting new ship types at too fast a
rate.

– Fixed a 1.007 bug where too many Advanced Research Stations and
Data Centers were being created on maps 80 planets and up.

The above list is just a sampling, however, so be sure to check out the
full release notes to see everything (attached at the end of this post).
 More free DLC will be heading your way next week. Enjoy!

AI War Soundtrack Now Available Through iTunes!

The soundtrack from AI War has been widely praised, and with good
reason. If you can’t get enough of the music while playing the game, or
you love the music but RTS games just aren’t your thing, you’re in
luck: iTunes is now carrying the original soundtrack (OST) for the
game!

 

In the iTunes store, just do a search for either “Pablo Vega” or “AI
War: Fleet Command.” All of the proceeds from iTunes sales go directly
to the composer. He did an awesome job, above and beyond the call of
duty for any contractor, and we’re really looking forward to bringing
him on board as a fulltime staff member if AI War sales continue to be
as great as they recently have been. In the meantime, we’ve left the
rights for OST sales with him so that he can promote his incredible work
beyond just the gaming community — we’re thrilled to see it now
appearing on iTunes.

AI War 1.007 Released (Free DLC & Player Suggestions)

Arcen Games is pleased to announce the release of AI War: Fleet
Command version 1.007. You can download a trial version
of the game, as well as purchase
a license key
to unlock the full version. If you already have the
game or demo installed, just hit “Check For Updates” inside the game to
get the latest patch.  

More Free DLC: A new transport ship (shown below) can now be
unlocked from the DEF tab of your command stations.  This transport is
hugely useful for ferrying your weaker ships past heavy enemy defenses
(across minefields, past ion cannons, etc).  Transports can also be used
for ferrying teleporting ships between planets (very useful, since
teleporting ships were previously locked to their current planet).

In addition to the new ships, this release also contains another large
batch of improvements, tweaks, and extensions based on player
suggestions.  Here are some of the highlights:

– Music streaming has been improved, which speeds up loading
savegames and the game itself.

– Suggested paths between planets are now much better.

– Several AI enhancements.

– Harvester ExoShields have been made more useful.

– Fighters have been rebalanced, making them more useful.

– Several new graph options are now available.

– Overzoom is now an option.

– Drawing hundreds or thousands of ship attack ranges is now much
more efficient.

– Multiple constructors can now be managed at once (space docks,
missile silos, etc).

– It is now possible to issue commands to ships still being built
(which they will execute after completion).

– Galaxy maps as small as 10 planets are now supported.

– A new Auto AI Progress option has been added, allowing game pacing
to be greatly adjusted.

– Performance in very active games with more than 80,000 ships has
been greatly improved.

– Trial players can now play with full retail players for the trial
duration (1 hour).

– Waypoints now work better with attack-move mode.

The above list is just a sampling, however, so be sure to check out the
full release notes to see everything (attached at the end of this post).
 More free DLC will be heading your way next week. Enjoy!

Bytten And Out Of Eight Reviews of AI War

Two new reviews of AI War have now come out. Be sure to click the links
to read the full
reviews, if you’re interested! We’ve also now set up a
Press &
Player Reactions
page to
aggregate reviews and player reactions all in one place.

I can’t recommend it any more highly to strategy fans and fans of
space games in general.
Innovative, absorbing and most importantly a hell of a lot of fun to
play, AI War is the best
indie game I’ve got my hands on so far in 2009.


Bytten Review

 

Challenging and multifaceted AI that increases in difficulty as
you become more powerful,
extremely large battles with lots of varied units that promote
strategic variety, ample
interface designed to handle a large scale.


Out Of Eight PC Game Review

How to Stream an Ogg File Using DirectSound in SlimDX

With a nudge in the right direction from the SlimDX devs, I’ve finally got a working streaming solution for DirectSound in SlimDX. This will be a part of the 1.007 release of AI War, improving performance in the game, but I also wanted to share the code for this since it isn’t out there anywhere else.

First of all, let me note that this entire solution is really just an adaptation of an MDX DirectSound streaming solution by Dr. Gary. Secondly, it uses the wonderful Ogg Vorbis Decoder (a C# wrapper for the standard ogg dlls) by Atachiants Roman. And, lastly, of course this uses the SlimDX library.

Here’s the code, including a compiled test executable. Do with it what you will. If you want a deep explanation of what is going on there, you want to read Dr Gary’s article above. This is basically his work, after all, and he explains it very nicely.

Here are the major things I added/changed:

1. SlimDX is used instead of MDX, which obviously necessitated a lot of changes.

2. Rather than use Dr Gary’s wav stream class, I’m using the ogg stream class. This required even more changes, especially since the ogg calls are apparently not threadsafe (Dr. Gary was using his wav stream object from multiple threads, which the ogg library did not like one bit).

3. The way I am pulling data off of the ogg stream is a bit different from how the original code was pulling data off of the wav stream. In the original, the stream was seeking every time, to prevent data skips and loss. In my version, I’m using a loop to make sure that this isn’t necessary, and thus there’s a minor performance gain.

4. I took out support for looping. This code was originally being adapted for AI War, which doesn’t need looping in the same sense. If you want to loop the file, just do like I’m doing in the sample and start playing it again after it completes. My goal was to keep this simple.

5. There are a variety of other minor changes, just switching it to be more in line with the AI War codebase. This was for my own purposes, but since I wanted to make the code public these changes are also seen here.
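The read loop mentioned in point 3 can be sketched like this (a hypothetical simplification, with a plain MemoryStream standing in for the ogg decoder stream; decoder streams commonly return short reads, which is why the loop is needed):

```csharp
using System;
using System.IO;

// Keep reading until the chunk is full or the stream ends, rather than
// seeking between reads -- a short read just causes another loop pass.
class FillLoopDemo
{
    public static int FillChunk(Stream source, byte[] chunk)
    {
        int bytesRead = 0;
        while (bytesRead < chunk.Length)
        {
            int read = source.Read(chunk, bytesRead, chunk.Length - bytesRead);
            if (read <= 0)
                break; // end of stream
            bytesRead += read;
        }
        return bytesRead;
    }

    static void Main()
    {
        var source = new MemoryStream(new byte[10000]);
        var chunk = new byte[4096];
        Console.WriteLine(FillChunk(source, chunk)); // 4096: a full chunk
        FillChunk(source, chunk);
        Console.WriteLine(FillChunk(source, chunk)); // 1808: the final partial chunk
    }
}
```

In the real streaming code, each filled chunk would then be written into the DirectSound secondary buffer from the playback thread.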

That’s it! Happy coding.

AI War 1.006 Released (Free DLC & Player Suggestions)

Arcen Games is pleased to announce the release of AI War: Fleet
Command version 1.006. You can download a trial version
of the game, as well as purchase
a license key
to unlock the full version. If you already have the
game or demo installed, just hit “Check For Updates” inside the game to
get the latest patch.  

Note:  some people were having trouble with the 1.005 installer,
but that should be resolved with this release without the need for any
external workarounds.  Thanks to everyone affected for their patience
with that issue.

 

More Free DLC: Four new ships are now available for your use:
Engineer II, and three new kinds of Orbital Command Stations: Mark II,
Mark III, and Warp Jammer.  These four new ship types greatly expand the
economic options for players who want to customize their civilizations
in that direction.  The higher-level command stations provide extra
resources, while the Warp Jammer command station provides a way to
prevent all enemy warps into a few select systems (but at a steep
ongoing resource cost).

In addition to the new ships, this release also contains dozens more
improvements, tweaks, and extensions based on player suggestions. This
week’s offering exceeds even last week’s, we think.  Thanks again to
everyone who took the time to offer feedback — and as a reminder for
everyone whose suggestion made it onto the official to-be-done-in-DLC
list, but didn’t make it into this release, you should be seeing your
ideas in play over the coming weeks.

Here are some of the highlights:

– New graphs in the scores display show player progress on a number
of metrics over time.

– The online mini strategy guide has become a more comprehensive
wiki.

– Various improvements to the score screen display.

– Tooltips for starting ship types in the lobby, and other minor
lobby enhancements.

– Even more galaxy map display modes, filters, and control options.

– Holding the I key now shows the Hit Percent and Damage for all
selected ships.

– Savegame performance improved 5x for large games (60,000+ ships)
in particular.

– AI Progress now uses a scale 10x higher than it was previously,
and Astro Train Station and Special Forces Command Post destruction now
has a minor increase to AI Progress.

– Several AI behaviorlets have been added for making it more
intelligent in a few specific cases.

– Several improvements have been made to make the game more friendly
to Turn-Based Strategy (TBS) players.

– Dozens more smaller improvements, tweaks, and extensions suggested
by players.  See full release notes for details.

The above list is just a sampling, however, so be sure to check out the
full release notes to see everything (attached at the end of this post).
 More free DLC will be heading your way next week. Enjoy!

Designing Emergent AI, Part 3: Limitations

The first part of this article series was basically an introduction to our AI design, and the second part of this article series took a look at some of the LINQ code used in the game, as well as discussing danger levels and clarifying a few points from the first article. The second article was basically just for programmers and AI enthusiasts, as is this third one. The topic, this time, is the manner in which this approach is limited.

This Approach Is Limited?
Well, of course. I don’t think that there is any one AI approach that can be used to the exclusion of all others. And our AI designs are already heavily hybridized (as the first article in the series mentions), so it’s not like I’m using any single approach with AI War, anyway. I used the techniques and approaches that were relevant for the game I was making, and in some respects (such as pathfinding), I blended in highly traditional approaches.

The purpose of this article is to discuss the limits of the new approaches I’ve proposed (the emergent aspects, and the LINQ aspects), and thereby expose the ways in which these techniques can be blended with other techniques in games beyond just AI War. Let’s get started:

Where Emergence Quickly Breaks Down
To get emergent behavior, you need to have a sufficient number of agents. And to keep them from doing truly crazy stuff, you need to have a relatively simple set of bounds for their emergence (in the case of AI War, those bounds are choosing what to attack or where to run away to. Basically: tactics). Here are some examples of places that I think it would be massively ineffective to try for emergent behavior:

1. A competitive puzzle game like Tetris. There’s only one agent (the cursor).

2. Economic simulations in RTS games. That’s all too interrelated, since any one decision has potentially massive effects on the rest of the economy (i.e., build one building and the opportunity cost is pretty high, so some sort of ongoing coordination would be very much needed).

3. Any sort of game that requires historical knowledge in order to predict player actions. For example, card games that involve bluffing would not fare well without a lot of historical knowledge being analyzed.

4. Any game that is turn-based, and which has a limited number of moves per turn, would probably not do well with this because of the huge opportunity cost. Civilization IV could probably use emergent techniques despite the fact that it is turn-based, but Chess or Go are right out.

Why Emergence Works For AI War
1. There are a lot of units, so that gives opportunities for compound thinking based on intelligence in each unit.

2. Though the decision space is very bounded as noted above (what to attack, or where to retreat to), that same decision space is also very complex. Just look at all of the decision points in the LINQ logic in the prior article. There are often fifty ways any given attack could succeed to some degree, and a thousand ways it could fail. This sort of subtlety lets the AI fuzzify its choices between the nearly-best options, and the result is an effective unpredictability to the tactics that feels relatively human.

3. There is a very low opportunity cost per decision made. The AI can make a hundred decisions in a second for a group of ships, and then as soon as those orders are carried out it can make a whole new set of decisions if the situation has changed unfavorably.

4. Unpredictability is valuable above almost all else in the tactics, which works well with emergence. As long as the AI doesn’t do something really stupid, just having it choose effectively randomly between the available pretty-good choices results in surprise tactics and apparent strategies that are really just happenstance. If there were fewer paths to success (as in a racing game, for example), the boundedness of the decisions would be too tight to allow any real opportunities for desirable emergent behavior.
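One simple way to get that kind of bounded randomness (a hypothetical sketch, not AI War’s actual code) is to rank the candidates and then pick uniformly among everything close to the top score:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FuzzyChoice
{
    // Pick randomly among all candidates within `tolerance` of the best
    // score: never a truly bad choice, but not always the same "perfect" one.
    public static T PickNearBest<T>(IList<T> candidates, Func<T, double> score,
                                    double tolerance, Random rand)
    {
        double best = candidates.Max(score);
        var nearBest = candidates.Where(c => score(c) >= best - tolerance).ToList();
        return nearBest[rand.Next(nearBest.Count)];
    }

    static void Main()
    {
        var scores = new[] { 10.0, 9.5, 9.8, 4.0, 2.0 };
        var rand = new Random();
        for (int i = 0; i < 5; i++)
            Console.WriteLine(PickNearBest(scores, s => s, 0.6, rand)); // only ever 10, 9.8, or 9.5
    }
}
```

The `score` function here would be the stand-in for a full tiered evaluation; the key point is that the randomness is applied only after the clearly-bad options are filtered out.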

Learning Over Time
One intentional design choice that I made with AI War was to keep it focused on the present. I remember seeing Grandmaster Chess players at Chess tournaments, where they would play against 50 regular ranked players at once, walking around the room and analyzing each position in turn. On most boards they could glance at the setup and make their move almost instantly. On a few they would pause a bit more. They would still typically win every game, just choosing whoever put up the best fight to cede a victory to (that person typically got a prize).

I figured it would be nice to have an AI with basically those capabilities. It would look at the board at present, disregarding anything it learned in the past (in fact, it would be just a subject matter expert, not a learning AI in any capacity), and then it would make one of the better choices based on what it saw. This is particularly interesting when players do something unexpected, because then the AI does something equally unexpected in response (responding instantly to the new, unusual “board state”).

I feel like this works quite well for AI War, but it’s not something that will evolve over time without the players learning and changing (thus forcing the AI to react to the different situations), or without my making extensions and tweaks to the AI. This was the sort of system I had consciously designed, but in a game with fully symmetrical rules for the AI and the humans, this approach would probably be somewhat limited.

The better solution, in those cases, would be to combine emergent decision making with data that is collected and aggregated/evaluated over time. That collected data becomes just more decision points in the central LINQ queries for each agent. Of course, this requires a lot more storage space, and more processing power, but the benefits would probably be immense if someone is able to pull it off with properly bounded evaluations (so that the AI does not “learn” something really stupid and counterproductive).

Isn’t That LINQ Query Just A Decision Tree?
One complaint that a few AI programmers have had is that the LINQ queries that I’ve shared in the second article aren’t really that different from a traditional decision tree. And to that, I reply: you’re absolutely right, that aspect is not really too different. The main advantage is an increase in readability (assuming you know LINQ, complex statements are much more efficiently expressed).

The other advantage, however, is that soft “preferences” can easily be expressed. Rather than having a huge and branching set of IF/ELSE statements, you have the ORDER BY clause in LINQ which makes the tree evaluation a lot more flexible. In other words, if you have just this:

ORDER BY comparison 1,
comparison 2,
comparison 3

You would be able to have a case where you have 1 as true, 2 as false, and 3 as true just as easily as all three being true, or having false, true, false, or any other combination. I can’t think of a way to get that sort of flexibility in traditional code without a lot of duplication (the same checks in multiple branches of the tree), or the use of the dreaded and hard-to-read gotos.
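In actual C# LINQ method syntax, a tiered ordering like that might look as follows (the target properties here are invented for illustration; each lower tier only matters when the tiers above it compare equal):

```csharp
using System;
using System.Linq;

class Target
{
    public bool InRange;
    public int Danger;
    public string Name;
}

class OrderByDemo
{
    static void Main()
    {
        var targets = new[]
        {
            new Target { InRange = false, Danger = 1, Name = "A" },
            new Target { InRange = true,  Danger = 5, Name = "B" },
            new Target { InRange = true,  Danger = 2, Name = "C" },
        };

        // comparison 1, then comparison 2, then comparison 3 -- any
        // true/false combination per target is handled automatically.
        var ordered = targets
            .OrderByDescending(t => t.InRange) // prefer targets in range
            .ThenBy(t => t.Danger)             // then the least dangerous
            .ThenBy(t => t.Name)               // stable final tiebreaker
            .ToList();

        Console.WriteLine(ordered[0].Name); // C: in range and lower danger
    }
}
```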

So, while the idea of the LINQ query is basically the same as the decision tree in concept, in practice it is not only more readable, but it can be vastly more effective depending on how complex your tree would otherwise be. You don’t even have to use LINQ — any sorting algorithm of sufficient complexity could do the same sort of thing. The novelty of the approach is not LINQ itself, but rather using a tiered sorting algorithm in place of a decision tree. You could also express the above LINQ code in C# as:

someCollection.Sort( delegate( SomeType o1, SomeType o2 )
{
    int val = o1.property1.CompareTo( o2.property1 );
    if ( val == 0 )
        val = o1.property2.CompareTo( o2.property2 );
    if ( val == 0 )
        val = o1.property3.CompareTo( o2.property3 );
    return val;
} );

In fact, throughout the code of AI War, statements of that very sort are quite common. These are more efficient to execute than the equivalent LINQ statement, so on the main thread where performance is key, this is the sort of approach I use. In alpha versions I had it directly in LINQ, which was excellent for prototyping the approaches, but then I converted everything except the AI thread into these sorts instead of LINQ, purely for performance reasons. If you’re working in another language or don’t know SQL-style code, you could easily start and end with this kind of sorting.

Combining Approaches
In AI War, I use basically the following approaches:

1. Traditional pathfinding
2. Simple decision-engine-style logic for central strategic decisions.
3. Per-unit decision-engine-style logic for tactics.
4. Fuzzy logic (in the form of minor randomization for decisions) throughout to reduce predictability and encourage emergence given #3.
5. Soft-preferences-style code (in the form of LINQ and sorts) instead of hard-rules-style IF/ELSE decision tree logic.

If I were to write an AI for a traditional symmetrical RTS, I’d also want to combine in the following (which are not needed, and thus not present, in AI War):

6. A decision tree for economic management (using the sorting/LINQ approach).
7. A learning-over-time component to support AI at a level for really great human players in a symmetrical system.

If I were to write an AI for a FPS or racing game, I’d probably take an approach very similar to what those genres already do. I can’t think of a better way to do AI in those genres than they already are in the better games in each genre. You could potentially combine in emergence with larger groups of enemies in war-style games, and that could have some very interesting results, but for an individual AI player I can’t think of much different to do.

For a puzzle game, or a turn-based affair like Chess, there again I would use basically the current approaches from other games, which are deep analysis of future moves and their consequences and results, and thus selection of the best one (with some heuristics thrown in to speed processing, perhaps).
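As a toy illustration of that kind of exhaustive look-ahead (this is the standard technique from those genres, not anything from AI War), here is a minimal minimax for a take-1-to-3-sticks game, where taking the last stick wins:

```csharp
using System;

class MinimaxDemo
{
    // Returns true if the player to move can force a win with `sticks`
    // remaining, by exhaustively analyzing every future move.
    public static bool CanForceWin(int sticks)
    {
        if (sticks == 0)
            return false; // the previous player took the last stick and won
        for (int take = 1; take <= Math.Min(3, sticks); take++)
            if (!CanForceWin(sticks - take))
                return true; // some move leaves the opponent in a losing spot
        return false;
    }

    static void Main()
    {
        Console.WriteLine(CanForceWin(21)); // True: 21 is not a multiple of 4
        Console.WriteLine(CanForceWin(20)); // False: multiples of 4 are losing
    }
}
```

Real implementations for Chess and the like add depth limits, evaluation heuristics, and pruning on top of this same recursive skeleton.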

Platformers or adventure games could also use mild bits of swarm behavior to interesting effect, but I think a lot of players (myself included) really do like the convention of heavily rules-based enemies in those genres. These games don’t really tend to feature AI that is meant to mimic humans, but rather AI that is meant to just follow simple rules that the human players can learn and remember.

The sort-style decision tree might be easier to code and might bring results slightly more efficiently in all of the above cases, but the end result for the player wouldn’t be much different. Still, making AI code easier to program and read is a good thing for everyone (lower costs/time to develop, fewer bugs based on poor readability, etc).

Next Time
Part 4 of this series talks about the asymmetry in this AI design.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable “perfect” decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the “tempo.”

Choosing a DirectX Platform In C#

Apparently my game AI War: Fleet Command is the first selling game to use the SlimDX development framework. Does it reflect poorly on me that I didn’t know that until today, a month after my game was actually released? Well, that knowledge probably wouldn’t have scared me off from using their library, anyway — the proof is in the pudding, and I liked their pudding the best out of all the ones I tried.

At the time I remember seeing a “projects that use SlimDX” page, and I even looked at a few of those, but I had no idea that they were all freeware, hobbyist, or yet to be released. Color me surprised. In this article, I’m going to talk a bit about why I wound up choosing SlimDX, my experiences with it, and what other frameworks I used before it.

Early Alpha: Managed DirectX (MDX)
When I first started prototyping AI War, I did so in MDX simply because that’s what my other games in the past have been coded in. However, as is widely known, MDX simply isn’t supported anymore. There are some bugs and glitches in there, a few things that don’t perform as well as they should, and a lot of the latest stuff (and anything beyond DX9) is not well supported to my knowledge. Overall I was pretty happy with MDX, though, and for my purposes it worked well enough to use it for the first four or so months of work on AI War.

After a certain point, however, I was having performance issues that I suspected were due to MDX. Also, it was coming up to where I needed to start adding music to the game, and I didn’t want to just use DirectShow, which I had used in the past and found very limiting. Since I knew MDX was out of date, I figured it would be a good time to start looking for a more robust, modern platform.

Evaluating the XNA Framework
The first place that I turned was to the XNA framework, since it was Microsoft’s spiritual successor to MDX. I actually converted my game Alden Ridge to be XNA-based for a while (never did try with AI War), and in general the conversion was pretty painless and quick. Some things are done differently, but overall the transition was smooth and the XNA way of naming things is a little more sensible sometimes. I skipped a lot of their higher-level Game classes and such, instead just embedding the XNA content in a panel in my window, same as I had been doing with MDX previously.

Very quickly I found out that having Shader Model 2.0 as a minimum hardware requirement for Alden Ridge was not going to be a good thing. It meant that my and my wife’s laptops, both IBM Thinkpads (R42 and T40, respectively), could no longer run the game even though they had been able to run it fine with MDX.

That was a dealbreaker for me with XNA, but the other big issue was how resources, particularly audio, were handled in the 2.0 version of XNA (they may have changed this since, I am not certain). Basically I would have to run all of my audio through a special conversion tool that they provided, and then it would sit on the disk in an uncompressed format, taking way more room than nice little ogg or MP3 files do. This would, in effect, have made Alden Ridge an 800MB download for users instead of a 150MB download — a dealbreaker on its own, too.

In the future I have some interest in porting some of my games to the XBox, and XNA is the obvious way to go with that. Conversion from MDX to XNA was easy given that I had kept most of the library-touching code pretty isolated, so that is something that I can approach when the time comes without having to build in XNA support all along.

Other Libraries
I looked at a number of other libraries as well, such as the Tao Framework and others based on Mono and OpenGL. Those were interesting, and in the future I might want to work with them to get some better cross-platform support, but converting all my code to Mono and OpenGL instead of some form of DirectX is going to be a lot more time consuming. I also had worries about what performance might be like on two totally new platforms, so I didn’t want to sink days into this and then still find out it was unworkable for some reason or another.

Switching To SlimDX
I found SlimDX basically through google searches, and once I had exhausted all of the more official paths to modern DirectX in C# I decided to give them a try. To be clear, I wasn’t wary because of their status as “hobbyists” or because I was worried the project would die, I was wary simply because it was small, new, and had relatively few tutorials. Basically all the things they say on their home page. All of those points are simply The Way Things Are(tm), and are not at all a reason not to choose SlimDX — it’s quite a good product (as I’ll explain below).

Conversion to their library from MDX was crazy easy, even more so than when I converted to XNA, and the first thing I noticed was how much smaller their installer and runtime dlls were than the MDX equivalents. And how many fewer of them there were (everything is in one convenient SlimDX.dll, rather than being spread out through nearly a dozen MDX dlls).

When I got everything working in SlimDX, there was a noticeable performance jump from MDX. Often on the order of 10% or so, and occasionally even more than that (30% during some key spikes). This was on the August 2008 version of SlimDX, which apparently had some performance issues that are not present in the newer March 2009 version. Those issues were primarily with matrix math, which I use a fair bit of for all the transforms in AI War, but I saw little difference between the two versions of SlimDX. March 2009 was perhaps a bit faster, but it was not nearly so notable as the original jump from MDX had been.

The conversion to SlimDX took part of one afternoon, and then suddenly things were running faster, so I decided to stay with the library given that I knew it was also under active maintenance (unlike MDX). And it works just fine on older graphics cards, same as MDX does.

The only problems that I really encountered with SlimDX were:

1) At first, they did not support Force Feedback, which is an issue for Alden Ridge. They have added support for that in the latest June 2009 release, but I’ve been so tied up with post-release work for AI War that I have not had a chance to try it yet.

2) The performance of the Line class in SlimDX was terrible, but it was still at least on par with MDX so I suspect this was an issue with the underlying DirectX code and not SlimDX itself. A simple solution was to just have a 1px square sprite that I transform and scale into drawing very nice lines (angled or straight). That’s a fair bit of matrix transformation when there are a lot of lines on the screen, and SlimDX handled those with no issue whatsoever.

3) When I was originally trying to set up music in the game, I ran into a lot of trouble trying to choose between XAudio2 and DirectSound for playback. I had already settled on using an excellent C#-based ogg vorbis decoder by Atachiants Roman (which I also highly recommend as the best solution for decoding ogg files in C#), but I was having some issues with both tools. With some advice from the SlimDX team, I managed to cobble together a home-grown streaming approach using DirectSound, but I know it is not the ideal way to handle music streaming. I will probably update that in a future release of AI War, but for now it is reliable and works well enough; the main issue is that it wastes around 70MB of RAM that could be put to some other use. I’m told that the SlimDX team is planning to later add a sample that shows how to properly handle streaming in DirectSound in SlimDX, and that will be the trigger for me to update AI War in that capacity.
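The 1px-sprite line trick from point 2 comes down to a couple of transform parameters: scale the sprite’s x axis to the line’s length (and its y axis to the desired thickness), rotate by the line’s angle, and translate to the start point. A sketch of just that math, with invented method names:

```csharp
using System;

class LineSpriteMath
{
    // A 1px-square sprite becomes a line by scaling x to the line's
    // length, scaling y to the thickness, rotating, then translating.
    public static float LineLength(float x1, float y1, float x2, float y2)
    {
        float dx = x2 - x1, dy = y2 - y1;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }

    public static float LineAngle(float x1, float y1, float x2, float y2)
    {
        return (float)Math.Atan2(y2 - y1, x2 - x1); // radians
    }

    static void Main()
    {
        Console.WriteLine(LineLength(0, 0, 3, 4)); // 5
        Console.WriteLine(LineAngle(0, 0, 10, 0)); // 0 (a horizontal line)
    }
}
```

Those two values then feed a standard scale-rotate-translate matrix for the sprite draw call.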

In general, I suppose the low volume of tutorials was my only complaint with SlimDX, and that’s something that I can’t really fault them for given their newness and small size. There were enough tutorials in their SDK that I was able to get everything going except for the optimal DirectSound streaming, and that’s pretty darn good as far as I’m concerned.

I should also point out that my needs as far as graphics go are pretty limited, I was mostly using their matrixes, DirectX9 sprites and text classes, and things like that. I can’t really comment on the 3D aspects, or the DX10 or DX11 support, though I’ve seen the classes there and they are reputed to be very good. My experience with their DirectInput, DirectSound, XAudio2, and Direct3D9 components have left me feeling like this is a very solid library throughout, and the clear choice for C# developers who want to do DirectX development but don’t need XBox 360 support.

Update: I now have a streaming solution for SlimDX.

AI War 1.005 Released (Free DLC & Player Suggestions)

Arcen Games is pleased to announce the release of AI War: Fleet
Command version 1.005. You can download a trial version
of the game, as well as purchase
a license key
to unlock the full version. If you already have the
game or demo installed, just hit “Check For Updates” inside the game to
get the latest patch.

More Free DLC: Three new ships are now available for your use:  Science
Lab II, Mobile Repair Stations, and Counter-Negative-Energy Turrets.
 Each provides interesting new strategic capabilities.  As helpful as
these new ships are, however, they almost pale in comparison to the vast
number of other updates and enhancements in this release.  This release
contains dozens of improvements, tweaks, and extensions based on
suggestions from over ten players.  Thanks to everyone who took the time
to offer feedback so far — and if your suggestion made it onto the
official to-be-done-in-DLC list but not into this release, you should
see your ideas in play over the coming weeks.

Here are some of the highlights:

– The game now supports smaller galaxy maps in addition to the
existing sizes.

– Enhancements to zoom.

– Many new settings options, including better support for windowed
modes.

– Many, many new display modes available in galaxy view.

– Ship borders now flash in far zoom when they take damage.

– Minimap enhancements, including minimap display modes.

– Icons for important enemy ships are now shown in the Intel Summary
of the galaxy map.

– Minor planetary summary (the palette on the right side of the
screen) improvements.

– Ship autotargeting is now much improved.  Overkill is a thing of
the past.

– The AI now acts a little bit smarter at guard posts, retreating
more often when needed.

The above list is just a sampling, however, so be sure to check out the
full release notes to see everything (attached at the end of this post).
 More free DLC will be heading your way next week. Enjoy!

AI War on Co-Optimus.

From the post:

I checked out the demo last week and came away
surprised just how deep this game is.  The game isn’t for the faint of
heart when it comes to strategy.  Thankfully there’s a full tutorial
available to walk you through the game.  The cooperative mode feels very
robust, but you’ll definitely want to use some sort of voice chat
software because there’s so much stuff to communicate.

Head
over to Co-Optimus
for the full post.

Designing Emergent AI, Part 2: Queries and Code

The first part of this article series has been a hit with a lot of people, yet criticized by others for being too introductory/broad. Fair enough, starting with this article I’m going to get a lot lower-level. If you’re not a programmer or an AI enthusiast, you probably won’t find much interest beyond this point.

What Do You Mean, It Works Like A Database?
The first question most people ask is in what way my AI code can possibly be like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#) I already talked about how I am using frequent rollups for performance reasons. That also helps with the performance of the AI, but it’s really not the meat of what I’m talking about.

I’m using LINQ for things like target selection and other goal selection with the AI in the game, and that really cuts down on the amount of code needed to make the first level of a decision (it also cuts down on the amount of code needed in general; I’d estimate that the entirety of the decision-making AI code for the game is less than 20,000 lines). Here’s one of the LINQ queries for determining target selection:


var targets =
    //30% chance to ignore damage enemy can do to them, and just go for highest-value targets
    ( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
      AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
    from obj in rollup.EnemyUnits
    where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
        Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
            obj.LocationCenter ) < Configuration.GUARD_RADIUS )
    orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
        obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
        (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
        obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
        obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
        Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
        obj.UnitType.ShieldRating ascending, //how high are their shields
        obj.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
        obj.Health ascending //how much health the enemy has left
    select obj
    :
    from obj in rollup.EnemyUnits
    where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
        Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
            obj.LocationCenter ) < Configuration.GUARD_RADIUS )
    orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
        ( chooseWeaklyDefendedTarget ?
            obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
            ( chooseStronglyDefendedTarget ?
                obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
        (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
        obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
        obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
        obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
        unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
        obj.Health ascending //how much health the enemy has left
    select obj
    );

The comments should make clear what is going on. In some ways you could call this a decision tree (it does have multiple tiers of sorting), but the overall code is a lot more brief and, when properly formatted with tabs, easier to read. And the best thing is that, since these are implemented as a sort rather than as distinct if/else or where-clause statements, what you arrive at is a preference for the AI to do one thing versus another.

There are a lot of things that it takes into consideration up there, and there are a few different modes in which it can run, and that provides a lot of intelligence on its own. But that’s not enough. The loop that actually evaluates the above logic also adds some more intelligence of its own:


bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
    if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
        continue; //skip targets that are already dead, or are cloaked
    if ( unit.CloakingLevel == CloakingLevel.Full &&
         enemyUnit.UnitType.ShipType == ShipType.Scout )
        continue; //don't give away the position of cloaked ships to scouts
    if ( unit.CloakingLevel != CloakingLevel.None &&
         enemyUnit.UnitType.TachyonBeamRange > 0 )
        continue; //cloaked ships will not attack tachyon beam sources
    if ( enemyUnit.UnitType.VeryLowPriorityTarget )
        continue; //if it is a very low priority target, just skip it
    if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
        continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
    if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
        continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
    if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
        continue; //if we are unable to hurt the enemy unit, skip attacking it
    if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
         enemyUnit.GetHitPercent( unit ) < 1 ) //threshold assumed; the original comparison was eaten by HTML escaping
        continue; //stop ourselves from targeting fixed ships onto long-range targets

    gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
    gc.FGObjectNumber1 = unit.FGObjectNumber;
    gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
    gc.Integer1 = 0; //Not Attack-Moving
    unit.LastAICommand = gc.AICommand;
    AILoop.Instance.RequestCommand( gc );
    foundTarget = true;
    break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
    Point guardCenter = unit.GuardingObject.LocationCenter;
    if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
         Configuration.GUARD_RADIUS )
        Move( unit, guardCenter );
}

Nothing really surprising in there, but basically it has a few more decision points (most of these being hard rules, rather than preferences). Elsewhere, in the pursuit logic that runs once targets are selected, ships have a preference for not all targeting exactly the same thing — this aspect of watching what the others are doing is all that is really needed, at least in the game design I am using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.
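That target-spreading preference isn’t shown in the article’s code, but it can be expressed as just one more sort key. Here is a minimal sketch with hypothetical names, assuming we track how many nearby allies are already attacking each target:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: prefer the target that the fewest allies have
// already picked, so a group naturally fans out over several targets.
public static class TargetSpread
{
    public static int PickTarget(int[] targetIds, Dictionary<int, int> alliesAlreadyAttacking)
    {
        return targetIds
            .OrderBy(id => alliesAlreadyAttacking.TryGetValue(id, out int n) ? n : 0)
            .First(); // further tie-breaking keys would follow in a real query
    }
}
```

In the style of the queries above, this would simply be one more orderby clause ranked at the appropriate priority.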

Rather than analyzing the above code point by point, I’ll mostly just let it speak for itself. The comments are pretty self-explanatory overall, but if anyone does have questions about a specific part, let me know.

Danger Levels
One important piece of logic from the above code that I will touch on is that of danger levels: the lines of code above where the AI evaluates whether or not to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, which makes the AI harder to predict.

The main benefit of an approach like that is that most of the time the AI tries to pick off targets that are lightly defended (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), and yet there’s still a risk that the ships, or some portion of them, will make a run at a really-valuable-but-defended target like your command station or an Advanced Factory. This sort of uncertainty generally comes across as very human-like, and even if the AI makes the occasional suicide run with a batch of ships, quite often those suicide runs can be effective if you are not careful.
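Boiled down, the danger-level gamble from the query above amounts to two small checks. The 30% figure and the TripleBasicFirePower comparison come from the article; the helper names here are illustrative:

```csharp
using System;

public static class DangerLevel
{
    // Some ship types always strike the strongest target; everyone else
    // has a 30% chance per decision to disregard danger entirely.
    public static bool IgnoreDanger(Random rng, bool alwaysStrikeStrongest)
        => alwaysStrikeStrongest || rng.Next(0, 100) < 30;

    // "Lightly defended" as in the query: can this unit's burst firepower
    // overwhelm the military strength near the target?
    public static bool IsLightlyDefended(long tripleBasicFirePower, long nearbyMilitaryPower)
        => tripleBasicFirePower >= nearbyMilitaryPower;
}
```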

Changing The AI Based On Player Feedback
Another criticism that some readers had about the first AI article had to do with my note that I would fix any exploits that people find and tell me about. Fair enough — I didn’t really explain myself there, so I can understand that criticism. However, as I noted, we haven’t had any exploits against the AI since our alpha versions, so I’m not overly concerned that this will be a common issue.

But, more to the point, what I was actually trying to convey is that with a system of AI like what you see above, putting in extra tweaks and logic is actually fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI I could often come up with a solution within a few days, sometimes a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference to make the AI start considering, what relative priority that preference should have if it is indeed a preference, and then add in a few lines of code. That’s it.

With a decision-tree approach, I don’t think I’d be able to do that — the code gets huge and spread out through many classes (I have 10 classes for the AI, including ones that are basically just data holders — AIUnit, etc — and others that are rollups — AIUnitRollup, which is used per player per planet). My argument for using this sort of code approach is not only that it works well and can result in some nice emergent behavior, but also that it is comparatively easy to maintain and extend. That’s something to consider — this style of AI is pretty quick to try out, if you want to experiment with it in your next project.
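To make that concrete, here is a toy illustration (not AI War’s classes) of why each rule is so cheap in this style: every preference is one sort key, so countering a newly discovered exploit is often a single extra line in the chain:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class PreferenceChain
{
    // Rank toy targets by two existing preferences; a new preference
    // would slot in as one more ThenBy clause.
    public static List<(int Id, bool UnderForceField, int Health)> Rank(
        IEnumerable<(int Id, bool UnderForceField, int Health)> targets)
    {
        return targets
            .OrderBy(t => t.UnderForceField) // avoid force-fielded ships first
            .ThenBy(t => t.Health)           // then prefer nearly-dead ships
            //.ThenBy(t => t.SomeNewMetric)  // a future exploit-counter goes here
            .ToList();
    }
}
```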

Next Time
Part 3 of this series talks about the limitations of this approach.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable “perfect” decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the “tempo.”

UKGamer Review of AI War

From the review:

Since you specify the parameters
of the Universe you play in, there are a lot of replay opportunities and
the degrees of AI difficulty and play style will also further add to
that, so you’re getting a lot of indie-fuelled strategic play for your
$20. But you are also encouraged to become part of the AI War community,
and participate in its future evolution and development. With DLC
already being rolled out regularly, and a planned expansion in the
works, you too could be a part of its emergence.

Head over to
UKGamer for the
full review.

AI War 1.004 Released (Free DLC & Several Fixes)

Arcen Games is pleased to announce the release of AI War: Fleet Command version 1.004. You can download a trial version of the game, as well as purchase a license key to unlock the full version. If you already have the game or demo installed, just hit “Check For Updates” inside the game to get the latest patch.

More Free DLC: Two new ships are now out in the galaxy for you to encounter.  Both of these ships are quite challenging to deal with, so they only show up on the advanced difficulty levels.  Next week’s DLC will have ships for everyone, never fear — but other major features for everyone in this week’s DLC include a variety of control enhancements, and a new “No Enemy Waves” AI Modifier.

There are also a couple of bugfixes: if you were having trouble loading savegames in the trial version of the game, or if you had a larger screen resolution and game fonts looked chunky or fuzzy, this version fixes those issues.

The first new unit, shown right, is PermaMines.  These mines work like regular mines, except that they cannot be destroyed by being shot or by being collided-with (though they will kill that which collides with them).  These mines also don’t have cloaking, and you will only occasionally run into these on the higher difficulties on AI planets.  These mines create some interesting strategic challenges, as you can only get past them with EMPs, luck (sometimes ships don’t set off mines they pass — the smaller and faster the ship, the more likely it is to slip past), ships with Mine Avoidance (builders, engineers, raiders, etc), or by going around the PermaMined wormhole.

The second new ship is the Core Starship, a terrifying haunt of some core and home AI planets on the “HARDER” AI styles (so, regardless of what AI difficulty you play on, AIs like Technologist Homeworlders and Backdoor Hackers will have a chance of having Core Starships).  These starships have a lot more health than any other starships, and a powerful primary attack.  The best strategy with them is perhaps avoidance — you can set up decoys and distractions to keep them busy while you complete your real objectives.  Of course, you could opt for a full-on assault against them, too, but just be prepared for a long and difficult battle unless you bring along something like Mobile Force Field Generators to protect your units.  There are other strategies that work, too — experiment and see what fits for you!

There are three key control/interface enhancements included in this week’s release, all suggested by canny player Maxim Kuschpel.  They are: group move, free-roaming defender mode, and the ability to press the L key and divide your selected forces in half (by numbers of selected ship types).  You can read more on these new features in the controls document — press escape and then hit View Controls from the in-game menu while you are playing.

In a nutshell, Group Move lets you move a batch of selected ships which may have differing speeds all at the same speed (the slowest speed).  This is great for having fighters, bombers, and cruisers all reach an attack target at once, or for having faster ships guard a smaller, weaker, ship such as a Science Lab or Colony Ship that is on the move. To put ships into this mode, either hold the G key while giving them movement orders, or use the new button at the bottom of the HUD (Lone vs Group movement style toggle).  They will show up with a turquoise border in far zoom while in this mode.
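In code terms, Group Move reduces to clamping the whole selection to its slowest member; a trivial sketch (not the game’s actual code):

```csharp
using System.Linq;

public static class GroupMove
{
    // Every ship in the group travels at the slowest member's speed,
    // so the whole batch arrives at the destination together.
    public static int GroupSpeed(int[] memberSpeeds) => memberSpeeds.Min();
}
```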

Free-Roaming Defender Mode lets you easily defend your planets.  In past versions, if a couple of small ships got past your wormhole defenses, you’d have no choice but to hunt those down manually, which required more micromanagement than was always pleasant (that really emphasized wormhole defense, but perfection is not always possible).  By holding V and right-clicking anywhere, you can put your ships into the new Free-Roaming Defender mode, which gives them a pink border in far zoom, and makes them chase and attack any enemy ships on the current planet.  Setting some of your ships in this mode is great for letting them take care of any stragglers that slip through your defenses, without you having to manage it.

More free DLC will be heading your way next week. Enjoy!

Designing Emergent AI, Part 1: An Introduction

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.

Decision Trees: AI In Most RTS Games
First, the way that AI systems in most games work is via giant decision trees (IF A, then C, IF B, then D, etc, etc). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example from pretty much every RTS game since 1998 is how they pathfind around walls; if you leave a small gap in your wall, the AI will almost always try to go through that hole, which lets human players mass their units at these choke points since they are “tricking” the AI into using a hole in the wall that is actually a trap. The AI thus sends wave after wave through the hole, dying every time.

Not only does that rules-based decision-tree approach take forever to program, but it’s also exploitable in many ways beyond just the above. Yet, to emulate how a human player might play, that sort of approach is generally needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level — if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender’s Game, or something like that. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence
The approach that I settled on, and which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience — people from the future won’t be coming back to kill me to prevent the coming AI apocalypse. The primary benefit is that I can get much more intelligent results with much less code and fewer agents.

Strategic Tiers
There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn’t even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic — the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Sub-Commanders
Here’s the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It’s kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that’s actually emergent sub-commander behavior that was never explicitly programmed. There’s nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven’t been any ways to trick the AI since the alpha releases that I’m aware of, though. The AI runs on a separate thread on the host computer only, so that lets it do some really heavy data crunching (using LINQ, actually — my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which came across to the AI here). Taking lots of variables into account means that it can make highly intelligent decisions without causing any lag whatsoever on your average dual-core host.

Fuzzy Logic
Fuzzy logic / randomization is another key component of our AI. A big part of making an unpredictable AI system is making it so that it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the “best” choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, to counter it you would only need to figure out for yourself what the best decision is (or create a false weakness in your forces, such as with the hole-in-the-wall example), and then you could predict what the AI will do with a high degree of accuracy — approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I’d say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do… and usually it’s way less predictable than even that.
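One common way to realize that idea, assumed here rather than taken from AI War’s code, is to rank the candidate choices and then pick randomly among the top few, so the AI always acts on a good option without being locked to the single best one:

```csharp
using System;
using System.Linq;

public static class FuzzyPick
{
    // Returns the index of one of the topN highest-scoring options.
    public static int Pick(int[] scores, Random rng, int topN = 3)
    {
        var ranked = scores
            .Select((score, index) => (score, index)) // remember original positions
            .OrderByDescending(t => t.score)
            .Take(topN)
            .ToArray();
        return ranked[rng.Next(ranked.Length)].index; // any of the top N, at random
    }
}
```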

Intelligent Mistakes
Bear in mind that the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out ways in which to tone down the AI for the lower difficulties was one of the big challenges for me, actually. Partly it boiled down to just withholding the best tactics from the lower-level AIs, but also there were some intentionally-less-than-ideal assumptions that I also had to seed into its decision making at those lower levels.

Skipping The Economic Simulation
Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical and most strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. Same as if all the bad guys in every level of a Mario Bros game attacked you at once, you’d die immediately (there would be nowhere to jump to). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you’d have no hope.

Think about your average FPS that simulates your involvement in military operations — all of the enemies are not always aware of what you and your allies are doing, so even if the enemies have overwhelming odds against you, you can still win by doing limited engagements and striking key targets, etc. I think the same is true in real wars in many cases, but that’s not something that you see in the skirmish modes of other RTS games.

This is a big topic that I’ll touch on more deeply in a future article in this series, as it’s likely to be the most controversial design decision I’ve made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for having done it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI in the game uses the exact same rules as constrain the human players, and that’s where the intelligence of our system really shines.

Asymmetrical AI
In AI War, to offer procedural campaigns that give a certain David-vs-Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI’s economy works based on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased based on player actions. This lets the players somewhat set the pace of game advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It’s a very asymmetrical sort of system that you totally couldn’t have in a PvP-style skirmish game with AI acting as human stand-ins, but it works beautifully in a co-op-style game where the AI is always the enemy.
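Purely as a hypothetical sketch (none of these numbers, formulas, or names are AI War’s), such an economy might tick along these lines, with the player-driven AI Progress number scaling the AI’s side of the simulation:

```csharp
using System;

public static class AIEconomy
{
    // Reinforcement points accrue faster as AI Progress rises.
    public static double PointsPerTick(int aiProgress, double baseRate = 1.0)
        => baseRate * (1.0 + aiProgress / 100.0);

    // Waves launch when a countdown expires; higher AI Progress
    // shortens the wait, down to some floor.
    public static int NextWaveCountdown(int baseSeconds, int aiProgress)
        => Math.Max(30, baseSeconds - aiProgress);
}
```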

Next Time
This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don’t hesitate to ask! I’m not shy about talking about the inner workings of the AI system here, since this is something I’d really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I’d like to see stronger AI across the board.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable “perfect” decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the “tempo.”