Category: Programming

Recent Hacking Attempt

Hello everyone, Quinn here.

Quick Reminder!

We will NOT request any passwords, email addresses, or other personal information via PM. If any information is requested via PM, please report it and do not respond. With that out of the way, on to the meat of the post:

Hacking Round 1

Sometime yesterday, some of the accounts of Chris Park (company founder/owner) were compromised. The person who compromised the accounts attempted to get sensitive data from Keith and myself.  However, database searches do not show any record of them trying to get data from anyone else.

It was quite clever social engineering, but fortunately there were enough red flags that Keith and I were each able to pick up on the phishing attempt, and contacted Chris via external means to check if it was really him. Once we had confirmation it was not, we locked down the accounts. Chris was able to reset his passwords, we did a variety of security sweeps, and (after a few issues), Chris has full control of his accounts again.

Nothing seems to have been damaged in this — it was mainly a prelude to a second goal.

Hacking Round 2

Mid-morning today, Chris’ Steam account was abruptly compromised. It’s safe to say that this was the same individual as in the first attempt, because it fits with the goals they originally had, and they again leaned heavily on particular infiltration tactics.

Today’s attack was much more concerning for a variety of reasons, including what they targeted and how they did it.  We’re omitting that second part for hopefully obvious reasons.  Valve worked with us to piece together what happened, and there’s pretty good odds that exact approach won’t work again.

As part of gaining access to his Steam account, the attacker triggered an instant notification to Chris, which led to a quick shutdown of his Steam account.  After thorough review, we believe the attacker didn’t actually do anything once logged on as Chris.   Build data and similar have detailed logs that show no activity, and the cosmetic areas that are not logged as stringently seem untouched.

We don’t know if the attacker made any posts on Steam as Chris, or if he sent any chat messages. If you did get a message, post, or anything else from Chris’ Steam account today, please do not hesitate to contact us to verify the identity of who sent it.


Mostly this was a case of very clever social engineering.  There was a second prong that involved actual hacking, which made the social engineering particularly convincing.  Chris does of course use two-factor authentication, but that was circumvented via a particularly obscure method.  Fortunately, the fact that multiple accounts have 2FA on them enabled us to catch and correct it particularly quickly.  The vagueness here is to avoid encouraging copycats; it was not a case of password reuse between Arcen and Steam or something embarrassing like that.

At any rate, the initial method through which we believe the attacker got the password for Chris’s services was by exploiting a weakness in Mantis, which has now been patched to prevent further exploits.


We have not gotten reports of any other accounts being compromised. However, if any staff member sends you a PM and it seems out of place, please report it and do not answer it.

When in doubt, please send Arcen Games (arcengames AT gmail DOT com), the staff member who sent the PM, Chris Park (chrispark7 AT gmail DOT com), and myself (quinnbeltramo AT yahoo DOT com) an email regarding the matter. The reason behind sending everyone an email about it is to reduce the likelihood that the person will get away with the attempt. We can get in touch with each other by phone for any suspicious requests, as we did here.

We’ve spent most of the rest of the day combing through our databases and services, and Chris has spent that time scanning and examining his computers.  We’ve found no other evidence of anything unusual.

We would like to remind everyone to stay safe on the Internet, and use different, strong passwords for each site you visit. That will reduce the damage that can occur should an account be compromised — although in cases such as this, that’s more of a helpful measure than an absolute barrier.

I was able to verify that there was no unauthorized access to the web server or the database, so any information that was contained there is still secure.  Passwords on our site are hashed and salted, and are not stored in plain text for obvious security reasons.  We do not keep any credit card, paypal, address, or other such information on our own servers.
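For the curious, “hashed and salted” means storing only a one-way transformation of each password plus a per-user random salt. Here is a generic Python sketch of the technique (not Arcen’s actual code; function names are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, key). The salt is random per user, so identical
    passwords still produce different stored hashes."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password, salt, expected_key):
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected_key)  # constant-time compare
```

Only the salt and derived key would ever go in the database; there is no way to recover the plain-text password from them.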


Everything is fine, but if Chris or someone else from the staff said anything strange to you today or yesterday, please report it to us.  We had a very clever individual spending a lot of time trying to gain access to our Steam partner site for some reason, so we’ve circled the wagons quite a bit.  The biggest negative result of this has been lost productivity today, near as we can tell.

AI War 2 Alpha v0.103 released!

Here are the release notes, which are mostly bugfixes and more quality of life adjustments.

A Tale About Linux

Okay, so — Linux support, grr.  Mostly that has been going fantastically, and we have a lot of folks playing the game on linux machines with no issues.  That said, we’ve had a few problems with some of them, including the VM install of linux on my mac.  My new linux laptop arrived today, although I haven’t unpacked it quite yet.

Today and yesterday, I’ve spent hours and hours researching various things on linux, and here’s a lot of what I’ve found, just kind of in random bullet points:


1. On my OSX machine, the issue is just because of the VM version of linux, not the hardware itself.  This is almost 100% certain, because it’s well within the supported parameters.  Obviously the game runs great on that machine in OSX, so it was always a strange thing that it didn’t work on linux there.

It looks like it’s just a matter of anything higher than OpenGL2 not being supported in a pass-through fashion between ubuntu and OSX using Parallels between them.  I looked into doing a dual-boot situation with linux and OSX on that machine (I already have a dual-boot with Windows 8.1 on there, so I guess triple-boot), but that looked like it would eat up even more time and not yield information that is too useful at this point.


2. Anything older than a Sandy Bridge integrated GPU is no longer supported except on windows, so that means pre-2011 hardware.  On windows it still works because of DirectX9.

Basically, OpenGL2.x support was completely removed in Unity 5.5, which is quite frustrating and was not well-advertised as something that would happen.

A linux box with only integrated graphics and a CPU older than the sandy bridge architecture (second-generation Intel Core) will need to use WINE and DirectX9 in order to run the windows version of the game.  On OSX, it’s possible that the Metal support would work on a machine that old, but honestly I have no idea.  That also might be a case for WINE.

Either way, we’re talking here about machines that are 7+ years old and which only have integrated graphics, so the CPU load is likely to start becoming problematic once you pass that point, anyway.


3. If you ARE running a Sandy Bridge processor — or newer — and are using that for integrated graphics, then you need Mesa drivers from very late 2014 or newer.

Since Unity 5.5 only supports OpenGLCore, that means OpenGL 3.2 or newer is needed.  Those integrated cards can do OpenGL 3.3, but they’ll report that they cannot do 3.2 if you don’t have recent-enough drivers.  And there are new Mesa drivers that are awesome!
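If you want to see what your own drivers report, running glxinfo and looking for the “core profile version” line is the usual route on linux. Parsing the answer is simple enough; here’s a Python sketch (the sample string below is illustrative, not from any particular machine):

```python
import re

def gl_core_version(glxinfo_line):
    """Pull the major.minor OpenGL version out of a glxinfo output line."""
    match = re.search(r"version string:\s*(\d+)\.(\d+)", glxinfo_line)
    if not match:
        raise ValueError("no OpenGL version found in line")
    return int(match.group(1)), int(match.group(2))

sample = "OpenGL core profile version string: 3.3 (Core Profile) Mesa 13.0.6"
new_enough = gl_core_version(sample) >= (3, 2)  # what Unity 5.5 requires
```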


4. We have someone else with a Radeon card who is having a crash on linux, and I have no idea what is going on with that just yet.  He’s on the latest Catalyst drivers, which could mean there’s a bug in them, or… who knows, really.  I’ve heard from various reports of the driver support for Radeon on linux being pretty dodgy, so it could simply be a bug in the latest version of the drivers.  I honestly am not sure on that just yet.

As you can see in the link I posted there, there were a few other things I changed in the last build prior to this one that should make it more likely to run on that machine.  In that particular instance, some of the OpenGL Core command line parameters might actually work to solve the issue.

-force-opengl no longer works in Unity since the removal of OpenGL2 support, particularly on Linux.

However, this one might work quite well: -force-glcoreXY: XY can be 32, 33, 40, 41, 42, 43, 44, or 45, each number representing a specific version of OpenGL. If the platform doesn’t support a specific version of OpenGL, Unity will fall back to a supported version.

This one might also be of some use: -force-clamped: Request that Unity doesn’t use OpenGL extensions, which guarantees that multiple platforms will execute the same code path. This is a way to test whether an issue is platform-specific (a driver bug, for example).
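As an aside, a launcher script can build those switches programmatically. A hedged Python sketch (the game binary name here is hypothetical):

```python
# The XY values Unity accepts for -force-glcoreXY, per the docs quoted above.
SUPPORTED_GLCORE = (32, 33, 40, 41, 42, 43, 44, 45)

def glcore_flag(major, minor):
    xy = major * 10 + minor
    if xy not in SUPPORTED_GLCORE:
        raise ValueError("Unity has no -force-glcore%d switch" % xy)
    return "-force-glcore%d" % xy

def launch_command(game="./AIWar2.x86_64", gl=(3, 2), clamped=False):
    """Build an argv list; pass it to subprocess.Popen or similar."""
    cmd = [game, glcore_flag(*gl)]
    if clamped:
        cmd.append("-force-clamped")  # skip optional GL extensions
    return cmd
```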

All of that said, it may simply be an issue that can’t be resolved outside of driver fixes.


5. We also have someone for whom the game crashes to blue screen on linux every other run, which is something I saw limited mention of elsewhere on the internet yesterday… and now can’t find for the life of me.  If anyone finds threads or topics about that in unity 5.5, can you please send me the link?  Anyhow, I’m not positive if there’s much we can do on that or not, aside from possibly updating to a later version of unity 5.5.  Unity is currently on 5.5.1p4, whereas we’re using 5.5.0p3.  There just haven’t been enough changes in the newer 5.5 builds to warrant our updating quite yet, in my opinion.


6. Overall this works really well for most people!  Which I am very glad for.  But we’ve had some folks fishing for the lower end of the system specs range, which I super appreciate, and these are the things we’ve been running into.



There’s a new “Bleeding Edge Graphics Test” that I’ve included in the latest build, and I’d really appreciate it if you ran it and let us know how it went.  Basically that new build uses unity 5.6 for a simple scene completely unrelated to AI War 2, and it also uses the Vulkan renderer for linux and windows (and Metal on OSX).

That may very well solve the Radeon crash, I don’t know.  However, driver support for Vulkan is much more limited than something like OpenGL or DirectX — so far.  Vulkan is the successor to OpenGL, meant to replace it and modernize it, and it looks set to beat the pants off DirectX as well.  Metal is kinda Apple’s answer to DirectX11, and for some reason they don’t really do Vulkan yet.  Bummer.

Anyway, I’m reluctant to make people use Vulkan, because that immediately excludes a lot of pre-2014 hardware.  As it currently stands, that would be moving the minimum spec up to be more recent by 3-4 years, depending on platform.  Not my preference.  But if Vulkan gives people who can use it a speed boost, as well as maybe getting around some driver issues on linux, then that’s super exciting.

If you run the bleeding edge graphics test program and it just utterly fails, you might try doing so with the following command line argument mentioned above:

-force-glcoreXY: XY can be 32, 33, 40, 41, 42, 43, 44, or 45, each number representing a specific version of OpenGL. If the platform doesn’t support a specific version of OpenGL, Unity will fall back to a supported version.

Those would let you use Unity 5.6 while going back to an earlier version of OpenGL Core.  I’d suggest -force-glcore32 or -force-glcore33 (3.2 or 3.3).  March 31st is the official date that Unity 5.6 comes out of beta, and the “Bleeding Edge Graphics Test” will help tell us whether to upgrade to that ASAP or take a wait-and-see attitude.



Those substantial snafus with the initial rollout of alpha keys to some alpha-tier backers are all fixed up, by the way — thanks again for your patience on that!



The AI From Within – Part 4: The Special Forces

Keith here.

A Short Update on Development

Goodness, how time flies. We’ve been pretty quiet for the last few weeks finalizing the Kickstarter (95% of you have completed your BackerKit surveys, thank you! 137 backers have not yet; please do so!) and getting everything set up for the alpha. We’ve been applying our noses directly to the grindstone and believe we have developed the right foundation and workflows both for our own work and for modding support. We are currently planning to start the alpha Monday, February 27th, 2:00pm EST.

Sidebar For Modders

As we’ve mentioned before, the graphics in AIW2 are much more complex than in AIWC. Adding or replacing graphical assets won’t be as simple as adding or replacing a .png file. In order for the game to work efficiently with the meshes, materials, and so forth (and also with sfx), they have to be baked into “asset bundles” in the Unity editor. Thankfully, there’s a free version of the Unity editor that modders can use to create these asset bundles. Furthermore, the asset bundles aren’t baked into the game’s executable but can be loaded at runtime. So we’ve double- and triple-checked that this actually works and that we can hand you a workflow (with examples) that will function correctly. This has been going well, and as a side effect, we’ve significantly reduced the amount of time we ourselves spend waiting for assets to re-bake after a change, etc.

Other Progress

I’ve also spent a fair bit of time making sure that my workflows for adding new user interface elements and input bindings (i.e. “things you can trigger by pushing a button on your keyboard/whatever”) do not require recompiling the game itself or otherwise require anything an end-user won’t have access to. The list of bindings and the layout of the windows is in xml, and the code-logic that says what those keys/buttons/etc should do is defined in C# code that’s compiled into an external dll.

That external source code will ship with the game and you can modify and recompile it (with whatever you’d normally use for c# on your platform, be it VS or MonoDevelop or whatever) or add your own external dll’s that your xml can reference. Obviously playing a multiplayer game requires that you be using the same stuff which gets trickier on the C# side, but it’s doable.
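To illustrate the shape of that split (bindings as data, logic as code), here’s a toy version in Python rather than C#; the xml schema and every name in it are invented, not the actual AIW2 format:

```python
import xml.etree.ElementTree as ET

# Handlers live in code; the xml only refers to them by name, which mirrors
# the "layout in xml, logic in an external dll" split described above.
HANDLERS = {}

def binding(name):
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@binding("TogglePause")
def toggle_pause(state):
    state["paused"] = not state["paused"]
    return state

BINDINGS_XML = """
<bindings>
  <bind key="Space" action="TogglePause"/>
</bindings>
"""

def key_to_action(xml_text):
    root = ET.fromstring(xml_text)
    return {b.get("key"): b.get("action") for b in root.findall("bind")}

keymap = key_to_action(BINDINGS_XML)
game_state = HANDLERS[keymap["Space"]]({"paused": False})
```

The point of the indirection is that the keymap can change without recompiling anything, while new handlers only require rebuilding the external code, never the game itself.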

We’ve also made a ton of progress on the game itself, but that’s better shown in a video when Chris finds the time to make it :)

You call that a short update?

Yea, good point.

Last time in this series we looked at how an AI (or human) ship’s guns decide what to shoot at. Before that we covered the low-level “where do I go?” logic for AI ships not assigned to guard duty or some special purpose. Together, those two sets of logic basically make a complete opponent.

But we would never leave well enough alone. Not at all.

Today we’re focusing on something much higher-level: the AI’s Special Forces Fleet. Due to some interesting… comments… during a recent performance review, Laser Guardian #443x-023 will be conducting the briefing.

Those pesky humans

Laser Guardian #443x-023 here.

Stupid developers. I tell them “how do you expect the meatbags to be properly frightened when I show up without any paint on?” And they just mumble something about working on higher-priority things and we’ll get to it soon and shuffle me out the door again. HIGHER PRIORITY? Clearly an adjustment is in order.


So, a quick history lesson. Back in the day (AIW Classic version 5.0 and earlier) the “Special Forces” were basically just some special guard-duty ships that marched in a weird sort of galactic line-dance between the various Special Forces Guard Posts. Cute, but not especially effective.

When the galaxy was so obviously under our thumb, it wasn’t a big deal.

But the humans were getting entirely too cheeky about picking off our planets with no particular fear of immediate consequences.

On August 30th, 2012, thanks to feedback from several of the more provocative organic target-practice-units (notably Hearteater, Faulty Logic, Martyn van Buren, Diazo, TechSY730, Lancefighter, Sunshine!, Bossman, Draco18s, Ozymandiaz, Wingflier, and chemical_art), that all changed.

Malice aforethought

First, rather than just getting odds and ends of the normal Reinforcement budget, we got our own new Special Forces budget to maintain (but not exceed) a certain overall force level. Silly restrictions, but I guess there are other things to spend it on too.

Second, rather than just getting whatever random ship was selected for reinforcements, we got the ability to weight our preferences based on things we thought would be useful. Lots of anti-bomber stuff, for instance, since bombers were so annoyingly effective against everything we wanted to protect in AIW Classic.
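Mechanically, those first two changes boil down to weighted random purchases against a capped budget. A toy sketch in Python, with invented costs and weights:

```python
import random

# Invented example data: cost and preference weight per ship type.
# Anti-bomber stuff gets a heavier weight, as described above.
SHIP_TYPES = {
    "missile_frigate": {"cost": 100, "weight": 5},
    "fighter":         {"cost": 80,  "weight": 2},
    "bomber":          {"cost": 120, "weight": 1},
}

def spend_budget(budget, rng=random):
    """Buy ships until the budget can't cover even the cheapest one."""
    names = list(SHIP_TYPES)
    weights = [SHIP_TYPES[n]["weight"] for n in names]
    cheapest = min(s["cost"] for s in SHIP_TYPES.values())
    built = []
    while budget >= cheapest:
        pick = rng.choices(names, weights=weights)[0]
        cost = SHIP_TYPES[pick]["cost"]
        if cost <= budget:
            built.append(pick)
            budget -= cost
    return built, budget
```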

But my personal favorite was getting to spend 10% of the budget on the ships specifically found to be most annoying to the player. Well, that was basically just two ships: Tackle Drone Launchers and Lightning Torpedo Frigates. Neither was remotely well-balanced, and neither is slated for the (first) official release of AIW 2. Oh, Tackle Drone Launchers, I will miss you.

Even when we didn’t get TDL’s, it was great fun to focus on really long-ranged stuff, or stuff with lots of tractor beams (nothing like nabbing a bunch of human fighters and dashing off to alert other AI planets with them), and so on. Good times.

Later we even got Riot Control Starships, though they were never as effective as they were in human hands. Some players could achieve simply stupid kill-to-loss ratios with Riots, and they don’t even have decent firepower. We’re still sore over that.

Third, rather than our individual ships just following a line to the next special forces post, we got to pick a single planet to all gather on. We were generally only allowed to pick friendly planets (sigh), and we weren’t allowed to rush to the defense of particularly unimportant planets, but this let us build up a strong force instead of just trickling in. Especially since, if our services were not in immediate demand, we’d hang out on a friendly planet 3 hops or so away from any enemy planets.

Fourth, when the enemy dared show their faces we could check to see which planet we could do the most good on (prioritizing our homeworld or core worlds, avoiding planets where we couldn’t get there without passing through a grinder, and otherwise preferring the biggest concentration of organic intruders), and make a beeline over there.
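That planet-picking step reads naturally as a scoring function over candidate planets. A hedged Python sketch, with invented field names and weights:

```python
def rally_score(planet):
    """Score a candidate defense target per the priorities described:
    homeworld/core first, never through a grinder, then biggest enemy mass."""
    if not planet["reachable_safely"]:  # path would pass through a grinder
        return float("-inf")
    score = planet["enemy_strength"]
    if planet["is_homeworld"]:
        score += 10_000
    elif planet["is_core"]:
        score += 5_000
    return score

def pick_rally_planet(planets):
    return max(planets, key=rally_score)
```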

The result?

Nobody expects the Special Forces

A massive roving fleet of jolly AI ships, specifically built to annoy you, singing sea shanties while they crash your party and ruin your fleet’s day.

Of course, this didn’t stop the most virulent meatbags (stupid warheads…), but it did make them work a lot harder. And our refrigerator is covered with pictures of SF assaults scissoring in behind unsuspecting human fleets and stomping them flat.

The Future

So where are the Special Forces going in the sequel? All over the place. Literally, actually. Instead of a single fleet we’ll have regional headquarters, each with its own fleet.

We weren’t sold on the idea, at first, since concentration-of-force is pretty important. But then we were told that these regional headquarters were actually secret ninja hideout fortresses. How could we say no?

This also allows for more exotic behaviors and ship-preferences, since each region can use its own. An all-cloaking SF fleet that only attacks when you’re multiple hops from your territory may die like flies in some circumstances, or it might be the thing that stymies the human infestation in others. We’re all about iterating over permutations. That’s… actually about all we ever do.

… Ah, good, it appears the paint crew has finally arrived, so we can strike terror into our enemies and the appearance of the SF fleet can once again have its proper brown-trousers effect on hapless human attack fleets.

Ok, make sure you get plenty on the weapons ba-aaAAAAhhhHHH!! (entire ship dunked into vat of MurTech-brand Astatine-enriched Universal Solvent)

Up next

Keith here. Whoops, it looks like the quartermaster made a mistake on that paint order. Oh well.

(pushes speaker button, “Cleanup, nebula three.”)

Bear in mind that the plan is for modders to be able to define their own Special Forces controller logic sets, for determining which ships a fleet picks to build and/or which planet to rally at in a given situation. Then you can have those controllers be put into the normal rotation, or even remove the vanilla controllers and only use your own. Naturally, you would never use this to make your game easier by reestablishing the galactic line dance.
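In outline, that pluggable-controller design might look something like this; Python for brevity, and every name here is invented:

```python
# Registry of Special Forces controller logic sets. Mods can add their own
# and even drop the vanilla ones from the rotation.
CONTROLLERS = {}

def register_controller(name, fn):
    CONTROLLERS[name] = fn

def remove_controller(name):
    CONTROLLERS.pop(name, None)

def vanilla_controller(fleet_state):
    return "rally_at_nearest_core_world"

register_controller("Vanilla", vanilla_controller)

# A modder re-adding the old, easier behavior (you would never do this):
register_controller("GalacticLineDance",
                    lambda fleet_state: "march_between_guard_posts")
```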

I’m not sure when the next entry in this series will be, since work on the game itself obviously takes priority, but we’ll probably look at the offensive equivalent of the Special Forces: the Threatfleet. The SF gets to play dirty tricks on you, but a Threatfleet has much better opportunities for gratuitous vandalism.

Also in the works is another combat video from Chris. In general we’ll try to check in every now and then so you know we haven’t suffered a misfortune at the hands of disgruntled machines.

Original kickstarter post.



AI War 2 Random Working Notes Video Roundup – Circa January 19th

Chris here!  Okay, so this is a whole lot of random stuff from the last month, and these are some of our internal videos that I did a quick vetting of and figured were safe to release for those people who might be interested in them.

These are all unlisted on our youtube channel, to not clutter that up with this sort of thing, but if you’re interested in the development work side of the process (technical or art or otherwise), here’s some meat for you (in chronological order):

December 5th, 2016: Notes for Keith, specifically aimed at showing what was done in AI War Classic so that he could replace it with something much better for AI War 2 (via Forge Networking).

December 10th, 2016: Chris rambles about custom editors for Keith, but generally fine to show for whoever.

January 16th, 2017: Quick visual mockup for Blue (artist) of unpainted model of my rough (VERY rough) concept for the Ark for AI War 2. This is incredibly ugly in a lot of ways and is meant to be a bit of a “throw together some legos as a 3D concept” sort of thing.

She then takes it and makes it into something that is actually awesome, which is work that is currently in-progress as of the time of this post (though there are some screenshots from Maya).

Warning: this is very much “how the hotdog is made” in terms of the nature of the video!

January 16th, 2017: Notes for Keith on some craziness that happens with external DLLs and moving the cheese on unity, plus how to re-link assets that become unlinked.

January 17th, 2017: Here are some progressive screenshots from Maya of what Blue (artist) had done at that point (very very incomplete) and how her work was evolving from mine.

On the left in the first shot is the terrible mockup fbx file I created, and then in the center is what she was evolving that into in broad form:

Then later that day, two more shots of the Ark.

January 18th, 2017: More screenshots from Maya from Blue’s work, where she is basically done with the model itself.  She notes “I left all the small bits in grey-checker so you can distinguish them from the main ship.”

A few hours later that day, more stuff (in false colors).  She notes:  “UV’s done and organized.  Each color indicates a set of UV’s. So if they share a color, they share a UV set.  So there will be 8 texture sets.”

And then actually painting those uvs in Photoshop comes next, and is currently in progress. :)

Here are the flat colors, actually, with her notes: “I haven’t done any of the detail work yet. Just laying down the base over all.”

January 18th, 2017: Quick visual mockup for Blue (artist) of unpainted models that may or may not be used, with the general idea of the motion they would have, and the vertex-animated internal model for the “wormhole” inside the AI Warp Gates for AI War 2.

Very very rough, and not meant to be any sort of final product.

January 19th, 2017: Notes for Cinth on setting up LOD groups and otherwise optimizing the particle effects for AI War 2 to keep the framerates high.

Visual Evolution of the Bomber

This is out of sequence because it happened over the course of many days throughout this whole process, but here’s an interesting look at some of our workflow.

I gave Blue this reference bomber model to work with that I had created, but it was messy and needed a lot of work, and had no textures yet:

In the above two, it’s shown next to the fighter for scale.

She then had to do a lot of fixing of the model, which is a process we’ve decided to scrap because it’s much easier for me to create a rough approximation with notes (ala the Ark, as seen above), and then her to create the final model from scratch herself rather than trying to repair my wreckage.  Nonetheless, she spent 3ish hours repairing the wreckage of my bomber and made something awesome out of it:

Note how crisp all the lines are, although it doesn’t yet have a proper shader, looks washed out, etc.  This is to be expected.  Generally once I pull it into Maya and apply my shaders and tweaks, it is basically the final touches that turn it fully awesome.  The post-processing effects and bloom and her emission maps help a lot with that, too, to be sure.

But this time, there was a problem.  I set it up, and did so like this:

Grooooosss!  While the lighting and so on is cool, there’s a lot up there to hate compared to the original model.  She noted there was some quality loss, and I wasn’t sure specifically which things she meant, so I pointed out the following possible areas:

With some detailed commentary on each area that are not worth repeating here.  At any rate, I made a lot of tweaks to the shaders, and we got to this:

Muuuuch better.  Still needs work, though.

She finished her tweaks to the emission map and the albedo texture, and I figured out some big improvements on the texture import side (both quartering the VRAM used by the bomber as well as improving its crispness), and we arrived at this:

Biiiiig difference.

That’s It For Now!

Just a lot of random notes, since we’ve been quiet.  There was a ton of other stuff happening as well, but this gives you some idea. :)



AI War II – Far Zoom First Look

Lots of stuff has been happening on the AI War II forums in general, and there is a lot of news in the dev diary section in particular.  Once the kickstarter for the game launches — later this week — we’ll be sharing more through our blog and social media.  It’s been an intensive process so far in the forums, though, with folks contributing 2,470 posts across 180 topics in a matter of just a couple of weeks.

Here’s the first video of the game:

The kickstarter for AI War II is coming up in the next week! If you want to be notified of it by email, please send a note to arcengames at gmail dot com and we’ll be happy to oblige.

Our forums have a ton of information about what’s going on and how you can contribute (non-financially) prior to the kickstarter if you wish to.

And our public design document is huge and still growing.  It will be finished up (for purposes of v1.0 specs) within the next 3 days.

I’ll be around on the forums plenty in the coming days, or I’ll hopefully catch you later this week when we launch the Kickstarter!

  • Chris


Volumetric Lights And Custom Frustum Culling

What!?  Shouldn’t this post be about procedural generation or new robots? ;)  That stuff is coming, don’t worry.  As you can see in the release notes, a lot of work has been done on that.  But in the meantime I did want to do a release with a few other things in it.


Volumetric Lighting

First up, we’re now using the brand-spanking-new Hx Volumetric Lighting component from Hitbox Team, the folks behind Dustforce and the upcoming Spire.  I figure I owe them some shout-outs there, because their work on the volumetric lighting is so freaking fantastic.

It has a moderate impact on framerate, depending on your graphics card.  For a lot of lower-end cards you’ll need to turn it off.  But for folks running on middle-high or high-end rigs, this is something that really takes the game to the next level visually.

This is something I’ve wanted to do for quite some time, to give more of a sense of atmosphere to the game.  However, short of particle effects that can look iffy, and a few light-specific options that usually have iffy performance, there’s been no good way to do that until now.


Anyway, so, that’s neat.  That will make for some nice differences in the next round of videos, so I’m pleased to get that in now.

Obviously this is an effect that is not to be used on every last freaking light in the game — sometimes it’s really nice to have crystal clear areas that just pop with sharpness.  Other times you want a slight bit of softness, and other times you want something that’s super foggy.  The point isn’t that we’re switching over to deep fog all over the place, but that we now have a greater depth of mood effects we can go for.

The screenshots in this post are really leaning on higher-volumetric views, though, since that’s what is new; the non-volumetric views don’t look any different.  Oh!  And if you hate it, you can always turn it completely off.  So, as with all things, tune to taste.


Custom Frustum Culling

Occlusion culling is a complicated subject, particularly in games that are partly or completely procedurally-generated.  What it means is not rendering things that the camera can’t see.  The biggest problem is knowing what is behind other opaque stuff and thus invisible.

Unity has some built-in support, but only for static levels, not procedurally-generated ones.  I created my own occlusion culling system that works on procedurally-generated levels, but the levels have to be designed with the occlusion system in mind or else it doesn’t work to full effect.

However, there’s also a middle-tier of object culling that is based around the “view frustum.”  Aka, the view out of your camera based on where it is pointed right now and what your FOV is, etc.  Put another way, it’s to avoid drawing things that are either offscreen to your side or behind you.
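For illustration, here is a stripped-down visibility test in Python. It uses a simple view cone rather than the six planes of a true frustum, so it errs toward drawing too much (false positives) rather than hiding visible objects:

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def in_view_cone(cam_pos, cam_forward, fov_degrees, point):
    """Return True if `point` lies inside the camera's view cone."""
    to_point = tuple(p - c for p, c in zip(point, cam_pos))
    if all(c == 0 for c in to_point):
        return True  # the point is at the camera itself
    alignment = sum(a * b for a, b in
                    zip(normalize(to_point), normalize(cam_forward)))
    return alignment >= math.cos(math.radians(fov_degrees / 2))
```

A real implementation tests bounding volumes against the frustum planes instead, but the principle is the same: skip the draw call when the object cannot possibly be on screen.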


Unity has a built-in way of handling this, too, and I had — until now — not bothered to create my own.  I’m not in the habit of trying to reinvent the wheel for no reason.  However, I found that unity’s solution has some really strange issues with false-negatives when the camera gets close to a wall.  Basically it would stop drawing certain objects that were straight in front of me once my camera got a bit close to the wall behind it.

Imagine having your head leaning back against a wall, and the wall on the other side of the room in front of you mysteriously disappears.  Um… no thanks on that.

Apparently with any of Unity’s occlusion culling on at all, it was trying to do a mixture of occlusion culling (what is behind something else should not draw) and frustum culling (what is out of my view should not draw).  And when I got really close to a wall with the camera, it decided “oh hey, you must be on the other side of that wall.”


I’ve known about this for well over a month, and I figured that the solution was to get the camera to stay a bit further from the walls.  Turns out… nope!  That doesn’t work in any way that I can figure out.  I thought that it perhaps was related to concave mesh colliders, but nope there too!  I was really surprised, because I thought for sure that was the one.

I had turned off unity’s occlusion culling a few weeks ago because of the graphical errors it was introducing, but then performance took a big hit and so I turned it back on.  Now the graphical errors were getting on my nerves increasingly, so I decided to once again disable their system, and this time come up with my own frustum culling system.

So I did.  It works!  No false negatives.  It seems to have a very similar performance profile to what Unity’s system did, but minus the errors.  Knock on wood that’s what others also experience with it!
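For anyone curious what a frustum check looks like in practice, here is a minimal sketch.  This is not Unity’s code or mine, just a cone-based approximation written in Python for illustration: treat each object as a bounding sphere and ask whether it could fall inside the camera’s cone of vision.  All names here are invented for the example.

```python
import math

def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _norm(a):
    length = math.sqrt(_dot(a, a))
    return tuple(x / length for x in a)

def in_view_cone(cam_pos, cam_forward, fov_deg, near, far, center, radius):
    """Conservative cull test: True if a bounding sphere might be visible.
    Objects behind the camera or outside the view cone are culled."""
    to_obj = _sub(center, cam_pos)
    depth = _dot(to_obj, _norm(cam_forward))  # distance along the view axis
    if depth < near - radius or depth > far + radius:
        return False  # entirely behind the near plane or past the far plane
    dist = math.sqrt(_dot(to_obj, to_obj))
    if dist <= radius:
        return True  # camera is inside the object's bounding sphere
    # angle off the view axis, widened by the sphere's angular size
    angle = math.acos(max(-1.0, min(1.0, depth / dist)))
    return angle <= math.radians(fov_deg) / 2 + math.asin(min(1.0, radius / dist))

# something straight ahead stays drawn; something behind you gets culled
print(in_view_cone((0, 0, 0), (0, 0, 1), 60, 0.1, 100, (0, 0, 10), 1.0))   # True
print(in_view_cone((0, 0, 0), (0, 0, 1), 60, 0.1, 100, (0, 0, -10), 1.0))  # False
```

A real implementation would instead extract the six frustum planes from the view-projection matrix, which handles the aspect ratio correctly; the cone version errs on the side of drawing too much, which is the safe direction: no false negatives.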


Click here for the official forum thread on this post.



Raptor Dev Diary #3: More Audio And Video

Oi, Raptor!  Been four business days since we saw you — what’s new?

The short list of new stuff shown in this video:

  1. Improved post-processing visual effects.
  2. Improved raptor skin.
  3. Completely re-rigged and freshly animated raptor.
    1. A couple of glitches there, though, most notably in the walk animation.
  4. Completely new sound system based on SECTR Audio and a bunch of custom stuff.  HDR audio!
  5. Revised sound effects for many things, new sound effects for many things including the raptor, and things like reverb zones.
  6. More stuff is destructible now, and you can see how that actually affects level traversal in some places (much nicer than getting stuck on geometry).

What’s next?

  1. More splitting-out of the various assets and prepping them for procedural generation systems.
  2. The first foray into procedural levels with real content.
  3. Custom audio occlusion based on my occlusion culling system.
  4. Custom ambient sounds attached to objects (hissing pipes, etc) based on the SECTR Audio stuff, but distinct from it.
  5. Particle effects!
  6. Better vocalization sounds for the raptor, probably.
  7. Putting all these new things together in the form of actual enemies to fight that use all of these subsystems together.

Other important news?

  • Linux support is a definite thing now!
    • Previously we were considering using dearVR, which includes some compiled platform-specific code that does not run on Linux yet.
    • Instead we’ve opted not to get that product, and are instead achieving the same thing using a mixture of SECTR Audio, our own custom code, and good use of Unity’s own built-in reverb zone capabilities.  No platform-specific compiled code makes me a happy fella.

New Screenshots (click for really detailed versions!):

1. Running With Claws Out


2. Jumping Threateningly

3. Smash – And Cool Glass Shader

4. Dramatic Pose And Detail

5. Raptor’s Body Detail

6. Depth Of Field And Detail On Closeup Box



Click here for raptor dev diary #2.

Click here for raptor dev diary #1.

Click here for the official forum thread on this post.



Raptor Dev Diary #2: Thar Be Dinos Behind That Longwinded Man

First video of any of the parts of the game in action!  Bear in mind this is circa alpha 0.1, a private build.  The video should make that pretty clear, though.  I wound up talking about a TON of stuff with the project in general, and Unity development as a whole as well, instead of just covering what Blue and I have been up to in the last two weeks.

So… er… somehow that wound up lasting an hour?  If you just want to see the raptor run around for a brief bit, you only need to watch a couple of minutes out of that.  If you want to see how the layers of certain effects are made, or hear and see a bit of behind the scenes of how my custom occlusion culling system works, then keep watching.

Right — so how to sum up In Case Of Emergency, Release Raptor?

  • Be a velociraptor.
  • Feel like a velociraptor — lots and lots of design centers specifically around that feeling.
  • Procedural levels in different themes.  A pleasing blend of hand-crafted setting and randomization.
  • Fight robots!  Tear giant ones limb from limb, pounce on small ones, and try not to die.

The page for the game goes into more depth into what the game is about, as well as some information on minimum specs.

Click here for raptor dev diary #1.

Click here for the official forum thread on this post.



Free tool for game developers: Random Game Title Generator (with source code)

Bear with me, because this is going to sound strange.  First, let’s get the premise out of the way: naming games is hard, and it’s also incredibly important.

Why is it so important?  Well, take the following screenshot of Steam:


Unless you have a featured spot in the banner rotation, this is ALL people see of your game.  Even when your game is on sale, all they see is this, the base price, and the slashed price.  No screenshots, no trailer, no description, no tagline, nothing.  Just the little capsule image, the title, and the genres.

And you know what?  I’m okay with that.  Can you imagine how cluttered a storefront it would be if every game had even just a tagline?  Forget about it.  That would make people (including myself) glaze over even MORE when looking at the front page or scanning through the discounts page.  The same is true of pretty much all the other major stores, incidentally, so this isn’t limited to Steam.

But something that Steam does that the others do not is give us information about our clickthrough rates when our game is in a featured spot, so we learn how effective this stuff is — JUST the information you see above, before anyone even gets to trailer or screenshots, etc.  The clickthrough rate for Bionic Dues was abysmal — about half the global average for games on Steam.  The clickthrough rate for The Last Federation was incredible, more than twice the global average.

Obviously you have to have a good trailer, description, screenshots, and GAME on the other end of that clickthrough, but still — if nobody clicks through, you can have a great game that everybody overlooks.  Bionic Dues is one of those games that gets a really positive reception, got respectable reviews, and yet is very much overlooked.  At least one reviewer told me flat-out that the name was awful, and that he was frustrated by that because he felt like it would turn people away from the game.

There are a lot of reasons games sell or don’t sell, so we can’t blame anything entirely on the name for any game.  If a game is good enough, it will sell no matter what you call it.  If everyone is talking about a game, that will also overcome a mediocre name.  But what about all the games that are not inherently darlings with the press, that are exciting to a niche audience, and that therefore rely on storefront discoverability?  There the clickthrough rate is really important as a first step, and if that clickthrough rate is too low then you may just be kneecapping yourself.

Spectral Empire

After much brainstorming back in June, the name that we came up with for our upcoming 4X title was Spectral Empire.  I originally wanted something like Age of Ashes or Proto Colony, but those were really unpopular with our players.  And they didn’t really feel right to me, either.  After many many many attempts at names, we found one that was overwhelmingly popular: Spectral Empire.

That said, that popularity was only within a small group of our most dedicated fans, and in particular a group that was already weary from us trying out so many dozens of names on them.  There was some concern that “spectral” sounds like a fantasy game rather than a sci-fi one, but we all kind of brushed that aside as a cost of the process.

More recently, as we’ve been working on the internal prototype for the game, I’ve been getting more and more unhappy with the name.  I brought that up, and immediately some of the other staff said they’d been having the same feeling.  It wasn’t really a conscious thing at first, I don’t think, it was just a growing discomfort that wasn’t fully noticeable until a certain amount of time had passed.  (Which is one of many good reasons not to name your game at the last second, incidentally).

So now we are again trying to find a name for our 4X game with the help of our players, and thus far not coming up with anything that really immediately excites any of us.  Lots of brainstorming, which is great, but there’s nothing yet that makes ANY of us just go “oooh, that’s the one!”

Putting Together Puzzle Pieces

A name for a game can obviously be a variety of things.  It can be off-the-wall and eye-catching for that reason.  It can be pretty and symbolic and thus make people curious.  It can be funny.  It can be evocative of its genre, and thus pique interest in that way.

Of course, they all carry risks.  Off-the-wall is likely to miss a lot of people who have no idea what it is.  Pretty and symbolic may be a turnoff to some people who think it is pretentious and/or again have no idea what it is.  Humor is a fine line no matter what you do, since not everyone finds the same things funny.  And with “serious” games like the strategy or simulation genres, or even with hardcore first person shooters for that matter, a funny title can really be a mood-killer.  Being evocative of its genre is ALWAYS a good idea if you can manage it, but you risk delving too far into “Generic Game #56” names.  Let’s call it “Empire Combat: Age of War!”  Er… no.

So if you’re having trouble coming up with a name, and are anything like me, then what you wind up with is an increasing collection of possible words that can go in a name.  But what combination of them?  And in what order?  The more you brainstorm, the more potential words you wind up with, and the harder it gets to even consider all the various combinations.  I realized it was hitting a point where my brain just couldn’t do the job well anymore, and so I coded up a little tool to help.

Random Title Generator

First of all, you can download the compiled version here.  If that version doesn’t quite do what you need, then here’s the C# source code for it.  I hope it helps.

Now, how to use it?


The left column is your dictionary of interesting words.  If you decide you don’t like one after all, you can highlight it and hit delete or backspace.  You can save that list to disk with the button above the column, and you can load a list from disk with the other button above the column.  Easy peasy.

The middle column is kind of the “working column,” and gets used for two purposes.

Purpose one for the middle column is as a place to show potential game names.  If you hit the “> Titles From Dict” button, then it will make a list of randomized two-word combinations of words from the dictionary on the left.  Obviously your title may not be two words, but when you see something like “Lords Cities” that almost-kinda makes sense, it can spark you to think “ah, Lords of Cities.”  And of course that name stinks too, but anyway my point is that just doing two-word pairings is, I think, the simplest way to inspire even titles that have more words than that.  If you disagree and come up with a better approach for multi-word titles, then definitely feel free to re-share your altered version of the source code and/or executable.
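The pairing logic itself is tiny.  The actual tool is written in C#, but here is a hypothetical sketch of the idea in Python (function and variable names invented for the example):

```python
import random

def random_titles(dictionary, count=10, seed=None):
    """Generate random two-word title candidates from a word dictionary."""
    rng = random.Random(seed)  # seed only so runs are repeatable
    titles = []
    for _ in range(count):
        first, second = rng.sample(dictionary, 2)  # two distinct words
        titles.append(f"{first} {second}")
    return titles

words = ["Spectral", "Empire", "Ashes", "Colony", "Federation", "Valley"]
print(random_titles(words, count=3, seed=42))
```

Most of what it spits out is junk, but that’s fine; the point is the occasional “Lords Cities” that sparks a real name.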

The right-hand column is actually a big textbox.  You can paste anything you want into there.  Paste the words of a sci-fi novel in there.  Paste the names of every strategy game ever made into there.  Whatever you like.  Then click Parse Text To Possibilities, and it will write the distinct words into the middle column.  That’s purpose two for the middle column.  When you do this, it will order the central column by the most frequently used words.  So if you paste the collected works of Isaac Asimov, for instance, you can find all the most commonly used words by him and potentially get some inspiration from there (after you sort through all the “if” “you” “he” “she” “it” clutter, of course).
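The parse step is just word splitting plus frequency counting.  Again, the real tool is C#; this is a hypothetical Python sketch of the same behavior:

```python
import re
from collections import Counter

def parse_text_to_possibilities(text):
    """Split pasted text into distinct words, most frequently used first."""
    words = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
    return [word for word, _count in Counter(words).most_common()]

sample = "the ship the void the ship stars"
print(parse_text_to_possibilities(sample))
```

From there you skim past the “the”-style clutter at the top and cherry-pick the interesting words into your dictionary.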

When you want to move something from the middle column into the left-hand dictionary column, then select the row(s) that you want to move, and click the “< Words To Dict” button.  That will move just the selected stuff over, and clear those selected items from the middle column.

The Clear button over on the far right just clears the middle and right columns.

Anyway, so that’s how you build up your dictionary, and how you then generate potential names from that.

Also, I suggest the One Look Reverse Dictionary as an awesome resource.  It’s very much like a thesaurus in many ways, but the two are definitely distinct.  The reverse dictionary is a lot less literal and gives you some extended phrases and ideas that can be further afield.  It’s particularly useful for naming things.  I wind up using both quite a bit, and then the resulting words can just be fed right into the dictionary here.


Now back to trying to find a name that isn’t rancid for me…


Click here for the official forum discussion on this post.

Technical Notes: (Finally) Sprite Batching in our Graphics Pipeline

The really big update in the current version of The Last Federation is a new sprite batching system — and this is something that is going to be making its way into our other games soon, too.  This is a performance improvement that I have been putting off since 2010, and arguably since 2002.  I first ran into this sort of batching problem in 2002, and never did get around to solving it.  Then in 2008 I switched to using Direct3DX, which had sprite batching built in.  So that made it moot.  Then in 2010 we switched to Unity, and lost that capability again.  Since that time I’ve made about every sort of graphics pipeline improvement under the sun except to actually do this one.

I’m not actually that lazy, but it’s a very difficult problem in our sort of pipeline.  It’s hard to explain why exactly, but it has to do with the way that we queue things for rendering, the way we load things off disk, and so forth.  Also the way we use the depth buffer and the way we use orthographic cameras, in some limited cases (those relating to things in an isometric perspective, which thankfully doesn’t apply to TLF but does to Spectral Empire and Skyward Collapse and I believe the overworld of Valley 2).

Avoiding Transient Memory Allocation

One of the chief problems (among many many others) that I was facing was trying to figure out how to do sprite batching without having to constantly reallocate arrays given that the number of sprites that go into an array can constantly change.  Something Keith said in an email last week gave me kind of a lightbulb moment and I realized that I could have pools and pools and pools of stuff.  We do a lot of pooling in general, but generally we don’t have pooled objects that reference other pooled objects that reference other pooled objects.  But here that’s exactly what we’re doing, and it keeps the RAM usage incredibly tight and efficient — and avoids hitting the RAM garbage collector, which impacts performance, and which was one of my biggest worries.
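The pattern is simple even if the pipeline around it isn’t.  Here is a minimal object-pool sketch, in Python for illustration (our actual code is C#, and all names here are invented):

```python
class Pool:
    """Reuse instances instead of allocating fresh ones each frame, so the
    steady state triggers no garbage collection at all."""

    def __init__(self, factory):
        self._factory = factory  # how to make a new instance when empty
        self._free = []

    def get(self):
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        self._free.append(obj)

# pools referencing pools: batches hold pooled vertex lists,
# and the batch objects themselves come from a pool too
vertex_pool = Pool(list)
batch_pool = Pool(dict)

batch = batch_pool.get()
batch["vertices"] = vertex_pool.get()
# ... fill the vertex list and render the batch ...
batch["vertices"].clear()
vertex_pool.release(batch.pop("vertices"))
batch_pool.release(batch)
```

The key property is that `get` after `release` hands back the very same object, so after a brief warm-up the allocation count stays flat no matter how much the sprite counts fluctuate.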

In other words, part of why I put this off for so long was that I felt like I could get some performance gains out of this, but that I’d also be making some tradeoffs in order to do so.  I’d be basically trading some transient memory allocation and CPU processing for less data passing between the CPU and the GPU.  The latter is an important goal, but the former is a dangerous thing to play with.  So with Shattered Haven I used RenderTextures to get around the GPU limitations, with AI War Keith coded a proximity-based CPU-side combiner for far zoom icons (and something similar is used on the solar map with fleets in TLF), and with Valley 1 and 2 and Bionic Dues we came up with ways of combining tiles and using texture UVs to do repeating patterns across smaller numbers of vertexes, which in turn reduced the draw calls.

So we made do, in other words, because I was unable to think of a solution to the transient memory allocation problem.  Well, that and some other things.

Matrix Math

One of the other things is that we use Matrix4x4 transformations for scale, rotation, and positioning.  Moving scale and positioning out of that and into our own code is simple enough, really.  But moving rotation out of that and into our own code in an efficient way that would not bog down the CPU was no small task.  We were going to have to give up some precision for that, do a lot of caching, and so forth.  Keith spent a goodly bit of time last week working that out, and got it fixed up.

And then a funny thing happened yesterday: I realized that I could still use the Matrix4x4 math anyhow, and that we didn’t need to do any of our own custom code there at all.  So it literally looks and works like it always did, because we didn’t wind up needing to use the reinvented wheel that we made for that.  I hate reinventing wheels, but I was unaware of a few things regarding matrices and Vector3s.  Keith’s work was not in vain, however, because his implementation contained some more ideas that I cribbed and used to make other parts of the pipeline code more efficient.  So despite the fact that that code didn’t wind up being used directly, it still had an impact on making the pipeline as fast as possible.

What Sort Of Benefits Are There?

In Spectral Empire, there is an orthographic view of a hex map where you see countryside, buildings, etc, etc.  The number of draw calls this can cause reaches into the thousands if you zoom out much.  On my NVIDIA GeForce GTX 650 Ti on my main desktop, when I was all the way zoomed out prior to these updates, I could only get 27fps.  And that was actually clamping the zoom a lot tighter than I ultimately wanted it to be.  On the same scene, all the way zoomed out with the new updates, I now get about 200fps.

In The Last Federation, on my same card, I get around 800-1000fps when all the way zoomed out during a large battle with the bullet-crazy Obscura ships that are coming in the expansion for the game.  Those guys can fill a battlefield with literally thousands of shots on the screen at once, and performance understandably suffered previously.  I didn’t remember to check exactly what it was before the shift, but something sub-30fps is a pretty good bet.

Even in older versions of TLF, where bullets were not so plentiful, there were some folks on older graphics cards (in particular laptop ones) that could get bogged down during really heavy fighting and see their fps drop to the 10ish range.  That’s super frustrating, even considering that’s on a computer that’s 5+ years old (I think one was actually more than 10 years old).  Still, even my 4 year old MacBook Pro was dropping into the 40s during heavy fighting before, and I wanted that to stick at a solid 60fps minimum.

I can’t vouch for what will happen with all lower-end machines in terms of the improvements seen here, but I would expect that 30fps ought to be maintainable at the very least, and it’s possible that even during heavy fighting that 60fps can be maintained.

Why Are The Benefits So High?

Basically this lets us just use one draw call for a given texture/shader combination (or if it’s a hue-shifting shader, then texture/shader/hue combination), rather than one draw call per sprite.  This means that, depending on the scene, you can see an improvement of an order of magnitude or more.  “Draw calls” are expensive in CPU/GPU time, because they have to pull a texture out of RAM (across that bus), push it to the GPU (across that bus), and then send some vertex data (which is very tiny by comparison).  Then the next time there is a draw call, it does the same thing.  There are other implications as well, a lot of which vary by your platform and how the shaders compile there.  But at any rate, the rule of thumb is “Draw Calls == Slow.”

When there are a ton of shots on the screen, there are usually not more than maybe 10ish actual distinct graphics.  Even if it looks like more, usually a given shot type has a dictionary of sprites, meaning that the number of textures is still like 10 for all the shots.  But if you’re seeing 2000 shots, that would be 2000 draw calls without batching.  Sloooow.

With batching, it only matters how many TYPES of shots you have.  If you’re using 10 types of shots, it doesn’t matter if you have 10, 200, 2000, or 5000 shots on-screen — there are still only 10 draw calls.  (Actually I simplified that a bit for the sake of brevity, but it’s not far off).  The faces, vertices, and colors arrays (this is the per-sprite data) are really small (6, 9, and 12 floats per sprite, respectively).  At some point if we start sending 5000 shots to the screen we start hitting a different problem — namely that of GPU fill rates, and a couple of other possible slowdowns that are intra-GPU.  But it’s a much less likely problem to hit than the bus problems, because even mobile GPUs are built for pushing out WAY more pixels and vertices than we remotely come close to.
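The bookkeeping behind that claim is easy to demonstrate.  Here is a sketch in Python (hypothetical names; the real pipeline is C#): group sprites by their texture/shader key and issue one draw call per group.

```python
from collections import defaultdict

def count_draw_calls(sprites):
    """One draw call per distinct (texture, shader) batch, not per sprite."""
    batches = defaultdict(list)
    for sprite in sprites:
        batches[(sprite["texture"], sprite["shader"])].append(sprite)
    return len(batches)  # each batch becomes a single draw call

# 2000 shots drawn from only 10 distinct shot graphics
shots = [{"texture": f"shot_{i % 10}", "shader": "basic"} for i in range(2000)]
print(count_draw_calls(shots))  # 10, versus 2000 without batching
```

In a real renderer each batch would concatenate its sprites’ quads into one mesh before submitting, but the count is the point: the cost scales with distinct texture/shader combinations, not with sprite count.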

Anyway, so the difference in performance is something that is hard to quantify.  In high-load cases it’s orders of magnitude faster.  In low-load cases, it’s about the same speed as before (which is very fast).  Which is good news, because I worried that in those low-load cases we’d actually be getting slightly slower with an approach like this.  That was another barrier to my doing this, over the past years.  But not so!  It’s pretty awesome, and I am super stoked to have this new piece of our pipeline in place after so long.

An Extra Challenge: Orthographic Views

The deal with orthographic views is that you have to be able to sort them from front-to-back in terms of tiles.  You’re giving a fake sense of distance.  So this means that some tiles of Grass Texture show up further away, and some show up closer.  Meanwhile, some textures of Mountain show up very far away, some closer, some even closer.  Aka, some of the tiles of Grass are behind those of Mountain, and some are in front of it.

With an orthographic camera, this isn’t too hard for us to handle, even with delayed-write draw calls (as opposed to single-thread immediate-mode draw calls, which we used to use heavily but no longer rely on for most of our stuff as of a year or so ago — though we do use a hybrid direct and batched system for our pipeline).  Anyway, we’re able to just set z positions on sprites in an orthographic camera, and the stuff with the lower z position renders first.  Easy, right?

Well… it turns out that this is really only per vertex batch.  With our new sprite batching, we have a bunch of vertices that are defining textured quads (aka a square sprite), and these are all at different z depths.  No problem whatsoever when showing Grass relative to other grass.  It works just like it always has.

But wait… when you show Grass relative to Mountain in an orthographic camera that is using a false 2D orthographic perspective… you wind up with the entire batch of vertices being drawn as a whole, and thus you wind up with “Z fighting” issues.  It’s a familiar problem to 3D programmers, and not one that I had properly considered prior to doing the sprite batching.  Basically, you need to use the z buffer in order to make sure that things further back don’t draw on top of things that are closer forward, since each mesh (a collection of vertices and such associated with a texture) is drawn sequentially.

That sounds fine, but normally we don’t write to the z buffer or do z checks, because we’re using 2D and the order of draws just overdraws each pixel and it’s fine.  That works particularly well for 2D because if you have something with partly-transparent edges that is closer forward, it can blend well with the stuff behind it.  The z buffer is an all-or-nothing thing per pixel.  That means that if a partly-transparent pixel from something closer-in draws, then whatever would have shown through that partial transparency from further back will NOT draw, if its overall mesh is drawn second (but it will if it is drawn first).  Hence: z-fighting.

Anyway, the overall solution was to create a new shader that I called Depth, which is a variant of our normal basic shader, but with z writing and z testing turned on.  That has to be used for any layers using a fake 2D orthographic perspective, but is not required on any other layers.  To go along with this, any sprites being drawn with the Depth shader automatically skip rendering any pixels that have less than 50% opacity on them.  That keeps the edges sharper (unfortunate, but a minor thing, especially when already layering tiles), but also prevents largely-transparent pixels from causing strange black lines and creases between tiles — and worse, black lines and creases that flicker thanks to z-fighting.
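In miniature, the per-pixel rule works something like this.  This is a Python sketch of the logic, not actual shader code; smaller z means closer here, and all names are invented:

```python
def write_pixel(frame, zbuf, key, z, color, alpha, cutoff=0.5):
    """Depth-shader behavior per pixel: discard under the alpha cutoff so
    mostly-transparent fringes never write depth, then z-test the rest."""
    if alpha < cutoff:
        return  # discarded entirely: writes neither color nor depth
    if key in zbuf and zbuf[key] <= z:
        return  # something at least as close already owns this pixel
    zbuf[key] = z
    frame[key] = color

frame, zbuf = {}, {}
write_pixel(frame, zbuf, (0, 0), 5.0, "grass", 1.0)     # far tile draws
write_pixel(frame, zbuf, (0, 0), 1.0, "fringe", 0.2)    # near fringe discarded
write_pixel(frame, zbuf, (0, 0), 1.0, "mountain", 1.0)  # near tile wins
write_pixel(frame, zbuf, (0, 0), 5.0, "grass", 1.0)     # far tile now rejected
```

Without the cutoff, that 20%-alpha fringe pixel would have claimed the depth buffer and blocked everything at the same depth or behind it from drawing there, which is exactly the flickering-crease problem described above.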

Oy.  Thankfully, even with the games that use a 2D orthographic perspective, anything that isn’t in the layer of sprites that is orthographic is free to continue ignoring the z buffer, and all the wonderful blending effects can remain just as they always have.  That’s important for things like special blend modes (additive blending, multiplicative, etc), which rely on combining sprites with partial transparency.  So the need for the Depth shader remains thankfully quite limited, as it has some slight drawbacks that would frustrate certain kinds of art (particle effects, for instance).


I do wish that I had done this years ago, but honestly at the same time I am glad that I never implemented this the wrong way, because that would have been a far worse problem than not doing it at all.  And because we had to be creative for all those years and work around not having sprite batching, we actually came up with a number of OTHER performance improvements which still are part of our engine, and which help us squeak out even more performance than we could get if we were using sprite batching alone.  So that’s a pretty big silver lining.

As it was, even once I had the epiphanies on how to handle this, it took me the bulk of four or five days to actually fully figure out all the details and get them implemented.  That’s an unusually large chunk of engine work time by Arcen’s standards at this point in the life of our engine, but it was definitely worth it.  At the time of this writing, The Last Federation and Spectral Empire now have this fully up and running, but I’ll be porting it to some of our other titles soon.  Mainly AI War and probably Bionic Dues.  Honestly our other titles are already so GPU-efficient using other methods that I don’t think they really need it.

Thanks for reading, and if you’re another indie developer I hope this gives you some ideas for potential solutions to your own sprite batching problems if you’re using a custom engine.

Click here to view the forum topic for this post.

Old AVWW Top-Down Interior Generator Source Code Now Released

As promised, the source code (and runtime) for the old Interior Generator for the top-down version of AVWW has now been released.  If you want to read about what it is and what it does, please see the link above — it’s described in great detail.

And here is where you can download the source and runtime.

What’s new in this version, since the blog post linked above?  Well, the cleanup processes have grown and grown, so there’s something like 60 unique cleanup steps now.  It also builds even more varied interiors between the various interior styles, too.  I also started using chasms for the “remainders” where a wall wouldn’t fit but I wanted to have something blocked off — that worked out really well.

I’m sure there are still some little glitches here and there where something gets placed in a way that violates perspective, etc.  I had fixed most of those, but when we made the switch to a side-view this particular style of interior generation got tabled and so that was basically the end of this program for Arcen.

However, if you’re making procedural interiors for a game that is top down, especially if it’s tile-based with a faux perspective — like so many pixelart games are — then I welcome you to use this!  We’d appreciate credit if you feel like it, or at least a heads up that you’re using it just for our own curiosity, but you’re not legally obligated and it’s not a huge deal to us either way.  Consider this released under the MIT license, and if you find it useful I’m glad.  Enjoy!

AVWW Interior Generator (Technical Overview)

I’ll have a full developer diary #11 (no video this week, though) posted in the next day or so.  However, first I want to talk about interior floorplan generation.  This is something that I’m going to release as an open source example program in a month or two, once it’s fully polished.

This past week has been both the most productive and the most mind-numbing that I can remember in a long time, for me.  That’s a really odd combination, no?  Well, there’s a reason that I’ve been putting off the interior floorplans work for A Valley Without Wind since March.

Discarding The Map-With-Variances Approach
Initially I had thought I would do a map-with-variances approach for interior floorplans, basically a version of what we are doing for external overworld chunks.  The problem is, while outside wilderness areas are nice and blobby and organic and lend themselves well to that sort of approach, interior floorplans really don’t.  The first time somebody sees the bathroom opening directly into the kitchen, or finds a conference room that can only be accessed by passing through a small storage closet, the jig is up.

Then there’s the problem that I have to model building interiors based on exteriors that are pre-rendered.  What I mean is, the vague shape of a building (ignoring windows) needs to be the same as the outer shape you’re seeing in the overworld chunks.  If the exterior walls are angled in the pre-rendered image for outside, they have to be angled inside, too.  If the door is in the front center of the building, then it has to be there on the inside, too.  If your building is rectangular or square, tall or short, and on and on.  And that’s before you get into function, even — houses look one way inside, office buildings another, etc.

We’ve only got maybe three dozen building graphics so far, and lots more are planned, but already we have a staggering number of combinations of shapes, which was going to make that map-with-variances approach an incredibly intensive amount of content generation, as well as profoundly disappointing to players that expect something procedural.  Externally, our map-with-variances approach works so well because it augments the procedural generation techniques, providing more variance through complex human-created inputs than a pure-procedural approach is likely to make.  Internally, the variances would be too small because of the need to make the whole structure make some sort of human-constructed sense.

Examining Existing Fully-Procedural Approaches
So my next thought was to just go fully procedural for the interior of the buildings, after all.  I started cooking up some ideas for how to do this, and then decided to do some research.  And that quickly shot holes in pretty well all of my ideas.  Turns out that this has been a much-studied problem, and there is even one guy who did his master’s thesis on how to model 3D house interiors — just houses, not any other kind of building.  There were some other scholarly papers on other forms of building interiors, but most of those were behind paywalls and were likely to be full of examples in math, not code.  I think in code, not math.

This was mostly stuff for 3D models, though, anyway — much more complex than I wanted.  I turned instead to roguelikes, which are a very well-understood form of interior generation, with a lot of different kinds of algorithms.  Last fall I got into procedural maze-generation algorithms for some of the latest AI War map types, and these roguelike algorithms were only somewhat more complex on the whole.

The problem is that the maps they create are full of holes.  If you’ve ever seen a roguelike, you know what I mean: most dungeons have a room, with some hallways off of it connecting to other rooms, and all of this is floating in a big, empty blackness.  The overall extents of the roguelike dungeons tend to be unpredictable, there are all sorts of unaccounted-for holes throughout the map, and it definitely doesn’t conform to any sort of exterior building shape.

Perhaps the simplest example of these approaches was one by Jason LaPier.  Here’s his Javascript example that you can play with in your browser, and here’s his blog post about it.  I was attracted to his approach because it was very simple and was able to take in human-created data points in the form of room shapes, which would allow for a lot of flexibility even after the initial algorithm was complete.  However, it still suffered from all the drawbacks (for my purposes) of any roguelike algorithm, in that it created a roguelike-looking dungeon.  Which wasn’t my goal.

Setting: Creating Unique Architecture
Before I continue on with my technical explanation, there’s something else that I’d like to note: namely, that as I was looking at all these methods for making both fantastical and realistic building interiors, I realized how much more interesting the fantastical ones were.  Houses and offices are actually kind of boring buildings, when you get right down to it.

That’s why you see games like Left 4 Dead having all sorts of passages blocked off and destroyed, and parts of the buildings collapsed, etc.  Normally most buildings are meant to be as easily traversable and non-maze-like as possible, for the sake of usability.  We want people who enter an office building to be able to get from point A to point B as quickly and simply as possible, in the real world.  In the game world, it’s far more interesting for players if the path from point A to point B is as convoluted and challenging as possible (well, within reason).

This is something I’d run into in the exterior areas, too.  Normally, in real life, when you’re in a big field of grass or in the woods… you can just pretty much walk any way you want.  Sometimes there are bushes, but you can go through them if you really want to.  And going around is rarely far.  Most of a grassy field is going to be hugely boring to you, because it’s all just the same grass, essentially.  What made the exteriors in AVWW suddenly take on a life of their own is when I started to diverge from reality — making lots of little streams, lots of chasms, and various other obstacles to get in your way.

Suddenly the exteriors felt mildly to majorly maze-like (depending on the area), and were a lot more fun and interesting to explore even without having more interesting points of interest yet.  That was a really key thing for me to learn.  And if you look at something like Minecraft, you can also see how effective the terrain-generation is there, too, even with very few types of terrain.  This is also something that really adds to the attraction of roguelikes, I think — because their dungeons are very abstract and unrealistic, they are also very maze-like and excitingly unpredictable.  They feel like somewhere you’d be chasing after dwarves or orcs, not like someplace you’d be doing your taxes.

With all this running through my mind, the choice was easy — this game needs to take a more fantastical turn when it comes to interiors.  Instead of trying to model real house floorplans, let’s model house interiors that are as interesting and crazy as possible.  Floorplans that have all the parts you’d expect — kitchens, bedrooms, bathrooms, etc — but which are built to be intentionally game-like and thus maze-like.

A lot of games that are trying for realism just use the aforementioned techniques of building damage and rubble to accomplish the same effect, but to me that just wouldn’t provide enough procedural variance.  Players can tell when the floorplans themselves are boring and it’s just the rubble that’s moving around.  Of course it’s still my upcoming challenge to actually populate the buildings with interesting things, but right from the floorplans on up I wanted to have buildings that felt unique, unpredictable, and excitingly foreign.  Like a roguelike, then, but without all that soupy blackness between rooms and halls.

Crafting My Own Approach
The things I really liked about LaPier’s roguelike example were how he defined his room templates as char arrays that were easily human modifiable, and how he randomly mixed them up on a row-by-row basis to create random variances.  Since he graciously released his Javascript code under the MIT license, I decided to start with those pieces as a core for my work, and go from there.  Instead of doing halls and such in his fashion, I’d do something of my own.

The other challenge that I was going to have was that we use a faux-3D perspective in AVWW that is identical to most SNES RPGs but very divorced from anything roguelikes are doing.  Like most maze-generation algorithms, the roguelike ones assume a flat grid and walls that take up exactly one tile each on the grid.

What I had to do was somehow implement walls that had perspective, which meant that side walls would be one tile wide while top and bottom walls would be three tiles high at minimum, but preferably not more than three tiles high without converting into side walls or a big ceiling block above a certain point.  What a challenge!

This was the other reason that I really liked LaPier’s approach: I could build those elements of faux perspective into the room templates themselves, which would give me a good start on having a correct map once they were seeded in.  Then I’d just have to figure out a new way to add doors and hallways, to make sure everything was connected, and to correct all of the literally hundreds or thousands of perspective errors that were likely to result from just plopping everything together in this fashion.

So that’s what I sat down to do, a week ago.

The Process
This is the process that the interior generator goes through, in sequence:

1. The type of floorplan is based on an enum, and the size of the overall floorplan in width and height is encoded into the enum name for the sake of brevity.  Note that these sizes are inflated compared to the actual size that the building appears to be on the outside — common to most RPGs.  That gives us much more interesting interiors, and exteriors that we can actually see the whole of.

2. A random seed is passed to the FillMap method, along with the InteriorGenerationStyle enum.   Given those same two inputs, and the same desired floor (Z Index), the result will always give you the same map, which is helpful.
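
That seed-plus-style determinism can be sketched like so (in Python rather than the game’s C#; the enum values and the trivial tile fill are invented purely for illustration):

```python
import random
from enum import Enum

# Illustrative stand-in for the real C# enum; as described above, the
# floorplan dimensions are encoded into the enum (here as a value tuple).
class InteriorGenerationStyle(Enum):
    LITTLE_SHOP_20X15 = (20, 15)
    ICE_AGE_HATCH_30X20 = (30, 20)

def fill_map(seed, style, z_index):
    """Generate a tile grid deterministically from (seed, style, floor)."""
    width, height = style.value
    # Seeding with a string is stable across runs; mixing in the Z index
    # makes each floor of the same building distinct but reproducible.
    rng = random.Random(f"{seed}:{style.name}:{z_index}")
    return [[rng.choice(".W ") for _ in range(width)] for _ in range(height)]
```

Calling `fill_map` twice with the same three inputs yields an identical grid, which is what makes seed-based maps shareable and re-visitable.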

3. The first mapgen step is to set all of the InteriorGenerationStyle-related variables, which includes defining the broad profile of the building’s outer walls to match the exterior.  This also places the exterior doors or ladders.  For side doors, since you can’t see those in the exterior graphics in the game, those can be placed at any Y offset along each wall.

4. There are a couple of different room templates that I have defined in a similar style to what LaPier was doing.  Which are used varies by InteriorGenerationStyle as well, and I have four different possible definition types.  W is “provisional wall,” C is “provisional blocking ceiling,” b is “provisional non-blocking ceiling,” period is “generic floor,” and blank space is “hallway.”  At this stage in the process, that’s all we need.  I’m not defining room types, and I’m also not defining possible door-points.
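
Here’s a sketch of what one of those char-array templates might look like, with a tiny parser (the template itself is my own invention for illustration, not one of the game’s actual definitions):

```python
# Legend, per the tile codes described above:
#   W = provisional wall          C = provisional blocking ceiling
#   b = provisional non-blocking ceiling
#   . = generic floor             space = hallway
SMALL_ROOM = [
    "CCCCC",
    "WbbbW",
    "W...W",
    "WWWWW",
]

def parse_template(rows):
    """Turn a list of equal-length strings into an (x, y) -> tile dict."""
    return {(x, y): ch
            for y, row in enumerate(rows)
            for x, ch in enumerate(row)}
```

Because the templates are plain text, anyone can add or tweak room shapes without touching the generator code, which is the property that made LaPier’s approach attractive in the first place.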

5. For buildings that have more than one floor, it decides how many floors they will have between min/max values for both the upper and lower bounds.  It then clamps the requested floor to the actual floor bounds — so if you request floor 0 and the max floor is -1, you get floor -1 as your topmost floor (so, this would be an underground building, like the ice age hatch).
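
That clamping rule is simple, but worth pinning down (a sketch; the function name here is illustrative, not the game’s actual method):

```python
def clamp_requested_floor(requested, min_floor, max_floor):
    """Clamp a requested Z index into the building's actual floor range.

    Requesting floor 0 in a building whose floors run -3..-1 (an entirely
    underground building, like the ice age hatch) gives -1, its topmost
    floor.
    """
    return max(min_floor, min(max_floor, requested))
```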

6. Now we actually cut staircases between the floors.  Starting at the Clamp(0) floor, it starts deciding where to place staircases, and then it recursively moves toward the floor you actually requested, collision-detecting and stripping out two-floors-away staircases as it goes.  The result is that you wind up with staircases that are always lined up perfectly and never sitting on top of one another.  Beyond that it’s a little hard to explain, so I’ll leave the code to speak for itself on how exactly I made that work, when the code is released.

7. Now that we have the outer shape of the building in place, the doors or ladders to the outside, and the interior staircases, it’s time to actually start filling in walls and rooms and hallways.  To do this, I use LaPier’s method with a few modifications: I don’t intentionally leave space between each room, and when there is leftover space I distribute it completely randomly in the X and Y bounds of each row.  I also randomize the list of rooms per row after all of them have been picked, so that there isn’t a trend of having smaller rooms to the right of the building.
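
A sketch of that modified row-packing idea: pick widths until the row is full, shuffle them to kill the left-to-right size trend, then scatter the leftover space as random gaps (the widths and row size below are invented, and this is my own reconstruction, not the actual code):

```python
import random

def pack_row(row_width, room_widths, rng):
    """Fill a row with randomly-chosen room widths, shuffled so small
    rooms don't trend toward one side, with any leftover space scattered
    as random gaps rather than always left at the right edge."""
    chosen, remaining = [], row_width
    while remaining >= min(room_widths):
        width = rng.choice([w for w in room_widths if w <= remaining])
        chosen.append(width)
        remaining -= width
    rng.shuffle(chosen)  # avoid a trend of smaller rooms on the right
    # Scatter the leftover space as random gaps before each room.
    gaps = [0] * (len(chosen) + 1)
    for _ in range(remaining):
        gaps[rng.randrange(len(gaps))] += 1
    placements, x = [], 0
    for gap, width in zip(gaps, chosen):
        x += gap
        placements.append((x, width))  # (x offset, room width)
        x += width
    return placements
```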

8.  Step #7 happens in isolation, in a completely different code structure from the actual building I’ve been building so far.  What I wind up with is an identically-sized LaPier-method overlay of rooms with no external shaping that I can plop down on the staircases, the doors, and the exterior walls that I’ve defined so far in steps 1-6.  During this step, I never overwrite the actual existing building structure that we’ve defined in those first six steps.  The staircases, the doors, and the exterior walls cause all sorts of disruptions to the LaPier-style rooms, and that’s completely okay as that actually adds substantially to the variance once we combine the two.

9. Now I’m done with the LaPier method, and I have a perfect external shape, random staircases, doors to the outside, and some pretty interesting rooms layered about — and filler hallways naturally develop based on how I designed the LaPier room templates, too. However, there are no internal doors between any of the rooms or hallways, and all sorts of things are inaccessible and invalid.  The perspective of the walls is munged up in many places, because the various room templates haven’t been blended together, they’ve just been set next to one another.  When working with rooms with perspective, that’s deadly to the realism.

9.a. From here on out, we enter a master method called CleanUpInteriors, which is what I spent the bulk of this past week creating.  It has 22 steps that it goes through with the “provisional” walls and floors and ceilings, and then it converts all the provisionals to actuals, and then it goes through a further 30 distinct cleanup steps.

9.b. The very first thing that we do in CleanUpInteriors is 8 cleanup steps that get rid of the most obvious problems: walls that are incredibly high or short, ceilings that are placed incorrectly, and so on.

9.c. Next, and optionally, the game looks at all of the defined rooms (any contiguous areas of GenericFloor), and it cuts random doorways between a room and anything that’s not part of the room, in a random direction.  Later we’ll definitely be more precise about making sure everything is connected in a valid way, but this is a handy way to disrupt things early.  Some InteriorGenerationStyles use it, others turn it off.

9.d. Now we go through another 11 of those cleanup steps, some of them quite lengthy.  The goal here is to get things prepped to the point that the overall structure of what is closed off and what is open is now pretty set.  We need to know which tiles the player is allowed to traverse in some fashion in order for our next step to work right.

9.e. Having the data on traversable tiles in hand, now I use a pathfinding algorithm to find all the grouped traversable tiles that are unable to be traversed to one another.  For buildings with two exterior doors, I actually use a pathfinding trick that allows for each door to have a separate set of rooms and hallways that does not connect to the other.  That doesn’t always happen, but it can, which is a neat variance.  Sometimes one of the side doors just opens into what amounts to a coat closet, or sometimes it’s just half the building, etc.

At any rate, once I have the data on where connections are missing, I add in randomly-sited doors such that everything is connected.  It makes sure that a valid connection will result from each door so that it doesn’t put in excess doors, but where it places that door out of the pool of valid places per contiguous traversable tile set is completely randomized.

9.f. Now we have a fully-traversable interior floorplan, but there are still a lot of details that are messed up, and the cutting of the new doors (and in some cases, hallways) has caused some new minor disruptions.  So it’s time for one last cleanup step on the provisional tiles, then the conversion to the non-provisional tiles, and then the final 30 cleanup steps.

9.g. Some of these final 30 cleanup steps add in chasms (for rooms that are completely inaccessible by any means for some reason), and courtyards (for rooms that aren’t directly accessible via an interior door or hallway, but instead require going under the edge of a ceiling to get into them).  The chasms and courtyards are fanciful and just part of the AVWW theme; you could easily leave them as hallways or GenericFloor, at your preference, if you’re using this algorithm in another game.

10. Now the game has a fully-defined, polished set of interior rooms, halls, chasms, and courtyards, inside an outer structure, with staircases up and down and exits to the outside on a single floor out of potentially many.  FUTURE STEP: All of the rooms are now nicely defined, distinct from the hallways, based on being either GenericFloor groups or Hallway groups.  These GenericFloor groups can instead be changed into more specific floor types, such as Bedroom, Kitchen, Bathroom, Office, etc, etc, etc.

This is actually a comparatively straightforward step, although I’ll have to be careful not to put bedrooms leading into the kitchen, or bathrooms without any doors.  Anyway, I haven’t done this step yet, which is one of the main reasons I’m not yet releasing the code (the other reason being that the numerous cleanup steps need more testing and tuning time, although at this stage they’re looking pretty solid).

Once the above process is completed, you have a hard-won array of tiles that define an interior floorplan.  It’s devoid of any objects, but it lays out the rooms and doors and stairs and all that good stuff.  It also doesn’t really lay out anything specific about the walls or ceilings, etc.

So the game itself has to decide when to draw a front-left-corner ceiling tile, or things of that nature.  These were already things I’d implemented in AVWW when I was creating floorplans by hand in xml, and it’s a really straightforward bit of logic (if there is ceiling to your top and right, but not your bottom or left, draw the front-left-corner ceiling, etc).
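
That neighbor-based rule translates into code almost directly (sprite names here are illustrative, not the game’s actual asset names):

```python
def ceiling_sprite(has_top, has_right, has_bottom, has_left):
    """Pick which ceiling sprite to draw from the four neighbor flags,
    in the spirit of the rule described above: ceiling above and to the
    right, but not below or left, means the front-left corner piece."""
    if has_top and has_right and not has_bottom and not has_left:
        return "front-left-corner"
    if has_top and has_left and not has_bottom and not has_right:
        return "front-right-corner"
    if has_top and has_left and has_right and not has_bottom:
        return "front-edge"
    if all((has_top, has_right, has_bottom, has_left)):
        return "interior"
    return "other"  # remaining cases would each get their own sprite
```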

Also, the game itself will have to go through and actually put in things like tables and beds, ovens and debris, and so on.  But since the algorithm above (with the later addition of step 10) will determine the function of rooms (which tiles belong to the kitchen, etc), that makes for nicely encapsulated, relatively straightforward sub-algorithms.

This algorithm also doesn’t do any setting of tilesets — to choose what the walls, ceilings, or floors look like.  That’s yet another thing the game has to do, and something I’d already coded back in March.  That stuff is trivial compared to actually making the floorplans, I can tell you.  But the cool thing is that this represents nesting within nesting within nesting (actually deeper than that, in reality).  So the floorplans are quite unique when just looked at on their own, but when you combine those with different entity-seeding logic in each room and hall, with different tilesets, with different enemy populations, and so on — you get a pretty insane amount of variance.

Example Floorplans In My Ugly GDI+ Tester Program

Ice Age Hatch Examples 1-4

Little Shop Examples 1-4

Log Cabin Lodge Examples 1-4

Modern Ruins 3 Examples 1-4

In Conclusion
If you’re absolutely desperate for an algorithm like this and this is the one you think you want, then shoot me a note and I’ll send you the C# code now.  But I’ll be officially making the code public under the MIT license in a month or two, once I’ve got things completely ironed-out and step #10 in there.

The tedious part of all this was getting all the various cleanup steps tested, and tested, and tested.  I had the working exteriors and interior walls put together after a day or two (most of that time spent on research of various interior wall methods), and I figured I was almost done then.  What I didn’t figure on was another 2,000 lines of code that would be required for cleanup alone.

In a lot of respects, this sort of cleanup very much reminds me of the sort of work I used to do whenever we had a business client with dirty data in Excel that we needed to scrub and get set up in a properly-validated database.  What I didn’t expect in this particular instance is that these cleanup steps add even more to the procedural variance in the floorplans.  Based on everything that gets combined, scrubbed, cut, and so on, you get a ton of room shapes emerging that were never in the base room templates.

Oh, and the other thing that I should mention is that at any time I can add more room templates, and get even more variety without adding new code (same benefit as with the base LaPier method).  Right now I did enough so that I could differentiate the various types of buildings, and so that I could really get a lot of starting variance, but there’s lots of room for more later.

All right, that’s it for now.  Like I said at the top, I’ll do an actual sans-video post on our overall progress on the game at this stage, but this was my big project for this period and it was pretty interesting, so I figured it was best as its own post.

UPDATE: The source code is now available.

Q&A: In What Ways Is A Valley Without Wind Randomized?

Question from leekster: I was just curious how much of the game is procedurally generated? Is it in the vein of Dwarf Fortress, where all of it is random? I understand monsters can’t be random due to all the animations and textures. But are spells randomly generated? Thanks for your help.

Answer from Chris: Well, this is a really good question, but also a really complicated one to answer.  I could answer you with a lot of technical details, or I could try to answer the spirit of your question.  I suppose I’ll try to do a bit of both.

Dwarf Fortress As Your Example
First, to clarify, though: Dwarf Fortress is not nearly as “completely random” as you imply.  This isn’t a criticism of that game, but I think that if I’m going to be able to answer you, we have to define our terms a bit here.  I’ve only played a little bit of DF, but for the most part I find that it has lots of pieces and high randomization in how those pieces fit together, but the overall algorithms are anything but random.

For instance, in DF all the various ground types are predefined — Sandy Clay, layers with Aquifers, and dozens of others.  So far as I know, all the enemy and dwarf types are also predefined — but boy are there a lot of them.  In terms of the world map regions, those are also defined by the designer, but plentiful.  They are “random,” but not truly so — you get clumpings of regions, you get islands, you get rivers, etc.  This is algorithmic randomization, not pure randomness.  You don’t get things like a water tile next to a lava tile next to a grassy tile next to a snow tile; things are randomized only within the bounds where they make some kind of sense.

So in a lot of senses, DF is not very random at all — I would argue that 80% of the game isn’t random at all.  This is also true of AI War, which is likewise known for being pretty random, if not so plentiful in its pieces as DF.  But that other 20% that IS randomized is what makes the game feel “completely random.”  That’s a really important distinction.

AVWW Is Mostly Like That
Now, to answer the spirit of your question: from what I have seen of DF, I would say that AVWW will have a similar degree of randomization overall.  AVWW will also have a similar or greater degree of randomization to AI War.  The tricky thing about this sort of algorithmic randomization, though, is that it requires a ton of input in order to generate a lot of randomized output.  AI War has had about two and a half years to build up, and DF has had… I want to say seven years, but I’m not positive.  Maybe as little as three.

At any rate, earlier in their life cycles, both DF and AI War felt “less random,” even though they were just as random, because they had less content.  If you have three things to choose from at random, that doesn’t seem very random even if it is.  If you have three million things, it would seem very random to choose between such items even if it’s not random at all, or only partly random.

AVWW, of course, won’t ship with three of anything — nor with three million.  My hope is to have dozens or hundreds of each type of thing in the game at 1.0, though, so that there’s a high degree of “this feels random” that we can then continue to build on with free DLC and paid expansions as we have with AI War in the two years since it came out.  My expectation is that AVWW will feel considerably more random and content-ful at 1.0 than most games on the market, but that it will truly start becoming crazy post-release, assuming players are interested in that.

Specific Examples
So, that was all pretty general and vague, but hopefully it answers the spirit of your question, which I think is important to do.  Now to a few specific examples:

Monsters of course aren’t random because that wouldn’t have much meaning.  How do you create a random monster?  The closest thing I can think of is Spore, and I think we can all agree that even after years of a large company pouring resources into creating “random” creatures, the result was underwhelming.  This is a case where I think that human creativity trumps randomness, and if you have enough monsters that are interesting then you wind up with something that feels varied and interesting, which is the point.

That said, the way that monsters are seeded into the world is hugely random, although there are various “population patterns” that we use to make sure that things don’t fall out of bounds — this is akin to making sure there’s no lava or tropics in the arctic zones of DF world maps.  Algorithmic randomness is a big pattern in all the games mentioned here.

Spells are pretty much the same sort of story.  They aren’t random because I’m not sure what that would really mean.  In the closest example to “random” for spells, Magicka, most of those spell combinations that players “discover” were actually anticipated by the developers and programmed in.  Those art effects and the way the spells work has to come from somewhere.  Don’t get me wrong, their approach was brilliant and fun, but it also wasn’t very random in the case of most of what players encountered — it only felt random.

Of course, they did have some genuine randomness built in to the areas that they didn’t specifically code, and some of those devolved into some balance-breaking superspells that they had to deal with.  Not unexpected, and not even tragic, just the cost of doing business when you have any random component.

At any rate, spells in AVWW aren’t planned to be random or combinatorial, although we do have a pretty interesting slots system that we’ll be unveiling soon.  Basically letting you customize and combine various types of items and equipment to get some more unique, if not random, results.

More to the point, which spells you get at any given time will be pretty random.  You have to find the right pieces to make these spells, and as you level up you’ll want to also craft higher-level spells.  So if you get a Fireball I gem, you won’t just use that for the rest of the game because it’s your favorite spell.  You’ll eventually craft Fireball II and maybe even Fireball XIV if you play that long (that’s like 140 hours in one world to get a level 14 spell, we estimate).  So your equipment loadout is going to be heavily random as well as changing on an ongoing basis as you explore around and craft new stuff.

Terrain is another good example of where algorithmic randomization comes into play.  When you look at the DF world map, for example, all the worlds are different in their details, but their very broad outline is always the same, right?  Cold arctic and antarctic at the north and south, a realistic temperature spectrum between them, and water in realistic bodies with one or more land masses in between them.  When you generate a new world in DF, it looks like a reasonable facsimile of a world, not like some cut-up messy soup of a world.

Through my work on AI War, what I’ve really found is that it’s important to have multiple layers of randomization.  If you just have one layer, even a really good layer, it doesn’t feel that random.  If you have ten layers each with some hand-crafted parts and some random parts, you get multiplicative complexity that feels very random — and yet still also makes sense.  This is something you can also see evidence of in the DF maps — you have regions and subregions and types of ground layers and hostiles and so on, all nested within the overall world creation algorithm.

Terrain generation in AVWW works in much the same way, in that there are some broad not-that-random-but-still-somewhat-random algorithms in play, and then nested within that are many layers of randomness.  So far we’re only partly through actually getting those layers all up and running, but as each layer comes online it makes the game leap forward in terms of how varied it is.  Hopefully by our next video I’ll be able to show off the next major outdoor layer, which I’m quite excited about finally getting to.

Characters are another good example of randomization.  Sure, we only have x number of sprites (right now 2, but hopefully about 60+ by the time we launch 1.0), but there’s a ton more to a character than just their visual look.  In terms of names, there are literally a few million possible combinations of first and last names per sprite.  In terms of actual stats, we have a system of stats (physical attack, magic defense, etc) that get randomly rolled per character out of a pool of points.  This is pretty familiar to any western RPG, really, but we don’t let you re-roll.  You choose from the characters you meet, which has a pretty interesting element to it on its own.
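
The points-pool stat roll described above might look something like this (the stat names come from the ones mentioned, but the pool size and the one-point-at-a-time distribution are my own guesses for the sketch):

```python
import random

STATS = ["physical_attack", "magic_attack",
         "physical_defense", "magic_defense"]

def roll_character_stats(pool, rng):
    """Distribute a fixed pool of points randomly across the stats,
    one point at a time, so every character sums to the same total
    but has a different spread."""
    stats = dict.fromkeys(STATS, 0)
    for _ in range(pool):
        stats[rng.choice(STATS)] += 1
    return stats
```

Because the pool total is fixed, no roll is strictly better than another, which fits the no-re-roll design: you choose between differently-shaped characters rather than fishing for a bigger number.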

So that’s a few examples of our philosophy of algorithmic randomization, anyway.

Randomization Vs Customization

I should also note that when it comes to “random” spells or weapons, perhaps the question was if they would have randomly-rolled stats.  That’s something that equipment in games like Diablo and Borderlands has.  To me, that’s a system that has really been done a lot elsewhere, and we’re going a different way — customization over randomization when it comes to equipment.

In AVWW, you never just find a spell or a sword lying around — you find the components you need to craft such things lying around.  That’s an important distinction, because each component has more than just a single use.  If you find a Shotgun of Scoped Awesomeness in Borderlands, but you hate shotguns, you have nothing you can do with that weapon but give it to an ally or to sell it.  Or use it despite the fact that you hate it.

When you’re finding components instead of finished goods, there is still randomization in what you’re finding, but it’s less random and more directed-by-you in terms of what your final goods are.  If you hate gatling guns, you never have to build one.  You can build swords and shields and other medieval weapons instead — and then jack them up with magic so that they rival the power of this modern weaponry you’re forgoing.  Or you can jack up the gatling gun with a fire gem, if you want to go a completely different way.

What we are not doing is having fire gems that have variable stats.  To me, that just really devalues what a fire gem even is, and ultimately makes all weapons and spells pretty similar except for their visual look.  Instead we have, for instance, longswords, broadswords, short swords, and rapiers, all of which have differing base stats, and of which you can craft different levels.  So a Level 10 Longsword would crush a Level 5 Rapier in a fight, of course, but if you prefer the stabbing action of rapiers to the slashing ability of some of the others, you can overall craft more Rapiers with varying custom modifiers and abilities as you play the game.  For a while maybe you have a rapier with a strength crest, and later it’s a fire gem, and even later after you get to the point of having two slots maybe it’s two things.

In Closing
This is a huge topic, and it’s something I’ve been meaning to write about for this game for a while — thanks to leekster for asking the question!  I didn’t cover nearly everything here, but it should give you a pretty solid idea of the general approach we’re going with.  It’s a good mix of hand-crafted, random, and customizable aspects, which we think players will find unique and rewarding.  And suitably vast, in terms of creating a pseudo-infinite world, of course!

AVWW Pre-Alpha #9 Video — Bats, Espers, Burrowers, New HUD, and Weather

It was just three days ago that I wrote the #9 preview for A Valley Without Wind, and mostly that’s still the best source for the goods on what all is new this time around.  However, the big thing we were missing last time was a video, because of a few technical difficulties with the first version of the video.

This video has been worth the wait, however, as our PR guy Erik is now doing them, and so the artistry in the video itself has jumped way upwards.  Without further ado:

What’s New?
For the most part the video shows off exactly what I was telling you about in that #9 preview (linking again to it for those who skipped down), but there are a few things that have actually changed in the last three days.  So we’ve not only put up yet more new screenshots, but I have a few new things to add to our prior list:

Revamped HUD
Perhaps the most immediately noticeable new thing is the new visual look for the game’s HUD and GUI in general.  Gone are the AI War-like dark buttons and such, and in is a new, higher-quality fantasy-looking style based on the Necromancer GUI for Unity.

It makes a huge difference in the feeling of polish for the game, from the loading screens to the main menu on down to the actual in-game HUD itself.  Note that the character select screen hasn’t been fully updated yet, so it looks a bit off still.  I’m particularly fond of how much nicer the minimap looks, along with the ability bar slots at the bottom of the screen.

Lots More Sound Work
As always, Pablo is hard at work on the sound and music for the game, but the last week or so he’s been working on sound effects in particular.  Mostly we don’t include those in videos of this sort, but towards the end of the video you can hear one of the wind sound effects, which is pretty cool.

New Ground Graphics
One thing that is shown in some parts of the video, but not others, is the new ground graphics that are now in use.  I had only managed to update some of them before Erik took the video, but you can see the difference in the outdoor grasslands areas, in the small town areas, and in the lava area.

We previously had maybe 5 ground layers, but now we have a whopping 29 of them.  There are five different kinds of lava alone, five different kinds of full snow, two different kinds of thawing snow, new pine needles and rocky grounds, and so on.

Between this and the HUD, the difference in the latest versions is pretty dramatic when you’re actually playing.  You’ll be seeing more of these in future videos of course, but this week’s has a first taste.  The big change is that these grounds are higher quality and more interesting, which really brings the scenes together better.

Revamped Damage And Melee Models
Now when players or monsters take damage from enemies, they flash red for a second or so, and are invincible during that brief window.  This prevents a lot of problems, such as enemies swarming players and insta-killing them, or players overkilling enemies with area damage that was “cooking” the enemies over time rather than just hitting them with one damaging blast.

In the video the only real evidence of this is the occasional flashing red as the bats hit the characters or the character hits the enemies, but in actual gameplay this feels much better for close combat.  In general, Keith actually redid the entire melee model, as the old one felt a bit clunky, and that was part of the reason I hadn’t wanted to show it yet.  Now swords are actually a worthwhile thing and a viable way to take out espers or whatever else.  I still have work to do on the visual effect for the sword before we show it, but gameplay-wise it’s now ready to go, which is a big step.
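For the curious, the hit-flash idea boils down to a timestamp check.  Here’s a rough sketch in Python (purely illustrative, not our actual Unity code; the half-second window length is an assumed value):

```python
# Sketch of the hit-flash + invincibility window (illustrative Python,
# not the game's actual code; the half-second duration is an assumption).
class Entity:
    INVINCIBLE_SECONDS = 0.5  # assumed length of the protection window

    def __init__(self, health):
        self.health = health
        self.invincible_until = 0.0  # game-time when vulnerability resumes

    def take_damage(self, amount, now):
        """Apply damage unless still inside the invincibility window."""
        if now < self.invincible_until:
            return False  # hit ignored: no swarm insta-kills or area "cooking"
        self.health -= amount
        self.invincible_until = now + self.INVINCIBLE_SECONDS
        return True

    def is_flashing(self, now):
        # The render layer tints the sprite red while this is true.
        return now < self.invincible_until
```

The nice part is that the red flash and the damage protection come from the same timer, so the visual feedback always matches the mechanic.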

New Weapon: Gatling Gun
For our first gun, I wanted to pick a particularly challenging case to inflict on Keith, so the gatling gun was it.  This required a ton of new gameplay subsystems that we’ll be able to use for various other weapons in the future, so it was a nice test case (despite some things that are unique to the gatling gun, of course).

The gatling gun has a spinup time while you hold down its fire button, then starts spraying bullets in a line, going faster and faster the longer you hold it down, until it overheats and has to go through a lengthy cooldown.  Or you can let go before it overheats, and the cooldown is correspondingly less.

The gatling gun also makes it so that you can’t turn, but you can still move around — so you wind up spraying bullets in the direction you were facing when you started using it, which can be phenomenally useful against, say, swarms of bats.  Even better than the energy lance.  You can see the ability icon for this next to the sword icon throughout the video, but I still have some visual work to do on this before we show the gun proper.
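To make the spinup/overheat behavior concrete, here’s a hedged sketch of that state machine in Python (illustrative only; all the rates are made-up stand-ins, not our real tuning values):

```python
# Hedged sketch of the gatling gun's spinup/overheat loop (illustrative
# Python; all rates are made-up stand-ins, not real tuning values).
class GatlingGun:
    MAX_HEAT = 100.0
    HEAT_PER_SECOND = 25.0  # heat gained while the trigger is held
    COOL_PER_SECOND = 50.0  # heat lost while cooling down

    def __init__(self):
        self.heat = 0.0
        self.overheated = False

    def update(self, trigger_held, dt):
        """Advance one frame; returns the current fire rate (0..1)."""
        if self.overheated:
            # Forced cooldown: no firing until heat drains to zero.
            self.heat -= self.COOL_PER_SECOND * dt
            if self.heat <= 0.0:
                self.heat, self.overheated = 0.0, False
            return 0.0
        if trigger_held:
            self.heat += self.HEAT_PER_SECOND * dt
            if self.heat >= self.MAX_HEAT:
                self.heat, self.overheated = self.MAX_HEAT, True
                return 0.0
            return self.heat / self.MAX_HEAT  # fire rate ramps with spinup
        # Released early: heat drains, so the next cooldown is shorter.
        self.heat = max(0.0, self.heat - self.COOL_PER_SECOND * dt)
        return 0.0
```

Letting go of the trigger early just lets the heat drain from wherever it is, which is why the cooldown is correspondingly shorter.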

Until Next Time!
There were actually other internal things we did in the last three days, like the addition of tilesets for outdoor areas and some first work on some very cool crafting stuff, but I’ll wait to share those things until they’re a bit further along.  Suffice it to say, we’re very pleased with how things are coming along!

AVWW #9 Preview: Characters, Monsters, and Weather (Video Coming Later)

We’re transitioning the video creation duties from myself to Erik, and in the process of that we’re having a few technical difficulties that are requiring some updates to his machine and so forth.  So that’s delayed the video for our progress report #9, but I wanted to go ahead and do a written update (and we have new screenshots) in the meantime.

Ideally we’ll have the video ready sometime later this week, but no promises as yet.  So what’s new?  Let’s hit some of the cool new technical stuff — skip down if you’re not into that sort of thing.

Sprite Dictionaries
Previously, the game used individual images in sequence for animations, instead of sprite dictionaries (with texture offsets into a single image, in other words).  This was consistent with AI War and Tidalis, and it had the advantage of being easy to update and easy to code against (and with SlimDX’s wrapper of ISprite3DX, it was the only way to do sprites).

However, it has two disadvantages: first, it is slower to load off disk; and second, it is more wasteful of VRAM and RAM, and thus slightly hurts performance if you have a lot of animations on different frames all onscreen at the same time.  Now we’re using sprite dictionaries, which I’m creating using a handy GDI+ helper program that I’ve been working on to let me manipulate AVWW-specific PNGs in various formats.
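In case “sprite dictionary” is an unfamiliar term: it’s just one big atlas image plus a lookup table of per-frame rectangles, converted to normalized texture offsets at draw time.  A minimal sketch (illustrative Python; the frame names and sizes here are made up):

```python
# Minimal sketch of a sprite dictionary: one atlas image plus a table of
# per-frame pixel rectangles (names and sizes here are made up).
ATLAS_W, ATLAS_H = 1024, 1024

# frame name -> (x, y, width, height) in atlas pixels
frames = {
    "darrell_run_0": (0,   0, 128, 128),
    "darrell_run_1": (128, 0, 128, 128),
}

def uv_rect(name):
    """Return (u0, v0, u1, v1) normalized texture offsets for a frame."""
    x, y, w, h = frames[name]
    return (x / ATLAS_W, y / ATLAS_H, (x + w) / ATLAS_W, (y + h) / ATLAS_H)
```

One disk read and one texture upload then covers every frame, which is where the loading-time and memory wins come from.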

Proper Character Animation Offsets
This is another Big Deal thing that I did with the GDI+ helper program I mentioned.  The problem is a bit tough to explain, but bear with me.  I’m rendering characters such as Darrell in Poser, using three different cameras to do so: one for the front, one for the back, and one for the side (which gets mirrored in-game).

Each camera is at an arbitrary distance from the character, but showing the whole character, and the characters are rendered very large, at 400+ pixels instead of 128px like they are shown in-game.  Then I have to shrink each frame down to 128px in another program, and then I’ve got my animation frames.  In the past, I had been doing a simple Trim operation in a Photoshop script, and then scaling them down to 128px by height.

Problem?  That makes every frame of the animation exactly centered, and exactly 128px high from the bottom of the character’s feet to the top of their head.  But your feet rise and fall as you run, and your head bobs up and down.  For that matter, your weight shifts from left to right, so the overall sprite is not perfectly centered in any given frame.

That’s what should have been happening, and what I was exporting from Poser, but my Photoshop script was normalizing it all out, so that the animations looked subtly wrong in-game.  Using my GDI+ program, I’ve now got an automated way of analyzing an entire batch of frames from one camera for one character, and using the resulting data to fit the character in a 128px square, roughly centered, with the proper head bobbing, feet rising, and weight shifting.  The difference is dramatic, and I look forward to showing it to you when we get the video out.
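The core trick is simple once you see it: instead of trimming each frame to its own bounds (which destroys the bobbing and weight shift), you compute one bounding box covering every frame in the batch, and crop them all with that shared box.  A sketch (illustrative Python, not the actual GDI+ tool):

```python
# Sketch of batch-normalizing animation frames (illustrative Python, not
# the actual GDI+ tool): find ONE bounding box covering every frame, so
# cropping with it preserves head-bob, foot rise, and weight shift.
def shared_bbox(frame_bboxes):
    """frame_bboxes: per-frame (left, top, right, bottom) content bounds."""
    left   = min(b[0] for b in frame_bboxes)
    top    = min(b[1] for b in frame_bboxes)
    right  = max(b[2] for b in frame_bboxes)
    bottom = max(b[3] for b in frame_bboxes)
    return (left, top, right, bottom)
```

Scaling that one shared box down to 128px then carries each frame’s relative offsets through automatically.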

More Character Animation Frames, And Animated Shadows
Previously we only had 12 frames per side on each character, which was still a total of 39 frames per character (including the three standing poses).  And we only had one static shadow image.  Why?  Because it took long enough to export 39 frames from Poser, and because I was worried about RAM usage and disk loading times in AI War, when all those images were separate.

Adding the capability to have sprite dictionaries also let me feel comfortable upping it from 12 to 30 frames per side per character (so 93 frames in all per character), as well as adding animated and directional shadows (so another 93 frames for the shadows, too).

The result is that not only do the characters now look correct (see above), they also look wonderfully fluid in their movement, as do their shadows.  At a full 60fps or higher, it really looks particularly nice.  I also figured out that I can just render a “movie” in Poser to a series of PNGs, which saves me from (d’oh) exporting each frame one by one to the Queue Manager.  Poser is new to me, but I’m getting a much better handle on it lately now that I’m really sitting down with it more.

In-Game Shadow Skewing
Know how the shadows in the game come out of the back of things and skew off to the upper left?  Previously, that was just a flat prerendered image.  That had two major disadvantages: first, it took up an absolutely enormous amount of image space, and thus used way too much RAM.  Second, because of the way I was handling it via yet another automated Photoshop script, it didn’t always look as good as it could (big and small objects tended to fade differently over distance, for example).

Now I’ve re-rendered all the shadow images in the game using the same Photoshop script of mine, but not bent to the side.  The AVWW engine now has the capability to skew sprites, which is new to our engine, and it uses that for the shadows to get an effect that is very similar to what we had before, but way more friendly to RAM, nicer looking, and compatible with even older cards that don’t support textures larger than 1024 in size (no guarantees on performance on a card that old, but it will at least work).

The other cool thing I did with these new shadow images was to shrink them all by 60% in their base images, so that they fit inside the same size image as their original source — so if Darrell is 128px square, his shadow will fit inside that now, too.  Previously, his shadow was in a 512px square, which is why there was no way I was going to try animating that — too much RAM!  The game scales the shadows back up so that they look normal, and there’s no quality loss at all, because… it’s a shadow!  It’s supposed to look diffuse and fuzzy at the edges.  The fuzzier the better, actually, given the style of shadows we’re using here.
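The skew itself is just a shear on the sprite’s quad: each vertex’s x gets pushed over by an amount proportional to its height, so the base of the shadow stays anchored while the top leans off to the side.  A sketch (illustrative Python; the shear factor is an arbitrary example, not our actual value):

```python
# Sketch of the shadow shear (illustrative Python; the shear factor is an
# arbitrary example): each vertex's x shifts in proportion to its height,
# so the quad's base stays anchored while its top leans to the side.
def skew_quad(verts, shear=-0.75):
    """verts: list of (x, y) vertices with y = 0 at the sprite's base."""
    return [(x + shear * y, y) for (x, y) in verts]
```

Since the source image is now upright and compact, one small texture plus this per-vertex shear replaces the huge prerendered slanted image.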

New Character: Dawn
So we finally have a new character in addition to Darrell and the Neutral Skelebot!  She’s the first woman in the game, but there will be tons more character sprites for players to choose from, and to encounter as NPCs.

My private goal from a while back had been 60 such sprites, but I was iffy on whether I could hit that, because Darrell took me about 6 hours to create (when I was still learning so much about Poser).  Now, with all the various process improvements I’ve made in the last two weeks, and the automation tools I’ve added via my GDI+ helper program, I estimate that each character will only take about 30 minutes from start to finish.  So 60 character sprites is looking quite feasible, and maybe I can even exceed that; that would be nice!

Re-Rendered Characters
Because of the need to have the full 30 frames on all the other characters and monsters, all of the existing ones have been re-rendered.  I also improved their visual look quite a bit, particularly for the skelebots.  Now they have one arm and leg larger than the other, for instance, so that you can actually track them visually while they are moving.

Before, it looked like their animation was half its actual length, because having the left or right leg forward looked far too much the same!  Now they look a bit more stylized, and their animation is much higher quality in general; you’ll like the video.

Lots Of New Weather
The game now has a lot of different kinds of weather.  In non-windstorm times, we have: sunny (well, we always had that), light snow (in the snow areas, mainly), light rain (the junkyards are a great place to find that, but other places have a small chance of it too), and blowing sand (in the desert, but not always).

During windstorms (every four tiles as you move, see the counter in the upper right of the world map), you get the following kinds of weather: snowstorm (in any of the snow areas), rainstorm (in most sunny or light rain areas), sandstorm (always the desert), and firestorm (always the lava flats).

Windstorms are such a huge part of the game, but this is the first time we’ve had an actual visual component for them.  Plus, just having calmer weather at other random times is a favorite thing of mine, anyway.  We obviously didn’t have any weather in space in AI War, but it was something we did a lot of in Tidalis.

Main Menu Visual Improvements
Until now, we’ve only had a basic main menu that was just a rip from AI War with slightly different graphics.  That was always meant to be temporary, but it’s surprising how much more polished the game suddenly feels with a proper main menu (including the story scroll).  That scrolling story from the main menu is now included on the main AVWW page, by the way.

The main menu shows a randomly-created chunk of the world as its background, so it’s pretty cool to see the variety right in the main menu every time you load up the game.

Related to this, we also now have the ability to create and load multiple worlds via the interface, which is quite nice.  There’s no limit on the number of worlds you can create, but it will show 10 per page.

New Objects, New Hands On Deck
We’ve also got a ton of new objects in the game, although most of them aren’t being seeded in yet.  I think 9 miscellaneous objects are in, then we have 9 flower sprites of various sorts, and we have something like 33 new vehicles, including some futuristic ones from the ice age time period.  All broken, of course.

Erik, our PR guy, is also now helping out with some of the actual game development work, which helps me in a major way: I can do more art faster, and he then “wires it up” with proper collision boxes and other metadata through a handy in-game tool we created for that purpose.  That’s been a big help for our creation process, although we’re just getting started with it.

Six New Enemies!
We are finally getting into adding more enemies, and we added a ton of them: bat, fire bat, ice bat, sniper skelebot, esper, and desert burrower.

Bat, Fire Bat, and Ice Bat
These all have the same basic movement characteristics: they fly, they swoop and dodge, and they tend to flock about you in swarms.  Fighting them is a completely different experience from fighting the skelebots.

The regular bats are bad enough, but the fire bats set you on fire for about five seconds (which does damage over time — and stacks from each bat that hits you), and the ice bats freeze you for about five seconds (which slows you down slightly — and again stacks from each bat that hits you, which can all but immobilize you, though you can still fight).

An Aside: Region Level Gating
The bats also bring up the fact that we have level gating for various “population patterns” of enemies.  Each chunk that is generated is assigned a population pattern which includes one or more enemy types in various concentrations, and in various numbers relative to the size of the chunk.  Some of these are demonstrably harder to deal with, no matter what your level is versus the enemies.

So, the game introduces things to you gradually — everything, really.  The harder population patterns, the more complex crafting materials, and so on aren’t available in the lower-level regions.  Every few levels you go up, more and more stuff starts appearing around you.  Of course, you can strike out into higher-level regions while you are still low-level; it’s just pretty hard.

Skelebot Sniper
The regular skelebot is fast and a melee fighter, as anyone following this game knows (tired of skelebots much?  Not more than me), but the sniper variant is slower and doesn’t chase you at all.  Instead of you kiting it, it kites you, intentionally keeping its distance and firing fireballs at you.

Sure, it’s just a recolor, but the behavior is completely different and running into yellow and red skelebots at once is pretty interesting.  In general, my goal with this game is to have a ton of variety by having a lot of base concepts (bats, skelebots, etc) as well as a good number of variants of each thing.  People who are familiar with AI War and how crazy huge it is will know just what I mean.

Esper
The esper is a magical being that hardly moves at all, and which fires quite strong lightning attacks at you.  Your own magic attacks are extremely weak against it, so you’ll need to resort to melee weapons, firearms, or other physical types of attacks to take these out.

In motion, these look particularly cool.  Both these and the fire bats also act as light sources, which makes battles with them in the dark forest particularly exciting-looking!

Desert Burrower
These were just added today, so we don’t have any screenshots of them yet, but they are nearly-invincible enemies that live only in the desert and which roam around at high speeds underground.  They don’t chase you at all, but if you come too close to them they will strike out at you for substantial damage.  The desert really is a dangerous place!

Magic is ineffective against these as well, but you can strike at them with melee attacks and kill them, if you dare to get close enough.

New Name Dictionaries!

We now have new sets of names for Darrell, Dawn, and the Neutral Skelebot.  There are literally a few million possible combinations of first and last names for each.

And that’s only with three characters; by the time we have 60-some sprites, we’ll have north of 50 million possible unique character names, I think — that’s enough that you’ll likely never have two characters named the same thing for as long as you play!

Lots More Coming Up!

We’re in progress on melee attacks, on a new and awesome-looking HUD graphics set (based on the Necromancer GUI skin for Unity), and we’ve got some very cool things up our sleeves for ground-level regions and underground regions.  After that we’ll be hitting a lot more crafting materials, weapons, etc, and then I’ll finally be circling back around to more interiors.

It’s exciting times — things really are flying now.  Keith has been sick and busy with his other job, which has slowed him down, and all the work with redoing all the shadows and all the characters and enemies in the game really slowed me down on the new content, but even so this has been a huge number of leaps forward in the last two weeks.  I also finally sat down and learned how to do rigging in Poser, and the bat animation (only part of which is shown above) is the result.  I’m quite excited by the possibilities that this simple new skill opens up!

Anyway, stay tuned for our #9 video hopefully later this week or else definitely next week if we run late.  Enjoy!

A Valley Without Wind Pre-Alpha #8 — New Lighting Test

This video is just a simple lighting test showing off our new way of handling light in AVWW. The older method, using the z buffer, is still available for lower-end graphics cards and computers, but this new model looks way better, and on most computers won’t use that much more CPU/GPU!

What We Were Doing: The Z Buffer
The problem with the Z Buffer method is that it winds up with very pixelated edges.  That buffer doesn’t support anything except binary write / don’t-write data, and that’s not super conducive to high quality lighting.  The plus side is that it’s super compatible with older graphics cards, and it uses hardly any GPU power.

The #7 video was using the z buffer, but was also using a trick for the “eyesight” part of the view to make it look like a gradient when the z buffer doesn’t really support that.  The trouble there is that you can’t do that with anything that overlaps in the z buffer; so I could do that for eyesight, but not for lights.

What We Tried Before The Z Buffer
Why not just use the normal 3D lighting model that most 3D games use?  I tried it, but the problem is that this is a 2D game rendered using 3D quads.  That just didn’t work out: pixel lighting is incredibly expensive on the GPU and not supported by every card, while vertex lighting is incredibly coarse and looks really bad.

To make matters worse, the way that we draw things in Unity 3D is by using their Graphics.DrawMeshNow method.  Supposedly that supports lighting, just not certain kinds of lighting, but in my experience even vertex lighting was fraught with errors.  Depending on the materials in use, I’d get random aspects of the lighting shutting on and off in various parts of the scene.

I tried using Graphics.DrawMesh instead, but that in turn doesn’t support things that I need to do with uv animations and such for textures.  As I discovered with Tidalis and AI War, Graphics.DrawMeshNow is basically the only possible solution that has any substantial speed for the number of sprites that we need to draw, while also supporting all the functionality we need in terms of fancy texture mappings.

So that meant that a traditional lighting model was basically out.

The New Method
The new method is actually conceptually simpler than any of the others, and doesn’t really do anything new technology-wise.  I had thought of doing this way back, but never thought it would look very good or perform very well.  After the #7 video, and some prompting from long-time Arcen community member eRe4s3r, I decided to at least give it a shot.

What I’m actually doing for perfect darkness is rendering an array of big black circles with fading-to-transparent radial gradient edges.  These circles are about 256 pixels in outer diameter, but their pure-black inner diameter is about 64 pixels.  Each one is rendered into a screen-aligned grid with 64 pixels per grid tile.

Talk about an inefficient way to draw blackness!  But the cool thing happens when you start “carving out” blackness where there is a light source.  Every frame, each light source writes light into that big screen-aligned array for any tile it reaches (in a framerate-independent manner, of course).  Also every frame, at about half that rate, each tile loses light.

The longer the light stays in one location, the closer the black circles on all the nearby tiles get to perfectly invisible, until they disappear and the area seems perfectly bright.  Since the images drawn in each tile are so much larger than they need to be, their gradient edges “hang over” into the nearby tiles.  I also have each image pulsing by about 10% on a 4x-speed sine wave, which makes the edge of the darkness seem to pulse in a living, threatening manner.

The other effect of all this is that when you move a light source, the light it leaves behind takes a moment to fade — about half the time it took to create the light in the first place.  This means that things like the fireball, which would flicker and sputter in a seizure-inducing fashion if there were no slowdown, instead look graceful and almost liquid.  This is darkness with substance, which is a pretty neat stylistic effect, and in keeping with the visual style and themes of the game.
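The whole scheme can be summarized in a few lines.  Here’s a rough sketch of the tile grid in Python (illustrative only; the gain/decay rates are made-up stand-ins for the real framerate-independent values):

```python
# Rough sketch of the darkness grid (illustrative Python; the gain/decay
# rates are made-up stand-ins for the real framerate-independent values).
GAIN = 2.0   # light a source adds per second to tiles it covers
DECAY = 1.0  # light every tile loses per second (about half the gain)

def update_tiles(light, lit_tiles, dt):
    """light: dict tile -> 0..1 brightness; lit_tiles: tiles a source reaches."""
    for tile in lit_tiles:
        light[tile] = min(1.0, light.get(tile, 0.0) + GAIN * dt)
    for tile in list(light):
        light[tile] = max(0.0, light[tile] - DECAY * dt)
    return light

def darkness_alpha(light, tile):
    # Opacity of the big black circle drawn on this tile.
    return 1.0 - light.get(tile, 0.0)
```

In-game, something like `darkness_alpha` would drive the opacity of the black circle drawn on each tile, which is what produces the gradual carve-in and fade-out.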

How Does This Integrate Into The Game?
This new effect adds a lot of extra draw calls to the game, so on some very old GPUs it might stutter some.  Those would be the same computers that need to turn shadows off in the game, and might even need to turn foliage down.  To make sure those sorts of machines can still play the game, we now have a settings option where the lighting model can be chosen.

The z buffer method is functionally equivalent to this new one; it just looks worse, but it’s more efficient.  People with astronomically large screen resolutions may also prefer it, as the cost of this new lighting model scales linearly with the number of pixels in the screen resolution.

On our test machines, we haven’t found this new approach to be very heavy at all, though, so we suspect most people will leave it turned on by default, as they will shadows, foliage, and so on.  If you can comfortably run AI War, you can comfortably run this, too.  But for netbooks and other smaller computers that can run Tidalis but not AI War, we also wanted to have options in AVWW.  So far it seems we do!