This is on the beta branch, as it’s a freaking huge number of changes. The more testers the better, please! Full notes here.
How To Play A Beta Branch Build
To play this on Steam, please go under betas and choose current_beta from the drop-down list. We would really appreciate some testers on this so that we can get back out of beta status as quickly as possible! There is not currently a way to get the beta versions on GOG, but we won’t be in that status for more than a week, knock on wood.
We want to make sure we didn’t break anything with all the substantial changes in here, but we fully expect some savegames to throw cosmetic errors at the very minimum. Please report and upload those here: https://bugtracker.arcengames.com/
What’s New?
- The biggest thing for me personally is the new data compaction stuff, which I have been pouring hours into in preparation for multiplayer. The last thing I want is for multiplayer to be functional, but immediately laggy as soon as we have the desync repair stuff going. I needed to start with efficient data storage and transmittal so that there wouldn’t be a point where I need to disappear for a week to redo all that after we’re into multiplayer’s alpha.
- I’m not entirely done with my data compaction shenanigans, but I’m very excited about how it has turned out so far. I do expect it to result in a number of user-facing errors, though, hence the beta, which I really hope you’ll partake in.
- Badger fixed a number of bugs, including several achievements that were not triggering. Also fixed: all the AI units have had the equivalent of mark 1 shields/hull for the last two weeks?!
- Meanwhile, Badger also implemented another major fireteams upgrade, this one intended for the upcoming DLC2 but backported to the base game.
- The first use of this fireteams code is to let the AI really fight factions other than you with giant fists, without killing you in the process. Aka, let it fight the Marauders or the Nanocaust using REALLY BIG AND SCARY GUNS if those two factions are taking over the galaxy… but don’t have that be game-ending for you. You’re not the problem, after all, from the AI’s point of view.
- The second use of this is to let us have certain sub-groups of the AI faction, most notably Hunters right now, which have a sub-subgroup that just focuses on one objective after you harass them. The example in place right now is Major Data Centers (MDCs). Those always triggered an Exogalactic Wave, previously, which was boring — more ships come in increasing waves, and you survive them or don’t. It felt kind of uninspired. NOW, instead, the hunters get a major buff to a certain sub-team that explicitly hunts the MDC and its protectors… and if it wins against that, eventually, then they just leave you alone — “job’s done, bye!”
- The way that the hunters deal with factions allied to you is a lot more interesting now, too, rather than your allies causing rapid AIP gains by killing all the warp gates and aggroing the AI in that way.
- These are some major strides towards multiplayer being a smooth and fun experience as we get into that, and towards the DLC2 that will come out around the time multiplayer fully releases (multiplayer will be free). Please note that none of the DLC2 stuff is slowing down multiplayer at all: Badger is not working on multiplayer at all, while I’m entirely focused on it and related things for now.
- Oh, also, for the sake of modders we’ve documented some new things that will help with making sure mods are multiplayer-compatible.
- Note that if a mod you like turns out NOT to be multiplayer-compatible, the game should still be playable. You’ll just have a ton more desync-repair messages going back and forth, related to the mod. This is an excellent example of why I want those desync-repair messages to be absolutely as efficient as they possibly can be, which means representing all our data in as tiny a format as possible.
More to come soon!
But here’s some further reading on what we’re doing in detail:
Why Do We Care About Compaction?
I explain what I mean by this below, and go into detailed benchmarks. The release notes have a lot of other detailed info. But a TLDR of why this matters is certainly a good place to start:
- Smaller savegames are easier to transmit across the network when you’re initially synchronizing your game from a host to clients.
- Doing it via our compaction method, versus compression, uses FAR fewer CPU cycles, and thus makes the transfer of even small, frequent bits of data run more smoothly.
- Doing it via our compaction method makes even small bits of data… smaller. Compression often requires a lot of data before it makes much difference.
- Given that we are going to have frequent small commands being sent back and forth just to have the game run at all, and then we ALSO are going to have the desync-repair data going back and forth that is still small (but substantially larger at the scale of network messages), this is pretty critical.
- So the TLDR of the TLDR is that we use less bandwidth and less CPU to do the same thing, and thus your game will run way more smoothly in multiplayer.
- And it has the side benefit of making savegames a lot smaller, and also faster to save.
Data Compaction vs Compression
What the heck is data “compaction?” Well, in the most direct sense here, it’s a term I made up. The way I’m defining the term, for purposes of what we’re doing here, is “making data smaller at the level of small objects, as we put it into a binary stream.”
Wait, what? Basically… it’s something we do as we go along, and it works on very small pieces of information. A single integer. One ship and its data. That sort of thing.
Compression, by contrast, requires a lot of processing AFTER you put in your data to some sort of format. So to compress a bunch of data in AI War Classic, for instance, we write roughly 30MB of data for a savegame into a temporary buffer, and then we run GZIP compression on that, which turns it into maybe 1MB that we can store on the disk or send across the network.
That 30:1 ratio is legitimately what AIWC was seeing, on average, by the way. The downside is that compression only really works well for very large amounts of data, and it also requires a lot of processing every time it wants to send or save. You’ll notice that AIWC hitches noticeably every time it saves the game. AI War 2, by contrast, doesn’t use compression and thus doesn’t have that sort of hitching.
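As a rough illustration of that tradeoff (a generic Python/gzip sketch, not Arcen's actual pipeline), compression barely helps, and can even hurt, on tiny payloads, while shrinking a large repetitive buffer dramatically:

```python
import gzip

# Hypothetical payloads, for illustration only.
small = b"ship:42"              # a tiny, network-message-sized payload
large = b"ship:42;" * 100_000   # a large, repetitive, savegame-sized buffer

small_z = gzip.compress(small)
large_z = gzip.compress(large)

# The tiny payload comes out LARGER, thanks to gzip's fixed header and
# stream overhead, while the big buffer shrinks to a small fraction of
# its original size.
print(len(small), "->", len(small_z))
print(len(large), "->", len(large_z))
```

This is why per-value compaction, which has no minimum payload to amortize against, is a better fit for frequent small network messages.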
AI War 2 is a lot smarter about how it represents its data to start with (using binary formats directly instead of a unicode-based format), and uses clever tricks to also know when not to send data of certain sorts (if I don’t have X properties, then don’t bother tracking them). So this realistically converts what would have been 30MB in AI War Classic into something that might be… 5 MB in AI War 2.
Going beyond that, though, requires a lot more work, because we actually have far more data here than in AI War Classic: enough that despite our better formats and such, we’d be back up around 10 MB or so for savegames.
Several years ago, perhaps four now, Keith LaMothe came up with a great new concept he called “buzzsaw binary,” and we posted benchmarks on how incredibly efficient it was at storing data. After working out some kinks with it, we’ve been using that for the last few years, and it saves blazing fast, as you’ve probably noticed.
Basically, this takes each data point that we want to store in binary and writes it directly into a bitstream (which C# does not officially have — they only have bytestreams). As it writes to our custom bitstream, it analyzes each piece of data and figures out if it can make it any smaller than would normally happen for that kind of data. The types of data that Keith built support for were string, 32bit integer, and 64bit integer. The last of those also gave us support for our own FInt fixed-integer format, and by extension also some limited float support (though we avoid float as much as possible).
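The bit-level writer underneath all of this can be sketched roughly like so. This is a minimal Python illustration of the general idea, with hypothetical names, and not Arcen's actual C# implementation:

```python
class BitWriter:
    """Accumulates individual bits into a bytearray.

    Mainstream languages only ship byte-oriented streams, so a bit-level
    writer has to be built on top of one, as sketched here.
    """

    def __init__(self):
        self.bytes = bytearray()
        self.bit_pos = 0  # bits already written into the last byte (0-7)

    def write_bit(self, bit):
        # Start a fresh byte whenever the previous one is full.
        if self.bit_pos == 0:
            self.bytes.append(0)
        if bit:
            self.bytes[-1] |= 1 << (7 - self.bit_pos)
        self.bit_pos = (self.bit_pos + 1) % 8

    def write_bits(self, value, count):
        # Write the low `count` bits of `value`, most significant first.
        for shift in range(count - 1, -1, -1):
            self.write_bit((value >> shift) & 1)

    def total_bits(self):
        if not self.bytes:
            return 0
        return (len(self.bytes) - 1) * 8 + (self.bit_pos or 8)


w = BitWriter()
w.write_bits(0b101, 3)
print(w.total_bits())  # 3 bits used, stored in one partially-filled byte
```

Values of mixed sizes can then be packed back to back with no byte-alignment padding between them, which is where the per-value savings come from.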
The plan was always to go further, and he left notes to himself to do so directly in the code, actually. With this format, storing a byte was actually highly inefficient, taking on average 10 bits rather than 8. But we never even tried to directly store bytes, so that was kind of irrelevant.
In most programming languages, integer numbers can be stored as 8, 16, 32, or 64 bits, with or without the ability to hold negative numbers (signed versus unsigned). In the prior edition of buzzsaw binary, we assumed all numbers were signed and could be up to 64 bits, and wrote them out in a manner that still often produced results of 16 bits or less for a 32bit number. That was a savings of 50% or more, in lots of cases.
A big part of the savings is what to do with “default case” numbers, such as those that are 0. How many bits should you take to store that zero? By default, a 32bit number will take 32 bits to say 0, and a 64bit number will take 64 bits to say 0. In buzzsaw binary, regardless of the bit level of the number, it has always taken us 1 bit to store a zero.
One tricky thing is that we use a lot of -1s as defaults for various reasons, and that was something that required 10 bits to store. Ack!
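To make those costs concrete, here is one plausible shape for such a variable-length encoding, sketched in Python. This is hypothetical, not the real buzzsaw format (its exact bit costs differ from the ones quoted above), but it shows how a zero can cost a single bit while a -1 costs several:

```python
def encode_signed(value):
    """Encode an integer as a string of '0'/'1' bits.

    Hypothetical scheme in the spirit of the post: zero is a single flag
    bit; anything else gets a flag bit, a sign bit, and 4-bit magnitude
    chunks, each preceded by a continuation bit.
    """
    if value == 0:
        return "0"                      # the entire encoding: one bit
    bits = "1"                          # flag: nonzero
    bits += "1" if value < 0 else "0"   # sign bit
    magnitude = abs(value)
    while True:
        chunk = magnitude & 0xF
        magnitude >>= 4
        bits += ("1" if magnitude else "0") + format(chunk, "04b")
        if not magnitude:
            return bits


print(len(encode_signed(0)))      # 1 bit
print(len(encode_signed(-1)))     # 7 bits under this sketch
print(len(encode_signed(1000)))   # 17 bits, versus 32 for a plain int32
```

The pain point the post describes falls out directly: -1 has magnitude 1, so it pays the full nonzero overhead even though it is one of the most common values in the data.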
So I went in and finally figured out Keith’s code, and added a ton of comments to make it clear what was going on. I added explicit support for:
- 8bit (aka byte) and 16bit (aka short) numbers
- And some special cases for:
- numbers of 16 or more bits that cannot be negative
- numbers of 16 or more bits that cannot be any smaller than -1
- and 8 bit numbers (bytes) that are frequently 0
Those are… strange categories, I know. But when you look at the data we use, and apply the same sort of buzzsaw binary approach, but with more hinting from the higher-level program (as well as more appropriate number formats for variables instead of int32 for everything), then you wind up with some truly amazing compaction. I’ve detailed the macro results of that here.
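The “cannot be any smaller than -1” special case can be sketched as a simple domain shift: since the value never goes below -1, adding one before encoding makes the common -1 default land on the one-bit zero fast-path. This Python sketch uses a hypothetical chunked unsigned encoding, not Arcen's actual format:

```python
def encode_unsigned(value):
    """Hypothetical unsigned variant: no sign bit needed, so every
    nonzero value is one bit cheaper than the signed encoding."""
    if value == 0:
        return "0"                      # zero fast-path: a single bit
    bits = "1"                          # flag: nonzero
    while True:
        chunk = value & 0xF
        value >>= 4
        bits += ("1" if value else "0") + format(chunk, "04b")
        if not value:
            return bits


def encode_at_least_neg1(value):
    """For values known never to go below -1: shift by one so the common
    -1 default maps to 0 and gets the one-bit encoding."""
    assert value >= -1
    return encode_unsigned(value + 1)


print(len(encode_at_least_neg1(-1)))   # 1 bit: the -1 default is now cheap
```

This kind of higher-level hinting, telling the serializer what range a field can actually take, is exactly what turns generic variable-length encoding into the averages quoted below.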
Bytes now take an average of 4.74 bits in a savegame, down from 10ish in our previous implementation. In the benchmarks I just note the percentage relative to the normal 8 bits. “32 bit numbers that cannot be less than -1” are averaging out at 2.71 bits instead of 32.
Overall file sizes for the game dropped in this build to commonly about 70% of what they previously were, and as low as 28% in one case. The actual data format change only represents a savings of 10%, sadly, but it’s still a win, even if it is a minority of the gains.
Please Do Report Any Issues!
If you run into any bugs, we’d definitely like to hear about those.
The release of this game has been going well so far, and I think that the reviews that folks have been leaving for the game have been a big help for anyone passing by who’s on the fence. For a good while we were sitting at Overwhelmingly Positive on the Recent Reviews breakdown, but there have been a lot fewer reviews lately and so that has definitely had a material negative effect. Go figure. Having a running selection of recent reviews definitely is helpful, but at least we have a pretty healthy set of long-term reviews. If you’ve been playing the game and enjoying it, we’d greatly appreciate it if you’d drop by and leave your own thoughts, too.
More to come soon. Enjoy!
Problem With The Latest Build?
If you right-click the game in Steam and choose properties, then go to the Betas tab of the window that pops up, you’ll see a variety of options. You can always choose most_recent_stable from that list to get what is essentially one-build-back. Or two builds back if the last build had a known problem, etc. Essentially it’s a way to keep yourself off the very bleeding edge of updates, if you so desire.
The Usual Reminders
Quick reminder of our new Steam Developer Page. If you follow us there, you’ll be notified about any game releases we do.
Also: Would you mind leaving a Steam review for some/any of our games? It doesn’t have to be super detailed, but if you like a game we made and want more people to find it, that’s how you make it happen. Reviews make a material difference, and like most indies, we could really use the support.
Enjoy!
Chris