Combat Engine

What is combat? What is a combat engine?

I suppose (because I’m really just guessing) a combat engine for a game is anything from a turn-based system all the way to a high-accuracy physics engine with projectiles colliding with physics bodies. With Failed State’s game world being merely a 2.5D environment where terrain is flat and the height of buildings is contrived, it makes little sense to try to perform accurate projectile physics.

In fact, there may be no point in having bullets fly around the map at all. The infantry of the original Command and Conquer series did this quite well: the infantry would do all this shooting, with one animated sprite to show the firing and another to show kicks of dirt around their target.

That and health bars which I don’t intend to use.

That being said, C&C did use projectiles for its tanks and missiles, yet even then they weren’t there to perform high-accuracy point-to-collision-mesh detection; there’s a good chance it was known in advance whether there was going to be a hit or not.

Level of accuracy

Failed State only requires a certain level of combat accuracy to be exposed to the player. Although seeing individual bullets might not be required, there is something to be said for seeing missiles with vapour trails. This video of M1A1 Abrams tanks doing live fire exercises clearly shows projectiles in motion and it would be a shame not to see some of those in the game world.

I am definitely going to try and pull some audio from that video for sound effects.

But even if there are projectiles along a 2D plane, they needn’t “hit” anything; not in a physics sense at least. The system I’m interested in making involves the idea that something shoots at something else and that there is a chance of hitting the target. You could say I want to conduct a dice roll in realtime.

Here’s a really basic example…

One soldier fires on an opponent soldier. Either the target is hit or it is not. Now, what are the chances of the hit occurring? What affects the likelihood of hitting the target?

  • The skill of the shooter
  • Dumb luck

Let’s expand upon this. The skill of the shooter alone is not enough to determine if a target will be hit. Here are things that could make the shot more difficult.

  • The target could be moving.
  • The target could be far away.
  • The target could be behind cover, partially or fully obscuring them.
  • The cover could be of various strengths. A bullet might pass through wood but not steel.


There’s a possibility of lots of shooting and not much hitting, especially considering modern combat tactics; suppressive fire (and modern weapons) uses a lot of ammo. What will make Failed State interesting is if these modern combat strategies can be realised. Suppressive machine gun fire or artillery can make a target ineffective: they can’t move, shoot back or even see their opponent approaching their position. Different weapons would create differing amounts of suppression. Implementing such a system would bring some strategic elements to the game because when suppressed, a unit’s awareness of their surroundings is compromised. They may also take time to become effective again as the shock of being shelled diminishes. A unit with high morale and/or experience would recover quicker than a ‘green’ unit.

By taking the above into account, a rudimentary combat system can be constructed.

Let’s say there is a valid target.

  • Every 5 seconds the soldier takes a shot at the target
  • Typically this soldier has a hit ratio based on some non-linear relationship with distance (and skill), i.e. when really close they’re more likely to hit but when further away, they’re more likely to miss.
  • Assuming this curve exists, let’s say that the soldier has a 50% chance of hitting
  • …but the target is behind cover, reducing the chance of hitting to 20%
  • Also, the soldier is in turn under fire, reducing their effectiveness and thus reducing their chance of hitting to 5%.
  • So the soldier mathematically needs about 20 shots to hit the target which would equate to about 100 seconds of shooting before their target is hit.
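The arithmetic above can be sketched as a tiny dice-roll function. This is just an illustrative Python sketch, not engine code, and the idea that the modifiers combine multiplicatively (50% base, times 0.4 for cover, times 0.25 for being suppressed) is my assumption:

```python
import random

def hit_chance(base, cover_factor, suppression_factor):
    """Combine a base hit chance with multiplicative modifiers."""
    return base * cover_factor * suppression_factor

def expected_shots(p):
    """Expected number of independent tries until the first hit."""
    return 1.0 / p

def take_shot(p, rng=random.random):
    """The real-time 'dice roll': True if this 5-second shot hits."""
    return rng() < p

# The worked example from the text: 50% base, cover cuts it to 20%,
# being suppressed cuts it again to 5%.
p = hit_chance(0.5, 0.4, 0.25)   # 0.5 * 0.4 * 0.25 = 0.05
shots = expected_shots(p)        # ~20 shots on average
seconds = shots * 5              # ~100 seconds at one shot per 5 s
```

The `expected_shots` figure is just the mean of a geometric distribution; actual fights would swing wildly around it, which is probably desirable.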

But then there are other considerations. What if the soldier doing the shooting is part of a squad? Surely a squad would be more efficient at eliminating a target than a lone (non sniper) soldier that is isolated?


As much as I want to avoid it, projectiles are important. They take time to reach a target. It would be a strange sight in a game to see one unit fire at another and the target to get hit immediately. It would look like there was a disruption in the time-space continuum.

When dealing with straight line or artillery projectiles, the time for a projectile to reach a target can be calculated in advance; all the visuals are just cosmetic. Yet despite there being no need for collision detection because the start and end points of the projectile are known, this does complicate the timing of the combat engine. Unit A could fire at unit B while the projectile of Unit B is already in flight.

It’s as if the game is turn based but where each turn is 0.01 seconds long.
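Since the start and end points are known at fire time, damage resolution can be scheduled as a future event rather than detected by collision, which also handles the unit-A-fires-while-unit-B’s-projectile-is-in-flight case. A hedged sketch of that idea (the class and all values here are made up for illustration):

```python
import heapq

def flight_time(distance, velocity):
    """Straight-line projectile: time to target is known at fire time."""
    return distance / velocity

class CombatClock:
    """Tiny event queue: fire now, resolve damage when the shell lands."""
    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (resolve_time, description)

    def fire(self, shooter, target, distance, velocity):
        t = self.now + flight_time(distance, velocity)
        heapq.heappush(self._events, (t, f"{shooter}->{target}"))

    def advance(self, dt):
        """Step the clock and pop every impact that has now landed."""
        self.now += dt
        landed = []
        while self._events and self._events[0][0] <= self.now:
            landed.append(heapq.heappop(self._events)[1])
        return landed
```

With a small enough `dt`, this is exactly the “turn based but each turn is 0.01 seconds long” feel: B’s return shot can land while A’s shell is still mid-flight.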

Attack step and damage resolution

Thankfully the attacking step and the resolving-of-damage step are independent of each other.

Attacking modifiers:

  • Range
  • How good a shot are the units? Are the weapons accurate?
  • What type of weapon? Small arms or a tank shell?
  • Are they currently under fire (suppressed)?

Damage resolution modifiers (NB: the projectile’s flight time, distance / velocity, must elapse before this can be resolved)

  • Cover
  • Moving (will probably affect cover)
  • Armoured? Infantry with kevlar or tanks with steel all have an effect
  • Are they bad at keeping their head down (skill/experience level)?
  • Are they being shot at from all sides? This is probably part of the suppression.

Note that suppression probably only affects a unit’s ability to see and shoot back rather than a likelihood of being hit.
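That independence can be sketched as two separate functions: one rolled at fire time using the attacking modifiers, and one rolled when the projectile arrives using the damage resolution modifiers. All of the factor values below are placeholder assumptions, not balanced numbers:

```python
def attack_roll(base_accuracy, range_penalty, suppressed, roll):
    """Step 1: at fire time, is the shot on target?"""
    p = base_accuracy * range_penalty
    if suppressed:
        p *= 0.25          # assumed suppression penalty
    return roll < p

def resolve_damage(on_target, cover, moving, armoured, roll):
    """Step 2: distance/velocity seconds later, does the hit do damage?"""
    if not on_target:
        return False
    p = 1.0
    if cover:
        p *= 0.4           # assumed cover factor
    if moving:
        p *= 0.8           # moving target is harder to damage cleanly
    if armoured:
        p *= 0.5           # kevlar/steel soak up some hits
    return roll < p
```

Keeping the two rolls separate means the second one can use whatever the target’s state is when the projectile actually arrives, not when it was fired.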

First draft complete

And with that brain dump to a blog post, all my initial thoughts for a combat engine have been collated. There’s a good chance that many of these ideas will ultimately be flawed and will have to be modified but one has to start somewhere.

Culling, normals and face winding

The internet is an amazing resource for programmers, especially when using a language or platform that has a massive community behind it. It’s almost guaranteed that someone has experienced your current problem and found a solution to it, a solution that’s shared on a forum post or mailing list somewhere. Unity3D is one such platform.

Unfortunately though, if you don’t use the correct search terms you can end up going in circles for a while and such a thing happened to me last week when trying to extrude my OSM buildings into 3D. I kept telling myself that it was not important for gameplay (and it isn’t) but I was so curious to see what it looked like that I’ve spent just over a week’s worth of free time trying to implement it.

Inevitably trying something new (or even doing something that was done ages ago but is now being reimplemented in some new manifestation) results in all sorts of problems, puzzles, frustrations, mindless hacking and misinterpretations.

Mesh extrusion should be easy, right? Just duplicate the initial mesh, offset it then create new triangles between the two layers to create the walls?


Fun with normals

It turns out that doing a ‘sandwich’ style join between two layers of the 2D mesh (one at a y offset to the other) was a bad idea because of the way shading (from lighting) is done in the graphics pipeline. If there is only one normal per vertex, the buildings would look like they have rounded edges instead of flat shaded (hard) edges. What’s required is to have three copies of a vertex in order to hold the three normals that represent each face. Consider a cube. There aren’t 8 vertices, there are really 24 vertices with 24 corresponding normals (3 per corner).

I had to break the code down into its most basic form and try to extrude just a box into a cube so that I could figure out where my code was breaking.
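The 24-vertex idea can be illustrated by building the cube face by face, duplicating each corner position once per face so every copy can carry that face’s flat normal. A language-agnostic sketch (not Unity code):

```python
def flat_shaded_cube(size=1.0):
    """Build a cube with duplicated vertices so each face gets its own
    normals: 6 faces * 4 vertices = 24 vertices, not 8."""
    s = size / 2.0
    # (face normal, four corner positions) per face
    faces = [
        (( 0,  0, -1), [(-s,-s,-s), ( s,-s,-s), ( s, s,-s), (-s, s,-s)]),
        (( 0,  0,  1), [( s,-s, s), (-s,-s, s), (-s, s, s), ( s, s, s)]),
        ((-1,  0,  0), [(-s,-s, s), (-s,-s,-s), (-s, s,-s), (-s, s, s)]),
        (( 1,  0,  0), [( s,-s,-s), ( s,-s, s), ( s, s, s), ( s, s,-s)]),
        (( 0, -1,  0), [(-s,-s, s), ( s,-s, s), ( s,-s,-s), (-s,-s,-s)]),
        (( 0,  1,  0), [(-s, s,-s), ( s, s,-s), ( s, s, s), (-s, s, s)]),
    ]
    vertices, normals = [], []
    for normal, corners in faces:
        for corner in corners:
            vertices.append(corner)   # positions repeat across faces...
            normals.append(normal)    # ...but each copy gets a flat normal
    return vertices, normals
```

Each of the 8 corner positions ends up in the list three times, once per adjacent face, which is exactly the 3-normals-per-corner situation described above.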

Fun with normals part 2

Oh, so Unity uses a left-hand coordinate system? That right-hand rule for cross products I learnt at uni (and that’s all over the internet) doesn’t actually apply here? Grrr. If I’d been more diligent I’d have noticed that the Unity3D docs clearly state the use of the left-hand rule when using Vector3.Cross, but it took a while to ‘get it’. My Vector3 normal = Vector3.Cross(b-a, d-a).normalized; should have been Vector3 normal = Vector3.Cross(d-a, b-a).normalized; (I didn’t want to multiply by -1).

With the normals facing the wrong way, the walls would look dark despite the intensity of the lighting.
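The underlying identity is that swapping the arguments of a cross product negates the result, which is why reversing the argument order fixes the normals without multiplying by -1. A quick numeric check, with made-up quad corners:

```python
def cross(u, v):
    """3D cross product (u x v)."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# Quad corners a, b, d as in the snippet above (values invented here).
a, b, d = (0, 0, 0), (1, 0, 0), (0, 0, 1)
ba = tuple(p - q for p, q in zip(b, a))
da = tuple(p - q for p, q in zip(d, a))

n1 = cross(ba, da)   # Cross(b-a, d-a)
n2 = cross(da, ba)   # Cross(d-a, b-a): the same normal, negated
```

Which of the two is “outward” then just depends on the handedness convention of the engine consuming it.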

Fun with normals and culling

Upon investigating the ‘why are the walls dark’ issue, I experimented with some Unity3D provided shaders, but oh no!

Why oh why are some of the walls of my buildings being culled out?! The normals are definitely outward facing!

I stumbled across this forum post (see, someone had a similar issue). Normals are only used for shading and colour; the triangle winding order determines which triangles are back-facing.

*sigh* I felt like I’d had that very issue before when my collision meshes didn’t work; the triangles had to be drawn with CCW winding instead of CW (or was it vice versa) for the raycasts to ‘hit’. Anyway, duh! I felt stupid for wasting an evening.

Fun with normals and culling part 2

But holy crap! It still didn’t work! Some of the buildings had walls disappearing but others were fine.

Think, think…

Oh, the OSM data has some polygons defined in CCW order and others in CW order. I had to make them use the same ordering. More searching and I found this. Yay for pseudo code. It works. Ship it.

And now for a screenshot.

Oh look, I’ve got placeholder art from Shenandoah Studio’s Battle of the Bulge; I hope they don’t mind. Their games look amazing.

Sphere Casts

There are so many instances where I write a pleading forum post asking for suggestions/solutions to a problem I have only to discover the answer soon after I post it. I’m hedging bets in a way, hoping that someone else has a solution in case I don’t eventually find one myself.

This almost happened again in regards to collision detection in Unity3D. I wanted to do something that seemed really basic: find out when two collision meshes collided (overlapped). You’d think this would be simple, right? Nah, maybe… kind of…

After the initial hunting around the documentation I tried implementing the OnCollisionEnter/Exit approach. Mmmm, didn’t work. So I hunted around the forums and found this.

Oh, so I need rigid bodies to detect a collision? Seriously? That’s a bit overkill when I’m not using any physics.

Oh, wait, it turns out I should have been using raycasts.

That I can do because I’d been using them for touch selection (mouse picking).

…but oh, wait… What’s this sphere cast?


SphereCasts are just fat raycasts, but they feel a little heavy-handed for detecting collisions. Every time a unit moves (such as a tank or infantry) I’m having to sphere cast its bounds against the 2D world to see if it hits anything. I’d rather the engine had been smart enough to tell me there was a collision rather than me having to ask. I feel like I have to hand-hold it.

In my case I’m trying to determine if a unit is inside one of the OpenStreetMap-defined areas such as a building. If I were building this system from scratch I’d use the fantastic sort and sweep algorithm described in Real-Time Collision Detection by Christer Ericson, but I have to trust that Unity is using some cleverly optimised collision detection algorithm that quickly determines collisions between the sphere cast and the collision meshes.

That’s why I’m using an engine. It took quite a while to make that algorithm work for Mobile Assault.
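For reference, the broad phase of sort and sweep is simple enough to sketch along one axis: sort the interval endpoints, then sweep through them, pairing each interval that opens with the ones currently open. This is a simplified 1D illustration of the idea from Ericson’s book, not Unity’s implementation:

```python
def sort_and_sweep(boxes):
    """Broad-phase overlap finding along one axis.
    boxes: list of (name, min_x, max_x). Returns overlapping pairs."""
    events = []
    for name, lo, hi in boxes:
        events.append((lo, 0, name))   # 0: interval opens
        events.append((hi, 1, name))   # 1: interval closes
    events.sort()
    active, pairs = set(), set()
    for _, kind, name in events:
        if kind == 0:
            # Every interval still open overlaps the one just opening.
            for other in active:
                pairs.add(tuple(sorted((name, other))))
            active.add(name)
        else:
            active.discard(name)
    return pairs
```

The full algorithm repeats this per axis and only runs the expensive narrow-phase test on pairs that overlap on every axis.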

Path Complete

Whoops, that was a tangent. After all those experiments with creating user defined paths (and posting a number of articles about it) the actual solution is conceptually quite simple. It was not until I discussed the pathfinding problem with a work colleague that I realised what Autumn Dynasty’s path creation was actually doing, especially when it came to avoiding world obstacles and impassible terrain.

The solution

When the user draws a path, the points making up the path are initially exactly as they are drawn. There is no resampling or fancy path finding going on; the unit just follows the path that the user traces with their finger. This happens until the path hits impassible terrain.

As soon as the path hits impassible terrain, the traced path turns into a line segment that goes from the last point that was in passable terrain and the current location of the touch (cursor). As the user drags their finger around, the line segment’s final endpoint follows the finger.

I’ve kept it simple and made the line-segment-mode persist until touch release at which point the two endpoints of the line segment are used by the pathfinding library to create a new path around the obstacles.
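The steps above boil down to something like this (an illustrative sketch; `passable` is a stand-in for whatever terrain query the game actually uses):

```python
def build_drawn_path(trace, passable):
    """Follow the user's trace until it hits impassible terrain; after
    that, collapse to a single segment from the last good point to the
    current touch position (the 'line-segment-mode' described above)."""
    drawn = []
    for point in trace:
        if passable(point):
            drawn.append(point)
        else:
            break
    if len(drawn) < len(trace):
        # Trace entered impassible terrain: last good point -> current touch.
        current_touch = trace[-1]
        return drawn[-1:] + [current_touch], True   # (path, segment_mode)
    return drawn, False
```

On touch release, the two endpoints of that segment are what get handed to the pathfinding library to route around the obstacle.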

z-fighting fixed

I even managed to fix the z-fighting with proper use of render queues.

Potential problem

To say this method is simple is a half-truth. In practice there is one big problem in that the resolution of the points read in by the system as the user traces a path may not be sufficient. If the touch gesture happens very quickly there is a chance that not enough points are read in and an obstacle may be bypassed. The solution is just to subdivide the line segments but I’ll deal with this if it becomes a problem later. For now it seems okay.
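The subdivision fix would look something like this, a hypothetical helper that caps the gap between sampled points so a fast flick can’t skip clean over an obstacle:

```python
import math

def subdivide(p0, p1, max_step):
    """Insert intermediate points so no step along the segment
    exceeds max_step."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    steps = max(1, math.ceil(length / max_step))
    return [(p0[0] + dx * i / steps, p0[1] + dy * i / steps)
            for i in range(steps + 1)]
```

Run each traced segment through this before the passability check and the obstacle test happens at a bounded resolution regardless of gesture speed.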

In practice

Given the high density of obstacles (e.g. buildings being obstacles to tanks) it’s actually quite difficult for the user to trace complicated, exact routes/paths. That being said, the ability to do curvy lines is new for touch screens. With a mouse controlled RTS, shift-clicking out waypoints is used for creating exact paths but in most instances players just set single point destinations for a selection of units.

If the player is going to quickly flick out a straight path, the line is going to be in line-segment-mode almost immediately and you end up with a single-click-destination mouse based RTS. The player is therefore not missing out on much.

All this has reminded me that if I’m going to make a PC version, I’m going to have to actually implement shift+click waypoint path creation.

Paths are hard

I almost never estimate correctly; it’s freak’n difficult. That being said, for a project like mine it doesn’t really matter because all progress is done linearly by just me. The only stakeholder is me and only I am suffering from blown out time frames.

I consider Failed State (as I’m currently making it) a first draft app. I’ll hurriedly make it to some kind of limited scope, ship it, then look back on it and see how it can be edited, e.g. better code integrity, better modularisation etc. For now, I don’t care too much because:

  1. I’m just learning Unity so naturally I’m still unfamiliar with best practice, and
  2. I just need to ship something. I’m not after creating some kind of architectural masterpiece. This is not medical software so bugs aren’t going to be detrimental.

This past week my expectations of progress have been dashed by path drawing. It turns out it’s not that simple. Well, conceptually it’s simple, but when trying to code it and seeing the result rendered using those naive assumptions, it quickly becomes apparent that more thought is required.

Work flow (simple version)

  • The user clicks on (touches) a unit and drags a path out from that unit.
  • The pathfinding library calculates a new path in case some impassible objects have to be avoided. If there are no obstacles, the user drawn path and the path-lib path will be the same.
  • A rendered line follows that new path
  • As the path is being ‘drawn’ by the user/player, the unit begins to follow that path even if the path has not been completed yet.
  • As the path is followed by the unit, the rendered line is cropped as it is ‘consumed’ by the unit that is following it

Unfortunately it’s not that simple.

Work flow (more complex version)

  • The user clicks/touches a unit. This only selects the unit, i.e. making it “ready to be given a path to follow”.
  • A drag start event creates the first major waypoint; we’ll call it the 0th major waypoint. Major waypoints are the points that make up the user drawn line and ordinary waypoints are what the pathfinding library generates.
  • With the number of major waypoints greater than zero and the targetMajorWaypoint still at zero, this special case means that the pathfinding lib will calculate a path between the unit’s current position and that 0th major waypoint.
  • This will provide a list of points that make up the actual path that the unit will follow. A line can be rendered along those points.
  • As more path calculations occur between the 0th and 1st major waypoint, then the 1st and 2nd major waypoint etc, the resulting paths (or legs between waypoints) can be added together to form a complete rendered path line to represent the path the unit is following. As the unit tracks through each waypoint of the mega-path, the rendered line is cropped so that it only renders from the current waypoint up to the final destination waypoint.

Touch-dragged paths

The above steps work great if the major waypoints are well defined all at once. Consider a modern RTS game where you can shift-click out a bunch of waypoints for the unit to follow; when you release the shift-key and mouse press, the path that navigates through all the major waypoints is created in one go.

This isn’t applicable for touch screens. Games like Flight Control have created the expectation that the unit will move as soon as you start dragging. The problem with this is that the drag motion creates a huge amount of points which need to be filtered. It would be a bit of a waste of processing power to feed the pathfinding lib a multitude of paths to work out.

Here’s what is going to happen:

  • The user starts dragging and the pathfinding lib calculates between the unit’s pos and the drag point.
  • The user continues to drag and x number of major waypoints get detected. This is where things get a bit crazy…
  • …The pathfinding lib does its processing in a background thread and a callback notifies when it’s done processing each path. There’s a chance that many major waypoints have been added before the pathfinding lib has completed the path between the first 2 points it was given! This is an opportunity to do some resampling.
  • Take all the major waypoints from [targetMajorWaypoint, last major waypoint] and use the Douglas-Peucker Line Approximation Algorithm to resample those points to something that is a lot simpler (fewer points).
  • This creates a complication for our rendered line. It now comprises both the calculated paths that the library has processed and the remaining resampled points that make up what the user/player has drawn on the screen.
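For completeness, here’s the shape of the Douglas-Peucker resampling step. My actual implementation is the C# one mentioned later; this is a Python sketch of the same recursive idea:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    return num / math.dist(a, b)

def douglas_peucker(points, epsilon):
    """Recursively drop points closer than epsilon to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point furthest from the first-last chord.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Keep the extreme point and recurse on both halves.
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right
```

Tuning `epsilon` is presumably where the “resampler not taking enough points away” problem described below gets addressed.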

Pathfinding errors

Ignoring the obvious z-fighting issues, there is one glaring problem, and it partly stems from trying to interpret user intent. I’ve drawn a path that goes through the middle of the buildings but because of the number of major waypoints being recorded and the resampler not taking enough points away, some of those major waypoints have ended up in the streets between buildings. The result is a u-curve path around buildings as the pathfinding lib calculates a path from one side of the building to the other. A much simpler path is required, certainly not one with u-curves in it.

Consuming the line

The last piece of the puzzle is the rendered line being consumed. There is one giant issue that occurs when the distance between two waypoints is quite large. As the unit starts moving toward the next waypoint, the line segment/leg between the previously reached waypoint and the target/destination waypoint is “consumed”. A suddenly disappearing line segment is going to be really obvious to the user. I see a couple of solutions:

  1. Resample the rendered path so that there are even more points; essentially subdividing each line segment of the path. This new line should probably exist as a different list of points but because this subdivided path perfectly maps on top of the path the unit is following, the unit will transition through each of the multitude of points of the rendered line. A distance check between the unit and each of those points making up the line will dictate when the segments are lopped off.
  2. Alternatively, the line segment/leg that the unit is currently following could become semi-transparent. In fact, the amount of transparency could increase the closer the unit gets to the next waypoint. This could look quite good.
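Solution 2 is easy to prototype: fade the current leg based on how far along it the unit is. A sketch, with `min_alpha` as an assumed floor so the leg never fully vanishes mid-travel:

```python
import math

def leg_alpha(unit_pos, prev_wp, next_wp, min_alpha=0.2):
    """Solution 2 above: the leg the unit is on fades as the unit
    approaches the next waypoint (1.0 = opaque at the leg's start,
    min_alpha on arrival)."""
    total = math.dist(prev_wp, next_wp)
    if total == 0:
        return min_alpha
    remaining = math.dist(unit_pos, next_wp)
    t = max(0.0, min(1.0, remaining / total))  # 1 at leg start, 0 at end
    return min_alpha + (1.0 - min_alpha) * t
```

The renderer would evaluate this each frame for the active leg only; completed legs are simply dropped and future legs stay fully opaque.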

Weekly goals and realities

I got sick this week. I blame it on the salsa dancing where I go from a stupidly hot and humid dance floor to the outside night of Winter; it cools you down but I’m pretty sure the rapid temperature changes mess with the immune system to the point where the bugs get you.

And that’s what happened to me this week. Nothing crippling but enough to lose the appetite, have the usually pleasant coffee smell make you feel queasy and just general lethargy. Still, I went to the day job and managed to drag myself to the gym once my gut didn’t feel like crunches were going to make me throw up.

How’s that for an intro?

This week it was lots of preparatory work for being able to move my place-holder units around the screen. The goal was to actually get the ‘drag-to-path’ system in place using the style used in Autumn Dynasty but good old Unity-learning-curve and life’s distractions only made this a partial success. I finally made my own Unity prefabs (they’re so fundamental it’s a wonder I haven’t used them until now) to represent each unit and it’s gotten to the point where the press/release will select/deselect them. I even have a picture…

Super basic unit selection.

The white circle is a touch-pressed unit and all the placeholder white lines are the paths that the units are currently following. Really basic and with lots of fun z-fighting problems.

The next is to drag a path from the unit to a desired location so that the unit traverses it. Currently I’ve just got hardcoded destinations.

Sounds simple, but here’s what I observed from Autumn Dynasty.

  • When dragging the path line from the unit, the path line continually resamples itself, whether that be via simplification, smoothing or both.
  • When the final path line is created, it might totally redraw itself depending on impassible terrain. For example, a straight line drawn across an impassible mountain range will change to a line that moves around the mountains.
  • Multiunit selection and moving will draw multiple lines; the lines will converge quite early on.
  • A path that has an end point in impassible terrain will crop the end of the path.
  • An eventual path that the unit follows will be consumed as the unit follows it.

It turns out dragged paths are quite involved, but Autumn Dynasty has a winning formula that’s worthy of copying (shamelessly). Thankfully there are some line simplification libraries out there. I’ve already gotten my hands on a C# implementation of the Douglas-Peucker Line Approximation Algorithm.

Version Control in Unity – RTFM

I could probably create an entire RTFM series because of my terrible track record of ‘getting it’ the first time around. I could have sworn I’d read this Unity article about version control, especially considering I’d changed the settings so that the .meta files showed up in my project. Strangely though, I somehow never actually committed those .meta files to version control despite the article explicitly stating to do so. Perhaps I’d read some misinformed post somewhere else that said not to.

The realisation came when I decided to clone my repository just to make sure I’d been committing all the files I was supposed to have. I’ve had some misadventures with getting Unity command line builds to work and had given up on my continuous integration aspirations; the consequence being that I never got to the part where I could make the project build from the files submitted to the repo…

…It turns out the repo build was pretty broken.

The primary culprit was all the missing meta files, but there were a few other little tweaks required. All fixed now. Unfortunately no one wants to answer my post as to whether or not command line builds are possible using the free version of Unity. At this rate I will have to apply for a 30-day Pro trial to see if I can get it working and thus answer my own forum question.

What makes a gun battle?

As I evaluate the pros and cons of Aron Granberg’s A* Pathfinding library over that of the RAIN{Indie} package, my mind continually gets distracted with the idea of what is going to constitute a gun battle in Failed State. Is it going to be a straight numbers game where the troops with the greatest numbers will prevail (like Galcon)? Will there be a rock-paper-scissors balance like Age of Empires or Autumn Dynasty? Will a squad be broken up into individual entities (e.g. soldiers of a squad) that have finer granularity of info such as ammo, and whether they’re aiming, loading or suppressed, much like Close Combat and Wargame: European Escalation? How will tactics play a part in making the game more interesting? And once all these decisions are made, how do I balance the game mechanics so that no one strategy is infinitely better than the others?

I’m getting too far ahead of myself.

I play paintball quite regularly and simply put, paintball is a game of suppression and assault. If you take more ground you get better angles on your opponent and the best way to make ground is to keep their head down with suppressive fire.

Suppressive fire isn’t necessarily directed at a specific foe like a sniper taking a shot at an individual. Instead it’s firing on an area to force those within that area to take cover. There is no ‘clear shot’ so to speak.

That’s something that I want to be able to capture in Failed State. Like in Close Combat I want artillery units to potentially suppress infantry in and around the firing zone. If a machine gun has a 200m shot at a building with troops in it, I want that machine gun to be able to suppress the troops inside. This offers a tactical advantage because other infantry units can move up on the opponent with reduced chance of being seen and/or fired upon.

Those are the type of gun battles that I want to be able to bring to handheld devices.

Like taking a jellyfish for a walk

My highschool orchestra conductor had a saying:

“It’s like taking a jellyfish for a walk.”

He was referring to our playing (more specifically our string section’s playing) where the pace would gradually slow irreparably. So much for animato.

That’s what my Failed State game feels like at the moment, a freak’n jellyfish that just won’t slop over fast enough because of slimy, slippery impediments that stuff up the flow.

Here’s a list of annoyances:

  • Fun with z-fighting billboards. – I tried to get some billboarded text in front of a billboarded background in Unity. It seems the 3DText GameObject and my custom Mesh just don’t want to get along. I’m probably doing something stupid.
  • Freak’n Unity keeps crashing when I try to use breakpoints! Seriously! And the bug reporter seems to keep hanging! Argh! It turns out that I’m running Unity 4.2.1 and they’re up to 4.3.4. Let’s hope that updating makes a difference (It doesn’t seem like that long ago I’d updated so maybe they released a crap build).
  • MonoDevelop gets in this weird state where the CPU maxes out and everything grinds to a halt. One forum post suggested turning off the version control system but that doesn’t do anything. *sigh*
  • Started trying RAIN{Indie} as an alternative pathfinding library. I’ve had to spend a few evenings figuring that out. The pathfinding seems promising but I need to get it to calculate the navmesh programmatically.
  • And work drains me to the point of computer apathy. No amount of exercise is fixing that.

So here I am, bitching about my misadventures. Hopefully I can look back on this post and laugh at it once I figure this mess out.

Now it’s off to applying a bunch of OSX updates and installing a new version of Unity.

Random Ramblings Regarding Recent Wangling

Wangle (verb): to bring about, accomplish, or obtain by scheming or underhand methods.

…but to say that’s what I have been doing all week would be rather hyperbolic, unless one were to suggest that I’m trying to cheat time but time will always win. In fact, time probably just toys with us.

I set myself the ambitious goal of getting a Continuous Integration build server set up using TeamCity so I can auto-create builds to give to friends and to make sure that I’m not breaking things along the way.

I suppose it was crazy of me to think this would be straightforward.

I set up TeamCity once before for an iOS-only build of my Breezy Bubbles game, more to prove that I could than to be really thorough about the game’s integrity (it was a quick ‘throw away’ title, so to speak). I don’t remember the process being that painful because it only required two steps.

  • Hook it up to my git repo.
  • Create a build step using the XCode build step template which hooked into my project settings.

I was under the optimistic impression that the Unity Runner plugin for TeamCity would afford the same simplicity but alas, that wasn’t the case. I got so far as getting TeamCity pulling my project from my git repo but the build runner was going nowhere fast.

I got the impression that I could sink a week into something like this. Yes, there is documentation on how to use Unity’s command line arguments to create a build, but I couldn’t shake the feeling it was going to be easier said than done. So I did what any cautious developer does and bailed.

I bailed and went back to coding my unit circles (the little circles that are going to represent infantry, tanks, vehicles etc). I also made a Unity Inspector element whereby I can edit the unit circle’s size and colour during runtime.

Combined with the billboard-like 3D labels which I ripped off from that Auckland International Airport video, I can represent each unit on the map.

Unfortunately even that is causing a lot of drama. The unit labels and associated text are billboarded so that they always face the camera. That bit’s fine (I found some code for that quite a while ago) but I’m having ‘fun times’ trying to get the text correctly positioned and in front of the label background.


I feel like I’m wasting time making it look pretty but the reality is that what I’m trying to create could be considered the bare necessity to represent a unit on the map. Once I get that done I’ll post a victory screenshot.