To a certain extent any game is a simulation. It involves an environment with constraints and mechanics, sometimes nearly arbitrary rules, etc. Many of these components may be rather abstract of course, like in chess or other board games.
It’s safe to say that somewhere in there you are simulating something in your game. The tricky bit is identifying what it is you are intending to simulate and to what end.
It’s very, very easy to begin to think of your game as a simulation of a world and not of your game. The FPS genre seems to drive some of this, and it’s very natural, but it’s a siren song for the game developer. Unless you are working within a design that truly necessitates emulating the real world in some deep fashion, which is unlikely, accurate simulation of real-world phenomena sits outside your core requirements. And really, accurately simulating real-world visuals or behaviors is often extremely difficult and time-consuming to achieve, and expensive to compute.
To make matters worse, physics middleware solutions seem to exacerbate the problem by enticing developers (e.g. me) to include all manner of highly unnecessary physics simulation in their games. I think we’ve all been there at one point or another, having spent hours on end building physical simulations for missiles, bullets, sword hits, blah, blah, blah, when really the vast majority of those things could be calculated in a more pen-and-paper style, providing a much simpler and more deterministic system to support your gameplay.
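To make the “pen and paper” idea concrete, here’s a minimal sketch of what that style of resolution could look like. Everything here is hypothetical (the function name, the linear falloff, the numbers); the point is that the outcome is a single tunable roll rather than a simulated trajectory:

```python
import random

def resolve_shot(accuracy, distance, falloff=0.01, rng=None):
    """Resolve a shot as a dice roll instead of a physics simulation.

    Hit chance degrades linearly with distance, clamped to a small
    floor. The whole outcome is one roll: easy to tune, easy to log,
    and fully deterministic given a seeded RNG.
    """
    rng = rng or random.Random()
    hit_chance = max(0.05, accuracy - falloff * distance)
    return rng.random() < hit_chance

# Seeding the RNG makes the outcome reproducible -- handy for replays
# and for answering "why did that shot miss?" bug reports.
outcome = resolve_shot(accuracy=0.8, distance=20.0, rng=random.Random(42))
```

Compare that to stepping a rigid body through a physics engine every frame just to learn whether a bullet landed: this version is a handful of lines, and two machines given the same seed will always agree on the result.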
There’s something about this problem that runs deep in the human psyche, though. Why did I scoff at the idea of not using triangle-accurate collision detection for, well, everything? Why did the idea of not simulating all of a character’s joints just seem lame? In hindsight, I didn’t notice those corners being cut very often in the games I’ve played, but it became super important to me to have those sorts of advanced features. Bragging rights, maybe?
But I’ve come around full circle after working with both types of systems. There are several points that I try to keep in mind now:
- Thinking of everything regarding visual motion in a game as falling under “Animation” serves you way better than thinking of those things as physics simulations (again, unless what you’re making is a physics simulation).
- Users tend to not see [relatively] minor inaccuracies in an environment.
- Emergent behaviors will almost always elicit an elaborate explanation from the viewer.
I will, in what has apparently become my typical fashion, address these in order.
In animation, the feeling of a motion is far more important than whether it is physically accurate, or even possible. Users respond very positively when this is done right. Trying to wrangle some of these behaviors out of a physics system, on the other hand, can be nearly impossible. And by the time it has been done, there’s usually enough code undoing the physics simulation that its use in the first place is questionable.
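As one illustration of “feel over accuracy”: an authored easing curve can overshoot its target and settle back, which no correct physical model would do, yet it reads as snappy and satisfying. This is a standard easing shape (Penner’s back-ease-out); the names and constants here are just for the sketch:

```python
def punchy_ease(t):
    """Authored easing curve that overshoots slightly past the target
    before settling, which "feels" snappier than a physically correct
    motion. t runs 0.0 -> 1.0; the return value exceeds 1.0 mid-motion."""
    s = 1.70158  # conventional overshoot constant for back-ease-out
    t -= 1.0
    return t * t * ((s + 1.0) * t + s) + 1.0

def animate(start, end, t):
    """Drive a position directly from the curve -- no physics involved."""
    return start + (end - start) * punchy_ease(t)
```

No mass, no forces, no integration step: you shape the curve until the motion feels right, and it behaves identically every time it plays.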
It may offend some sensibilities to think of a missile striking an object when it was actually a meter’s worth of space away. Or that that same missile couldn’t go through the legs of a character. Even saying it now, I’m thinking to myself that it would be such a wasted opportunity for those things to not work. But will it really affect the game negatively? Will people see it? No. No they won’t. If you don’t believe that, go ahead and implement it with that coarse accuracy and test it yourself and on others. It’s actually not a bad idea to get that stuff in place anyway, so seriously, do it.
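That coarse accuracy can be as simple as a distance check: the missile “hits” if it passes within some radius of the target’s center, no triangle-accurate mesh test required. A minimal sketch, with the radius and names chosen purely for illustration:

```python
import math

def coarse_hit(missile_pos, target_pos, hit_radius=1.0):
    """Coarse collision: count it as a hit when the missile comes
    within hit_radius (roughly a meter here) of the target's center.
    One distance test instead of a per-triangle intersection query."""
    dx = missile_pos[0] - target_pos[0]
    dy = missile_pos[1] - target_pos[1]
    dz = missile_pos[2] - target_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= hit_radius
```

Yes, this “hits” things it never touched and can’t thread a character’s legs. Per the point above: ship it, test it on people, and see whether anyone actually notices.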
On the project in my professional life (that is to say, not the engine I often talk about with Lua and all that crap), we have many agents moving around all the time, and only in the most severe cases do you actually notice them pathing through one another. Now, between the path following, the smoothing, and a few other behaviors, they’re often not directly in one another’s paths, but even when they are, the brain just assumes it didn’t see it. With larger agents closer to the camera it is really evident when it happens, so I’m not telling you not to try to prevent these things… just that you don’t need an insanely accurate model to do it.
As with point #2, when people see behaviors, they flesh them out with detailed stories and motives. Read this as an example of what may be a really, really flawed reading, for decades on end, of hunting behaviors in wolves: Wolf Packs Don’t Need to Cooperate to Make a Kill
What’s fun as the engineer in these cases is that your QA/Business/Non-Programmer types will come to you with obscenely detailed descriptions of what your AI, or your simulation in general, is doing, usually in bug reports.
“So there’s a problem with the visibility checks for the agents. When they go to get food and look around for the other agents, they sometimes get stuck trying to decide whether to run away, fight, or eat.”
“Literally none of that is being checked.”
“No, but, like they’re doing line of sight checks.”
“No. They are not.”
I mean, “fun” isn’t quite the right word for that, but it’s really interesting nonetheless.
Conclusion / TLDR / What I should have opened with
If you are making a game, then make a game. Your game is a simulation of sorts, but not a physics simulation; not a real world emulator. Serve the rules of your game with your simulation. Every single facet of your development on that project will be much, much more pleasant, manageable, and maintainable.