AI: it’s all to do with multi tasking (continued)

So, a bit overdue I guess, but here’s the next installment of my view of AI in video games. And it’s a good time right now, as I am about to start teaching my AI course this block (GT4 – AI in Games).

The first thing I explain to the students is that what we call AI in games really is not AI. It is behavior programming and, at a higher level, behavior management. AI stands for Artificial Intelligence, and in computer science it means a whole area of research involving lots of fancy terminology such as genetic algorithms, neural networks and Markov models. However, these things are still not commonly embraced in video games, or at least are not the bread and butter, so to speak, of an AI games programmer. An AI games programmer ultimately has the task of making entertaining behaviors, and this ranges from the operation of a fuel tank to strategic planning; from the need to wait until an animation has completed to a calculation to intercept a ball. Even something as humble (and as useful) as a pause (do nothing for a certain amount of time: used, for example, by a camera system to let the player see the ball for a short while after it enters the goal in a football game) is really no different from the act of moving to a location. It's all to do with getting stuff to happen next depending on what just happened. State transitions. Fuel is now fuel - 1. Animation is now finished, start next animation. I reached a waypoint, now move to the next waypoint. I've just been killed, play death animation, and so on.

The strange thing about AI is how it is placed a little to one side; a kind of Cinderella of video games. It's a subject that does not seem to be commonly enjoyed by programmers. It is something that many people try to sell different solutions for, and it's something that I often see attempted without any architectural design, resulting in very limited AI. I will make one of my bold statements: I created better AI in my football games than I have seen in more than a few current-generation titles, something that actually saddens me, especially when I consider that I had my AI for an 11-on-11 football game running on an 8 MHz 68000 at 50 FPS.

AI is difficult and fiddly. Although someone will pop up and say "We did the AI on our game and it was no problem at all", I maintain that the art of AI in games is still unreliably practiced, and often leaves me wanting something more. Starcraft 2, for example, was the subject of one of our students' graduation projects; his graduation exam is in a couple of weeks, in fact. He has made vast improvements to the AI that comes with the game as standard (as measured by its community of players). I am left asking, "Why is it that one of our students can improve the AI of Starcraft 2, without even having access to the full development environment, when the actual developers of the game did not?" Perhaps it is just because they did not give AI priority, but still…

I believe that the core reason AI is hard, as I have previously covered, is multitasking. Not planning, or decision making, or strategy selection… these are all just algorithms, and algorithms are the bread and butter of programming. What makes it difficult is that we have to have many of these running simultaneously and interacting. It matters not how you do it: when you make a game with 100 things flying about the screen, you have created some kind of multitasking system for handling each one of them in some way. This is unavoidable.

Now, I am a fan of state machines, and with good reason. All programming is working with a state machine, because your codebase, in C++ for instance, is just one big hierarchical state machine – an HFSM (Hierarchical Finite State Machine). This is very important to remember, because if you forget this you pretty much forget what you are doing. As soon as you add threads, you effectively get multiple hierarchical finite state machines, and then the challenge is of course to get them to communicate with one another. And here's the thing: to get two HFSMs to communicate requires that you simulate some kind of sub-threading on top of the thread. Say what?

Well, how do threads communicate? Semaphores? Message queues? Events? Shared memory? Hold that thought a minute. Savor it. Or try to. Because the truth, of course, is that such inter-thread communication is messy. It is common to hear how multithreading is hard. It is hard to get threads to talk to each other, and that translates to a difficulty in getting software threads (or tasks) to communicate as well. This is something I noticed when I wrote Kick Off. The codebase, in 68000 assembler, is actually still quite readable. Before GOAL! I used flat state machines (i.e. non-hierarchical), but even so I had many 'threads' that had to talk to each other, and this is where most of the bugs occurred. I found this frustrating; the truth is that the more sophisticated algorithms and dynamics of GOAL! exceeded the limits of the simple approach I had used in Kick Off and Player Manager, and solving this problem is how my PROC system was born.

But first of all, I want to explore this problem of communication between threads and explain the "sub-threading" thing I just mentioned. The search for the perfect solution to this problem has been a long one and is not over yet. Recently I have been working on a new language incorporating the PROC system, and realized something. No matter how you try to process a message or any other form of communication between threads, a given thread cannot deal with that communication without actually multitasking within itself. The reason is simple: the thread is already doing something (presumably). The moment that some message handler is invoked, that thread is now doing two things: what it was doing, and processing the message. Two things. At the same time. In one thread.

Think about this a bit more. There is no way round it. No matter what you do, no matter how many threads you create… at some point you have to both do what you are doing *and* listen/poll/process/act on some communication. Even if I create some kind of message handler in its own thread, it then has to communicate with the thread it's teaming up with, and I have just shown that can't happen unless the thread being manipulated does two things at once.

Well, there is one way around this. You could always cause the thread to change its state by simply changing that state externally. If we imagine that each thread is an HFSM, we can change what that thread is doing by forcing a new state. There are two problems with this:

  1. If the HFSM of the thread is C++ code (i.e. simple, straightforward code), then the state is represented by the program counter (i.e. the current instruction being executed). C++ provides no mechanism for causing that execution point to jump and, in effect, change the state.
  2. If an HFSM engine is created in C++, then it becomes possible to simply switch the "execution point" by changing the state. So instead of sending a message, you reach into the thread and force the state to "Death Animation", for example. The problem is that this is very dangerous, because the executing code gets no chance to clean itself up, or to take action upon the event, which is important to be able to do. Without this, the sender of the message has to take responsibility for the integrity of the receiver, which is messed up because it breaks the natural abstraction of a message. An example would be "You need to know that before sending a 'die' message, you must check whether the entity is already dead". Clearly not ideal… you could wrap this, of course, but as soon as you do, you have conceptually created two execution operations happening at the same time on one object, which means it is now effectively split into two tasks: if you wrap it, you have a message handler executing at the same time as the current task.

Why does it all get so inelegant so quickly? I believe the answer lies not only with the language, but with the fundamental architecture of computers. All computers are based on Turing's model. The only difference between a practical computer and Turing's Engine is that the latter is an infinite state machine. Other than that they are the same… and the one thing that Turing did not consider was multithreading. This lack of consideration is something that we inherit, such that we have to emulate a solution to the problem of multiple threads communicating by inventing weird and wonderful creations that are effectively state machines simulated on top of a state machine. And that is my beef. Can't we just have the machine handle it, instead of having to build some kind of system on top of a system to solve one central problem? That problem being: how to have many multitasked HFSMs talk to one another flexibly and robustly?

My PROC system attempts to shoehorn such a multitasking system into C++, and pushes C++ to its very limits. It is only just possible to do it. But it works, and it works better (for me, anyway) than other solutions, because it allows me to code AI in C++ as straightforward code. No data trees (e.g. behavior trees), no graphing tools (visual AI middleware designed so that non-programmers can use it), no scripting or domain-specific language (Lua? Lisp? Python? I just want to use one language, thanks!).

In the next part, I will describe the PROC system: my solution to the problem implemented entirely as straight C++.

2 Responses to “AI: it’s all to do with multi tasking (continued)”

  1. Kela Says:

    Thank you for your taking the time to explore such non-trivial things.
    Just for Truth’s sake, and, please believe that I do pretty much respect you: there’s something like a `typo’ in «The only difference between a practical computer and Turing’s Engine is that the latter is an infinite state machine» Well, as far as I can understand any such machine has only a finite number of states: «We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be ”arbitrarily close” and will be confused» (cf. http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf p. 250) `states of mind’ can be taken to be machine’s states for our purposes, `symbols’ are externalities. What is unbounded is the number of squares on which symbols are input and output. You may wish to look at what Kurt Gödel had to say about this, namely that human mind may very well have only a finite number of states between t and t’, but it could also, unlike Turing (or Von Neumann for that matter) machines have a growing such number, i.e., number_mind_states([t’, t”[) may very well be strictly greater that number_mind_states([t, t'[), t < t' < t''. Sorry for my being pedanto-prolix and again, thank you for being still around, saying and doing stuff.

  2. Kela Says:

    It just occurred to me that perhaps you were thinking about the whole system, Turing machine *and* length-unlimited tape. Then you are right: there are hardly infinitely many non-null space-time things (such as registers, DRAM cells, etc.). One may suspect that there are states of mind with no corresponding states of brain.
