Archive for the ‘Game Development’ Category

The return of Player Manager?

August 16, 2010

Yes, I am working on a new game. This is the sequel to my 1990 game Player Manager that I have been waiting 20 years to do.

Why so long? That’s a topic for another day. What I want to post here is a progress report.

Why now? Because someone to whom I gave an introduction to game programming a long time ago (in an attempt to train an apprentice) recently declared that he would be extremely willing to help me work on a new game. He’s David Athay, who now lives and works in the US.

Yes that’s all it took. Someone to say “I’ll help” who could and would actually help… with actual programming. It’s a psychological thing really. Sure I could code it all myself, but… I don’t want to. It’s a lonely road… and I am sick and tired of wrestling with APIs… I want to make games.

Games that I can sell. For actual money. In a world where most of the audience want everything for free, and would be quick to point out “there’s more to life than money”. Money is life. Without it I die of starvation and exposure. I would be quite happy if I could earn enough to live on. iPhone may provide some glimmer of hope there. Maybe.

Of course, both of us are doing this in our spare time. There is no funding, or salaries. The project at this stage is only a few weeks old.

David is working on iPhone primarily, since the current idea is an iPhone product, and like I said I want to actually spend my time working on a game, not learning Objective-C.

I am, however, building the game on OS X, because it is a lot easier to develop on a computer platform rather than messing about with the iPhone development kit. In order to achieve this, I am using the Allegro library which serves my purposes well. I have a representation of the game running natively on OS X.

David maintains the iPhone version, and does the appropriate integration to make sure that everything I do on OS X ports to the iPhone.

This is not as difficult as it sounds, because I have developed an architecture that hides the game code from the platform. So in theory, a simple recompile for iPhone is all that is required to keep the games in sync.
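
To give a rough idea of what I mean, here is an illustrative sketch (not the actual code; the class and function names are invented for the example) of the kind of interface the game code talks to, with each platform supplying its own implementation behind it:

    // Illustrative sketch of a platform abstraction layer.
    // The game code only ever sees this interface.
    class IPlatform
    {
    public:
        virtual ~IPlatform() {}
        virtual void  DrawSprite(int spriteId, float x, float y) = 0;
        virtual float GetFrameTime() const = 0;
        virtual void  ReadInput(int& dx, int& dy, bool& action) = 0;
    };

    // On OS X this would be backed by Allegro; on iPhone by whatever the
    // iPhone build uses. The game logic neither knows nor cares.
    void UpdateGame(IPlatform& platform)
    {
        int dx = 0, dy = 0;
        bool kick = false;
        platform.ReadInput(dx, dy, kick);
        // ... run the game logic ...
        platform.DrawSprite(0, 160.0f, 120.0f);
    }

Switching platform is then, in principle, just a matter of linking in a different implementation of the interface and recompiling.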

What have I done so far? Well, I dug out the old graphics for the Megadrive game Dino Dini’s Soccer (a.k.a. GOAL! on the Amiga and ST). I recreated a pitch map out of the tile set. I gave these to David, who quickly implemented them on the iPhone.

I have set up my own custom IDE around the Emacs editor and ported my CoreLib library (with vector classes, debugging functions, string classes and so on) from MSVC to GCC.

I have ported and extended my Property Tree system (more on that in a future post) so that I can make appropriate parts of the game data driven.

I have ported my PROC system (this is a software threading/hierarchical finite state machine/event driven system for managing complex behaviors… it actually allows one to write scripts directly in C++… again maybe more on that at a later date).

I have created various classes to abstract away platform specific details.

Right now, I have a scrolling pitch with some sprites… and am about to add the sprite animation system.

The game is going to be 2D for a few reasons. First of all, 3D is simply not as good as 2D for this kind of game. Secondly, I am using existing 2D graphics so that I don’t have to find an artist (at least for a while). Finally, I want to focus on gameplay as quickly as possible. This is going to be an iPhone game after all; I am not trying to compete with FIFA or PES in terms of graphics. I want to get going with creating the gameplay and tuning it, because that is what I love to do when I make a game: I focus on the gameplay above all else. It has worked for me in the past too: the graphics on KICK OFF were appalling, yet it beat all the other games in the UK to win the Industry Dinner award for “Best 16 bit product” in 1989. Yes, it even beat Populous. Perhaps there is a lesson there.

Back then, the computer systems were heavily constrained: 8MHz 68000 processors and 128K of RAM. I wrote everything in assembler out of necessity, and hand optimised critical areas of code in order to keep the game running at 50 FPS.

Things are very different now, not only in terms of computer hardware, but also in terms of my skills. For the past 10 years I have worked on getting the best out of the C++ programming language, and I feel I have reached the point where I know how to get the most out of it: it actually turns out that a fairly limited set of patterns and design heuristics enables me to do 90% of everything I need. So, especially since computer hardware is so much faster now, I am using a proper architecture which will hopefully allow me to develop the game quickly, while of course making it easy to port to other platforms. Flexibility is a key part of my strategy.

I am also, for the first time since 1995, working in a completely Microsoft-free development environment. Just saying that makes me feel good. Kick Off, Kick Off 2 and Player Manager were developed on the ST (cross assembled for the Amiga). GOAL! and Dino Dini’s Soccer were developed on a 486 Dell Unix box. This was Unix before Linux came along. I loved that Unix environment: so powerful, so robust. This was back in 1991: it had a 1024×768 display, a 300MB hard disk and, I think, 8MB of RAM. You don’t want to know how much it cost. But it was worth it; it paid for itself easily.

However, since about 1996 I have lived with the hell known as Microsoft. Although I have always had a Unix box handy, it has only ever been used as a server. Problems with hardware compatibility, the necessity of PC development and so on meant that I was stuck with Mr. Gates’s efforts. These past 14 years of servitude to Mr. Gates’s empire have not been fun. And I have had enough.

But a great thing happened: Apple adopted Unix in OS X. Unfortunately I did not pay much attention until (spurred on by the idea of iPhone development) I got my first Mac last year. Sick and tired of how my PCs would keep going wrong, slow to a crawl and take 10 minutes to reboot, I tried to use the Mac, more to get out of the rut than anything. Now… I consider it my primary machine. Yes, it is what I always wanted: an all-round usable, versatile computer with lots of commercial software and hardware, which runs UNIX. I ain’t going back now… except of course when I have to (for example when teaching).

No doubt this will have many of my students (and some colleagues) shaking their heads. Hey, but that is normal. I’m used to it. At least there’s a context to it now, perhaps. I love UNIX. So I love my Mac.

Anyway, I digress.

I am making a public statement that I am developing a new game, in part to make sure I actually do it. But I have no idea how long it will take at this time. It’s early days; about 21 of them, in fact. Stay tuned for more…

Freek Hoekstra at the IGAD Never Ending Conference 3

July 7, 2010

At the conference I try to encourage students to give talks, so they can gain experience and share ideas. At IGAD NEC 3, Freek (a 3rd year visual arts student) was the only student brave enough to give it a go. And here is his talk on level design.

Yet to come: Stefano Gualeni and the second talk from Joel Dinolt.

Real Time Path Tracing: Jacco Bikker at the IGAD Never Ending Conference

July 5, 2010

At IGAD we have some pretty impressive people, and Jacco is one of them. He is known for his work in real time ray tracing (Arauna), but now he is going to the next level: real time path tracing, which promises to solve the problem of rendering forever. Soft shadows, area lights, depth of field, ambient occlusion… path tracing can give us everything we ever wanted and are ever likely to need, just as long as we can make it run fast enough. Well, the grail of computer graphics is not actually that far off anymore…

Joel Dinolt (Bethesda) at the IGAD Never Ending Conference 3

June 29, 2010

It was a real pleasure to meet Joel again after all these years, and he gave two talks to students and teachers in part 3 of our Never Ending Conference, so called to represent the idea that everything is basically one long conference.

Here Joel talks about Architecture…

Nils Desle at the IGAD Never Ending Conference part 3

June 22, 2010

Let me introduce a colleague of mine, Nils Desle, who is a really cool guy. Not only is he a great programmer, he is one of the nicest people I know. Sometimes I wish I could be more like him. If there is anyone who could make an omelette without breaking eggs, it would be Nils.

Anyway here he is with a really cool talk and demo of a genetic algorithm for image processing, with a twist.

The IGAD Never Ending Conference is something that I started here at IGAD to invite lecturers and students to give talks on anything connected with video games and graphics technology. Each event is treated as part of a Never Ending Conference, with 4 parts a year. Part 4 will take place in the new academic year. It is currently an internal conference, but I am working with my contacts to try to make one part each year an international event.

Part 3 included a guest speaker, Joel Dinolt, from Bethesda Softworks. His talks and the talks of the other speakers (Jacco Bikker, Stefano Gualeni and art track student Freek Hoekstra) will be online as soon as I can make it happen.

If you would like to speak at IGAD, please get in touch; we are always on the lookout for interesting guest speakers.

Beware the bemusing triangle

April 5, 2010

If you are a producer, or have been involved in production in some way, you may have encountered a little evil triangle. It seems innocent enough at first, but it is, in my opinion, one of the most dangerous inventions of man.

This triangle is very simple: it basically says that you can choose two out of three of the following options:

  • Fast
  • Cheap
  • Good

Yes, you can have it fast and good, but then it will cost you. You can have it cheap and good, but it will take a long time. You can have it fast and cheap, but it will not be very good.

I may be putting myself out on a limb, but I believe this triangle lies.

I want it all

For a start, when I make something I want all three. Yes, I want to make it good, cheap and fast. Certainly the world is full of cheap, fast and good things. Such things tend to be simple; but simple is not in that triangle. In other words, if you want cheap, fast and good… then keep it simple. It is a question of the scope of the project.

That should be enough to blow the triangle out of the water, but there is more.

The famous book “The Mythical Man-Month” does a pretty good job of destroying one side of the triangle. Putting more people (or money) on a project does not automatically mean you get done sooner, or even that you get a higher quality result. If only it were that simple. I can think of high profile examples of projects that exceeded their budgets, slipped their schedules and delivered mediocrity.

Taking more time can result in higher quality, but even then Duke Nukem Forever is going to take Forever.

Software development, and especially video game development, is simply too complicated a thing to break down into easy-to-measure elements.

And that is why the triangle is so dangerous. In one corner there is quality, which is very hard to measure and so tends to be managed on faith. “It’s gonna be good… look at the size of the team, and the investment, and how long it’s been in development!” The other two corners, budget and time, are easy to measure; they are the two bastions of project management precisely because they are easy to measure. Everyone can understand a deadline, and everyone can understand a budget. This is why I believe the secret to success is the ability to understand that which is not readily measured!

This particular trap, obsession with things that are easy to measure, is a rather big one. Just look at Western education, for example. Could it be that the decline in education standards stems from the focus on the one thing that is easy to measure: the number of students gaining qualifications?

What is harder to measure is the value of those qualifications, in effect the quality of the education. Yet it is arguably the most important aspect of an education. Hard-to-measure things tend to get compromised in this world, it seems.

The same is true in video games; logically the most important thing is the entertainment of the customer. But entertainment value is again hard to measure. Sales, on the other hand, are easy to measure. The problem here is that there is no reliable, consistent link between entertainment value (a hard-to-measure thing) and sales figures (an easy-to-measure thing). Sure, we have faith in the correlation, but as any economist would probably explain, such connections in a market are tenuous at best. There are a ton of things that can cause sales to not reflect entertainment value. Yet decisions on what kinds of products are made in our industry are by and large determined through number crunching.

So at best the bemusing triangle, in my opinion one of the biggest causes of projects vanishing without trace, is a tool in which budget and schedule stand firm, together with the faith that if you mix these two in the right amounts, quality will naturally follow. Reflect on this.

Finding better triangles

I would like to propose that we discard this unbalanced, unrealistic triangle and try to replace it with better relationships between the forces that bear upon production, particularly of video games.

I have been thinking about what I call robust triangles for many years. It is a fascinating exercise in abstraction, guided by only two rules… that such relationships be both robust (in other words reflecting a hard truth of some kind) and useful.

So let’s look at that quality thing, and see if we can do better.

QUALITY – EFFORT – SCOPE

Now we need to be careful. This is a new triangle for consideration (I have others that I have had hanging around for years), so I am not yet sold on it. To tell the truth, this one just occurred to me as I wrote this. So let the assessment begin.

For the triangle to be robust you must be able to freeze one of the forces and then observe a clear relationship between the other two, where there is some kind of balance.

This triangle is saying that if we fix quality to a certain level, then we can choose between effort and scope. If we want it to require little effort, we must keep the scope small.

For any given amount of effort, we must choose between quality and scope. A large scope and a small amount of effort will result in poor quality. I think I buy that.

Finally, for any given scope, there is a clear relationship between effort and quality. This feels correct.

Of course, to be rigorous we must define the terms precisely. What do we mean by effort, exactly? Is this just another name for cost?

The great thing about working with these triangles, I find, is that it brings out interesting questions such as “is cost the same as effort?”.

I don’t think that the triangle works so well if we replace effort with cost. In trying to understand what might be meant by effort, we might uncover a new triangle.

This is an ongoing area of informal research for me, but I will leave you with the most useful triangle I have found so far: RUF

RISK-UNKNOWNS-FLEXIBILITY

This triangle tells you something about managing risk. It is clear that keeping risk low is important, and this triangle shows the two key forces involved with it.

First, risk is caused by unknowns. Clearly, if you do not have any unknowns, then there is no risk. So what do you need to do to lower your risk when you have a large number of unknowns? Simple: you maintain flexibility. Try the triangle out, and see if it works for you.

Using this triangle, I have been able to explain the need to maintain flexibility in the production environment, something which I have noticed is often counter-intuitive to a team (at least, to inexperienced teams). Why should you spend time building flexibility into your project? What is a really good reason for developing a flexible software architecture for the game project you are working on?

Simply put, it all comes down to the idea that investment in a flexible production environment is the only logical course of action when you don’t know what is going to hit you next. It’s a simple concept, and it feels like it is at the heart of Agile. It works because flexibility is clearly the only way to mitigate the risks of the unknown.

I am currently trying to create a network of robust triangles that perhaps one day could map out the whole of video game development, and be a useful tool in improving video game development practices.

If you can think of any more robust triangles, please get in touch, I’d love to hear from you.

Normalized tunable sigmoid functions

April 5, 2010

OK, so time for some technical stuff… inspired by GDC, I am going to post some articles on my ideas on game development, including some things that I teach in my classes.

Utility Theory

At the 2010 GDC AI Summit, Dave Mark spoke about Utility Theory. Hmm, I said, yep, I have been doing that stuff for years, but I never knew what to call it. I realized that I have been using utility theory in some form going all the way back to the earliest games I made.

Here is a Wikipedia entry on Utility Theory. Good luck…

Anyway, to boil it down, it is applicable in game AI because given any situation you can provide some kind of score, and then cause behavior to change as a result of the way those scores change over time. As a crude example, let us look at weapon selection.

Each available weapon will have advantages and disadvantages… a set of attributes that make it tactically useful in different contexts. So the question is, how do we make the AI decide which weapon to use?

The principle of utility theory is to evaluate the tactical situation and assign a utility value to each weapon. The AI then chooses the weapon with the highest utility.

For example, the utility of a rocket launcher when facing a target that is some way off is much higher than when the target is very close (too close and the weapon will inflict damage on you, severely reducing its utility). Other factors influence the utility of that weapon: it is most effective on slow targets due to its slow rate of fire, and it is also more effective on large, dangerous, heavily armored targets that carry powerful weapons.

A machine gun, on the other hand, is best suited to medium-range targets which are maneuverable and perhaps less well armed.

Applying utility theory would mean calculating a score that takes into account the various factors that increase or reduce the utility of a weapon, and then choosing the weapon with the highest utility.
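
As a rough sketch of what that scoring could look like in C++ (purely illustrative: the factors and weights here are invented for the example, not taken from any real game):

    #include <cstddef>

    // Illustrative utility scoring for weapon selection. Each weapon is
    // scored against the current tactical situation; the AI then picks
    // the weapon with the highest score.
    struct Situation
    {
        float targetDistance;   // metres
        float targetSpeed;      // metres per second
        float targetArmor;      // 0..1
    };

    float RocketLauncherUtility(const Situation& s)
    {
        if (s.targetDistance < 5.0f) return 0.0f;  // too close: we would hurt ourselves
        float utility = 0.0f;
        utility += s.targetDistance * 0.1f;        // better at range
        utility += s.targetArmor    * 2.0f;        // better against heavy armor
        utility -= s.targetSpeed    * 0.5f;        // worse against fast targets
        return utility;
    }

    float MachineGunUtility(const Situation& s)
    {
        float utility = 1.0f;
        utility += s.targetSpeed * 0.5f;           // good against maneuverable targets
        utility -= s.targetArmor * 1.0f;           // weaker against heavy armor
        return utility;
    }

    int ChooseWeapon(const Situation& s)
    {
        const float scores[] = { RocketLauncherUtility(s), MachineGunUtility(s) };
        int best = 0;
        for (std::size_t i = 1; i < sizeof(scores) / sizeof(scores[0]); ++i)
            if (scores[i] > scores[best]) best = static_cast<int>(i);
        return best;   // index of the weapon with the highest utility
    }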

This can be applied to all kinds of decisions, such as the utility of reloading, the utility of finding cover, the utility of retreating and so on.

In essence this is precisely the kind of stuff that is going on in probabilistic AI simulations, such as my bee hive simulation. In this, the behavior of the bees is determined by weighted probabilities, but the driving factor behind this is utility theory. The probability that a single bee begins or quits ventilation duty depends on the temperature of the hive. The higher the temperature, the more likely a bee is to spontaneously start ventilation duty, and the less likely it is to stop doing so.
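
As a rough sketch of what those weighted probabilities mean in practice (illustrative only; the real simulation is rather more involved, and the 0.05 scaling is invented for the example), each bee rolls against a temperature-dependent probability every update:

    #include <cstdlib>

    // Returns true with the given probability (0..1), using the standard
    // C library random number generator for simplicity.
    bool RollAgainst(float probability)
    {
        return (std::rand() / (float)RAND_MAX) < probability;
    }

    // The hotter the hive, the more likely an idle bee is to start
    // ventilating and the less likely a ventilating bee is to stop.
    // Temperature t is assumed to be normalized to the range 0..1.
    void UpdateBee(bool& ventilating, float t)
    {
        const float startChance = 0.05f * t;           // grows with temperature
        const float stopChance  = 0.05f * (1.0f - t);  // shrinks with temperature

        if (!ventilating && RollAgainst(startChance))
            ventilating = true;
        else if (ventilating && RollAgainst(stopChance))
            ventilating = false;
    }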

The difficulty with such systems is that they need careful tuning. If nature itself works by weighted random choices (a very plausible explanation, in my mind, of how bees decide to start or quit ventilation duty, but that’s a subject for another time), then the weightings will have been carefully tuned by evolution over millions of years. Unfortunately, the game designer needs to be a little faster than that.

It is thus important to be able to tune these weightings and, more importantly, to be able to specify non-linear relationships easily.

What do I mean by that? Take the temperature of the hive as an example. It is clear that the danger to the hive is not linear over a range of temperatures. There is likely to be a mid range where too little or too much ventilation will not make much difference. However, as the hive temperature increases away from the norm, at a certain point we might expect to see a major increase in the utility of ventilation due to a dangerous temperature inside the hive.

As an example, take a look at the following graph, showing a possible probability curve for a bee starting ventilation duty. Note that the temperature, t, is normalized to the range 0 to 1.

[Plot: linear probability curve for a bee starting ventilation duty]

You can see that the higher the temperature of the hive, the more likely a bee is to start ventilating. However, when I ran the simulation, I found that there was not enough urgency to the bees’ behavior when the hive was hot. Fine, you may say, why not simply increase the probability? The problem was then that there were too many bees ventilating even when there was no real urgency… bees that should probably be doing something else. Additionally, the system would often start oscillating, leading to a catastrophic swing of temperature that the bees would be slow to respond to.

What was needed was some way of changing from a linear relationship to some kind of accelerating curve, so that doubling the temperature might, for example, quadruple the probability of starting to ventilate.

A simple way to do this is to relate the probability to the square of the temperature:
[Plot: squared probability curve for a bee starting ventilation duty]

You can of course arrange to have all kinds of other formulae in there. Perhaps cubed. Or perhaps you could use a sigmoid function.

Here is the ‘official’ sigmoid function (although apparently this is a special case, and all curves with an S shape are considered to be sigmoids).

S(t) = 1 / (1 + e^(−t))

[Plot: the sigmoid function]

A few problems, though: first, this curve goes the wrong way (for t from 0 to 5); secondly, it is not normalized; and thirdly, it is rather fixed in shape.

It would be much easier to use a kind of sigmoid function that is easily tunable (any amount of curve in either direction) and that guarantees that 0 gives 0 and 1 gives 1. This way, we need not worry about the function changing the range of values being processed, and we are able to fine-tune the curve.

Many years ago I went searching for such a thing, and came up with the following function:

The half tunable sigmoid (for t from 0 to 1):

y = k·t / (1 + k − t)

[Plot: the half tunable sigmoid with k = 0.2]

The above plot shows this “half sigmoid” (because we are only going from t = 0 to 1) with k=0.2.

It turns out that with this function, for large values of k, you get a straight line (in theory you need an infinite value of k, but in practice it converges to a straight line very quickly; if you really want a straight line, it is best to bypass the function altogether).

As k approaches 0, the curve gets increasingly sharp; however, it always remains normalized:

[Plot: k = 0.01]

This is great. But how about making the curve go the other way? Well, it turns out that for values of k between minus infinity and -1, that is precisely what you get. Again, as you approach minus infinity you get a straight line. With k = -1.2, you get the mirror image of the curve for k = 0.2. If you set k to somewhere between -1 and 0, the result is nonsense (the output is still normalized, but it no longer forms a sensible curve). However, this simple, fast function gives me all the control I want.

Here is a plot with k = -1.2:

[Plot: k = -1.2]

To actually turn this into something resembling a sigmoid function, it is simply necessary to repeat the function for negative values, to get the range from -1 to +1. This is easily achieved by giving the function the absolute value of t, and then changing the sign of the result back to the same sign as t.

Doing this results in a fully tunable, normalized sigmoid that can be used to change boring linear relationships anywhere in your code into curves. I have used this for all kinds of things, from figuring out how hard to kick a soccer ball when dribbling at different speeds, to adjusting the input curves from a joystick, to utility-theory-style decision making. It is simple, fast, versatile and robust, and I don’t leave home without it 😉
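
Here is roughly what the whole thing looks like in C++ (a minimal sketch based on the description above; the bee-related tuning values at the end are purely illustrative):

    #include <cmath>

    // Half tunable sigmoid: maps t in [0,1] to [0,1], with 0 -> 0 and 1 -> 1.
    // Large k tends towards a straight line; k close to 0 bends the curve one
    // way; k below -1 bends it the other way. Avoid k between -1 and 0.
    float HalfSigmoid(float t, float k)
    {
        return (k * t) / (1.0f + k - t);
    }

    // Full tunable sigmoid for t in [-1,1]: apply the half sigmoid to |t|
    // and restore the sign of t afterwards.
    float TunableSigmoid(float t, float k)
    {
        float s = HalfSigmoid(std::fabs(t), k);
        return (t < 0.0f) ? -s : s;
    }

    // Example use: shape the bee ventilation probability so that it stays low
    // at moderate hive temperatures and rises sharply near the top of the
    // range. The 0.1 scale and k = 0.2 are illustrative values only.
    float VentilationStartProbability(float temperature01)
    {
        return 0.1f * HalfSigmoid(temperature01, 0.2f);
    }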

GDC Europe 2009 – Design, Constraints and Integrity

November 25, 2009

Here is the talk I gave at GDC Europe this year. I hope you enjoy it.

