Archive for the ‘C++’ Category

Abstraction: Adding stuff that does nothing?

August 20, 2010

This post is inspired by a tweet this morning suggesting that abstraction is simply a way of making code that does nothing but call code that does. Of course I disagree. Abstraction in computer programming creates code that does something very important… it abstracts. Following my philosophy of “right tool for the job”, abstraction is the tool you use when you want to hide details from higher level functionality. There is more to writing code than thinking about the steps the processor takes to execute it.

The world of programming can be split roughly in two: those programmers who understand the tool of abstraction and those who do not. In my experience, programmers who properly understand the tool of abstraction and its uses are a minority, even though all programmers use abstraction in some shape or form.

To demonstrate what abstraction is I like to use a real world example: good old fashioned post. When you write a letter to someone, you execute a delivery transaction by placing the letter in an envelope. Upon the envelope you write instructions for its delivery. The postal service is then able to deliver the letter without concern for its contents. The machines and personnel that execute delivery really do not care about what the mail contains, only about the instructions for where the package of information should go.

The above scenario is very similar to software abstraction. The letter has an implementation (the contents) which is encapsulated (in an envelope). The encapsulation has an interface (the writing upon it) that implements a protocol (the address).
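To put the analogy into code, here is a minimal sketch. The class and function names are my own, invented for this post rather than taken from any real project:

```cpp
#include <iostream>
#include <string>

// The "envelope": an interface that exposes only the delivery protocol (the address).
class Deliverable
{
public:
    virtual ~Deliverable() {}
    virtual const std::string& GetAddress() const = 0;  // the writing upon the envelope
};

// The "letter": the implementation, whose contents the postal service never reads.
class Letter : public Deliverable
{
public:
    Letter(const std::string& address, const std::string& contents)
        : m_address(address), m_contents(contents) {}

    virtual const std::string& GetAddress() const { return m_address; }

private:
    std::string m_address;
    std::string m_contents;  // hidden behind the interface
};

// The "postal service": delivers purely through the interface.
void Deliver(const Deliverable& item)
{
    std::cout << "Routing package to: " << item.GetAddress() << std::endl;
}
```

Deliver knows nothing about the contents; it needs only the interface, just as the postal worker needs only the address.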

Can you imagine a world where letters were delivered according to contents? Where, to decide what to do with a letter, it was necessary to read it? Think about all the reasons why this would not be a good plan, and you will have your explanation of why we place letters in addressed envelopes. Although one could say “Envelopes are a way of adding more paper and information to a letter that is not needed by either the person who writes it or the person who reads it”, it would hardly be a fair statement, would it?

Of course, abstraction is not without its pitfalls. Let’s take a look at some of the issues.

First of all there is the problem that we have duplication of information (which translates to more memory usage in computer programs, something that we usually want to avoid in low level coding). The identity of the recipient is probably both on the letter itself and also on the envelope. This duplication is a common cost of abstraction, although sometimes you can create clever solutions (a plastic window on the envelope that shows you the address on the letter – the equivalent of providing a constant reference to a part of the implementation).
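In C++ terms, the plastic window trick might look something like this (reusing the hypothetical Letter class from the sketch above; again, this is only my illustration):

```cpp
// The "plastic window": the envelope does not copy the address, it exposes a
// read-only view of the address already written on the letter.
class WindowedEnvelope
{
public:
    explicit WindowedEnvelope(const Letter& letter) : m_letter(letter) {}

    // A constant reference to a part of the implementation: no duplication.
    const std::string& GetAddress() const { return m_letter.GetAddress(); }

private:
    const Letter& m_letter;
};
```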

Additionally, the abstraction increases the weight of the letter (which may be considered a loss in performance).

Further, there is an interesting cost that you may not have considered: the envelope actually adds some complexity, and thus creates opportunities for things to go wrong that otherwise could not. The address may be incorrect for the contents, for example, resulting in the letter being delivered to the wrong recipient. The letter may shift in the envelope, hiding part of the address in the plastic window (if one is used). The delivery address may be mistaken for the return address, and so on.

Do these disadvantages mean that we should not use the addressed envelope tool? Of course not. They are far outweighed by the worse things that would happen if we did not use addressed envelopes. Such things as:

  • Privacy (it is not desirable for everyone to be able to read the contents)
  • Integrity (if the letter is not protected in an envelope it may become damaged by elemental factors and handling)
  • Speed of processing (although the protocol adds complexity and use of resources, it speeds up the process of delivery through use of a standard protocol that is independent of the contents of the letter)
  • Simplicity (the life of the postal worker is made much easier by removing information they do not require in order to execute their jobs)

Of course, you may say that programming is different. Well, it turns out on analysis (at least it turned out after my own analysis; why not do your own and see if I am right) that even though every programming problem is different, there are issues that arise commonly in all circumstances. By studying the example of letter delivery (if one can get over any prejudice such as “Why on earth is a computer programmer going on about the postal service?”) you will learn about abstraction and then be able to apply it to software abstractions. Which brings me to a final advantage of abstraction: fitting multiple complex scenarios into a single unified abstraction is a great way to become versatile. Why? Because you develop a general purpose pattern that can be applied to situations you have never encountered before.

Abstraction is an organisational tool that can help you maintain flexibility as complexity increases. If you follow my posts, you may recognize this from a previous post called “Beware the Bemusing Triangle”. In it I described the robust FLEXIBILITY – COMPLEXITY – ORGANISATION triangle. To maintain flexibility in an increasingly complex world, one must get organised, and abstraction is the most powerful tool for doing so.

Of course you can go too far. Memos in an office of 4 people do not require addressed envelopes, for example. But this is simply a case of using the right tools for the task. Generally, the more complexity, the greater the need for abstraction.

In video game programming, the most popular programming activity appears to be graphics programming. The irony is that this is probably the least complex part of a video game, compared to asset management, behavior management (also known, inaccurately, as AI) and user interfaces. All stuff that many programmers find boring or at the very least “not sexy”, but all stuff that is absolutely essential to shipping a game.

Well, if you are a graphics programmer, layers of abstraction are likely to get in your way, but it is misguided to think that means abstraction is bad. It is not a question of whether or not to abstract, but where to abstract.

One programmer I worked with many years ago kept moaning at me because I was using virtual methods in a console game. “You can’t do that, it will run like a dog! It will break the cache! blah blah blah blah”. Of course, I know this. I know it because I am a rather experienced programmer (30 years) who has worked on all kinds of platforms, configurations and processors. But I also know that virtual functions are a great way to keep his hacky, messy code away from my code. And I also know that it is far easier to take working, but slow, code and speed it up by adding hacks, than it is to hack from the start and get it to work. So I created an abstraction layer using pure virtual classes. I developed the gameplay on a PC. Most of the PC code shipped on the console without modification. The threatened disaster of virtual function cache breaking did not appear. I did not even need to optimise everything.

How did I do that? How did I commit the cardinal sin of using virtual functions on a processor that hates virtual functions and get away with it? Simple. I chose where to abstract carefully.

Sure, in the depths of the rendering pipeline, virtual functions would be a bad idea. But at the object level, where there may be only 10 or 20 objects being manipulated (or even hundreds), the performance cost of a thin abstraction layer that provides code organisational advantages can hardly be measured. The abstraction I am talking about is at the level of “Place this game object at this location”.
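To give a rough idea of the kind of layer I mean (the class and method names here are invented for this post, not the actual game code):

```cpp
// Gameplay code manipulates objects through a pure virtual interface and never
// touches the platform's rendering internals directly.
class IGameObject
{
public:
    virtual ~IGameObject() {}
    virtual void SetPosition(float x, float y) = 0;  // "place this game object at this location"
    virtual void Update(float deltaTime) = 0;
};

// Each platform supplies its own concrete implementation behind the interface.
class PCGameObject : public IGameObject
{
public:
    PCGameObject() : m_x(0.0f), m_y(0.0f) {}

    virtual void SetPosition(float x, float y) { m_x = x; m_y = y; }

    virtual void Update(float deltaTime)
    {
        (void)deltaTime;  // platform specific movement and drawing would go here
    }

private:
    float m_x, m_y;
};
```

At around a thousand such calls per second, the virtual dispatch costs a few cycles per millisecond; the organisational benefit, on the other hand, is enormous.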

“Hey, Fred”, (real name concealed to protect them), “You know those horrible virtual functions I am using? They are only called a total of about 1000 times per second. That’s a few cycles cost once every millisecond. So this performance impact you keep talking about can hardly be measured. Meanwhile the code works. So please get off my back”.

So next time you have a lead programmer, a colleague or even a teacher speak about the wonders of abstraction, do yourself a favor and try to put any prejudices you have about software engineering away. Remember the letter. Remember that the great programmers find a balance, and that you will never find that balance if you dismiss half of the equation out of hand. You can never choose the right tool if you are unaware of the tool that solves your problem. And you can never learn to be a better programmer if you are prejudiced against the paths that will help you.

The return of Player Manager?

August 16, 2010

Yes, I am working on a new game. This is the sequel to my 1990 game Player Manager that I have been waiting 20 years to do.

Why so long? That’s a topic for another day. What I want to post here is a progress report.

Why now? Because someone to whom I gave an introduction to game programming (in an attempt, a long time ago, to train an apprentice) recently declared that he would be extremely willing to help me work on a new game. He’s David Athay, who now lives and works in the US.

Yes that’s all it took. Someone to say “I’ll help” who could and would actually help… with actual programming. It’s a psychological thing really. Sure I could code it all myself, but… I don’t want to. It’s a lonely road… and I am sick and tired of wrestling with APIs… I want to make games.

Games that I can sell. For actual money. In a world where most of the audience want everything for free, and would be quick to point out “there’s more to life than money”. Money is life. Without it I die of starvation and exposure. I would be quite happy if I could earn enough to live on. iPhone may provide some glimmer of hope there. Maybe.

Of course, both of us are doing this in our spare time. There is no funding, or salaries. The project at this stage is only a few weeks old.

David is working on iPhone primarily, since the current idea is an iPhone product, and like I said I want to actually spend my time working on a game, not learning Objective-C.

I am, however, building the game on OS X, because it is a lot easier to develop on a computer platform rather than messing about with the iPhone development kit. In order to achieve this, I am using the Allegro library which serves my purposes well. I have a representation of the game running natively on OS X.

David maintains the iPhone version, and does the appropriate integration to make sure that everything I do on OS X ports to the iPhone.

This is not as difficult as it sounds, because I have developed an architecture that hides the game code from the platform. So in theory, a simple recompile for iPhone is all that is required to keep the games in sync.
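To give a flavour of what I mean (this is a simplified sketch; the real interfaces in the project are richer and named differently), the game only ever talks to an abstract platform interface, and each build supplies its own implementation:

```cpp
// Simplified sketch; not the real class names.
// The OS X build implements this on top of Allegro; the iPhone build implements
// it on the native APIs that David maintains.
class IPlatform
{
public:
    virtual ~IPlatform() {}
    virtual float GetDeltaTime() const = 0;
    virtual void DrawSprite(int spriteId, float x, float y) = 0;
};

// Gameplay code compiles unchanged against either implementation.
void UpdateAndDrawBall(IPlatform& platform, float& ballX)
{
    ballX += 50.0f * platform.GetDeltaTime();  // move the ball at 50 pixels per second
    platform.DrawSprite(0, ballX, 120.0f);     // sprite 0 stands in for the ball graphic
}
```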

What have I done so far? Well I dug out the old graphics for the Megadrive game Dino Dini’s Soccer (a.k.a GOAL! on Amiga and ST). I recreated a pitch map out of the tile set. I gave these to David who quickly implemented them on iPhone.

I have set up my own custom IDE around the Emacs editor and ported my CoreLib library (with vector classes, debugging functions and string classes and so on) from MSVC to GCC.

I have ported and extended my Property Tree system (more on that in a future post) so that I can make appropriate parts of the game data driven.
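To give the general idea without spoiling that future post, the shape of such a system is roughly this (a generic sketch of the pattern, not my actual Property Tree code):

```cpp
#include <map>
#include <string>

// Generic illustration of a property tree: values kept as strings and organised
// hierarchically, so game data can be tweaked without recompiling.
class PropertyNode
{
public:
    PropertyNode() {}

    ~PropertyNode()
    {
        for (std::map<std::string, PropertyNode*>::iterator it = m_children.begin();
             it != m_children.end(); ++it)
        {
            delete it->second;
        }
    }

    void SetValue(const std::string& key, const std::string& value) { m_values[key] = value; }

    // Returns 0 if the key is not present.
    const std::string* GetValue(const std::string& key) const
    {
        std::map<std::string, std::string>::const_iterator it = m_values.find(key);
        return (it != m_values.end()) ? &it->second : 0;
    }

    // Creates the child node on first access, so nested paths can be built up freely.
    PropertyNode& GetChild(const std::string& name)
    {
        PropertyNode*& child = m_children[name];
        if (child == 0)
            child = new PropertyNode();
        return *child;
    }

private:
    PropertyNode(const PropertyNode&);             // non-copyable: children are owned pointers
    PropertyNode& operator=(const PropertyNode&);

    std::map<std::string, std::string> m_values;
    std::map<std::string, PropertyNode*> m_children;
};
```

Usage would then look something like tree.GetChild("player").SetValue("max_speed", "7.5"), with the actual values loaded from data files at startup.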

I have ported my PROC system (this is a software threading/hierarchical finite state machine/event driven system for managing complex behaviors… it actually allows one to write scripts directly in C++… again maybe more on that at a later date).
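Again, without going into the real thing, the general shape of such a system might look like this (a generic sketch of the hierarchical state machine idea, not the actual PROC code):

```cpp
#include <cstddef>

// A process that is updated every frame and decides what runs next.
class Proc
{
public:
    virtual ~Proc() {}
    // Each update returns the process to run next: itself to continue,
    // another Proc to switch state, or NULL when finished.
    virtual Proc* Update(float deltaTime) = 0;
};

// A behaviour written this way reads almost like a script: wait, then act.
class WaitThenKick : public Proc
{
public:
    WaitThenKick() : m_timer(1.0f) {}

    virtual Proc* Update(float deltaTime)
    {
        m_timer -= deltaTime;
        if (m_timer > 0.0f)
            return this;   // still waiting: run again next frame
        // timer expired: kick the ball here, then terminate this process
        return NULL;
    }

private:
    float m_timer;
};
```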

I have created various classes to abstract away platform specific details.

Right now, I have a scrolling pitch with some sprites… and am about to add the sprite animation system.

The game is going to be 2D for a few reasons. First of all, 3D is simply not as good as 2D for this kind of game. Secondly, I am using existing 2D graphics so that I don’t have to find an artist (at least for a while). Finally, I want to focus on gameplay as quickly as possible. This is going to be an iPhone game after all, I am not trying to compete with FIFA or PES in terms of graphics. I want to get going with creating the gameplay and tuning it, because that is what I love to do when I make a game: I focus on the gameplay above all else. It has worked for me in the past too: The graphics on KICK OFF were appalling. Yet it beat off all other games in the UK to win the Industry Dinner award for “Best 16 bit product” in 1989. Yes, it even beat Populous. Perhaps there is a lesson there.

Back then, the computer systems were heavily constrained: 8MHz 68000 processors and 128K of RAM. I wrote everything in assembler out of necessity, and hand optimised critical areas of code in order to keep the game running at 50 FPS.

Things are very different now, not only in terms of computer hardware, but also in terms of my skills. For the past 10 years I have worked on getting the best out of the C++ programming language, and I feel I have reached the point where I have figured out how to get the most out of it: it actually turns out to be a fairly limited set of patterns and design heuristics that enable me to do 90% of everything I need. So, especially since computer hardware is so much faster now, I am using a proper architecture which will hopefully allow me to develop the game quickly, while of course making it easy to port to other platforms. Flexibility is a key part of my strategy.

I am also, for the first time since 1995, working in a completely Microsoft free development environment. Just saying that makes me feel good. Kick Off, Kick Off 2 and Player Manager were developed on the ST (cross assembled for Amiga). GOAL! and Dino Dini’s Soccer were developed on a 486 Dell Unix box. This was Unix before Linux came along. I loved that Unix environment: so powerful, so robust. This was back in 1991: it had a 1024×768 display, a 300MB hard disk and… I think 8MB of RAM. You don’t want to know how much it cost. But it was worth it; it paid for itself easily.

However, since about 1996 I have lived with the hell known as Microsoft. Although I have always had a Unix box handy, it has always been used as a server. Problems with hardware compatibility, the necessity of PC development and so on meant that I was stuck with Mr. Gates’s efforts. These past 14 years of servitude to Mr. Gates’s empire have not been fun. And I have had enough.

But a great thing happened: Apple adopted Unix in OS X. Unfortunately I did not pay much attention until (spurred on by the idea of iPhone development) I got my first Mac last year. Sick and tired of how my PCs would keep going wrong, slow to a crawl and take 10 minutes to reboot… I tried to use the Mac, more to get out of the rut than anything. Now… I consider it my primary machine. Yes, it is what I always wanted: an all-round usable, versatile computer with lots of commercial software and hardware, which runs UNIX. I ain’t going back now… except of course when I have to (for example when teaching).

No doubt this will have many of my students (and some colleagues) shaking their heads. Hey, but that is normal. I’m used to it. At least there’s a context to it now, perhaps. I love UNIX. So I love my Mac.

Anyway, I digress.

I am making a public statement that I am developing a new game, in part to make sure I actually do it. But I have no idea how long it will take at this time. It’s early days. About 21 or so in fact. Stay tuned for more…
