Beam me up Scotty

I will get to the next part of my AI thing soon, I have not forgotten. Work on my Light engine is progressing well, and when that’s at a state where I can use it for a new Player Manager game (whatever it actually ends up being called), I’ll be continuing direct work on that.

But right now, I have to report on a very disturbing trend. A backlash against a very important concept seems to be occurring in the video game industry (at least): a revolt against a concept that I hold as central to software design. Many video game luminaries have started saying things that seem completely irrational to me. They even get to speak at GDC, with what appear to me to be misguided, ill-thought-through messages.

In short… Abstraction.

Yes, this is becoming the new evil, and it is perplexing, because every single programmer in existence uses abstraction all the time. A function is an abstraction, you know.

Whether it’s Carmack tweeting people to kill an abstraction every day, or Mike Acton trying to tell us all that abstraction is stuff that does nothing, it seems many are getting in on the act. These people deserve respect; however, this trend is quite irrational, and deadly wrong… if not in actuality, then deadly wrong in the message it, perhaps unwittingly, sends.

I wonder how it happened… and yet perhaps it is not so hard to understand. The problem is that the software industry has long suffered from problems stemming from a difficulty in managing complexity. Many approaches have been taken to solve this, and many of these themselves have been misguided. Perhaps one of the worst culprits is C++.

C++ is a language that can be both a data pusher and a functional abstractor and this is the reason why C++ is still my language of choice. For now. But the language is not elegant. It is not easy to use. It is hard to write robust software in it. Unless you are very disciplined in how you use it, it will (so to speak) use you.

Perhaps the problem is that there is a whole generation of programmers now that have been fed the same old view about object orientated programming and have tried to apply it, found it wanting and now reject it. They are moving towards something they call Data Oriented Design (DOD). When I see DOD, I see the way I used to code, and still do code when interfacing to hardware or APIs or building algorithms where data organisation is fundamental to efficiency, both in memory and speed. It is nothing new. It’s what I and many others of my generation cut our teeth on. I was doing it 30 years ago, when I coded games on an Acorn System 1 in 6502 machine code (no assembler!).

I programmed games on the Acorn Atom, the BBC Micro, the Atari 800… and eventually moved on to the Atari ST, the Amiga and then early PCs, which is when Carmack appears to have started. I know all about DOD. I learnt a lot of programming with it. My game designs were even guided by it: it’s a 16 pixel wide sprite, because that’s the width of the bus. You can’t get more DOD than that.

But one thing I learned to do by the time Carmack was getting into gear (this would be after Kick Off was released on Amiga and ST in 1989) was to also abstract. Kick Off on ST and Amiga was well-known for speed. The code was efficient, making use of the hardware and using every trick I could find to keep it fast. The ST version had no hardware sprites or ‘blitter’ and not even hardware scrolling. The sprite routines were hand optimised in assembler. The whole code base was written in assembler. The operating system was ditched and all hardware accessed directly. On the ST the scrolling pitch was made out of only vertical and horizontal lines for speed. The background turf stripes were done using horizontal blank background colour switching.

And yet, the code base had an abstraction layer. I repeat. I used abstraction to separate the game code from the graphics engine. Even though I had to fit it in 256 K. Even though I wanted 50 frames per second.

The proof that it had an abstraction layer is that it took 15 months to write the ST version, and yet I ported it to the Amiga in 6 weeks (including time taken to read the Amiga hardware manual). I made use of all the features of the Amiga that I could, but because I had the gameplay abstracted from the graphic engine, the game itself ported right over.

Later I did the same thing with the port from Amiga to Sega Genesis (Megadrive). The game (minus GUI) was running in 7 weeks in that case. ST, Amiga and Megadrive all shared the same CPU (8 MHz 68000), but the hardware was in each case very different. Particularly challenging was the Megadrive, as it did not have enough VRAM for all the graphics, so I had to develop a DMA caching system to stream the graphics out of ROM into VRAM during vertical and horizontal blanks (both for the pitch and the player animations).

So, for me it has never really been a problem to combine DOD with modular, reusable, portable code. The trick, I learned over time, was to choose the right abstractions.

And this is the thing: choosing the right abstractions is the key to successful programming. C++ provides methods for creating abstractions, and you need to choose the right ones of those. Far too many programmers do not know how to do that properly, and far too many programmers did not grow up with a DOD mindset either.

Perhaps modern programmers have discovered a great truth: that poor Object Orientated Design (OOD) is a disaster. And yet, code with no or too few abstractions is disastrous too (perhaps unless you write only for one platform, hacking code that you don’t care to ever look at again: that’s not my style, as I hope I have demonstrated).

I wonder if this backlash, and this supposed move to this ‘new’ idea of DOD (which to me is as old as the hills), is really because modern programmers never went through the process of learning about DOD first before learning about this other highfalutin stuff such as OOD and classes and architecture and so on. Unable to make OOD work for them, they now seek to throw it all out and make statements to the effect that abstraction is a bad thing.

I don’t know for sure, but I know I see a lot of nonsense. Here are some examples:

From Aras Pranckevičius:

  • Get/Set accessors
  • “If data type changes, won’t have to change code”
  • Bullshit!

No, sorry, but it is not bullshit. Apart from being a recommendation of Scott Meyers in his valuable book (not that I always agree with him) “Effective C++: 55 Specific Ways to Improve Your Programs and Designs”, there are very strong, profound and real reasons for using accessor functions. In fact the only reason for not using them is in the rare cases where the accessor function slows things down. I say rare, because if you inline the accessor function, it will not slow anything down at all.

Accessor functions make sure that you can control what happens when a piece of data is read or set. This is actually very valuable and very hard to retrofit. Ever heard of a certain optimisation technique called “caching”? Yes. You can only do that if you control access to the data. So I will tell you a little secret: I have accessor functions in my vector classes for the coordinate values.
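To make this concrete, here is a minimal sketch (the class and names are mine, purely illustrative, not code from any engine mentioned here): the accessors are defined in the class body, so they are implicitly inline and cost nothing, and because every read and write funnels through them, a cached derived value can be invalidated the moment the data changes.

```cpp
#include <cmath>

// Illustrative sketch: private coordinates, inlined accessors, and a
// cached length that is recomputed only when the data actually changes.
// This caching would be impossible to bolt on later if code elsewhere
// poked at public members directly.
class Point2D {
public:
    float GetX() const { return x; }
    float GetY() const { return y; }

    void SetX(float v) { x = v; lengthValid = false; }
    void SetY(float v) { y = v; lengthValid = false; }

    // Cached length: the square root is computed at most once per change.
    float Length() const {
        if (!lengthValid) {
            cachedLength = std::sqrt(x * x + y * y);
            lengthValid = true;
        }
        return cachedLength;
    }

private:
    float x = 0.0f, y = 0.0f;
    mutable float cachedLength = 0.0f;  // mutable: cache state, not logical state
    mutable bool  lengthValid  = false;
};
```

With optimisation on, the accessor calls compile down to the same loads and stores as direct member access.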

****dum dum dum****

Oh… I can hear a bunch of you screaming now. Oh, the uproar! The cries of “I knew it, Dino Dini is a mad man!”.

Please. Relax. First of all, you must realise that I have yet to encounter a situation where this approach has cost me. Ever.

Second, 99% of the time vectors are operated on as vectors and access to individual coordinates is not required.

Thirdly, it does not slow anything down because they are inlined.

Fourthly, by overloading the [] operator the interface can be quite elegant, perhaps more so than, say, x, y, z.

Fifthly, if I ever use a fixed point implementation (aye, that’ll be an optimisation concern on certain platforms from this ‘ere mad man, capt’n) I can ensure that reading or setting these coordinates from, say, a float will actually work rather than do weird things.

Sixthly, if I want, I can cache the length of the vector so that finding the vector length only requires one square root function call. Yes I know that is not much of a concern these days on most platforms. But, you know, I get a cosy feeling from knowing I could if I wanted to.

Seventhly, since different domains often need data in different formats (for example, the game code might use one kind of vector, and the graphic engine another… but because I want portable code I can’t change the game vector format), I can make sure that duplicate data in different formats (where I choose to make such a trade-off in the interests of the various concerns of the project) can be kept in sync.

Eighthly, I create flexibility for coping with unknown future constraints and platforms. At really no cost. Which gives me a warm cosy feeling too. For free.

Ninthly, if I find a part of the code base that really needs specialised handling for efficiency’s sake, then I can create a special vector structure or class for that as required. For example, my engine may have these fancy vector template classes, but you can be sure that the graphic side of the engine uses good old fashioned vertex arrays of straight floats for meshes. It does not use an array of my fancy vector class.
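For the curious, here is a minimal sketch of the kind of thing points three and four describe (illustrative names only, not my actual template classes): component access goes through an overloaded [] operator, inlined, while the raw float storage stays private.

```cpp
// Illustrative vector class: the components are private, but an
// overloaded operator[] gives an elegant interface. Defined in the
// class body, these functions are implicitly inline, so there is no
// call overhead over direct member access.
class Vec3 {
public:
    Vec3(float x, float y, float z) : c{x, y, z} {}

    float  operator[](int i) const { return c[i]; }  // read a component
    float& operator[](int i)       { return c[i]; }  // write a component

    // Most of the time (point two above) vectors are operated on as
    // vectors, never touching individual coordinates at all:
    Vec3 operator+(const Vec3& o) const {
        return Vec3(c[0] + o.c[0], c[1] + o.c[1], c[2] + o.c[2]);
    }

private:
    float c[3];  // the layout is an implementation detail
};
```

Note that even the mutable operator[] funnels every access through one point in the code, which is exactly the property that makes later changes of representation possible.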

So… you know what I do these days? I always, as a default, use accessor functions. It’s a great approach, because I do not need to think about it. I just do it. One agonising design decision I do not have to make. And it is always possible and indeed easy to make the member variables public whenever I want to, or create a friend class that can do what it likes in the name of efficiency, when I feel it is appropriate.

So there!

Mike Acton was not going to escape from this blog entry, was he? Well, here are his three big lies:

Lie #1: “SOFTWARE IS A PLATFORM”
The mistake Acton makes here is not to define either what software is or what a platform is in this context. What kind of software? What kind of platform? He also states something that for the life of me I can’t track down. Hello, Mike? Who said “Software is a Platform”, and what did they actually mean by it when they said this? The only one a Google search turns up saying it, apart from vendors of specific software solutions who are fond of saying “Our … software is a platform for …”, is, er… you.

Perhaps I can help here a little. For me as a game designer (that’s me with my game designer hat on) I could say that Software is a platform on which I build my games. But I don’t because it is clumsy language. I would actually say that software is the thing I craft to express a game design on a computer system. And believe it or not, there are concepts, such as CAMERA and LIGHT and TANK and SOLDIER and TERRAIN and LINE OF SIGHT and COLLISION that are nice and easy for me to work with that as a designer I don’t give two hoots about how they are implemented, as long as it fits in memory and it runs fast. And I can do this because there are people like you (erm and me with my programmer hat on) who can tell me how many OBJECTS and COLLISIONS and LINE OF SIGHT CHECKS and so on I can have per frame. So I make sure I design within reasonable limits. Abstraction is a wonderful thing indeed. You just have to choose the right abstractions of course. Choose the wrong abstractions and it will spell disaster. That is why one should leave abstractions to software architects who know what they are doing.

When you look at making a game (that’s the whole game and not just a renderer), then indeed the SOFTWARE IS A PLATFORM, if I understand your meaning correctly.

Lie #2: “CODE SHOULD BE DESIGNED AROUND A MODEL OF THE WORLD”
The mistake Acton makes here is, well, where do we start? Logically, a video game creates a model of a world. Thus somewhere the code needs to be designed around the model of the world we are to simulate. But there is some truth to what he is trying to say, although it is awkwardly put, misleading and difficult to understand. What Mike is really saying, I think, is that the code should not be written in the way a designer thinks. In part I agree. But it does present us with a bit of a poser. See, what the designer thinks has to be recreated in the code somewhere. It must be expressed, and preferably in a way that is readable. So the code must reflect the model of the world.

However, the hardware of the computer has no interest at all in this model of the world. It just needs to transform data. It would be bad to model the renderer around the game design. Thus that side of the world is indeed something that should not be designed around a model of the world. Unfortunately, Acton is misleading when he uses the word “CODE”, because there are different kinds of code. There is code that drives the hardware, and there is code that simulates a world. These two kinds of code have different requirements, and should never be treated the same. Something I learned long, long ago…

Fortunately, there is a concept, first spoken (perhaps) by Dijkstra: abstraction. And the thing to realise about solving the problem of getting these two seemingly irreconcilable views to work together is to build an interface between them. This is the primary abstraction in video game development that I use. It is most commonly implemented in the form of a scene graph, which is a marvelous thing. See, if you provide accessor functions to scene graph data, you can abstract the data layout of your scene graph from the use of it. This means that you create your scene graph using data orientated design, but the accessor functions for, as an example, positioning a TANK and setting its orientation, can be object orientated. You model the world in code on the OOD side of things, and shunt data around on the DOD side. And the funny thing is that this works, in just about every game. So I will correct Acton a little: The lie should be “CODE REQUIRED TO SHUNT DATA AROUND SHOULD NOT BE DESIGNED AROUND A MODEL OF THE WORLD”, which to someone like me is stating the obvious… although perhaps it is not so obvious to many programmers who are blinded by OOP (and there are many). Here I think we start to understand the source of Acton’s frustration, but in venting this frustration the way he does, he is in my opinion missing a great chunk of important stuff, and throwing the proverbial baby out with the bath water. And confusing novice programmers too, perhaps.
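To sketch the idea (all names here are illustrative, neither Acton’s code nor mine): the scene graph stores its data as flat structure-of-arrays on the DOD side, while the game code positions a TANK through an object-orientated handle that knows nothing about the layout.

```cpp
#include <cstddef>
#include <vector>

// DOD side: positions live in flat, contiguous per-component arrays.
// The layout can change without touching a line of game code, because
// all access goes through the accessor functions below.
class SceneGraph {
public:
    std::size_t AddObject(float x, float y, float z) {
        xs.push_back(x); ys.push_back(y); zs.push_back(z);
        return xs.size() - 1;  // object handle is just an index
    }
    void SetPosition(std::size_t id, float x, float y, float z) {
        xs[id] = x; ys[id] = y; zs[id] = z;
    }
    float GetX(std::size_t id) const { return xs[id]; }
    float GetY(std::size_t id) const { return ys[id]; }
    float GetZ(std::size_t id) const { return zs[id]; }

    // The renderer can still sweep each component array linearly,
    // cache-friendly, with no per-object indirection.
    const float* RawX() const { return xs.data(); }

private:
    std::vector<float> xs, ys, zs;  // structure-of-arrays layout
};

// OOD side: the handle the game code sees -- "a TANK at a position".
class Tank {
public:
    explicit Tank(SceneGraph& g) : graph(g), id(g.AddObject(0, 0, 0)) {}
    void  MoveTo(float x, float y, float z) { graph.SetPosition(id, x, y, z); }
    float X() const { return graph.GetX(id); }
private:
    SceneGraph& graph;
    std::size_t id;
};
```

Game code models the world in terms of Tank objects; the data being shunted around underneath is laid out however the platform likes it best.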

Lie #3: “CODE IS MORE IMPORTANT THAN DATA”
As a side note, this is really puzzling, because no one I know ever says this. A search on Google reveals links that say that data is more important than code. So I am at a loss, unless what he is saying is that programmers currently are not very good. Is it really true that programmers these days tend to think that code is more important than data? Experienced programmers do know that it is the data that matters. The starting point of all software design is data. Having said that, I would add that code is really just another form of data too. But I’ll save that for another time.

In conclusion, I named this blog entry Beam Me Up Scotty, because I am seriously concerned over what is going on. I am hoping that it is all a big misunderstanding and that soon everything will be alright again. In the meantime, I’d appreciate it if Scotty would warm up the old Heisenberg compensators and get me the hell out of here, so I can continue work on my Light Engine abstractions. After all, the art of programming is, at its heart, the art of abstraction, isn’t it?

7 Responses to “Beam me up Scotty”

  1. steve Says:

    Great post. This assault on abstraction is getting a little out of hand. I’m a fan of the DoD stuff when it is about “here’s how to write performance critical systems that deal with large numbers of things.” But when it gets into “this is a software revolution! Everything is new!” territory I get turned off, and this post did a pretty good job of articulating why.

    I do have one bone to pick – I hate, hate, hate scene graphs. Now maybe what you are calling a scene graph isn’t what I consider a scene graph, but I find them to be the wrong abstraction most of the time. I’ve written extensively about my feelings on scene graphs: (last of a 3 part series)

    • Dino Dini Says:

      Thanks for your comments.

Aha! Well, TBH, scene graphs are a rather prickly subject. I guess what I mean by a scene graph is a high level database of properties of objects, as abstract as possible. Stuff like: its mesh is “TANK”, it’s at (100, 200, 300), its orientation quaternion is (10, 0, 1, 0), and so on. I actually have not decided on the best implementation for what I call a scene graph, and there are arguments that you should not have one at all. In fact, the choice really depends on the platform. Thus, I usually delegate the scene graph functionality (in the way I mean it) to a platform specific layer. So, although I use scene graphs as a primitive example of polymorphism when teaching, what I actually do in practice is have a thin, but nicely abstract (i.e. independent of implementation) wrapper around whatever scene graph methodology works for any given platform.

      If I told you what I do in my current scene graph, my current opponents would eat me alive. So I am not going to tell. They may not be ready for it yet. But what I will say is that I make sure I can drop in whatever system works for any platform, and that the architecture is sound enough that I know I can optimise when required.

In choosing the correct abstraction level in an engine (and this is a VERY important point) one must exercise extreme caution. OGRE for example tries (I believe; it’s been a couple of years) to make the scene graph implementation cross platform. This is an error in my opinion (and suddenly the DOD-only advocates either ignore me or shout “yay!”). The scene graph implementation should always be tailored to the platform, because a) it’s actually not that much code anyway, b) it may already be provided in some form by the platform, and c) most importantly of all, the performance of the system is vitally dependent on the data structure of the scene graph.

So, I choose the abstraction “cut” deliberately higher than is often seen. The scene graph implementation is throwaway. It’s upgradeable. It can be implemented in an inefficient (in terms of performance) but very quick way when constructing and testing the architecture (to be replaced by something better later). It can be hacked in the craziest way by the craziest programmers to maximise performance, but because there is an abstraction layer (a thin one), none of these details affect the higher level functionality (which is common to all platforms).

      Creating a good architecture is a design problem: it is a kind of sculpting. It’s an art, not a science (and definitely not a religion). It is the thing that excites me the most about programming, because to me programming is an art form. And this art *can* be both efficient and beautiful. This is the true expression of the art of the engineer, in my opinion.

  2. Aras Pranckevičius Says:

    First of all, I should say that I generally agree with your post. Yes, DoD is just a new fancy name for nothing new, just like most Design Patterns are names for things that have existed for decades. And yes, taking extremist stance is almost never good – everything should be done with reason & pragmatism, choosing right tools, designs (and abstractions!) for the job.

About my “Get/Set functions are stupid” part: in my actual talk I was only talking about “they are good because they are future proof with respect to underlying data type changes” (this may very well be lost in the slides).

    And I was talking about purely internal code, where you can change all calling places when the interface changes (the talk is for internal Unity “conference” and is about internal code exclusively). Controlling access to a member variable that was public? When needed, make it private, add accessor function and fix compile errors. It’s not hard at all. Of course, when you’re designing a public API that will be used by someone outside, you can’t afford to do just that. My talk & rant was not about that case though.

    • Dino Dini Says:

OK, thanks for clarifying. A few points though: firstly, it is very important to be clear on these things, otherwise young people who are learning about this stuff can be misled. If it is purely internal, I would agree with you, with some caveats.

Firstly, the future proofing is no joke; it is entirely valid. Perhaps it should be better stated, however: the point is that an accessor function abstracts the implementation of data. This is extremely important. What it means is that on one platform, for instance, X might be a float, while on another X might be a fixed point number (a change of implementation because of different hardware). Having worked on many games that used fixed point (and it is still relevant: I worked on a published DS title recently in which I created a custom physics engine, and the DS only has fixed point in hardware), I can tell you that bugs due to accidental misassignment of integers and floats without conversion tend to be a major pain in the backside.

      Anyway this abstraction does “future proof” in the sense that it helps cope with “future unknowns” or in my chosen parlance, it adds flexibility to the system to cope with future, possibly unpredicted, issues like that one.
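A hypothetical sketch of that point (the names and the 16.16 format choice are mine, not taken from the DS title): the accessors take and return floats, while the storage is fixed point underneath. Code written against the accessors compiles unchanged whichever representation is in use; direct member access would silently mix integers and floats.

```cpp
#include <cstdint>

// Illustrative fixed-point-backed position. On a platform with only
// fixed-point hardware, X is stored as a 16.16 fixed-point integer
// (high 16 bits integer part, low 16 bits fraction), but the interface
// still speaks float. The conversion lives in exactly one place.
class Position {
public:
    void  SetX(float v) { x = static_cast<std::int32_t>(v * 65536.0f); }
    float GetX() const  { return static_cast<float>(x) / 65536.0f; }
private:
    std::int32_t x = 0;  // 16.16 fixed point
};
```

Swap the body of the accessors for a plain float member on float-capable hardware and every caller keeps working, which is the future proofing in question.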

Secondly, the whole point of starting off with accessors is that it can be very hard indeed to retrofit accessor functions when hundreds of lines of code access variables directly. I have had to do it, so I learned to avoid this problem by always using accessors.

      The final point, which I hope to write about at some point, is the idea that it is almost always pretty easy to rip out an abstraction, and almost always hard to add one. So my personal strategy for development is to start off with well architected code with well chosen abstractions, making the ‘sculpted form’ of the codebase as ideal as possible (meaning giving initially the architecture the first priority in the design). It is far easier, in a good architecture, to find inefficiencies and implement optimisations on a system that is flexible, clean and working than it is on a codebase that was ‘optimized’ upfront. There is tons I could talk about on this, so I’ll leave that for another time. It is very important to stress, however, that an ideal architecture is one that allows the wiggle room needed to optimise for a platform. As I have stated elsewhere, the trick to doing this is to carefully choose the right level at which to abstract. Too high, and you end up with a general purpose codebase that cannot easily be ported or applied to specific tasks; too low and you end up with a very specific, inflexible codebase that can only serve a single task (such as a particular game on a particular platform with the sequel rewritten from scratch).

      Thanks for the response.

  3. Gavan Woolery Says:

    I agree with almost everything you say here. Abstraction is often abused as a paradigm, but I think that in terms of program design it is very necessary. I cannot count how many times I have used many different types of data to model something, only to realize that they can all be beautifully and efficiently abstracted into one data type, with or without the abstraction mechanisms built into the given language.

