I will get to the next part of my AI thing soon, I have not forgotten. Work on my Light engine is progressing well, and when that’s at a state where I can use it for a new Player Manager game (whatever it actually ends up being called), I’ll be continuing direct work on that.
But right now, I have to report on a very disturbing trend. A backlash is occurring, it seems in the video game industry at least, against a concept I hold as extremely important in software design. Many video game luminaries have started saying things that seem completely irrational to me. They even get to speak at GDC, with what appear to me to be misguided, ill-thought-through messages.
In short… Abstraction.
Yes, this is becoming the new evil, and it is perplexing, because every single programmer in existence uses abstraction all the time. A function is an abstraction you know.
Whether it’s Carmack tweeting that people should kill an abstraction every day, or Mike Acton trying to tell us all that abstraction is stuff that does nothing, it seems many are getting in on the act. These people deserve respect; however, this trend is quite irrational and deadly wrong… if not in actuality, then deadly wrong in the message it, perhaps unwittingly, sends.
I wonder how it happened… and yet perhaps it is not so hard to understand. The problem is that the software industry has long suffered from problems stemming from a difficulty in managing complexity. Many approaches have been taken to solve this, and many of these themselves have been misguided. Perhaps one of the worst culprits is C++.
C++ is a language that can be both a data pusher and a functional abstractor, and this is the reason why C++ is still my language of choice. For now. But the language is not elegant. It is not easy to use. It is hard to write robust software in it. Unless you are very disciplined in how you use it, it will (so to speak) use you.
Perhaps the problem is that there is a whole generation of programmers now that have been fed the same old view about object orientated programming, have tried to apply it, found it wanting and now reject it. They are moving towards something they call Data Oriented Design (DOD). When I see DOD, I see the way I used to code, and still do code when interfacing to hardware or APIs, or when building algorithms where data organisation is fundamental to efficiency, both in memory and speed. It is nothing new. It’s what I and many others of my generation cut our teeth on. I was doing it 30 years ago, when I coded games on an Acorn System 1 in 6502 machine code (no assembler!).
I programmed games on the Acorn Atom, the BBC Micro, the Atari 800… and eventually moved on to the Atari ST, the Amiga and then early PCs, which is when Carmack appears to have started. I know all about DOD. I learnt a lot of programming with it. My game designs were even guided by it: it’s a 16 pixel wide sprite, because that’s the width of the bus. You can’t get more DOD than that.
But one thing I learned to do by the time Carmack was getting into gear (this would be after Kick Off was released on Amiga and ST in 1989) was to also abstract. Kick Off on ST and Amiga was well known for speed. The code was efficient, making use of the hardware and using every trick I could find to keep it fast. The ST version had no hardware sprites or ‘blitter’ and not even hardware scrolling. The sprite routines were hand optimised in assembler. The whole code base was written in assembler. The operating system was ditched and all hardware accessed directly. On the ST the scrolling pitch was made out of only vertical and horizontal lines for speed. The background turf stripes were done using horizontal blank background colour switching.
And yet, the code base had an abstraction layer. I repeat. I used abstraction to separate the game code from the graphics engine. Even though I had to fit it in 256 K. Even though I wanted 50 frames per second.
The proof that it had an abstraction layer is that it took 15 months to write the ST version, and yet I ported it to the Amiga in 6 weeks (including time taken to read the Amiga hardware manual). I made use of all the features of the Amiga that I could, but because I had the gameplay abstracted from the graphic engine, the game itself ported right over.
Later I did the same thing with the port from Amiga to Sega Genesis (Megadrive). The game (minus GUI) was running in 7 weeks in that case. ST, Amiga and Megadrive all shared the same CPU (8 MHz 68000), but the hardware in each case was very different. Particularly challenging was the Megadrive, as it did not have enough VRAM for all the graphics, so I had to develop a DMA caching system to stream the graphics out of ROM into VRAM during vertical and horizontal blanks (both for the pitch and the player animations).
So, for me it has never really been a problem to combine DOD with modular, reusable, portable code. The trick, I learned over time, was to choose the right abstractions.
And this is the thing: choosing the right abstractions is the key to successful programming. C++ provides many methods for creating abstractions, and you need to choose the right ones of those too. Far too many programmers do not know how to do that properly, and far too many programmers did not grow up with a DOD mindset either.
Perhaps modern programmers have discovered a great truth: that poor Object Orientated Design (OOD) is a disaster. And yet, code with no or too few abstractions is disastrous too (perhaps unless you write only for one platform and you hack code that you don’t care to ever look at again: that’s not my style, as I hope I have demonstrated).
I wonder if this backlash, and this supposed move to this ‘new’ idea of DOD (which to me is as old as the hills), is really because modern programmers never went through the process of learning about DOD first, before learning about the other highfalutin stuff such as OOD and classes and architecture and so on. Unable to make OOD work for them, they now seek to throw it all out and make statements to the effect that abstraction is a bad thing.
I don’t know for sure, but I know I see a lot of nonsense. Here are some examples:
From Aras Pranckevičius:
- Get/Set accessors
- “If data type changes, won’t have to change code”
No, sorry, but it is not bullshit. Apart from being a recommendation of Scott Meyers in his valuable book “Effective C++: 55 Specific Ways to Improve Your Programs and Designs” (not that I always agree with him), there are very strong, profound and real reasons for using accessor functions. In fact, the only reason for not using them is in the rare cases where the accessor function slows things down. I say rare, because if you inline the accessor function, it will not slow anything down at all.
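To make the inlining point concrete, here is a minimal sketch (the class and its invariant are invented purely for illustration, not taken from any real code base): an inlined Get/Set pair costs nothing, yet gives the class a single point at which to enforce a rule on every write.

```cpp
#include <cassert>

// Illustrative only: an accessor pair guards a piece of state, so the
// class can enforce an invariant (here, clamping to a valid range) at
// one single point. Defined in the class body, these functions are
// implicitly inline and compile down to a plain load and store.
class Health {
public:
    int Get() const { return value; }
    void Set(int v) {
        if (v < 0) v = 0;        // the rule lives here, once,
        if (v > max) v = max;    // instead of at every write site
        value = v;
    }
private:
    int value = 100;
    int max = 100;
};
```

Had `value` been a public int, retrofitting that clamp later would mean hunting down every write site in the code base.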
Accessor functions make sure that you can control what happens when a piece of data is read or set. This is actually very valuable and very hard to retrofit. Ever heard of a certain optimisation technique called “caching”? Yes. You can only do that if you control the access to the data. So I will tell you a little secret. I have accessor functions in my vector classes for the coordinate values.
****dum dum dum****
Oh… I can hear a bunch of you screaming now. Oh, the uproar! The cries of “I knew it, Dino Dini is a madman!”
Please. Relax. First of all, you must realise that I have yet to encounter a situation where this approach has cost me. Ever.
Secondly, 99% of the time vectors are operated on as vectors, and access to individual coordinates is not required.
Thirdly, it does not slow anything down because they are inlined.
Fourthly, by operator overloading the interface can be made quite elegant, perhaps more so than, say, x, y, z.
Fifthly, if I ever use a fixed point implementation (aye, that’ll be an optimisation concern on certain platforms from this ‘ere mad man, capt’n) I can ensure that reading or setting these coordinates from, say, a float will actually work rather than do weird things.
Sixthly, if I want, I can cache the length of the vector, so that repeatedly querying the length of an unchanged vector requires only one square root function call. Yes, I know that is not much of a concern these days on most platforms. But, you know, I get a cosy feeling from knowing I could if I wanted to.
Seventhly, since different domains often need data in different formats (for example, the game code might use one kind of vector, and the graphic engine another… but because I want portable code I can’t change the game vector format), I can make sure that duplicate data in different formats (where I choose to make such a trade-off in the interests of the various concerns of the project) is kept in sync.
Eighthly, I create flexibility for coping with unknown future constraints and platforms. At really no cost. Which gives me a warm cosy feeling too. For free.
Ninthly, if I find a part of the code base that really needs specialised handling for efficiency’s sake, then I can create a special vector structure or class for that as required. For example, my engine may have these fancy vector template classes, but you can be sure that the graphics side of the engine uses good old fashioned vertex arrays of straight floats for meshes. It does not use an array of my fancy vector class.
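A couple of those points, the inlining and the cached length in particular, can be sketched in a few lines. This Vec3 is illustrative only, not my actual vector class:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a vector with coordinate accessors (names invented for
// this example). Because every write goes through a Set function, the
// class can cache the length and invalidate the cache on modification,
// something that is impossible with public x, y, z members.
class Vec3 {
public:
    float X() const { return x; }
    float Y() const { return y; }
    float Z() const { return z; }
    void SetX(float v) { x = v; dirty = true; }
    void SetY(float v) { y = v; dirty = true; }
    void SetZ(float v) { z = v; dirty = true; }
    float Length() const {
        if (dirty) {
            cachedLength = std::sqrt(x * x + y * y + z * z);
            dirty = false;
        }
        return cachedLength;  // repeated calls skip the square root
    }
private:
    float x = 0, y = 0, z = 0;
    mutable float cachedLength = 0;
    mutable bool dirty = true;
};
```

Note that the accessors are defined in the class body, so they are implicitly inline: reading a coordinate costs no more than reading a public member would.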
So… you know what I do these days? I always, as a default, use accessor functions. It’s a great approach, because I do not need to think about it. I just do it. One agonising design decision I do not have to make. And it is always possible and indeed easy to make the member variables public whenever I want to, or create a friend class that can do what it likes in the name of efficiency, when I feel it is appropriate.
Mike Acton was not going to escape from this blog entry, was he? Well, here are three big lies:
(LIE #1) SOFTWARE IS A PLATFORM
The mistake Acton makes here is not defining either what software is or what a platform is in this context. What kind of software? What kind of platform? He also states something that for the life of me I can’t track down. Hello, Mike? Who said “Software is a Platform”, and what did they actually mean by it when they said this? The only one who seems to have said it, on a Google search, apart from vendors of specific software solutions who are fond of saying “Our …. software is a platform for ….”, is, er… you.
Perhaps I can help here a little. For me as a game designer (that’s me with my game designer hat on), I could say that software is a platform on which I build my games. But I don’t, because it is clumsy language. I would actually say that software is the thing I craft to express a game design on a computer system. And believe it or not, there are concepts, such as CAMERA and LIGHT and TANK and SOLDIER and TERRAIN and LINE OF SIGHT and COLLISION, that are nice and easy for me to work with, and as a designer I don’t give two hoots about how they are implemented, as long as it fits in memory and it runs fast. And I can do this because there are people like you (erm, and me with my programmer hat on) who can tell me how many OBJECTS and COLLISIONS and LINE OF SIGHT CHECKS and so on I can have per frame. So I make sure I design within reasonable limits. Abstraction is a wonderful thing indeed. You just have to choose the right abstractions, of course. Choose the wrong abstractions and it will spell disaster. That is why one should leave abstractions to software architects who know what they are doing.
When you look at making a game (that’s the whole game and not just a renderer), then indeed the SOFTWARE IS A PLATFORM, if I understand your meaning correctly.
(LIE #2) CODE SHOULD BE DESIGNED AROUND A MODEL OF THE WORLD
The mistake Acton makes here is, well, where do we start? Logically, a video game creates a model of a world. Thus somewhere in the code it needs to be designed around the model of the world we are to simulate. But there is some truth to what he is trying to say, although it is awkwardly put, misleading and difficult to understand. What Mike is really saying, I think, is that the code should not be written in the way a designer thinks. In part I agree. But it does present us with a bit of a poser. See, what the designer thinks has to be recreated in the code somewhere. It must be expressed, and preferably in a way that is readable. So the code must reflect the model of the world.
However, the hardware of the computer has no interest at all in this model of the world. It just needs to transform data. It would be bad to model the renderer around the game design. Thus that side of things is indeed something that should not be designed around a model of the world. Unfortunately, Acton is misleading when he uses the word “CODE”, because there are different kinds of code. There is code that drives the hardware, and there is code that simulates a world. These two kinds of code have different requirements, and should never be treated the same. Something I learned long, long ago…
Fortunately, there is a concept, first spoken of (perhaps) by Dijkstra: abstraction. And the thing to realise about getting these two seemingly irreconcilable views to work together is that you build an interface between them. This is the primary abstraction in video game development that I use. It is most commonly implemented in the form of a scene graph, which is a marvellous thing. See, if you provide accessor functions to scene graph data, you can abstract the data layout of your scene graph from the use of it. This means that you create your scene graph using data orientated design, but the accessor functions for, as an example, positioning a TANK and setting its orientation, can be object orientated. You model the world in code on the OOD side of things, and shunt data around on the DOD side. And the funny thing is that this works, in just about every game.

So I will correct Acton a little. The lie should be “CODE REQUIRED TO SHUNT DATA AROUND SHOULD NOT BE DESIGNED AROUND A MODEL OF THE WORLD”, which to someone like me is stating the obvious… although perhaps it is not so obvious to the many programmers who are blinded by OOP. Here I think we start to understand the source of Acton’s frustration, but in venting it the way he does, he is in my opinion missing a great chunk of important stuff, and throwing the proverbial baby out with the bath water. And confusing novice programmers too, perhaps.
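A minimal sketch of that two-sided arrangement (every name here, SceneGraph, Tank, the member layout, is hypothetical and for illustration only): the data-shunting side keeps node positions in flat, contiguous arrays, while the game side manipulates them through a handle with object orientated accessors.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// DOD side: node positions stored as a structure of arrays, the layout
// the renderer and the memory bus actually care about.
struct SceneGraph {
    std::vector<float> posX, posY, posZ;

    std::size_t AddNode() {
        posX.push_back(0.0f);
        posY.push_back(0.0f);
        posZ.push_back(0.0f);
        return posX.size() - 1;
    }
};

// OOD side: the designer's TANK positions itself through accessors,
// with no knowledge of how the scene graph lays out its data. Change
// the layout and only this interface needs to follow.
class Tank {
public:
    explicit Tank(SceneGraph& sg) : graph(sg), node(sg.AddNode()) {}
    void SetPosition(float x, float y, float z) {
        graph.posX[node] = x;
        graph.posY[node] = y;
        graph.posZ[node] = z;
    }
    float X() const { return graph.posX[node]; }
private:
    SceneGraph& graph;
    std::size_t node;
};
```

The game code models the world in terms of tanks; the engine code sees only arrays of floats to shunt around. The accessor functions are the interface between the two.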
(LIE #3) CODE IS MORE IMPORTANT THAN DATA
As a side note, this is really puzzling, because no one I know ever says this. A search on Google reveals links that say that data is more important than code. So I am at a loss, unless what he is saying is that programmers currently are not very good. Is it really true that programmers these days tend to think that code is more important than data? Experienced programmers do know that it is the data that matters. The starting point of all software design is data. Having said that, I would add that code is really just another form of data too. But I’ll save that for another time.
In conclusion, I named this blog entry Beam Me Up Scotty, because I am seriously concerned over what is going on. I am hoping that it is all a big misunderstanding and that soon everything will be alright again. In the meantime, I’d appreciate it if Scotty would warm up the old Heisenberg compensators and get me the hell out of here, so I can continue work on my Light Engine abstractions. After all, the art of programming is, at its heart, the art of abstraction, isn’t it?