I was recently challenged (in my opinion irrationally) on my claim that I ‘invented’ my normalised tunable sigmoid function, and that challenge prompted the following post. It got me thinking: it is all very well saying I made or discovered a formula or a technique, but it is a hard thing to prove. So far I have not found the formula anywhere else, but I’d be delighted to know if anyone else has come across it before, and I invite anyone to help me locate another occurrence of it, if one indeed exists (as is highly probable). Since there is no reference I can provide, I thought it might be useful to document how I came to create it. This might inspire and help others to search for and find mathematical solutions which don’t come as obviously and easily as Googling for them.

The starting point was quite simple: in around 2002 I was trying to solve a problem in tuning the playing characteristics of a game, a football game prototype which I spent a year developing with an Italian game developer. The problem was this: I needed to tie the distance the player kicked the ball forward while dribbling to the player’s rate of movement, so that dribbling worked well across a wide range of speeds. I also wanted to adjust the sensitivity of the analog sticks (this being a console game). I still tend to call analog sticks “joysticks”, because that’s the term we used for them way back when I started making games.

Tuning a video game is mostly about changing the way variables relate to each other. For example, the left-right position of an analog stick might be returned as a number between 0 and 255. We must then convert that into some other value: for example, a rotation rate in radians per second, in a range of perhaps -0.2 to 0.2. The most obvious way to do this is something like the following:
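A minimal sketch of that mapping, in Python for illustration (the constants 255 and 0.2 are from the text above; the function name is mine):

```python
def stick_to_rotation(raw_stick):
    # Raw stick position arrives as 0..255; remap it to -1..1,
    # then scale to a rotation rate in radians per second.
    normalised = (raw_stick / 255.0) * 2.0 - 1.0
    return normalised * 0.2  # maximum rate: 0.2 rad/s
```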

We now have a problem, however. This arrangement has only one useful tunable parameter: 0.2, and that is not actually very useful on its own. I can change it to adjust the sensitivity of the controls, but doing so also changes the maximum movement rate. For example, I can tune the above to give finer control by changing 0.2 to 0.1, but then the maximum rotation rate will be halved.

I am not happy with this. I want to be able to change the precision and still have the full range. In fact, being able to do so is extremely useful, not only for, ahem, “joysticks”, but for any situation where a linear relationship is insufficient. But I will stick with the joystick example, because it is probably the easiest to understand.

So, how might we achieve this? First of all we draw a graph for the kind of thing we want:

This is a non-linear function, because it is not a straight line. An input of 0 gives 0 and an input of 1 gives 1. This is desirable because it is much easier to make a general-purpose function when it is normalised: since its output is scaled from zero to one, I can multiply any term in any expression by my function.

The other characteristic is that there is a smooth curve in between. A common way around this problem is to chop the range into sections and give each a different slope, but that is ugly, and (in the case of analog sticks) it suffers from a sudden change in precision at the crossover point, which can be annoying for the player. So instead we want a smooth curve, and ideally a curve we can adjust.

So we know what we are looking for, what next?

Well, the first thing we can do is observe that the curve appears somewhat asymptotic: it tends towards a value, in this case as the input value gets smaller. This suggests that the formula we are looking for will probably have a division in it, with the input term (x) on the bottom.

Let’s try something simple first:
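Presumably the simplest such function, with x on the bottom:

y = 1 / x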

Next we can shift it to the right:
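Assuming the shift that produces the value noted in the next step (a result of 1 at x = 0):

y = 1 / (x + 1)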

Now remember that we are looking for zero at zero, so we can shift it down too (currently the result is 1 at 0):
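That is, presumably, subtracting one so that an input of zero now gives zero:

y = 1 / (x + 1) - 1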

This is looking good, but now we have to fix the value for an input of 1. So how can we change the function to give us that without changing zero giving zero? This is a little more difficult, but after considering it a little we can see that if we express the function as a division, we obtain zero when the top part is zero and one when the top and bottom parts are equal (those being the numerator and denominator, in maths speak). So perhaps we should look for a formula which is a fraction, involves x, and behaves in such a way. We could look at:
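The obvious candidate, given the behaviour described next:

y = x / x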

Most of the time the answer will be 1, except at x = 0, where it is undefined. What can we do to avoid that? Well, we can try leaving the top part as it is (or multiplying it by something), which guarantees that zero gives zero. Then we need to find something for the bottom that equals the top when x = 1. Hmm. We can try a few things out here. How about including some kind of scaling factor on the top? We can do that without changing zero giving zero:
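With a scaling factor k on the top, this becomes:

y = k*x / x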

How can we modify the denominator so that it is equal to k when x = 1? Well, now that the numerator is k when x = 1, we want the denominator to be k there too. That’s not too hard to arrange:
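Assuming the simplest such denominator (it equals k at x = 1, and still leaves zero giving zero):

y = k*x / (x + k - 1)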

This has the right characteristics: zero gives zero, one gives one, and something happens in between that may well be curve-like; we cannot be sure until we try, by plotting. But I can see from the formula that large values of k tend to make the function more linear, as the -1 term on the bottom becomes less significant. Let’s try with k=5:

Aha! A curve. How about with k=2?

These curve the opposite way to the one intended. We can flip it around pretty easily:
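One easy flip is to negate k and simplify, which gives the form quoted back in the comments below as s(x) = a*x / (a - x + 1):

y = k*x / (k - x + 1)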

Plotted for k=2:

And you can follow the original post for the rest of it. Further investigation revealed that, apart from the range between zero and minus one for k, it is possible to choose negative values of k to curve upwards and positive values to curve downwards, providing a single simple formula that creates any amount of curve in either direction between zero and one.
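In code, the resulting function might look like this (a minimal Python sketch; the formula is the s(x) = a*x / (a - x + 1) form quoted in the comments below, and the name is mine):

```python
def tunable_sigmoid(x, k):
    """Normalised tunable sigmoid: maps [0, 1] to [0, 1] with f(0) = 0, f(1) = 1.

    Positive k curves the output downward (finer control near zero);
    k <= -1 curves it upward. Values of k between -1 and 0 are excluded,
    since there the denominator would cross zero inside [0, 1].
    """
    return k * x / (k - x + 1.0)
```

For example, tunable_sigmoid(0.5, 2) gives 0.4, while tunable_sigmoid(0.5, -2) gives about 0.67; both ends stay pinned at zero and one for any valid k.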

As you can see, it was not a particularly difficult or improbable journey. As for whether it is original, or whether someone came up with it before me, how would I know? It is not really possible to search for a mathematical formula by the formula itself! You need the name, and the only name I have is the one I gave it. It probably does exist in some other form, or in some paper somewhere that I am not aware of. But since I am unable to give a reference for its origin, it is only appropriate for me to say, as I did in my article:

Many years ago I went searching for such a thing, and came up with the following function…

January 13, 2012 at 6:33 pm

I met you in 2003 at the Kick Off 2 World Cup organized in Milan… I remember how excited I was about the idea of playing your game in collaboration with Trecision! 🙂

August 23, 2012 at 10:23 pm

Do you have any publications where this sigmoid function is used? I may have found a use for it in a slightly different manner from yours.

August 23, 2012 at 11:42 pm

There are a couple of other articles on my blog, and I teach this to my students. I do not have any paper, though, if that is what you mean. I’d be interested to know of any use you may have found for it 🙂

August 24, 2012 at 9:51 am

After some further research, it looks like your function is similar to the “fast sigmoid” function with some transformations (see h(x)).

f(x) = x / (1 + abs(x))

g(x) = ax / (a + abs(x) + 1)

h(x) = ax / (a + abs(x) - 1)

abs() is the absolute value.

Your function:

s(x) = a*x / (a-x+1) = h(-x) for a = 0.

Using a piecewise function:

t(x) = s(x), x >= 0; -1*s(-x), x < 0

should give you

t(x)=h(-x) for a 1, but its piecewise function s(x) and -1*s(-x) seems to give you a tangent-like function (like an inverse sigmoid).
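The piecewise extension described above can be sketched as follows (Python for illustration; a is assumed positive, and s is the function from the post):

```python
def s(x, a):
    # The normalised tunable sigmoid from the post: s(0) = 0, s(1) = 1.
    return a * x / (a - x + 1.0)

def t(x, a):
    # Odd extension to [-1, 1]: mirror s through the origin for x < 0.
    return s(x, a) if x >= 0 else -s(-x, a)
```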

August 25, 2012 at 4:59 pm

Thanks 🙂 Do you have a reference for the “fast sigmoid” functions? Just so I can connect the dots. Cheers!

August 26, 2012 at 4:08 pm

I found the reference in this paper:

Beiu, et al. Close Approximations of Sigmoid Functions by Sum of Steps for VLSI Implementation of Neural Networks. 1994.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.5332

which cites:

G.M. Georgiou. Parallel Distributed Processing in the Complex Domain. Ph.D. dissertation, Tulane, 1992.

http://dl.acm.org/citation.cfm?id=142812

November 8, 2012 at 10:23 am

And what about “Kick-Off” dribbling? It should be quite a bit simpler? The speed of the player (there is no acceleration) seems really well adjusted. Could you detail the principle?

December 4, 2012 at 12:17 am

Kick Off was simpler and did not use this. In Kick Off it was just engineered with a linear relationship and an offset, if I remember right!