Advent of Alpha Day 11: Calibration

Let’s suppose you think the price of something should be 2.0. How do you test whether the method you used to derive that price is any good?

Calibration.

There are a few methods for this, but I think there is alpha in just looking at the outputs from the simplest method, particularly when looking at how good somebody else’s prices are.

Take 1000 events, take the prices and turn them into implied probabilities (2.0 = 50%, 5.0 = 20%, 1.5 = 66.6%, and so on). Group the events by implied probability and, for each group, count how often the outcome actually occurred, as a percentage. Plot observed frequency against implied probability as a line; a well-calibrated set of prices sits on the diagonal.
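Here’s a minimal sketch of that process in Python. The data layout (a list of decimal odds and a matching list of 0/1 outcomes) and the binning choices are my assumptions for illustration, not a prescription.

```python
import numpy as np
import matplotlib.pyplot as plt

def calibration_curve(odds, outcomes, n_bins=10):
    """Bin implied probabilities and compare them with observed hit rates."""
    implied = 1.0 / np.asarray(odds, dtype=float)   # 2.0 -> 0.5, 5.0 -> 0.2, ...
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(implied, bins) - 1, 0, n_bins - 1)
    xs, ys = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() == 0:
            continue
        xs.append(implied[mask].mean())    # average implied probability in the bin
        ys.append(outcomes[mask].mean())   # how often the outcome actually occurred
    return np.array(xs), np.array(ys)

# Made-up example data: prices drawn from the true probabilities,
# so the curve should hug the diagonal.
rng = np.random.default_rng(0)
true_p = rng.uniform(0.05, 0.95, 1000)
odds = 1.0 / true_p
outcomes = rng.random(1000) < true_p

xs, ys = calibration_curve(odds, outcomes)
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.plot(xs, ys, marker="o", label="observed")
plt.xlabel("implied probability")
plt.ylabel("observed frequency")
plt.legend()
plt.show()
```

With real prices you’d feed in the odds and results for your 1000 events instead of the simulated ones; everything else stays the same.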

The first time I saw a calibration graph like this was in Nate Silver’s The Signal and The Noise, in which he showed that the line for weather forecasters’ predictions of rain wobbled around the 50% mark, because forecasters knew people hated being told there was a 50% chance of rain, so they would move the number to 40% or 60% instead.

Thing is, I then picked up a load of BSP prices for some lower-league football somewhere in the world, ran this process, and noticed some very strange behaviour. In-play numbers around 1.01 and 1000 for horse racing also wobbled around in ways I hadn’t expected.

Then when I was trying to price my own books, I calibrated and spotted weaknesses. I then took bookmaker prices, adjusted them for the over-round, and found other clear inconsistencies in some markets, for some sports, at some periods of time.
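The post doesn’t say which over-round adjustment was used; the simplest starting point, sketched below, is basic normalisation, which just rescales the implied probabilities so they sum to 1. Other methods (power, Shin, and so on) exist and can matter for longshots.

```python
def remove_overround(decimal_odds):
    """Convert decimal odds to probabilities and strip the over-round by rescaling."""
    implied = [1.0 / o for o in decimal_odds]
    book = sum(implied)              # e.g. 1.05 means a 5% over-round
    return [p / book for p in implied]

# Example: a 1X2 market quoted at 2.0 / 3.4 / 3.8
probs = remove_overround([2.0, 3.4, 3.8])
print(probs, sum(probs))             # probabilities now sum to 1
```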

Sometimes drawing a graph is all you need for an idea. Once you have a bit of code to draw the graphs quickly, all you need is a decent-sized set of data for your target, and you can get to work.

I’d also suggest you try and figure out the “why” before you trade, so you can make sure you’re dealing with the right parameters when making calibration choices.