Advent of Alpha Day 8: Markov Chains
Markov Chains describe a sequence of possible events where the probability of moving to a given state depends only on your current state.
They are the foundation of a great deal of thinking in the reinforcement learning world, but you don’t need all that to start making use of them.
Suppose a selection is currently priced at 1.5/1.51.
What is the chance of it moving to 1.51/1.52? Or 1.6/1.61? Or 1.8/1.81?
What is the chance of it moving to 1.49/1.5? 1.3/1.31?
Markov Chains are a method by which you can estimate those probabilities, in essence by collecting data on how frequently each move has appeared in the past.
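As a rough sketch of what that counting looks like in practice (the tick-by-tick price series below is made up for illustration):

```python
from collections import defaultdict

def transition_probabilities(prices):
    """Estimate P(next price | current price) by counting observed moves."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(prices, prices[1:]):
        counts[current][nxt] += 1
    # Normalise counts into probabilities per current state
    probs = {}
    for state, nexts in counts.items():
        total = sum(nexts.values())
        probs[state] = {s: n / total for s, n in nexts.items()}
    return probs

# Hypothetical sequence of back prices observed for one selection
observed = [1.50, 1.51, 1.50, 1.51, 1.52, 1.51, 1.50, 1.49, 1.50]
probs = transition_probabilities(observed)
# probs[1.50] now says: moved to 1.51 two-thirds of the time, 1.49 one-third
```

With real data you would feed in far longer price histories, but the mechanics are the same: count transitions, normalise, and you have an empirical transition matrix.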
Now, to be clear, if it were as easy as looking at the current price alone, we’d be done as an industry.
I think the alpha here is to see “state” as having more than one feature (for example current back/lay prices), and perhaps include features like last few prices matched, volume of money matched in last X seconds at the current back or lay price, and so on.
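One way to make that concrete is to treat the state as a hashable tuple of features rather than a single price. Everything in this sketch (the feature names, the bucketing) is an illustrative assumption, not a recipe:

```python
def make_state(back, lay, last_moves, volume_bucket):
    """Bundle several features into one hashable Markov state.

    back/lay: current best back and lay prices.
    last_moves: directions of the last few ticks, e.g. 'u'/'d'/'-'.
    volume_bucket: coarse bucket for money matched recently at the
    current prices ('low'/'mid'/'high') -- all illustrative names.
    """
    return (back, lay, tuple(last_moves), volume_bucket)

# Tuples like this can be used directly as dictionary keys when
# counting transitions, exactly as with single-price states.
state = make_state(1.50, 1.51, ['u', 'u', 'd'], 'high')
```

The trade-off is that every feature you add fragments your data across more states, so each one needs enough history behind it to produce meaningful counts.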
Be careful of over-fitting, but with some judicious training and a bit of collected data, you might be able to put something together that allows you to state with some precision and accuracy “there is a 95% chance this selection’s price is going to move to X.YY in Z seconds”, and trade on that.
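Turning estimated probabilities into something tradeable might look like the following, assuming you already have a dictionary of transition probabilities (the numbers here are invented):

```python
# Hypothetical estimated transition probabilities for one state
probs = {1.50: {1.51: 0.96, 1.49: 0.04}}

def confident_move(probs, state, threshold=0.95):
    """Return (next_state, p) if one transition dominates, else None."""
    nexts = probs.get(state, {})
    if not nexts:
        return None  # never observed this state; no opinion
    best, p = max(nexts.items(), key=lambda kv: kv[1])
    return (best, p) if p >= threshold else None

confident_move(probs, 1.50)  # → (1.51, 0.96): high-confidence move up
confident_move(probs, 1.49)  # → None: unseen state, stay out
```

The threshold is where the over-fitting caution bites: a 96% estimate built from three observations is not the same as one built from three thousand, so in practice you would also want a minimum sample-size check before acting on it.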
Markov Chains are not alpha in themselves, but they’re a way of modelling data that allows you to start to understand where alpha might be if you want to stay focused on price action.