We’re all familiar with the variance of poker, and we’ve all heard that MTT poker has very high variance compared to ring-games or STTs. Some days we play a near-perfect game, always getting our chips in good, and lose to a brutal series of suckouts. On other days we get our chips in way behind, hit our gutshot straight draws, and go on to win a tournament. So how much variance is there in MTTs?

Let’s say we have a group of 1000 online poker players who all play a series of multi-table tournaments. The cost to enter each tournament comprises the buy-in plus a 9% entry fee. The players have a variety of skill levels, and each skill level translates into a certain expectation.

First there are the donkeys, who on average are only expected to win half of their buy-in back. For every $100 buy-in ($109 including the entry fee), each donkey will win an average of $50, resulting in an ROI of -54%. Then there are the sheep, who have an average expectation of winning their buy-in back. Due to the entry fee they still lose money overall, but only at a rate of -8%.

Next come the foxes, skilled practitioners of the game with an average expectation of winning one-and-a-half times their buy-in each time they play, giving them an overall ROI of +38%. Finally there are the wolves, constantly on the prowl for donkeys. These players expect to win double their buy-in, for an overall ROI of +83%.
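The ROI figures above follow directly from the expected winnings and the 9% fee. A minimal sketch, assuming a $100 buy-in (the dollar amount is illustrative; any buy-in gives the same percentages):

```python
# Illustrative sketch: ROI for each player type, assuming a $100 buy-in
# plus the 9% entry fee described above.

BUY_IN = 100.0
FEE_RATE = 0.09
COST = BUY_IN * (1 + FEE_RATE)  # $109 total cost per tournament

# Average winnings per tournament, expressed in buy-ins.
expected_return = {
    "donkey": 0.5,  # wins half a buy-in back on average
    "sheep": 1.0,   # wins the buy-in back, but still loses the fee
    "fox": 1.5,
    "wolf": 2.0,
}

for player, mult in expected_return.items():
    roi = (mult * BUY_IN - COST) / COST
    print(f"{player}: ROI = {roi:+.0%}")
```

Running this reproduces the four figures quoted above: -54%, -8%, +38%, and +83%.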

When these 1000 players sit down and start to play this series of tournaments, there is no telling who is a donkey, a sheep, a fox, or a wolf. All we know is that there must be more donkeys than the combined number of foxes and wolves, because the winners' surpluses have to be paid for by the donkeys' deficits. If there are 50 wolves, then there will need to be 100 more donkeys than foxes for the expectations to balance against the prize pool, so we could have 350 donkeys, 350 sheep, 250 foxes, and 50 wolves. But how can we tell which is which?

Over the long run the foxes and wolves will emerge as winners, but over the shorter term it can be hard to distinguish the different types of players simply from their results, due to the variance in the game. So how many tournaments would this group of players have to play to distinguish amongst them?

1000-man tournaments typically pay out 100 places, with 25% of the prize pool awarded to the 11th–100th places, a further 25% awarded to the 4th–10th places, and a whopping 50% going to the top three places. If 50% of the money is awarded to the top three places, then in general, 50% of each player’s expected winnings should come from finishing in these places (in reality the best players earn more than 50% of their winnings from top threes, but I’m setting that aside for the purpose of this analysis). The average top-three prize is about 167 buy-ins (half of a 1000-buy-in pool split three ways), so from this general rule we can deduce that a donkey will get a top three finish 0.15% of the time, a sheep 0.3%, a fox 0.45%, and a wolf 0.6%.
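The deduction can be sketched in a few lines: divide half of each player type's expected winnings by the average top-three prize to get the top-three probability.

```python
# Sketch of the deduction above: in a 1000-entrant tournament the prize
# pool is 1000 buy-ins, half of which is split among the top three, so
# the average top-three prize is about 166.7 buy-ins.
FIELD = 1000
TOP3_SHARE = 0.50
avg_top3_prize = FIELD * TOP3_SHARE / 3  # ~166.7 buy-ins

# If half of a player's expected winnings come from top-three finishes,
# then P(top 3) = (0.5 * expected winnings) / (average top-three prize).
expected_return = {"donkey": 0.5, "sheep": 1.0, "fox": 1.5, "wolf": 2.0}

for player, ev in expected_return.items():
    p_top3 = 0.5 * ev / avg_top3_prize
    print(f"{player}: P(top 3) = {p_top3:.2%}")
```

This prints 0.15%, 0.30%, 0.45%, and 0.60%, matching the figures above.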

After a series of 100 tournaments, the prize-money results of the four groups of players would be almost completely indistinguishable. Our group of 50 wolves can be expected to post only about 30 top three finishes between them, alongside over 50 from the donkeys, over 100 from the sheep, and over 100 from the foxes. After such a small sample, almost half of the wolves would appear to be donkeys, and almost a third of the donkeys would appear to be foxes or wolves. It is clear we need a much bigger sample size to sort out these beasts.

After 1000 tournaments, we might expect that most of the wolves would emerge from their sheep’s clothing. Well, perhaps, but not as many as you might think. Each donkey would expect to achieve 1.5 top three finishes, each sheep 3.0, each fox 4.5, and each wolf 6.0. These are small numbers, and it is easy to understand that, due to variance, any given donkey, sheep, fox, or wolf might plausibly have anywhere from zero to six or more top threes after a thousand MTTs.
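With 1000 independent tournaments and the small per-tournament probabilities deduced earlier, the number of top threes follows a binomial distribution, so the spread around those means is easy to quantify. A short sketch using only the standard library:

```python
# How widely top-three counts spread after 1000 tournaments. With
# n = 1000 trials and small per-tournament probabilities, the count
# of top-three finishes is binomially distributed.
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

N = 1000
probs = {"donkey": 0.0015, "sheep": 0.003, "fox": 0.0045, "wolf": 0.006}

for player, p in probs.items():
    mean = N * p
    p_zero = binom_pmf(0, N, p)  # chance of no top threes at all
    p_at_most_six = sum(binom_pmf(k, N, p) for k in range(7))
    print(f"{player}: mean {mean:.1f} top threes, "
          f"P(none) = {p_zero:.1%}, P(six or fewer) = {p_at_most_six:.1%}")
```

The output shows the overlap directly: even a wolf has a small chance of zero top threes, while a fair fraction of donkeys will post several, so the count ranges of the four types overlap heavily at this sample size.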

The first chart shows an estimation of the expected ROI variance of each type of player after 1000 MTTs. We can see that there are large overlaps between the groups, thereby making it difficult to determine which players are in which groups.

So if 1000 MTTs is not enough to distinguish the groups, how high do we need to go? The second chart shows an estimation of the expected ROI variance after a series of 10,000 tournaments. While there are some small overlaps, each group has now emerged and can be clearly distinguished.

After a series of 10,000 MTTs, therefore, most players should be able to establish their long-term ROI expectation in 1000-man tournaments to within perhaps +/-20%. It will take an even longer series of up to 20,000 MTTs to get a more accurate long-term ROI assessment.
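The kind of estimate behind these charts can be reproduced with a rough Monte Carlo sketch. This is not the author's actual model: it assumes a 1000-entrant field, the three payout tiers described earlier, and the simplifying assumption that each tier contributes its pool share of the player's expected winnings (the same rule used to deduce the top-three probabilities).

```python
# Rough Monte Carlo sketch (not the author's model) of ROI spread after
# a series of 1000-man MTTs. Assumes each payout tier contributes its
# share of the pool to the player's expected winnings.
import random

FIELD, FEE = 1000, 0.09
# (pool share of EV contributed by the tier, average prize in buy-ins)
TIERS = [(0.50, FIELD * 0.50 / 3),    # places 1-3
         (0.25, FIELD * 0.25 / 7),    # places 4-10
         (0.25, FIELD * 0.25 / 90)]   # places 11-100

def simulate_roi(ev_buyins, n_tourneys, rng):
    """Simulate one player's overall ROI across n_tourneys tournaments."""
    winnings = 0.0
    for _ in range(n_tourneys):
        r = rng.random()
        cumulative = 0.0
        for share, prize in TIERS:
            cumulative += share * ev_buyins / prize  # P(finish in tier)
            if r < cumulative:
                winnings += prize
                break
    cost = n_tourneys * (1 + FEE)
    return winnings / cost - 1

rng = random.Random(1)
trials = sorted(simulate_roi(2.0, 1000, rng) for _ in range(200))
print(f"wolf after 1000 MTTs: median ROI {trials[100]:+.0%}, "
      f"middle 90% roughly {trials[10]:+.0%} to {trials[190]:+.0%}")
```

Running this for each player type and series length gives spread estimates in the same spirit as the charts: a wolf's realised ROI after 1000 MTTs still ranges over well more than a hundred percentage points, and only at 10,000 MTTs does the spread tighten to the order of +/-20%.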

Variance can be decreased by playing in tournaments with smaller fields. It may take only 3000 tournaments to get within 20% of your long-term ROI expectation in 300-man tournaments, and by the same logic it takes a much shorter series of small-field sit-and-goes to establish a winning record. On the other hand, it would appear impossible for a player to establish their ROI in the PokerStars Sunday Million, because it will only run a few thousand times in a player’s lifetime and has several thousand entrants every week. The same logic applies even more strongly to the WSOP Main Event, and probably to every large buy-in live event ever held (i.e. exclusively live pros cannot be expected to converge on their long-term MTT ROI within their lifetimes).

Browsing through the statistics of some high-profile players in the PocketFives rankings, a number of them have clocked up several thousand MTTs, but I can’t find anyone with a record of over 10,000. The average field sizes are several hundred, so while some of these players are converging on their long-term ROIs, they should still expect some variance along the way. It is worth noting that I’ve only found two players with an ROI of over 100% after 4,000 or more MTTs, and many of the high-profile ranked players with 1500-3000 MTTs under their belts may still be significantly adrift from their long-term expectation. One player with a lot of recent success has a very high ROI from fewer than 650 MTTs; clearly he has a long way to go before he can establish his true expectation.

It would be unfair of me to comment on other players’ statistics without discussing my own. I’ve tracked over 1700 MTTs across several sites, and I’m still far away from establishing my long-term expectation, although I believe I have a good enough track record to establish myself as a winning player. The graph below shows how my ROI has varied over this series; it appears to be stabilizing, but I still need to play thousands more MTTs before I can really be sure.

This analysis should help you to understand the long-term nature of variance in large-field MTTs. A good player can have stretches of hundreds or even thousands of MTTs significantly below their expectation, whereas a not-so-good player can get lucky and appear to have better results over a similar period. Only after several thousand MTTs will the real distribution of donkeys, sheep, foxes, and wolves emerge, and it may take up to 20,000 MTTs to get a true estimate of your long-term expectation in 1000-man MTTs. Even for the most dedicated players, this is going to take a few years, so if you’re not seeing results after a few weeks or months, don’t panic; focus on playing your best game, and eventually the results will come your way.

Author’s Note: The mathematical analysis used to generate the estimates in this article used what I feel to be realistic approximations for the distributions of results based on statistical variance theory. A more rigorous analysis using actual results simulations is beyond the scope of this article.