Mark Lloyd of Seattle, Washington recently e-mailed me the following essay of his. I invited him to let me post it here on the blog as a guest contribution, and he agreed. Here it is, with light editing.
A couple of years ago I wrote a computer program simulating a basketball team whose players' shooting percentages varied from day to day according to distributions with a mean of 50 percent but with different variances. I then tested the strategy of giving the ball to players who had made two shots in a row against the strategy of giving the ball to a random player. In the long term, giving the ball to the hot hand was the winning strategy.
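For readers who want to see the mechanics, here is a minimal Python sketch of that kind of simulation. It is a reconstruction, not Mark's original program: the five-player roster, the 100-possession days, and the uniform daily-skill model are all illustrative assumptions.

```python
import random

def play_day(skill_spread=0.10, possessions=100, players=5):
    """One simulated 'day': each player's shooting percentage is drawn
    around a 50 percent mean, then two ball-distribution strategies
    are scored against the same day's skills."""
    skills = [0.5 + random.uniform(-skill_spread, skill_spread)
              for _ in range(players)]
    history = [[] for _ in range(players)]   # 1 = make, 0 = miss
    hot_points = rand_points = 0
    for _ in range(possessions):
        # Hot-hand strategy: feed a player whose last two shots went in.
        hot = [i for i in range(players) if history[i][-2:] == [1, 1]]
        shooter = random.choice(hot) if hot else random.randrange(players)
        made = random.random() < skills[shooter]
        history[shooter].append(int(made))
        hot_points += made
        # Random strategy: an independent, randomly chosen player shoots.
        rand_points += random.random() < skills[random.randrange(players)]
    return hot_points, rand_points

random.seed(1)
hot_total = rand_total = 0
for _ in range(2000):   # many simulated days
    h, r = play_day()
    hot_total += h
    rand_total += r
print(hot_total, rand_total)  # the hot-hand strategy scores more overall
```

The edge exists because making two in a row is evidence that a player drew a high skill that day, so the hot-hand rule preferentially feeds the better shooters.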
Thinking about this recently, I came up with an analogy that makes clear the difference between the casino and the hot hand, and the mistake many statisticians make in assuming that belief in the hot hand is just the gambler's fallacy: the belief that one should bet on a roulette wheel that has shown a streak of results.
Let’s say we have an imperfect casino with five roulette tables that, if in perfect condition, would come up red and black 50 percent of the time each. The casino’s tables, however, are in disrepair, and worse, the casino sits above a subway line that occasionally shakes them, so that the red-black odds drift anywhere from 40-60 to 60-40, with a mean of 50-50. There is no way to observe the condition of a wheel except to watch the results of wagers at its table.
Most visitors don't have time for a long-term statistical analysis of the tables. Is wagering on a table that has just hit red or black twice in a row a better strategy than betting on a random color at a random table? Clearly yes: a table that is hitting red 60 percent of the time will hit red twice in a row 36 percent of the time, while a table hitting red only 40 percent of the time will do so just 16 percent of the time. With the tables ranging anywhere between 40-60 and 60-40, this two-in-a-row percentage varies between 16 and 36 percent. So, more often than not, a table that has just hit a color twice in a row is one that currently favors that color.
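To make the arithmetic concrete, here is the same calculation in a few lines of Python, extended one step with Bayes' rule. The equal-odds prior over the two extreme table states is an illustrative assumption: given that prior, two reds in a row make the 60 percent table the more likely one, and the next spin comes up red about 54 percent of the time rather than 50.

```python
# Two illustrative table states: red 60 percent vs. red 40 percent.
p_hot, p_cold = 0.60, 0.40

print(round(p_hot ** 2, 2), round(p_cold ** 2, 2))   # 0.36 0.16

# If the table you watched was equally likely to be in either state,
# two reds in a row shift the odds toward the hot state (Bayes' rule):
p_state_hot = p_hot ** 2 / (p_hot ** 2 + p_cold ** 2)
# Expected chance the next spin is red, given the two-red streak:
p_next_red = p_state_hot * p_hot + (1 - p_state_hot) * p_cold
print(round(p_state_hot, 3), round(p_next_red, 3))   # 0.692 0.538
```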
If every day you walk into the casino, wait for a table to hit a color twice in a row, and then bet on that table and color, over time you will do better than if you bet on a random table and color.
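This daily streak-following strategy can be simulated directly. The sketch below assumes each table's red probability is drawn uniformly between 0.40 and 0.60 each day (one way to model the subway-shaken wheels), and compares streak bets against a baseline of always betting red:

```python
import random

def casino_day(spins=1000, tables=5):
    """One day at the imperfect casino. Each table's red probability
    is drawn uniformly between 0.40 and 0.60 (an assumed model)."""
    streak_wins = streak_bets = base_wins = base_bets = 0
    for _ in range(tables):
        p = random.uniform(0.40, 0.60)
        prev2 = random.random() < p   # two warm-up spins to seed a history
        prev1 = random.random() < p
        for _ in range(spins):
            red = random.random() < p
            if prev1 == prev2:
                # Two in a row: bet that the streak color repeats.
                streak_bets += 1
                streak_wins += (red == prev1)
            # Baseline: bet red on every spin, ignoring streaks.
            base_bets += 1
            base_wins += red
            prev2, prev1 = prev1, red
    return streak_wins, streak_bets, base_wins, base_bets

random.seed(0)
sw = sb = bw = bb = 0
for _ in range(200):   # 200 simulated days
    a, b, c, d = casino_day()
    sw += a; sb += b; bw += c; bb += d
print(round(sw / sb, 3), round(bw / bb, 3))  # streak bets beat the 50% baseline
```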
A real-life non-statistician watching for a hot hand in a basketball game may well be thinking something like: “He hit two in a row; I think he is shooting better than average today.” The non-statistician has also made a correct observation: in basketball, at least, daily shooting percentages vary more than one would expect by chance. Within a single game there is limited information with which to measure this variance, so looking for shooting streaks is an imperfect but reasonable way to find the players with higher underlying skill that day. The statistician's job is to operationalize that observation.
What statisticians will observe is that on any given day, at any single table in the imperfect casino, the percentage of hitting a color after that color has hit twice in a row is no different than after any other sequence (unless a subway train passes underneath mid-measurement). The same holds for a basketball player measured within a single day (or a single hour, depending on the time scale of that shooter's consistency). This observation misleads the statistician into believing the hot hand is no different from a winning streak in a perfect casino; but the statistician is answering a different question than the one the non-statistician is asking.
The statistician should instead be asking how the frequency of streaks varies from day to day; that question more closely operationalizes what the non-statistician is observing, and answering it would make the statistician wealthier as well.
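That day-to-day question can be sketched as a simple test: compare a steady 50 percent shooter against a "streaky" shooter whose daily percentage is drawn uniformly between 40 and 60 percent (an assumed model, matching the casino analogy). Both have the same long-run average, but the streaky shooter's daily rate of back-to-back makes varies noticeably more.

```python
import random, statistics

def day_streak_rate(p, shots=100):
    """Fraction of back-to-back shot pairs in which both shots went in."""
    makes = [random.random() < p for _ in range(shots)]
    both = sum(a and b for a, b in zip(makes, makes[1:]))
    return both / (shots - 1)

random.seed(2)
# Steady shooter: exactly 50 percent every day (the 'perfect casino').
steady = [day_streak_rate(0.5) for _ in range(2000)]
# Streaky shooter: 50 percent on average, but daily skill drifts.
streaky = [day_streak_rate(random.uniform(0.40, 0.60)) for _ in range(2000)]

# The long-run averages are close, but the streaky shooter's daily
# streak rate swings much more from day to day.
print(round(statistics.mean(steady), 3), round(statistics.mean(streaky), 3))
print(round(statistics.stdev(steady), 3), round(statistics.stdev(streaky), 3))
```

The excess day-to-day variance in streak frequency, rather than any within-day conditional percentage, is the signature of the hot hand in this model.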
The world of sports is a world of imperfect casinos. This confounds statisticians.