Along with David Locke, the radio voice of the Utah Jazz, I’ve started a weekly podcast we call NBA Mythbusting. Each week, we’ll discuss a certain NBA belief and look at how the numbers assess it. Then I’ll post that audio here along with some additional supporting stats.
To start things off, we discussed a hot topic in Portland I touched on last week: Do some teams play better on the road than at home, even after accounting for the typical home-court advantage? In the podcast, the main evidence I use is that there is little year-to-year correlation in adjusted home-court advantage, and that over the four years for which I have data, these values largely converge toward zero. Here are the complete four-year rankings:
Team  HCA
DEN   4.3
UTA   4.0
GSW   2.2
CHA   1.5
POR   1.4
IND   1.4
TOR   1.0
CLE   0.7
SAC   0.5
PHX   0.4
WAS   0.4
HOU   0.2
MIL   0.2
CHI  -0.1
ORL  -0.2
ATL  -0.2
SAS  -0.3
NOH  -0.3
LAC  -0.6
MEM  -0.8
DAL  -0.9
OKC  -0.9
PHI  -0.9
LAL  -1.2
DET  -1.2
NJN  -1.3
MIN  -1.5
NYK  -1.9
MIA  -2.0
BOS  -2.4
Just two of these values are larger than the typical home-court advantage (a little more than three points in most seasons, though over four this year). There is ample evidence, here and elsewhere, that Denver and Utah do enjoy a unique advantage because of their altitude, which makes sense given that we see a similar effect in college hoops. At the risk of denigrating home crowds, their impact doesn’t seem large enough over time to make a meaningful difference. Some results here make intuitive sense (Golden State and Portland both ranking in the top five), but other counterintuitive results can’t be ignored (Charlotte sits in between them; Oklahoma City, even without the lame-duck Seattle season, rates as having a reverse advantage).
BONUS NOTE ON YEAR-TO-YEAR VALIDITY: While looking for other references to Thinking, Fast and Slow and the NBA, I came across a post on the NBA Geek blog questioning plus-minus, with my treatise on evaluating basketball players as the (counter)example. The conclusion:
If a measurement is horribly inconsistent over time, there are two possibitities (sic):
- Whatever you are measuring is itself wildly inconsistent over time.
- You are not measuring what you think you are measuring.
I would submit that there is a third possibility: the sample size is too small to measure the effect in question. Consider three-point shooting. Last season, the correlation in Ray Allen‘s three-point percentage from one game to the next was just 0.224. The standard deviation of his single-game shooting (.247) is so large that a two-standard-deviation interval barely tells us whether he’ll make all of his threes or none of them in a given game. If you looked strictly at the single-game level, you’d conclude that three-point shooting is not a skill. Obviously that is not the case once we aggregate over more observations.
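To see why single games are so noisy even for a genuinely skilled shooter, here is a minimal simulation. The numbers are hypothetical (a true 40 percent shooter taking five threes per game, not Allen’s actual attempt rates), chosen only to illustrate the point:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TRUE_PCT = 0.40          # hypothetical "true" three-point skill
ATTEMPTS_PER_GAME = 5    # hypothetical attempts per game
GAMES = 82

# Simulate single-game three-point percentages for one season.
game_pcts = []
for _ in range(GAMES):
    makes = sum(random.random() < TRUE_PCT for _ in range(ATTEMPTS_PER_GAME))
    game_pcts.append(makes / ATTEMPTS_PER_GAME)

mean = sum(game_pcts) / GAMES
sd = (sum((p - mean) ** 2 for p in game_pcts) / GAMES) ** 0.5

print(f"game-to-game SD: {sd:.3f}")    # large: single games are mostly noise
print(f"season average:  {mean:.3f}")  # close to the true 40 percent skill
```

The game-to-game standard deviation comes out near .22, much like the .247 figure above, while the season-long average sits close to the shooter’s true ability. The skill is real; single games just can’t see it.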
There are two requirements for a measurement to be meaningless: not only must it be inconsistent, but when aggregated over a larger sample, it must also converge to zero. This is, more or less, what we see with adjusted home-court advantage: if we used 10 years, the averages would be smaller than the four-year averages and would show little variation between teams. This is not the case with plus-minus. I have net plus-minus from BasketballValue.com handy for four seasons. During that time, Kevin Garnett has never rated worse than 8.1 points per 100 possessions better than his teammates. If plus-minus weren’t measuring something, we would not see such extreme values over such a long period.
Essentially, each single-season rating is made up of a signal (the player’s ability to help his team win while on the floor) and noise. Over time, the noise converges to zero, so what remains is a much more reliable measurement. (Thinking, Fast and Slow discusses this in explaining the importance of regression to the mean.) That’s how plus-minus can be unreliable for a single season yet become more useful over time.
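The signal-plus-noise idea can be sketched the same way. The figures here are hypothetical (a true impact of +8.0 per 100 possessions and single-season noise with a standard deviation of 4.0, loosely inspired by the Garnett range above), chosen only to show how averaging seasons shrinks the noise:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

SIGNAL = 8.0     # hypothetical true impact, points per 100 possessions
NOISE_SD = 4.0   # hypothetical single-season noise in the rating

def season_rating():
    # One observed season: the stable signal plus random noise.
    return random.gauss(SIGNAL, NOISE_SD)

def avg_rating(n_seasons):
    # Rating averaged over several seasons of the same player.
    return sum(season_rating() for _ in range(n_seasons)) / n_seasons

def spread(samples):
    # Standard deviation of a list of simulated ratings.
    m = sum(samples) / len(samples)
    return (sum((x - m) ** 2 for x in samples) / len(samples)) ** 0.5

one = [avg_rating(1) for _ in range(10000)]
four = [avg_rating(4) for _ in range(10000)]

print(f"single-season spread: {spread(one):.2f}")  # about 4.0
print(f"four-season spread:   {spread(four):.2f}")  # about 2.0: half the noise
```

Averaging four seasons cuts the noise in half (it shrinks with the square root of the number of seasons), while the signal stays put at +8.0. That is exactly why a four-year plus-minus is more trustworthy than any one season.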