In my work life, I prepare metrics for a living. I spend a lot of time sifting through data and trying to create effective and sustainable models to turn it into information. This information is then used to run large manufacturing facilities that make products that save people’s lives. Creating a metric or a model that is inexact or misleading can have dire consequences, both financial and to public health. So it is paramount that I create models & metrics that are robust and explain the variability that I am trying to manage.

Bad metrics that mislead me and my company are a grave problem. My continued employment and success speak to the fact that I’ve managed to successfully identify and separate the good information from the noise. I achieve this through the use of correlation and the scientific method. Simply put, when I have a set of variables (Y’s) I am trying to manage (Cost, Output, Yield, OEE, Mean Time Between Failures, etc.), I map my process, figure out the x’s (Man, Machine, Method, Environment, Material) and either use existing data to figure out the x’s that matter, create a measuring system to collect the data and then figure out the x’s, or build an experiment to test it out. Rinse and repeat.

It is by this method that we come up with the metrics that allow us to stay in business and succeed. So it is no wonder that I look at the majority of stats for performance in the NBA in dumbfounded amazement.

What I hope to do in this post is to take the data regularly collected for NBA teams for one season (2009-2010) and submit it to the same kind of rigor I would apply in my work life, to see which x’s (stats) I should be looking at if I want to accurately predict my Y (Wins). For the purposes of this discussion (and for your own amusement, blogging and frankly whatever else you crazy kids do with statistical data), I’ve put together all the stats for the 2009-2010 season in one convenient spreadsheet, using data from Andres’ fancy site & Basketball-Reference.com. Using this data I will be able to calculate the correlation for each of my x’s versus wins. Let’s get started.

I’m going to be looking at three sets of stats: NBA box score stats for teams, NBA box score stats for opponents, and predictive stats. The box score stats I won’t explain, but for the predictive stats here’s what I included (Warning: Math Content):

- Hollinger’s **Player Efficiency Rating** (PER), used extensively by ESPN to rank players (explained here; **CAUTION**: you will need a trained math person close by if you have questions). PER is calculated as:

uPER = (1 / MP) * [ 3P + (2/3) * AST + (2 - factor * (team_AST / team_FG)) * FG + (FT * 0.5 * (1 + (1 - (team_AST / team_FG)) + (2/3) * (team_AST / team_FG))) - VOP * TOV - VOP * DRB% * (FGA - FG) - VOP * 0.44 * (0.44 + (0.56 * DRB%)) * (FTA - FT) + VOP * (1 - DRB%) * (TRB - ORB) + VOP * DRB% * ORB + VOP * STL + VOP * DRB% * BLK - PF * ((lg_FT / lg_PF) - 0.44 * (lg_FTA / lg_PF) * VOP) ]

Where

factor = (2 / 3) - (0.5 * (lg_AST / lg_FG)) / (2 * (lg_FG / lg_FT))

VOP = lg_PTS / (lg_FGA - lg_ORB + lg_TOV + 0.44 * lg_FTA)

DRB% = (lg_TRB - lg_ORB) / lg_TRB
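Transcribed into code, the formula is less intimidating. Here’s a minimal Python sketch of uPER as written above (function and argument names are mine, and the league totals passed in would come from your own data, not from Hollinger):

```python
def uper(mp, fg, fga, ft, fta, tp, ast, trb, orb, stl, blk, tov, pf,
         team_ast, team_fg, lg):
    """Unadjusted PER: a direct transcription of the formula above.

    `lg` is a dict of league totals (AST, FG, FT, PTS, FGA, ORB, TOV,
    FTA, TRB, PF). `tp` is made three-pointers (3P).
    """
    factor = (2 / 3) - (0.5 * (lg["AST"] / lg["FG"])) / (2 * (lg["FG"] / lg["FT"]))
    vop = lg["PTS"] / (lg["FGA"] - lg["ORB"] + lg["TOV"] + 0.44 * lg["FTA"])
    drbp = (lg["TRB"] - lg["ORB"]) / lg["TRB"]  # league defensive-rebound share (DRB%)

    return (1 / mp) * (
        tp
        + (2 / 3) * ast
        + (2 - factor * (team_ast / team_fg)) * fg
        + ft * 0.5 * (1 + (1 - team_ast / team_fg) + (2 / 3) * (team_ast / team_fg))
        - vop * tov
        - vop * drbp * (fga - fg)
        - vop * 0.44 * (0.44 + 0.56 * drbp) * (fta - ft)
        + vop * (1 - drbp) * (trb - orb)
        + vop * drbp * orb
        + vop * stl
        + vop * drbp * blk
        - pf * ((lg["FT"] / lg["PF"]) - 0.44 * (lg["FTA"] / lg["PF"]) * vop)
    )
```

Full PER then adjusts uPER for team pace and rescales so the league average is 15; that step is omitted here.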

Got it? Let’s move on.

- The second number is NBA Efficiency, used by the NBA itself. The calculation for this one is:

**NBA Efficiency = (Points + Rebounds + Assists + Steals + Blocks) – ((Field Goals Att. – Field Goals Made) + (Free Throws Att. – Free Throws Made) + Turnovers)**

A little simpler, no?
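As a sketch, that one is a single line of Python (the function name is mine):

```python
def nba_efficiency(pts, reb, ast, stl, blk, fga, fgm, fta, ftm, tov):
    """NBA Efficiency: counting stats accumulated, minus misses and turnovers."""
    return (pts + reb + ast + stl + blk) - ((fga - fgm) + (fta - ftm) + tov)
```

Note the arithmetic it implies: a made two-pointer adds +2 while a miss costs only −1, so any player shooting above 33% improves his number just by shooting more, which is the standard criticism of this metric.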

- Third is Win Score (WS) and its derivations: WS/min, Position-Adjusted WS/min (PAWS/min), predicted Wins per 48 (WP48), and predicted Wins Produced from Win Score.

**Win Score = PTS + REB + STL + ½*BLK + ½*AST – FGA – ½*FTA – TO – ½*PF**

**Win Score/min = WS / minutes played**

**Position-Adjusted Win Score/min (PAWS/min) = WS/min – average Win Score/min for all players @ pos.**

**Predicted WP48 = PAWS/min * 1.617 + 0.100**

**Predicted Wins Produced = Predicted WP48 * Minutes Played / 48**

- Finally, there are Wins Produced and Wins Produced per 48 minutes (WP48) (explained here).
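The Win Score chain above is straightforward to compute. A minimal Python sketch (function names are mine; the position-average rate has to be supplied from league data):

```python
def win_score(pts, reb, stl, blk, ast, fga, fta, tov, pf):
    """Win Score, per the formula above."""
    return pts + reb + stl + 0.5 * blk + 0.5 * ast - fga - 0.5 * fta - tov - 0.5 * pf

def predicted_wp48(ws, minutes, pos_avg_ws_per_min):
    """Position-adjusted Win Score per minute -> predicted Wins Produced per 48."""
    paws_per_min = ws / minutes - pos_avg_ws_per_min
    return paws_per_min * 1.617 + 0.100

def predicted_wins_produced(wp48, minutes):
    """Scale the per-48 rate by minutes actually played."""
    return wp48 * minutes / 48
```

A player exactly at his position average comes out at the 0.100 WP48 baseline, which is what the intercept in the Predicted WP48 formula encodes.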

So now that we’ve explained everything let’s look at some tables & results:

It is interesting to note that the top three non-predictive statistics by predictive power (Opponent Assists, Opponent Points and Opponent Field Goals Made) are all defensive. NBA Efficiency is not much better than those three at 50%. Win Score and its derivatives and PER come in a virtual tie for second place with 70% correlation. Wins Produced stands alone in first with a 94.9% correlation to wins. Despite all the fancy math we saw, of all of these statistics only one was developed using correlation (no prizes for guessing which one).
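For the record, the percentages quoted here are R-squared values: the share of the variance in wins that a linear fit on the statistic explains. A minimal pure-Python version of that computation (no claim this is the exact tool I used; Minitab or Excel gives the same number):

```python
def r_squared(xs, ys):
    """R^2 of a simple linear fit of ys on xs (the square of Pearson's r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance sum
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance sums
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```

Feed it 30 team values of any metric as `xs` and the 30 win totals as `ys`; a perfectly linear relationship returns 1.0, and noise pulls the value toward 0.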

Readers of this blog will also note that we came up with a few metrics here (see this article):

- Using just a team’s own box score stats (**85% correlation over eight seasons**):

**W = 84.0 + 0.0445 FG – 0.0583 FGA + 0.0550 3P – 0.00866 3PA + 0.0176 FT – 0.0170 FTA + 0.0635 ORB + 0.0555 DRB + 0.0118 AST + 0.0683 STL + 0.0112 BLK – 0.0620 TOV + 0.00656 PF**

- Using a team’s & its opponent’s box score stats (**94% correlation over eight seasons**):

**W = 64.6 + 0.0743 FG – 0.0307 FGA + 0.0194 3P + 0.00513 3PA + 0.0397 FT – 0.0155 FTA + 0.0364 ORB + 0.00278 DRB + 0.00332 AST + 0.0100 STL + 0.00308 BLK – 0.0169 TOV – 0.00484 PF – 0.0605 OppFGM + 0.0113 OppFGA – 0.0275 OppFTM + 0.00692 OppFTA – 0.0378 Opp3PM + 0.00461 Opp3PA – 0.0105 OppORB + 0.0135 OppDRB + 0.00032 OppAsst + 0.00181 OppSTL – 0.00751 OppBlk + 0.00544 OppTOV + 0.00214 OppPF**

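Applying the team & opponent model is just the intercept plus a dot product of the coefficients with a team’s season totals. A sketch (the stat keys mirror the labels in the equation; any totals you feed in would be your own data):

```python
# Coefficients from the team & opponent regression above (intercept 64.6).
COEFS = {
    "FG": 0.0743, "FGA": -0.0307, "3P": 0.0194, "3PA": 0.00513, "FT": 0.0397,
    "FTA": -0.0155, "ORB": 0.0364, "DRB": 0.00278, "AST": 0.00332, "STL": 0.0100,
    "BLK": 0.00308, "TOV": -0.0169, "PF": -0.00484,
    "OppFGM": -0.0605, "OppFGA": 0.0113, "OppFTM": -0.0275, "OppFTA": 0.00692,
    "Opp3PM": -0.0378, "Opp3PA": 0.00461, "OppORB": -0.0105, "OppDRB": 0.0135,
    "OppAsst": 0.00032, "OppSTL": 0.00181, "OppBlk": -0.00751, "OppTOV": 0.00544,
    "OppPF": 0.00214,
}
INTERCEPT = 64.6

def predicted_wins(season_totals):
    """Intercept plus the dot product of coefficients with season totals.

    `season_totals` maps stat labels to a team's full-season counts;
    missing stats are treated as zero.
    """
    return INTERCEPT + sum(COEFS[stat] * season_totals.get(stat, 0)
                           for stat in COEFS)
```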

So to recap:

- Wins Produced is clearly the best model of the ones evaluated.
- A guy with a degree, a blog for a hobby, Excel & Minitab, and a free afternoon can develop a productivity metric that correlates much more strongly with wins than the one developed by the leading basketball stat geek for the most influential font of sports information out there, and than the NBA’s preferred statistic.

Some might ask, why is this a big deal? Teams in the NBA, the media and fans are making and evaluating multimillion-dollar decisions based on these bad statistics (and in some cases overly complicated ones; yes, I’m looking at you, PER). People live and die by these teams, and we can statistically prove that their teams are being mismanaged. At the end of the day it may be just sports, but bad statistics should offend everyone.

**Quick Note:**

It’s been pointed out to me that the numbers for Win Score, PER & NBA Efficiency get progressively worse with a larger data set. For 1978 through 2010:

| Metric | Correlation |
| --- | --- |
| Win Score | 60% |
| PER | 28% |
| NBA Efficiency | 28% |

Wins Produced remains at 95% for that data set as well.

**Note #2:**

Professor Berri notes in the Comments:

*“Those results Arturo reports are correct (and there were not adjustments made to the data). One needs to remember that Arturo’s analysis is only based on one year. So his n is 30. This is a very small sample. The larger sample gets closer to what is going on.*

*One should also add that the original one-season result is misleading. PER is adjusted for pace. Win Score is not. At the player level, pace is not really important. But at the team level it matters. Win Score + Pace will explain more than PER for this past season as well.”*

To that end, here’s a version of the spreadsheet with pace-adjusted Win Score.

The final numbers for 2009-2010 are:

| Metric | R-SQ |
| --- | --- |
| NBA Efficiency | 50% |
| PER | 72% |
| Pace-adjusted Win Score | 80% |
| Wins Produced | 95% |

It seems there are lots of deaf ears to go along with big mouths. If someone of this mindset ever gets their hands on a GM position, I would predict that Michael Lewis would be writing a book about them within a few years. In his last book, the guy who had it all figured out, whom no one was listening to, was named Michael Burry. That sounds a lot like Dave Berri.

and this is why you don’t write on 4 hours sleep

Arturo: Are those R-squareds in the table, as indicated? And for your two metrics, you report “correlation” — is that r or R^2? Sounds like your R^2 may actually be the same as PER and WS.

The 1978-2010 results don’t make sense, unless WS/WP is being adjusted for changes in league norms while the other metrics are not. The correlations can’t change this much.

Arturo, I’m curious to hear how you might respond to Guy’s question here. That change in correlation is staggering.

Guy,

The values reported are R-SQ. I had the same thoughts you did. 2010 was an aberration. See Prof. Berri’s response below for full detail.

Thanks, Arturo. Looks like 2010 was quite an outlier.

Getting the aggregate team stats to predict wins is not that hard, of course. Point differential gets you an R^2 of about .95. For example, I believe you could simplify your 2nd metric to just FG, 3P, FT, oppFG, opp3P, and oppFT and achieve the same accuracy.

What we really want to know is how accurately a metric allocates value among a team’s players. At the team level, even a very bad metric will have an R^2 of about .95 as long as it effectively sums to the point differential (like MP*team differential, which assumes every single player is equally valuable on a per minute basis). So a great study IMO would be to measure how well various metrics predict wins in the following year. That is, if you take the 2007-2008 metrics and weight them by 2009-2010 MP, how well do Win Score, PER, your metric #1, SPM, and any other boxscore stat metrics do at predicting 2009-2010 wins (test prior year’s MP too)? Better still, how well do they predict two seasons ahead? The more time you have for team composition to change the better, because that’s the only way to know if a metric is allocating wins properly among players (if a squad remains intact, my team prediction will look good even if my metric rates the worst players as the best and vice-versa).
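The test Guy proposes can be sketched directly: freeze each player’s prior-season per-minute rate, weight it by current-season minutes, and sum across the roster. A hypothetical Python illustration (all names, tables, and the fallback rule are placeholders, not real data):

```python
def project_team_wins(roster, prior_wp48, fallback_wp48):
    """Out-of-sample team projection per Guy's proposal.

    `roster` maps player -> minutes played THIS season.
    `prior_wp48` maps player -> last season's WP48 (or any per-48 metric).
    Players missing from it (rookies, sub-400-minute players) fall back to
    `fallback_wp48`, their actual rate in the season being predicted.
    """
    return sum(
        prior_wp48.get(player, fallback_wp48.get(player, 0.0)) * minutes / 48
        for player, minutes in roster.items()
    )
```

Run this with each candidate metric’s prior-season rates and compare the projections against actual wins; the gap between this out-of-sample R-squared and the same-season one is exactly the information Guy is after.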

Guy,

Some good questions. I agree on the point differential. As for win prediction over time, it gets a little trickier. Both PER & ADJP48 (the raw number) are fairly consistent over time, and ADJP48 correlates highly to wins, but there are significant sources of error: age, rookies & the league-average performance (which really are age and rookies interacting with each other). I can give you a reasonable approximation for Durant next season, but someone like Paul Pierce or John Wall is a harder challenge. So you’ll know some x’s but have to guess at some others. Given that, you could reasonably predict some teams (median age, not a lot of high draft picks) but others not so much. It’s an interesting experiment to consider (and I will).

You don’t have to wait, you can do this for seasons already played as long as the predictive metrics are from prior seasons. For rookies and those with very limited MP in prior season, you can just use their actual stats in the season being predicted. (Since that will be done for all metrics, it shouldn’t advantage one over another.)

Sounds reasonable.

Use the prior season for players with >400 MP (Both seasons)

Rookies and Players at less than 400 MP use actuals.

I’d incorporate an age model as well. (I’d have to show that)

I would expect that all models will perform at less than the final corr. number above.

Should be a monster of a build (but fun).

I’ll put it on the queue.

Those results Arturo reports are correct (and there were not adjustments made to the data). One needs to remember that Arturo’s analysis is only based on one year. So his n is 30. This is a very small sample. The larger sample gets closer to what is going on.

One should also add that the original one-season result is misleading. PER is adjusted for pace. Win Score is not. At the player level, pace is not really important. But at the team level it matters. Win Score + Pace will explain more than PER for this past season as well.

If you build your model using only 1978-2009, what are the residuals in the win totals for 09-10?

Jay,

I assume you mean the model I built using team & opponent stats. When I built it I did not have the opponent stats available for 09-10, but I can certainly address it in a future post (I’m curious as well).

I agree with what Guy said above. The WP model is extremely useful in showing how a team’s total box score summations will sum in team wins.

The question is how that is parsed among the individual players who make up the team. For instance, if each player got 0.6 for a DRB and the other members on the floor got 0.1 (to reward them for the defense that helped cause the missed shot), the team would still get a +1.0 and it would sum towards wins correspondingly, but the valuation of the individual players could change significantly. In that scenario, WP would correlate just as highly, but the way the stat was divvied up amongst players would have been adjusted.

I think that WP having a 0.949 correlation is a great result. But to also assume that same result holds for individual player valuation is possibly misleading. As you show, it’s more like 0.60. And I wonder, with tweaks to the assignment of overall team stats among the players, could that correlation be improved? This ability to even more accurately measure individual players’ performance is the real sought-after value in predicting future team performance (which must account for player movement).

Prof. Berri has stated that since rebounds tend to correlate year to year for players that indicates that each player should wholly receive credit for rebounds grabbed, but I’m not sure if I’m convinced of that yet. FG% for the league as a whole is fairly constant, and thus, rebounds will be as well. I think the level of consistency is not necessarily an indication that the stat should not have shared credit.

I really look forward to your further exploration of year-to-year predictive power.

Westy,

I shared some of your concerns and did some work on year to year correlation. I’ll post some of it at some point but the work I’ve done shows a high correlation in ADJP48 year to year (around 80%+). I do think there are some opportunities around defense & some other things (see here)

[…] Noah and Deng to Denver for Carmelo Anthony (see here – PER likes this trade for the Bulls, god bless PER). Let’s also do what the Bleacher Report article I referenced before suggested and give them […]

[…] wins is the same thing as when we evaluate any model, like WP, WS, PER, etc. Basically, we do this. You predict the number of wins for each team and whoever is the most accurate overall wins. Or […]

PER wasn’t designed to predict team performance at all. It was designed to rate players.

It’s a bad way to assess the contribution of the individual to the team.

If you wanted a fair comparison of PER to Wins Produced, PER should get and have the right to get a team rebounding adjustment (to shift from its stated focus on individual rebounding impact to total team rebounding) and a team defensive adjustment comparable to Wins Produced. If you can adjust one for what it previously left out, you should be able to for the other.

But again PER didn’t intend to be used to project teams. If you want to project team wins against Hollinger compare directly to his current model for that at ESPN. He has been doing it for several years. Have you ever compared against that, if you are dying to make comparisons?

I have, on a case-by-case basis. He does not publish the model for review and evaluation. Dean Oliver does, and Prof. Berri does. And Minitab does not lie to me.

You don’t need the back-end detail of the model to compare against Hollinger’s published projections. Just a thought, to try to move things forward.

I am not saying PER is great at all. It is old, like 15 years old or so.

If WP is great at backward-mapping team wins, woohoo! As if that was so tough. Recent comments at Berri’s site admit that it is better at that than at team prediction.

There remain issues that keep me from using it as-is for player credit, a very different enterprise. The issues are fixable though. Like other metrics are improvable.

Nonetheless, good blog and thanks for the responses.

Fred,

We’re actually working on something along the lines of being able to do similar exercises for PER (it’s just really hard to build). The big problem is that currently, all PER analysis has to be done by hand. That said, the win projections from the ESPN trade machine (based on Hollinger’s model) are fairly ridiculous.

[…] in Stumbling on Wins just buy the book). I’ve covered this before (see The Basics , and here ). In the book (which I recommend you buy ), the finding is that there is diminishing returns with […]

[…] between them. This is what Prof. Berri did. He did it again in Stumbling on Wins. I did it here and at least five other […]

[…] easy to predict outcomes over the course of a season. Here’s a longer explanation and a shorter one of the method. The big insight is that NBA coaches and executives overvalue scoring and undervalue […]

[…] We’ve always known how to accrue value. It’s point margin and it’s super correlated to wins (see the Basics or here for my work on point margin). The key finding with Wins Produced is how to assign that value to box score stats. Again this is highly correlated. The remaining questions all have to do with accountability and our own technical limitations. Wins Produced uses the available box score stats and builds a model to assign value to players based on those stats and the known and confirmed weights. The end result is an effective model with some known and listed deficiencies. Because of the lack of play-by-play/game-by-game information, or the technical challenge of getting it, fair assumptions were made about the numbers not available. For each player, the model assumes his opponent is performing at an average level, and the results that we get by making this assumption are for the most part good and consistent. […]



[…] in their “about” section. In particular, Arturo Galletti has a nice post on the matter here. I’m more a fan of Dean Oliver’s work on the four factors of basketball, which is his […]


Can you explain the “average Win Score/min for all players @ pos.” statistic?