Seven teams show up on your laptop screen: Texas, Oregon, Seton Hall, Drexel, Northwestern, Tennessee, and Xavier. You rank those teams 1 to 7 by clicking on a circle next to each name. Those results are tabulated, and then a new set of teams pops up for your evaluation. The teams on your screen are always exceptionally similar in quality — you’re never asked to compare Kentucky to Miami — and over and over again for days on end you make fine distinctions by clicking the mouse.
This is more or less what the NCAA tournament selection committee is doing right now on the 15th floor of the Westin in Indianapolis. The committee's mouse clicks will determine the field and its seeding, not any prior words uttered, written, read, or absorbed on the importance of a team's body of work, margin of victory, the RPI, the eye test, per-possession efficiency, good wins, running up the score, or strength of schedule.
My guiding assumption is that with each year that goes by, those mouse clicks will be informed by better and better information. Improvement in such matters is achieved less often by persuasion than by mere attrition.
Good information fortifies whatever subjective preference you choose to adopt. Want to elevate “good wins” to near-sanctity and underline anew that tournament selection and seeding are about rewarding victories as opposed to projecting future performance? Think how much more emphasis you could put on good wins if the committee and everyone else were working from a sound and coherent top 50 informed by proven evaluative metrics. That dialectic will be harnessed, and soon.
I’m less concerned about what the RPI does to basketball teams and more concerned about what it does to everyone else. Even people who think they don’t like the RPI can find themselves speaking in a manner that mimics the metric’s severe mathematical foregrounding of strength of schedule at the expense of what is, after all, the issue at hand: actual performance.
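That "severe mathematical foregrounding" is visible in the formula itself. The NCAA's published RPI weighting is 25 percent a team's own winning percentage, 50 percent its opponents' winning percentage, and 25 percent its opponents' opponents' winning percentage — meaning three-quarters of the number measures the schedule, not the team. A minimal sketch, with illustrative (hypothetical) teams:

```python
def rpi(wp, owp, oowp):
    """Standard published RPI weighting:
    25% own winning pct, 50% opponents' winning pct,
    25% opponents' opponents' winning pct."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical Team A: wins 90% of its games against a weak schedule.
team_a = rpi(wp=0.90, owp=0.40, oowp=0.45)   # 0.5375

# Hypothetical Team B: wins just 55% of its games against a strong schedule.
team_b = rpi(wp=0.55, owp=0.60, oowp=0.55)   # 0.5750

print(team_b > team_a)  # True: the .550 team out-rates the .900 team
```

With 75 percent of the weight on schedule, a team that merely breaks even against good opponents can out-rate one that wins nearly everything against bad ones — which is the asymmetry the column is complaining about.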
In 2012, "Yeah, but who did they beat?" has long since decayed from what it would be ordinarily — one common-sense question among several — into something closer to an evaluative sinkhole. If we went back in time to 1973, grabbed John Wooden and UCLA, transported them to the present, stuck them in this year's MEAC, handed them the No. 345-rated strength of schedule, and made sure they lost their conference tournament title game, it is an absolute certainty that it would be said of the results: "Yeah, but who did they beat?" By that test the Bruins would be on the bubble, if they were lucky.
The problem is not that a silly and backward NCAA is so clueless that it still believes in the RPI in 2012. The problem is precisely that the intelligent and meticulous people one finds at the NCAA do not believe in the RPI anymore. (No one does.) Discussion "in the room" is therefore more vague and diffuse than it needs to be.
Conversely if the selection committee had a sorting metric that had the confidence of the people in the room, it would do what all metrics worthy of confidence do. It would give the people in the room an enlightening departure point for further discussion.
We’ll get there. Meantime people will continue to say things like: “Drexel’s strength of schedule is weaker than that of any at-large in years.” It’s a statement that’s self-evidently and logically independent of how well the Dragons play basketball. And, at the risk of sounding dull and insufficiently ironic, the men’s basketball committee should be evaluating how well men’s basketball teams play men’s basketball.
BONUS salute to brevity! For a far more concise version of this post, please refer to my colleague Ken Pomeroy and his recent white paper on this very matter: “I don’t like the RPI.” Somewhere Strunk & White are happy.