I’ve been attempting to write this for over a week, but now that Joe Lunardi has put out his most ridiculous column in a long time, I figured I needed to pick up the pace.
The following is an up-to-date ranking of college basketball’s elite, based on a method started by VegasWatch here. The method is simple, really. Using KenPom ratings, compare how other teams would have fared playing Team X’s schedule. Today, I’m going to hit on a few of the big outliers compared to Lunardi’s current bracket, and, barring laziness, give updates every few days after big wins/losses.
OK, let me go over the columns, as there’s a lot of information in that graphic. c(W%) and f(W%) are the team on the left’s current winning percentage (excluding games against non-D1 opponents) and its projected winning percentage at the end of the regular season, based on KenPom ratings. Pyth is the team’s current Pythagorean rating, per KenPom. Now we get into the basis of the method. c(SOS) is the average winning percentage we would expect from the six baseline teams (UCLA, Iowa, Oklahoma St, Villanova, Pittsburgh & Wichita St) if each of them played the schedule of the team on the left. Note that those six baseline teams are completely arbitrary. They can be changed, but the general shape of the data will remain the same.
So, if those six teams were to play Arizona’s schedule, we would expect their average win% to be .814. Since Arizona’s actual win% is currently 1.000, we take the difference and end up with the Curr. Diff. column, in this case .186. This entire exercise has been done for all the teams with seeds 1-12 in the current Lunardi bracket, along with some bubble teams.
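For anyone who wants to see the mechanics, here’s a minimal sketch of that calculation. The exact win-probability model isn’t spelled out above, so this assumes the standard log5 conversion of KenPom Pythagorean ratings into single-game probabilities, ignores home/road adjustments, and uses made-up ratings purely for illustration.

```python
# Sketch of the c(SOS) / Curr. Diff. calculation, assuming log5 win
# probabilities from Pythagorean ratings. All ratings are placeholders,
# not real KenPom numbers, and location adjustments are ignored.

BASELINES = {  # the six (arbitrary) baseline teams, placeholder Pyth ratings
    "UCLA": 0.87, "Iowa": 0.91, "Oklahoma St": 0.92,
    "Villanova": 0.90, "Pittsburgh": 0.89, "Wichita St": 0.93,
}

def log5(pyth_a, pyth_b):
    """Probability that a team rated pyth_a beats a team rated pyth_b."""
    return (pyth_a - pyth_a * pyth_b) / (pyth_a + pyth_b - 2 * pyth_a * pyth_b)

def expected_win_pct(pyth, opponent_pyths):
    """Expected win% for a team rated `pyth` over a list of opponents."""
    return sum(log5(pyth, opp) for opp in opponent_pyths) / len(opponent_pyths)

def baseline_win_pct(opponent_pyths):
    """c(SOS): average win% the six baseline teams would post on this schedule."""
    return sum(expected_win_pct(p, opponent_pyths)
               for p in BASELINES.values()) / len(BASELINES)

def schedule_diff(team_win_pct, opponent_pyths):
    """Curr. Diff.: the team's actual win% minus the baseline expectation."""
    return team_win_pct - baseline_win_pct(opponent_pyths)

# Arizona's played-so-far, D1-only schedule (placeholder opponent ratings)
arizona_played = [0.85, 0.78, 0.92, 0.60, 0.88, 0.70]
print(schedule_diff(1.000, arizona_played))  # actual 1.000 minus c(SOS)
```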
The f(SOS) and End Diff. columns use the same method, only projected through the end of the regular season. I think this is the best way to look at potential seeding, as it paints a picture of how we would expect these resumes to look in six weeks. I probably did a horrible job explaining this, but whatever. Head to the VegasWatch link at the top and search his archives. He probably explains it better than I did.
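In other words, the projected columns are the same computation run over the full regular-season schedule (games played plus games remaining), with the team’s own record replaced by its KenPom-projected f(W%). Continuing the placeholder sketch above:

```python
# End-of-season version: same calculation over the full regular-season
# schedule. The remaining-opponent ratings and the team's f(W%) are
# placeholders; in practice both come from KenPom projections.
arizona_remaining = [0.82, 0.90, 0.75, 0.88]
full_schedule = arizona_played + arizona_remaining
projected_win_pct = 0.90                                  # placeholder f(W%)
print(schedule_diff(projected_win_pct, full_schedule))    # the End Diff. column
```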
The first team I want to look at is Lunardi’s current darling, Kansas. Lunardi has the Jayhawks as a one seed, and wrote an article today explaining why. His logic (that their schedule has been super hard) completely ignores the fact that they already have four losses. In order for Kansas to end up as a one seed, you need to use some pretty optimistic projections for the rest of Kansas’ season. Luckily, I’m using some reasonable projections instead. I have Kansas ranked #10 when projecting through the end of the season. (Note: “end of season” means the end of the regular season. This does not factor in the conference tournament.)
Yes, Kansas has played an incredibly difficult schedule. The hardest of all the teams I am tracking, in fact. But just because you have played a difficult schedule does not mean the results of the games shouldn’t matter. With his method, if you played 20 road games against the 20 best teams in college basketball and lost them all, you would have 40 “Winning Points” and would be a 1 seed in the NCAA Tournament. Right.
Look, Kansas is a good team, and if they won the entire tournament this year, no one would be shocked. But to project them as a 1 seed now because they happened to play Villanova, Colorado, Florida and San Diego St., with total disregard for the outcomes of those games, is absolutely insane.
And I thought the Kansas seeding was bad. Then there’s Kentucky. First of all, Kentucky hasn’t been good. Their only quality win was against Louisville. They are actually underperforming what my six baseline teams would be expected to do against the same schedule. I think you can make a better case for them being unranked than being #14. They have one road win. They have done nothing of note this season and are being ranked and projected as if this were 2011. Anyone, Lunardi included, who has a team like Pittsburgh ranked behind, or only barely projected ahead of, Kentucky should not be covering this sport. (Spoiler alert: Lunardi has Pitt as a 5 seed. I have them projecting as the 7th best resume in the country.) While Pitt’s schedule has been easier to date, the two project as essentially equal by season’s end (.805 vs. .806). If you play an equal schedule but have two fewer losses, that has to matter, right?
Anyway, there are a few other major differences between Lunardi and this method that I will touch on later. For now, here are the EnglePomWatch Top 40 projections for the end of the season.