by: Professor Dwight Wynne
MOTIVATION
While trying to adjust our expected points model to account for team and opponent strength, we ran into a few issues. As far as we could tell, the only analytics-based rating system out there was the Pablo rankings, and we wanted ratings that (1) are not paywalled, (2) can be reproduced by other interested people without having to guess how or whether the model specifications change, (3) have model settings we can adjust if necessary, and (4) are interpretable once we finally incorporate them into an expected points model.
To satisfy (1) and (2), we are posting ratings on the blog and putting the complete set of R code on Dwight's GitHub for anyone who wants to try it out or modify it.
BASIC THEORY
Our rating system uses a Bradley-Terry model to predict the chance that Team A wins the point when serving against Team B. The idea behind a Bradley-Terry model is that the log-odds of Team A being "preferred" over Team B is linearly related to the difference in ratings. Odds should be relatively familiar if you're used to sports gambling: if Rutgers is listed as a 100:1 underdog, then the odds of Rutgers winning are estimated to be 1/100 (statisticians are weird and compute "odds in favor" rather than the "odds against" more common in gambling), corresponding to about a 0.99% chance to win. The log-odds is just the natural logarithm of the odds: if Rutgers is listed as a 100:1 underdog, then the log-odds of Rutgers winning is ln(1/100), or about -4.6.
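For concreteness, here is that same conversion as R arithmetic (just the Rutgers example from the paragraph above, nothing model-specific):
odds <- 1/100              # odds in favor of Rutgers winning
prob <- odds / (1 + odds)  # ~0.0099, about a 0.99% chance to win
log_odds <- log(odds)      # ~ -4.6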
Instead of modeling on a match-by-match basis, we model on a serve-by-serve basis. This means that in addition to the difference in team ratings, we also adjust for which team is serving (because the serving team starts each rally at a disadvantage) and for any home-court advantage. We fit the model using maximum likelihood estimation; that is, we find the set of ratings and serve/home-court adjustments that maximizes the probability of the serve-by-serve results we actually observed.
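To make that concrete, here is a minimal sketch of how a model like this could be fit in R with glm(). This is an illustration of the approach rather than our actual fitting code, and the serves data frame and its column names are assumptions made up for the example:
# Sketch only: `serves` is assumed to have one row per serve, with columns
# serving_team, receiving_team, serving_at_home (0/1), and point_won
# (1 if the serving team scored, 0 if the receiving team sided out).
fit_bt <- function(serves) {
  teams <- sort(unique(c(serves$serving_team, serves$receiving_team)))
  # Design matrix: +1 in the serving team's column, -1 in the receiving team's
  X <- matrix(0, nrow = nrow(serves), ncol = length(teams),
              dimnames = list(NULL, teams))
  X[cbind(seq_len(nrow(serves)), match(serves$serving_team, teams))] <- 1
  X[cbind(seq_len(nrow(serves)), match(serves$receiving_team, teams))] <- -1
  X <- X[, -1, drop = FALSE]  # drop one team as the reference so the fit is identifiable
  # The intercept absorbs the serving team's baseline disadvantage,
  # serving_at_home picks up home court, and glm() fits by maximum likelihood
  glm(serves$point_won ~ X + serves$serving_at_home, family = binomial())
}
The fitted team coefficients come out on the log-odds scale; the rescaling to the published rating scale is described below.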
To fit the model, we use play-by-play data taken directly from the NCAA website, so that we can be as accurate as possible about which team was serving on each point. We do not use box score data. On one hand, this means we can ignore any matches for which the box score is clearly nonsense. On the other hand, we also have to throw out any matches for which play-by-play data does not exist; the model pretends those matches were never played.
Our ratings are always scaled to have a mean of 1500 and a standard deviation of 500 (a short code sketch of this rescaling appears after the table below). Once we have the team ratings, we simply rank them from 1 to 340 to get our VolleyDork Ranking. Since the scaling may be a little different every week, we're not sure that a small week-to-week change in a team's rating is particularly informative. However, a general rule of thumb for D1 women's volleyball is:
Rating --- Approximate ranking
2500 --- Elite team, top 5-10 in the nation
2250 --- Top 25 team...ish
2000 --- Top 50 team...ish
1750 --- Top 100 team...ish but probably near the cellar for a Power-5 conference
1500 --- Average D1 team
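The rescaling described above is straightforward; here is a minimal sketch, assuming raw_ratings is a named numeric vector of fitted team coefficients (one entry per team):
# Rescale to mean 1500 and SD 500, then rank from 1 (best) to 340
ratings <- 1500 + 500 * (raw_ratings - mean(raw_ratings)) / sd(raw_ratings)
ranking <- rank(-ratings, ties.method = "min")
# Under this rescaling, the "scaling factor" used in the predictions below
# works out to 500 / sd(raw_ratings)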
EXAMPLE PREDICTION
Suppose Minnesota (rating 2564) is playing at Wisconsin (rating 2647). Based on the season so far, the serve adjustment is -539, the home court advantage is +44, and the scaling factor is 1886 (these are the actual values from our first set of ratings).
To find the log-odds of Minnesota scoring a point on serve, we take the difference in ratings, add the (negative) serve adjustment, and subtract the home court advantage because Wisconsin has it. This gives us 2564 - 2647 - 539 - 44 = -666. We then divide the result by the scaling factor: -666/1886 is about -0.353. We "inverse logit" transform the result to get a probability of about 0.413. In other words, when Minnesota is serving, we expect them to score 41.3% of the time and Wisconsin to sideout the other 58.7%.
We then do the same computation for Wisconsin: 2647 - 2564 - 539 + 44 = -412, divide by the scaling factor to get about -0.218, and "inverse logit" transform to get about 0.446. So when Wisconsin is serving, we expect them to score 44.6% of the time and Minnesota to sideout the other 55.4%.
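Both calculations are easy to verify in R, where plogis() is the built-in inverse logit:
# Log-odds of the serving team scoring, divided by the scaling factor,
# then inverse-logit transformed into a probability
p_minn_scores <- plogis((2564 - 2647 - 539 - 44) / 1886)  # ~0.413
p_wisc_scores <- plogis((2647 - 2564 - 539 + 44) / 1886)  # ~0.446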
We can then predict the result of the match using the volleysim package:
minn_wisc <- tibble::tribble(
  ~team,       ~sideout,
  "Wisconsin", 1/(1 + exp(-666/1886)), # chance of Wisconsin sideout when Minnesota is serving
  "Minnesota", 1/(1 + exp(-412/1886))  # chance of Minnesota sideout when Wisconsin is serving
)
volleysim::vs_simulate_match_theor(minn_wisc, process_model = "sideout")
The resulting output indicates that Wisconsin has about a 67% chance to win the match, broken down as a 21% chance of winning in 3 sets, a 26% chance of winning in 4, and a 20% chance of winning in 5.