In my previous post I used my simulator to derive a set of equations for converting an MLB over/under into an average runs scored per game: essentially a tool for going from the median to the mean for runs scored in a game. In this post I am going to show what the actual empirical data looks like, based on the 1,266 games played so far in 2014. Obviously, the sample size will be a problem. The next step will be to add data from previous seasons to the current 2014 data; I may or may not be able to do that, but here is the 2014 data nonetheless. Keep in mind that this data does not account for the odds, i.e. the market's percentage chance of a game going over or under. It assumes every game has a 50/50 chance of going over or under, which is wrong, but the errors should mostly even out.
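For anyone who wants to reproduce the table below, the calculation is just a group-by: bucket the games by their closing over/under line and average the total runs scored within each bucket. Here is a minimal sketch in Python; the `games` list of `(over_under, total_runs)` tuples is a hypothetical input format for illustration, not my actual data pipeline.

```python
from collections import defaultdict

def rpg_by_line(games):
    """Group games by closing over/under line and average total runs.

    `games` is an iterable of (over_under, total_runs) tuples, e.g.
    (7.5, 9) for a game that closed at 7.5 and finished with 9 runs.
    Returns {line: (count, average_runs_per_game)}, ordered by line.
    """
    buckets = defaultdict(list)
    for line, runs in games:
        buckets[line].append(runs)
    return {
        line: (len(runs), sum(runs) / len(runs))
        for line, runs in sorted(buckets.items())
    }

# Toy usage with made-up games:
games = [(7.5, 9), (7.5, 6), (8.0, 11), (8.0, 7), (8.0, 8)]
for line, (count, avg) in rpg_by_line(games).items():
    print(f"{line:>4} | {count:>3} | {avg:.2f}")
```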
Over/Under | Games | Average Runs Per Game |
---|---|---|
5.5 | 1 | 11.00 |
6 | 9 | 7.33 |
6.5 | 93 | 6.59 |
7 | 245 | 7.59 |
7.5 | 305 | 8.09 |
8 | 204 | 8.45 |
8.5 | 208 | 8.53 |
9 | 126 | 8.77 |
9.5 | 44 | 8.98 |
10 | 18 | 11.50 |
10.5 | 18 | 11.33 |
11 | 1 | 13.00 |
11.5 | 2 | 12.00 |
As you can see, the sample size problem makes this data close to unusable, and that is part of what I am trying to show here. If the sample sizes were in the tens of thousands, I would expect the "Average Runs Per Game" column to sit about 0.45 above the over/under line. Our largest bucket is the 7.5 line (305 games), where the average runs scored per game is 8.09, or 0.59 above the line.
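To make that comparison concrete, here is a quick sketch that computes the mean-minus-line offset for each row of the table above, along with the count-weighted overall offset. The numbers are copied straight from the table; nothing else is assumed.

```python
# (over_under, count, average_rpg) rows copied from the table above.
rows = [
    (5.5, 1, 11.00), (6.0, 9, 7.33), (6.5, 93, 6.59),
    (7.0, 245, 7.59), (7.5, 305, 8.09), (8.0, 204, 8.45),
    (8.5, 208, 8.53), (9.0, 126, 8.77), (9.5, 44, 8.98),
    (10.0, 18, 11.50), (10.5, 18, 11.33), (11.0, 1, 13.00),
    (11.5, 2, 12.00),
]

# Per-line offset: how far the empirical mean sits above the line.
for line, count, avg in rows:
    print(f"{line:>4} | n={count:<3} | offset={avg - line:+.2f}")

# Count-weighted average offset across all games in the table.
total_games = sum(count for _, count, _ in rows)
weighted = sum(count * (avg - line) for line, count, avg in rows) / total_games
print(f"weighted offset over {total_games} games: {weighted:+.2f}")
```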