This past training camp I recorded every GM's training camp inputs and measured each player's attributes before and after the training camp simulation. My goal was to better understand the mystical black box that is training camp. Training camp giveth and training camp taketh away and we all just deal with it. Is Mr. Uter a safer bet than user inputs? Is there actually anything we can do as GMs to really make a difference in training camp outcomes? This post seeks to answer these questions and help us all have more successful training camps.
We'll first tackle the big question: is Mr. Uter a better trainer than us users? I started with a multivariable regression that measured the effect of opting for the recommended inputs on total player attribute change, while controlling for the player's potential and age. But first, let's talk about potential.
I used two measures of potential. I will be referring back to these two methods throughout the post, so remember them. Each method assumes the following scale:
A - Player can achieve a maximum score ranging from 81-100
B - 61-80
C - 41-60
D - 21-40
F - 0-20
The first scale I used simply measures a player's potential as the difference between their current rating in a given attribute and the highest rating allowed by their potential grade. For example:
Anthony Carter - Current Steal rating = 69 - Potential Steal rating = A (100) - Potential = 31
The other scale I used for determining potential categorized each player's current rating into the appropriate ABCDF category and then measured the difference between that category and their potential category. Using the Anthony Carter example again:
Anthony Carter - Current Steal rating = 69 (B) - Potential Steal rating = A - Potential = 1
This scale accounts for the fact that Carter probably doesn't actually have the potential to reach 100 in steals.
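For concreteness, here is a small Python sketch of the two potential measures (the helper names are mine, not the game's; the grade cutoffs follow the scale listed above):

```python
# Grade caps and ranks from the A-F scale described above.
GRADE_CAP = {"A": 100, "B": 80, "C": 60, "D": 40, "F": 20}
GRADE_RANK = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}

def grade_of(rating):
    """Map a 0-100 attribute rating to its letter category."""
    if rating >= 81:
        return "A"
    if rating >= 61:
        return "B"
    if rating >= 41:
        return "C"
    if rating >= 21:
        return "D"
    return "F"

def potential_100(current, potential_grade):
    """Scale 1: distance from current rating to the potential grade's max."""
    return GRADE_CAP[potential_grade] - current

def potential_5(current, potential_grade):
    """Scale 2: number of letter categories between current and potential."""
    return GRADE_RANK[potential_grade] - GRADE_RANK[grade_of(current)]

# Anthony Carter's Steal: current 69, potential A
print(potential_100(69, "A"))  # 31
print(potential_5(69, "A"))    # 1
```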
So, back to that regression... The table below contains the results of the regression using both potential scales.
[Image: Full regression.GIF]
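As a rough sketch of the setup, this kind of regression can be run with plain NumPy least squares (the data values below are invented for illustration; the real dataset is one row per trained player):

```python
# Hypothetical mini-dataset: one row per trained player.
import numpy as np

age          = np.array([21, 27, 31, 23, 29, 20], dtype=float)
potential    = np.array([40, 15,  5, 30, 10, 55], dtype=float)  # 100-scale measure
recommended  = np.array([ 1,  0,  1,  0,  1,  0], dtype=float)  # 1 = Mr. Uter's settings
total_change = np.array([12.0, 3.5, -2.0, 8.0, 0.5, 15.0])

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones_like(age), age, potential, recommended])
coef, *_ = np.linalg.lstsq(X, total_change, rcond=None)

# R-squared: share of variance in total change explained by the model.
pred = X @ coef
ss_res = np.sum((total_change - pred) ** 2)
ss_tot = np.sum((total_change - total_change.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```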
These three simple variables explain 70% of the variance in total training camp attribute change. 70%! Mr. Uter is smart, but he is, after all, just a computer. As expected, age is the key determining factor of training camp outcomes. It is highly significant and has a relatively large, negative coefficient. For those of you less familiar with regression, the coefficients of -4.696 and -4.613 mean that for every year a player ages, you can expect a total training camp outcome about 5 points lower than the previous year, on average and holding all else equal. We, of course, know that this isn't quite how it works, but we'll come back to that later.
Potential is also significant. The 100-scale coefficient indicates that for each point a player sits below the maximum rating of their potential grade, they will gain 0.122 points overall. The 5-scale coefficient suggests that for each attribute currently ranked a category below their potential, they will gain 2.578 points. I personally prefer the 5 scale, although they are equally significant.
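To make those numbers concrete, here is a quick back-of-envelope calculation. The coefficients are the ones quoted above; the age scenario is hypothetical:

```python
# Coefficients quoted in the regression discussion above.
AGE_COEF = -4.65       # rough midpoint of -4.696 and -4.613
POT_100_COEF = 0.122   # per point of 100-scale headroom
POT_5_COEF = 2.578     # per attribute one category below potential

# Holding all else equal, a 21-year-old projects roughly
# 4 * 4.65 = 18.6 more total points of growth than a 25-year-old.
age_gap_effect = -AGE_COEF * (25 - 21)

# Anthony Carter's Steal attribute: 31 points of 100-scale headroom,
# one category of 5-scale headroom.
carter_100 = POT_100_COEF * 31  # ~3.8 expected points
carter_5 = POT_5_COEF * 1       # ~2.6 expected points
print(round(age_gap_effect, 1), round(carter_100, 1), round(carter_5, 1))
```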
To measure the difference in outcomes between players with user-prescribed training and computer-recommended training, I used a dummy variable representing whether the player was trained with the recommended settings. The good news for those of us who use the recommended settings is that the coefficient is positive in each case, suggesting that players trained by Mr. Uter have net changes 1.9 points higher on average. The kind-of good news for those of us who come up with our own training regimens is that neither coefficient is significant: zero lies in the 95% confidence interval for both, meaning there is no statistical difference between the outcomes of players trained by the computer and players trained by users. Plotting average attribute changes of user- versus computer-trained players by age shows that there is no clear winner.
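The "zero lies in the confidence interval" check looks like this in code. The 1.9 coefficient is the one quoted above, but the standard error is a made-up placeholder, since only the coefficient appears in this discussion:

```python
# Significance check for the recommended-training dummy.
coef = 1.9      # quoted coefficient: Mr. Uter's edge in net attribute change
std_err = 1.5   # hypothetical standard error, for illustration only

# 95% confidence interval using the normal approximation (z = 1.96).
ci_low = coef - 1.96 * std_err
ci_high = coef + 1.96 * std_err

# If zero falls inside the interval, the effect is not statistically
# significant at the 5% level.
significant = not (ci_low <= 0.0 <= ci_high)
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f}), significant: {significant}")
```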
[Image: Age by recommended and user.GIF]
I said the lack of difference was kind of good news for those of us who create our own training plans. Why? Well, does our effort even matter? Does it pay off to do the analysis? It could be that some of us are good at it and some are not, which would really confuse the results. Nevertheless, I proceeded with an in-depth analysis of the effect of user inputs on player outcomes.