Thursday, December 19, 2019

Course and Slope Rating Uncertainties Continued


The USGA estimates Course Ratings to the nearest tenth and Slope Ratings to the nearest point.  As pointed out in a previous post (How Accurate is the Slope System?, 10/8/2012), there is larger uncertainty in the Rating estimates than the USGA cares to admit.  A course that has recently been re-rated by the Southern California Golf Association (see Appendix) demonstrates the spurious accuracy of ratings.  This note is not written to complain about the Ratings, but to illustrate the problems of making accurate ratings.  Rating is not a science; it is an art only in the same sense that finger painting is an art form.  This can be demonstrated by examining the new and old ratings at the re-rated course.

Course Set-Up – A major rating problem occurred when the course was shortened for men (green tees).  The green tees, according to the scorecard, are a combination of white and red tee placements. (Note: There are no green tee markers.)  The actual placement of the green tees, however, is different.  On 15 of the holes where the green tees and red tees should be the same length, the green tees are typically set at least 10 yards behind the red tees.  This is probably done so men can retain the illusion they are not playing from the red tees.  Similarly, when the white and black tees are supposedly shared, the black tees are placed 10-15 yards back from the white tees.  Then there are outright errors in tee placement.  On one hole the green tees are supposed to be set alongside the white tees.  Instead, they are set alongside the red tees, making for an error of some 60 yards.

So what course did the SCGA rate?  Apparently, it rated the course shown on the scorecard, since those are the distances it reported.  In essence, the SCGA has rated a course that does not exist.

Ratings Changes – Sometimes Rating Committees make small changes just to justify their existence.  The small changes in the ratings at this course (e.g., a one-point change in the Slope Rating) are not due to changes in the course, but to changes in the Rating Committee.  The Committee does not have to explain the ratings; it only sends them along to the Club as if they were inscribed in stone.  Could the Committee actually state a physical reason for a Course Rating increasing by 0.1 or a Slope Rating increasing by one point?

Posting 9-Holes or 18-Holes – Courses are rated nine holes at a time.  The 18-hole Course Rating is the sum of the two 9-hole Course Ratings.  The 18-hole Slope Rating is the average of the two 9-hole Slope Ratings rounded to the nearest integer.  The rounding error in the 18-hole Course Rating could be as much as 0.1.  Because the average is rounded, the 18-hole Slope Rating is the same whether the sum of the two 9-hole Slope Ratings is a given odd number or that odd number plus one.  For example, the Course Rating from the Gold tees is 73.7 and the Bogey Rating is 98.5.  The Slope Rating for 18 holes should be 133 (5.381(98.5-73.7) = 133.4).  The two 9-hole Slope Ratings, however, are 130 and 137.  When they are averaged and rounded, the 18-hole Slope Rating is 134.
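For readers who want to check the arithmetic, here is a short sketch in Python.  It assumes the men's Slope constant of 5.381 from the rating formula above and ordinary round-half-up rounding of the final figure; the numbers are the Gold tee values from the Appendix.

```python
# Two routes to an 18-hole Slope Rating from the Gold tee numbers.
# A sketch only; it assumes the men's constant of 5.381 and
# round-half-up rounding of the final value.

def round_half_up(x):
    """Round to the nearest integer, with halves going up."""
    return int(x + 0.5)

course_rating, bogey_rating = 73.7, 98.5
slope_from_18 = round_half_up(5.381 * (bogey_rating - course_rating))
print(slope_from_18)          # 133, from the 18-hole ratings

front_slope, back_slope = 130, 137
slope_from_nines = round_half_up((front_slope + back_slope) / 2)
print(slope_from_nines)       # 134, from averaging the two 9-hole Slopes
```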

Whether a player posts an 18-hole score or two 9-hole scores can make a difference.  Suppose a player shoots a 90 from the Gold tees.  His differential is 13.7 ((90-73.7)113/134).  The table below shows that if he posts various combinations of 90 as 9-hole scores, his differential can be as high as 14.0 and as low as 13.6.  In essence, a player who shoots 40-50 is considered a better player than one who shoots 50-40.

Table

Combined Differential for 9-hole Scores

Front Nine Score     Back Nine Score      Combined
(CR=36.5, SR=130)    (CR=37.2, SR=137)    Differential
50                   40                   14.0
49                   41                   14.0
48                   42                   14.0
47                   43                   13.9
46                   44                   13.9
45                   45                   13.8
44                   46                   13.8
43                   47                   13.8
42                   48                   13.7
41                   49                   13.6
40                   50                   13.6



World Handicap System (WHS) – The USGA Course Rating System is based on taking a player’s best 10 out of 20 differentials.  The WHS will only use a player’s best 8 differentials in calculating his Index.  Therefore, as of January 1, 2020, every Course and Slope Rating will be in error.  Rather than adjust the Ratings, the USGA will just let the Indexes of every player drop by approximately 0.5.

Lessons Learned – This post continues the blog’s efforts to document the uncertainty surrounding the accuracy of Course and Slope Ratings.  Recent changes in the Handicap System only introduce another layer of complexity without an accompanying benefit.  The best example is the Daily Course Rating (DCR), used to correct for bad weather, which is now part of the World Handicap System (WHS).  Below is the equation used by Golf Australia to make that adjustment.

              DCR = SR + [Σ((36 + Par - SR - CPA - m·h - b - S)/(m′·h + b′)²)] / [Σ(1/(m′·h + b′)²) + 1/CSD²]

It is assumed the WHS has a similar equation.  Any regulation that is not understood by those being ruled is not a good one.  Moreover, the Handicap System is marked by rounding errors, measurement errors (see above), random errors, and systematic errors (i.e., sandbaggers).  To believe a quadratic equation can make a significant advance in the equity of competition is myopic.  Sadly, such claims are often made by “quants” and adopted by administrators who are dazzled by the mathematics.  This fulfills the bureaucrat’s need to do something even though it is of little or negative value.

The major lesson in all of this is to not take handicap ratings too seriously.  They are not precise, but they are “good enough” and probably as good as can be done.  Errors in ratings can cause you to lose or win a match.  Things should even out in the end.



Appendix

Course and Slope Ratings


Tees         Old CR   New CR   Old Slope   New Slope   Old Yardage   New Yardage
Gold         73.5     73.7     133         134         6972          6972
Gold/Black   72.2     72.3     129         130         6689          6689
Black        71.0     71.2     126         127         6445          6445
Tournament   70.0     70.0     124         124         6195          6195
White        68.4     68.4     119         120         5851          5870
Green        66.0     65.7     111         112         5365          5204


Tuesday, October 22, 2019

Eliminating the Blind Draw


(Note: This is a corrected version of a post of the same name from 2012.  The previous post omitted the Appendix; it is included in this version.)

Introduction - Many tournaments consist of a format where foursomes compete against other foursomes in the field.  When the field cannot be divided evenly into foursomes, threesomes are created.  The threesome is then allowed a “blind draw” for the fourth player (i.e., the score of another player in the field is drawn, and his score becomes that of the missing fourth player).

While the “blind draw” is equitable, it has several problems.  First, a team’s performance is determined in part by luck rather than by how well the team played.  Second, if the blind draw played well, his performance can help the threesome and therefore hurt the chances of his own team.  Third, it is more difficult for the players in a threesome to evaluate risk/reward decisions when the performance of the fourth player is unknown.

This paper evaluates two methods for getting around this problem:

·         Method 1: The threesome is allowed to use one player’s score twice on a hole.  The designated player rotates each hole so that each player’s score is used twice on six of the eighteen holes.  A typical rotation would have the lowest handicap player take the first hole, the second lowest handicap player the second hole, and the third lowest handicap player the third hole.  This rotation would be repeated every three holes.

·         Method 2: The threesome is assigned a fourth player who always makes a net par on each hole (see the sketch below).
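To make the two formats concrete, here is a minimal sketch in Python of how a threesome’s two-best-ball hole score could be computed under each method.  It uses the 0 = net birdie, 1 = net par, 2 = net bogey coding introduced in the next section; the function names and the modulo rotation are illustrative, not part of any official format.

```python
# A minimal sketch of a threesome's two-best-ball hole score under each method.
# Net scores use the 0/1/2 coding (birdie/par/bogey) and are ordered from the
# lowest-handicap player to the highest, matching the rotation described above.

def method1_hole_score(net_scores, hole_number):
    """One player's ball counts twice; the designated player rotates each hole."""
    doubled = net_scores[(hole_number - 1) % 3]        # hole 1 -> lowest handicap, etc.
    balls = sorted(net_scores + [doubled])
    return balls[0] + balls[1]                         # two best balls

def method2_hole_score(net_scores):
    """The assigned fourth player always records a net par (scored here as 1)."""
    balls = sorted(net_scores + [1])
    return balls[0] + balls[1]

hole_scores = [2, 1, 0]                                # net bogey, net par, net birdie
print(method1_hole_score(hole_scores, hole_number=4))  # bogey player doubled -> 1
print(method1_hole_score(hole_scores, hole_number=3))  # birdie player doubled -> 0
print(method2_hole_score(hole_scores))                 # birdie plus the net par -> 1
```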


The evaluation proceeds in four steps.  First, the basic probability model for the evaluation is described.  Second, probability values are estimated using data from two courses.  Expected hole scores for various methods are then computed to determine the preferred method for threesome competition.  Third, a sensitivity analysis is performed to see over what range one method is preferred over the other.  Fourth, conclusions are drawn as to the best method for achieving equitable competition.   



1. The Probability Model - Assume a player has three different outcomes when playing a hole.  A net birdie is assigned the value of 0, a net par is assigned the value of 1, and a net bogey is assigned the value of 2.  For demonstration purposes, probabilities are assigned to each outcome as shown in Table 1:

Table 1

Probability of Scoring 

Score   Probability
0       .25
1       .50
2       .25


The criterion for measuring equity is the expected hole score for each team.  The method that yields an expected score for the threesome closest to that of the foursome would be preferred.  

The foursome has 81 different scoring combinations as shown in Table A-1 of the Appendix.  Each combination has a team score and a probability of occurrence.  The expected score is the product of the team score and the probability of occurrence summed over all outcomes.  The expected two-best ball score of the foursome is 1.11.

For Method 1, where the threesome can use one ball twice, there are 27 different scoring combinations.  Those combinations and their associated probabilities of occurrence are shown in Table A-2 of the Appendix.  The expected two-best ball score on each hole for the threesome would be 1.25.  In an eighteen-hole competition, the foursome would have a two-and-a-half stroke ((1.25-1.11)·18=2.52) advantage over the threesome.

Under Method 2, the probabilities of each outcome for the three players are the same as in Method 1.  The values of the outcomes may differ, however, as shown in Table A-3.  The expected hole score under Method 2 is 1.28.  Over 18 holes, the foursome has an advantage of roughly three strokes ((1.28-1.11)·18 ≈ 3.1) over a threesome competing under Method 2.
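These expected values can be checked with a quick simulation.  The sketch below assumes the 0/1/2 scoring model with probabilities .25/.50/.25; with enough trials the estimates should settle at or very near 1.11 for the foursome, 1.25 for Method 1, and 1.28 for Method 2.

```python
# A simulation check of the expected hole scores under the .25/.50/.25 model.

import random

P_SCORES, P_WEIGHTS = [0, 1, 2], [0.25, 0.50, 0.25]
TRIALS = 1_000_000
random.seed(1)

totals = {"foursome": 0, "method 1": 0, "method 2": 0}
for hole in range(TRIALS):
    four = random.choices(P_SCORES, P_WEIGHTS, k=4)
    totals["foursome"] += sum(sorted(four)[:2])        # two best of four balls

    three = random.choices(P_SCORES, P_WEIGHTS, k=3)
    doubled = three[hole % 3]                          # rotating designated player
    totals["method 1"] += sum(sorted(three + [doubled])[:2])
    totals["method 2"] += sum(sorted(three + [1])[:2]) # fourth ball is a net par

for name, total in totals.items():
    print(name, round(total / TRIALS, 2))
```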


2. An Empirical Test - The selection of the best method will depend upon the player’s probability function at a course.  The probability function was estimated for two courses using the same 88 players.  The net scores for each player were sorted into five categories as shown in Table 2.  The estimated probabilities are the number of hole scores in each category divided by the total number of hole scores.  These probabilities are presented in Table 2. 

Table 2

Estimated Probability Functions

Score                 Course 1 (CR=71.2)   Course 2 (CR=71.7)
2 or More Under Par   .024                 .027
1 Under Par           .191                 .178
Even Par              .333                 .319
1 Over Par            .307                 .308
2 or More Over Par    .145                 .168



Table 2 shows there is a significant probability that a player will have a net score of 2 over par or more.  The three-score model (0, 1, 2) used here does not account for such high scores.  For a score of two over par to count in a two-best-ball foursome event, however, at least three players must score two over par or worse on the hole.  The probability of that outcome is small, so the bias introduced by the three-score model should not be large.

To evaluate the expected scores under each scoring alternative, the probabilities of 2 or more under par and 2 or more over par are combined with the probabilities for 1 under par and 1 over par, respectively, as shown in Table 3.  For Course 1, for example, P(2) = .307 + .145 = .452.  (Note: A net par is scored as “1” in the three-score model.)

Table 3

Estimated Probabilities

Probability   Course 1   Course 2
P(0)          .215       .205
P(1)          .333       .319
P(2)          .452       .476


These probabilities result in the expected hole scores shown in Table 4 for each method.


Table 4

Expected Hole Scores


Course     Foursome   Method 1   Method 2
Course 1   1.48       1.64       1.46
Course 2   1.55       1.72       1.50



The table demonstrates Method 2 is the preferred format at these courses.  The expected difference in hole scores between the foursome and a Method 2 threesome is .02 for Course 1 and .05 for Course 2.  For an 18-hole competition, a threesome would have a small edge of less than one stroke.  Under Method 1, the threesome has an expected 18-hole score approximately three strokes higher than that of a foursome.


3. Sensitivity Analysis - The expected value of the score will depend on the probability distribution of individual hole scores by a player.  Table 5 below shows the expected team scores for alternative  probability distributions.


Table 5

Alternative Probability Distributions

              Probabilities        Expected Hole Score
Alternative   P(0)   P(1)   P(2)   Foursome   Method 1   Method 2
1             .1     .5     .4     1.85       1.94       1.77
2             .2     .5     .3     1.38       1.46       1.44
3             .3     .5     .2     0.95       1.06       1.14
4             .4     .5     .1     0.62       0.74       0.86

The table demonstrates that the preferred method depends on whether a course is relatively easy or difficult.[1]  When net bogeys are likely (i.e., P(2) = .4 or .3), Method 2 is the most equitable format for threesomes.  On an easier course (i.e., P(2) = .2 or .1), Method 1 yields an expected score closer to the foursome expected score and would be the preferred format.

Realistically, courses where Method 1 is preferred are rare.  Under the fourth probability distribution, for example, the expected hole score is 0·.4 + 1·.5 + 2·.1 = 0.7, or 0.3 strokes under net par per hole, so a player’s expected net score over 18 holes would be 5.4 under par.  This would imply that the course rating is approximately 9 under par.[2]  A review of the golf courses in Southern California found no golf course with such a wide disparity between par and the course rating.[3]


4. Conclusion - The research found that Method 1 (one player’s ball counting twice) is not an equitable format.  This method was found to be marginally superior only on courses that do not seem to exist.  On most courses, a threesome playing under Method 1 would have an expected score some three strokes higher than a foursome (e.g., on Course 1 the difference would be (1.64-1.48)·18=2.88).  Method 2 appears to ensure equitable competition on courses where the course rating is around par.[4]  Since most courses fall in this category, Method 2 is the recommended format.



Appendix A


Table A-1 presents the possible combinations of scores for a foursome (0 = Birdie, 1 = Par, 2 = Bogey).  Column 2 shows the probability of each combination.  Column 3 presents the frequency of each combination.  That is, how many different ways, for example, can a foursome make two birdies and two bogeys?  As shown in the table, there are 6 ways that combination can occur.  The probability of any one such combination is 0.003906.  Since the combination can occur in six different ways, the probability of this outcome is 0.0234375, as shown in column 4.  The 2-best ball score for each combination is shown in column 5.  In the example there are two birdies, so the two-best-ball score is zero.  The expected team score is the product of the probability of occurrence and the 2-best score summed over all combinations.  In this case, the expected team score for a foursome is 1.11.
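The Table A-1 calculation can be reproduced with a few lines of Python.  The sketch below enumerates the 81 ordered combinations, counts how many orderings correspond to each set of scores, and accumulates the expected two-best-ball score; the probabilities are the .25/.50/.25 values from Table 1.

```python
# A sketch of the Table A-1 calculation for a foursome under the 0/1/2 model.

from itertools import product
from collections import Counter

P = {0: 0.25, 1: 0.50, 2: 0.25}

expected = 0.0
ways = Counter()
for combo in product(P, repeat=4):              # 3**4 = 81 ordered combinations
    prob = 1.0
    for score in combo:
        prob *= P[score]
    ways[tuple(sorted(combo))] += 1             # count orderings of each score set
    expected += prob * sum(sorted(combo)[:2])   # two-best-ball score

print(ways[(0, 0, 2, 2)])                       # 6 ways to make two birdies, two bogeys
print(0.25**2 * 0.25**2)                        # 0.00390625 for one such ordering
print(6 * 0.25**2 * 0.25**2)                    # 0.0234375 for the outcome
print(round(expected, 2))                       # 1.11 expected foursome score
```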



The expected score of a threesome under Method 1 is derived from Table A-2 using the same methodology as above.  (With no fourth ball at all, the expected two-best-ball score of a threesome would be 1.44.)  Under Method 1, the 2-best score for each combination is found by taking the expected value over which player’s ball is used twice.  For example, assume a team has scores of 2, 1, 0.  If the player scoring 2 could be used twice, the 2-best score would be 1.  If the player scoring 1 could be used twice, the 2-best score would be 1.  If the player scoring 0 could be used twice, the 2-best score would be 0.  Since each player is equally likely to be able to use his score twice, the expected 2-best score is .67 (1/3·1 + 1/3·1 + 1/3·0).  The expected team score under Method 1 is 1.25.
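A similar sketch reproduces the Method 1 arithmetic, averaging the two-best-ball score over which player’s ball is counted twice.

```python
# A sketch of the Method 1 expectation under the 0/1/2 model.

from itertools import product

P = {0: 0.25, 1: 0.50, 2: 0.25}

def expected_two_best_with_doubling(combo):
    """Average the two-best-ball score over which player's ball counts twice."""
    totals = [sum(sorted(list(combo) + [combo[i]])[:2]) for i in range(3)]
    return sum(totals) / 3

print(round(expected_two_best_with_doubling((2, 1, 0)), 2))   # 0.67, as in the example

expected = 0.0
for combo in product(P, repeat=3):              # 27 ordered combinations
    prob = P[combo[0]] * P[combo[1]] * P[combo[2]]
    expected += prob * expected_two_best_with_doubling(combo)
print(round(expected, 2))                       # 1.25 expected Method 1 score
```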





Under Method 2 the probabilities stay the same but the 2-Best Scores are slightly different.  Having a guaranteed par on a hole reduces the size of a bad hole score.  The expected score under Method 2 is 1.28.
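The Method 2 figure can be reproduced the same way, with the guaranteed net par entered as a fourth ball.

```python
# A sketch of the Method 2 expectation: the fourth ball is always a net par (1).

from itertools import product

P = {0: 0.25, 1: 0.50, 2: 0.25}

expected = 0.0
for combo in product(P, repeat=3):
    prob = P[combo[0]] * P[combo[1]] * P[combo[2]]
    expected += prob * sum(sorted(list(combo) + [1])[:2])   # two best of the four balls
print(round(expected, 2))                                   # 1.28 expected Method 2 score
```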







[1] The best measure of difficulty is the difference between the course rating and par.  If the course rating is much lower than par (e.g., 67 versus 72), the player would be expected to have fewer net bogeys than on a course with a course rating of 73.0. 

[2] A player’s index is determined by the average of his ten best scores out of the last twenty scores.  Depending on the variance in the player’s scoring distribution, the average used for his handicap will be around 3-5 strokes lower than his average for all scores (i.e., the course rating must be 3-5 strokes lower than his expected score).    

[3] Southern California Directory of Golf, Southern California Golf Association, North Hollywood, CA 2006

[4] On courses where the course rating is much higher than par, Method 2 may yield too big of an advantage to the threesome.   When adopting any method, records should be kept so that the equity of competition can be empirically tested.  That is, do threesomes or foursomes win more than their fair share of competitions?