Friday, December 5, 2014

The Pacific Amateur Golf Tournament: If you go…

The Pacific Amateur (Pac-Am) golf tournament is held in Bend, Oregon every year.  It is run by the Central Oregon Visitors Association (COVA) to promote tourism in the shoulder season between golf and skiing.  If you go, you can expect good golf courses, good food, and a great selection of ales from Bend’s many breweries.  What you cannot expect is an equitable competition.

The Pac-Am attracts approximately 400 golfers each year and they compete in flights based on age, sex, and handicap.   The typical flight has 24 players.  After three rounds, the top four players in each flight compete in the finals at the Crosswater Course at the Sunriver Resort.

With so many golfers, it is likely that a few will not be in strict adherence to the USGA Handicap System. The Pac-Am assures its participants that they will have to pass a handicap background check before they are allowed to enter.  Here are a few warnings from the Pac-Am website: 

Any unusual posting procedures, such as ceasing to post scores for a period leading up to the event, may be grounds for disqualification.  Any participant found using a fraudulent Handicap will be expelled from the tournament and all fees will be forfeited.

Participants in the competitive net divisions will compete using an assigned Tournament Handicap.  The participant will need to provide a complete scoring history to the Handicap Committee.  This handicap is calculated using one of two methods.  The committee may assign a handicap from a previous month if it deems the participant’s historical handicap is a better representation of scoring potential.  Alternatively, the Tournament Handicap can be calculated by using past Pac-Am tournament scores and scores from other tournaments.  Tournament scores (especially those from the Pac-Am) are weighted more heavily in the calculation to help prevent golfers from having a “repeat career round” during tournaments.

The vetting procedure sounds so strict that no sandbagger could make it through.  In truth, the Pac-Am’s policy is essentially a paper tiger.  With a limited staff, the Tournament Committee cannot check the bona fides of all entrants.  Basically, if your entry check clears, you are in.
    
The 2014 Pac-Am demonstrated the Tournament Committee’s inability to conduct an equitable tournament.  In this tournament, the same player was a repeat winner of the overall net competition.  The Bend Bulletin (September 26, 2014), in an act of journalistic naiveté, extolled the accomplishment.  Asked to explain his unlikely performance, the player said, “I really can’t explain it.  I love the course and I think playing four, five or six days in a row just helps me swing better.”  The Tournament Committee was equally impressed and featured the winner’s picture on its website along with an article on the remarkable accomplishment of being the first repeat winner in the Tournament’s history.

No one seemed to ask the obvious question: “Is the player’s handicap legitimate?” [1] Though the player was from Washington State, he did not have a handicap with any golf club belonging to the Washington State Golf Association.  When this was brought to the attention of COVA, it responded:

We are happy to provide you with (the player’s) USGA handicap card, with his index dated 9/15/2014, issued by his home course.

COVA was incorrect when it wrote the player’s index was issued by his home course (emphasis added).  His index was issued by an affiliate club (i.e., a club with no course) through a company named MyScorecard.  MyScorecard sells a handicapping service for $14.95 a year.  Clubs formed under the aegis of MyScorecard supposedly have peer review, but only in theory.  In practice, I bought a MyScorecard index that had no relationship to my potential ability.  I never received a query from my alleged handicap chairman, who could well be a figment of someone’s imagination.
  
Even though the lack of peer review makes it easy to cheat with MyScorecard, it does not necessarily follow that this player cheated.  To determine the authenticity of the index, it is necessary to examine his posted scores.  Here is what that examination revealed:

1. The player never posted a score from Crosswater even though he shot two net 66s in winning the Pac-Am finals in 2013 and 2014.
2. He never posted any round as a T-Score even though the Pac-Am Tournament Committee was supposed to weight these scores heavily in arriving at the player’s tournament handicap.
3. The player played at a higher index in 2014 than in 2013 even though he won the tournament in 2013.  Apparently the Pac-Am Tournament Committee is not as vigilant in adjusting handicaps based on past performance as its website claims.

The missing 2013 tournament score (i.e., had it been posted, it would have been among the last twenty scores and used in calculating the index the player carried into the 2014 tournament) means the player’s handicap was fraudulent and he should have been stripped of his title, as the Pac-Am rules stipulate.  This did not happen.  COVA continued to defend the player’s performance as not extraordinary.  By any measure, however, the performance was extraordinary.  First, the player had to finish in the top four of his flight.  The probability of doing this is 1/6.  Then he had to win the finals against approximately 40 competitors.  The probability of doing this is 1/40.  The probability of winning the overall title in any one year is therefore 1/240, and the probability of doing it two years in a row is 1/57,600.  Even the probability of a defending champion repeating is 1/240—i.e., highly unlikely.
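
For readers who want to check the arithmetic, here is a minimal Python sketch of those odds under the same simplifying assumptions used above (a 24-player flight in which the top four advance, roughly 40 finalists, and every player treated as equally likely to win):

# Rough check of the repeat-winner odds under the post's assumptions:
# a 24-player flight, top four advance, ~40 finalists, all players equally likely to win.
p_advance = 4 / 24            # probability of finishing in the top four of a flight (1/6)
p_win_final = 1 / 40          # probability of winning the finals against ~40 players
p_title = p_advance * p_win_final

print(f"Win the overall net title once:   1 in {1 / p_title:,.0f}")       # 1 in 240
print(f"Win the title two years in a row: 1 in {1 / p_title ** 2:,.0f}")  # 1 in 57,600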

COVA did not hand down a harsh verdict on this player’s performance, and the reason is fairly clear.   To do so would reveal handicaps are not getting the close scrutiny the Pac-Am claims.  Prospective players could be turned off and participation could be reduced if they believed the tournament was not an equitable competition.   Since COVA’s real purpose is to fill motel rooms (from which it gets a percentage of the Transient Occupancy Tax), advertising the misconduct of the player and the ineffectiveness of its staff would not be in its interest.  Best just to forget about it and hope the player does not come back and “threepeat.”














[1] Dean Knuth, former Senior Director of Handicapping at the USGA, did pioneering work on the probability of exceptional scores (see www.popeoftheslope.com).  Knuth recognized the weakness of the USGA’s handicap system in catching flagrant sandbaggers.  He invented the Knuth Point System, in which a player’s handicap is reduced based on what he wins and not on what he scores.

Sunday, November 16, 2014

The Randomness of Course Ratings

Course and Slope Ratings are determined through a process that measures the yardage and obstacle values of a course.   The yardage of courses can be measured with some precision.  There are some judgments (e.g., roll, elevation) in measuring effective yardage, but any errors will be relatively small.  It is in the measurement of obstacle values where the greatest chance of random errors occurs. 

Errors in measuring obstacle values arise from the lack of precision in defining obstacles, confusing standards for rating each obstacle, model errors, and differences in subjective judgment among Rating Committees:

Lack of Precision in Defining Obstacles - The size, firmness, and shape of a green in relation to the length of the approach shot is one obstacle.  The three characteristics of the obstacle (size, firmness, and shape) are not defined with any specificity.[1]  If it is not clear what is to be measured, a lack of precision in the estimate is ensured.
Confusing Standards - The Rating Committee must assign a value between 0 and 10 to each obstacle.  The rating criteria are confusing and lack specificity.  Obstacle values are increased, for example, if a green is in poor condition or a player’s stance is moderately awkward.  Much of the confusion arises because the obstacles are not independent.  “Trees,” for example, present their own obstacle but also affect the “fairway” and “green target” obstacle ratings.  It is not clear in the Course Rating Model how the independent effect of “trees” should be evaluated.[2]
Model Errors - The Psychological Obstacle Value is determined by the values of the other nine obstacles.  This covariance among variables (obstacle values) leads to errors in the estimates of the Scratch and Bogey Obstacle Values.
Differences Among Rating Committees - It is likely some Rating Committee members will weigh obstacles differently.  Given the subjective nature of the rating process, this is a foregone conclusion.

To examine the “randomness” hypothesis, the Course and Bogey Ratings of a single course were compared across the most recent rating cycle.  The course has had no change in its yardage, and there has been no significant change in the rated obstacles between ratings.[3]  The changes in the men’s ratings are presented in Table 1.

Table 1
Change in Men’s Ratings

Tees     CR Old   CR New   CR Diff   BR Old   BR New   BR Diff
Gold     65.0     65.1     +0.1      86.4     85.7     -0.7
Silver   68.4     68.2     -0.2      90.9     91.1     +0.2
Green    70.7     71.0     +0.3      95.4     95.9     +0.5
Black    73.8     73.7     -0.1      99.6     100.1    +0.5

The differences in Course Ratings can be described as random.  That is, the course did not get uniformly tougher or easier for the scratch player or the bogey player.  Instead, the course was judged to be more difficult from two sets of tees and easier from two sets of tees for the scratch player.  For the bogey player, the first set of tees (gold) is rated easier while the remaining tees are rated more difficult.
A similar random pattern appears in the women’s ratings, shown in Table 2.  The new Course Ratings are higher from the gold and silver tees, but lower from the green tees.  For the bogey player, the course is now rated more difficult from the gold and green tees, but easier from the silver tees.

Table 2
Change in Women’s Ratings

Tees     CR Old   CR New   CR Diff   BR Old   BR New   BR Diff
Gold     70.1     70.4     +0.3      100.6    101.1    +0.5
Silver   73.3     73.4     +0.1      107.3    107.1    -0.2
Green    77.6     77.1     -0.5      112.5    113.2    +0.7

The random variation probably stems from the subjective nature of the ratings procedure.  For example, assume the new obstacle ratings for topography are higher than the old obstacle ratings by one point on each hole.  Further assume the old and new ratings are identical for the nine other obstacles.  The increase in the obstacle value for the scratch player and in the course rating would be 0.2 strokes (1 x 0.1 x 18 x 0.11 = .198 rounded to .2).[4]  A similar error would lead to an increase of the bogey rating of .6 strokes (1 x .12 x 18 x .26 = .56 rounded to .6).  The Slope Rating would increase by 2 points ((.6 - .2) x 5.381 = 2.15 rounded to 2.0).  In essence, it only takes a small difference in the subjective ratings, rather than real changes in the obstacles, to lead to the small Course Rating changes shown in Tables 1 and 2.
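
A short Python sketch of that calculation, using only the numbers quoted above and in footnote 4 (scratch topography weight 0.1, bogey weight 0.12, obstacle-value multipliers 0.11 and 0.26, and the 5.381 factor used for the Slope Rating):

# A topography rating one point too high on every hole, all other obstacles unchanged.
holes = 18
error_per_hole = 1.0

scratch_change = error_per_hole * 0.1 * holes * 0.11    # change in the Course Rating
bogey_change = error_per_hole * 0.12 * holes * 0.26     # change in the Bogey Rating
slope_change = (bogey_change - scratch_change) * 5.381  # change in the Slope Rating

print(round(scratch_change, 1), round(bogey_change, 1), round(slope_change))
# 0.2 0.6 2 -- a purely subjective difference of this size produces rating changes like those in Tables 1 and 2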

The USGA could argue the systematic error in the measurement of topography described above is unlikely.  The error is more likely to be random, with one hole being rated too high and another too low.  The net result would be a much smaller change in the obstacle value.  There are two problems with this defense.  First, reliance on random errors discredits the measurement process—i.e., “errors will cancel out” is not a rigorous defense of the Course Rating System.  Second, random errors do not always cancel out.  The 18-hole total of the weighted obstacle values only has to differ by 2.0 to produce a 0.2 stroke change in the Course Rating.  A difference of 2.0 is not unlikely given the variance in the rating of individual obstacles.  For example, if the rating were 3 for each obstacle, the weighted obstacle value of the course would be 54.  A difference of 2.0 would be an error of approximately 4 percent.  Such a small difference should be within the 95 percent confidence interval of the estimate of the total weighted obstacle value.  Therefore, small changes in the ratings are more likely due to the “randomness” of the rating process than to any physical changes in the course.

The importance of random errors in the measurement process raises two questions for the USGA and local golf associations to consider.  First, if the re-rating results in small and apparently random differences from the old ratings, should the ratings be changed?  Unless the Rating Committee can point to physical changes that caused the differences, the prudent course would be to leave the ratings unchanged.[5]  After all, there are some costs (new scorecards, player confusion) in making changes to the ratings.  Second, are the required periodic re-ratings the best use of a Rating Committee’s time?  It would be more efficient and effective to re-rate a course if 1) its ratings seem out of line (e.g., visitors score higher or lower than expected, or team performance is exceptionally good or bad) or 2) the club professional believes there have been significant alterations to the course since the last rating.  To rate for rating’s sake better serves the bureaucratic interests of golf associations, but it is not the most effective method for ensuring the equity of the handicap system.



[1] USGA Course Rating System: 2012-2015, United States Golf Association, Far Hills, NJ, 2012.
[2] Op. cit., p. 27.
[3] A tree was removed from one fairway.  The tree was not an obstacle for the scratch player, and only affected the bogey player when he played from the green or black tees.  
[4] USGA Course Rating System: 2012-2015, p. 72.  The weight for the scratch topography obstacle is 0.1.  The sum of the weighted obstacle values is multiplied by .11 in the formula for the course obstacle value.
[5] Golf associations rarely explain why ratings have changed.  This is due in part to a lack of understanding of the USGA’s Course Rating Model.  Numbers from the field are entered into the model, and then the model produces Course and Bogey Ratings. This makes it difficult for the association to make a defense of the ratings based on physical changes in the course.  Instead, associations will defend the “process,” but not identify physical changes that led to the new ratings.  

Thursday, October 23, 2014

Can the USGA Slope Rating Decrease as Yardage Increases?

There are cases where the USGA Slope Rating decreases as the yardage increases.   It is never clear, however, whether this anomaly is due to an error by the Rating Committee or an oddity in the course design.  This post examines one such situation to determine the most likely explanation.
Our example is drawn from the files of the Oregon Golf Association (OGA) in 2006.  The yardages and Slope Ratings of the course in question are shown in Table 1 below.

Table 1
Yardages and Slope Ratings for Women

Tee            Yardage   Slope Rating
Green/Silver   6249      150
Green          6559      148

As Table 1 shows, the Slope Rating decreases as yardage increases.  A player with a 14.0 Index would receive 19 strokes if she played the Green/Silver tees, but only 18 strokes if she moved back to the Green tees.  An examination of the Slope Rating formula reveals why such a decrease is unlikely.

                Slope Rating = 4.24·((Y/120 + BOV + 51.3) – (Y/180 + SOV + 40.1))
                                    = .0118·Y + 4.24·(BOV – SOV) + 47.5
Where,
                             Y = Effective Course Yardage
                       BOV = Bogey Obstacle Value
                       SOV = Scratch Obstacle Value

The Slope Rating is an increasing function of yardage.  For each 100 yard increase in yardage, the Slope Rating—all things being equal—will increase by approximately 1.2 rating points.  For the 310 yard increase in length in this example, the Slope Rating would be expected to increase by about 3.7 rating points.  The only way for the Slope Rating to decrease as yardage increases is for the SOV to increase by substantially more than the BOV.  It is difficult to conceive of a course design where this could happen.
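
A short Python sketch of the formula makes the point concrete.  The obstacle values below are placeholders, held constant so that only the yardage term changes; the course-handicap conversion (Index × Slope Rating ÷ 113, rounded) is the standard one.

# Women's Slope Rating formula as given above.
def slope_rating(yards, bov, sov):
    return 4.24 * ((yards / 120 + bov + 51.3) - (yards / 180 + sov + 40.1))

# With the obstacle values held fixed (placeholders), 310 extra yards adds about 3.7 points.
print(slope_rating(6559, 7.0, 1.0) - slope_rating(6249, 7.0, 1.0))   # ~3.65

# Course Handicap = Index x Slope Rating / 113, rounded: a 14.0 Index gets 19 strokes at 150
# but only 18 strokes at 148.
for slope in (150, 148):
    print(slope, round(14.0 * slope / 113))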

Where an error may have occurred can be found by examining the SOVs and BOVs implicit in the 2006 ratings.  Table 2 shows the BOV from the Green tees is actually lower than from the Green/Silver tees.  This is highly unlikely.  The bogey player should have shorter approach shots from the Green/Silver tees, which should reduce the BOV.  If the shorter tees bring more hazards into play, the bogey player can simply use less club so that her landing area is the same as from the longer tees (i.e., there is no change in obstacle values).

Table 2
Obstacle Values

Tees           SOV   BOV
Green/Silver   0.5   7.2
Green          1.1   6.5

The case for an error in the ratings is strong.  The OGA, however, did not see it that way.
Jim Gibbons, Executive Director of the OGA, wrote the following:

We have received your review of the course ratings.  Nancy Holmes has started the process to double check our ratings, but initial review shows we are correct.  The bogey rating for women from the Green/Silver tees of 110.6 slopes to a 150 based upon the course rating of 75.3 at 6249 yards.  This relates to a 19 handicap to shoot (sic) a net 75.3 for women.

The Green tees at 6559 have a course rating of 77.6 with a bogey rating of 112.5 (because some obstacles are located with less impact).  This provides a handicap of 18 to shoot (sic) the net score of 77.6.

The rating process has changed since the previous ratings were done.  Green speeds and rough heights may be different.  A check of the BOV between the Green/Silver and the Green (tees) will be made and if there is a change that will be posted.  Realize that the BOV sometimes is lower from the longer tees, but many organizations will adjust the numbers to avoid having to explain the reason.  We choose to do the rating correctly.

Mr. Gibbons does not really give a defense of the Slope Ratings.  He is correct that the various course and bogey ratings yield the peculiar Slope Ratings; that math was never in question.  The issue was whether the Obstacle Values were estimated correctly.  He states this would be addressed in the future, but it never was.  Gibbons ends with the audacious claim that the BOV is frequently lower from the longer tees than reported because other associations fudge the numbers.  I suspect he never filed a complaint with the USGA to that effect.

The course was re-rated in 2014.  The new BOVs and Slope Ratings are shown in Table 3.  Both the BOV and the Slope Rating now increase with distance, as expected.  The relative Slope Rating of the two sets of tees swung from +2 in 2006 (150 versus 148) to -6 in 2014 (147 versus 153), a shift of 8 rating points.  Since the course was essentially unchanged between ratings, a shift of that size is too large to be attributed to random error.  Given the more reasonable 2014 ratings, it is likely that the 2006 ratings were due to Committee error rather than course design.

Table 3
2014 BOVs and Slope Ratings

Tees           BOV   Slope Rating
Green/Silver   6.7   147
Green          7.2   153
 
While the 2014 ratings are more sensible, it is difficult to prove they are more accurate than the ratings of 2006.  Rating Committees take measurements, assign numbers to subjective judgments, and plug those results into a model that has never been empirically verified.  Ratings come out of the computer and are sent to the golf course.  You cannot argue with a rating itself, only with the logic behind it.  Since the OGA never presents that logic, there can be no debate.

Thursday, October 2, 2014

Why Does the USGA Treat Women Differently?


The USGA does not have an enviable record when it comes to its view of women:
  1. The USGA has a “separate but equal” handicap system for women that codifies them as the weaker sex. 
  2. While the United States Tennis Association provides equal prize money for men and women at its Opens, the USGA awards more than twice as much prize money to men as to women at its Opens.
  3. No woman has ever served on the USGA’s Handicap Research Team.
  4. In recommended handicap allocations, the USGA seems to presume women and men are different psychologically (i.e., men are bigger risk takers and women are more conservative on the golf course). 
The first two actions of the USGA can at least be defended on physiological and economic grounds.  The third could stem from no women wanting to work in an area where the possibility of publishable research is nil.  It is the fourth action—the USGA’s perceived difference in the psychological make-up of men and women—that is the subject of this post.


In many competitions, the USGA recommends different handicap allowances for men and women.  For example, in four-ball stroke play men are allowed 90 percent of their handicap while women are allowed 95 percent of their handicap.  Why are women treated differently?  Much of the USGA’s research on multi-ball events was done over 35 years ago, and there appears to be no mention of any differences due to the gender of the player.[1]  It is likely the USGA had no empirical evidence for the women’s allocation, and the percentage was just a consensus guess by members of the Handicap Procedure Committee.  If women were studied, it is probable any difference in the estimated optimal allowance for men and women was not statistically significant.  Remember, all of the studies used to justify four-ball allowances were completed long before the introduction of the Slope System.  With this and other sources of error, it is likely any difference as small as five percentage points was not significant.  Since the USGA does not release its research for peer review, the accuracy and validity of the USGA’s allowances may never be known.

The typical reason given for reducing handicaps in multi-ball events is that the higher handicap player has a larger standard deviation in his/her scores and hence an advantage.  Given that women get a smaller reduction in handicap, the USGA must believe women have a smaller standard deviation in their scoring.  Women must be steadier and/or less prone to risk taking, as noted above.  In the appendix below, it is shown that the difference in standard deviations between teams is the same regardless of gender.  Therefore, it is difficult to defend the different allocations based on differences in the standard deviations of scoring.[2]

While the allocations should be reviewed and revised, it is doubtful the USGA will take such action.  The allocations were never based on sound science, but rather on the internal politics at the USGA.  The allowances are considered “settled law” by the myriad of attorneys that guide the USGA.  To make a small step toward the equal treatment of women, however, the USGA could keep the hallowed men’s allowances and simply eliminate any allowance specific to women. 



[1] Ewen, Gordan, What the Multi-ball Allowances Mean to You, www.usga.org, Far Hills NJ, 1978.  The USGA has not released the original research for peer review. 
[2] The USGA has the data to examine if there are differences in scoring patterns between men and women.  It has chosen not to do so.
    


Appendix

The Slope Handicap System assumes that the standard deviation of a player’s scores increases linearly with handicap.  The standard deviation for each gender would be:
1)            σ (m,h) = σ(m,0)·(1 + a·h)
                                σ(m,h) = standard deviation of a male player with handicap h
                                σ(m,0) = standard deviation of a scratch male player
                                          h = handicap of the player
2)            σ(f,h) = σ(f,0)·(1 + b·h)
                                σ(f,h) = standard deviation of a female player with handicap h
                                σ(f,0) = standard deviation of a scratch female player
                                        h = handicap of the player
The USGA assumes that the line plotting average scores versus handicap would have a slope of 1.13.  The equation for males reflecting this assumption is:
3)            1.13 = (Average Score(h) – Average Score(0))/h
If a normal distribution of scores is assumed, then:
4)            Average Score(h) = ATBD(h) + .8·σ(m,h)
Where,
                                ATBD(h) = Average of Ten Best Differentials of a player with an h-handicap
Substituting eq. 4 into eq. 3:
5)            1.13 = ((ATBD(h) + .8·σ(m,0)·(1 + a·h)) – (ATBD(0) + .8·σ(m,0)))/h
Since,
6)            h = ATBD(h)·.96
and since ATBD(0) = 0 for a scratch player, Eq. 5 can be rewritten as:
7)            1.13 = (h/.96 + .8·σ(m,0)·(1 + a·h) – .8·σ(m,0))/h
                1.13 = 1.04 + .8·σ(m,0)·a
Since the same equality must hold for women, it follows that:
8)            σ(m,0)·a = σ(f,0)·b
Using eq. 8, the equations for the standard deviations can be rewritten as:
9)            σ(m,h) = σ(m,0) ·(1 +a·h)
10)          σ(f,h) = σ(m,0)·(a/b)·(1 + b·h) = σ(m,0)·(a/b + a·h)
For simplicity, assume we have a team where both players have a handicap of h1, and another team where both players have a handicap of h2.  The difference in the average standard deviation of the two teams is:
11)          Average Male Difference = σ(m,0)·a·(h1 - h2)
12)          Average Female Difference = σ(m,0)·a·(h1 - h2)
Therefore, the difference in average standard deviation between teams is the same regardless of gender (i.e., any advantage a team has is the same for both genders).  This finding makes it difficult to justify different handicap allocations for men and women in four-ball stroke play.
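
A quick numerical check of this result, written as a Python sketch with purely illustrative values for σ(m,0), a, b, and the team handicaps (the equality does not depend on the numbers chosen):

# Check that the between-team difference in average standard deviation is the same
# for men and women.  All numerical values below are illustrative only.
sigma_m0 = 2.5                  # standard deviation of a scratch male player
a, b = 0.05, 0.08               # per-stroke growth rates for men and women
sigma_f0 = sigma_m0 * a / b     # implied by Eq. 8: sigma(m,0)*a = sigma(f,0)*b

def sigma_m(h): return sigma_m0 * (1 + a * h)   # Eq. 9
def sigma_f(h): return sigma_f0 * (1 + b * h)   # Eq. 10

h1, h2 = 20, 5                  # the two teams' common handicaps
print(sigma_m(h1) - sigma_m(h2))   # male team difference
print(sigma_f(h1) - sigma_f(h2))   # female team difference (identical)
print(sigma_m0 * a * (h1 - h2))    # Eqs. 11 and 12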







 







Monday, September 15, 2014

The Four-Ball Stroke Play Adjustment Under Section 3-5

The USGA recommends different procedures for adjusting handicaps under Section 3-5 depending upon the format of the competition.  In foursomes and Chapman competitions, the team playing the course with the higher Course Rating adds d-strokes (d being the difference in Course Ratings) to its team handicap.  In four-ball competitions, however, each player has d-strokes added to his handicap.
In foursome and Chapman formats, the effect of the adjustment is certain and known in advance.  The teams playing from the tees with the higher Course Rating will have their net scores reduced by  d‑strokes.  The effect in four-ball stroke play, however, is not certain.  If the two players have the same handicap, the effect will be to reduce the team score by d-strokes, the same as in foursome and Chapman competitions.  If the difference in handicaps between partners is equal to or greater than d, the expected reduction in team score will be d-strokes but can range from zero to 2d-strokes.
The probability that a team will gain more than d-strokes increases with the value of d.  As shown in the Appendix below, if d is equal to 3, a team has a 34 percent chance of reducing its team score by more than 3-strokes.  In a large field competition, the overall winner will most likely come from the teams playing the tees with the higher course rating.[1]   To eliminate this inequity and to make the handicap adjustment under Sec. 3-5 consistent over all forms of competition, it is suggested the adjustment for four-ball should also be a reduction in team score by d-strokes.


Appendix
Expected Reduction in Net Score

To simplify the model, it is assumed that only five scores on a hole are possible: eagle, birdie, par, bogey, and double bogey.  The probability of making each score for the two players is shown in Table 1A below.
Table 1A
Probability of Making Various Scores

Score          Player 1   Player 2
Eagle          a1         b1
Birdie         a2         b2
Par            a3         b3
Bogey          a4         b4
Double Bogey   a5         b5

Now on any hole there are 25 possible outcomes, as shown in Table 2A.  Assuming the scores of the two players are independent, the probability of each outcome is shown in column 3.  Assume that d is equal to 1 so that each player gets an additional stroke, and that the handicap of Player 1 is at least one stroke lower than the handicap of Player 2.  Column 4 shows the results when Player 1 gets an additional stroke.  As an example, if both players have eagled the hole, the additional stroke does not result in a reduction in the total score (i.e., Player 2 by definition already has a stroke on that hole).  That is why zero is shown in column 4 for the eagle-eagle outcome.
Similarly, Column 5 shows the reductions when Player 2 gets an additional stroke, but Player 1 does not.  The eagle-eagle outcome results in a reduction of -1 since Player 2 now strokes on the hole.
Table 2A
Reduction in Net Score for Possible Outcomes

(1) Player 1    (2) Player 2    (3) Probability   (4) Player 1 Gets +1 Stroke   (5) Player 2 Gets +1 Stroke
Eagle           Eagle           a1·b1             0                             -1
Eagle           Birdie          a1·b2             -1                            0
Eagle           Par             a1·b3             -1                            0
Eagle           Bogey           a1·b4             -1                            0
Eagle           Double Bogey    a1·b5             -1                            0
Birdie          Eagle           a2·b1             0                             -1
Birdie          Birdie          a2·b2             0                             -1
Birdie          Par             a2·b3             -1                            0
Birdie          Bogey           a2·b4             -1                            0
Birdie          Double Bogey    a2·b5             -1                            0
Par             Eagle           a3·b1             0                             -1
Par             Birdie          a3·b2             0                             -1
Par             Par             a3·b3             0                             -1
Par             Bogey           a3·b4             -1                            0
Par             Double Bogey    a3·b5             -1                            0
Bogey           Eagle           a4·b1             0                             -1
Bogey           Birdie          a4·b2             0                             -1
Bogey           Par             a4·b3             0                             -1
Bogey           Bogey           a4·b4             0                             -1
Bogey           Double Bogey    a4·b5             -1                            0
Double Bogey    Eagle           a5·b1             0                             -1
Double Bogey    Birdie          a5·b2             0                             -1
Double Bogey    Par             a5·b3             0                             -1
Double Bogey    Bogey           a5·b4             0                             -1
Double Bogey    Double Bogey    a5·b5             0                             -1


The probability that Player 1 successfully uses an additional stroke on a hole to lower the team score is:
1)            p = a1·b2+a1·b3+a1·b4+a1·b5+a2·b3+a2·b4+a2·b5+a3·b4+a3·b5+a4·b5
The probability that Player 2 successfully uses an additional stroke on a hole to lower the team score is:
2)            q = a1·b1+a2·b1+a2·b2+a3·b1+a3·b2+a3·b3+a4·b1+a4·b2+a4·b3+a4·b4 +a5
Assume the difference in Course Ratings is “d” strokes.  The probability that Player 1 lowers the team score on “n” holes is:
3)            P1(n) = (d!/(n!·(d-n)!))·p^n·(1-p)^(d-n)
Similarly, the probability that Player 2 lowers the team score on “n” holes is:
4)            P2(n) = (d!/(n!·(d-n)!))·q^n·(1-q)^(d-n)
To evaluate these probabilities, it is necessary to know the likelihood of making each score for both players.  In previous posts, reasonable estimates of these likelihoods have been used and are presented in Table 3A below.
Table 3A
Probabilities of Scoring

Score          5-Handicap   10-Handicap
Eagle          .005         .003
Birdie         .140         .090
Par            .450         .350
Bogey          .310         .380
Double Bogey   .095         .177

Based on the assumptions in Table 3A, the estimates of p and q are shown in Table 4A.
Table 4A
Probability of a Reduction in Team Score for One Additional Handicap Stroke

Player     Probability   Estimated Probability
Player 1   p             .44
Player 2   q             .56

Two cases are examined to demonstrate how the competition could be affected by using Sec. 3-5.  In the first case, d is one stroke.  The probability of each possible reduction in team score is shown in Table 5A below:


Table 5A
Probability of Team Score Outcomes When d = 1

Outcome   Formula               Estimated Probability
0         (1-p)·(1-q)           .25
-1        p·(1-q) + q·(1-p)     .50
-2        p·q                   .25

The expected reduction in team score is 1 stroke.  There is a 25 percent chance, however, that a team will lower its score by 2 strokes.
Table 6A shows the probability of various outcomes when the difference in course ratings is 3 strokes and the difference in handicaps is 3 or greater.
Table 6A
Probability of Team Score Outcomes When d = 3

Outcome   Formula                                                   Estimated Probability
0         P1(0)·P2(0)                                               .015
-1        P1(1)·P2(0) + P1(0)·P2(1)                                 .092
-2        P1(2)·P2(0) + P1(1)·P2(1) + P1(0)·P2(2)                   .235
-3        P1(3)·P2(0) + P1(2)·P2(1) + P1(1)·P2(2) + P1(0)·P2(3)     .315
-4        P1(3)·P2(1) + P1(2)·P2(2) + P1(1)·P2(3)                   .235
-5        P1(3)·P2(2) + P1(2)·P2(3)                                 .092
-6        P1(3)·P2(3)                                               .015


Table 6A indicates the probability of reducing a team score by the full 2d strokes declines as d increases.  The probability of reducing a score by more than d-strokes, however, increases with d.  In this case, a team has a 34 percent chance of reducing its net score by more than 3-strokes.
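
A short Python script can reproduce Tables 4A through 6A from the scoring probabilities in Table 3A; the 34 percent figure is the combined probability of the -4, -5, and -6 outcomes.  This is only a check of the arithmetic above, not of the underlying scoring assumptions.

from math import comb

# Scoring probabilities from Table 3A (eagle, birdie, par, bogey, double bogey).
player1 = [0.005, 0.140, 0.450, 0.310, 0.095]   # 5-handicap
player2 = [0.003, 0.090, 0.350, 0.380, 0.177]   # 10-handicap

# Per Table 2A, Player 1's extra stroke lowers the team score only when he scores strictly
# better than Player 2 (column 4); Player 2's extra stroke lowers it in all other cases (column 5).
p = sum(player1[i] * player2[j] for i in range(5) for j in range(5) if i < j)
q = 1 - p
print(round(p, 2), round(q, 2))                 # 0.44 0.56 (Table 4A)

def binom(d, n, prob):
    return comb(d, n) * prob ** n * (1 - prob) ** (d - n)

def reduction_distribution(d):
    # The total reduction is the sum of two binomials: Bin(d, p) for Player 1 and Bin(d, q) for Player 2.
    dist = {}
    for n1 in range(d + 1):
        for n2 in range(d + 1):
            dist[n1 + n2] = dist.get(n1 + n2, 0.0) + binom(d, n1, p) * binom(d, n2, q)
    return dist

for d in (1, 3):
    print(d, {k: round(v, 3) for k, v in sorted(reduction_distribution(d).items())})   # Tables 5A and 6A

print(round(sum(v for k, v in reduction_distribution(3).items() if k > 3), 2))         # ~0.34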




[1] The winner will most likely come from the teams playing the tees with a lower course rating if those teams have their handicaps adjusted downward by d-strokes.