From Doak to Digest: Golf Rankings Are Built on Assumptions No One Talks About
Aesthetic Frameworks in Golf Course Ratings
Last year I wrote an essay called Golf Course Rankings Are Mostly Useless, and while that piece focused more on the lack of practical benefits of golf course rankings, I ignored the underlying theories behind them. I find this stuff really fascinating and I want to talk about it here.
In my ongoing mission to write about philosophy under the guise of writing about golf, I’ll discuss some of the assumptions behind these different ranking systems. Even if most rankings are meant to be fun, the frameworks behind them reveal a lot: what these institutions value, the influence these values exert, and the changes to golf culture that might result. I’ll leave my personal thoughts on aesthetic frameworks in the footnotes,1 but in this article I’ll be discussing the axioms — the foundational assumptions — that are required to construct an aesthetic framework.
Part 1: Let’s Talk About Axioms
For those without a background in math or logic, an axiom is just a starting point — something a framework takes as given. Axioms aren’t always obvious. They’re the rules that govern how someone thinks about things. When I’m making a decision, I need rules for how to decide. I know that’s all a bit meta, but I hope it makes sense once we get into the specifics. This discussion will be about rating golf courses, but it’s really applicable to any theory of aesthetics. Below, I want to look at four different choices we must consider when we decide what makes a course good:
1. Objective vs Subjective Perspectives
This is probably the most obvious distinction, and it has serious implications.2 When I say a course is the best, do I mean it’s the best period, or am I just saying that the course is my favorite?
An objective perspective means that I’m saying it’s the best course, period, based on some provable fulfillment of universally agreed upon criteria. This is not just personal preference. You might argue that you prefer a different course, but under an objective framework, you’d be wrong in your assessment.
In contrast, a subjective perspective begins with the idea that aesthetic judgments are personal. A golf course rating system with a subjective perspective doesn’t try to claim that any course is the best in some general sense — just that someone (or some group) prefers it. These systems are less about laying down universal standards and more about mapping clusters of preference.
The biggest criticism of a subjective perspective is that there are experts who simply have a more informed perspective than most people, and their judgements should be considered more valuable than a layperson’s.3 This is the basis for the vast majority of golf course rating schemes: get a bunch of experts together and then aggregate their opinions to achieve a more “correct” list. The result, somewhat awkwardly, is a concocted “objective” perspective that no individual rater actually agrees with in full.
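As a concrete (if simplified) illustration of that aggregation step, here is a minimal sketch in Python. The courses, panelists, and scores are entirely hypothetical, and real panels are far larger and weight their ballots in ways they rarely disclose; the point is only that averaging and sorting is what turns many private opinions into one “consensus” list.

```python
from statistics import mean

# Hypothetical panel scores (0-10) for a handful of courses.
# Real panels are much larger and their methodologies mostly private.
panel_scores = {
    "Course A": [9, 8, 10, 7],
    "Course B": [8, 9, 8, 9],
    "Course C": [6, 7, 5, 8],
}

# Collapse each course's ratings into a single consensus number,
# then order the courses by that number.
consensus = {course: mean(scores) for course, scores in panel_scores.items()}
ranking = sorted(consensus, key=consensus.get, reverse=True)

for place, course in enumerate(ranking, start=1):
    print(f"{place}. {course} ({consensus[course]:.2f})")
```

Notice that the consensus number can match no single panelist’s ballot, which is exactly the awkwardness described above.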
On the other hand, subjective perspectives are also fraught. Publications still typically gather experts to rate courses, but those experts don’t attach much beyond their names and reputations to the ratings. Whether this is a good system really just depends on whether readers actually benefit from these expert opinions. If your local paper’s film critic keeps recommending art films when you only like big-budget blockbusters, there isn’t much reason to read those reviews.
2. One Score or Multiple Metrics to Consider
Another key distinction is whether our framework gives us a single score or shows us all the factors it considered. A one-dimensional score is like a letter grade: if something gets an “A,” we know it is good, and that’s easy to understand. Multi-dimensional scores grade something on several metrics. Suppose a course scores 95% on design, 98% on beauty, and 55% on accessibility. That kind of score communicates much more information, but it isn’t as easy to take in at a glance.
Tom Doak’s Confidential Guide is a clean example of a single score.4 In reviewing each course the authors simply give a score from 0 to 10. That’s it. The simplicity is part of the appeal.
Golf Digest, by contrast, asks raters to score aesthetics, challenge, character, conditioning, fun, layout variety, and shot options.5 They then average these scores into a composite ranking, but you can still see the individual parts. The upside of this is transparency: if you care more about fun than challenge, you can reweight the scores accordingly. The downside is that it assumes those variables can be separated cleanly, which isn’t always obvious.
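To make the reweighting point concrete, here is a rough sketch using the seven categories named above. The individual scores and weights are invented for illustration, and Golf Digest’s actual scale and weighting are its own; the sketch only shows why publishing the parts lets a reader recompute the composite around their own priorities.

```python
# Hypothetical category scores for one course, using the seven categories
# Golf Digest asks raters to grade (the numbers here are made up).
scores = {
    "aesthetics": 8.5,
    "challenge": 9.0,
    "character": 8.0,
    "conditioning": 9.5,
    "fun": 6.5,
    "layout variety": 7.5,
    "shot options": 8.0,
}

def composite(scores, weights=None):
    """Weighted average of category scores; equal weights by default."""
    if weights is None:
        weights = {category: 1.0 for category in scores}
    return sum(scores[c] * weights[c] for c in scores) / sum(weights.values())

# An equal-weight composite, in the spirit of a published overall score...
print(round(composite(scores), 2))  # 8.14

# ...and the same course recomputed by a reader who values fun over challenge.
reader_weights = {category: 1.0 for category in scores}
reader_weights["fun"] = 3.0
reader_weights["challenge"] = 0.5
print(round(composite(scores, reader_weights), 2))  # 7.71
```

Once the inputs are collapsed into a single number, that kind of recomputation is no longer possible, which is the trade-off between the two approaches.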
Some systems split the difference.678 They might use multiple input variables but still produce a single overall score. For clarity, I’m calling a framework multi-dimensional only if the final output shows multiple dimensions to the audience.
3. Impartiality vs Context
This one’s subtle, but important. Should a course be judged in a vacuum, or should context count?
Most people I talk to claim to prefer impartiality. They say they want to know about the course, and just the course. You’re not supposed to consider the setting or story. But when I ask whether a perfect replica of the Old Course would deserve the same score as the course in St Andrews, nearly everyone hesitates. History, atmosphere, location — these things seem to matter, even if we pretend they don’t.
As far as I know, no major publication openly embraces contextualism in its general rankings. The practical reason for this is that few people have actually played enough courses to create a full ranking from a single perspective. Since so many outlets must aggregate opinions, the only practical way to allow any context is to narrow the list itself. You will see lists of the best public courses, resort courses, or links courses, for example.
This is helpful, but it won’t provide the type of perspective a single rater can give about an overall experience that extends beyond the physical course itself. Does travel time to a course matter? If so, how much? Should raters consider whether a course is worth visiting if you’re already in the area? Systems that allow context can answer different types of questions than impartial rating systems can. One very specific type of context leads to another axiom for frameworks: whether or not to consider the price of the experience and the course’s value.
4. Excellence vs Value
Value isn’t really a foundational axiom in the way the others are; it acts more like a meta-judgment we make after we decide how much we like a course. Most high-profile rankings ignore value altogether, often for practical reasons: at most club courses, the cost to play varies between members, guests, and visitors, and trying to square that difference is as challenging as it is gauche. Value also depends on budget. A billionaire could trivially fly across the world just to visit a single resort course, at a cost that would be completely unaffordable for someone on a median income. Still, while accounting for value may be impractical, it seems odd to ignore it completely.9
Mixing & Matching Axioms:
Combining different axioms creates different types of rating systems. Here are a few common frameworks:
Objective, One-Dimensional, Impartial, Excellence Rating:
Golf Magazine10
Golf World11 12
Golfweek13
Objective, Multi-Dimensional, Impartial, Excellence Rating:
Golf Digest14
Subjective, One-Dimensional, Impartial, Excellence Rating:
Fried Egg Golf15
Subjective, One-Dimensional, Contextual, Excellence Rating:
Tom Doak’s Confidential Guide16
Architects’ Choice17
Part 2: Some Thoughts on Prevailing Frameworks
Shifting Values is a Problem for Objectivity
Changes in course ratings over time present real problems for objective frameworks of golf aesthetics. I’ve always been a bit amused that new rankings come out every year, with clubs moving up and down regardless of whether anything about the courses has actually changed. In fact, entire golf course architecture movements have upended the way we think about what makes a course good. When golf technology began to make some courses too easy, the culture started to value more challenging courses. When golf course development was dominated by real estate projects, the culture started to place a premium on width. While it’s perfectly natural to look for what is missing from our current golf paradigm, we end up valuing different things at different times.
An Argument for Contextualism
The strongest argument for contextualism is its existing use in other art forms. In film we have strong genres that create a mood for different kinds of films. We have drama and comedy, but we also have more niche categories like action, horror, fantasy, or documentary. These films all serve different purposes. So, while a documentary will likely never win a Best Picture Oscar, most people, most of the time, don’t even want to sit down to watch a Best Picture winner. They want a film that suits their mood at the exact time they’re sitting down to watch a movie.
The same should apply to golf courses. We have championship courses designed to test. We have anti-penal resort courses designed to make sure everyone shoots a fantastic score. We have match play courses designed to encourage risk-taking. We have historic courses designed to be played with historic equipment. We have scruffy munis designed for players on a budget.
That isn’t to say nobody is making lists and rankings within these genres. They are.18 It’s just that I rarely hear people frame their tastes around courses in this way. The conversation is usually about how this course is good and that course is bad. Courses are good and bad – I’m not arguing otherwise – but they should be good or bad because they do or don’t serve a function. If we start directly comparing a Broadway show to a school play, we are forgetting that the school play entertains the parents in the audience in a very different way.
Why Golf Needs Genres: An Argument for Embracing Fashion
Where I think contextualism becomes incredibly helpful isn’t just when we break golf courses into genres, but when we break them into eras. When we do that, we can appreciate experiencing an era of golf as a whole. We may prefer minimalism to the technical, narrow courses built in the ’70s and ’80s, but if we grab a set of persimmons, we might appreciate feeling what it was like to be in that era, even if it isn’t our favorite.
Now, I’m probably not going to win over any fans of reviving past golf eras by pointing to the swing dancing revival of the ’90s or the various incarnations of ska music that seem to recur every generation. However, I think there is something valuable here that demonstrates how aesthetic judgements can be influenced almost arbitrarily. We can appreciate a genre and an era independent of whether it exists in the current paradigm. When we are not concerned with objective aesthetics and we’re fine with framing our judgements in a context, we can really see how fashion can, seemingly arbitrarily, ebb and flow through a culture. The same goes for golf.
Conclusion
All of this is meant as an argument for a wider breadth of architectural appreciation. I may dislike a course when playing it with modern clubs but appreciate it with hickories. I might really enjoy getting psyched up for the challenge of a championship course, but maybe not on a day when I’m just trying to hang out with friends. I really appreciate playing an expensive, once-in-a-lifetime resort course when I’m on vacation, but playing that same course every Wednesday would be financially unfeasible. Different courses are enjoyable for different reasons, at different times, and have significantly different replay value. Almost none of the rating systems that major publications currently use account for any of this. This is exactly why understanding how different rating systems create different incentives is so useful. If I’m looking for a specific experience, I’m more likely to find it when I’m looking at the right kind of system.
I completely understand how impractical any of this is for publications. Ranking courses likely started out as something fun for them to do, and I honestly think it’s a worthwhile endeavor if it really is just for fun. Unfortunately, ratings and rankings are now a very serious part of the industry. I’ve found the number of courses that advertise their rankings pretty shocking. I visit a lot of courses’ websites as I work to build the wiki, and these rankings seem to be their most prominent signaling mechanism. I honestly worry that, given how prominent these ratings have become, the interests of the people who create them aren’t always aligned with those of the readership. I will follow this article with one that explores these concerns and how we might adjust ratings for different potential biases.
If you’re reading this footnote, it’s because you want to understand my general view of aesthetics. It will take a while to get through, but it ultimately starts from a view centered on evolutionary biology. I’m keeping this in a footnote because, while my perspective may be of interest, it’s not the focus of the article.
When I say I suspect that aesthetics are ultimately inseparable from evolutionary biology, what follows is that conceptions of beauty come from the mind rather than being identified by the mind. When we look at these perceptions of beauty in aggregate, we can find some obvious rules: symmetry, for example, seems to be paramount in the beauty of human faces. But while that rule seems mostly universal, it is not exactly transferable from person to person. People obviously have different preferences, so we only ever find loose, general rules. This seems to mirror other aesthetic concepts, like how sugar universally tastes sweet, probably because sweetness was an effective signal that sugar-rich plants are good for survival.
Here we have exactly what we would expect if evolutionary biology were the driving force behind preferences: selection biases that show up (effectively as a black box in our perception) and push us toward attributes that promote genetic survival and away from those that lead toward extinction. This theory should also explain the occasional selection biases that run amok in species when predators are absent, since these black boxes do not output functional goods outside of the contexts in which they were developed. If some species of fish prefer shiny partners in the absence of predators, while the presence of predators keeps them from becoming shiny, then we can suppose there is something about the shine that promotes effective reproduction. Thus, I suspect that our preferences for the artistic are heavily influenced by the evolutionary advantages of the creative mind in the history of primates. I will fully admit that this leaves a falsifiability problem, but that is a problem for most philosophical frameworks.
Now, a few things fall out of this “black box” kind of aesthetics. Firstly, any a priori notion of beauty is rejected: I’m just going to immediately reject Kant’s or any other objective analysis. Next, even between humans, we should expect subjective opinions to diverge, arguably quite widely; there is reason to suspect that “nature” would reward the aesthetics of “nurture,” insofar as endorsing the preferences of the previous (surviving) generation confers an advantage. Lastly, the inherent materialism here allows ideas related to existentialism to be compatible with this theory, even if the results, like genuinely enjoying unpleasant things, seem counterintuitive or nonsensical.
Outside of formal writing, I frame this dichotomy as elitist aesthetics vs subjective aesthetics. I prefer the term “elitist” aesthetics because if we have an objectively good aesthetic framing, we must have an elite framing for the best of the best. That best must be the best, so anyone with an antagonistic preference is a de facto philistine or vulgarian. As I hope to explain, a lot of problems fall out of this view, mainly to do with fashions. Suffice it to say, because this view must, at least by implication, deride folks who deviate from an orthodoxy of “the good,” I prefer the term “elitist,” with its pejorative implications, simply because I think people should own the view if they choose to hold it.
I think there is some debate as to whether there is much practical benefit to this perspective. Leaning heavily into the metalogic of rating systems, I think a rating system based on expert opinion could be valuable if it actually led to recommendations that created a superior aesthetic experience, but I honestly don’t know whether it would. Given my previous writing on the subject, I hope it’s obvious that I pretty much reject this perspective. Domain expertise in something as arcane as golf course architecture is valuable, sure, but whether that translates to objectively better aesthetic preferences is a leap. Even if domain experts all agreed on general aesthetic preferences, the idea that this should translate to the layperson is also dubious. Introducing an artistic novice to John Cage or Pablo Picasso just seems like a waste of time: these artists’ work almost requires that you know a nontrivial amount about the domain before you can appreciate what they are doing.
I think there is a strong argument for domain-specific stages of artistic appreciation, and two very different results fall out of that. If we assume there is some sort of linear endpoint of artistic appreciation, where the experts converge, then you’d end up with traditional objective aesthetics. However, I think it’s quite clear that the opposite is true: the concept of aesthetic appreciation is specific to one’s place in history, and our preferences are, in part, determined by what already exists for us to make our determinations. This would suggest that a subjective perspective ultimately must prevail. Obviously there is no way to settle this, which is why axiomatic frameworks are inherently fraught, and why I argue for maximizing function over form.
I don’t intend to list out the Doak Scale scores here. It’s a 0-10 scale, and it’s important to remember that the scale is meant to be logarithmic. It’s supposed to allow for a significant amount of argument about whether a course falls between two adjacent scores, say a 6 or a 7, but it should be obvious when courses are two steps apart; that is, a “Doak 4” should never be confused for a “Doak 6.” I will add here that Tom Doak specifically notes that his scale is based on gut feeling, so it truly is a univariate system. One variable input and one variable output:
“When I’m trying to rate a course I’ve just seen for The Confidential Guide, generally it’s more of a feeling than a rigorous analysis: To me, the best courses are the best because they go their own way and succeed at it, instead of ticking boxes on someone’s checklist.”
Doak, Tom. “Tom Doak on How To Rate Your Home Course.” Links Magazine. Retrieved June 24, 2025.
Duncan, Derek, and Stephen Hennessey. “Golf Digest's course rankings: How Our Process Works.” Golf Digest, May 07, 2021. Retrieved June 24, 2025.
Golf Magazine leaves input variables up to the reviewer; some raters might take them into careful consideration, others may not. Notably, this approach necessitates a single output variable, which in Golf Magazine’s case is a ranking:
“Because we don’t prescribe a set method to assess courses as other ranks do, no one opinion carries the day — our rank is a democracy. Some panelists believe that enjoyment is the ultimate goal, and thus prioritize design attributes such as width and playing angles, while frowning upon the need to constantly hunt for balls in thick rough. Other panelists value challenge and the demands of hitting every club in the bag. Still others consider a course’s surroundings and overall environment of paramount importance, thereby emphasizing the setting and naturalness of the course. In the end, allowing raters to freely express their tastes is what produces the desired eclecticism in our Top 100 lists.”
Morrissett, Ran. “Inside GOLF’s Top 100 Courses vote: How we decide our rankings.” Golf Magazine, Nov 2, 2020. Retrieved June 24, 2025.
While there are some interesting philosophical problems with rating vs ranking, for my purposes here a rank can mostly be treated as a rating. It’s still useful in determining whether or not to go see a golf course. I discuss these issues in some detail in Golf Course Rankings are Mostly Useless.
Johnson, Andy. “How The Fried Egg Rates Golf Courses.” Fried Egg Golf, December 14, 2022. Retrieved July 7, 2025.
Here we can make a direct parallel to music. The Wu-Tang Clan produced the album Once Upon a Time in Shaolin, of which only one copy was ever made. It has sold on a few different occasions, for millions of dollars each time.
Suppose a parallel in golf: create a course that can only be played in a tournament (to generate publicity) or for a green fee of $100,000, enough to make the operators of Shadow Creek GC blush. The proposition is absurd, but supposing it were done, and the course was genuinely amazing, does it make sense that the costs associated with it shouldn’t be a concern?
I think they should be a concern; maybe others disagree. It’s hard to ignore the possibility that a course could become so costly that we start having genuine concerns about the consequences of actually playing it.
Morrissett, 2020.
Bertram, Chris. “Golf World Top 100: Best Golf Courses and Resorts.” Today’s Golfer. Retrieved July 7, 2025.
Note here that Golfweek and Golf World do provide some context across their myriad lists, but within any given list they do not indicate that contextual choices factor into the ratings.
Lusk, Jason. “Golfweek's Best: How we rank courses with a score of 1 to 10.” Golfweek, June 3, 2024. Retrieved July 7, 2025.
Duncan, 2021.
Johnson, 2022.
The original Confidential Guide’s introduction is considered here. It’s difficult to say whether and how much Doak allows for context, but I’m basing that decision on the following passage:
“The beauty of the course, the atmosphere of the club, and the members’ attitude toward the game all play their part in the total golf experience; but of these factors, I’m much more interested personally in the scenery than the service.”
Doak, Tom. The Confidential Guide to Golf Courses. Sleeping Bear Press, Chelsea, MI, 1996, p. 3.
This and other sections suggest that he is partial to a vibe around the course rather than judging the course apart from all that.
Lawrence, Adam et al. Architects’ Choice Top 100 Golf Courses. Golf Course Architecture. July 2013. (PDF)
Much of the placement is based on the introduction, especially:
“Even if one can agree set criteria against which voters should make their judgements, one doesn’t have objectivity, partly because those criteria are themselves subjective, and partly because the individual voters have to be trusted to apply them in the same way, which is impossible. We chose the opposite route: to define no criteria and to say to our voters, in true Potter Stewart fashion, ‘We believe you know what good is when you see it’.”
Bang for your buck: https://www.golfdigest.com/story/the-best-value-courses-in-golf-money-goes-farthest
Links courses: https://www.golfmonthly.com/courses/32-of-the-best-links-courses-in-the-world
Munis: https://golf.com/travel/courses/americas-30-best-municipal-golf-courses/
PGA Venues: https://www.top100golfcourses.com/best-pga-championship-venues
Resort Courses: https://golfweek.usatoday.com/story/sports/golf/2025/01/14/best-200-resort-top-golf-courses-u-s/77584442007/