Net Promoter Score Isn't Great: Business Growth Isn't That Simple
Synecdoche is a linguistic term for when one refers to the whole of something by identifying it only by one of its parts. For example, calling a car a "set of wheels" or calling Champagne "bubbly." It is a figure of speech used to simplify a much more complex object or concept into something easier to grasp (or just a fun way to come up with slang).
Considering the world is a very complex place, it isn't uncommon for people to come up with ways to simplify their reality. This is usually harmless wordplay, but in certain areas (e.g., business) the oversimplification of complexity can lead us down the wrong path and cause a myriad of troubles: missed opportunities, revenue losses, layoffs, and bankruptcy (of course, I could be accused of brutalizing another linguistic term here: hyperbole).
One such oversimplification that I have long been suspicious of is the Net Promoter Score (NPS). The fact that Harvard Business Review published the original article and Bain & Co. conducted the underlying research only accelerates the proliferation of this methodology among the analytically indolent.
I have been around NPS since I first entered market research, and it never made sense to me how so much could be derived from "the one number you need to grow." That's like saying the ERAs of the Chicago Cubs' starting pitching rotation are the only thing that led them to win the World Series. Yet if you believe in NPS, then surely a statement like that isn't all that out in left field (pun intended).
If you are using one metric (regardless of where it originated from) to drive important business decisions, then you are not doing your due diligence -- I would think that goes without saying yet alas...
A few issues I've had with NPS:
- The cut points for Promoters, Passives, and Detractors can differ across companies and research providers. Sometimes a 10/9/8 score is a "Promoter" and sometimes it's just 10/9 (which was the original definition in the Harvard Business Review article). During some client meetings when NPS would come up in survey discussion, we'd actually ask clients where their company's cut points were. NPS was never designed with this scale malleability in mind.
- When you look at a 0-10 or 1-10 scale in a survey (which is yet another inconsistency I've seen in implementing NPS) and only the extreme endpoints are labeled (called a Semantic Differential Scale or Anchored Scale), what exactly is a "6"? Or a "3"? Does a "6" indicate twice the likelihood to recommend of a "3"? The ordinal scale used is inherently subjective and open to the same biases as other survey scales -- especially in cross-cultural measures, which NPS is frequently a part of.
- The wording of the NPS question ("How likely is it that you would recommend [Company X] to a friend or colleague?") is based on future behavior. People don't know exactly what they are going to do in the future. It's the same as asking purchase intent -- "Sure, I would buy it...but I don't have enough money/I already have something similar/only if it comes in blue/etc." These questions simplify reality. Regarding NPS in particular, what about all the other factors that impact growth that aren't in the control of your business or customer? New federal regulations, global competition, or any one of Porter's Five Forces for example?
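To make the cut-point problem above concrete, here is a minimal sketch of the standard NPS arithmetic (percent Promoters minus percent Detractors) with the cut points exposed as parameters. The function name and the sample responses are my own illustration, not anything from the original research; the defaults follow the original 9-10 Promoter / 0-6 Detractor definition.

```python
from typing import Iterable

def nps(scores: Iterable[int],
        promoter_min: int = 9,
        detractor_max: int = 6) -> float:
    """Net Promoter Score: % Promoters minus % Detractors.

    Cut points are parameters because, as noted above, companies
    draw them differently (e.g. some count 8s as Promoters).
    """
    scores = list(scores)
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= detractor_max)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 8, 7, 6, 3, 10]
print(nps(responses))                  # original HBR cut points -> 12.5
print(nps(responses, promoter_min=8))  # looser cut points -> 37.5
```

Note how the same eight responses yield a score three times higher under the looser cut points -- which is exactly why comparing NPS figures across companies that draw the lines differently is meaningless.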
The basis for NPS rests on shaky ground from a validity perspective. Although I was unable to acquire the original study performed by Fred Reichheld (Bain & Co.) and Satmetrix, some of the hints at its details on the Net Promoter System website allude to strong assumptions based on loose thresholds (all quotes taken from the website):
- "In most industries, Net Promoter Scores explained roughly 20% to 60% of the variation in organic growth rates among competitors." In other words, there is only a weak to moderate relationship between NPS and growth (this is the R-squared value they are referring to I assume). Another place on the website indicates the R-squared range is between "10% to 70%" which isn't any better and makes it hard to determine which reported proportions are correct.
- "In 11 of the 14 industry case studies that the team compiled, no other question was as powerful in predicting behavior." So NPS only had the strongest relationship to growth about 3/4 of the time AND they go as far to say that NPS is predicting behavior. Statistics 101: correlation does not equal causation.
- Bain & Co. indicate that NPS is not always meaningful unless 1) your business operates in a well-developed industry with several competitors, 2) consumers face low switching costs between competitors, and 3) industry products/services are widely used among consumers. "Wherever these conditions do not hold, the relationship may be weak or inconclusive." So basically, NPS is only useful for a small fraction of businesses and industries, and even then we see that the relationship between NPS and growth is weak to moderate at best. Yet, from what I've seen, businesses of all shapes and sizes indiscriminately use NPS.
To be fair, the Net Promoter Score is supposed to be taken as part of the Net Promoter System, which is a much larger voice-of-the-consumer framework. Although it doesn't alleviate any of the bullet points above, it does provide decent guidance on processing feedback within an organization. However, this appears to have been lost on many organizations that simply track NPS by itself and think that improving the score alone will help achieve growth. Other research has shown that NPS alone is simply not a reliable indicator of future growth.
Alternatives to Likelihood to Recommend
The "Likelihood to Recommend" survey question is a mainstay in consumer research. Knowing whether or not a customer would recommend your product/service/company is not at all a bad data point to capture and I believe there is value in understanding this. Recommendations from a friend or family member do tend to be held in higher regard when considering a purchase decision -- there are just better ways to capture this information:
- Consider capturing what influenced the purchase decision either at the point of sale (PoS) or shortly after. This could be through a purchase receipt survey or even a verbal question from the sales associate. These are by no means perfect methods, and they might not elicit much participation without an incentive (even that might not help). The point is that you can directly link a purchase to a recommendation and turn it into a binary response rather than an ambiguous 10- or 11-point scale.
- Measure recommendation using referral marketing methods. Basically, the concept is that when a product/service is purchased, the customer is given a set of discount codes or coupons intended for them to share with friends and family. The assumption is that they will share these if they are satisfied and willing to recommend your product/service, and if they aren't, the codes go into the wastebasket. Should their family/friends use these, then a recommendation can be considered to have been made and tied back to the original purchaser, with the new purchaser(s) continuing the cycle.
It remains to be seen how long the NPS trend continues before it is relegated to some backwater of over-hyped business fads. The extension of its shelf life is due to decision makers' considerably busy schedules and the usual dislike of paging through dull and lengthy reports.