Your Net Promoter Score (NPS) program is intended to gather accurate, useful data. If you skew your results in either direction, you produce the opposite. It doesn't just make it seem like you're doing better or worse than you actually are; it can undermine your entire customer retention strategy.
If you want to avoid creating unintentional bias, don’t make any of these five common mistakes:
1. Leading questions
An effective email survey is all about accurate, unbiased communication. Your customers will receive an introduction email, the survey itself, and potentially a follow-up email, at a minimum. What is said in these emails could significantly influence how customers respond.
One example is including a question like: “Do you think we deserve a high score?” Phrased this way, you place too much weight on the score itself, and asking so directly could nudge customers into scoring higher than they otherwise would.
Honest feedback, even if it is negative, is valuable. The above example is extreme, but even subtle changes in word choice or phrasing could impact your survey results. Stay as neutral as possible if you want accuracy.
Related content: How to calculate NPS
2. Poor timing
Compare two surveys: one is asked just before you start a significant marketing campaign, and one is asked just after. Do you think there could be a difference in the opinions of your customers?
Asking about people's opinions immediately after a significant business move can be an effective way to determine its impact and level of success. But if you only ever ask for opinions at these points, you are not seeing the full spectrum of your customers’ everyday experience. They are only considering your latest advertisement, or your latest product, or your latest office opening, and so on.
Ideally, you’d launch your survey in what’s referred to as a “steady-state” environment: your standard state of affairs, with no big events on either side. Posing a survey after a purchase is fine; posing one right after a customer has watched a new ad is not. This ensures you get a more representative, everyday opinion from your customers.
The exception to this rule is a touch-point survey, for example one sent while onboarding a new customer or gathering feedback from board members.
3. Cherry picking your respondents
You might be cherry picking survey respondents without even knowing it: by selecting customers who are new on the platform, letting your sales or service team hand-select customers, or excluding customers likely to churn.
Every customer who could influence the relationship with your company should be included in the contact list. It may be worth having someone impartial to the business or process review the list one final time before deploying the survey.
4. Skewing your data
Deciding what scores are included in your NPS equation can be tricky. It might seem sensible to include them all.
However, you need to decide how to handle decision makers versus end users, or multiple respondents from a single customer account. You risk skewing the score depending on who’s invited to respond, who actually responds, how you weight respondents, and so on.
There are several approaches: counting every response individually, averaging responses per account, or using the waterfall method. Whichever approach you choose, keep your process and methodology consistent when you analyse your survey results.
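As a concrete illustration of how the aggregation choice changes the result, here is a minimal sketch in Python. The account names and scores are entirely hypothetical, and the per-account averaging shown is just one of the approaches mentioned above; the standard NPS buckets (9–10 promoter, 7–8 passive, 0–6 detractor) are assumed.

```python
def classify(score):
    """Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores):
    """NPS = % promoters minus % detractors, a value from -100 to 100."""
    buckets = [classify(s) for s in scores]
    promoters = buckets.count("promoter")
    detractors = buckets.count("detractor")
    return round(100 * (promoters - detractors) / len(buckets))

# Hypothetical responses keyed by customer account.
responses = {
    "acme": [9, 10, 6],   # three respondents at one account
    "globex": [8],
    "initech": [3, 4],
}

# Method 1: count every individual response equally.
all_scores = [s for scores in responses.values() for s in scores]
print("Per-response NPS:", nps(all_scores))

# Method 2: average each account's scores first, so every account
# carries equal weight regardless of how many people responded.
account_averages = [sum(v) / len(v) for v in responses.values()]
print("Per-account NPS:", nps(account_averages))
```

The two methods can produce noticeably different scores from the same raw data, which is exactly why the methodology must stay consistent from one survey cycle to the next.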
5. Survey method
You can deploy and monitor your surveys through several channels: online, over the phone, or in person. Each of these methods can affect the responses you receive.
People tend to be more honest, and therefore more critical, when the survey doesn’t involve face-to-face contact. For this reason, avoid face-to-face surveys if you want to prevent a positive skew.
Phone surveys can work really well, but unless they’re conducted by an independent interviewer, they can be quite biased as well.
The best method overall is an online survey, for several reasons. Respondents answer from behind the anonymity of a screen, in their own time and without feeling rushed (online surveys are usually brief). This allows for the most honest and reliable feedback.