Tracking User Metrics #1: The Infamous Net Promoter Score

This is the first post in a series about tools I use to track users in my work as a UX designer & researcher. I first encountered the Net Promoter Score years ago, presented as a way to (and I quote) “Track our users' satisfaction”. Since then, the NPS has been one of the tools I use, not to track the global satisfaction of our users, but to answer the question it was actually designed for: do our users recommend our app/service or not?

Some chemistry tools

What is it, how does it work?

The NPS works with a simple question:

How likely are you to recommend our service to your friends, colleagues & family? (on a scale of 0 to 10)

You can then add a follow-up question asking “How could we improve?”. And this, my friends, is where the most interesting bit lies for me as a UX designer, since the NPS was primarily developed as a marketing tool 😉.

To calculate your global NPS, you subtract the percentage of people who gave you 6 or below (called Detractors) from the percentage of people who gave you 9 or 10 (called Promoters). Yes, 7 and 8 are ignored. This gives you a score that you are supposed to be able to compare with other companies in the same industry.
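As a quick sketch of that calculation (a minimal Python example; the function name and input format are my own, assuming a plain list of 0–10 ratings):

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 ratings.

    Promoters rated 9-10, detractors rated 0-6; 7s and 8s are ignored.
    Returns a score between -100 and +100 (not a percentage).
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 6, 3]))  # → 30
```

Note that the passives still count in the denominator: they dilute the score even though they are ignored in the numerator.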

Still from the Teacher puppet from the movie & concert “The Wall”, Pink Floyd

The European Dilemma

In Europe however, we encounter two big problems:

  • Usually, European companies don’t share their user metrics with the outside world. There’s a total lack of transparency about who stands where, and to guess how users perceive a given app, you have to rely on market studies.
  • We have a European bias regarding NPS. Even when our users are totally satisfied, they tend to give an 8 (exceptionally a 9), rarely a 10, because we have been taught that we can “always improve!” (I’m looking at you, my dear teacher who gave 9/10 on a perfect test, yes yes you). Therefore, European NPS scores tend to be really hard to compare with our American counterparts’.

To help with that, I tend to calculate two different NPS scores by adding a “reviewed European NPS” that counts the 8s as promoters and removes the 6s from the detractors (as proposed by this Checkmarket blog post), which in turn allows me to compare our NPS with our American counterparts’.
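A hedged sketch of that dual calculation (the function name is my own; the adjusted thresholds follow the shifted bands described above, with 8s counted as promoters and 6s dropped from the detractors):

```python
def nps_pair(scores):
    """Return (standard_nps, adjusted_european_nps) from 0-10 ratings.

    Standard:  promoters 9-10, detractors 0-6.
    Adjusted:  promoters 8-10, detractors 0-5 (the 6s become passive).
    Both scores range from -100 to +100.
    """
    n = len(scores)
    standard = 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / n
    adjusted = 100 * (sum(s >= 8 for s in scores) - sum(s <= 5 for s in scores)) / n
    return round(standard), round(adjusted)

scores = [8, 8, 9, 7, 6, 10, 8, 5, 9, 8]
print(nps_pair(scores))  # → (10, 60)
```

The gap between the two numbers is itself informative: a cluster of 8s (satisfied Europeans, in this reading) widens it considerably.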

“It’s alive”, still from the movie from the same name

Is this thing alive? Why is it moving?!

It is really important to correlate your NPS score with a timeline of your product, so you can check what might have impacted the score (which feature was launched when, what event happened, …). This timeline should also include important events that happened outside of your app.

For example, working for an app in the banking sector in 2020, two things severely impacted our NPS: the rollout of PSD2 (a new European directive regarding the banking sector, known as DSP2 in French) and the global pandemic (which led to a global decrease in NPS scores because, surprisingly, people are not in the best mood!).

How to use it then?

On top of using this double NPS scale, each week I survey two different populations of our application: First Month Users and One Year Users. This allows me to understand, on one side, how people approach our app with fresh eyes and the problems they encounter, and, on the other, how people “bond” with our app over time.
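To illustrate the cohort split (the field names, window sizes, and date logic here are my own assumptions, not the behaviour of any particular survey tool):

```python
from datetime import date

def survey_cohorts(users, today):
    """Split users into the two weekly survey populations.

    `users` is assumed to be a list of dicts with a `signed_up` date.
    First Month Users: signed up within the last 30 days.
    One Year Users: signed up about a year ago (a one-week window,
    so nobody is surveyed in this cohort twice).
    """
    first_month = [u for u in users if (today - u["signed_up"]).days <= 30]
    one_year = [u for u in users
                if 365 <= (today - u["signed_up"]).days < 372]
    return first_month, one_year

users = [
    {"id": 1, "signed_up": date(2024, 6, 1)},
    {"id": 2, "signed_up": date(2023, 6, 10)},
]
fresh, bonded = survey_cohorts(users, today=date(2024, 6, 14))
```

The one-week window on the one-year cohort is a design choice: since the survey runs weekly, it keeps each long-term user from being sampled more than once.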

But the juicy part of the NPS, for me, is the comment section. By aggregating the NPS responses in a tool (like Productboard or Dovetail, to begin with), I’m able to (partially) understand the problems our users encounter and to prioritize those problems quantitatively. This in turn gives me pointers on directions I could take for user research and interviews, so I can cross-reference quantitative and qualitative data about our users’ pain points!

People barred as an interdiction sign

Things I should avoid?

  • You should never change the NPS question. It’s standardized and made to be unbiased, so it’s really important to stick to it. However, you can adapt the follow-up question to your own needs and problems!
  • Indicate that 0 stands for “Not at all” and 10 for “Totally”. Some people tend to read the scale as inverted without those indicators.
  • The NPS is not a percentage: the score ranges between -100 and +100, and it’s only useful when you use it to compare with others. You should also use the calculation method described above to get your score. There’s no such thing as “the average NPS”.
  • You should always track your NPS over time. A single measurement doesn’t give you enough to work with, as the NPS score might be impacted by a lot of things outside of your control. Track through time, check against your timeline, and be mindful of world events.
  • Do not take your user comments at face value; use them to dig deeper into some areas to get a better understanding and more details! Usually, users don’t write much in text fields, so go to them, explore, and understand.
  • If you use the NPS inside your application, do not break your users’ flow! Add it at the end of their experience, and track when and where you asked it so you can frame it and correlate it with your global app experience.

What next?

In the next post I will talk about another user metric tool I use and really like, the UMUX-Lite, so keep in touch 😉

UX Designer & User Researcher • Human Jukebox • Book Eater • Human After All