Tracking User Metrics #2: The Usability Metric for User Experience (Lite)
I wanted to take some time in this second part to present my pet peeve: The UMUX-Lite survey. It’s a short tool that I use to quantify the experience of my users with our app. I searched far and wide for a tool to quantify this experience as I needed a tool that was both accessible and where users could answer quickly.
What is it, how does it work?
The UMUX-Lite survey consists of two simple items rated on a scale of 1 to 7:
(This app) meets my needs.
(This app) is easy to use.
What I love about the UMUX-Lite is that, once you run it through a simple equation (have a look there if you need), it gives results similar to the more standard SUS survey (which is waaay longer) while being really quick to fill out.
This allows me to gather far more responses than with the SUS, and it's also much easier to understand (I ran into comprehension problems with some of the SUS questions).
Similar to how I work with the NPS (see the previous article), I run the UMUX-Lite on two user populations: first-month users and one-year users. This gives me a set of four scores that I can track every month and follow on a timeline against what we deployed and events that happened outside the application.
The two scores determine how our app ranks in terms of both ease of use and fit with our users' needs, and the correlation with the SUS scale allows us to compare it to other standards in the industry (the problem still being that European companies tend not to share those metrics).
To calculate your UMUX score, simply use this formula:
UMUX-Lite score = ((Question 1 Score + Question 2 Score) − 2) × 100 / 12
Once you have your UMUX-Lite answers, you can quickly get an equivalent in SUS by using this formula:
SUS Equivalent score = 0.65 × ((Question 1 Score + Question 2 Score − 2) × 100 / 12) + 22.9
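The two formulas above can be sketched in a few lines of Python (the function names are mine, purely illustrative):

```python
def umux_lite_score(q1: int, q2: int) -> float:
    """UMUX-Lite score on a 0-100 scale from two 1-7 ratings."""
    return (q1 + q2 - 2) * 100 / 12

def sus_equivalent(q1: int, q2: int) -> float:
    """Approximate SUS score via the 0.65x + 22.9 regression."""
    return 0.65 * umux_lite_score(q1, q2) + 22.9

print(umux_lite_score(7, 7))          # 100.0 (best possible answers)
print(round(sus_equivalent(7, 7), 1)) # 87.9
```

Note that even perfect UMUX-Lite answers map to a SUS equivalent below 100, which is expected: the 0.65x + 22.9 regression compresses the range.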
One hidden advantage I discovered while using the UMUX-Lite is that those scores fluctuate way less than their NPS counterpart. Focusing respondents on fit and ease of use strips out a lot of the emotional reaction and gives results focused on the overall experience instead of one single pain point. This also helps us compare our scores with our American counterparts without having to deal with a European bias.
Similar to the SUS survey, I added a third question asking users “How could we improve?”. It lets me capture instant feedback, then tag and prioritize it in an aggregation tool, giving me pointers on where to focus my user research in the future.
- As with the NPS, clearly label your scale: 1 being “Not at all” and 7 being “Totally”. Some users read the scale as inverted; labeling the endpoints helps avoid false positives.
- The UMUX-Lite is still being reviewed and tested; it looks like a 5-point scale could also work, which would align more easily with other rating systems. As with every tool, test it on your users, see what works and what doesn't, and act accordingly.
- Always track your UMUX-Lite over time. A single measurement doesn't give you enough to work with, as the score can be affected by many things outside your control. Track it over time, check it against your timeline, and be mindful of world events.
- Don't take user comments at face value; use them to dig deeper into an area to get a better understanding and more detail. Users usually don't say much in text fields, so go to them, explore, and understand.
- If you use the UMUX-Lite inside your application, don't break your users' flow! Add it at the end of their experience, and track when and where you asked it so you can frame it and correlate it with the overall app experience.
- The original version of the UMUX-Lite uses “requirements” instead of “needs”. However, some users find that word hard to understand, and switching to “needs” makes it easier. If your surveys use complicated wording, you will lose some users. Don't sacrifice your users for survey purity. Improvise, adapt, overcome.
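Tracking per-cohort scores over time, as described above, can be sketched like this. The record layout and values are hypothetical, just to show one way of averaging UMUX-Lite scores per (month, cohort) bucket for a timeline:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical response records: (month, cohort, q1, q2).
# Cohort names match the two populations mentioned in the article.
responses = [
    ("2023-01", "first-month", 6, 7),
    ("2023-01", "first-month", 7, 7),
    ("2023-01", "one-year", 5, 6),
    ("2023-02", "one-year", 4, 5),
]

def umux_lite(q1: int, q2: int) -> float:
    """UMUX-Lite score on a 0-100 scale from two 1-7 ratings."""
    return (q1 + q2 - 2) * 100 / 12

# Average the scores per (month, cohort) so each point can sit on a
# timeline next to releases and outside events.
buckets = defaultdict(list)
for month, cohort, q1, q2 in responses:
    buckets[(month, cohort)].append(umux_lite(q1, q2))

timeline = {key: round(mean(scores), 1) for key, scores in sorted(buckets.items())}
print(timeline)
```

From there, plotting each cohort's series month by month is what makes deployment effects and outside events visible.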
In the next post, I'll talk a bit about how exactly I aggregate, prioritize, and sort our users' feedback, and how I use it in my day-to-day work 😉