System Usability Scale

A quick-and-dirty way to check your UX results


Overview

This is the usability test for Marker Editor, one of our programs. I was optimizing the main function of our editor platform, and after building the whole prototype I decided to collect feedback from our internal and external users. Since the user base is very large, SUS was a much quicker way to gauge the quality of the optimization than conducting interviews with each segment of our users until the samples were big enough. The result of the SUS test wasn't as good as I expected: the score is 61.9, grade D, the NPS category is Passive, the Acceptability level is Marginal, and the Adjective rating is OK. The outcome told me that the new version wasn't good enough, so I went on to collect more qualitative feedback to iterate on a second version.


Time

2/2021 (a week)


Role

Researcher

Background

The reasons for using SUS to measure the success of this project are as follows. First, it was the quickest way to get feedback within the single week I had, since the report was due right after the new version was prototyped. Second, the main users are both internal (our customer success team) and external (mostly exhibitors), so interviewing each segment of users would have taken a huge amount of time.
Next, the discussion came to the problems SUS may overlook. The "dirty" part of "quick and dirty" means the metric can be less persuasive because it yields no qualitative data showing what exactly confuses users. To compensate, I added questions collecting pain points and a wishlist capturing user demands, so I could dig into those latent needs. Meanwhile, each user's segment and industry were recorded to build a clearer picture of the use cases.

Type of Users in the Sample (N=24)

Customer Success & Product Application: 25.1%
Development Intern: 12.5%
Customer (Sightseeing Related): 58.4%
Customer (Design Related): 4.2%

Survey Design

The survey was created in Google Forms with a built-in prototype for users to try. Since the optimization focused on the user interface and a set of bug fixes, the survey explains that the goal is to experience the new visual panel rather than the functional flow.

At the beginning, users had to fill in their basic info, including name, email, and working industry.

For the new version, the survey asks only the ten standard SUS questions about users' impressions:

  1. I think that I would like to use this system frequently.

  2. I found the system unnecessarily complex.

  3. I thought the system was easy to use.

  4. I think that I would need the support of a technical person to be able to use this system.

  5. I found the various functions in this system were well integrated.

  6. I thought there was too much inconsistency in this system.

  7. I would imagine that most people would learn to use this system very quickly.

  8. I found the system very cumbersome to use.

  9. I felt very confident using the system.

  10. I needed to learn a lot of things before I could get going with this system.

At the end of the survey, there is an open field for users to leave comments on their impressions, dissatisfactions, requested features, and so on.
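
For context on how these ten responses become a single number: in standard SUS scoring, each odd-numbered (positively worded) item contributes its rating minus 1, each even-numbered (negatively worded) item contributes 5 minus its rating, and the sum is multiplied by 2.5 to land on a 0-100 scale. Here is a minimal Python sketch of that formula (the function name and sample responses are illustrative, not taken from this survey):

    def sus_score(responses):
        """Convert one respondent's ten 1-5 ratings (in question order
        Q1..Q10) into a 0-100 SUS score."""
        if len(responses) != 10:
            raise ValueError("SUS needs exactly 10 responses")
        total = 0
        for i, rating in enumerate(responses, start=1):
            if i % 2 == 1:            # odd items are positively worded
                total += rating - 1
            else:                     # even items are negatively worded
                total += 5 - rating
        return total * 2.5

    # A hypothetical, fairly positive respondent:
    print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0

The study's 61.9 is the mean of this per-respondent score across the sample.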

Histogram Result of 10 Questions

This is the primary result of the 10 questions, shown as Google Forms histograms.

It shows that the new version of Marker Editor scores higher on three perspectives: usability, usefulness, and desirability.

As for negative responses to the new version, the percentage for complexity is 0%, for inconsistency 4.5%, and for trouble using the system 9.1%, with an overall average below 4.6%.
Responses to "I think that I would need the support of a technical person to be able to use this system" and "I needed to learn a lot of things before I could get going with this system" were more dispersed. After analysis, most of the agreement came from customers, particularly those who had mostly used the product with the help of our customer success team.

[Figure: Google Forms histograms of the 10 SUS questions]
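
Percentages like the 4.5% and 9.1% above are simply the share of respondents who agreed (rated 4 or 5) with a negatively worded item. A small sketch of that computation, using placeholder ratings rather than the actual survey data:

    # Share of respondents who agreed (rated 4 or 5) with each
    # negatively worded item. The ratings below are placeholders.
    def agreement_rate(ratings):
        return sum(r >= 4 for r in ratings) / len(ratings)

    answers = {
        "Q2 unnecessarily complex": [1, 2, 1, 2, 2, 1],
        "Q6 too much inconsistency": [2, 1, 4, 1, 2, 2],
        "Q8 cumbersome to use": [1, 2, 2, 4, 1, 4],
    }
    for question, ratings in answers.items():
        print(f"{question}: {agreement_rate(ratings):.1%}")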

SUS Result

The final result is shown below. As we can tell, the average score is 61.9, which converts to an Adjective rating of "OK", an Acceptability of "Marginal", and an NPS category of "Passive".
SUS was introduced by John Brooke in 1986; in later benchmarking research, a score above 68 is considered above average and anything below 68 below average.

[Figure: SUS Result]
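
As a rough aid for reading these bands, the conversion can be written as a lookup. The cutoffs below are my approximation of the commonly cited acceptability ranges (Bangor et al., 2008) and the 68-point average, not values from the original report:

    def interpret_sus(score):
        # Approximate acceptability bands per Bangor et al. (2008).
        if score < 50:
            acceptability = "Not acceptable"
        elif score < 70:
            acceptability = "Marginal"
        else:
            acceptability = "Acceptable"
        # 68 is the commonly cited cross-study average.
        relative = "above average" if score > 68 else "below average"
        return acceptability, relative

    print(interpret_sus(61.9))  # -> ('Marginal', 'below average')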

Takeaways

1. The Limit of SUS
A single score can't explain itself; there might be several factors behind the result. Is it because users are not yet familiar enough with the system? Is it because the editing platform hasn't become a common tool? Or is it because our customers have already gotten used to the onboarding session at the start of using this product? These questions couldn't be answered immediately; however, based on the strong opinions on the two divisive questions (Q4 and Q10) and the open feedback, I could at least learn users' thoughts about the new version and elaborate on it progressively.

2. The importance of user feedback

Being the only designer of the product is a double-edged sword. On one hand, I had full autonomy to decide the project vision and direction. On the other hand, I had no one to bounce ideas off or to validate my assumptions with before testing the product in the real world. This made collecting user feedback extremely critical to the process. I'm very thankful that many of our customers and colleagues were generous enough to share their thoughts with me.
