
User Testing Consultant

Usability Testing - Data Analysis - Qualitative & Quantitative


A usability test of a mobile application: Sigma Client. Due to an NDA, I am not able to show the results or the wireframes I designed after the tests, but I can explain the whole process I carried out.

 

Duration

14 weeks

Client

REDCOM Laboratories, Victor, NY. 

Platform

Mobile Application

Test type

Usability Testing & Heuristics Evaluation

 

End users

Military Personnel

Team

5

My Role

Client-team facilitator, test design, test moderation, test observation, data analysis, wireframes

 
 

The general purpose of the study was to identify usability issues that could be resolved to improve the design of the next-generation product.

I evaluated three major functionalities of the application: video call, voice call, and chat.

The objectives of the test are outlined as follows:

  • Collect quantitative and qualitative usability data on three functions of Sigma Android.

  • Identify usability issues in Sigma.

  • Design wireframes for the next-generation Sigma app.

Introduction 

 
 

First of all, I had to study the application inside out; I then decided to break the test into two phases, because the UI had some obvious issues that needed to be fixed before formal testing.

1. Heuristic Evaluation: An expert evaluation carried out against a set of 15 design principles.

2. Usability Testing: An empirical evaluation carried out with participants.

Test Breakdown

 
 

I thoroughly studied Jakob Nielsen's 10 heuristics, which I regard as standard usability principles for any UI. Along with that, I also studied the design principles of Donald Norman and Dieter Rams, and Gestalt psychology.

I then laid out a total of 15 design principles to guide my team and me during the heuristic evaluation.

  1. External Consistency:  The system uses interactions and design patterns that are consistent with the platform and analogous systems.

  2. Widgets and labels near targets: Place widgets (controls) adjacent to their respective targets of operation and labels on, or directly adjacent to, their associated controls.

  3. Group like widgets/functions: Use the gestalt principles of proximity, similarity and closure to group widgets with similar functionality.

  4. Frequently used functions optimized: Optimize the functions/labels/buttons which will be used the most. 

  5. Speak the User’s Language: Keep it simple and concise. 

  6. Perceptibility of Feedback: User interaction should be followed by perceivable feedback: aural, visual, or haptic.

  7. Perceptibility of System State: At any point in time, the state of the system should be perceivable: visual, aural, or haptic.

  8. Internal Consistency: A good design is where the icon, color hue, typefaces, etc. are used consistently throughout the application.

  9. Appropriate Selection of UI Patterns: The right UI pattern makes users feel efficient while interacting with the system.

  10. Minimize Knowledge in the Head: Favor recognition over recall.

  11. User Control and Freedom: Users should feel free to interact with the application in any way, as long as it does not hinder their goals.

  12. Error Prevention: The system should warn users when a human error is about to occur.

  13. Error Recovery: Users should be able to recover easily from errors.

  14. Novel Interactions Easily Learned and Recalled: New functions and interactions that are not immediately intuitive should still be easy to learn and remember.

  15. Help & Documentation: Support should be available anytime users want to learn a new interaction or function.

An extensive heuristic evaluation suggested a few changes. After those changes were made, my team and I proceeded to usability testing.
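For illustration, the findings from such an evaluation can be tallied against the 15 principles; the issue records, screens, and severities below are hypothetical examples, not the study's actual findings:

    # Sketch: tallying heuristic-evaluation findings per principle.
    # The issues and severities below are hypothetical, not actual findings.
    from collections import Counter

    findings = [
        {"principle": "External Consistency",       "screen": "dialer",   "severity": 3},
        {"principle": "Perceptibility of Feedback", "screen": "chat",     "severity": 2},
        {"principle": "External Consistency",       "screen": "contacts", "severity": 2},
    ]

    violations = Counter(f["principle"] for f in findings)
    for principle, count in violations.most_common():
        print(f"{principle}: {count} issue(s)")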

Heuristics Evaluation

 
 

First, the research questions were outlined. Task efficiency was determined collectively from whether users succeeded in finishing a task, how much time they took, and how many errors they made while performing it. Efficiency was evaluated for three tasks:

  1. Making video calls.

  2. Making conference calls.

  3. Initiating a message chat.

Tasks and Scenarios

 
 

Below is the blueprint of the test room: one camera captured the user's interaction with the device, while another captured the user's facial expressions as they worked with the application.

Test Environment 

 
 
[Image: Blueprint of the test room]
 
 

As the end users of this application are military personnel, we planned to recruit 8 students enrolled in the Reserve Officers' Training Corps (ROTC), along with 2 backup participants.

Challenge: Due to a lack of interest, we could recruit only 5 ROTC students.

Proxy Participants: After talking to the client, I recruited freshman students to fill the remaining slots. Enlisted military personnel often have a comparable educational background, so freshman students made a reasonable substitute.

Recruitment

 
 

I had to design the task scenarios with the following in mind:

1) Real-world: While performing the tasks, users should feel they are in a realistic, practical, day-to-day scenario.

2) No interface terms: Scenarios should be easy to understand and interpret, but should in no way include interface terms.

Task Scenarios

 
 

Familiarity with the interface might affect performance on subsequent tasks, and this knowledge carryover could prevent the results from being objective. To reduce this risk, I counterbalanced the tasks so that users performed them in different orders.
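As a sketch of one simple counterbalancing scheme (the task names are shorthand and the rotation helper is my own, shown only to illustrate the idea):

    # A minimal sketch of rotating task order across participants.
    # Task names and the participant count are illustrative, not the actual setup.
    TASKS = ["video_call", "conference_call", "message_chat"]

    def rotated_orders(tasks):
        """Return one rotated ordering per starting task (a simple Latin-square rotation)."""
        n = len(tasks)
        return [[tasks[(start + i) % n] for i in range(n)] for start in range(n)]

    orders = rotated_orders(TASKS)
    for idx in range(8):  # 8 planned participants
        # Cycle through the rotated orders so no single order dominates.
        print(f"P{idx + 1} -> {orders[idx % len(orders)]}")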

Task Design

 
[Image: Counterbalanced task orders]
 
 

1) First, participants filled out a background questionnaire so I could gauge their mobile and web literacy during data analysis.

2) Then they performed the tasks in their counterbalanced order.

Think-aloud Protocol: Users were encouraged to think aloud while performing the tasks, so I could understand how they perceived and interpreted the interface.

3) After the tasks, participants filled out a post-task questionnaire about their experience with the application compared to similar applications out there in the market.

Test Design

 
 

Quantitative data included:

  1. Likert scale ratings of post-task questions.

  2. Number of incorrect paths taken before the correct path.

  3. Number of discrete steps to complete tasks.

Qualitative data included:

  1. Participants’ think-aloud comments during test sessions. 

  2. Post-task questionnaire responses, including questions such as rating ease of use, preferences, etc.
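As a sketch of how these per-task measures might be recorded for analysis (the field names and sample values are illustrative, not the study's data):

    # Sketch: one record per participant per task; field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class TaskRecord:
        participant: str
        task: str
        ease_rating: int   # post-task Likert rating, 1-5
        wrong_paths: int   # incorrect paths taken before the correct one
        steps: int         # discrete steps to complete the task

    records = [
        TaskRecord("P1", "video_call", ease_rating=4, wrong_paths=1, steps=6),
        TaskRecord("P1", "message_chat", ease_rating=5, wrong_paths=0, steps=4),
    ]
    print(records[0])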

Data Collection

 
 

Due to the non-disclosure agreement, I cannot present the findings, but I can explain the types of statistical analysis I carried out.

Mean ratings of task ease:

Data Analysis

 
[Chart: Mean ease ratings per task]
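The chart above summarizes this kind of computation; here is a minimal sketch with made-up ratings, since the real data is under NDA:

    # Hypothetical 5-point Likert ease ratings per task; real data is under NDA.
    from statistics import mean

    ease_ratings = {
        "video_call":      [4, 5, 3, 4, 4, 5, 3, 4],
        "conference_call": [2, 3, 3, 2, 4, 3, 2, 3],
        "message_chat":    [5, 4, 5, 4, 5, 4, 4, 5],
    }

    for task, ratings in ease_ratings.items():
        print(f"{task}: mean ease rating = {mean(ratings):.2f}")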
 
 

Number of clicks taken to complete the tasks:

 
[Chart: Number of clicks per task]
 
 

Confidence intervals for the task-ease ratings:

(significance level α = 0.05, 95% confidence interval)

 
[Chart: 95% confidence intervals for task-ease ratings]
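For a small sample like this, the 95% confidence interval around a mean rating is typically computed with the t-distribution; a sketch with placeholder ratings:

    # Sketch: 95% confidence interval for a mean ease rating (t-distribution,
    # suitable for small samples). The ratings below are placeholders.
    import math
    from statistics import mean, stdev
    from scipy import stats

    ratings = [4, 5, 3, 4, 4, 5, 3, 4]           # one task's hypothetical ratings
    n = len(ratings)
    m = mean(ratings)
    sem = stdev(ratings) / math.sqrt(n)          # standard error of the mean
    t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1) # two-tailed critical value, alpha = 0.05
    print(f"mean = {m:.2f}, 95% CI = ({m - t_crit * sem:.2f}, {m + t_crit * sem:.2f})")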
 

For the messaging task, there were two ways users could complete it, and the client wanted to know which method was preferred and which was more efficient.

The test showed that Method 2 was the users' first choice for completing the task, but it took longer and produced more errors. So I suggested new designs to the client in which Method 1 (the efficient method) is the obvious path to pursue, rather than Method 2, which users found less efficient.

[Chart: Comparison of Method 1 and Method 2]
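As an illustration of the comparison behind this chart (all numbers are placeholders; the real results are under NDA):

    # Sketch: comparing the two messaging methods on completion time and errors.
    # All numbers are placeholders; the real results are under NDA.
    from statistics import mean

    methods = {
        "Method 1": {"times_sec": [32, 28, 35, 30, 33], "errors": [0, 1, 0, 0, 1]},
        "Method 2": {"times_sec": [48, 52, 45, 50, 47], "errors": [2, 1, 3, 2, 2]},
    }

    for name, data in methods.items():
        print(f"{name}: mean time = {mean(data['times_sec']):.1f}s, "
              f"mean errors = {mean(data['errors']):.1f}")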

A/B Test

 

At the end, I calculated each participant's score on the System Usability Scale (SUS), the standardized questionnaire originally developed by John Brooke. Participants answered its 10 questions on a 5-point Likert scale (Strongly Disagree to Strongly Agree). The response format is shown below.

System Usability Scale (SUS)

 
 

The 10 questions used to evaluate each participant's SUS score:

  1. I think that I would like to use this system frequently.

  2. I found the system unnecessarily complex.

  3. I thought the system was easy to use.

  4. I think that I would need the support of a technical person to be able to use this system.

  5. I found the various functions in this system were well integrated.

  6. I thought there was too much inconsistency in this system.

  7. I would imagine that most people would learn to use this system very quickly.

  8. I found the system very cumbersome to use.

  9. I felt very confident using the system.

  10. I needed to learn a lot of things before I could get going with this system.

 

My team and I interpreted the answers to those questions and calculated a SUS score for each participant to gauge the usability of the application. According to Jeff Sauro's research, a score of 68 or above signifies that the application is usable.
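For reference, standard SUS scoring works like this: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A sketch with one hypothetical response set:

    # Standard SUS scoring: odd items contribute (response - 1), even items
    # (5 - response); the sum is scaled by 2.5 to 0-100. Responses are hypothetical.
    def sus_score(responses):
        """responses: 10 Likert answers, 1 = Strongly Disagree ... 5 = Strongly Agree."""
        assert len(responses) == 10
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5

    participant = [4, 2, 4, 1, 4, 2, 5, 2, 4, 2]  # one hypothetical response set
    print(f"SUS = {sus_score(participant)}")      # scores of 68+ read as usable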

 
 

After analyzing the data and interpreting the participants' think-aloud comments, I designed wireframes for the suggested next-generation Sigma Client mobile application. Due to the NDA, I am not able to show them here.

Wireframes

 
 

1) Client Communication

Since I was the client-team facilitator, I communicated with the REDCOM team throughout the usability test. Effective communication is a must for achieving the desired, fruitful test results.

2) Keep it objective, Keep the ego aside. 

It is really important for usability testers to stay objective while testing a web or mobile application. If users do not interact with the application the way they are supposed to, it does NOT mean the users are inefficient. It means, perhaps, that the app is NOT USABLE.

Learning Points