Usability Assessment of a Visualization Website for Public Health Practitioners
Project Summary
January 2021 — April 2021
Northwest Center for Public Health Practice, Uba Backonja, Melinda Schultz, Greg Whitman, Anna Trakhman, Jay Cunningham, Jessie Zhang
A usability evaluation is needed to ensure a website built for rural public health practitioners is usable before its launch.
Identify problems and make recommendations that accommodate the technical limitations of using Tableau as the visualization tool.
Reported Findings, Recorded Data, Annotated Data
6 remote usability sessions were conducted with public health practitioners whose practices ranged from small, remote clinics to statewide institutions. Each session required participants to share their screen while completing 8 tasks (5-7 subtasks each) and thinking aloud. Session recordings were reviewed and annotated to create 150 data points, which were used to categorize issues by frequency and severity. These findings were used to argue for an appendix, a simplified user interface, and increased visual contrast for navigation features.
Findings
Homepage & Site Navigation
"Drugs could be associated with injury and violence prevention because of injury. Sometimes it’s hard to tell which one, so I would poke at both." - P5
- Participants used a process of elimination when trying to find public health subtopics such as suicide or drug abuse.
- Participants contested how some public health subtopics were classified.
- Ambiguity about what the training topics cover and what their purpose is.
- The gray color palette of the original application was considered boring.
- Uncertainty about where the data comes from.
Dashboard
"I didn't even notice that there were more tabs." - P1
- State icons served as filter buttons but instead confused users.
- The gray subheadings were not accessible and often went unnoticed by participants.
- There was no distinction between rural and non-rural communities.
- All state counties were denoted by a single color, making the map difficult to read.
- The type-of-jurisdiction filter was ambiguous and confusing.
- Filters were usable by all participants, but Tableau's auto-refresh after each selection was irritating.
- To use the application, users must always deselect all counties before selecting the relevant ones.
Tableau Features
- Tableau-specific features were often ambiguous.
- The website could act as an intermediary, with lessons teaching users how to use Tableau.
- Tableau tooltips (the hover menu) were not apparent to first-time users.
I started the project by understanding the context of the application. While rural public health clinics face higher rates of public health crises, they are at a greater disadvantage in addressing them than their urban counterparts.
Our clients selected participants from a list of volunteers. Because we did not engage in the recruitment process, I used an IRB-compliant REDCap form to collect participant consent and to understand participant biases. Below are the discrepancies between our participants and the anticipated end users of the final product.
As the website was still being built, I audited the existing information architecture to understand what content and features could be evaluated.
Based on the information available, 5 research questions were formulated to guide our study.
1. How easily and successfully can users navigate the SHARE-NW website to access the data visualizations they’re looking for?
2. How well can users manipulate the data visualizations to better understand the data?
3. How well do users understand the content, such as charts and icons?
4. Is there any critical information missing that users need to better comprehend the visualization?
5. Are users able to locate and identify relevant training materials related to the public health topic of interest to them?
Many quantitative metrics were deemed unreliable due to participants' unstable internet connections. Therefore, this study focuses on qualitative data as evidence.
These research questions guided our study toward 8 tasks (5-7 subtasks each), filled with open-ended, closed-ended, and Likert-scale questions.
The sessions were planned to run for an hour, with additional time allotted for technical issues. I participated in 3 of the 6 usability sessions, switching between note-taking and moderating.
Notes and recordings were reviewed to conduct a thematic analysis. Grouping findings by category allowed us to identify the prevalence of different issues.
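As a purely illustrative sketch of this tallying step, the annotated data points could be grouped by category and severity with a few lines of Python. The field names and example records below are hypothetical, not the study's actual annotation schema.

```python
from collections import Counter

# Hypothetical annotated data points (illustrative field names, not the
# study's actual annotation schema). The real dataset had 150 entries.
annotations = [
    {"participant": "P1", "category": "Dashboard tabs", "severity": "high"},
    {"participant": "P5", "category": "Subtopic classification", "severity": "medium"},
    {"participant": "P3", "category": "Tableau tooltips", "severity": "high"},
]

# Tally how often each (category, severity) pair appears so the most
# frequent, most severe issues surface first.
tallies = Counter((a["category"], a["severity"]) for a in annotations)

for (category, severity), count in tallies.most_common():
    print(f"{category} ({severity}): {count}")
```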