This essay draws on a July 2020 interview with Graphicacy Creative Director Jeffrey Osborn and Lead Data Visualization Designer and Associate Creative Director Carni Klirs about their COVID-19 dashboard collaboration with a team at Johns Hopkins University. All quotations in this essay were transcribed from that interview.
Introduction
Data visualizations, especially dashboards, can become information-rich reading experiences that spark reason and intuition. Data designer Steve Wexler notes that a dashboard is a “visual display of data used to monitor conditions and/or facilitate understanding.”
When making dashboards, data designers can consider creating a reader experience instead of a user experience. This reading experience is not just about completing tasks and interacting with charts, zooms, and filters — a common goal of data dashboards. Anyone who’s read a great book or had a pen pal knows that they experience a correspondence, not just a utilitarian interaction: between themselves and the text, and sometimes even the author of the text. If data designers can think of their users as readers instead, they can promote a closer correspondence between often distant camps — scientists and the rest of the public.
Like a high-powered microscope or telescope, data visualizations compel us to lean closer, to see and understand with clarity what was once not visible. Yet unlike high-powered microscopes and telescopes, the public now has easy access to data visualization as a tool. During a pandemic, this tool can also become a way to promote graphicacy: literacy in reading, understanding, and even making charts. Yet literacy isn’t just performed at the individual level. It’s also about becoming a part of a culture — or even subculture — that’s into making meaning from data, and even taking meaningful action informed by data. As past public health emergencies illustrate, charts that better relate facts to everyday people can also instigate new degrees of data literacy.
Testing Trends Tool
Consider the “Testing Trends Tool,” a data visualization dashboard for a preeminent pandemic website, the Johns Hopkins University Coronavirus Resource Center. This project, a collaboration between Graphicacy and public health experts at Johns Hopkins University, shares the purpose of dashboards, defined by Wexler: Track trends in COVID-19 cases and tests, and facilitate understanding between those collecting the data, organizing it, and reading it in everyday settings.
The dashboard helps people working in public health departments — and also people in the general public who seek out information — complete tasks in picturing and communicating COVID-19 cases and testing trends, in their states and regions.
As stated on the dashboard’s website, the “Testing Trends Tool” introduces the reader to charts that
“lay out the key metrics for understanding the reach and severity of COVID-19 in a given area: number of new daily cases, tests per 100,000 people (testing rate), and percentage of tests that are positive (positivity rate).”
The dashboard also provides an interactive way to explore the data. Exploration offers value for those who have specific questions about the pandemic: users can begin at a high level and then search, sort, and filter to customize the experience according to their questions.
An effective data design for public health can include options to view the data simply and elaborately. University of Maryland Human-Computer Interaction Professor Ben Shneiderman calls this way of interacting with a data design “Overview first, zoom and filter, then details on demand.”
Typically, the reader begins with a high-level view; if they choose, they can zoom in to a particular example, filter out information as needed, and read details through interactive elements such as tooltips. For example, a map can give a big-picture view of the topic, allowing readers to grasp an approximate portrait. The reader then has options to delve into the data at more granular levels, if they choose. The accompanying text provides context for the experience.
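Shneiderman's pattern can be sketched in a few lines of code. The state records and field names below are hypothetical illustrations, not the dashboard's actual data or implementation:

```python
# A minimal sketch of "overview first, zoom and filter, then details on
# demand," using invented per-state records.
records = [
    {"state": "FL", "region": "South", "new_cases": 9000, "positivity": 0.18},
    {"state": "NY", "region": "Northeast", "new_cases": 700, "positivity": 0.01},
    {"state": "GA", "region": "South", "new_cases": 3500, "positivity": 0.14},
]

# Overview: one aggregate number gives the big picture.
overview = sum(r["new_cases"] for r in records)

# Zoom and filter: narrow to a region the reader cares about.
south = [r for r in records if r["region"] == "South"]

# Details on demand: the full record for one state, as a tooltip might show.
detail = next(r for r in records if r["state"] == "FL")

print(overview)                      # 13200
print([r["state"] for r in south])   # ['FL', 'GA']
print(detail["positivity"])          # 0.18
```

Each step narrows the reader's focus, mirroring the movement from map view to filtered table to tooltip.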
These activities support reasoning and intuition — the dual concerns of all designed products, which are about making experiences that are at once usable and meaningful. In any duality, there’s also an implied hierarchy. In the data world, it’s common to prize reason over intuition. Yet experts and everyday people need both reason and intuition to read data and make decisions from what they read.
Intuition helps us determine the signal from the noise. Data designer and educator Ben Jones argues,
“Only human intuition can discern between the useful patterns and the useless ones. They get their ‘usefulness’ from our goals and values.” Jones also argues that intuition guides us where to look next: “Often, the best outcome of an interaction with data is that we sense another, even better question to ask.”
Finally, intuition helps us to know when to stop looking and when to take action. Data visualization, then, involves intuition and reason. Data visualizations are not inherently intuitive, but people can learn how to read data intuitively. While empiricists celebrate reason, intuition relies upon our goals and values to assign meaning and purposes to a data visualization project.
Graphicacy’s “Testing Trends Tool” offers an example of how data visualization can extend our capability of thinking about a public health problem, and how it offers support in completing tasks, intuiting, and asking questions. I interviewed Carni Klirs, Lead Data Visualization Designer, and Jeffrey Osborn, Creative Director, about their experience working with public health experts, and what they learned about the role of data design in the public health sector.
According to Ben Jones, data visualization dashboards serve three purposes: one, help with a task; two, raise awareness; and three, support open exploration. For any technology, Jones describes how we can break down the user experience according to the user, the task, the data, and performance. The text below summarizes the interview with Klirs and Osborn according to Jones’s framing of how dashboards work.
Figure 1. Johns Hopkins University Coronavirus Resource Center, “Testing Trends Tool” (2020).
Figure 2. The legend describing how to read the charts.
Figure 3. These thumbnails of the snapshots from the map views were downloaded on July 15, 2020.
Purpose
With this dashboard, raising awareness was not the primary purpose. Users were already coming to the dashboard out of a place of concern. Primarily, this dashboard is about open exploration and offering help with a task. Specific tasks on the table view include seeing a two-week window of testing trends, whether increasing or decreasing, and filtering the views at the state and regional levels. The scope of the visualization is focused on the present, the past week, and the prior two weeks. The tilemap view allows a reader to see spatial patterns.
Users
The users of this dashboard tool include public health experts, decision-makers, and the general public. While information about COVID-19 testing rates had been reported since early in the pandemic, it was often displayed on its own. The “Testing Trends Tool” is one of the first visualizations to display testing data alongside case counts, to provide a more holistic lens into the outbreak. For public health experts, snapshots of the current situation can be downloaded onto the desktop and displayed in slideshow presentations and social media. A table of small multiple “sparklines” helps readers see the past two weeks for every state — views that can be sorted and filtered across multiple facets.
A tilemap view, meanwhile, shows a seven-day moving average, and whether those averages are increasing or decreasing. The map offers a spatial way to see imprecise patterns: the big picture. For many well-informed general users, testing also provides a window into how states are managing the pandemic. The number of confirmed cases and testing rates are important, but the percent positivity rate established by the World Health Organization is important too. Rates above 5% indicate broader problems. The site explains: “A positivity rate over 5% indicates a state may only be testing the sickest patients who seek out medical care, and they are not casting a wide enough net to identify milder cases and track outbreaks.”
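The seven-day moving average that the tilemap relies on can be computed in a few lines. This is a generic sketch with invented daily counts, not the dashboard's actual code:

```python
# A trailing moving average smooths day-to-day reporting noise.
def moving_average(values, window=7):
    """Average each trailing `window` of values; windows are shorter at the start."""
    return [
        sum(values[max(0, i - window + 1): i + 1]) / min(i + 1, window)
        for i in range(len(values))
    ]

daily_cases = [10, 20, 30, 40, 50, 60, 70, 80]  # hypothetical daily new cases
smoothed = moving_average(daily_cases)

# The last point averages the final seven days: (20 + ... + 80) / 7 = 50.0
print(smoothed[-1])  # 50.0
# Comparing the last two smoothed points gives the trend direction.
print(smoothed[-1] > smoothed[-2])  # True: the average is increasing
```

Comparing consecutive smoothed values, rather than raw daily counts, is what lets the tilemap report a stable "increasing" or "decreasing" signal.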
Tasks
When Johns Hopkins first started tracking the Coronavirus outbreak, they focused on mapping confirmed cases and the number of deaths. However, this provides only a partial view, Klirs notes. Confirmed cases rely upon testing: what can be discovered within this system through tracking, scraping, and compiling. Klirs likens the resulting visualizations to the tip of the iceberg; they miss what’s outside of the system. Testing regimes can provide some inferences into what’s happening on the ground outside the system and below the iceberg. As testing regimes become more rigorous and accessible, the entire system for reporting data benefits. More people get on board, and the data becomes more accurate.
Klirs believes that what we need is a holistic view. A holistic view would help users ask a simple question: is COVID-19 in my state under control or not? To answer in the affirmative, a user would need to discover three things: one, that the number of confirmed cases is down (a two-week trend); two, that the number of tests per 100,000 people is increasing; and three, that the percent of positive cases from tests is below 5% (the WHO threshold).
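The three-part check can be expressed as a small function. The function name, parameters, and trend encoding below are illustrative assumptions, not the dashboard's implementation:

```python
# A sketch of the "is COVID-19 under control?" question as a three-part test.
WHO_POSITIVITY_THRESHOLD = 0.05  # the 5% WHO threshold cited above

def under_control(cases_trend, testing_trend, positivity):
    """All three conditions must hold for a 'yes' answer.

    cases_trend:   two-week change in confirmed cases (negative = declining)
    testing_trend: change in the testing rate (positive = increasing)
    positivity:    fraction of tests that come back positive
    """
    cases_declining = cases_trend < 0
    testing_increasing = testing_trend > 0
    positivity_ok = positivity < WHO_POSITIVITY_THRESHOLD
    return cases_declining and testing_increasing and positivity_ok

print(under_control(cases_trend=-0.12, testing_trend=0.08, positivity=0.03))  # True
print(under_control(cases_trend=0.05, testing_trend=0.08, positivity=0.03))   # False
```

Because all three conditions must hold at once, a state with falling cases but low testing, or rising testing but high positivity, still fails the check.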
Design factors
In this project, the user experience was the one factor that needed acute attention. The original Johns Hopkins Coronavirus Resource Center team consisted of data scientists, who in January 2020 created maps with ArcGIS, and charts using the coding language Python and a charting library, Plotly. Quickly, the site became an authoritative resource, and the team had ambitions for creating new visualizations to answer questions on critical trends related to the outbreak. While the existing charts were functional, the team wanted a partner with more expertise in designing the user experience. While the “Testing Trends Tool” needs to be useful more than it needs to be beautiful, Klirs notes, “Good design and good user experience are inseparable from being a useful tool.”
The research backs Klirs up here. In the early 1990s, two Japanese researchers, Masaaki Kurosu and Kaori Kashimura, studied the user interfaces of ATMs in Japan. “All versions of the ATMs were identical in function,” Don Norman writes in his book Emotional Design, “the number of buttons, and how they operated, but some had the buttons and screens arranged attractively, the others unattractively. Surprise! The Japanese found that the attractive ones were perceived to be easier to use!” The correlation between aesthetics and usability was later replicated in another culture, in Israel.
What explains these results? Norman answers:
“Emotions aid in decision making. Positive emotions are as important as negative ones — positive emotions are critical to learning, curiosity, and creative thought….Being happy broadens the thought process and facilitates creative thinking.”
Aesthetics offer small gifts in a data visualization user experience. These aesthetics include classic graphic design principles like open space, not using too many typefaces and colors, alignments, and hierarchy. They also include reducing visual clutter and using formal attributes like position, size, and color of the design elements mindfully to draw the audience’s attention to important places. Aesthetics enhance the user’s ability to interact with affordances, the buttons and other tools that indicate particular actions to take.
While function is the first concern for public health dashboards, it’s the aesthetics, the form, that makes the design usable. For data visualization, Alberto Cairo adds nuance to the classic design dictum coined by architect Louis Sullivan, form ever follows function:
“The form of the technological object must depend on the tasks it should help with….the form should be constrained by the functions of your presentation….the better defined the goals of the artifact, the narrower the variety of forms it can adopt.”
Given that the goals of the dashboard do not include advocacy or awareness-raising, a more subtle and information-rich reading experience seems appropriate for an audience constituted of experts or motivated everyday citizens. For this reason, Klirs and Osborn chose to use small multiple charts in the dashboard. Small multiples, a term coined by data visualization expert Edward Tufte, were first popularized in newspaper “sparklines” of financial exchange data. “A sparkline is a small intense, simple, word-sized graphic with a typographic resolution,” writes Tufte.
“Sparkline graphics can be everywhere a word or number can be: Embedded in a sentence, table, headline, map, spreadsheet, graphic. Data graphics should have the resolution of typography.”
When stacked into a grid with clean alignments, and consistent visual encoding, sparklines become small multiples. These small multiples allow the reader to identify variation between each chart within an eye scan, aiding comparisons and other reasoning tasks. Highlight colors and warning colors can help readers see outliers in the data or exceptional changes in the magnitude and direction of the data. To summarize the charts, Graphicacy includes indicator arrows, with orange-red indicating a negative change, and blue-green indicating a positive change. However, the reader must be mindful of context. Sometimes an upward arrow is bad, as in increasing confirmed cases and percent positivity, and sometimes it is good, as in increasing testing rates. According to Andy Kriebel and Eva Murray, indicator arrows are:
“Commonly used in dashboards to show performance against a reference point. For example, if the quarterly revenue figures are below target, a downward arrow, colored in red, can be placed next to the charts or numbers to indicate, at a glance, whether a result is good or bad.”
This visual language of tracking money works well for science and public health data too. A lot of information, packed in a small space, can provide the conditions for assessing health, be it economic or medical.
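The context-dependent arrow logic described above can be sketched as a small lookup: the same upward arrow is "bad" for cases and positivity but "good" for testing rates. The metric names and color labels are illustrative assumptions, not Graphicacy's code:

```python
# Whether "up" is good depends on the metric, so direction alone
# cannot determine the indicator color.
GOOD_WHEN_RISING = {"testing_rate"}             # up is good
BAD_WHEN_RISING = {"new_cases", "positivity"}   # up is bad

def indicator(metric, change):
    """Return (arrow, color) for a metric's change over the reporting window."""
    arrow = "↑" if change > 0 else "↓"
    rising_is_good = metric in GOOD_WHEN_RISING
    good = (change > 0) == rising_is_good
    color = "blue-green" if good else "orange-red"
    return arrow, color

print(indicator("new_cases", +0.10))     # ('↑', 'orange-red')
print(indicator("testing_rate", +0.10))  # ('↑', 'blue-green')
print(indicator("positivity", -0.02))    # ('↓', 'blue-green')
```

Separating direction (the arrow) from judgment (the color) is what lets the same glyph carry opposite meanings across metrics without confusing the reader.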
Figure 4. This snapshot of the Florida ‘small multiple’ was downloaded from the dashboard on July 15, 2020.
The small multiples also serve a less visible goal than task completion and open exploration: they help the people reporting data to the dashboard at the state level develop more consistent data reporting practices. Klirs says that this dashboard “shines a light on health departments across the United States.” The reader can see this in the data’s anomalies and outliers, which point to reporting errors.
For example, some of the small multiple charts have tall bars indicating a 100% positivity rate. Why do these anomalies appear? “There are some instances,” Klirs says, “where a state chooses a methodology and then gets backlogged in their reporting. This creates a false spike. For example, a state might not report for three days in a row, and then report everything in one day, or only add positive cases as they update.”
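The backlog effect Klirs describes can be illustrated numerically. All figures below are invented for the sketch:

```python
# A false spike: a state catches up on three days of backlogged reporting
# but includes only the positive results, so positivity appears to be 100%.
def positivity(positives, total_tests):
    return positives / total_tests

# Three backlogged days, reported all at once, positives only.
backlog_positives = 120 + 90 + 150           # 360 positive results
backlog_tests_reported = backlog_positives   # negatives never reported

print(positivity(backlog_positives, backlog_tests_reported))  # 1.0, a 100% spike

# The same backlog with total test counts included tells the real story.
true_total_tests = 4000 + 3000 + 5000
print(positivity(backlog_positives, true_total_tests))  # 0.03, i.e. 3%
```

The spike is an artifact of bookkeeping, not an outbreak, which is why the tall 100% bars in the small multiples flag a reporting problem rather than a public health one.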
This tool, then, is intended to provide a slight nudge to data workers in state public health departments to improve their data bookkeeping and reduce discrepancies between states. For example, when they upload data after a lag, they should include the total number of test results and not just positive ones. Here, another goal emerges: the “Testing Trends Tool” is a teaching tool for close reading and spotting errors, with the aim of correcting them. These errors are much easier to spot through the small multiple displays.
Figure 5. Examples of anomalies in the data reporting. Alabama and Arkansas report 100% positivity rates. This is due to discrepancies in the data bookkeeping from these states.
Many mission-driven organizations need to improve pipelines in their data collections. They also need to show transparency in how they report their data for diverse purposes, such as accountability, fundraising, education, and advocacy. Data designers can help these organizations fulfill all of these needs by first co-creating functional working processes for gathering, structuring, and visualizing data — ultimately leading to platforms that display their data to intended audiences.
To arrive at their current design, Klirs and Osborn worked with the Johns Hopkins team on an iterative design process. Due to the emergency, the project was conceived and produced in a design sprint of four weeks.
This necessitated quick gut checks using empathy and intuition (putting oneself in the shoes of the user and asking, “If I was looking at this dashboard for the first time, would I know how to interpret it?”) from the internal teams at Graphicacy and Johns Hopkins. They didn’t have sufficient time for more traditional user testing. Now that the tool is live, Johns Hopkins is collecting feedback from real users. As feedback arises, Klirs notes, improvements to the design will follow.
Conclusion
During COVID-19, the public health sector needs designers to provide a bridge between slow, nuanced scientific study and the urgent needs of data curators and the public. These needs include being informed and taking joint actions to contribute to solutions. To create this bridge, data designers work with a visual language and a set of tools and technologies to build informative platforms.
Data design is an ongoing human dialogue rather than an automated, computational innovation. When thought through well, a data design can help readers overcome barriers of complexity in how they think about problems in the world. It can also facilitate dialogues that lead to informed caring and positive action.
Each day, data designers choose where to direct their efforts: toward surveillance and profit, or toward the interests of science, the social good, and public health. Amidst COVID-19, people have rallied around the mantra, flatten the curve. Data visualization designers, meanwhile, need to help turn the tide of a visual culture that can often misread, misunderstand, or misuse charts.
Data designers can think of their users as readers, who don’t just interact in a transactional way with the data — they correspond with it. A correspondence suggests a more long-term relationship. Here, data designers create experiences that promote a culture of data literacy in reading, understanding, and making charts that are truthful and purposeful. This type of literacy creates a culture around data too. Data literacy, or graphicacy, is not just individual. It’s social. Graphicacy is for everyday people too.
Joshua Korenblat is an Assistant Professor of Graphic Design at the State University of New York (SUNY) at New Paltz, where he teaches a data visualization course. His research includes visual communication, literacy, sustainability, and the digital humanities. A co-founding team member of Graphicacy, he continues to contribute to selected projects as an Art Director-At-Large.