
COVID is exposing the problem with percentages in research

Every day we receive a numerical update on the scope and severity of the COVID-19 crisis in America and around the world. Usually, we are given the latest total of cases and deaths across the country, in our state, and then in our local county. Notably, the numbers are rarely (if ever) reported as percentages. In an age where percentages are used to explain almost everything in popular “research,” news agencies aren’t using them for the pandemic. The reason? Percentages don’t tell the story of the phenomenon. They don’t illustrate how contagious and unpredictable the virus is.

COVID research problems

For example, in my town of about a quarter-million people, 0.24% of the people have had the virus and 3% of those people have died as of this writing. The numbers mirror those of the entire USA, where 0.36% of the population have had the virus and 5.7% of them have died (we won’t get into the reporting of cases here; these figures are for illustration). Taking my town as the illustration, if city leaders said, “Hey, just a fourth of a percent of the population is getting the virus,” we all know that wouldn’t tell the story in any way. And yet, before the pandemic, we were looking to percentages to tell us all sorts of things… but they don’t tell the story. That fraction of a percent doesn’t account for mitigating behaviors, like social distancing, or for the hotspots elsewhere in the state.
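To see why that fraction of a percent hides the story, it helps to translate the rates back into people. Here is a minimal back-of-the-envelope sketch in Python, using the approximate, illustrative figures above (a town of roughly 250,000 residents); the exact counts are not the point, only the translation from percentage to people.

```python
# Back-of-the-envelope sketch: turning the percentages above into rough counts.
# Figures are the approximate, illustrative ones from the text, not official data.
town_population = 250_000
infection_rate = 0.0024   # 0.24% of residents have had the virus
fatality_rate = 0.03      # 3% of those cases have died

cases = town_population * infection_rate    # roughly 600 cases
deaths = cases * fatality_rate              # roughly 18 deaths

print(f"About {cases:,.0f} cases and {deaths:,.0f} deaths in a town of {town_population:,}")
# The 0.24% figure says nothing about how those cases cluster, spread,
# or change week to week -- which is the story that matters.
```

Six hundred neighbors and eighteen deaths land differently than “a fourth of a percent,” yet even those counts, like the percentages, say nothing about spread, clustering, or timing.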

When reports come out regarding percentages of populations who think or do something, we decide that if “X” number of people are doing something, then it must mean something. The problem is that we don’t know what the percentages mean if we don’t explore the numbers further. Most popular research does not, so we don’t know what relates to the numbers, what caused them, or whether the number really matters to our lives. Yet the writer or speaker will usually offer an interpretation and (often) a solution built around the percentages. In the current virus-related context, most of us recognize that percentages aren’t representing the potential danger at hand. I want to add that, in most cases, percentages fall short of telling the story in modern marketing-shaped research.

Research is all about telling the story, or at least it should be. Percentages alone are silent on the matter despite their popularity today, and it’s something that many nonprofit executives, pastors, and CEOs know too well. Almost without exception, every time Arbor Research Group is hired, we step into a board room and find that the organization has paid (often feeling like they over-paid) for previous research. Sometimes they even have a nicely bound hard-copy report, but it doesn’t explain anything about who they are or where they are going, or give them any insights beyond how people (e.g. customers, congregations, employees) are grouped together.

The problem is that most “research” being done today follows a marketing model honed in the 1970s and 1980s by newspapers (e.g. USA Today), built on survey-based Likert questions. You know, the ones that ask, “From 1-5, answer from Strongly Disagree to Strongly Agree.” Additionally, the scope of popular research is so large, so national, or so focused on one part of the USA that we can’t be sure whether the findings apply to our situation. And demographic percentages, which is what most “research” today really is, rarely include any connection to values, variations, story, or context unless the study is mixed-methods (stats and qualitative), done by people with experience in analyzing interview text, and customized to a particular context.

The 3 problems with percentages

  1. Percentages don’t explore how they might be wrong. Most “research” today is a report of “how many people” fit into the category of doing or thinking something, but there’s no explanation as to how they got there, why they are in that grouping, and what it means. Expert research accounts for ways that the percentage might be wrong in what it’s indicating.
    • At one conference, a particular seminar was extremely well attended, so the conference organizers were excited about that presenter and wanted to know what made her seminar so (surprisingly) appealing to attendees. It wasn’t the content or the speaker. When we drilled in, we discovered that her room was a large seminar room located next to the restrooms. When people finally got out of the restrooms (there were really long lines!), they were so late for their desired seminar that they just ducked into the nearest one that seemed to have open seats in the back… and that was hers.
  2. Percentages are one side of the story. We sat in a seminar by a prominent speaker who was quoting statistics about adolescents, and it quickly became apparent that the numbers represented a largely White population in the southeastern United States. What was being reported as “representative” was actually true for only about half of the teenagers in the USA. The organization had paid a lot of money to bring this speaker in, but the numbers were inaccurate from beginning to end, especially for those in attendance.
    • There are studies that work to circumvent this. For example, in studies about adolescents such as the National Study of Youth and Religion, researchers used mixed methods (adding in interviews and focus groups) and explored the data over time to confirm and disconfirm the various findings along the way.
  3. Percentages rely on memory at one moment in time. This is the most problematic element of marketing-influenced surveys; it’s a fundamental problem, and few people recognize it. Most popular studies are done over the phone or online, and as psychologists would tell us, our answers often rely on memories that may or may not be accurate. Those who assist the police with lineups and sketch artists can tell you that our memory is not as accurate as we feel it is.
    • Our answers also depend on how we are “experiencing the world” at that moment. If we’re joyful, we may be more open to possibilities than when we’re squeezing in an interview between two difficult meetings with high levels of conflict. There are ways to compensate for this, including interviewing a large population, but even when the ratings truly represent how people act or think, we may not know why.

5 steps you can take to overcome the deficits of percentages

Most businesses, churches, and organizations will be “checking in” with their people post-COVID via some form of survey or research. Not all organizations are in a position to bring in a team like Arbor Research Group to help them. So, I want to give you five steps you can take to make sure you get the good data you need to lead well going forward.

  1. Know the scope. When we step into a context, we first work to “listen to the edges.” We keep gathering data until we quit hearing something new.
    • One of the best things we offer back to a group is that everyone in the organization has been heard. More than just filling out a survey, their voices and opinions matter. We usually do this through face-to-face interviews and surveys.
    • Leaders can benefit from distributing a survey and hiring someone (like Arbor Research Group) to do a series of interviews with a strong analysis. That combination is a great “one-two” to start with.
  2. Know the depths of the context. One change that’s coming is that “local” will matter more than national. National statistics have been found wanting these days; it’s been more helpful to have state-specific and even county-specific numbers tell us the story. The same is true with other research. Your context is particular and unique, and though nationally-scoped projects help point us in certain directions, the reality is that they may not be representative of your context at all.
  3. Know the connections. When you discover a number or hear a group of responses, be sure to ask “why?” or listen for moments where respondents offer connections. Words like “because” are gifts because (see what I did there?) people are telling you about causation, a rarity in research. However, the best methods for this, interviews and focus groups, take the most time and expertise to do well.
  4. Know the negative instances. Most people don’t like to take a survey. We are surveyed SO often these days on our phones, and some of the people most important to our work won’t complete a survey. It is beneficial to pursue data from those who may disconfirm what we think are “findings.” We need to know the instances that are not “true” about what we think; their indifference may be THE story of your market… but that story rarely gets heard in mass surveys conducted from cubicles.
    • The best way to do this is to think of a dozen people who may not respond to a survey, or who would be on the “negative” side, and to do a short interview with them using a series of 4-6 questions. You don’t need much more than that, but the questions do need to be specific.
    • When doing analysis of interviews and surveys, specifically look for the responses that suggest something other than what you think is true or that don’t support the main “percentage” (a simple sketch of this follows the list).
    • The bottom line is that you need to know these cases so that, if you’re asked, you have a ready answer about the ways the main percentages aren’t representative of everyone.
  5. Know the right “fix” to the problem. This is really the reason we have people do research, right? However, it’s surprisingly common that this is never explored in a research project, and the “fix” turns out to be the program or resources being offered by the organization doing the research. If the data doesn’t show what the “fix” should be, then there’s no confidence that those resources are the solution.
    • To illustrate, we can draw a comparison to vaccine research. Most popular research is comparable to the first stage of a vaccine, the “hey, we discovered a possible vaccine!” moment, and then we offer it to the world. But vaccines are then tested over time (too long a time for most people) to see if they work.
    • This is where an outside agency like Arbor Research Group can step in and be as unbiased as possible in helping organizations discover their path forward. We all know how bias and power dynamics shape opinions and decisions; an external agency helps to mitigate those.
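As a concrete (if simplified) illustration of step 4, here is a short sketch in Python with hypothetical survey responses; it computes the headline percentage and, just as importantly, lists the answers that don’t support it. The data structure and cut-offs are assumptions for illustration, not a prescribed method.

```python
# Sketch of step 4: report the headline percentage AND surface the negative instances.
# The responses, ratings, and thresholds below are hypothetical, for illustration only.
responses = [
    {"rating": 5, "comment": "The new program has been great for our family."},
    {"rating": 4, "comment": "Mostly positive, though communication is spotty."},
    {"rating": 2, "comment": "I stopped coming because nothing changed after the last survey."},
    {"rating": 1, "comment": "Honestly, I didn't think my answer would matter."},
]

favorable = [r for r in responses if r["rating"] >= 4]
disconfirming = [r for r in responses if r["rating"] <= 2]

print(f"Headline: {100 * len(favorable) / len(responses):.0f}% favorable")

# These are the voices the headline hides -- the negative instances worth chasing down.
for r in disconfirming:
    print(f"- ({r['rating']}/5) {r['comment']}")
```

In a real project, the disconfirming list would come from your survey tool’s export, and each person on it becomes a candidate for one of the short 4-6 question interviews described above.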

The context has just changed significantly; the rules and prospects are not the same anymore. The people you and I are working with have shifted their values 180 degrees in some instances and not at all in others (in spite of what the news headlines suggest). The organizations, churches, and businesses that invest in listening well will discover what is really going on and how widely (or not) it is true, and they will be able to stay ahead of the changes… and maybe even survive. The future is going to be different, but no one really knows in what ways yet. Get ahead of the game by listening well to your people. We can help if needed.

Photo by Stephen Dawson on Unsplash
