As it gets harder to determine which information online is reliable, it’s important to recognize when the research says something … well, different … from what the reporting says.
To illustrate the difference, I’m going to walk through two examples where the studies say something different from what the reporting implies (and what those differences are).
The examples are pretty non-controversial and minor on purpose.
Example One: Do You Like Strawberries?
You always — always — have to find out who was actually sampled in any research, and make sure all the important groups are represented. This is pretty obvious when you think about it: if you ask people at a Strawberry Festival whether they like strawberries, you’d probably be pretty surprised if most of them didn’t.
There’s nothing inherently wrong with this — almost all surveys and studies have some kind of limitation. But it does mean — to continue our example — that if you take your survey results from the Strawberry Festival and claim that all Americans like strawberries, you’re almost certainly going to be wrong.
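The Strawberry Festival logic is easy to demonstrate with a quick simulation. The numbers below — a 60% baseline of strawberry-likers and the attendance rates — are made up purely for illustration, not drawn from any real survey:

```python
import random

random.seed(0)

# Hypothetical population: 60% like strawberries (an assumed number).
population = [random.random() < 0.60 for _ in range(100_000)]

# Festival attendees are self-selected: assume 90% of likers show up,
# but only 10% of non-likers do.
attendees = [likes for likes in population
             if random.random() < (0.90 if likes else 0.10)]

true_rate = sum(population) / len(population)
festival_rate = sum(attendees) / len(attendees)

print(f"True share who like strawberries:  {true_rate:.0%}")
print(f"Share among festival attendees:    {festival_rate:.0%}")
```

Surveying only the festival crowd inflates the estimate dramatically — the sample answers a question about festival-goers, not about the population.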
With that in mind, let’s take a look at this breathless headline: "Survey: Employer-sponsored Health Insurance Is the Most Desired Form of Coverage for Americans."
The custom excerpt continues the same tone:
Nearly 90% of Americans prefer to receive their health coverage through an employer versus other means, according to a new survey. Quality, affordability and convenience are the key reasons for why Americans favor employer-sponsored insurance.
Seems pretty straightforward, doesn’t it? The main article even says, "81% of respondents would rather get coverage from an employer than through a government-provided health plan, a response consistent among Republicans, Independents and Democrats."
Seems like that’s a slam-dunk recommendation against single-payer. There’s just one small clause in one sentence that blows that conclusion out of the water (emphasis mine):
It was conducted from November 14 to November 19 and surveyed 2,334 people with employer-sponsored health insurance.
They surveyed nobody who has bought their insurance on the Marketplace. Nobody on Medicaid or Medicare. It’s rather safe to say that pretty much anyone they surveyed was likely to have only ever had employer-based insurance in their entire life.
And considering that Medicaid enrollment is expected to hit 100 million next month — 1 in 3 Americans — they definitely did the equivalent of surveying the Strawberry Festival.
That realization completely changes how this whole story feels. Imagine the headline "Nearly One In Five People At Strawberry Festival Dislike Strawberries." Feels very different, right?
At best, this headline should claim that a percentage of people who already have employer health insurance would like to keep it. Because of sampling bias, the question that was asked leads to an answer that only applies to a very specific group of people.
And that’s when the question mostly lines up with what they’re trying to answer.
Which is a great segue into a "study" that’s even more nothingsauce than this one.
Example Two: The Treadmill Clothesrack
It’s not just who you ask, but also what you ask. The article headline for this one is "Women With Preeclampsia Can Benefit From Targeted App-Based Intervention," and it claims that:
New research showed that the apps used to treat women with severe preeclampsia are well-received if they target specific factors like fat and sugar intake.
Except that’s not what it showed at all.
Let’s skip past the fact that the "researchers" only surveyed 35 patients, a sample size that I would have encouraged students in an undergraduate class to at least double.
The questions these researchers actually asked were tied to three things:
- What factors existed "that made it hard for [respondents] to maintain a healthy lifestyle"
- Whether more respondents said they would be "interested in an app-based intervention" than those who reported "personal struggles with health behaviors"
- Which intervention, out of a list, respondents said they would prefer
These are important questions to ask at a certain stage of exploratory research, sure. But they are not the same thing as the claim in the headline or article. We know nothing about whether or not such an app would actually work to change behaviors, or be more (or less) effective than anything else. The method to determine the "need" for such an app — whether more respondents said they would be interested in an app than said they struggled with health behaviors — is particularly weak.
After all, how many of us are getting ready to say "I’m interested in doing something to change for the better" next week with the New Year? How many people are interested in treadmills, ellipticals, and stationary bikes … but those things just end up becoming very expensive clothesracks? We would laugh at a "study" that found more people were interested in ellipticals than treadmills this year, ran the headline "People Who Need Exercise Can Benefit From Ellipticals That Measure Time On The Machine," and then claimed, "New research showed that the equipment used to improve cardiovascular health are well-received if they are ellipticals that track calories lost and time spent on the machine."
Why This Matters
While both of these examples read as mostly harmless overreach by authors trying to prove whatever point they have, there are also plenty of people intentionally misrepresenting studies to try to shape policies around transgender and identity issues.
Being able to look at just the most basic parts of a study or survey — who the participants were, what specific questions were asked, and whether those questions could actually support the claimed answers — will do you a world of good in separating the wheat from the chaff.