The most visible vaccine-skeptical public figures, the likes of Tucker Carlson or Senator Ron Johnson (R-Wisconsin), understand this. They don’t need to spread demonstrable falsehoods. They can simply focus night after night on outlier cases of severe side effects. Or they can selectively present results of scientific studies or government communications in ways that seem to suggest something ominous about either the virus or the vaccine. Or they can skirt the scientific question entirely in favor of ranting about how the government’s vaccine push is really about social control. Like any illusionist, they know that the most powerful tool available is not misinformation, but misdirection.

That subtle distinction is often lost on members of the media and the political establishment. At times, “misinformation” becomes a catch-all term for any material used to dissuade people from getting the shot, whether or not it is objectively false. For example, a recent New York Times article about the influential anti-vaxxer Joseph Mercola, titled “The Most Influential Spreader of Coronavirus Misinformation Online,” concluded by noting that Mercola had made a Facebook post suggesting that the Pfizer vaccine was only 39 percent effective against infection by the Delta variant. Mercola was accurately relaying the findings of a real study, one that had been covered by mainstream news outlets. The Times article tweaked him, however, for not mentioning the study’s other finding: that the vaccine is 91 percent effective against serious illness.

No doubt Mercola—an osteopathic physician who has made a fortune selling “natural” health products often advertised as alternatives to vaccines—would have done his followers a service by sharing that data point. Cherry-picking true statistics to sow doubt about vaccines is dangerous. But to sweep that example under the umbrella of misinformation is to engage in concept creep. Misinterpretation is not the same thing as misinformation, and this is not merely a semantic distinction. Facebook, YouTube, and Twitter are rightly under immense pressure to do more to prevent the spread of dangerous falsehoods on their platforms. They often take their cues from established media organizations. It would be a troubling development for online free speech if, in the name of preventing real-world harm, platforms routinely suppressed as “misinformation” posts that don’t contain anything objectively false. It’s hard enough to distinguish between truth and falsity at scale. It would be reckless to ask platforms to take on the responsibility of judging whether a user’s interpretation of the facts—their opinion about a matter of public policy—is acceptable or not.

“It for sure is the case that misinformation is making things worse,” said Gordon Pennycook, a behavioral psychologist at the University of Regina. “There are people who believe things that are false, and they read those things on the internet. That for sure is happening.” But, Pennycook went on, “the more you focus on that, the less you talk about the avenues in which people come to be hesitant that have nothing to do with misinformation.”

In his research, Pennycook runs experiments to figure out how people actually respond to online misinformation. In one study, he and his coauthors tested whether people would be convinced by the claim in a fake news headline after being exposed to it online. (Sample headline: “Mike Pence: Gay Conversion Therapy Saved My Marriage.”) In one phase of the experiment, exposure to fake news headlines raised the number of participants who rated the claim as accurate from 38 to 72. You could look at that and say online misinformation increases belief by 89 percent. Or you could note that there were 903 participants overall, meaning the headlines only worked on 4 percent of them.
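Both framings come out of the same three figures reported above; a quick back-of-the-envelope calculation shows how the 89 percent and 4 percent numbers are each derived (the variable names here are just for illustration):

```python
# Figures as reported from the Pennycook experiment described above.
believers_before = 38    # participants rating the claim accurate before exposure
believers_after = 72     # participants rating it accurate after exposure
total_participants = 903

newly_convinced = believers_after - believers_before  # 34 people

# Framing 1: relative increase in belief, measured against the baseline.
relative_increase = newly_convinced / believers_before
print(f"Relative increase in belief: {relative_increase:.0%}")  # ~89%

# Framing 2: share of all participants the headlines actually swayed.
swayed_share = newly_convinced / total_participants
print(f"Share of all participants swayed: {swayed_share:.0%}")  # ~4%
```

The same 34 people produce either number; the difference is purely in the choice of denominator, which is why the two framings feel so far apart.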

The current debate over vaccine misinformation sometimes seems to imply that we’re living in an 89 percent world, but the 4 percent number is probably the more helpful guidepost. It would still be a serious problem if only a small percentage of Facebook or YouTube users were susceptible to vaccine misinformation. They’d be more likely to refuse to get vaccinated, to get sick, and to spread the virus—and, perhaps, their false beliefs—to others. At the same time, it’s important to keep in mind that somewhere around one third of American adults are still choosing not to get vaccinated. Even if Facebook and YouTube could erase all anti-vaxx content from their platforms overnight, that would only take one bite out of a much larger problem.