Nahhhh! That makes no sense. In the very few areas where I’ve done a lot of homework and some listening to conversations about the news, it’s very clear that very, very few people (say 20%?) can spot fake news (enough adverbial uses of “very”?). The newscasters and opinion writers can’t or don’t, and neither can John Q. Public.
Though not the same thing, we know, based on hundreds of studies, that people can spot a liar only 54 percent of the time, a ratio perilously close to pure chance. And if spotting lies face-to-face is that hard, spotting fake news is unlikely to be any easier.
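Just how close 54 percent is to a coin flip is easy to check. Here is a minimal Python sketch (the 54-of-100 scenario is illustrative, not drawn from any particular study) asking: if a judge were guessing at random, how often would they score at least 54 correct out of 100?

```python
from math import comb

def binom_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided p-value: the chance of getting at least `successes`
    correct out of `trials` if accuracy were pure chance (p = 0.5)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# A judge scoring 54 of 100: the chance of doing at least that well
# by blind guessing is well above any conventional significance threshold.
print(binom_p_value(54, 100))
```

In other words, a lone observer scoring 54 percent is statistically indistinguishable from someone flipping a coin, which is exactly what "perilously close to pure chance" means.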
But back to my sample, which, BTW, is too small to be valid. At best the 20% is only a wild hunch. So I can’t and shouldn’t trust my intuition. How’s that for screwing up this entire blog? Still, I want to make a research point. What may be accurate about the Pew report, if its sampling is appropriately diverse and large, is that 80% of adults believe they can spot fake news. Obviously, that says nothing about their actual ability to spot it.
Let’s get this straight: both John Q Public and I are suffering from a flaming case of overconfidence bias, a widespread error.
Why do so many believe they can spot fake news?
My aim in this blog is to identify the issues behind overconfidence and suggest ways to improve our ability to recognize errors of judgment and choice. A side effect will be a richer and more precise language for discussing judgment and choice. Throughout, I am indebted to Nobelist Daniel Kahneman’s work, Thinking, Fast and Slow.
To begin, our automatic thinking skills create a number of problems. First, they conceal our ignorance and ignore the potential for ambiguity. Most of the time we work like “W” Bush, who once declared, “I don’t do nuance.” Stupidly typical of him. The truth is most of us suppress all doubts about our conclusions and build stories to make those conclusions plausible. And we do it quickly and automatically. After all, thinking differently takes time and training. A second problem, which my consulting experience reveals, is that most people have little training in bias and just blunder on.
A third problem requiring attention is the “law of small numbers”: drawing confident conclusions from a few anecdotes. For example, based on my limited experience I concluded that no more than 20% of people can spot fake news on Facebook, Google, or anywhere else. Yet I doubt I’ve heard more than 20 people talk openly about news I’m knowledgeable about. So though my conclusion might be correct, the sample is far too small to support it.
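The “too small for validity” point can be made concrete. A minimal Python sketch, assuming (hypothetically) that 4 of my roughly 20 listeners could spot fake news, computes a 95% Wilson score confidence interval around that 20% hunch:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical: 4 of 20 listeners spotting fake news, my "20%" hunch.
lo, hi = wilson_ci(4, 20)
print(f"{lo:.0%} to {hi:.0%}")
```

With only 20 observations the plausible range runs from the single digits to over 40 percent, which is exactly why a hunch built on a sample this size can’t be trusted.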
Finally, sample size, whether large or small, says nothing inherently about the quality of the sample. Taking just education as a variable, my sample was made up primarily of people from my apartment building, a group whose college-graduation rate falls below Minnesota’s 35%, and not an especially well-read one. Yet I had (erroneously) concluded that merely 20% of all adults can spot fake news.
In sum, both the quality and the size of my sample were questionable. All this indicates that my conclusions, and those of the Pew survey, are at the very least suspect.
So when quizzed, how do you talk about sample size and quality? Kahneman provides four thoughtful examples.
“Yes, the studio has had three successful films since the new CEO took over. But it is too early to declare he has a hot hand.”
“I won’t believe that the new trader is a genius before consulting a statistician who can estimate the likelihood of his streak being a chance event.”
“The sample of observations is too small to make any inferences. Let’s not follow the law of small numbers.”
“I plan to keep the results of the experiment secret until we have a sufficiently large sample. Otherwise we will face pressure to reach a conclusion prematurely.”
I’ve reframed Kahneman’s well-worn acronym on bias, WYSIATI (What You See Is All There Is), by inserting a negative and making it sound like Cincinnati: WYSINATI (What You See Is Not All There Is). It will force you to look further and make better judgments, choices, and decisions.