I regularly talk with my business clients about the difference between business science (technically, social science) and consultancy snake oil. It's an important issue because a lot of that snake oil is touted in business magazines and among execs. In today's world, where evidence-based management is available on an awful lot of issues, it's downright stupid to follow just any consultant's recommendation without analyzing it. That includes my own recommendations.
So how do you tell the difference between snake oil and genuine science? And do you need a research degree to judge the validity of most recommendations? Janet Stemwedel, one of those rare birds with two PhDs from Stanford, has simplified the issue for us. Furthermore, she doesn't think a PhD is necessary to assess the validity of science.
In two blog posts on the Scientific American site, she deals forthrightly with the issue. She emphasizes that although scientific knowledge is built on empirical data, and although that data varies significantly across disciplines, there are common patterns of reasoning that all scientists use.
In what follows, she spells out an approach that almost any critical thinker can use. Because the material is very relevant to much of what's going on in business (not to mention the political conversations regarding science), I am reprinting parts of two blog posts: the first distinguishing between baloney and real science, and the second explaining the scientific method.
The big difference (Karl) Popper identifies between science and pseudo-science is a difference in attitude. While a pseudo-science is set up to look for evidence that supports its claims, Popper says, a science is set up to challenge its claims and look for evidence that might prove them false. In other words, pseudo-science seeks confirmations and science seeks falsifications.
There is a corresponding difference that Popper sees in the form of the claims made by sciences and pseudo-sciences: Scientific claims are falsifiable — that is, they are claims where you could set out what observable outcomes would be impossible if the claim were true — while pseudo-scientific claims fit with any imaginable set of observable outcomes. What this means is that you could do a test that shows a scientific claim to be false, but no conceivable test could show a pseudo-scientific claim to be false. Sciences are testable, pseudo-sciences are not.
One of the things I like about her emphasis is that she's made the route to knowledge "potentially" open to any of us. So here's the core input on scientific reasoning:
One way to judge scientific credibility (or lack thereof) is to scope out the logical structure of the arguments a scientist is putting up for consideration. It is possible to judge whether arguments have the right kind of relationship to the empirical data without wallowing in that data oneself. Credible scientists can lay out:
- Here’s my hypothesis.
- Here’s what you’d expect to observe if the hypothesis is true. Here, on the other hand, is what you’d expect to observe if the hypothesis is false.
- Here’s what we actually observed (and here are the steps we took to control the other variables).
- Here’s what we can say (and with what degree of certainty) about the hypothesis in the light of these results.
- Here’s the next study we’d like to do to be even more sure.
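The reasoning pattern in the list above can be sketched in code. This is a toy illustration, not anything from Stemwedel's posts: it assumes a made-up, falsifiable claim ("this coin is fair") and simulated flip data, then asks how surprising the observation would be if the claim were true.

```python
import math
import random

def binomial_two_sided_p(k, n, p=0.5):
    """Probability, assuming the hypothesis (heads with probability p),
    of an outcome no more likely than the observed k heads in n flips."""
    prob_k = math.comb(n, k) * p**k * (1 - p)**(n - k)
    # Sum the probabilities of every outcome at least as "extreme"
    # (i.e., no more probable) than the one actually observed.
    return sum(
        math.comb(n, i) * p**i * (1 - p)**(n - i)
        for i in range(n + 1)
        if math.comb(n, i) * p**i * (1 - p)**(n - i) <= prob_k + 1e-12
    )

# Hypothesis: the coin is fair. If true, we expect roughly n/2 heads;
# a count far from n/2 is the kind of observation that would count against it.
random.seed(0)
n = 100
heads = sum(random.random() < 0.5 for _ in range(n))  # simulated fair coin

p_value = binomial_two_sided_p(heads, n)
print(f"{heads} heads in {n} flips, p-value {p_value:.3f}")
if p_value < 0.05:
    print("This result would be very surprising if the coin were fair:"
          " evidence against the hypothesis.")
else:
    print("This result is consistent with a fair coin:"
          " the hypothesis survives this test.")
```

The point of the sketch is the structure, not the statistics: the hypothesis is stated up front, the expected observations under "true" and "false" are spelled out, and the claim is exposed to data that could have contradicted it.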
Lest there be any question, the usual Webster definition of "hypothesis" is appropriate to the above: a proposition tentatively assumed in order to draw out its logical or empirical consequences and so test its accord with facts that are known or may be determined.
The underlined terms will take you to the relevant blog posts in Scientific American.