"A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it." - from Paul Graham's Hackers and PaintersI've thought for a long time about how often science is misused as a label. It started in college when I noticed how certain subjects - including political science, behavioral science, and notably, economics (because it was often referred to as a 'social science') - referred to themselves as sciences despite deviating from the scientific method whenever doing so proved convenient, conventional, or uncontroversial.
Using my somewhat narrow and certainly 'true on average' definition of the scientific process (observe, try to break, and test under strict experimental conditions), it seems to me that the opposite of science is statistics. This is not because statistics necessarily fails to do those things. I remember many economics lessons where our class discussed an observation, the attempts to model it, and the results that proved or disproved the hypothesis.
But in statistics, a 95% likelihood is good enough to represent fact. Some say that zero to non-zero is the biggest quantifiable jump - if so, the move from 'not 100%' to '100%' is surely the only relevant runner-up.
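To make that convention concrete, here is a quick sketch (the coin-flip counts are invented purely for illustration, and only the standard library is used): the same experiment, one extra head, and the conclusion jumps from 'nothing to report' to 'fact'.

```python
import math

def two_sided_p(heads, flips, p=0.5):
    """Exact two-sided p-value against a fair-coin null (for heads > flips / 2)."""
    tail = sum(math.comb(flips, k) * p ** k * (1 - p) ** (flips - k)
               for k in range(heads, flips + 1))
    return min(1.0, 2 * tail)

# 60 heads in 100 flips: p ~ 0.057, so the coin is reported as fair.
# 61 heads in 100 flips: p ~ 0.035, so the coin is reported as biased - now a 'fact'.
for heads in (60, 61):
    p = two_sided_p(heads, 100)
    verdict = "biased (a finding)" if p < 0.05 else "fair (nothing to report)"
    print(f"{heads} heads out of 100 -> p = {p:.3f} -> {verdict}")
```

Nothing resembling certainty changes between the two runs; only the side of an arbitrary threshold does.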
The key to knowing whether a given topic is science is found in the quote at the top of the post. The sole reason a scientist tries to break conventional wisdom is that, once broken, there is no repair. This is not quite how it works with statistics.
In statistics, a rock which suddenly floated off the ground would be described as 'variation' or 'error'. It would be noted with bemusement, but the event would likely pass without further incident. In science, a rock doing the same would not be dismissed as a single fluke observation among many other well-behaved rocks - it would be known as The End of Gravity and usher in a new era of scientific progress.
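As a toy illustration (all numbers invented), here is roughly how that plays out in aggregate: the floating rock barely nudges the summary and gets filed under 'outlier', while the law-of-nature reading of the very same data is falsified by that single observation.

```python
import random
import statistics

random.seed(0)

# 1,000 measured fall accelerations scattered around 9.8 m/s^2,
# plus one rock that read 0.0 m/s^2 because it floated off the ground.
readings = [random.gauss(9.8, 0.05) for _ in range(1000)] + [0.0]

mean = statistics.mean(readings)
print(f"mean acceleration: {mean:.2f} m/s^2")  # ~9.79, barely moved by the one rock

# The usual three-sigma rule quietly files the event under 'error'.
stdev = statistics.stdev(readings)
outliers = [x for x in readings if abs(x - mean) > 3 * stdev]
print(f"readings flagged as outliers: {len(outliers)} of {len(readings)}")

# The scientific reading of the same data: a universal law admits no exceptions.
print(f"every rock fell: {all(x > 0 for x in readings)}")  # False - The End of Gravity
```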
Now, the statistical approach is far from the worst mentality. In many cases, rigorous statistical analysis is the only possible approach. But statistics is useful only if applied to the correct problem and only if the measured inputs are perfectly understood. Otherwise, the key step of aggregating the results is marred by possible bias or distortion.
If a field deludes itself into considering its statistical approach 'scientific', what's the worst that can happen? I think there are two potentially significant ramifications.
First, it makes reversing the accepted wisdom a much more difficult task. I used to wonder what took so long for scientists to figure out that smoking was a major health risk; when I realized that not every 'two packs a day for two decades' smoker got lung cancer, I stopped wondering. The problem of proving the harm of a cigarette was not a scientific question but a statistical one, because every lifelong smoker with a healthy set of lungs disputed the hypothesis that smoking causes cancer.
The scientific problem thus redefined as a statistical one, the task became to show how an addiction to cigarette smoking reduced quality of life and increased the likelihood of a smoking-induced death. Doing this properly required carefully comparing groups with similar-enough characteristics so that cigarette smoking could be identified as the lone factor behind the different health outcomes. Doing so with the methods of statistics took, unfortunately, a lifetime.
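To give a sense of what that comparison looks like once the groups are finally assembled, here is a minimal sketch with invented counts standing in for decades of real cohort data. The point is what the output can and cannot say: a ratio of risks and a significance level, never a statement that holds for every individual smoker.

```python
import math

# Hypothetical matched cohorts (all counts invented for illustration):
# alike in everything we could measure except smoking.
smokers    = {"n": 1000, "lung_cancer": 140}
nonsmokers = {"n": 1000, "lung_cancer": 10}

p1 = smokers["lung_cancer"] / smokers["n"]
p2 = nonsmokers["lung_cancer"] / nonsmokers["n"]

# Relative risk: how many times more likely the outcome is among smokers.
print(f"relative risk: {p1 / p2:.0f}x")

# Two-proportion z-test: could a difference this large be chance alone?
pooled = (smokers["lung_cancer"] + nonsmokers["lung_cancer"]) / (smokers["n"] + nonsmokers["n"])
se = math.sqrt(pooled * (1 - pooled) * (1 / smokers["n"] + 1 / nonsmokers["n"]))
print(f"z = {(p1 - p2) / se:.1f}")  # far beyond any conventional threshold

# And yet every healthy lifelong smoker in the data remains a living 'counterexample'.
print(f"smokers without lung cancer: {smokers['n'] - smokers['lung_cancer']}")
```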
The second ramification is how it may tempt those with something to lose into leaning heavily on special-case explanations. I could again return to cigarettes here and discuss all the ways tobacco companies fought against research projects into their products. But let's instead imagine the 16th-century world viewing the 'flat Earth' idea in a statistical way. Magellan's descendants would probably still be sailing in circles around the globe, seeking a significant p-value to prove their hypothesis! Science proved a much more efficient guide in this case because it knew the Earth wasn't flat if someone sailed off in one direction and came back from the other. Magellan's expedition did it, and that was that.
I'd like to acknowledge the reverse of the argument here, even if it risks discrediting ('discrediting') the title of this post: I do not really see this as an either-or issue. I see science as the combination of Questions Without Answers and Questions Definitively Answered. Statistics occupies a strange in-between: Questions Prematurely But Probably Answered. It sounds stupid when I phrase it that way, of course, but it's totally understandable. A different way to describe the above: conclusions from good statistics often predict future science. Again, clunky, but the main idea here is the role statistical methods play in fulfilling a basic human need to understand how the world works. Like any product serving a need, statistical results have a healthy market share and will maintain that market share until something better comes along.
I don't prefer one approach to the other, despite my tone thus far in this post. I studied statistics all through college and I believe in its approach when applied correctly. And I acknowledge that the methods of science sometimes take a very long time. It is understandable in such cases if people look around for a faster way. But acquiring a taste for cookie dough is no way to resolve the problem of waiting for cookies to finish baking.
I think the key is to know when one approach is more appropriate than the other. Someone able to do this well will have a world of opportunity open up for them. The challenge of determining whether a new observation refutes a known truth (science) or is simply something to keep in mind for later (statistics) is no doubt a difficult one. But as we continue to accelerate the pace of accumulating irrelevant data about every aspect of our lives, I cannot think of a skill more important for navigating the unseen challenges of the future.