The current issue of Science News features an indictment of statistics by writer Tom Siegfried. He pulls no punches with statements like this:
“…a mutant form of math has deflected science’s heart…”
“Science was seduced by statistics…”
“…widespread misuse of statistical methods makes science more like a crapshoot.”
“It’s science’s dirtiest secret: …testing hypotheses by statistical analysis stands on a flimsy foundation.”
“Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous…”
Draw your own conclusions on whether science fails to face the shortcomings of statistics by reading Siegfried’s article “Odds Are, It’s Wrong.”
My take on all this is that the misleading results boil down to several primary mistakes:
- Confusing correlation with causation
- Extrapolating from the region of experimentation to unstudied areas
- Touting statistically significant results that have no practical importance
- Reporting nonsignificant results from studies that lack the power to detect differences that could matter a great deal in practice (the last two mistakes are sketched just after this list).
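To make those last two mistakes concrete, here is a minimal sketch in Python (my own illustration, not anything from Siegfried’s article), assuming NumPy and SciPy are available; the group sizes, means, and spreads are invented purely for demonstration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2010)

# Mistake 3: with a huge sample, a practically trivial effect
# (a 0.3-point shift on a scale where the spread is 15) still
# yields a tiny p-value and gets touted as "significant".
control = rng.normal(loc=100.0, scale=15.0, size=100_000)
treated = rng.normal(loc=100.3, scale=15.0, size=100_000)
_, p_trivial = stats.ttest_ind(treated, control)
print(f"huge n, trivial effect: p = {p_trivial:.2e}, "
      f"observed difference = {treated.mean() - control.mean():.2f}")

# Mistake 4: with only 8 subjects per group, even a large,
# practically important effect (a 10-point shift) will usually
# fail to reach significance, because the study is underpowered.
control_small = rng.normal(loc=100.0, scale=15.0, size=8)
treated_small = rng.normal(loc=110.0, scale=15.0, size=8)
_, p_underpowered = stats.ttest_ind(treated_small, control_small)
print(f"tiny n, large effect:   p = {p_underpowered:.2f}, "
      f"observed difference = {treated_small.mean() - control_small.mean():.2f}")
```

The point is that a p-value by itself says nothing about whether a difference matters; the observed effect size, and ideally a confidence interval and a power calculation done before the study, has to carry that judgment.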
I do not think statistics itself should be blamed. A poor workman blames his tools.
#1 by Carpenter on April 7, 2010 - 10:57 am
Guess I’m puzzled about how Mr. Siegfried wants to handle uncertainty and variability in his data if he doesn’t use stats.
I’m right with him, though, on the misuse of stats. One fellow I conversed with was a researcher trying to show that humans weren’t causing climate change. He was proud of himself for using better statistical techniques than his compatriots, but he hadn’t involved an actual statistician in his analysis. Which was obvious, unfortunately.
Since his statistics were crappy (not abysmally so, but not what you’d exactly call adequate), I can’t comment on his belief that his methods were better than others’. But his pride in telling me that statisticians who had heard his talk had noticed some of the same errors and inefficiencies I’d noticed, and that he’d rebuffed them all, was telling: he wanted me to trust him, not his methods. And the data be damned.
#2 by Carpenter on April 7, 2010 - 11:21 am
I guess I’m doubly puzzled by his apparently specific assertion that statistical hypothesis testing stands on a flimsy foundation. I know he doesn’t quite mean that, but it takes a careful reading of the article http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong to see that. And I wish he hadn’t said it that way, because there are careless and even malicious readers of Science News who will tangle that statement with their own.
#3 by Lyndsay on April 7, 2010 - 12:05 pm
Wow, the article you reference is extremely inflammatory. However, I think it shows just how important it is for every scientist to consult a professional statistician or obtain an advanced degree in statistics. As you say, don’t blame the tools, learn how to use them.