Archive for December, 2019
Testing the adage that if you drink beer before wine then you will feel fine
Just in time for partying hearty for Christmas today, my son-in-law Ryan, a chemist with 3M, alerted me to a statistical study published after last year’s holiday season in The American Journal of Clinical Nutrition that questioned the advice of grape or grain but never the twain. Naturally, being a drinker of these undistilled alcoholic beverages, I wondered if my tendency to drink beer before dinner and wine with the meal would pass the test. But being a wonk for design of experiments, I was most curious to see a randomized controlled multiarm matched-triplet crossover trial—pictured below—for this experiment on the order of addition for beer and/or wine.
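For fellow DOE wonks, here is a minimal sketch in Python of how a matched-triplet crossover assignment might look. The arm structure reflects my reading of the paper (beer-then-wine, wine-then-beer, and a single-beverage control, with orders reversed in a second session); the participant IDs, triplet count, and matching are purely hypothetical.

```python
import random

# Hypothetical sketch of matched-triplet crossover randomization: participants
# are first grouped into triplets matched on covariates (e.g., age and sex),
# then the three members of each triplet are randomized, one to each arm.
# The second session reverses the order of addition (the "crossover").
arms = {
    "beer-then-wine": ("beer -> wine", "wine -> beer"),
    "wine-then-beer": ("wine -> beer", "beer -> wine"),
    "control":        ("one beverage only", "the other beverage only"),
}

# Five made-up matched triplets of participants, P01 through P15.
triplets = [[f"P{3*t + i + 1:02d}" for i in range(3)] for t in range(5)]

random.seed(1847)  # the year of Carlsberg's Pilsner recipe, for reproducibility
for t, triplet in enumerate(triplets, start=1):
    random.shuffle(triplet)  # randomize triplet members across the three arms
    for arm, subject in zip(arms, triplet):
        session1, session2 = arms[arm]
        print(f"Triplet {t}, {subject}: {arm:>14} | {session1}, then after a washout, {session2}")
```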
Based on results from 90 participants, including a control group, “neither type nor order of consumed alcoholic beverages significantly affected hangover intensity (P > 0.05)”. What really mattered was the total consumption, although, interestingly, hangover intensity did not correlate with breath alcohol concentration (BrAC). However, the authors warn that
“The fact that we did not find a direct correlation between maximal BrAC and hangover intensity should not be misinterpreted as an invitation to drink until the cows come home. Likely, this correlation overall does exist but is not directly apparent in the narrow range of peak alcohol levels studied here.”
It’s disclosed at the end that Carlsberg provided the beer (premium Pilsner lager recipe from 1847) free of charge “for the sole purpose of utilization in this study”. Although I trust the authors’ disclaimer of any bias, perhaps further study is warranted with stronger beers such as a Belgian tripel. Maybe wine would then be best drunk first. To be continued…
Business people taking notice of pushback on p-value
Posted by mark in Basic stats & math on December 15, 2019
For the headline article of its November 17 Business section, my hometown newspaper, the St. Paul Pioneer Press, picked up an alarming report on p-values from the Associated Press (AP). That week I gave a talk to the Minnesota Reliability Consortium*, after which one of the engineers told me that he had also read this article and lost some of his faith in the value of statistics.
“One investment analyst reacted by reducing his forecast for peak sales of the drug — by $1 billion. What happened? The number that caused the gasps was 0.059. The audience was looking for something under 0.05.”
– Malcolm Ritter, AP, relaying the reaction to results from a “huge” heart drug study presented this fall by Dr. Scott Solomon of Harvard’s Brigham and Women’s Hospital.
As I noted in my May 1st blog post, rather than abandoning p-values, it would pay to simply be far more conservative by reducing the critical value for significance from 0.05 to 0.005. Furthermore, as pointed out by Solomon (the scientist noted in the quote), failing to meet whatever p-value one sets a priori as the threshold may not refute a real benefit—perhaps more data might generate sufficient power to achieve statistical significance.
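To put rough numbers on both points, here is a quick sketch using the statsmodels power calculator. The 0.3 effect size (Cohen’s d) and 80% power target are my own illustrative choices, not figures from the heart drug study:

```python
# Sketch of how sample-size requirements grow when the significance
# threshold tightens, and how a modest study can be underpowered.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Subjects per group needed to detect an assumed effect of d = 0.3
# with 80% power, at the traditional and the stricter threshold:
for alpha in (0.05, 0.005):
    n = solver.solve_power(effect_size=0.3, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: ~{n:.0f} subjects per group")

# Power of a study with only 100 subjects per group at alpha = 0.05:
power = solver.solve_power(effect_size=0.3, nobs1=100, alpha=0.05)
print(f"n = 100 per group: power = {power:.2f}")  # well short of 80%
```

Tightening alpha roughly doubles the required sample size here, and the 100-per-group study detects a real effect barely half the time—exactly the situation where a promising drug can “fail” a significance test.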
Rather than using p-values to make arbitrary, binary pass/fail decisions, analysts should treat this statistic as a continuous measure of calculated risk for investment. Of course, the amount of risk that can be accepted depends on the rewards that will come if the experimental results turn out to be true.
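To see how fragile a hard cutoff is, consider this small demonstration via scipy (the summary statistics are invented): two studies observing nearly identical effects land on opposite sides of the 0.05 line, even though the evidence they carry is practically the same.

```python
# Two hypothetical studies, 100 subjects per group, pooled SD of 1:
# observed mean differences of 0.285 and 0.270 yield p-values that
# straddle 0.05, although the underlying evidence barely differs.
from scipy import stats

for diff in (0.285, 0.270):
    t, p = stats.ttest_ind_from_stats(
        mean1=diff, std1=1.0, nobs1=100,
        mean2=0.0,  std2=1.0, nobs2=100,
    )
    verdict = "pass" if p < 0.05 else "fail"
    print(f"difference = {diff}: p = {p:.3f} -> {verdict} at the 0.05 cutoff")
```

Read as continuous risk measures, those two p-values say nearly the same thing; read as a binary gate, the gap between them is what cost that drug forecast a billion dollars.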
It is a huge mistake to abandon statistics because p gets hacked to come out below 0.05, or because p is used to kill projects when it comes out barely above 0.05. Come on, people: we can be smarter than that.
* “Know the SCOR for Multifactor Strategy of Experimentation”