Jittery gauges making people crazy on election night
Posted by mark in Graphics, Uncategorized on November 15, 2016
Early last Tuesday evening I went to the New York Times elections website to check on the Presidential race. It had Clinton favored, but not by much: just a bit over 50% at the time, with the needle wavering alarmingly (by my reckoning) toward the Trump side. A few hours later I was shocked to see it showing better than 70% odds for Trump. By the time I retired for the night the Times had him at nearly 100%, which of course turned out to be the case, to the surprise of me and many others, even President-elect Trump himself, I suspect.
Being a chemical engineer, I like the jittery gauge display; it actually is less unsettling for me than a fixed needle, which in my experience usually meant the measuring instrument had failed. Even more important, from my perspective as an aficionado of statistics, is the way this dynamic graphic expressed uncertainty, becoming less jittery as the night went on and returns came in. However, the fluctuating probabilities freaked out a lot of viewers, leading the NYT to publish this explanation of Why we used jittery gauges.
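Here is a minimal sketch, purely my own and surely nothing like the Times’ actual code, of how such a gauge can express uncertainty: each frame draws the needle at the forecast probability plus noise scaled to the current margin of error, so the wobble shrinks as the returns firm up. The probabilities and margins below are rough recollections of election night, not the Times’ numbers.

```python
import random

def needle_position(win_prob, margin_of_error):
    """One frame of a jittery gauge: draw the needle at the forecast
    probability plus noise scaled to the current uncertainty."""
    jitter = random.gauss(0, margin_of_error)
    return min(1.0, max(0.0, win_prob + jitter))

# Trump's win probability as the night wore on (rough recollections),
# with the margin of error shrinking as returns came in.
for label, prob, moe in [("7 pm", 0.45, 0.15),
                         ("10 pm", 0.70, 0.07),
                         ("1 am", 0.95, 0.02)]:
    frames = [needle_position(prob, moe) for _ in range(5)]
    print(label, " ".join(f"{f:.2f}" for f in frames))
```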
For an unbiased, mainly positive review of the Times’ controversial graphical approach to reporting election results, see this Visualizing Data blog.
“Negativity expressed towards the jitter was a visceral reaction to the anguish caused by the increasing uncertainty of the outcome, heightened by the shocking twist in events during the night, [but] I found it an utterly compelling visual aid.”
— Andy Kirk, author of Visualizing Data
P.S. Here’s a new word that I picked up while researching this blog: “skeuomorphism”, the designing of graphics to resemble real-world counterparts, for example, the Apple Watch’s clock-like interface. Evidently battles have been raging for years in the tech world over this approach versus flat, minimalist designs. I had no idea!
Scary statistics about Halloween
Posted by mark in pop, Uncategorized on October 27, 2016
I am torn over whether it would be scarier to dress up as the nightmarish Freddy Krueger of Elm Street or as a statistics instructor. Which would you rather be locked in a windowless room with? Hmmm… best you not answer that.
Anyways, here are some frightful facts about the upcoming holiday reported in yesterday’s USA Today:
- 171 million Americans plan to partake in Halloween festivities. Crazy!
- On average, women will pay double for “non-sexy” Halloween costumes. The “sexy” costumes cost on average around $30, while the demure ones (boo!) go for near $60.
- Witch and pirate are the top two costume choices, followed by Trump and Clinton. Hmmm… is this a case of perfectly negative correlation?
Happy Halloween!
Obscurity does not equal profundity
Posted by mark in Uncategorized on October 9, 2016
In 1989 I attended a debate where George Box defended the standard approach to design of experiments against the Taguchi method. To sum up his case, he simply projected three scraps of transparencies reading “Obscurity” “does not equal” “Profundity”. This created a memorable uproar among the Taguchi disciples in the audience.
I am reminded of this by the news that the winner of the 2016 Ig Nobel Peace Prize is this paper by University of Waterloo Ph.D. psychology candidate Gordon Pennycook et al., On the reception and detection of pseudo-profound bullshit. This treatise sorts out what is serious bullshit versus what is simply nonsense or harmless mundanity. It provides this example of pseudo-profundity from an actual tweet sent by a well-known New Age healer and advocate of alternative medicine:
Attention and intention are the mechanics of manifestation.
Evidently many people are not only prone to eating up stuff like this but also lack the ability to sniff it out. The Waterloo researchers tested a large number (280) of undergrads on a Bullshit Receptivity (BSR) scale. They then completed several follow-up studies, going all out to shovel the BSR. ; )
It all composts down to this: bullshit is not only more ubiquitous than ever before (being a big part of the internet) but also increasingly popular. The authors hope that their study will improve the detection of obscure pseudo-profundities, reducing receptivity to bullshit and thereby its generation. That would be good!
A curve in the road to grade inflation
The New York Times Sunday Review features an opinion by Wharton School Professor Adam Grant as to Why We Should Stop Grading Students on a Curve. He asserts that his peers now award over 40% of their grades at the A level, a percentage that has grown steadily for the last 30 years, as detailed in this March 2016 report by GradeInflation.com. I am not surprised to see my alma mater, the University of Minnesota, near the top of the chart of Long Term Grade Inflation by Institution because, after all, we pride ourselves on being nice.
During my years at the “U” most classes were graded on the curve, which Prof. Grant abhors for creating too much competition among students. However, it worked for me. I especially liked this system in my statistical thermodynamics class, where my final score of 15 out of 100 came out second highest in the class, that is, grade A. Ha ha. Just last week President Obama chastised the U.S. press for giving Trump a pass based on grading on the curve. I see no problem with that. ; )
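For readers who never suffered norm-referenced grading, here is a rough sketch of how a curve can turn a raw 15 out of 100 into an A. The z-score cutoffs are hypothetical, since every instructor curves differently.

```python
# A rough sketch of grading "on the curve": letter grades come from a
# score's standing relative to the class, not its raw value.
# Cutoffs below are hypothetical.
from statistics import mean, stdev

def curve_grade(score, class_scores):
    z = (score - mean(class_scores)) / stdev(class_scores)
    if z >= 1.0:
        return "A"
    if z >= 0.3:
        return "B"
    if z >= -0.3:
        return "C"
    if z >= -1.0:
        return "D"
    return "F"

# A brutal stat-thermo final: a raw 15/100 that is second highest
# in the class still earns an A.
scores = [16, 15, 9, 8, 7, 6, 5, 4, 3, 2]
print(curve_grade(15, scores))  # prints "A"
```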
I do grant Grant an A for creativity in coming up with a lifeline for struggling students: he allows them to write down the name of a brighter classmate on one multiple-choice question. If this presumably smarter student gets it right, the question earns full credit. My only suggestion is that whoever gets called on the most for providing lifelines should be graded A for being on top of the curve. But then I see nothing wrong with rewarding the best and the brightest.
The increasing oppression of soul-less algorithms
As I’ve blogged before*, algorithms for engineering and statistical use are near and dear to my heart, but not when they become tools of unscrupulous or naïve manipulators. Thus an essay** published on the first of this month by The Guardian on “How algorithms rule our working lives” gave me some concern: employers who rely on mathematically modeled ways of sifting through job applications tend to punish the poor.
“Like gods, these mathematical models are opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, are beyond dispute or appeal. And they tend to punish the poor and the oppressed in our society, while making the rich richer.”
– Cathy O’Neil
Of course we mustn’t blame algorithms per se, but rather those who write them and/or put them to wrong use. The University of Oxford advises mathematicians not to write evil algorithms; this October 2015 post passes along seven utopian principles for ethical code. Good luck with that!
P.S. A tidbit of trivia that I report in my book RSM Simplified: “algorithm” is an eponym of al-Khwarizmi, a ninth-century Persian mathematician who wrote the book on “al-jabr” (i.e., algebra). It may turn out to be the most destructive weapon for oppression ever to emerge from the Middle East.
* Rock on with algorithms? October 2, 2012
** Adapted from Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy — a new book on business statistics coming out tomorrow by “Math Babe” Cathy O’Neil.
“Bright line” rules are simple but not very bright
Just the other day a new term came to light for me: a “bright line” rule. Evidently this is commonplace legal jargon that traces back to at least 1946, according to this Language Log post. It refers to “a clear, simple, and objective standard which can be applied to judge a situation,” by this USLegal.com definition.
I came across the term in this statement* on p-values and statistical significance from the American Statistical Association (ASA):
“Practices that reduce data analysis or scientific inference to mechanical ‘bright-line’ rules (such as ‘p < 0.05’) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision-making.”
The ASA goes on to say:
“Researchers should bring many contextual factors into play to derive scientific inferences, including the design of the study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis.”
It is hard to argue with the adage that if the p-value is high, the null will fly, that is, the results cannot be deemed statistically significant. However, I’ve never bought into 0.05 as a bright-line rule. It is good to see the ASA dulling down this overly simplistic statistical standard.
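To see just how dull that bright line really is, consider this little illustration of my own (not from the ASA statement): in twenty coin flips, a single extra head flips the verdict from “not significant” to “significant”, though the evidence barely budges.

```python
# One head more or less in 20 coin flips flips the p < 0.05 verdict,
# though the evidence barely changes.
from math import comb

def p_value(heads, flips=20):
    """One-sided exact p-value for at least `heads` heads from a fair coin."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

for heads in (14, 15):
    p = p_value(heads)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{heads}/20 heads: p = {p:.3f} -> {verdict}")
# 14/20 heads: p = 0.058 -> not significant
# 15/20 heads: p = 0.021 -> significant
```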
I can see the value of “bright line” rules in legal processes, a case in point being the requirement that a Miranda warning be given to advise US citizens of their rights upon arrest. However, it is ludicrous to apply such dogmatism to statistics.
*The American Statistician, v. 70, no. 2, May 2016, p. 131
Models responsible for whacky weather
Posted by mark in Basic stats & math, pop, science on August 14, 2016
Watching Brazilian supermodel Gisele Bündchen sashay across the Olympic stadium in Rio reminded me that, while these fashion plates are really dishy to view, they can be very dippy when it comes to forecasting. Every time one of our local weather gurus says that their models are disagreeing, I wonder why they would ask someone like Gisele. What do she and her like know about meteorology?
There really is a connection between fashion and statistical models: the random walk. However, this movement is more like that of a drunken man than a fashionably calculated stroll down the catwalk. For example, see this video by an MIT professor showing seven willy-nilly paths from a single point.
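If you cannot view the video, a few lines of code mimic the idea; this is my own minimal sketch of a simple one-dimensional walk, not the professor’s demonstration.

```python
# Several willy-nilly random-walk paths wandering from a single point.
import random

def random_walk(steps):
    path, position = [0], 0
    for _ in range(steps):
        position += random.choice((-1, 1))  # drunkard's step left or right
        path.append(position)
    return path

random.seed(1)
for i in range(7):  # seven paths, all starting from the origin
    print(f"path {i + 1}: ends at {random_walk(100)[-1]:+d}")
```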
Anyways, I am wandering all over the place with this blog. Mainly I wanted to draw your attention to the Monte Carlo method for forecasting. I used this for my MBA thesis in 1980, burning up many minutes of very expensive mainframe computer time in the late ’70s. What got me going on this whole Monte Carlo meander is this article from yesterday’s Wall Street Journal. Check out how the European models beat the American ones in predicting the path of Hurricane Sandy. Evidently the Euros are on to something, as detailed in this Scientific American report from the end of last year’s hurricane season.
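My thesis code is long gone, but here is a sketch of the Monte Carlo idea with entirely made-up numbers: run a simple profit model through thousands of random input scenarios and read the uncertainty off the spread of outcomes.

```python
# Monte Carlo forecasting in miniature: propagate uncertain inputs
# through a model many times; the outcome distribution is the forecast.
# All distributions and numbers below are invented for illustration.
import random

random.seed(2016)
trials = 10_000
outcomes = []
for _ in range(trials):
    demand = random.gauss(1000, 150)   # uncertain unit sales
    price = random.uniform(9.0, 11.0)  # uncertain selling price
    cost = random.gauss(7.0, 0.5)      # uncertain unit cost
    outcomes.append(demand * (price - cost))

outcomes.sort()
print(f"median profit: {outcomes[trials // 2]:,.0f}")
print(f"90% interval: {outcomes[int(0.05 * trials)]:,.0f} "
      f"to {outcomes[int(0.95 * trials)]:,.0f}")
```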
I have a random thought for improving the American models—ask Cindy Crawford. She graduated as valedictorian of her high school in Illinois and earned a scholarship for chemical engineering at Northwestern University. Cindy has all the talents to create a convergence of fashion and statistical models. That would be really sweet.
Studies on the intelligence of cats versus dogs and their owners
Posted by mark in pop, Uncategorized on July 20, 2016
It is a demonstrable fact that dogs know calculus, as reported here by The Mathematical Association of America. On the other hand, everyone knows that cats, while obviously intelligent, are too lazy to learn tricks the way dogs do (at least until the dogs get too old). Therefore, for these two reasons, dogs must be smarter than cats by my reckoning.
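The calculus in question: a dog fetching a ball thrown into the water must decide where to stop running along the beach and start swimming so as to minimize total travel time. Here is a quick check with made-up speeds and distances (the MAA article’s dog supplied its own); brute force agrees with the calculus answer.

```python
# A dog runs fast on sand and swims slowly, so it should run most of the
# way down the beach, then cut into the water a little before the point
# opposite the ball. Speeds and distances below are invented.
from math import hypot, sqrt

run, swim = 6.0, 0.9    # m/s on sand, m/s in water
along, out = 20.0, 10.0  # ball: 20 m down the beach, 10 m offshore

def fetch_time(x):
    """Run (along - x) m on the beach, then swim the hypotenuse."""
    return (along - x) / run + hypot(x, out) / swim

best_x = min((x / 100 for x in range(0, 2001)), key=fetch_time)
optimal = out / sqrt((run / swim) ** 2 - 1)  # closed-form calculus answer
print(f"brute force: dive in {best_x:.2f} m before the point opposite the ball")
print(f"calculus:    dive in {optimal:.2f} m before the point opposite the ball")
```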
But now comes news that felines fathom physics, or at least they naturally grasp the principles of gravity. This conclusion comes from an ingenious experiment on thirty cats done by Japanese researchers. The creatures were found to be inordinately curious about magnetic balls that did not fall out of an overturned metal container. For more details, see this recap by phys.org.
Then to make matters worse for dog lovers like myself, a recent study by a Wisconsin researcher indicates that cat owners are smarter than dog owners. Read it here in Psychology Today and whimper. If it’s any consolation, the study shows that dog people are less neurotic.
“The greatest pleasure of a dog is that you may make a fool of yourself with him, and not only will he not scold you, but he will make a fool of himself, too.”
― Samuel Butler
Hold on a second—the lords of time elect to extend the year of 2016
The controllers of clocks at the International Earth Rotation and Reference Systems Service (IERS) decided recently that 2016 ought to leap an extra second to stay in sync with Earth’s rotation. This will create a great deal of consternation for computers, so IERS is giving six months’ notice for IT people to prepare. Despite that lead time, about 10 percent of networks around the world are expected to fail, e.g., the worldwide airline booking system that went down for several hours in 2012 when its computers’ internal clocks could not reconcile the discrepancy with outside systems. (I suggest you stock up on water, foodstuffs and toilet paper.)
Here are some stats I gleaned from reports on this astronomical happening by New Scientist and National Geographic:
- Clocks will read 23:59:60 on the 31st of December (I am doubtful this will work on my timepieces)
- 86,400 seconds tick off every day on the master atomic clock for Coordinated Universal Time (UTC); however, the push and pull of the Moon causes the Earth’s massively heavy oceans to slosh around, which decelerates the spin by 1.5 to 2 milliseconds every 24 hours, on average. (A rough tally of how fast that lag adds up appears just after this list.)
- Gauging by sightlines from far-off galaxies, IERS monitors changes to Earth’s spin. When the accumulated difference exceeds 0.9 seconds, plus or minus, they mandate a one-second adjustment.
- In 1972, when the adjustments began, the world got 10 extra seconds to make up for lost time. Since then 16 more seconds have been added, the last one on June 30, 2015. IERS has never removed a second. (If you are a rocket scientist, please compute how long it will be until the Earth stops, and let me know so I have plenty of time to pack up my things.)
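As promised above, here is the back-of-envelope arithmetic (mine, not IERS’s) showing why a leap second comes along roughly every year or two: a lag of 1.5 to 2 milliseconds per day piles up to the 0.9-second trigger point in short order.

```python
# If each day runs 1.5 to 2 ms longer than 86,400 SI seconds, the drift
# against atomic time reaches IERS's 0.9 s threshold within a couple years.
for lag_ms in (1.5, 2.0):
    drift_per_year = lag_ms / 1000 * 365.25  # seconds of drift per year
    years_to_trigger = 0.9 / drift_per_year  # time to hit the 0.9 s threshold
    print(f"{lag_ms} ms/day lag -> {drift_per_year:.2f} s/year drift; "
          f"hits 0.9 s in about {years_to_trigger:.1f} years")
```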
“Since antiquity the Earth’s rotation has provided us with our timescale – it is the Earth’s rotation that gives us our most basic unit of time, the solar day.”
— Rory McEvoy, Curator of Horology, Royal Observatory Greenwich
Stand up now and then while working at your desk and do half again more than sitting the whole time
Posted by mark in pop, Uncategorized on June 19, 2016
According to this Wall Street Journal report, call-center workers given “stand-capable” desks were 46% more productive than their peers who remained sitting. This astounding improvement in output is attributed to the benefits of moving around.
“We hope this work will show companies that although there might be some costs involved in providing stand-capable workstations, increased employee productivity over time will more than offset these initial expenses.”
— Mark Benden, Director of the Texas A&M Ergonomics Center
P.S. It seems to me, from what I gather off the internet, that sitting or standing all day at work may not be as healthy as varying positions; e.g., see this essay from the FiveThirtyEight blog.