New study reveals the benefits of being a bad wrapper

My wife Karen pulled a good trick on me this weekend. While surrounded by gifts for our very large family (now including 9 grandchildren), she asked, “Would you like to help me with wrapping?” Ha ha. At the beginning of our nearly half-century together, I taped comics from the Sunday paper around my gifts to her—pathetic but colorful. When Hallmark introduced handled gift bags in 1987, my presentation improved greatly, with little effort required other than finding a suitable size and decor—thank goodness.

The ribbons! The wrappings! The tags! And the tinsel! The trimmings! The trappings!

Dr. Seuss

However, I may need to give wrapping another go after seeing this scientific study on gift wrapping that reveals why presents should look messy. It turns out that family and friends significantly prefer sloppily wrapped gifts to neatly trimmed ones. So it seems that my lack of talent for anything crafty is a strong point. Ha ha. (Last laugh.)

Cheers!


Genius dog challenges boggle the mind

Due to the social distancing necessitated by the current pandemic, dogs have become more valuable than ever for their devoted companionship. I have enjoyed owning a number of dogs throughout my life—observing a remarkable range in their intelligence, even within the same breed. However, I doubt that even the smartest of my canines ever came close to the six genius dogs now competing in this world championship in Budapest. They qualified by knowing the names of at least 10 objects. The winner, to be crowned later this month, will need to identify 12 toys with only a week of training.

This feat of memory and recognition seemed impressive given my meager experience with dogs (and limited talent for training them). However, it turns out that a Border Collie named Chaser, who passed away a year ago, knew over 1000 nouns. Incredible! In 2018, she and her owner and trainer, John Pilley, were put to a randomized test (with a surprising twist!) for PBS Nova by Neil deGrasse Tyson. Watch the 6-minute video: You will be amazed.


Raisin Bran sun wearing sunglasses and other shady “alternate memories”

Your mind plays many frustrating tricks. For example, as I detailed in How to arrest what’s-his-name’s [Ebbinghaus] forgetting curve, the brain purges valuable information far too quickly. A fellow statistical trainer recently refreshed my memory of the forgetting curve—citing this study that replicated the original experimental results from Ebbinghaus.

Coincidentally, I watched Friday’s episode of the quirky new “How to With John Wilson” HBO show*, which featured widely shared alternate (false!) memories such as:

  • The Raisin Bran sun wearing sunglasses
  • Stouffer’s Stove Top Stuffing mix
  • Mandela dying in prison.

The last common misconception spawned a growing belief in what became known as the “Mandela Effect”. Check out this list with hundreds of other alternate memories and see if some resonate with your recollection. If so, you may be living in an alternate universe!

The “Mandela Effect” really plays tricks on your mind, conjuring memories that never happened but seem as if they did. However, it may not be evidence of a multiverse, but rather of more mundane mental mistakes, explained here by Healthline.

Never mind the Mandela Effect; the memory lapse that works for me is the forgetting curve—may it do its magic on the year 2020.

“Forgetfulness is a form of freedom.”

― Kahlil Gibran

* Reviewed highly here by Vulture.


Moving averages creating coronavirus confusion

The statistics being reported on Covid-19 keep pouring in—far too much information by my reckoning. Per the nation’s top infectious disease expert, Dr. Anthony Fauci, I focus on positivity rates (the fraction of tests coming back positive) as a predictor of the ups and downs of the coronavirus. However, the calculation of even this one statistic causes a great deal of controversy, especially in times like now with rising cases of Covid-19.

For example, as reported by The Las Vegas Review-Journal last week, positivity rates for Nevada now vary by an astounding five-fold range depending on the source of the statistics (sources may differ, for instance, in whether the denominator counts people tested or total tests). It doesn’t help that the State went from 7-day to 14-day moving averages, thus damping an upsurge.

“We’re trying to get that trend to be as smooth as possible, so that an end user can look at it and really follow that line and understand what’s happening.”

State of Nevada Chief Biostatistician Kyra Morgan, Nevada changed how it measures COVID’s impact. Here’s why., The Las Vegas Review-Journal, 10/22/20

My preference is 7 days over 14 days, but, in any case, I would always like to see the raw data graphed along with the smoothed curves. The Georgia Rural Health Innovation Center provided an enlightening primer on moving averages this summer just as that state’s Covid-19 cases spiked. Notice how the 7-day averaging takes out most of the noise in the data. The 14-day approach goes a bit too far in my opinion—blunting the spike at the end.
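To see the blunting for yourself, here is a minimal sketch in Python (pandas assumed; the daily counts are made up for illustration, with an upsurge at the end):

```python
import pandas as pd

# Hypothetical daily case counts with an upsurge in the final week
cases = pd.Series([50, 62, 48, 55, 70, 58, 65, 72, 80, 95,
                   88, 110, 125, 140, 160, 185, 210, 240])

ma7 = cases.rolling(window=7).mean()    # 7-day trailing moving average
ma14 = cases.rolling(window=14).mean()  # 14-day trailing moving average

summary = pd.DataFrame({"raw": cases, "7-day": ma7, "14-day": ma14})
print(summary.tail(5))  # the 14-day line lags well behind the raw upsurge
```

Graph all three series together, as the Georgia primer does, and the damping effect of the wider window jumps right out.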

I advise that you pay attention to the nuances behind Covid-19 statistics, in particular the moving averages and how they get shifted from time to time.

P.S. My favorite method for smoothing is the exponentially weighted moving average (EWMA). See it explained at this NIST Engineering Statistics Handbook post. It is quite easy to generate with a simple spreadsheet. With a smoothing constant of 0.2 (my preference) you get an averaging similar to a moving average of 5 periods, but far more responsive to the most recent results.
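For those who would rather code than spreadsheet, here is a minimal sketch of the EWMA recursion in Python (the case counts are again made up; pandas’ ewm with adjust=False reproduces the same hand calculation):

```python
import pandas as pd

cases = pd.Series([50, 62, 48, 55, 70, 58, 65, 72, 80, 95])
lam = 0.2  # smoothing constant

# Spreadsheet-style recursion: s[t] = lam*x[t] + (1 - lam)*s[t-1]
smoothed = [float(cases.iloc[0])]  # seed with the first observation
for x in cases.iloc[1:]:
    smoothed.append(lam * x + (1 - lam) * smoothed[-1])

# pandas gives the identical recursion in one line
ewma = cases.ewm(alpha=lam, adjust=False).mean()
print([round(s, 1) for s in smoothed])
print(ewma.round(1).tolist())
```

The heavier weight on the latest observation is what makes the EWMA track a turning point faster than a plain 5-period moving average of similar smoothness.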


Twenty declared plenty: How slow will we go?

As reported earlier this week by the Center of the American Experiment, motorists face a new 20 mph speed limit in Minneapolis and St. Paul. City authorities figure on a significant reduction in neighborhood traffic fatalities, based on the statistic that a person hit at 35 mph is three times as likely to die as someone hit at 25 mph (they are reducing limits another 5 mph to 20 mph out of an abundance of caution, presumably). Prior to a new law that came into effect a year ago, the Minnesota Department of Transportation (MnDOT) set speed limits based on engineering and traffic studies. But now cities need not involve MnDOT when setting traffic laws for residential streets.*

The lowering of speed limits in the Twin Cities follows a trend in USA metro areas from coast to coast as evidenced by this Seattle Department of Transportation post last December (check out the animated graphic showing how a person’s chance of surviving being hit by a car decreases drastically with faster speeds).

My thoughts:

  • If the 20 mph limit on residential streets were actually enforced, that would be a relief for those with young children at home (grandchildren in my case). However, I doubt this will happen, especially with cutbacks in police after the troubles in Minneapolis earlier this year. The lower limits will only work with plentiful speed bumps (more appropriately known as “sleeping policemen” in the UK).
  • Being an engineer, I worry about taking experts on traffic safety out of the loop in favor of politicians making sweeping edicts with no regard for varying factors for individual streets.
  • What are the economic trade-offs of the added time needed to travel at slower speeds versus the increased safety? Is 20 mph optimal? (See the back-of-envelope arithmetic below.)
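On that last question, here is my own quick back-of-envelope calculation (not from any of the cited studies): a residential mile at 25 mph takes 60/25 = 2.4 minutes, while the same mile at 20 mph takes 60/20 = 3 minutes. That is an extra 36 seconds per mile: trivial for a trip of a few blocks, but it adds up across millions of trips, which is exactly what makes the optimality question worth asking.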

“Typically, drivers travel 8 to 10 mph above the posted speed limit with a perception that the posted speed limit is a minimum, not a maximum [and] when the posted speed limit is reduced, drivers do not obey the new limit or even pay attention to it unless there is significant enforcement.”

Research Brief: Review of Current Practices for Setting Posted Speed Limits, April 2019, AAA Foundation for Traffic Safety.

One thing is for sure: I find it excruciating to drive at 20 mph for any distance. That seems too slow to me.

*Focus on New Laws: Cities Authorized to Set Certain Speed Limits, July 22, 2019, League of Minnesota Cities.


Statisticians earn residuals by airing errors

A new book by David S. Salsburg provides a series of Cautionary Tales in Designed Experiments. Salsburg wrote the classic The Lady Tasting Tea, which I read with great delight. I passed along the titular story (quite amazing!) in a book review (article #4) for the July 2004 DOE FAQ Alert.

Salsburg’s cautionary tales offer a quick read with minimal mathematics on what can go wrong with poorly designed or badly managed experiments—mainly medical. I especially liked his story of the Lanarkshire Milk Experiment of 1930, which attempted to test whether pasteurization removed all the “good” from milk. Another funny bit from Salsburg, also related in The Lady Tasting Tea and passed along by me in my review, stems from his time doing clinical research at Pfizer, when a manager complained about him making too many “errors”. He changed this statistical term to “residuals” to make everyone happy.

With all the controversy now about clinical trials of Covid-19 vaccines and the associated politics, Cautionary Tales in Designed Experiments offers a welcome look, with a light touch, at how far science has progressed in its experimental protocols over the past century.

“It is the well-designed randomized experiment that provides the final ‘proof’ of the finding. The terminology often differs from field to field. Atomic physicists look for “six sigma” deviations, structure-activity chemists look for a high percentage of variance accounted for, and medical scientists describe the “specificity” and “sensitivity” of measurements. But all of it starts with statistically based design of experiments.”

David S. Salsburg, conclusion to Cautionary Tales in Designed Experiments


Engineer detects “soul crushing” patterns in “A Million Random Digits”

Randomization provides an essential hedge against time-related lurking variables, such as increasing temperature and humidity. It made all the difference in the success of my first designed experiment, run on a high-pressure reactor placed outdoors for safety reasons.

Back then I made use of several methods for randomization:

  • Flipping open a telephone directory and reading off the last four digits of listings
  • Pulling numbers written on pieces of paper out of my hard hat (easiest approach)
  • Using a table of random numbers.
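For comparison, here is a minimal sketch of the modern equivalent (my own illustration in Python, whose standard random module happens to be built on the very Mersenne Twister mentioned in the footnote below):

```python
import random

runs = list(range(1, 17))  # 16 experimental runs in standard order
random.shuffle(runs)       # Fisher-Yates shuffle driven by the Mersenne Twister
print(runs)                # a random run order, e.g. [7, 3, 14, ...]
```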

All of these methods seem quaint with the ubiquity of random-number generators.* However, this past spring, at the height of the pandemic quarantine, a software engineer at Rand named Gary Briggs combated boredom by bearing down on his company’s landmark 1955 compilation, “A Million Random Digits with 100,000 Normal Deviates”.**

“Rand legend has it that a submarine commander used the book to set unpredictable courses to dodge enemy ships.”

Wall Street Journal

As reported here by the Wall Street Journal (9/24/20), Briggs discovered “soul crushing” flaws.

No worries, though: Rand promises to remedy the mistakes in their online edition of the book—worth a look if only for the enlightening foreword.

* Design-Expert® software generates random run orders via code based on the Mersenne Twister. For a view of leading-edge technology, see the report last week (9/21/20) by HPC Wire on IBM, CQC Enable Cloud-based Quantum Random Number Generation.

**For a few good laughs, see these Amazon customer reviews.


Magic of multifactor testing revealed by fun physics experiment: Part Three—the details and data

Details on the factors:

  1. Ball type (bought for $3.50 each from Five Below (www.fivebelow.com)):
    • 4 inch, 41 g, hollow, licensed (Marvel Spiderman) playball from Hedstrom (Ashland, OH)
    • 4 inch, 159 g, energy high bounce ball from PPNC (Yorba Linda, CA)
  2. Temperature (equilibrated by storing overnight or longer):
    • Freezer at about -4 F
    • Room at 72 to 76 F with differing levels of humidity
  3. Drop height (released by hand):
    • 3 feet
    • 6 feet
  4. Floor surface:
    • Oak hardwood
    • Rubber, 3/4″ thick, Anti Fatigue Comfort Floor Mat by Sky Mats (www.skymats.com)

Measurement:

Measurements were done with the Android Phyphox app tool “(In)Elastic”. For each run I recorded T1 and H1, the time and (calculated) height of the first bounce. As a check I noted H0, the estimated drop height—already known, since it is specified by the low and high levels of factor C.
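For the curious, the app’s height calculation rests on simple free-fall physics: a ball that spends time t in the air rises to h = g t²/8 (up for t/2, down for t/2). Here is a minimal check in Python against the first row of the data below (my understanding of the method from the Staacks video mentioned in Part One, not official Phyphox documentation):

```python
G = 9.81  # gravitational acceleration, m/s^2

def bounce_height(t_air):
    """Height reached by a ball in the air for t_air seconds: h = g*(t/2)^2 / 2 = g*t^2/8."""
    return G * t_air**2 / 8

# First row of the data table: time 0.618 s, height 46.85 cm
print(round(bounce_height(0.618) * 100, 1))  # 46.8 cm, matching the table within rounding of the time
```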

Data:

| Std # | Run # | A: Ball type | B: Temperature | C: Height (ft) | D: Floor type | Time (s) | Height (cm) |
|---|---|---|---|---|---|---|---|
| 1 | 16 | Hollow | Room | 3 | Wood | 0.618 | 46.85 |
| 2 | 6 | Solid | Room | 3 | Wood | 0.778 | 74.14 |
| 3 | 3 | Hollow | Freezer | 3 | Wood | 0.510 | 31.91 |
| 4 | 12 | Solid | Freezer | 3 | Wood | 0.326 | 13.02 |
| 5 | 8 | Hollow | Room | 6 | Wood | 0.829 | 84.33 |
| 6 | 14 | Solid | Room | 6 | Wood | 1.119 | 153.54 |
| 7 | 1 | Hollow | Freezer | 6 | Wood | 0.677 | 56.17 |
| 8 | 4 | Solid | Freezer | 6 | Wood | 0.481 | 28.34 |
| 9 | 5 | Hollow | Room | 3 | Rubber | 0.598 | 43.92 |
| 10 | 10 | Solid | Room | 3 | Rubber | 0.735 | 66.17 |
| 11 | 2 | Hollow | Freezer | 3 | Rubber | 0.559 | 38.27 |
| 12 | 7 | Solid | Freezer | 3 | Rubber | 0.478 | 28.03 |
| 13 | 15 | Hollow | Room | 6 | Rubber | 0.788 | 76.12 |
| 14 | 11 | Solid | Room | 6 | Rubber | 0.945 | 109.59 |
| 15 | 9 | Hollow | Freezer | 6 | Rubber | 0.719 | 63.43 |
| 16 | 13 | Solid | Freezer | 6 | Rubber | 0.693 | 58.96 |

Observations:

  • Run 7: The first drop produced a result of >2 sec with a height of 494 cm. That is over 16 feet, so obviously something went wrong. My guess is that the mic on my phone had trouble picking up the softer sound of the solid ball and missed a bounce or two. In any case, I redid the bounce.
    • Starting with run 8, I recorded H0 in the comments as a check against bad readings.
  • Run 8: Had to drop the ball 3 times to get a time to register, due to such small, quiet, quick bounces.
    • I could have tried changing the threshold setting provided by the (In)Elastic app.
  • Run 14: Showed up as an outlier for height, so it was re-run. The results came out nearly the same: 1.123 s (vs 1.119 s) and 154.62 cm (vs 153.54 cm). After a square-root transformation, these results fell into line. This makes sense physically: since height is proportional to time squared (h = g t²/8, as noted above), the square root of height is simply a rescaled time.

Suggestions for future:

  • Rather than dropping the balls by eye from a mark on the wall, use a more precise release mechanism for better consistency in drop height
  • Adjust up for 3/4″ loss in height of drop due to thickness of mat
  • Drop multiple times for each run and trim off outliers before averaging (or use median result)
  • Record room temp to nearest degree


Magic of multifactor testing revealed by fun physics experiment: Part Two—the amazing results

The 2020 pandemic provided a perfect opportunity to spend time doing my favorite thing: Experimenting!

Read Part One of this three-part blog to learn what inspired me to investigate the impact of the following four factors on the bounciness of elastic spheroids:

  A. Ball type: Hollow or Solid

  B. Temperature: Room vs Freezer

  C. Drop height: 3 vs 6 feet

  D. Floor surface: Hardwood vs Rubber

Design-Expert® software (DX) provides the astonishing result: Neither the type of ball (factor A) nor the differing surfaces (factor D) produced significant main effects on first-bounce time (directly related to height per physics). I will now explain.

Let’s begin with the Pareto Chart of effects on bounce time (scaled to t-values).

First observe the main effects of A (ball type) and D (floor surface) falling far below the t-Value Limit: They are insignificant (p>>0.05). Weird!

Next, skipping by the main effect of factor B (temperature) for now (I will get back to that shortly), notice that C—the drop height—towers high above the more conservative Bonferroni Limit: The main effect of drop height is very significant. The orange shading indicates that increasing drop height creates a positive effect—it increases the bounce time. This makes perfect sense based on physics (and common knowledge).

Now look at the multi-view Model Graphs for all four main effects.

The plot at the lower left shows how the bounce time increased with height. The least-significant-difference ‘dumbbells’ at either end do not overlap. Therefore, the increase is significant (p<0.05). The slope quantifies the effect—very useful for engineering purposes.

However, as DX makes clear by its warnings, the other three main effects, A, B and D, must be approached with great caution because they interact with each other. The AB and BD interactions will tell the true story of the complex relationship of ball type (A), their temperature (B) and the floor material (D).

See in the interaction plot how the effect of ball type depends on the temperature. At room temperature (the top red line), going from the hollow to the solid ball produces a significant increase in bounce time. However, after being frozen, the balls behaved in completely opposite fashion—hollow beating solid (bottom green line). These opposing effects caused the main effect of ball type (factor A) to cancel!

Incredibly (I’ve never seen anything like this!), the same thing happened with the floor surface: The main effect of floor type got washed out by the opposite effects caused by changing temperature from room (ambient) to that in the freezer (below 0 degrees F).
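If you want to verify the cancellation for yourself, it falls straight out of the raw bounce times tabulated in Part Three (a minimal sketch in plain Python; the numbers come from that data table):

```python
# Bounce times (s) grouped by (ball type, temperature), from the Part Three table
times = {
    ("Hollow", "Room"):    [0.618, 0.829, 0.598, 0.788],
    ("Solid",  "Room"):    [0.778, 1.119, 0.735, 0.945],
    ("Hollow", "Freezer"): [0.510, 0.677, 0.559, 0.719],
    ("Solid",  "Freezer"): [0.326, 0.481, 0.478, 0.693],
}

def mean(xs):
    return sum(xs) / len(xs)

# Effect of ball type (solid minus hollow) at each temperature
effects = {}
for temp in ("Room", "Freezer"):
    effects[temp] = mean(times[("Solid", temp)]) - mean(times[("Hollow", temp)])
    print(f"{temp}: {effects[temp]:+.3f} s")   # Room: +0.186 s, Freezer: -0.122 s

# Averaging the two nearly-equal-and-opposite effects wipes out the A main effect
print(f"Main effect of A: {0.5 * sum(effects.values()):+.3f} s")  # +0.032 s
```

The two conditional effects are nearly equal and opposite, so averaging over temperature leaves almost nothing for factor A on its own.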

Changing one factor at a time (OFAT) in this elastic spheroid experiment leads to a complete fail. Only by going to the multifactor testing approach of statistical DOE (design of experiments) can researchers reveal breakthrough interactions. Furthermore, by varying factors in parallel, DOE reveals effects far faster than OFAT.

If you still practice old-fashioned scientific methods, give DOE a try. You will surely come out far ahead of your OFAT competitors.

P.S. Details on the elastic-spheroid experiment procedures are laid out in Part 3 of this series.


Magic of multifactor testing revealed by fun physics experiment: Part One—the setup

The behavior of elastic spheres caught my attention due to a proposed, but not completed, experiment on ball bounciness turned in by a student from the South Dakota School of Mines and Technology.* I decided to see for myself what would happen.

To start, I went shopping for suitable elastic spheres. As pictured, I found two toy balls with the same diameter—one of them with an eye-catching Spider-Man graphic.

My grandkids all thought that “Spidey” would bounce higher than the other ball—the one in swirly blue and yellow. Little did they know, just by looking, that “Swirley” was the one with superpowers, being made of exceptionally elastic, solid synthetic rubber. Sadly, Spidey turned out to be a hollow airhead. This became immediately obvious when I dropped the two balls side by side from shoulder height. Spidey rebounded only to my knee while Swirley shot nearly all the way back to the original drop level, which really amazed the children.

My next idea for the bouncy experiment came from Frugal Fun for Boys and Girls, a website that provides many great science projects. Their bouncy ball experiment focuses on the effect of temperature as seen here.

However, I could see one big problem straight away: How can you get an accurate measure of bounce height? That led me to an amazing cell-phone app called Phyphox (Physics Phone Experiments), which provides an ingenious way to calculate how high a ball bounces by listening to it hit the floor.** Watch this short video to see how. (If you are a physicist, stay on for how the narrator of the demo, Sebastian Staacks, worked out all his calculations for the Phyphox (In)Elastic tool.)

The third factor came easily: drop height. To make this obvious but manageable, I chose three versus six feet.

The fourth and final factor occurred to me while washing dishes. We recently purchased a thick rubber mat for easy cleanup and comfortable standing in front of our sink. I realized that this would provide a good contrast to our hardwood floors for bounce height, the softer surface being obviously inferior.

To recap, the four factors and their levels I tested were:

A. Ball type: Hollow or Solid

B. Temperature: Room vs Freezer

C. Drop height: 3 vs 6 feet

D. Floor surface: Hardwood vs Rubber

Using Design-Expert® software (DX), I then laid out a two-level full factorial of 16 runs in random order. To be sure of the temperature being stabilized, I did only one run per day, recording the time of the first bounce and its height (calculated by the Phyphox boffins as detailed in the videos).
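For anyone without DX at hand, the same 16-run layout is easy to reproduce. Here is a minimal sketch in Python (my own illustration, not Design-Expert’s internal method):

```python
import random
from itertools import product

factors = {
    "A: Ball type":   ["Hollow", "Solid"],
    "B: Temperature": ["Room", "Freezer"],
    "C: Drop height": ["3 ft", "6 ft"],
    "D: Floor":       ["Hardwood", "Rubber"],
}

# All 2^4 = 16 factor-level combinations, in standard order...
runs = list(product(*factors.values()))
# ...then shuffled into a random run order to guard against lurking variables
random.shuffle(runs)

for run_number, levels in enumerate(runs, start=1):
    print(run_number, *levels)
```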

When I completed the experiment and analyzed the results using DX, I was astounded to see that neither the type of ball nor the differing surfaces produced significant main effects. That made no sense based on my initial demonstrations on side-by-side bounce for the two balls on the floor versus the rubber mat.

Keeping in mind that my experiment provided a multifactor test of two other variables, perhaps you can guess what happened. I will give you a hint: Factors often interact to produce surprising results, such as time and temperature suddenly coming together to create a fire (or as I would say as a chemical engineer—an “exothermic reaction”).

Stay tuned for Part 2 of this blog on my elastic spheroid experiment to see how the factors interacted in delightful ways that, once laid out, make perfect sense even to non-physicists.

*For background on my class and an impressive list of home experiments, see “DOE It Yourself” hits the spot for distance-learning projects.

**I credit Rhett Allain of Wired for alerting me to Phyphox via his 8/16/18 post on Three Science Experiments You Can Do With Your Phone. From there he provides a link to a prior, more detailed, post on Modeling a Bouncing Ball.
