Archive for category design of experiments

Crater Experiment Makes a Big Impact

Craters are crazy and cool.  One that is quite amazing was created by the Barringer Meteorite, which crashed into Arizona about 50,000 years ago with an explosion equal to 2.5 megatons of TNT.  Based on this detailing of what a 2 MT bomb would do, I figure that Barringer would have completely wiped out my home town of Stillwater, Minnesota, and its 20,000 or so residents, plus far more beyond us.  The picture my son Hank took of the 1-mile-wide, 570-foot-deep crater does not do justice to its scale.  You really need to go see Meteor Crater for yourself, as the two of us did.

Because of my enthusiasm for craters, making them rates high on my list of fun science projects in DOE It Yourself.  As noted there, members of the Salt Lake Astronomical Society wanted to drop bowling balls from very high altitudes onto the salt flats of Utah, but workers in the target area from the U.S. Bureau of Land Management objected to the experiment.

Kudos to science educator Andrew Temme for leading students through a far more manageable experiment, shown in this video.  When I asked permission to link to his fantastic impact movies, Andrew gave me this heads-up:  “I attended a NASA workshop to get certified to handle real moon rocks and meteorites at the NJ State Museum in Trenton.  This lab in the educator guide suggested mixing up your own lunar powder and throwing objects to simulate impact craters.  When I got home I ran the lab with a few of my classes and then made the video.  I used a Sony handheld camera that had a slow motion setting (300 fps).”  Awesome!

The other day I went up to the 9th floor of my condo building in Florida and tossed a football down onto the parking lot.  I am warming up to heaving a 15-pound mushroom anchor over on the beach side from atop one of the far pricier high rises along the Gulf.  However, I have to wait until the turtle nesting season is over.


No Comments

A helpful hierarchy for statistical analysis spells out how deep to drill on the statistics

Fred Dombrose, a force for the use of statistical design of experiments in biomedical research, alerted me to an enlightening article on statistics asking “What is the question?” in the March 20 issue of Science magazine.  It lays out these six types of data analyses identified by biostatistician Jeffrey Leek:

  • Descriptive
  • Exploratory
  • Inferential
  • Predictive
  • Causal
  • Mechanistic

For the distinguishing details going up this ladder see this Data Scientist Insight blog.  However, the easiest way to determine where your study ranks is via the flowchart provided in the Science publication.  There you also see four common mistakes that stem from trying to get too much information from too little data.
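To make the bottom rungs of this ladder concrete, here is a toy illustration in Python (the yield numbers are made up for the example, not taken from the article): a descriptive summary of the data in hand versus an inferential statement about the underlying process mean.

```python
# Toy example: descriptive vs. inferential analysis on hypothetical yield data.
import numpy as np
from scipy import stats

yields = np.array([78.2, 81.5, 79.9, 83.1, 80.4, 82.7])  # hypothetical process yields (%)

# Descriptive: summarize only the data collected.
print(f"mean = {yields.mean():.1f}, std dev = {yields.std(ddof=1):.2f}")

# Inferential: a 95% confidence interval for the long-run mean yield.
ci = stats.t.interval(0.95, df=len(yields) - 1,
                      loc=yields.mean(), scale=stats.sem(yields))
print(f"95% CI for the mean: ({ci[0]:.1f}, {ci[1]:.1f})")
```

Climbing further up the ladder—predictive, causal, mechanistic—demands progressively stronger designs and more data, which is exactly the point of the flowchart.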

“Poor design of experiments is one of, if not the most, common reason for an experiment to fail.”

– Jeff Leek, “Great scientist – statistics = lots of failed experiments”, simplystats blog of 4/12/13

No Comments

Design of experiments (DOE) most important for optimizing products, processes and analytical technologies

According to this February 2014 Special Report on Enabling Technologies, two-thirds of BioProcess readers say that DOE makes the most impact on their analytical work.

 “The promise of effective DOE is that the route of product and process development will speed up through more cost-effective experimentation, product improvement, and process optimization. Your ‘batting average’ will increase, and you will develop a competitive advantage in the process.”

–Ronald Snee

No Comments

A lot to love in new release of software–v9 is mighty fine!

Here’s a shout-out for Valentine’s Day that there’s a lot to love in the new release of Stat-Ease software—see the major improvements here http://www.statease.com/dx9.html#description.  Consultant Wayne Adams put v9 to fun use by developing equations that produced the 3D response surface renderings of the heart and the number 9.  Geeks rule!  (No offense, Wayne, I am one.)
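For the curious, here is a minimal Python sketch of the same idea—rendering a heart from a closed-form equation as a 3D surface.  These are not Wayne’s actual equations; the code just uses a commonly cited implicit heart curve as a stand-in response.

```python
# Minimal sketch: a heart-shaped "response surface" from a closed-form equation.
import numpy as np
import matplotlib.pyplot as plt

# Implicit 2D heart curve: (x^2 + y^2 - 1)^3 - x^2*y^3 = 0.
# Use its negative as the response so the heart interior rises above the plane.
x = np.linspace(-1.5, 1.5, 400)
y = np.linspace(-1.5, 1.5, 400)
X, Y = np.meshgrid(x, y)
R = -((X**2 + Y**2 - 1)**3 - X**2 * Y**3)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(X, Y, np.clip(R, 0, None), cmap="Reds", linewidth=0)
ax.set_xlabel("Factor A")
ax.set_ylabel("Factor B")
ax.set_zlabel("Response")
plt.show()
```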

1 Comment

Must we randomize our experiment?

In the early 1990s I spoke at an applied statistics conference attended by DOE gurus George Box and Stu Hunter.  This was a time when Taguchi methods had taken hold; engineers liked them because the designs eschewed randomization in favor of ordering by convenience, with the hardest-to-control factors changed only once during the experiment.  I might have fallen for this as well, but in my early days in R&D I worked on a high-pressure hydrogenation unit that, due to risks of catastrophic explosion, had to be operated outdoors and well away from any other employees.  (Being only a summer engineer, I was, it seemed, disposable.)  Naturally the ambient conditions varied quite dramatically at times, particularly in the fall season when I was under pressure (ha ha) to wrap up my project.  Randomization of my experiment designs provided me insurance against the time-related lurking variables of temperature, humidity and wind.

I was trained to make runs at random and never questioned the importance of doing so.  Thus I was really surprised when Taguchi disciples attending my talk picked on me for bothering to randomize.  But, thank goodness, Box had already addressed this in his 1989 report “Must We Randomize Our Experiment?”  He advised that experimenters:

  1. Always randomize in those cases where it creates little inconvenience.
  2. When it is impractical to subject an experiment to randomization,
    • and you can safely assume your process is stable, that is, any chance variations will be small compared to factor effects, then run it in whatever non-random order you can;
    • but, if due to process variation, the results would be “useless and misleading” without randomization, abandon it and first work on stabilizing the process;
    • or consider a split-plot design.

I am happy to say that Stat-Ease, with the release of version 9 of its DOE programs, now provides the tool for what Box deems the compromise between randomizing and not, that is, split plots.  For now it is geared to factorial designs, but that covers a lot of ground for dealing with hard-to-change factors such as oven temperature in a baking experiment.*  Details on v9 Design-Expert® software can be found here http://www.statease.com/dx9.html along with a link to a 45-day free trial.  Check it out!

*For a case study on a split-plot experiment that can be easily designed, assessed for power and readily analyzed with the newest version of Stat-Ease software, see this report by Bisgaard et al. (colleagues of Box).
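To make the distinction concrete, here is a minimal sketch (hypothetical baking factors, plain Python rather than Design-Expert output) contrasting a fully randomized run order with a split-plot order in which the hard-to-change oven temperature is reset only once per whole plot.

```python
# Sketch: fully randomized run order vs. a split-plot run order
# for one hard-to-change factor (oven temperature) and two easy-to-change factors.
import itertools
import random

random.seed(1)  # for a reproducible illustration

temps = [350, 400]            # hard-to-change: oven temperature (deg F)
times = [20, 30]              # easy-to-change: bake time (min)
flours = ["A", "B"]           # easy-to-change: flour type

runs = list(itertools.product(temps, times, flours))

# Fully randomized: the oven temperature may change between every consecutive run.
randomized = random.sample(runs, len(runs))

# Split-plot: randomize the whole plots (temperatures), then randomize the
# easy-to-change combinations within each whole plot.
split_plot = []
for temp in random.sample(temps, len(temps)):
    subplots = [r for r in runs if r[0] == temp]
    split_plot.extend(random.sample(subplots, len(subplots)))

print("Fully randomized:", randomized)
print("Split-plot      :", split_plot)
```

The convenience of the split-plot order comes at a price: the analysis must account for the restricted randomization, with separate whole-plot and subplot error terms, which is precisely what the new software tools are built to handle.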

No Comments

Educational system turned upside down by distance-based learning

I’ve been watching with interest the trend for ‘flipping’ classrooms; that is, using time together for working on homework and leaving the teaching to web-based and other materials (books, still!) for students to teach themselves on their own time.  At the college level this new educational approach is gaining momentum via massive open online courses, called MOOCs.

For example, University of Minnesota chemistry professor Chris Cramer will teach this 9-week MOOC on Statistical Molecular Thermodynamics starting next month.  Follow the link and watch him demonstrate a thermite reaction.  If anyone can make statistical molecular thermodynamics interesting, it will be him, I think, so I enrolled.  It’s free, thus there’s nothing to lose.  Also, I still feel guilty about getting an A in the stat thermo class I took 30 years ago—it was graded on a curve, so my abysmal final score of 15 out of 100 rated highly as the second best in my class.  As you can infer, it was not taught very well!

P.S. I recently unveiled a distance-based lecture series on design of experiments called the DOE Launch Pad.  It augments my book (co-authored by Pat Whitcomb) on DOE Simplified.  Contact me at mark @ statease .com to sign up.  It’s free for now while in pilot stage.

No Comments

George Box–a giant in the field of industrial design of experiments (DOE)

George Box passed away this week at 94.  Having a rare combination of numerical and communication skills along with an abundance of common sense, this fellow made incredible contributions to the cause of industrial experimenters.  For more about George, see this wonderful tribute by John Hunter.

My memorable stories about Box both relate to his way with words that cut directly to a point:

  • In 1989 at the Annual Quality Congress in Toronto, seeing him open his debate with competing guru Genichi Taguchi by throwing two words on an overhead projector—“Obscurity” and “Profundity”—and then, after a dramatic pause, adding the not-equal sign between them.  This caused Taguchi’s son Shin to leap up from the front row to defend his father, and drew from the largest crowd I have ever seen at a technical conference the kind of collective gasp that one only rarely experiences.
  • In 1996 at a DOE workshop in Madison, Wisconsin enjoying his comeback to a very irritating disciple of Taguchi who kept interrupting the lecture: “If you are going to do something, you may as well do it right.”

Lest this give the impression that Box was mean-spirited, see this well-reasoned white paper that provides a fair balance of praise and criticism of Taguchi, who created a huge push forward for the cause of planned experimentation for quality improvement.

The body of work by George Box in his field is monumental.  It provides the foundation for all that we do at Stat-Ease.  Thank you George, may you rest in peace!

No Comments

Random thoughts

The latest issue of Wired magazine provides a great heads-up on random numbers by Jonathan Keats.  Scrambling the order of runs is a key to good design of experiments (DOE)—this counteracts the influence of lurking variables, such as changing ambient conditions.

Designing an experiment is like gambling with the devil: only a random strategy can defeat all his betting systems.

— R.A. Fisher

Along those lines, I watched with interest when weather forecasts put Tampa at the bulls-eye of the projected track for Hurricane Isaac.  My perverse thought was that this might be the best place to be, at least early on when the cone of uncertainty is widest.

In any case, one does best by expecting the unexpected.  That gets me back to the topic of randomization, which turns out to be surprisingly hard to do, considering the natural capriciousness of weather and life in general.  When I first got going on DOE, I pulled numbered slips of paper out of my hard hat.  Then a statistician suggested I go to a phone book and cull the last 4 digits of numbers from whatever page opened up haphazardly.  Later I graduated to a table of random numbers (an oxymoron?).  Nowadays I let my DOE software lay out the run order.

Check out how Conjuring Truly Random Numbers Just Got Easier, including the background by Keats on pioneering work in this field by British (1927) and American (1947) statisticians.  Now the Australians have leap-frogged (kangarooed?) everyone, evidently, with a method that produces 5.7 billion “truly random” (how do they know?) values per second.  Rad mon!
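As a small aside, here is how the distinction plays out in code: a seeded pseudo-random generator (perfectly adequate for laying out run orders, and reproducible to boot) versus values drawn from the operating system’s entropy pool, a humble cousin of the “truly random” hardware generators described in the article.  The run numbers below are hypothetical.

```python
# Sketch: pseudo-random (seeded, reproducible) vs. OS-entropy-based run ordering.
import random
import secrets

runs = list(range(1, 9))  # eight hypothetical run numbers

prng = random.Random(2012)            # deterministic once seeded
pseudo_order = prng.sample(runs, len(runs))

sys_rng = secrets.SystemRandom()      # draws from OS entropy (e.g., /dev/urandom)
entropy_order = sys_rng.sample(runs, len(runs))

print("Pseudo-random order :", pseudo_order)
print("System-entropy order:", entropy_order)
```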


No Comments

Strategy of experimentation: Break it into a series of smaller stages

Tia Ghose of The Scientist provides a thought-provoking “Q&A” with biostatistician Peter Bacchetti on “Why small is beautiful” in her June 15th column seen here.  Peter’s message is that you can learn from a small study even though it may not provide the holy grail of at least 80 percent power.*  The rule-of-thumb I worked from as a process development engineer is not to put more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).  Furthermore, a good strategy of experimentation is to proceed in three stages:

  • Screening the vital few factors (typically 20%) from the trivial many (80%)
  • Characterizing main effects and interactions
  • Optimizing (typically via response surface methods).

For a great overview of this “SCO” path for successful design of experiments (DOE) see this detailing on “Implementing Quality by Design” by Ronald D. Snee in Pharm Pro Magazine, March 23, 2010.

Of course, at the very end, one must not overlook one last step: confirmation and/or verification.

* I am loath to abandon the 80% power “rule”**; rather, increase the size of effect that you screen for in the first stage, that is, do not use too fine a mesh.

** For a primer on power in the context of industrial experimentation via two-level factorial design, see these webinar slides posted by Stat-Ease.
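For readers who want numbers to go with those slides, here is a rough Python sketch of the underlying idea (a t-test approximation, not the exact Stat-Ease calculation): in a balanced two-level factorial with N runs, the power to detect an effect Δ against noise σ is roughly that of a two-sample t-test with N/2 runs at each factor level.  The N, Δ and σ values below are hypothetical.

```python
# Rough power approximation for one effect in a balanced two-level factorial.
from statsmodels.stats.power import TTestIndPower

N = 16          # total runs in the factorial (hypothetical)
delta = 2.0     # smallest effect worth detecting, in response units (hypothetical)
sigma = 1.5     # anticipated standard deviation of the response (hypothetical)

power = TTestIndPower().power(effect_size=delta / sigma,
                              nobs1=N / 2, ratio=1.0, alpha=0.05)
print(f"Approximate power to detect the effect: {power:.2f}")
```

If the power comes up short, the footnoted advice applies: screen for a larger effect (coarser mesh) in the first stage rather than abandoning the 80% target.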

 


No Comments

Video of paper-helicopter fly-offs at South Dakota School of Mines & Technology

Stat-Ease Consultant Brooks Henderson produced this video — it’s quite impressive!

For background on the paper helicopter experiment, see this previous StatsMadeEasy post.

No Comments