Archive for category Uncategorized
Heads up: A great web site for keeping tabs on workday baseball
Posted by mark in Uncategorized on July 12, 2008
My hometown Major League baseball team, the Minnesota Twins, typically play one weekday game every home stand. They do this to entice commuters in the area to knock off early for a ‘meeting’ at the ballpark.
I try to catch every game by some means: in person (I split season tickets with my sister), on the radio, or on television. These matinee games, however, proved too challenging to track while working; even the radio created too much disruption. Then I discovered the internet-based MLB GameCast, which provides live updates and a myriad of stats. It’s accessed via the ESPN Scoreboard during any given baseball game. I leave it open for spot checking while I’m doing work on my computer, mainly to catch up via the helpful status report that’s continually updated. I also get a great charge out of seeing the live updates pitch by pitch. It even shows the directional flight of batted balls!
Whatever loss of time MLB GameCast creates is more than made up for by the stimulation it provides to my afternoon productivity. That’s my hypothesis, and I am sticking to it just as tightly as my man Joe Mauer does to a baseball when an enemy player barrels into home plate!
PS. Another heads-up: I captured the screenshot via the handy Snipping Tool that comes with Windows Vista. If you use this operating system, look for the utility in your Accessories folder. If it’s not there, go to your Start menu, click Programs, and look at the “Turn Windows features on or off” list. Turn on the Tablet PC Optional Components. (The Snipping Tool was originally developed by Microsoft for the Tablet PC.)
Origin of 3.2% alcohol beer – an antidote for those dispirited by the Great Experiment
Posted by mark in Uncategorized on June 21, 2008
“Our country has deliberately undertaken a great social and economic experiment, noble in motive and far-reaching in purpose.” — Herbert Hoover
Seventy-five years ago, legal beer – albeit only 3.2% alcohol – returned to the U.S. and provided a spark of hope for a country in a depression. That is the subtitle of a recent Los Angeles Times story by Maureen Ogle, “The day the beer flowed again.” Beer’s return ended a hugely unsuccessful experiment in temperance that began in 1920 and lasted over a dozen years. I still remember a home-brew recipe from this era, yellowed and curled, that my grandfather had tacked up above his workbench – it was labeled “Bill’s Beer.” I’ll bet it tasted good during a dry decade!
My interest in 3.2 beer was piqued by the author of Land of Amber Waters, Doug Hoverson, who spoke last weekend at a gathering sponsored by our county’s historical society. Before Prohibition* the alcohol level in beer was 2.75%, but on April 7, 1933, it went back on the market at the higher level of 3.2%, which was considered “chemically necessary to make a better beer.” Hoverson said that two U.S. Congressmen experimented to determine how much they could drink before feeling so intoxicated that they could no longer function properly in their work. [Insert your joke here.]
This experiment by Prohibition-busting U.S. lawmakers might have benefited from a more scientific “titration” to develop a dose-response curve, as illustrated by this white paper from the University of British Columbia. I’d always thought of titration as something a chemist did – for example, to precisely determine the pH of a solution. However, my colleague Pat Whitcomb showed me how this concept can be applied in a very sophisticated statistical approach to dose-response curves. He presents it in a new workshop he developed, Designed Experiments for Life Sciences – a great introduction to powerful tools for scientists, engineers, and technical professionals in the pharmaceutical, biomedical technology, and biomedical device fields, as well as for organizations and institutions that devote the majority of their efforts to research, development, technology transfer, or commercialization of life-enhancing products.
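For readers curious what such a dose-response titration might look like, here is a minimal sketch of the standard four-parameter logistic (4PL) model. All the numbers – drinks consumed versus a 0-to-100 “impairment” score, with a half-maximal effect at 4 drinks – are entirely hypothetical, chosen just to show the characteristic S-shaped curve:

```python
def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic (4PL) dose-response model.

    Returns the predicted response at a given dose; ec50 is the dose
    producing a response halfway between bottom and top, and hill
    controls the steepness of the curve.
    """
    if dose <= 0:
        return bottom
    return bottom + (top - bottom) / (1 + (ec50 / dose) ** hill)

# Hypothetical 'impairment' scale from 0 (sober) to 100 (can't work),
# with a half-maximal effect at 4 drinks and a moderate slope.
doses = [1, 2, 4, 8, 16]
responses = [round(four_pl(d, 0, 100, 4, 2), 1) for d in doses]
print(responses)  # rises slowly, steepens near ec50, then levels off
```

In a real designed experiment one would fit the four parameters to observed data; the point here is just that doubling the dose sweeps out the S-curve rather than a straight line.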
* According to the USA’s National Institutes of Health (NIH) account of 3.2% beer, the 18th Amendment outlawed intoxicating liquors but made no reference to alcohol content. However, the Volstead Act, named after a Representative from Minnesota (land of intemperate Scandinavian immigrants), set the legal alcohol limit at one-half of 1 percent. Thus the only “beer” that could be sold legally in the United States during Prohibition (1920-1933) was “near beer” (now known as “low-point beer”) – a “wishy-washy, thin, ill-tasting, discouraging sort of slop that might have been dreamed up by a Puritan Machiavelli with the intent of disgusting drinkers with genuine beer forever,” according to food critic Waverley Root.
Brains for beer
Posted by mark in Uncategorized on June 18, 2008
“Three of the four regressions give a negative coefficient to drinking 13 or more drinks per week. The magnitudes of these signs were greater for freshmen as well. While drinking has a negative affect [sic] on GPA throughout all the students, freshmen’s GPA is hit harder. This indicates that upperclassmen become more efficient in their drinking as well. Upperclassmen learn how and when to drink in moderation. Students may not change the amount they drink as they progress through college, but they do change how they drink.”
Sign for physics students still unclear on the concepts
Posted by mark in Uncategorized on June 13, 2008
While waiting around for my daughter to register for her first semester of college at the University of Wisconsin at Eau Claire, I read this sign in their physics department. It took me a moment to process the directions at the bottom, but then it hit me.
GPA mongers lose out to students willing to take on tougher classes
Posted by mark in Uncategorized on June 1, 2008
Advice from famous physicist Feynman: “You must not fool yourself”
Posted by mark in Uncategorized on May 25, 2008
My bookseller friend Rich emailed recently about a find he made:
>In your physics days, did you ever encounter the famous Feynmann Lectures in book form (three volume set)? Feynmann was a respected renegade, if there IS such a thing. But he was good enough to be appointed to the Challenger’s explosion evaluation. Interestingly, before the SSTs flew, he predicted a 2% failure rate — and he’s been right on.<
That reminded me of one of my favorite quotes from this renowned physicist, which I bring up when discussing the dangers of deleting experimental outliers:
“The first principle is that you must not fool yourself — and you are the easiest person to fool.”
It comes from Feynman’s talk on “Cargo Cult Science.”
PS. While searching the internet on the topic of bias, I came across an interesting website that provides “an outlet for experiments that do not reach the traditional significance levels (p < .05)”! It’s called the Journal of Articles in Support of the Null Hypothesis. The journal’s purpose is to reduce the ‘file drawer problem,’ that is, the tendency for unpublished results to get buried in researchers’ file cabinets.*
*To learn about this propensity to publish only the positive, read this study by Jeffrey D. Scargle of the Space Science Division, NASA Ames Research Center.
Duck named DOE (pronounced "dewie")
Posted by mark in Uncategorized on May 20, 2008
What would Deming say about the demise of testing in education?
Posted by mark in Uncategorized on May 19, 2008
Earlier this month I was listening to a local talk-radio show when a caller reported this “end of the world” development: to be kinder and gentler to students stressed out by math tests, teachers now refer to these as “celebrations of knowledge”!
At the time I heard of this outrageous slackening of educational standards, I was in the midst of reading a book that my buddy Rich sent me titled “The World of W. Edwards Deming.” (If you are a bibliophile like me, check out his eclectic mix of uncommon books offered for sale via eBay.) My entry into the world of quality assurance was catalyzed by the electrifying 1980 NBC documentary “If Japan can… Why can’t we?” featuring Deming and his use of statistical methods.
One of my favorite stories in the book on Deming, which was written by his long-time personal assistant Cecelia S. Kilian, involves another pioneer in the field of statistics, a fellow named Harold Dodge. Deming worked with Dodge during WWII to develop statistical standards on a wartime emergency basis. Over a decade before that, Deming had an internship with Bell Telephone Laboratories – Dodge’s employer. During those days the statisticians working under Dodge played a neat trick on him as he tried to get a feel for the cord length on a newly developed handset: they clipped off a millimeter or two every day. Deming recalls seeing Dodge stoop to an astoundingly uncomfortable level to take a phone call. Evidently it’s not hard to fool an absent-minded statistical genius!
Getting back to Deming’s views on education, I really do wonder how he would feel about the practice of testing as an incentive for students to develop a profound knowledge of their subject. In his book “Out of the Crisis” (1982, MIT) he said “I have seen a teacher hold a hundred fifty students spellbound, teaching what is wrong.” He credits Sir Ronald Fisher as his inspiration for learning statistics, despite being a “poor teacher on every count”! Deming made no secret of his dislike for grading, rating, and testing in industrial settings. Therefore I suppose he would approve of the new, more positive approach of celebrating knowledge, rather than making students take final exams.
“When teachers are forced to teach to the test, students get bored and genuine education ceases, no matter what the test scores may say… The examination as a test of the past is of no value for increased learning ability. Like all external motivators, it can produce a short term effect, but examinations for the purpose of grading the past do not hook a student on learning for life.”
— Myron Tribus (from his essay Quality in Education According to the Teachings of Deming and Feuerstein )
Baseball batting averages throw some curves at statisticians
Posted by mark in Uncategorized on May 5, 2008
“I had many years that I was not so successful as a ballplayer, as it is a game of skill.”
— Casey Stengel (from testimony before United States Senate Anti-Trust and Monopoly Hearing, 1958)
Last week the University of Minnesota School of Statistics sponsored a talk titled “In-season prediction of batting averages: A field test of empirical Bayes and Bayes methodologies,” presented by Lawrence D. Brown of the Statistics Department at the Wharton School of the University of Pennsylvania. My colleague Pat Whitcomb attended and told me about a few findings by Brown that baseball fanatics like me would find a bit surprising:
“The simplest prediction method that uses the first half [of a season’s] batting average …performs worse than simply ignoring individual performances and using the overall mean of batting averages as the predictor for all players.”*
Evidently these professional players perform at such a consistent level that the ones hitting at a higher-than-average rate up to the mid-season break tend to regress back to the mean the rest of the way, and vice versa.
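Brown’s counterintuitive finding is easy to reproduce with a quick simulation. The sketch below assumes hypothetical numbers – 2,000 players, 250 at-bats per half-season, and true skill levels normally spread around a .260 league average – and compares two predictors of each player’s second-half average: his own first-half average versus the crude league-wide mean:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def simulate(players=2000, half_abs=250, true_sd=0.02, league_mean=0.260):
    """Simulate two half-seasons and compare mean squared errors of
    (a) each player's first-half average and (b) the overall mean of
    first-half averages as predictors of the second-half average."""
    first, second = [], []
    for _ in range(players):
        p = random.gauss(league_mean, true_sd)  # player's 'true' skill
        first.append(sum(random.random() < p for _ in range(half_abs)) / half_abs)
        second.append(sum(random.random() < p for _ in range(half_abs)) / half_abs)
    grand = sum(first) / players
    mse_individual = sum((f - s) ** 2 for f, s in zip(first, second)) / players
    mse_grand = sum((grand - s) ** 2 for s in second) / players
    return mse_individual, mse_grand

mse_ind, mse_grand = simulate()
# With true skill spread (sd 0.020) smaller than half-season binomial
# noise (sd about 0.028), the grand-mean predictor wins.
print(mse_ind, mse_grand)
```

The intuition: a half-season average carries so much binomial noise that it is a worse guide to a player’s second half than simply betting every player is average.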
Of course, by looking at many years of past performance, one would gain some predictive powers. For example, in 1978, more than ten years into his Hall of Fame (HOF) career, Rod Carew batted .333 for the Minnesota Twins. He made it to the Major Leagues only a few years ahead of fellow Twin Rick Dempsey, who hit at an average of .259 in 1978. Carew finished up his 19-year playing career with a lifetime batting average (BA) of .328, whereas Dempsey hung on for an astounding 24 years with a BA of only .233! It would not require a sabermetrician to predict over any reasonable time frame a higher BA for a HOF ballplayer like Carew versus a dog (but lovable, durable and reliable defensively at catcher) such as Dempsey.
Brown also verifies this ‘no-brainer’ for baseball fans: “The naïve prediction that uses the first month’s average to predict later performance is especially poor.” Dempsey demonstrated the flip side of this caveat by batting .385 (5 for 13) for the Baltimore Orioles in the 1983 World Series to earn the Most Valuable Player (MVP) award!
Statistical anomalies like this naturally occur due to the nature of such binomial events, where only two outcomes are possible: when a batter comes to the plate, he either gets a hit or he does not (forgoing any credit for a walk or sacrifice). It is very tricky to characterize binomial events when very few occur, such as in any given Series of 4 to 7 games. However, as a rule of thumb the statistical umpires say that if np>10 (for example, over 50 at-bats for a fellow hitting at a rate of 0.200), the normal approximation can be used for the binomial distribution, and the variance of the proportion becomes approximately p(1-p)/n.** From this equation one can see that the bigger the n – that is, the at-bats – the less the fraction (batting average) varies.
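The rule of thumb takes only a few lines to check, using the .200 hitter from the example above (note that the approximation is usually stated as requiring both np and n(1-p) to be comfortably large):

```python
import math

def ba_standard_error(p, n):
    """Normal-approximation standard error of a batting average:
    sqrt(p * (1 - p) / n), reasonable roughly when n*p > 10
    and n*(1 - p) > 10."""
    return math.sqrt(p * (1 - p) / n)

# A .200 hitter over 50 at-bats versus a full 500-at-bat season:
se_short = ba_standard_error(0.200, 50)    # wide swings are routine
se_season = ba_standard_error(0.200, 500)  # much steadier
print(round(se_short, 3), round(se_season, 3))
```

At 50 at-bats the standard error is nearly .057 – so a .200 hitter routinely looks like a .140 or a .260 hitter – while over a full season it shrinks to about .018.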
PS. I leave you with this paradoxical question: Is it possible for one player to hit for a higher batting average than another player during a given year, and to do so again the next year, but to have a lower BA when the two years are combined?
*Annals of Applied Statistics, Volume 2, Number 1 (2008), 113-152
**This Wikipedia entry on the binomial distribution says that “this approximation is a huge time-saver (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre’s book The Doctrine of Chances in 1733.”
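For anyone who wants a spoiler on the paradoxical question in the PS above: the answer is yes, and the effect is known as Simpson’s paradox. The entirely hypothetical hit and at-bat totals below show how unequal at-bat counts can flip the combined averages:

```python
# Hypothetical (hits, at_bats) per year for two players.
# Player A beats Player B in each year, yet B beats A over the
# two years combined -- an instance of Simpson's paradox driven
# by the very unequal at-bat counts.
a = [(4, 10), (60, 200)]   # A: .400 in year 1, .300 in year 2
b = [(70, 180), (2, 8)]    # B: ~.389 in year 1, .250 in year 2

def avg(hits, at_bats):
    return hits / at_bats

a1, b1 = avg(*a[0]), avg(*b[0])
a2, b2 = avg(*a[1]), avg(*b[1])
a_all = avg(a[0][0] + a[1][0], a[0][1] + a[1][1])  # 64/210, about .305
b_all = avg(b[0][0] + b[1][0], b[0][1] + b[1][1])  # 72/188, about .383
print(a1 > b1, a2 > b2, a_all < b_all)
```

Player A’s combined average is dominated by the 200-at-bat .300 year, while Player B’s is dominated by the 180-at-bat .389 year.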
Musings on matrices
Posted by mark in Uncategorized on April 27, 2008
Evidently due to its concentration of algorithm-‘philic’ minds, Stat-Ease gets review copies of new technical tomes from the Society for Industrial and Applied Mathematics (SIAM). I once fancied myself a ‘mathlete,’ but I learned differently after moving up from being the big fish in my small high-school pond to a minuscule minnow at a major university in the Midwestern USA. My mistake was skipping ahead into an advanced calculus class populated by some of the country’s top talent – National Merit Scholars like me. Very quickly I realized that my math skills put me only on the very bottom rung, and then only by the very tip of one fingernail. What saved me was begging for mercy from the teacher, who, luckily, was sick and tired of the smart-mouths in the class who really got it and made sure to flaunt their chops in math. Thus, when the newest SIAM publication arrives, I always look it over in wonder before quickly passing it along to our master’s-level statisticians and algorithmic programmers, who may understand its true value.
For example, the book this week is Functions of Matrices: Theory and Computation by Nicholas J. Higham, which “emphasizes Schur decomposition, block Parlett recurrence, and judicious use of Padé approximants.” That blew me away immediately, but I riffled through the pages anyway and found a few of interest on the history of matrix functions, which really are useful in our business of experiment design and statistical analysis. (Thank goodness for the power of computers to do the calculations!) Higham credits English-born James Joseph Sylvester with coining the term “matrix” (not to be confused with the famous movie trilogy!). Sylvester emigrated to the USA, where he founded the American Journal of Mathematics in 1878, the self-proclaimed “oldest mathematics journal in the Western Hemisphere in continuous publication.”
What amazes me is that anyone can read such esoteric materials, but it’s good they do, because great advances are made possible by developments in math. For example, Higham points out that the first practical application of matrices led to the elimination of unwanted flutter in aircraft wings. (Galloping flutter, or wake vortex flutter, caused the spectacular failure of the Tacoma Narrows Bridge in 1940.*) This work was done by the Aerodynamics Department of England’s National Physical Laboratory (NPL) in the 1930s. In parallel, not far away in the UK, Ronald Fisher, the founder of modern-day applied statistics, was developing the core catalog of experimental design matrices that still remain in use today.
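For the curious, the kind of two-level factorial design matrix associated with Fisher’s legacy is simple to sketch. This is only a minimal illustration of the coded -1/+1 layout, not any particular software’s implementation:

```python
from itertools import product

def two_level_factorial(k):
    """Full two-level factorial design matrix for k factors,
    with each factor coded -1 (low) or +1 (high). Produces 2**k runs
    covering every combination of factor settings."""
    return [list(row) for row in product((-1, 1), repeat=k)]

design = two_level_factorial(3)  # 2^3 = 8 runs for 3 factors
for run in design:
    print(run)
```

A handy property of these matrices is balance: each column contains equal numbers of -1s and +1s, so every factor’s effect is estimated from the same amount of high and low data.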
“Here I stand because of you, Mr. Anderson. Because of you, I’m no longer an Agent of this system. Because of you, I’ve changed. I’m unplugged.”
– Agent Smith (played by Hugo Weaving ) from The Matrix Reloaded (2003)
PS. Neither the quote nor the picture really has much to do with matrices, but they provide me some amusement. For example, I saw the second movie of the Matrix trilogy with my brother Paul, a techie type like me. We annoyed the exiting theater patrons greatly by regurgitating Agent Smith’s lines – “Mr. Anderson” this and “Mr. Anderson” that – all with gagging glee.
The picture exhibits a physical matrix – the screen window. I just inserted all the screens earlier this week when it seemed as if Spring had finally arrived in Minnesota. However, we citizens of this northern State were chagrined to see a coating of snow yesterday morning – over a foot in some parts. 🙁
*If you’d like to set up an experiment on flutter that requires only a hair blower and some other materials that can be procured from your local hardware store, see this posting on Aeroelasticity Phenomenon by Wright State’s College of Engineering and Computer Science (Dayton, Ohio).