Complexity and Disaster

April 6th, 2012

With the increasing complexity of engineered systems (and their interactions with the environment in which they operate — not to mention the organizational and human factors which impact their operation), concepts for improving reliability are increasingly important. Designing for reliability also requires an understanding of the nature of complexity itself. Klaus Mainzer, in “Thinking in Complexity: The Complex Dynamics of Matter, Mind and Mankind” (a book I strongly recommend), defines complexity in terms of the resulting non-linear behavior of complex systems. He explains the non-linear dynamics of complex systems with fascinating examples, from the evolution of life and the emergence of intelligence to complexity in cultural and economic systems. I find his thoughts fit very well with concepts of complexity in engineered systems, and especially with how they fail.

Failure in complex systems often comes about due to a non-linear response to a load or an input (whether the input is something expected during normal operation or is due to an external event, such as a weather phenomenon or an accident). Engineers study how these non-linear responses happen, and how techniques for robust design of systems or incorporation of sensors and automated response systems can detect and correct a process or mechanism “going off the rails” before disaster can strike. In many cases, the non-linearity is due to an unseen or unintended interaction between components or processes. A relatively small loss in elasticity in an o-ring due to cold weather can lead to a rapid escape of burning gases which in turn leads to a catastrophic failure of a space shuttle, for example. I feel that failure is, in a sense, a way of recognizing the true complexity of a system. Of course, it would be far better to understand the complexity, the accompanying interactions, and the potential for non-linear response in an engineered system before a failure occurs.
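To make the idea of a non-linear response concrete, here is a toy numerical sketch (purely illustrative — the function and the critical load are invented, and this is not a model of any real structure). The response stays roughly proportional to the load when the load is small, then grows explosively as the load approaches a critical value:

```python
# Toy illustration only (not a physical model): a system whose response is
# roughly linear for small loads but diverges as the load nears a critical value.

def response(load, critical_load=100.0):
    """Deflection-like response that blows up near critical_load."""
    if load >= critical_load:
        raise ValueError("failure: load at or beyond the critical point")
    return load / (1.0 - load / critical_load)

for load in (10.0, 50.0, 90.0, 99.0):
    print(f"load={load:5.1f} -> response={response(load):9.1f}")
```

Going from a load of 10 to 20 roughly doubles the response, but going from 90 to 99 multiplies it elevenfold. That disproportionate growth near a threshold is the signature of non-linearity that sensors and automated response systems try to catch before it runs away.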

I am a co-author on two articles appearing in Mechanical Engineering, the magazine of ASME, which address complexity and failure. You can find the first at:

The second should be appearing in the March issue.

Both will help to explain some of the issues which make reliability of complex systems both a critical and difficult goal for engineers.

Study justifies closing airports in volcano event – Yahoo! News

April 30th, 2011

Study justifies closing airports in volcano event – Yahoo! News.

In reading this article about a study justifying the closing of airports (and grounding of flights) in Europe last year after the eruption of a volcano in Iceland (to avoid problems due to the large amounts of particulate material dispersed into the atmosphere), I am reminded of the arguments concerning the large amounts of money invested in avoiding possible problems due to the “Y2K” issue in the late 1990’s. The airport closings cost the airline industry several billion dollars, but resulted in no loss of aircraft due to the cloud from the volcanic eruption. Would any planes have crashed had this not been done? Who knows — but if past disasters have taught us anything, it is that we must be prepared to act based on the best possible knowledge of the impact of extreme conditions (or known faults, in the case of Y2K) before a failure occurs. If engineers (and policy makers) are successful, failure will be avoided. But that will always lead to arguments over whether the investment was worth it.

This also reminds me of arguments over investment in preventative medical care, etc. One can never tell what the outcome would have been had these precautions not been taken. Yet, for financial and other reasons — including any possible negative impact of the remedy or precautions — decisions must be made based upon peer-reviewed scientific evidence, collected past experience, and comprehensive computational calculations, modeling and simulations. This is an expensive prescription, but perhaps the best way to avoid some disasters.
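One way to frame the “was it worth it” argument is a simple expected-value comparison. The sketch below uses entirely hypothetical figures (the probability, loss, and cost numbers are made up solely to illustrate the arithmetic): a precaution is justified, in this narrow financial sense, when its cost is less than the expected loss it prevents.

```python
# All figures here are hypothetical, chosen only to illustrate the arithmetic
# of comparing a precaution's cost against the expected loss it prevents.

def expected_loss(p_event, loss_if_event):
    """Expected monetary loss: probability of the event times its consequence."""
    return p_event * loss_if_event

precaution_cost = 2e9    # assumed: cost of grounding flights
p_without = 0.02         # assumed: chance of catastrophic loss without the precaution
loss_if_event = 500e9    # assumed: total cost if the catastrophe occurs

risk_avoided = expected_loss(p_without, loss_if_event)
verdict = "justified" if risk_avoided > precaution_cost else "not justified"
print(f"cost ${precaution_cost:.1e} vs expected loss avoided ${risk_avoided:.1e}: {verdict}")
```

Of course, the hard part in practice is not this arithmetic but estimating the probability and the consequence — which is exactly where the peer-reviewed evidence, past experience, and simulation come in.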

Information from the Department of Energy on the Deepwater Horizon spill

April 28th, 2011

The U.S. Department of Energy has published a number of reports related to the BP/Deepwater Horizon oil spill. These make great background information for educators, scientists and engineers, and anyone who would like to know more about this oil spill disaster. The website is:

While some of the information may be a bit too detailed for general consumption (such as the “Flange Connector Spool Assembly Report”), the general timeline of the failure, details on the design of the blowout preventer (which failed) and subsequent technologies to contain the oil flow, as well as the footage of the oil flowing from the broken pipe, are fascinating.

Lessons from Japan

March 20th, 2011

The terrible results of the earthquake and tsunami last week in Japan hold many lessons for engineers. It is of course obvious that the impact on human health and the search for and care of survivors should be the primary concern. However, much has occurred and is occurring which can provide insight into the design of flood control systems, earthquake-resistant building and infrastructure design, and the safety of nuclear reactor facilities.

While the most recent focus has been on the nuclear reactors and the damage to the spent fuel pool, a recent article in the New York Times discusses the design of seawalls. As in the case of the hurricane protection system built in New Orleans (which failed in a spectacular way after Hurricane Katrina due primarily to poor design and outdated data, as well as the failure of backup pumps which should have pumped out some of the water which initially flowed over levees), the seawalls designed to protect shoreline areas including the nuclear power facilities were overwhelmed. The New York Times article (“Seawalls Offered Little Protection Against Tsunami’s Crushing Waves” by Norimitsu Onishi, 3/13/11) quotes one engineer, Peter Yanev, who points out the fatal miscalculation that “the diesel generators [used to pump cooling water] were situated in a low spot on the assumption that the walls were high enough to protect against any likely tsunami.” While higher seawalls can be constructed, it is always possible that a wave too large for even a seawall 40 feet high or more may occur. This is not to say that seawalls are useless — indeed, they have protected communities and power facilities from typhoons and smaller tsunamis. It just teaches engineers that the best “defense” against nature may be siting critical equipment (and in some cases entire facilities) in stable, protected locations, and applying the principles of “absolute worst case design” in such cases.

Absolute worst case design (or just “worst case design”) is an important technique (along with hazard analysis and redundancy) used to enhance the reliability of complex systems. It is most often used in electronics design, but also plays an important role in military and space systems. As you might guess, it starts with the basic idea that you design your system to withstand the worst possible operating conditions. We often note that electronics or mechanical devices designed for military use tend to be very expensive — in fact, this is a common criticism of expenditures for items built for the U.S. Department of Defense. Yet one contributing factor to this cost is the “worst case” design specifications used. A computer used in your home has far fewer requirements (in terms of reliability) than one designed to go into a tank or into a spacecraft. By developing design requirements which take into account extreme conditions coupled with the need for high reliability, engineers can create systems able to handle harsh conditions without failing. This concept should certainly be applied to nuclear reactor components, including cooling systems.
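As a minimal sketch of what worst-case analysis looks like in electronics (the circuit, tolerance values, and spec limits below are all invented for illustration), consider checking a resistive voltage divider at every extreme combination of its tolerances, rather than only at nominal values:

```python
from itertools import product

# An illustrative worst-case design check; the circuit values, tolerances,
# and spec limits are all assumed for the sake of the example.

def divider_out(vin, r1, r2):
    """Output voltage of a resistive voltage divider."""
    return vin * r2 / (r1 + r2)

vin_nom, vin_tol = 12.0, 0.05            # assumed supply: 12 V +/- 5%
r1_nom, r2_nom, r_tol = 10e3, 5e3, 0.01  # assumed 1% resistors
spec_lo, spec_hi = 3.7, 4.3              # assumed allowed output range, volts

# Evaluate every extreme corner (2^3 combinations of tolerance signs).
corners = [
    divider_out(vin_nom * (1 + sv * vin_tol),
                r1_nom * (1 + s1 * r_tol),
                r2_nom * (1 + s2 * r_tol))
    for sv, s1, s2 in product((-1, 1), repeat=3)
]

print(f"worst case output: {min(corners):.3f} V to {max(corners):.3f} V")
print("PASS" if spec_lo <= min(corners) and max(corners) <= spec_hi else "FAIL")
```

At nominal values the output is exactly 4.0 V, but across the tolerance corners it ranges from roughly 3.75 V to 4.26 V — which still fits the assumed 3.7 to 4.3 V spec. A design checked only at nominal values would never have asked that margin question at all, which is precisely the discipline worst-case design imposes.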

If you are interested in reading more about the U.S. Army’s “design for reliability” practices, there is a handbook available at

One other concept which is very important in ensuring the reliability of critical systems is the use of engineering standards. Standards for nuclear power facilities (both for design as well as for operations and maintenance — including handling fuel and waste) are some of the most complex and rigorous ever developed. For example, the American Nuclear Society maintains a set of standards which consider everything from “Nuclear Criticality Safety Training” to “Containment System Leakage Testing Requirements” to “Nuclear Plant Response to an Earthquake”. ANS, with the experience of many engineers and scientists to guide it, has developed standards for fuel handling, determining the impact of weather on facilities, alarm systems, and reactor design.

In 2006, ANS published a position statement on Nuclear Facility Safety Standards. In it, they state:

“The American Nuclear Society believes that consistent application of such standards provides a high level of safety. The ultimate responsibility for ensuring safety, however, rests with the operator of the nuclear facility in rigorously applying these standards. An effective and independent regulatory authority is also essential.”

As always, while use of standards is critical, engineering design is essentially a “human” enterprise, and it is up to those who design, operate and maintain nuclear facilities to make safety their highest priority — a lesson learned from Three Mile Island and Chernobyl as well.

A great website on Galloping Gertie….

February 24th, 2011

There is a terrific website for learning about the Tacoma Narrows Bridge disaster, from the Washington State Department of Transportation. I especially like their analysis because they discuss the psychological “blind spot” of the engineers who designed the bridge — how the distinguished, accomplished engineers of the early twentieth century somehow ‘forgot’ the aerodynamic lessons of nineteenth-century bridge designers. Combined with the need to understand the often complex torsional effects of winds on suspended structures, this oversight led to one of the most spectacular bridge disasters (and provides a terrific learning tool). Thank you, Washington State DOT!

Article on Learning from Disaster in Prism magazine

February 3rd, 2011

Charles Q. Choi has written an article entitled “Learning from Disaster” in Prism (published by the American Society for Engineering Education). It focuses primarily on lessons learned from the Gulf oil spill by engineers, including the role of redundancy (done correctly) in limiting the possibility of failure, and the need for engineers to be conscientious in their designs and persuasive and forthcoming in their critiques of engineered systems when they know something to be wrong.

NRC: Backgrounder on the Three Mile Island Accident

January 31st, 2011

NRC: Backgrounder on the Three Mile Island Accident.

As I prepare for another semester teaching “Learning from Disaster”, I am collecting background materials for study. The above link provides an excellent outline of the Three Mile Island nuclear power plant failure, which I will use in class this semester as a reading assignment prior to discussion of this disaster.

I will post more of these materials, as well as my class syllabus, over the next few weeks.

How Munich Re Assesses Risk – Yahoo! News

December 3rd, 2010

How Munich Re Assesses Risk – Yahoo! News.

An interesting article on the difficulties in risk assessment (from the perspective of a re-insurance company — one that insures the insurers against unexpected consequences).  In considering risk assessment (and hence avoidance of failure in engineering), engineers and managers of engineering companies face many of the same challenges.

Historical bias in disasters

November 25th, 2010

I have recently read a fascinating account of the yellow fever epidemic which killed a significant percentage of the population of Philadelphia (and the surrounding area) in 1793. The book is entitled “Bring Out Your Dead” by J.H. Powell (originally published in 1949 by the University of Pennsylvania Press, and reissued by Time Life Books in 1965). While this is an account of a natural disaster (with important human factors — especially bias-related errors made in determining the likely cause of the plague and the best treatment methods), many of the author’s observations can be applied to issues of bias in the human causes and study of any disaster. I found the following quote especially relevant:
“Facts do not make tradition; they are swept away as it forms. But tradition makes history in its own terms, and gives each disaster such place in knowledge as men can know and use. This process begins as soon as disaster is over, and as soon as those who survive begin to forget a part of their experience, and devote the unforgettable remnant to some use. Afterwards, disaster is tortured by reason, tragedy averted by the simple persistence of living.”
While the author referred to the yellow fever epidemic, and how the key actions of the doctors and others involved were viewed by history, we can see how this might apply to bias in the analysis of more recent engineering disasters.

“Teaching by Disaster”

October 17th, 2010

I presented a paper entitled “Teaching by Disaster: The Ethical, Legal and Societal Implications of Engineering Disaster” at the American Society for Engineering Education Fall 2010 Middle Atlantic Section Conference (October 15-16, Villanova University), which discussed the results of my first offering of a course devoted exclusively to engineering disasters. I have included the text of the paper as a separate page with a link on the right — please have a look and let me know what you think!