In his Monday post, my colleague, Phil Palin, wrote about the lack of any regulatory requirement to plan for the catastrophic failure of the Deepwater Horizon drilling platform in the Gulf of Mexico. In what for me was the punchline of his post, Phil wrote, “What I cannot find in any of these regulations or plans or instructions is significant attention to catastrophic possibility. This is not surprising. It is typical. I do not view this inattention as proof of regulatory or corporate malfeasance. It is, though, a mistake. It is a mistake we now have an opportunity to recognize and plenty of motivation to correct.”
It might be helpful then for us to review what we mean when we classify a decision or action as a mistake. Each decision or action we take (including decisions not to decide or act) has two components: its intention and its execution. When the execution matches the intention, we tend to view the performance as a success (cf. the folk notion of intentionality as described by Malle & Bennett 2002). Likewise, an execution that appears inconsistent with its underlying intention is readily appreciated as an error.
What then do we call a situation in which our execution matches its intention, but the intention is, at least in hindsight, ill-suited to the circumstances? How about a situation in which both our intentions and execution of them are inappropriate (that is, they produce unexpected or undesired results)?
We cannot rightly address mistakes without considering the intentions of those considered to have made them. Each discipline applies somewhat different standards to this assessment, however. And herein lies one of the difficulties now confronting us in the aftermath of yet another catastrophe whose origins seem to lie in what we so glibly called a “failure of imagination” in the months after 9/11 (Kean & Hamilton 2004:339-348; cf. Rowling 2008).
For a significant part of my career, I was involved in the development, administration, and application of building and fire safety codes. In the late 1980s and early 1990s, regulators in Europe began to wonder whether the prescriptive form of building regulations was inhibiting innovation in the design, construction, and building materials industries. Engineers and architects complained that building officials (many of whom did not hold professional qualifications in building technology) were insufficiently qualified to render judgments about their designs. Construction tradesmen complained that architects and engineers did not understand how buildings really worked. Politicians worried that building regulations, like other areas of government intervention, were no longer required. And the public assumed that the system was working to protect their interests, individually as well as collectively.
The most ardent advocates of performance-based regulations assumed that the vested interests of market participants would prevent failures without unduly hindering imagination. Few market participants questioned these assumptions, much less bothered to ask whether these assumptions were in fact conditions that would, if they proved false, render us vulnerable to very significant losses and disruptions that we had neither previously experienced nor imagined.
The case for government intervention usually rests on an assumption of market failure. For decades, we assumed that building codes were required to minimize the effects of negative externalities arising from the construction, use, occupancy, and maintenance of buildings. This argument assumed that what was good for a building owner and his agents might still expose others to unusual and unforeseen risks if these transactions remained unattended by independent scrutiny. This diagnosis of the situation required of us regulations that would protect innocent third-parties from the acts of building owners and their agents.
What changed our minds about this? Well, for one thing, we stopped having major conflagrations that burned down whole cities or at least major parts of them. We still had big fires that killed a lot of people for a while, but as the codes got better these too all but disappeared. (The U.S. still has a higher fire death rate than many other developed countries, but the number of people killed in fires has dropped by about two-thirds since 1970, while the population has increased by about 50 percent. Fewer than 150 people die annually in commercial property fires, and almost all of these deaths involve explosions or intimate involvement with or proximate exposure to an ignition source.)
At the same time, the toll taken by natural disasters has increased, in large measure due to the effects of urbanization. Our expectations of buildings, even in the most rudimentary sense of life safety, have evolved: measures that permit escape but leave the building uninhabitable after a major, though not necessarily catastrophic, event are now considered unacceptable, or at least undesirable. As the recent financial crisis has aptly illustrated, we expect our buildings to serve as stores of wealth not just by protecting their contents and facilitating productive activities but by serving as means of exchange themselves.
When I moved to New Zealand in 1999, I did so, in part, to see how so-called performance-based regulations worked in practice. I wondered: did they, like the reinventing principles being applied to other aspects of government involvement in society and markets, make the country freer? Did they produce more innovation? Did this innovation yield social dividends or increase social welfare without compromising economic efficiency?
In short, the answer was no to all of the above in one degree or another. By 2002, ten years after New Zealand enacted its experiment in deregulation of construction, the country was confronting a full-blown crisis of confidence in building regulations spawned by problems with exterior insulation systems that threatened to bring construction lending and indemnification of builders, designers, and certifiers to a grinding halt.
Experts assembled to review the situation and redesign the building regulations quickly concluded that failures had occurred in both intention and execution. This led them to believe that government still had a role to play in preventing or managing the effects of market failure, but they diagnosed the cause of that failure differently. Negative externalities were not the cause; information asymmetry was. As such, they reckoned that any new system had to take account of both intention and execution.
Why is this significant? Information asymmetries produce problems for the first and second parties to a transaction, not just those exposed to their decisions, particularly when a transaction involves a high degree of complexity, as do most modern buildings.
When we started writing building codes, most people had practical knowledge of building trades or the mechanical arts, as they were then known. Today, shop courses in which students learn craft-based skills have all but vanished from the secondary education curriculum (see Crawford 2009). By the time the crisis became evident in New Zealand, it had become abundantly clear that our distance from such knowledge had impaired our ability to interpret price signals. Absent a better way of informing participants’ choices and framing their expectations of one another and the building itself, they had little hope of achieving performance, much less improving efficiency or encouraging innovation.
Information asymmetry produces insidious effects. At best, the ambiguity accompanying complex transactions leaves participants vulnerable to problems of adverse selection, in which they cannot readily distinguish the qualities by which one course of action should be preferred to another on the basis of its prospective utility. At worst, participants are lulled into a false sense of security that suggests their choices matter very little, which leads them to make decisions their morals might otherwise compel them to avoid.
Today, we live in an environment in which complexity like this has overwhelmed common sense to such an extent that the ethical codes of some professions all but compel conduct that in some circumstances seems immoral. The Institute of Professional Engineers of New Zealand’s canon of ethics, for example, contained a provision prohibiting one engineer from publicly criticizing the work of another. That provision was even cited as the basis for efforts to silence a whistleblower who suggested, in the wake of New Zealand’s building code crisis, that even bigger problems than those exhibited by external insulation and building envelope systems threatened buildings’ structural integrity.
In a perfectly functioning market, we expect incentives to exert both positive and negative pressure over participants. As a consequence, we assume all actions to reflect the ordering of participants’ preferences. If we are not careful, this can become a self-fulfilling prophecy rather than a simplification that helps us understand the limits of our understanding and the efficacy of our interventions.
Our expectation that government can or should intervene rests not only on experience that markets perform their functions imperfectly, but also upon a desire to perfect the human condition by promoting the common good. If people were purely rational and acted only out of self-interest, we would not enjoy many of the benefits of innovation we now take for granted.
These innovations flow not from unbridled freedom, but from a desire to overcome the limits and constraints of the human condition, to free ourselves from restrictions that inhibit our potential. We measure this potential in many ways, and often sacrifice present welfare for future benefit even when it means others besides ourselves will enjoy the fruits of our labor.
As Phil noted, we now have a lot of motivation to look differently at the risks of technological failure surrounding efforts to supply our insatiable demand for oil. Where will we turn for the inspiration to inform our motivation?
Like the information asymmetries that plague complex transactions, a praise-blame asymmetry (Hindriks 2008) often affects our perception of and response to the choices before us. The tendency to dispense praise and blame disproportionately will undoubtedly influence how the market responds to this event. It has already become evident that no amount of praise will improve the response or ameliorate the effects on the Gulf ecosystem or its inhabitants. Sadly, neither will blame.
Our best chance to correct both the information and praise-blame asymmetries always precedes the disaster. Praise, if not rewards, is easier to dispense, as it requires no prior motivation or justification; it has motivated more innovation in environmentally sustainable design than any regulations have to date (see USGBC 2010).
Rather than imagining all the ways in which a system might fail, whether through intention or execution, why not consider the ways in which we might encourage or promote the behavior we desire? How might we apply the lesson of this catastrophe to improve the quality and performance of other elements of our homeland security system?