The financial crisis and BP share a common attribute: regulatory failure.
It’s time to dust off Stanley Kubrick’s 1964 dark comedy, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.” After the hapless regulatory response to the financial crisis and the BP oil spill, maybe we should just learn to stop worrying and love technological uncertainties.
The satirical doomsday film made the point that the genie of new technology can't be put back in the bottle. Even more than in the anxious 1960s, the pace of technological change is faster than the ability of regulators to address risks. One result is that unanticipated catastrophes are more frequent, whether in the engineering of financial instruments or deep-ocean drilling.
The financial-reform compromise Congress reached last week would not have prevented the housing bubble that led to the credit crisis. The financial models that failed the derivatives market, for example, would still fail, because the reform didn’t include Fannie Mae and Freddie Mac. Loose lending by these government-created institutions inflated the housing bubble, undermining longstanding financial models for risk. Their combined losses for taxpayers could reach $1 trillion, a leak that looks as unstoppable as BP’s.
Likewise, one reason the BP oil accident became a disaster is that regulators issued the industry a faulty model of what would happen in a deep-water accident. In both cases, models that were supposed to predict bad outcomes failed—financial models didn’t correlate the risks of a housing bubble, and drilling models didn’t predict what would happen if a well blew out far below the surface.
BP and other oil companies are required to use a risk model known as OSRA, for oil-spill risk analysis, prepared by the Interior Department’s Minerals Management Service. In 2000, the agency said it would issue a new model to assess deep-water spills, but the latest model, from 2004, still requires oil companies to base risk assessments and contingency plans on government estimates for what would happen in a surface spill. It turns out that with spills far below the surface, oil travels in unpredicted ways, hitting land faster and in more volume than the model predicted. Bad information in meant bad information out. As Exxon Mobil CEO Rex Tillerson said, response plans by oil companies are “prescribed by regulation, including the models that are used to project different scenarios for oil spills.”
Instead of acknowledging the role of bad information leading to unreliable models and a failure to assess and address risks properly, the government now claims false certainty. When the Interior Department in May banned drilling in water deeper than 500 feet, it claimed its analysis had been peer-reviewed by experts at the National Academy of Engineering. These engineers now say they never backed a blanket moratorium.
A federal judge last week lifted the ban on drilling. “The court is unable to divine or fathom a relationship between the findings and the immense scope of the moratorium,” wrote Judge Martin Feldman. The Interior Department report justifying the ban on drilling “lacks any analysis of the asserted fear or threat of irreparable injury” and instead is “only incident-specific and driven: Deepwater Horizon and BP only.”
The judge’s exasperation is a measure of how far we are from having sound risk management: “If some drilling equipment parts are flawed, is it rational to say all are? Are all airplanes a danger because one was? Are oil tankers like Exxon Valdez? All trains? All mines? That sort of thinking seems heavy-handed and rather overbearing.”
We live at a time when we expect access to good information, whether to assess financial risks or the impact of drilling accidents. “The parallels between the oil spill and the recent financial crisis are all too painful: the promise of innovation, unfathomable complexity, and lack of transparency,” wrote Harvard economist Kenneth Rogoff earlier this month. “The accelerating speed of innovation seems to be outstripping government regulators’ capacity to deal with crises, much less anticipate them.” When there is uncertainty about big risks, regulation “perpetually overshoots or undershoots its goals.”
Some risks are so unknown that they are not subject to being managed. Richard Posner, the federal judge and University of Chicago law professor, recently recalled in the Washington Post how in his 2004 book, “Catastrophe: Risk and Response,” he had identified many disaster possibilities. “Yet I did not consider volcanic eruptions, earthquakes or financial bubbles,” he wrote, “simply because none of those seemed likely to precipitate catastrophes.”
Regulators, banks and oil companies can’t prevent catastrophes, but they can do a better job of identifying risks, from Fannie and Freddie to untested drilling techniques. As “Dr. Strangelove” reminds us, new technologies have enough risks of their own without adding the further unpredictability of human error.