Operational Risk and Financial Stability
“There are only two types of organizations: those that know that they’ve been hacked and those that don’t yet know.” Dmitri Alperovitch, CrowdStrike (CNBC, December 15, 2016)
Recent disasters—both natural and man-made—prompt us to reflect on the relationship between operational risk and financial stability. Severe weather in sensitive locations, such as Hurricane Irma in Florida, raises questions about the resilience of the financial infrastructure. The extraordinary breach at Equifax highlights the public goods aspect of data protection, with potential implications for the availability of household credit.
At this stage, it’s important to pose the right questions about these operational shocks and, over time, to draw the right lessons. We expect that the risk managers of systemic financial intermediaries, the members of their boards, their regulators, and their ultimate legislative overseers are currently in the midst of an intensive review of these firms’ exposures (and those of the financial system as a whole) to such risks.
So, what is operational risk (OR)? The Basel Committee on Banking Supervision (BCBS)—which has collected data on OR losses, developed principles for managing OR, and designed OR capital requirements—defines OR as “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.” The BCBS definition “includes legal risk, but excludes strategic and reputational risk.” It also encompasses the risk of fraud, such as the rogue trading events that caused large losses at numerous firms and, on occasion, led to outright failure (e.g., Barings in 1995). In its data collection, the BCBS has classified OR events into several categories, ranging from internal fraud to execution, delivery and process management, and including damage to physical assets and business disruption (see Appendix 2 here).
Studies show that OR events are very important for bank capital planning. For example, de Fontnouvelle et al. find that, for many banks, operational risk exceeds traditional market risk (say, from exposure to interest rate fluctuations). The potential for spillover from OR events to reputation also implies a need for capital to ensure institutional resilience. However, the same authors uncover a bias toward under-disclosure of large OR losses. Even so, in the 10 years ending in 2002, they found more than 100 operational losses exceeding $100 million.
The OR problem is widespread. In the latest BCBS loss data collection exercise (published in 2009), 121 banks from 17 countries reported cumulative losses totaling €64 billion (mostly between 2002 and 2008). Relatively few losses are attributed to business disruption or to damage to physical assets. Notably, only 41 of the more than 10 million events reported to the Basel Committee resulted in losses of more than €100 million (see Table ILD6 here). That is, only a tiny fraction of OR events threaten the existence of an institution and the resilience of the financial system. Because such events are sparse and quite varied in character, it is vital to explore each new large-scale occurrence in depth to understand how to mitigate the threat.
September 11, 2001 remains the most dramatic example of a systemic disruption arising from OR. When terrorists destroyed the World Trade Center (WTC) towers, they also interrupted power and communications in Manhattan’s financial district. Not all firms had anticipated the potential business continuity need for usable working space or for various backup mechanisms. For example, one of the two key clearing banks—which handled hundreds of billions of dollars’ worth of Treasury securities transactions each day—had located both its primary and backup operations within blocks of the Trade Center. Unsurprisingly, failed settlements in the Treasuries market surged on September 11 and for some time thereafter (see Fleming and Garbade).
More importantly, September 11 posed a major threat to the U.S. payments system that could only be contained by aggressive central bank intervention (see, for example, Lacker). As one immediate symptom, the number of transactions on Fedwire—where the largest transfers among banks clear almost instantly—plunged on September 11 to 249 thousand, down from 436 thousand the previous day, and remained below the September 10 level for a week (see McAndrews and Potter). More broadly, many financial firms were initially unable to complete vital transactions (such as helping their clients issue new commercial paper to replace expiring issues). And, with U.S. aviation grounded for a week in an era when checks were literally flown around the country to complete transfers between bank accounts, incomplete payments (known as float) spiked. To forestall a liquidity crisis, the Federal Reserve engineered a brief, unprecedented expansion of its balance sheet, including a then-extraordinary rise in direct lending. Fortunately, the surge in demand for the safest, most liquid instruments subsided as the overall payments system quickly went back to normal (see following chart).
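The scale of the Fedwire disruption is easy to verify from the transaction counts quoted above (from McAndrews and Potter). A minimal back-of-envelope check:

```python
# Fedwire transaction counts cited in the text (McAndrews and Potter).
before = 436_000   # transactions on September 10, 2001
after = 249_000    # transactions on September 11, 2001

# One-day decline in Fedwire transaction volume.
drop = (before - after) / before
print(f"One-day decline in Fedwire volume: {drop:.0%}")  # → 43%
```

In other words, more than two-fifths of the previous day’s transaction flow simply failed to occur.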
Aggregate Reserves of the Federal Reserve System (Billions of U.S. dollars), September 2001
Everyone learns from terrible events like September 11. By the time Hurricane Sandy ravaged the Manhattan financial district in 2012, the largest financial firms typically had established backup systems elsewhere, sometimes at quite a distance, improving the resilience of the financial system. Even so, Sandy triggered the first weather-driven two-day closure of the New York Stock Exchange (NYSE) since 1888.
Despite the knowledge gained from these events, operational risks from disasters remain difficult to anticipate long in advance, and can still threaten the financial system. Shortly before Hurricane Irma reached Florida earlier this month, forecasters feared an unprecedented storm surge, in addition to the heavy rain, high winds and possible tornadoes. Naturally, the key concern was getting people to safety. Yet, from a financial stability perspective, the focal point was the major back-office financial hub that emerged in Tampa in the aftermath of September 11. Today, household names like Citigroup, JPMorgan and MetLife, as well as one of the leading financial market utilities (FMUs)—the Depository Trust and Clearing Corporation (DTCC)—have significant operations in Tampa.
While relatively few people have heard of it, DTCC is part of the fundamental plumbing of the U.S. financial system for which there is no substitute in the short run. Reflecting its systemic importance, DTCC is one of only eight firms designated by the Financial Stability Oversight Council as an FMU, subjecting it to the scrutiny of the Federal Reserve and providing it access to the Fed’s discount window in a crisis. According to its 2016 Annual Report, DTCC cleared an average of 100 million U.S. securities market transactions each business day, resulting in an annual clearance volume exceeding $1.5 quadrillion (that’s more than $1,500 trillion; for comparison, the financial assets of U.S. households totaled $75 trillion at the end of 2016). In short, the services provided by DTCC constitute a critical public good.
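The DTCC figures above are so large that a rough scale check helps. The sketch below uses the numbers quoted from the 2016 Annual Report; the 250 business days per year is an assumption for illustration, not a figure from the report:

```python
# Rough scale check of the DTCC figures cited above. The business-day
# count is an assumption; the other numbers come from the text.
daily_transactions = 100e6   # average transactions cleared per business day
annual_value = 1.5e15        # > $1.5 quadrillion cleared per year
business_days = 250          # assumed business days per year
household_assets = 75e12     # U.S. household financial assets, end-2016

# Implied average dollar value per cleared transaction.
avg_value = annual_value / (daily_transactions * business_days)
# Annual clearance as a multiple of total household financial assets.
multiple = annual_value / household_assets

print(f"Implied average transaction: ${avg_value:,.0f}")   # → $60,000
print(f"Annual clearance vs. household assets: {multiple:.0f}x")  # → 20x
```

On these assumptions, DTCC clears roughly 20 times the financial assets of all U.S. households each year, which is what makes the absence of a short-run substitute so consequential.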
Fortunately for the financial system, the impact of Irma on Tampa—however frightful—was less than initially feared. More importantly, DTCC itself is clearly thinking about how to limit the threat of future Irmas to dedicated financial facilities in a cost-effective manner. Just this past May, it published a White Paper entitled Moving Financial Market Infrastructure to the Cloud: Realizing the Risk Reduction and Cost Efficiency Vision While Achieving Public Policy Goals. The paper highlights the advantages of cloud computing—“a model for enabling on-demand network access to a shared pool of configurable information technology capabilities/resources (e.g. networks, servers, storage, application, and services)” (see definition on page 35 here). From the perspective of financial stability, the most important benefits of the cloud are the access to multiple data centers, the strengthened networks, the ability to replicate, the top-quality talent employed by cloud vendors, and the reliability that supports business continuity and recovery. (Interestingly, while the September 11 attacks compelled an evacuation of the NYSE, located just one-third of a mile from the WTC, the NASDAQ electronic network was still functional, but stayed closed until the NYSE re-opened on September 17.)
We defer to IT specialists in assessing the costs and benefits of the cloud compared to purpose-built, dedicated infrastructure. But one of the key issues will almost certainly be the protection of private information. As DTCC’s latest systemic risk survey (from the first quarter of 2017) reconfirms, cyber risk tops the list of client concerns, as it has since the initial canvass in 2013.
That brings us to the breach at Equifax. Experian, TransUnion, and Equifax are the three major consumer reporting companies (CRCs). Together, they form the backbone of the system that provides a total of nearly $15 trillion of credit to U.S. households. Whenever someone applies for credit—a new credit card, a home mortgage, an auto loan, or even a cellphone or electric company account—the provider usually turns to one of these three CRCs to assess the applicant’s creditworthiness. The CRCs are to consumer credit what the credit rating agencies are to business and sovereign debt markets. Partly due to the efficiency of this information mechanism—and the competition it facilitates among lenders—many U.S. consumers can borrow at reasonable cost. As with DTCC, the services provided by these firms constitute an important public good. But, there is an important difference: the CRCs are subject to far less regulatory scrutiny (at least, for now).
Against this background, the Equifax breach—affecting key identity data for 143 million people, or roughly half of the adult population—poses risks to financial stability that, at least so far, have gotten less attention than they merit. To understand why we say this, imagine that, because of the Equifax breach, providers of credit become concerned that CRC data has become tainted either directly by hackers or by fraudulent accounts. Might intermediaries suddenly curtail credit supply to households? How long would it take for a new information system to be put in place to restore confidence? What would be the impact on the economy? The analogy to the creditors’ loss of confidence in banks amid the financial crisis of 2007-2009 is a disturbing one.
So far, there are only limited indications of a generalized loss of confidence. While Equifax’s stock price has plunged by 35 percent since September 7 (the day that the firm revealed the breach), the value of TransUnion stock is down by about 16 percent, and that of Experian by only about 5 percent. Some of these valuation declines may reflect expectations of increased security and regulatory costs and of the costs of reassuring worried consumers. Absent those costs, and assuming that Equifax’s data problems were unique, TransUnion and Experian would be expected to gain market share, driving their stock prices up. Since their prices fell instead, it is possible that confidence has begun to erode.
The challenge is that, as our opening quote suggests, it is virtually impossible for a firm to prove that it has not been hacked. Consequently, if financial professionals come to view the quality of CRC information about consumers with suspicion, the burden of reversing that perception will be a heavy one. If we are fortunate, the intermediaries that pay the CRCs for access to credit data will quickly be able to tell who is ensuring their data are secure and high quality, and who is not. That is, the users will be able to determine which, if any, of the CRC firms have invested in the equipment, software and personnel necessary for their current business model to survive.
Over time, legislators and regulators will need to take a much closer look at the CRC industry. Will the market discipline from their customers—the suppliers of household credit—be sufficient to improve the performance of the CRC oligopoly? Might there instead be a race to the bottom, where incumbent firms cut investment in data reliability and protection in order to provide their services at lower cost (a concern that also arises with competing derivatives and securities clearinghouses)? Or, as DTCC’s White Paper suggests, will new technology (such as the availability of cloud computing) help the CRCs improve data reliability without driving up costs?
Some observers already are calling for a shift to a regulated public utility model (see here and here). We are sympathetic: while CRCs are not clearing and settlement firms, at least superficially they have other features of FMUs (including limited competition and substitutability). Ultimately, whether the public utility approach is desirable will depend at least in part on how effective oligopolistic competition can be in preventing further Equifax-like disasters. Clearly, the evidence to date is not hopeful.