The Myth of Banking Deregulation
Myth: Finance was deregulated during the 1980s and 1990s, laying the groundwork for the 2008 financial crisis.
Reality: Although some financial regulations were rolled back during the late 20th century, the overall trend was toward increased government control.
According to many commentators, the New Deal regulatory regime led to the longest period of banking stability in U.S. history, but that regime was destroyed by free market ideologues who, during the late 20th century, oversaw a radical deregulation of the financial industry. This, they conclude, laid the groundwork for the 2008 financial crisis.
But while some restrictions on finance were lifted during this period, other controls were added — and the subsidization of finance that drained the system of market discipline only increased. As we entered the 21st century, our financial system was not a free market but a Frankenstein monster: large and imposing but inflexible and unstable.
The Collapse of the New Deal Regulatory Regime and the Re-Regulatory Response
The banking system was in many respects fairly stable in the decades following the New Deal, with far fewer bank failures than in the past.
By far the most important factor in postwar stability was not New Deal financial regulations, however, but the strength of the overall economy from the late 1940s into the 1960s, a period when interest rates were relatively stable, recessions were mild, and growth and employment were high.
Part of the credit for this stability goes to monetary policy. Although the classical gold standard that had achieved unrivaled monetary stability during the late 19th century had fallen apart during World War I, the Bretton Woods agreement struck near the end of World War II retained some link between national currencies and gold, limiting the government's power to meddle with money. According to economist Judy Shelton:
[T]here can be little question that the sound money environment that reigned in the postwar years contributed to the impressive economic performance of both the victors and the vanquished and enabled the world to begin reconstructing an industrial base that would raise living standards to new heights for the generations that followed.
This would change as an increasingly expansive and expensive U.S. government cut its remaining ties to gold in 1971. The volatile inflation and interest rates that followed would throw the financial system into disarray, revealing the hidden weaknesses created by the New Deal regulatory regime. The failure of the New Deal regime would become most clear during the savings and loan crisis.
The New Deal had divided up the financial industry into highly regimented, tightly controlled silos. Insurance companies, investment banks, commercial banks, and savings and loans (or “thrifts,” as they were often called) all operated in their own universes, free from outside competition. The players in each sub-industry faced their own unique set of restrictions as well as their own government subsidies and privileges.
Thrifts were limited by the government almost exclusively to accepting deposits and making loans to homebuyers. In exchange for promoting home ownership, they were given special privileges by the government, including protection from competition and the ability to pay a slightly higher interest rate on their deposits than traditional banks. It was a simple business model best summed up by the famous 3-6-3 rule: borrow at 3 percent, lend at 6 percent, and be on the golf course by 3.
But this setup made thrifts enormously vulnerable to interest rate risk. They were making long-term loans, often for 30 years, at fixed interest rates, yet were borrowing short-term via savings accounts. What would happen if depositors could suddenly get a higher return on their savings elsewhere, say by parking them in one of the new money market accounts? What would happen if inflation rose and their savings began losing purchasing power? Depositors might flee, depriving the thrifts of capital. Thrifts, meanwhile, would have their hands tied: Regulation Q set a cap on the interest rate they could pay on deposits. And even if their hands hadn't been tied by Regulation Q, paying higher interest rates would cause thrifts to lose money on their existing loans: they could end up paying out 10 percent or more in interest to their depositors while receiving only 6 percent in interest payments from the loans already on their books.
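To make the squeeze concrete, here is a minimal sketch in Python. All of the numbers are hypothetical (the 6 percent loan rate echoes the 3-6-3 rule above); the point is only the direction of the arithmetic once deposit rates rise above the rate locked into the loan book.

```python
# Hypothetical thrift balance sheet: long-term fixed-rate loans
# funded entirely by short-term deposits that must be repriced
# whenever depositors can earn more elsewhere.
loan_book = 100_000_000   # 30-year fixed-rate mortgages outstanding
loan_rate = 0.06          # rate locked in when the loans were made

for deposit_rate in (0.03, 0.06, 0.10):
    interest_income = loan_book * loan_rate
    interest_expense = loan_book * deposit_rate
    net = interest_income - interest_expense
    print(f"deposit rate {deposit_rate:.0%}: net interest income ${net:,.0f}")

# deposit rate 3%:  $3,000,000   (the "3-6-3" world)
# deposit rate 6%:  $0
# deposit rate 10%: $-4,000,000  (the loan book now loses money every year)
```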
All of this is exactly what happened when, starting in the late 1960s, the Federal Reserve began expanding the money supply to help the government finance a burgeoning welfare state and the Vietnam War. By the late 1970s, inflation had reached double digits.
As interest rates rose, thrifts began to fail in large numbers, but rather than unwind them, the government tried to save them. It did so in part through a program of partial deregulation: for example, it allowed thrifts to diversify their assets, e.g., by moving into commercial real estate or purchasing high-yield bonds, and it eliminated Regulation Q's cap on deposit interest rates. Meanwhile, the government also dramatically expanded its deposit insurance subsidy for banks, including thrifts, increasing coverage in 1980 from $40,000 to $100,000.
The government’s program was disastrous — but not because of any problem inherent in deregulation. Had the government pursued a genuine free-market policy by allowing failed institutions to go out of business, ending the moral hazard created by deposit insurance, and then allowing the remaining thrifts to enter new lines of business and pay market interest rates, there still would have been pain and the process would have been messy, but the financial system would have moved in a more sound, more stable direction. Instead, the government created one of the greatest catastrophes in U.S. banking history by propping up and subsidizing insolvent “zombie banks,” giving them the power and incentive to gamble with taxpayers’ money.
To say a thrift is insolvent is to say that its capital has been wiped out. The bank no longer has any skin in the game. That creates a perverse set of incentives: it pays the thrift's owners to make huge gambles, which, if they pay off, will make them rich, and if they don't, will leave them no worse off. Deposit insurance, meanwhile, gives them virtually unlimited access to capital, since they can promise high interest rates to depositors who don't have to worry about the risks the bank is taking.
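The incentive asymmetry can be made concrete with a toy simulation. The bet size, odds, and payoffs below are invented for illustration; the point is that a fair gamble becomes a one-way bet once the owners' own capital is gone.

```python
import random

random.seed(0)

# Hypothetical coin-flip bet: win $50M or lose $50M, expected value zero.
WIN, LOSS = 50_000_000, -50_000_000

trials = 100_000
owner_total = 0    # what the thrift's owners pocket
insurer_total = 0  # what the deposit insurance fund absorbs

for _ in range(trials):
    outcome = WIN if random.random() < 0.5 else LOSS
    if outcome > 0:
        owner_total += outcome    # winnings accrue to the owners
    else:
        insurer_total += outcome  # with zero capital, losses fall on the insurer

print(f"owners' average payoff per bet: ${owner_total / trials:,.0f}")   # ~ $25,000,000
print(f"insurer's average cost per bet: ${insurer_total / trials:,.0f}") # ~ $-25,000,000
```

A zero-expected-value gamble becomes strictly profitable for the owners, which is why "zombie banks" rationally gambled.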
Well, the thrifts that took huge gambles generally ended up taking huge losses, destroying far more wealth than if they had simply been wound down when they reached insolvency. This was not an indictment of deregulation. It was an indictment of re-regulation — of regulatory reform that removed or changed some controls while retaining and expanding other controls and subsidies.
There are two lessons here. The first is that the New Deal regulatory regime could not last. It was (partially) dismantled because it collapsed under the pressure of bad monetary policy from the Federal Reserve and the perverse constraints and incentives imposed by regulators. (Technological innovations in the financial industry and other economic forces, such as increased global competition, also played a role.)
The second lesson is that if we want to evaluate the conventional narrative about financial deregulation, we have to investigate more carefully which regulations and subsidies were repealed, which regulations and subsidies were changed (and in what way), which regulations and subsidies weren’t changed or repealed, and what the consequences were. To speak simply of “deregulation” blinds us to the fact that in many respects financial intervention was increasing during this period, and that even when some regulations were altered or rescinded, the system itself was dominated by government distortions and controls.
The Big Picture
At the time of the 2008 financial crisis, there were — in addition to hundreds of state-level regulators — seven federal regulators overseeing the financial industry:
- Federal Reserve
- Office of the Comptroller of the Currency
- Office of Thrift Supervision
- Securities and Exchange Commission
- Federal Deposit Insurance Corporation
- Commodity Futures Trading Commission
- National Credit Union Administration
No matter what metric you look at, it's hard to find any evidence that financial regulation by these bodies was decreasing overall. According to a study from the Mercatus Center, outlays for banking and financial regulation grew from $190 million in 1960 to $1.9 billion in 2000. By 2008 that number had reached $2.3 billion. (All figures in constant 2000 dollars.) In the years leading up to the financial crisis, regulatory staff levels mostly rose, budgets increased, and the annual number of proposed new rules went up. There were also major expansions of government regulation of the financial industry, including Sarbanes-Oxley, the Privacy Act, and the Patriot Act.
None of this comes close to conveying the scale of industry regulation, however. The simple fact is that there was virtually nothing a financial firm could do that wasn’t overseen and controlled by government regulators.
There were, to be sure, some cases of genuine deregulation, but on the whole these were policies that were undeniably positive, such as the elimination of Regulation Q and other price controls, and the removal of branch banking restrictions. And typically the bills that instituted these policies expanded regulation in other ways.
But consider what didn’t change. As we’ve seen, the major sources of instability in the U.S. financial system were branch banking restrictions, the creation of the Federal Reserve with its power to control the monetary system, and the creation of deposit insurance and the “too big to fail” doctrine, which encouraged risky behavior by banks.
Yet it was only the first of those problems that was addressed during the era of deregulation, when the Riegle-Neal Interstate Banking and Branching Efficiency Act eliminated restrictions on branching in 1994. The Federal Reserve was left untouched, and the scope of deposit insurance expanded: the government raised the cap on insured deposits to $100,000, though in reality it effectively insured most deposits through its policy of bailing out the creditors of institutions seen as "too big to fail."
What, then, do people have in mind when they say that deregulation led to the Great Recession? Advocates of this view generally point to two examples: the “repeal” of Glass-Steagall, and the failure of the government to regulate derivatives.
Did the “Repeal” of Glass-Steagall Make the Banking System More Fragile?
When people say that Glass-Steagall was repealed, they're referring to the Gramm-Leach-Bliley Act of 1999 (GLBA). The GLBA did not actually repeal Glass-Steagall: it repealed only two of its provisions, Sections 20 and 32. There was nothing banks could do after the repeal that they couldn't do before, save for one thing: they could now be affiliated with securities firms. Under the new law, a single holding company could provide banking, securities, and insurance services, increasing competition and allowing financial institutions to diversify.
Why this change? There were numerous factors. First, the barriers between commercial and investment banking had been eroding, due in part to innovations in the financial industry, such as money market mutual funds, which allowed investment banks to provide checking-like deposit services. The GLBA didn't change what was going on in financial markets so much as recognize that the distinction between commercial and investment banking was no longer tenable.
At a theoretical level, the case for Glass-Steagall had always been tenuous, and this had been reinforced by more recent scholarship that argued that the Great Depression was not in any significant way the result of banks dealing in securities.
Even more compelling was the fact that virtually no other country separated commercial and investment banking activities. In fact, as the authors of a 2000 report on the GLBA noted, "compared with other countries, U.S. law still grants fewer powers to banks and their subsidiaries than to financial holding companies, and still largely prohibits the mixing of banking and commerce." The authors went on to observe that less restrictive banking laws were associated with greater banking stability, not less.
The question, then, is whether the GLBA’s marginal increase in banking freedom played a significant role in the financial crisis. Advocates of this thesis claim that it allowed the risk-taking ethos of investment banks to pollute the culture of commercial banking. But here are the facts:
- The two major firms that failed during the crisis, Bear Stearns and Lehman Brothers, were pure investment banks, unaffiliated with depository institutions. Merrill Lynch, which came close to failing, wasn't affiliated with a commercial bank either. Their problems were not caused by any affiliation with commercial banking, but by their traditional trading activities.
- On the whole, institutions that combined investment banking and commercial banking did better during the crisis than banks that didn’t.
- Glass-Steagall had stopped commercial banks from underwriting and dealing securities, but it hadn’t barred them from investing in things like mortgage-backed securities or collateralized debt obligations: to the extent banks suffered losses on those instruments during the crisis, Glass-Steagall wouldn’t have prevented it.
In light of such facts, even Barack Obama acknowledged that “there is not evidence that having Glass-Steagall in place would somehow change the dynamic.”
Finally, it is important to emphasize that the GLBA was not a deregulatory act, strictly speaking. As with much else that went on during the era, it was an instance of re-regulation. The government still dictated what financial institutions could and couldn't do down to the smallest detail. Indeed, aside from repealing two sections of Glass-Steagall, the GLBA expanded banking subsidies and regulations, including new regulations on thrifts, new privacy and disclosure rules, and new Community Reinvestment Act requirements for banks.
Were Derivatives Unregulated?
The role of derivatives in fostering the financial crisis has been wildly overstated. Take the credit default swaps (CDSs) that contributed to the downfall of insurance giant AIG. In the simplest terms, a CDS is a form of insurance. If I make a loan to Acme Corp., I can buy a CDS from a CDS seller that pays me if Acme defaults on its obligations. All I’ve done is transfer an existing risk — Acme’s default on a debt — from me to the CDS seller.
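A minimal sketch may help make the risk transfer concrete. "Acme," the notional, the recovery rate, and the premium below are all hypothetical:

```python
# Hypothetical loan and CDS terms.
notional = 10_000_000   # loan to Acme Corp.
recovery = 0.40         # fraction recovered if Acme defaults
premium = 100_000       # premium paid to the CDS seller

def lender_loss(default: bool, hedged: bool) -> float:
    """Lender's cost under each scenario."""
    loss = notional * (1 - recovery) if default else 0.0
    if hedged:
        # The CDS seller makes the lender whole; the lender's
        # only cost is the premium it paid for protection.
        return premium
    return loss

print("unhedged, Acme defaults:", f"${lender_loss(True, hedged=False):,.0f}")  # $6,000,000
print("hedged,   Acme defaults:", f"${lender_loss(True, hedged=True):,.0f}")   # $100,000
# The $6M of default risk hasn't vanished; it has moved from the
# lender to the CDS seller, in exchange for the premium.
```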
On the whole, CDSs and other derivatives didn’t create new risks: they mainly transferred risks among financial players, from those who didn’t want to bear them to those who did. True, these instruments were used by some firms, not just to hedge existing risks, but to take on new risks in the belief their bets would pay off — and the firms that made bad bets should have suffered the consequences. But focusing on derivatives detracts from the real story of the financial crisis.
At the most basic level, the financial crisis resulted from financial institutions using enormous leverage to buy mortgage-backed securities that turned out to be far riskier than most people assumed. Take CDSs out of the equation, and the crisis still would have happened. The details would have played out differently, but the bottom line would have been the same.
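A back-of-the-envelope calculation shows why the leverage, not the derivatives, was the load-bearing fact. The 30:1 ratio below is illustrative, broadly in line with pre-crisis investment-bank leverage but not drawn from any particular firm's books:

```python
# Equity normalized to 1; assets are a multiple of equity.
leverage = 30
equity = 1.0
assets = leverage * equity

for decline in (0.01, 0.02, 1 / 30, 0.05):
    remaining = equity - assets * decline
    print(f"assets fall {decline:.2%}: equity goes from 1.00 to {remaining:+.2f}")

# At 30:1 leverage, an asset decline of just 1/30 (~3.33%) wipes out
# all equity; a 5% decline leaves the firm deeply insolvent.
```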
That said, it simply wasn't true that derivatives were "unregulated." As Heritage's Norbert Michel points out, "Federal banking regulators, including the Federal Reserve and the OCC [Office of the Comptroller of the Currency], constantly monitor banks' financial condition, including the banks' swaps exposure." In particular, banking capital requirements explicitly took swaps into account. (To the extent CDSs were a problem, they were a problem encouraged by regulation, since, under Basel I capital regulations, CDSs allowed banks to hold less capital, as the sketch below illustrates.)
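Here is the Basel I arithmetic in a short sketch. The 8 percent minimum and the 100 percent and 20 percent risk weights are the actual Basel I parameters; the exposure size is hypothetical:

```python
exposure = 100_000_000   # hypothetical corporate credit exposure
capital_ratio = 0.08     # Basel I minimum capital per dollar of risk-weighted assets

def required_capital(risk_weight: float) -> float:
    return exposure * risk_weight * capital_ratio

unhedged = required_capital(1.00)  # corporate exposure: 100% risk weight
hedged = required_capital(0.20)    # same exposure wrapped in CDS protection
                                   # from an OECD bank: 20% risk weight

print(f"capital required without CDS: ${unhedged:,.0f}")  # $8,000,000
print(f"capital required with CDS:    ${hedged:,.0f}")    # $1,600,000
```

The rules themselves rewarded banks for layering on CDS exposure.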
When people say that derivatives were unregulated, they are typically referring to the Commodity Futures Modernization Act of 2000 (CFMA). But the CFMA didn't prevent regulation of CDSs. It merely prevented the Commodity Futures Trading Commission from regulating them, which would most likely have meant treating them as futures contracts that had to be traded on an exchange. (For various technical reasons, CDSs generally don't make sense to trade on an exchange rather than over the counter.)
It is possible that different regulations or behavior by regulators might have prevented the financial crisis. Certainly it is easy to concoct such scenarios after the fact. But the “deregulation” story pretends that regulators were eager to step in and prevent a crisis and simply lacked the power. That view is completely without merit. The government had all the power it needed to control the financial industry, and such deregulation as did take place was largely (though not universally) good.
The real problem, as we’ll see, was that government intervention had created an unstable system that encouraged the bad decisions that led to the crisis.