How the New Deal Made the Financial System Less Safe

The Myth: New Deal regulation of the financial system made the system safer.

The Reality: New Deal regulation of the financial system failed to address the real source of the problems that led to the Great Depression and laid the foundation for future crises.

Although there is widespread agreement among economists that the Great Depression was not caused by the free market, there is also widespread, if not universal, agreement that the government’s regulatory response to the Great Depression made the system safer. Many commentators on the 2008 financial crisis argue that it was the abandonment of the post–New Deal regulatory regime during the 1980s and 1990s that set the stage for our current troubles.

There are three major parts of the government’s regulatory response to the Great Depression:

  1. Banking regulation
  2. Housing regulation
  3. Securities regulation

The government’s top priority on housing was to bail out mortgage borrowers and lenders, spawning the Federal Housing Administration and Fannie Mae. The Securities Act of 1933 and the Securities Exchange Act of 1934, which established the Securities and Exchange Commission, were passed to control the trading of securities in the name of protecting investors and making securities markets more orderly and fair.

Here I’m going to focus on banking regulation, specifically the Banking Act of 1933, often referred to as Glass-Steagall. Among other provisions, Glass-Steagall separated commercial from investment banking activities and established the Federal Deposit Insurance Corporation (FDIC), which insures bank deposits.

Conventional wisdom says Glass-Steagall made the system safer. The truth is that it failed to address the causes of the Great Depression, and instead contributed to future crises.

The Senseless Separation of Commercial and Investment Banking

During the 1920s, commercial banks (i.e., those that accepted deposits and made loans) started expanding into lines of business traditionally dominated by investment banks, such as underwriting and trading securities. The development of universal banking allowed commercial banks to become, in effect, one-stop shops for their customers, and they grew quickly by taking advantage of economies of scope and offering customers major discounts on brokerage services. (Technically, commercial banks did not usually engage in investment banking activities directly, but instead operated through closely allied securities affiliates.)

In 1932, the government launched an investigation of the crash of ’29, which became known as the Pecora hearings. The hearings regaled Americans with claims of banking abuses arising from banks’ involvement in securities, although the evidence for these claims was, to be generous, scant.

Whatever the truth, the Pecora hearings enraged the public, and bolstered a number of pressure groups and politicians who argued that universal banking made banks and the financial system more fragile, and demanded the separation of commercial and investment banking activities.

The opponents of universal banking made several arguments to support their agenda, but the central claim was that securities were inherently more risky than the traditional banking activities of taking deposits and making loans, and so allowing banks to have securities affiliates made them less sound.

But the starting premise—that securities activities were riskier than commercial banking activities—was not obviously true. As economist Robert Litan writes, “the underwriting of corporate securities probably involves less risk than extending and holding loans.” That’s because underwriting risk typically lasts only a few days and involves assets that are more liquid than a standard loan, which can stay on a bank’s books for years and be difficult to sell.

Certainly some activities of securities affiliates were riskier than some activities of traditional commercial banks. But it doesn’t follow that a commercial bank that engages in securities activities via its affiliate is taking on more risk overall. That’s because it is also gaining the benefits of diversification.

Diversification reduces risk. A single bond may be less risky than any given stock, yet a diversified portfolio of stocks can be less risky than the single bond. Similarly, even if a commercial bank that accepts deposits and makes loans enjoys less risk than an investment bank, that doesn’t imply that the commercial bank increases its overall risk by taking on investment banking activities. On the contrary, it is entirely possible for the risk-reducing features of diversification to outweigh the additional risk.
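To see the arithmetic, here is a minimal sketch using the standard two-asset portfolio-variance formula. The volatilities, weights, and correlation are illustrative assumptions, not historical estimates:

```python
import math

# Illustrative assumptions (not historical estimates): annualized return
# volatility for each line of business.
vol_loans = 0.05        # traditional deposit-and-loan banking
vol_securities = 0.15   # securities affiliate, three times as volatile
corr = 0.0              # assume the two businesses are uncorrelated

# A bank that keeps 90% of its business in loans, 10% in securities.
w_loans, w_sec = 0.90, 0.10

# Standard two-asset portfolio variance:
# var = w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2
variance = (w_loans**2 * vol_loans**2
            + w_sec**2 * vol_securities**2
            + 2 * w_loans * w_sec * corr * vol_loans * vol_securities)

print(f"loans only:         {vol_loans:.4f}")
print(f"loans + securities: {math.sqrt(variance):.4f}")
# loans only:         0.0500
# loans + securities: 0.0474  <- adding the riskier activity LOWERS total risk
```

Even at zero correlation, shifting a modest share of the balance sheet into the higher-volatility activity lowers total risk, because the two lines of business don’t stumble at the same time.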

This appears to have been true of most banks with securities affiliates in the lead-up to the Great Depression. The best analysis of the pre-1933 period, by economist Eugene White, finds that banks with securities affiliates were more stable than those without them:

One of the most convincing pieces of evidence that the union of commercial and investment banking posed no threat to parent banks is the significantly higher survival rate of banks with securities operations during the massive bank failures of 1930-1933. While 26.3% of all national banks failed during this period, only 6.5% of the 62 banks which had affiliates in 1929 and 7.6% of the 145 banks which conducted large operations through their bond departments closed their doors.

This suggests that, by limiting banks’ ability to diversify their activities, Glass-Steagall made banks more risky. This risk would become manifest later in the century when commercial banks increasingly found themselves unable to compete with foreign universal banks. (As for the claim that the repeal of Glass-Steagall in 1999 contributed to the 2008 financial crisis, I’ll address that in the next post.)

Deposit Insurance and the Problem of Moral Hazard

The proximate cause of the Great Depression was the wave of bank failures that took place in the early 1930s. Federal deposit insurance was touted as a way to stop bank runs, protecting depositors and shielding sound but illiquid banks from the so-called contagion effects of bank failures.

But why was deposit insurance seen as the solution? Canada, as I’ve noted, did not experience a single bank failure during the Depression, even though it lacked deposit insurance. U.S. banks were unstable because, unlike Canadian banks, they could not branch, a fact that was widely recognized at the time.

And deposit insurance did not exactly have a great record. It had been tried at the state level for more than a hundred years, and every deposit insurance scheme that looked anything like the system eventually adopted under Glass-Steagall ended in failure.

The obvious solution to banking instability would have been to eliminate branch banking restrictions, allowing banks to consolidate and diversify geographically. But the pressure groups that wanted to preserve unit banking stood to benefit from deposit insurance, which shored up small banks that could not diversify. As Representative Henry Steagall, the politician who was the driving force behind deposit insurance, admitted, “This bill will preserve independent dual banking [i.e., unit banking] in the United States . . . that is what the bill is intended to do.”

What were the effects? As is so often the case in the history of finance, government support for the industry creates problems that are used to justify government control of the industry. 

Deposit insurance encourages risk-taking. Under the doctrine of limited liability, bank owners always have some incentive to take risks: they enjoy unlimited upside, while their downside is capped, since their stock can become worthless but they aren’t personally liable for the business’s debts. (It’s worth noting that prior to 1933, U.S. bank shareholders typically faced double liability: if their bank failed, they could lose not only their investment but an additional assessment of up to the same amount, which went to reimburse depositors.) Depositors act as a counterweight: they are risk averse and will flee imprudent banks.
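A toy example, with hypothetical numbers, shows the payoff asymmetry that limited liability creates:

```python
# Hypothetical bank: $100 of assets funded by $90 of deposits and $10 of equity.
DEPOSITS = 90

def equity_payoff(asset_value):
    """Limited liability: shareholders keep the residual, but never owe more."""
    return max(asset_value - DEPOSITS, 0)

# Safe strategy: assets grow to $105 for certain.
safe = equity_payoff(105)                                    # 15

# Risky strategy: assets end at $140 or $60 with equal odds.
# Its expected asset value ($100) is WORSE than the safe strategy's ($105).
risky = 0.5 * equity_payoff(140) + 0.5 * equity_payoff(60)   # 0.5*50 + 0 = 25

print(f"expected equity payoff, safe:  {safe}")    # 15
print(f"expected equity payoff, risky: {risky}")   # 25.0
# Shareholders prefer the value-destroying gamble; in the bad state the
# depositors eat the shortfall, recovering only $60 of their $90.
```

Double liability blunted this asymmetry by putting more of the owners’ own wealth at risk; watchful depositors blunted it further.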

Deposit insurance reduces that counterweight by introducing moral hazard into the banking system. “Moral hazard” refers to the fact that when risks are insured against, people take more risks because they bear a smaller cost if things go wrong. In the case of deposit insurance, depositors are incentivized to patronize the bank that offers the highest interest rate, regardless of how much risk it is taking. As economist Richard Salsman puts it, “Deposit insurance was established in order to avert future bank runs. But its history has demonstrated a singular inducement to bankers to become reckless and pay excess yields, while encouraging depositors to run to bad banks instead of away from them.” If things go bad, after all, the depositors will be bailed out—at least up to the cap set by the FDIC, a cap that has ballooned over time from $2,500 in 1934 (more than $40,000 in 2008 dollars) to $250,000 in 2008.

The moral hazard introduced by deposit insurance was particularly intense given the scheme adopted by the FDIC. In normal insurance markets, such as car insurance or life insurance, riskier customers pay higher premiums. But until reforms enacted in 1991, the FDIC charged every bank a flat rate based on the size of its deposits. This meant that prudent banks were effectively subsidizing riskier ones.
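A back-of-the-envelope comparison makes the cross-subsidy concrete. The loss rates and the flat premium below are invented for illustration:

```python
# Invented numbers: expected annual loss to the insurance fund per dollar
# of deposits, for a prudent bank and a reckless one.
expected_loss_rate = {"prudent": 0.001, "reckless": 0.010}

FLAT_PREMIUM = 0.005  # pre-1991-style flat rate, identical for every bank

for bank, loss_rate in expected_loss_rate.items():
    # Per $100 of deposits: what the bank pays in vs. what it is expected
    # to cost the fund.
    paid = FLAT_PREMIUM * 100
    cost = loss_rate * 100
    print(f"{bank:8s} pays ${paid:.2f}, expected cost ${cost:.2f}, "
          f"net transfer: ${paid - cost:+.2f}")
# prudent  pays $0.50, expected cost $0.10, net transfer: $+0.40
# reckless pays $0.50, expected cost $1.00, net transfer: $-0.50
# The prudent bank overpays and the reckless bank underpays: a built-in
# transfer from careful banks to risk-takers.
```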

The government was not blind to this moral hazard problem. FDR had initially opposed deposit insurance on the grounds that, as he put it in a letter to the New York Sun in 1932:

It would lead to laxity in bank management and carelessness on the part of both banker and depositor. I believe that it would be an impossible drain on the Federal Treasury to make good any such guarantee. For a number of reasons of sound government finance, such plan would be quite dangerous.

(There’s no evidence FDR ever changed his mind on this point: deposit insurance made it into law because the president saw no other way to get his banking bill passed.)

To deal with the moral hazard problem created by deposit insurance, the government sought to limit risk-taking through command-and-control regulation. Discussing Glass-Steagall, economist Gerald O’Driscoll writes:

Among other things, the act prevented banks from being affiliated with any firm engaged in the securities business; established limits on loans made by banks to affiliates, including holding company affiliates; prohibited the payment of interest on demand accounts; and empowered the Federal Reserve Board to regulate interest rates paid on savings and time deposits. These regulations were intended to provide for the safety and soundness of the banking system.

However, these and other regulations meant to address the risks created by deposit insurance would fail to restrain government-encouraged risk-taking by banks and would create even greater problems in the future. I’ll be discussing those problems in future posts. But it’s worth noting here that it was deposit insurance that set the stage for the doctrine that would eventually become known as “too big to fail.”

The Origins of “Too Big to Fail”

Businesses fail all the time and life goes on. What’s so different about financial institutions? It goes back to the peculiar nature of their business model; namely, even healthy financial institutions are typically illiquid. In industry parlance, banks borrow short and lend long. That is, they take in money from depositors who can draw down their accounts at any time and they lend those funds to business and consumer borrowers who repay their loans over a longer time horizon.

It’s a brilliant system in that it dramatically increases the financial capital available in the economy without forcing depositors to tie up their money in long-term investments. But it also carries with it a vulnerability: a healthy bank can fail if too many of its depositors demand their money back at once.
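That vulnerability can be sketched in a few lines. The balance sheet and the fire-sale discount below are assumptions for illustration, not data on any actual bank:

```python
# Hypothetical balance sheet: $100 of deposits fund $10 of cash and $95 of
# long-term loans, so on a held-to-maturity basis the bank is solvent (105 > 100).
CASH, LOANS, DEPOSITS = 10, 95, 100
FIRE_SALE_DISCOUNT = 0.40  # assume loans sold today fetch 60 cents on the dollar

def run_on_bank(withdrawals):
    """Describe what a given wave of withdrawals does to the bank."""
    if withdrawals <= CASH:
        return "met from cash reserves; bank unharmed"
    # Reserves exhausted: fire-sell long-term loans to raise the shortfall.
    shortfall = withdrawals - CASH
    face_value_sold = shortfall / (1 - FIRE_SALE_DISCOUNT)
    if face_value_sold > LOANS:
        return "bank fails: cannot raise the cash at any price"
    assets_left = LOANS - face_value_sold
    deposits_left = DEPOSITS - withdrawals
    if assets_left < deposits_left:
        return "bank fails: fire sales have left it insolvent"
    return "bank survives, with a thinner cushion"

print(run_on_bank(8))    # met from cash reserves; bank unharmed
print(run_on_bank(40))   # bank fails: fire sales have left it insolvent
```

The bank was solvent if its loans ran to maturity, but a large enough run forces it to dump long-term assets at a discount, and the discount eats the cushion.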

Most people—today and in the past—have believed that banking failures are “contagious”: a run on an insolvent bank can lead depositors at healthy banks to fear their money isn’t safe, setting off a cascade of bank failures and the collapse of the financial system.

Historically, this was seldom a genuine problem in systems that approximated free banking: solvent banks rarely suffered bank runs as the result of runs on insolvent banks. And financiers had developed effective private mechanisms, such as last-resort lending by clearinghouses, for dealing with widespread panics when they did occur. Nevertheless, concern over the contagion effects of bank failures has played an important role in justifying the expansion of government control over banking.

One solution to the problem of contagion was for the government to establish central banks, which would act as lenders of last resort. The idea, as formulated by Walter Bagehot in his famous 1873 work Lombard Street, was that a central bank’s role in a crisis is to lend to solvent banks on good collateral at high interest rates.
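Bagehot’s dictum amounts to a three-part test. The sketch below is a paraphrase of that rule, not a description of any actual central-bank procedure, and the penalty rate is an arbitrary placeholder:

```python
PENALTY_RATE = 0.08  # placeholder: an above-market rate to deter casual borrowing

def bagehot_lend(is_solvent: bool, good_collateral: bool) -> str:
    """Paraphrase of Bagehot's dictum: lend freely to solvent banks,
    against good collateral, at a penalty rate."""
    if not is_solvent:
        return "refuse: insolvent banks should be allowed to fail"
    if not good_collateral:
        return "refuse: no lending against bad collateral"
    return f"lend freely at {PENALTY_RATE:.0%}"

print(bagehot_lend(is_solvent=True, good_collateral=True))    # lend freely at 8%
print(bagehot_lend(is_solvent=False, good_collateral=True))   # refuse: insolvent...
```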

But during the 1930s, the Federal Reserve didn’t perform this function. As Norbert Michel points out, “In 1929, the Federal Reserve Board prohibited the extension of credit to any member bank that it suspected of stock market lending, a decision that ultimately led to a 33 percent decline in the economy’s stock of money.” But instead of insisting that the central bank do better, politicians decided that additional regulations were needed to address the problem. 

This led to the creation of deposit insurance. Now, instead of propping up solvent but illiquid institutions, the FDIC would try to prevent runs by promising to make depositors whole (up to a legally defined limit) even at insolvent banks.

But regulators now started to see contagion lurking around every corner. They came to believe that large financial institutions could not be allowed to fail, lest their collapse bring down other institutions tied to them and set off a chain of failures that could topple the entire system. Thus was born the doctrine of “too big to fail.”

Actually, that name is misleading. A “too big to fail” institution can be allowed to fail in the sense that the company’s shareholders can be wiped out. What the government doesn’t let happen to such companies is for their debt holders (including depositors) to lose money: they are made whole.
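A simple numeric contrast, with hypothetical figures, shows what “fail” does and doesn’t mean under the doctrine:

```python
# Hypothetical failed bank: assets now worth $80 against $100 owed to
# depositors and other debt holders. Shareholders hold the residual claim.
ASSETS, DEBT = 80, 100

# Ordinary failure: creditors split what's left; shareholders get nothing.
ordinary = {"shareholders": 0, "debt holders": ASSETS, "taxpayers": 0}

# "Too big to fail": shareholders are still wiped out, but the government
# tops creditors up to 100 cents on the dollar.
tbtf = {"shareholders": 0, "debt holders": DEBT, "taxpayers": -(DEBT - ASSETS)}

for regime, payoffs in (("ordinary failure", ordinary), ("too big to fail", tbtf)):
    print(f"{regime}: {payoffs}")
# In both regimes equity is wiped out. The difference: under "too big to
# fail," the debt holders' $20 loss is shifted onto the public.
```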

Under Section 13(c) of the Federal Deposit Insurance Act of 1950, the FDIC was empowered to bail out a bank “when in the opinion of the Board of Directors the continued operation of such bank is essential to provide adequate banking service in the community.” It would first use that authority in 1971 to save Boston’s Unity Bank, but such bailouts would quickly become the norm, with the major turning point being the bailout of Continental Illinois in 1984.

As a result of “too big to fail,” much of the remaining debt holder-driven discipline was eliminated from the system. Thanks to the moral hazard created by the government’s deposit insurance and “too big to fail” subsidies, financial institutions were able to grow larger, more leveraged, and more reckless than ever before, creating just the sort of systemic risk that deposit insurance was supposed to prevent.

The bottom line is that Glass-Steagall failed on two counts: it did not fix the problems that had led to the Great Depression, and it created new problems that would in time contribute to further crises.