Banking Panics and the Creation of the Federal Reserve

The Myth: We tried free banking and the result was constant bank runs and panics. The Federal Reserve was created to make the system stable and it succeeded.

The Reality: America’s recurrent panics were the product of financial control, and there is no evidence the Federal Reserve has made things better.

No one disputes that America’s banking system prior to the Federal Reserve’s (the Fed’s) creation in 1914 was unstable, prone to money shortages and recurrent panics. But what was the cause of that instability?

The conventional wisdom says that it was the inherent weakness of a free banking system — in particular, not having a central bank that could act as a “lender of last resort” to banks in need of cash during times of stress and panic.

One major reason to doubt that story, however, is that the phenomenon of recurrent banking panics was unique to the U.S. during the late 19th century, even though the U.S. was far from the only country without a central bank. Canada, for example, lacked a central bank and was far less regulated than the U.S., yet its financial system was notoriously stable.

In the U.S., government control over the banking system goes back to the earliest days of the republic. But when people speak about pre-Fed panics, what they usually have in mind is the period that runs from the Civil War to the passage of the Federal Reserve Act in 1913 (when the U.S. operated under what was known as the National Banking System). During that era, there were two regulations that explain why the U.S. system was so volatile, while freer systems in Canada, Scotland, and elsewhere were remarkably stable:

(1)  bond-collateral banking

(2)  restrictions on branch banking


How Bond-Collateral Banking and Branch Banking Restrictions Fostered Crises

To understand bond-collateral banking, we need to take a step back and look at the monetary system at the time. Today we think of money as green pieces of paper issued by the government. But during the 19th and early 20th centuries, money meant specie: gold (or sometimes gold and silver). Paper money existed, but it was an IOU issued by a bank, which you could redeem in specie. A $10 bank note meant that if you brought the note to the bank, the bank had to give you $10 worth of gold.

In a fully free system, banks issue their own notes, and although those are redeemable in specie, banks don’t keep 100 percent of the gold necessary to redeem their notes on hand. Instead, they hold some gold as well as a variety of other assets, including government bonds, commercial paper (basically a short-term bond issued by businesses), and the various loans on their books.

This is what’s known as fractional reserve banking. The basic idea is that not every depositor will seek to redeem his notes for gold at the same time, and so some of the funds deposited at the bank can be invested by the bank and earn a return (which gold sitting in the vault does not). This was an important innovation in banking, which among other benefits meant that banks could pay depositors interest rather than charge them a fee for storing their gold in the vault.

But fractional reserve banking also carries with it what’s called liquidity risk. Even a solvent bank can be illiquid under a fractional reserve system. Although its assets (what it owns) are worth more than its liabilities (what it owes), the bank may not be able to quickly turn assets like long-term loans into cash. As a result, if too many depositors want to redeem their bank notes at once, the bank won’t be able to meet its obligations, which can lead it to suspend redemptions or even go out of business.
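
To make the solvency-versus-liquidity distinction concrete, here is a minimal Python sketch with purely illustrative numbers (the figures are made up, not drawn from any actual bank’s books): the bank’s assets exceed its liabilities, yet a large enough redemption demand cannot be met out of the gold on hand.

    # A minimal sketch (illustrative numbers only) of how a bank can be solvent
    # yet illiquid under fractional reserves.
    bank = {
        "assets": {
            "gold_reserves": 20,      # specie on hand, payable immediately
            "long_term_loans": 90,    # sound, but cannot be sold quickly at face value
        },
        "liabilities": {
            "notes_outstanding": 60,  # redeemable in gold on demand
            "deposits": 40,
        },
    }

    assets = sum(bank["assets"].values())
    liabilities = sum(bank["liabilities"].values())
    print("Net worth:", assets - liabilities)   # 10 > 0, so the bank is solvent

    redemption_demand = 35                      # noteholders want gold today
    gold = bank["assets"]["gold_reserves"]
    print("Can it pay out today?", gold >= redemption_demand)   # False: illiquid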

In the banking systems that most closely approximated free banking, such as Scotland’s system up to 1845, this was rarely a problem. Even highly illiquid banks were able to operate without facing bank runs so long as they remained solvent (i.e., so long as their assets were worth more than their liabilities, meaning they could pay their debts).

But in the post-Civil War era, solvent banks frequently experienced liquidity crises. Why? Because of banking regulations.

We’re taught to think of regulations as efforts to prevent “greedy” businesses from harming people. But historically banking regulations have often been designed to exploit the banking system in order to finance government spending. The typical pattern is to make the freedom of individuals to start banks or to engage in some banking activity, like issuing notes, contingent upon filling the government’s coffers. That’s what happened with the bond-collateral system imposed by the National Bank Act during the Civil War.

At the time, the federal government was in desperate need of funds to support the war effort, and so among other provisions it created an artificial market for its bonds by essentially forcing banks to buy them. Under the bond-collateral system, U.S. banks could only issue notes if those notes were backed by government bonds. For every $100 of government bonds a bank purchased, it was allowed to issue up to $90 in notes.

How did this make U.S. banking unstable? Imagine a bank that carries two liabilities on its books: the bank notes it has issued and checking account deposits. Now imagine that a customer with a checking account worth $200 wants to withdraw $90 worth of bank notes. In a free system, that’s no problem: the bank simply debits his account and issues him $90 in notes. There is no effect on the asset side of the bank’s balance sheet.

But consider what happens under the bond-collateral system. In order to issue the bank customer $90 in notes, the bank has to sell some of its assets and buy $100 of government bonds. At minimum that takes time and imposes a cost on the bank. But those problems were exacerbated because the U.S. government began retiring its debt in the 1880s, making its remaining bonds harder and more expensive to buy. The result was that, at a time when the economy was growing quickly, the available supply of paper money was shrinking.
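
As a rough illustration, here is a short Python sketch of the same $90 withdrawal under the two regimes. The balance-sheet figures are hypothetical; the only number taken from the rule described above is the $100-of-bonds-per-$90-of-notes ratio.

    # A rough sketch (hypothetical balance sheets, illustrative numbers) of the
    # same $90 note withdrawal under a free system and under the bond-collateral rule.

    def issue_notes_free(bank, amount):
        # Free system: debit the customer's deposit, credit notes outstanding.
        # The asset side of the balance sheet is untouched.
        bank["deposits"] -= amount
        bank["notes_outstanding"] += amount

    def issue_notes_bond_collateral(bank, amount):
        # National Bank Act rule: every $90 of notes must be backed by $100 of
        # government bonds, so the bank first sells other assets and buys bonds.
        bonds_needed = amount * 100 / 90
        shortfall = max(0.0, bonds_needed - bank["gov_bonds"])
        bank["other_assets"] -= shortfall   # costly, and slower as bonds grew scarce
        bank["gov_bonds"] += shortfall
        bank["deposits"] -= amount
        bank["notes_outstanding"] += amount

    free_bank = {"gov_bonds": 0.0, "other_assets": 300.0, "deposits": 200.0, "notes_outstanding": 0.0}
    national_bank = dict(free_bank)

    issue_notes_free(free_bank, 90)
    issue_notes_bond_collateral(national_bank, 90)

    print("Free system:    ", free_bank)        # assets unchanged
    print("Bond-collateral:", national_bank)    # $100 of other assets now tied up in bonds

In the free case the withdrawal is a pure bookkeeping swap on the liability side; under the bond-collateral rule it forces a costly portfolio shuffle, and that shuffle got harder as the government retired its debt.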

This led to the problem of an inelastic currency. The demand for paper currency isn’t constant — it rises and falls. This was especially true in 19th century America, which was still a heavily agricultural society. During harvest season, farmers needed extra currency, say, to pay migrant workers to help them bring their crops to market. After the harvest season, demand for currency would shrink, as farmers deposited their notes back at the banks.

This left banks with a lousy set of options. They could either keep a bunch of expensive government bonds on their books (assuming they could get them), so that they could meet a temporary increase in demand for notes — or they could try to meet the temporary demand for cash by drawing down their gold reserves. Typically, they did the latter.

That would be bad enough if it simply meant that a small country bank would find its gold reserves dwindling. But making matters worse was the impact of branch banking restrictions.

Throughout America’s history, banks were legally prevented from branching — that is, the same bank was barred from operating in multiple locations spread around the country, the way you can find a Chase bank whether you’re in Virginia or California today. Instead, Americans were left with what was known as a unit banking system. For the most part, every bank was a stand-alone operation: one office building serving the surrounding community.

One result was a banking system that was highly undiversified. A bank’s fortunes were tied to its community. In an oil town, for instance, a downturn in the petroleum market could put the local bank out of business.

But the bigger problem was that unit banking made it harder for banks to deal with liquidity crises. A branched bank always had the option of calling on the cash reserves of its sister branches. This option was off limits to American banks. What developed instead was a system of correspondent banking and the so-called pyramiding of reserves, which concentrated problems in the heart of America’s financial center: New York. As economist George Selgin explains, unit banking

forced banks to rely heavily on correspondent banks for out-of-town collections, and to maintain balances with them for that purpose. Correspondent banking, in turn, contributed to the “pyramiding” of bank reserves: country banks kept interest-bearing accounts with Midwestern city correspondents, sending their surplus funds there during the off season. Midwestern city correspondents, in turn, kept funds with New York correspondents, and especially with the handful of banks that dominated New York’s money market. Those banks, finally, lent the money they received from interior banks to stockbrokers at call.

The pyramiding of reserves was further encouraged by the National Bank Act, which allowed national banks to use correspondent balances to meet a portion of their legal reserve requirements. Until 1887, the law allowed “country” national banks — those located in rural areas and in towns and smaller cities — to keep three-fifths of their 15 percent reserve requirement in the form of balances with correspondents or “agents” in any of fifteen designated “reserve cities,” while allowing banks in those cities to keep half of their 25 percent requirement in banks at the “central reserve city” of New York. In 1887 St. Louis and Chicago were also classified as central reserve cities. Thanks to this arrangement, a single dollar of legal tender held by a New York bank might be reckoned as legal reserves, not just by that bank, but by several; and a spike in the rural demand for currency might find all banks scrambling at once, like players in a game of musical chairs, for legal tender that wasn’t there to be had, playing havoc in the process with the New York stock market, as banks serving that market attempted to call in their loans. . . .

Nationwide branch banking, by permitting one and the same bank to operate both in the countryside and in New York, would have avoided this dependence of the entire system on a handful of New York banks, as well as the periodic scramble for legal tender and ensuing market turmoil.
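
To see how a single dollar of legal tender could end up counted by several banks at once, here is a stylized Python walk-through of the reserve rules Selgin describes. The starting deposit and the assumption that each bank holds only its legal minimum are illustrative simplifications, not historical data.

    # A toy walk-through (stylized numbers, not historical data) of reserve
    # pyramiding under the rules described above: country banks face a 15%
    # requirement, up to three-fifths of which may be correspondent balances;
    # reserve-city banks face 25%, up to half of which may sit in New York;
    # New York banks lend what they do not hold as reserves to brokers at call.

    country_deposits = 100.0

    # Country bank: $15 required; up to $9 may sit with a reserve-city correspondent.
    country_required = 0.15 * country_deposits                   # 15.00
    country_with_city = (3 / 5) * country_required               # 9.00
    country_vault_cash = country_required - country_with_city    # 6.00

    # Reserve-city bank: the $9 balance is a deposit it must back at 25%,
    # and half of that requirement may itself be a balance in New York.
    city_required = 0.25 * country_with_city                      # 2.25
    city_with_new_york = 0.5 * city_required                      # 1.125
    city_vault_cash = city_required - city_with_new_york          # 1.125

    # New York bank: holds 25% of the balance in legal tender, lends the rest at call.
    ny_vault_cash = 0.25 * city_with_new_york                     # ~0.28
    ny_call_loans = city_with_new_york - ny_vault_cash            # ~0.84

    counted_as_reserves = country_required + city_required + ny_vault_cash
    actual_legal_tender = country_vault_cash + city_vault_cash + ny_vault_cash

    print(f"Reserves counted across the three banks: ${counted_as_reserves:.2f}")
    print(f"Legal tender actually held in vaults:    ${actual_legal_tender:.2f}")
    print(f"Lent at call in the New York market:     ${ny_call_loans:.2f}")

On these numbers, roughly $17.50 of “legal reserves” is counted across the three banks while just over $7.40 of legal tender actually sits in vaults; a harvest-season demand for currency forces each layer to draw on the next, ending with New York banks calling in their loans.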

It sounds complex, but in the final analysis it’s all pretty straightforward. Bankers were not free to run their businesses in a way that would maximize their profits and minimize their risks. The government forced them to adopt an undiversified, inflexible business model they would have never chosen on their own. America’s banking system was unstable because government regulations made it unstable, and the solution would have been to liberate the system from government control.

That’s not what happened.


The Creation of the Federal Reserve and Its Unimpressive Record

There was widespread recognition at the time that branching restrictions and bond-collateral banking were responsible for the turmoil in the American system. Neither of these regulations existed in Canada, and Canada’s stability was anything but a secret. As Americans debated what to do about the financial system during the early 20th century, many pointed to Canada’s success and urged repealing these restrictions in the U.S. As economist Kurt Schuler observes:

Many American economists and bankers admired Canada’s relatively unregulated banking system. The American Bankers’ Association’s ‘Baltimore plan’ of 1894 and a national business convention’s ‘Indianapolis plan’ of 1897 referred to Canada’s happy experience without American-style bond collateral requirements. (The Experience of Free Banking, chapter 4).

Selgin likewise notes:

Proposals to eliminate or relax regulatory restrictions on banks’ ability to issue notes had as their counterpart provisions that would allow banks to branch freely. The Canadian system supplied inspiration here as well. Canadian banks enjoyed, and generally took full advantage of, nationwide branching privileges.

Of course, the push for deregulation of banking did not carry the day, thanks to various pressure groups and the general ideological climate of the country, which had shifted away from the pro-capitalist ideas that had characterized the 19th century. Instead, following the Panic of 1907, America got the Federal Reserve.

The Federal Reserve is America’s central bank, which today exercises enormous control over the money supply and the entire financial system. At the time of its creation, however, the Fed was seen as having a more limited function: to protect the safety and soundness of the banking system primarily by furnishing an elastic currency and acting as a “lender of last resort,” providing liquidity to banks in times of crisis.

So what was the Fed’s track record? Did it put an end to the instability of the not-so-free banking period? Most people think so. But most people are wrong.

Bank runs and panics did not decrease in the first decades after the Fed was established. As economist Richard Salsman observes, “Bank failures reached record proportions even before the Great Depression of 1929-1933 and the collapse of the banking system in 1930. From 1913-1922, bank failures averaged 166 per year and the failure rate increased to 692 per year from 1923-1929 despite that period’s economic boom.”

True, bank panics did decline following the Great Depression, but that was not thanks to the Fed — the credit for that goes to deposit insurance. (And, as we’ll see, deposit insurance laid the groundwork for severe troubles down the road.)

But even if we set aside the period from 1914, when the Fed was established, to the end of World War II, it is still not clear that the Federal Reserve has been a stabilizing force in the financial system. In their study “Has the Fed Been a Failure?”, economists George Selgin, William D. Lastrapes, and Lawrence H. White find that:

(1) The Fed’s full history (1914 to present) has been characterized by more rather than fewer symptoms of monetary and macroeconomic instability than the decades leading to the Fed’s establishment. (2) While the Fed’s performance has undoubtedly improved since World War II, even its postwar performance has not clearly surpassed that of its undoubtedly flawed predecessor, the National Banking system, before World War I. (3) Some proposed alternative arrangements might plausibly do better than the Fed as presently constituted.

Those may be controversial claims — although the evidence the authors marshal is impressive — but the key point is this: the conventional wisdom that America’s history shows that an unregulated financial system leads to disaster and only a government-controlled one can save the day is without merit. On the contrary, there is far more reason to suspect that the story runs the other way: that it’s government control that takes a naturally stable financial system and makes it fragile.