Free Markets Didn’t Create the Great Recession
Myth: The Great Recession was caused by free-market policies that led to irrational risk taking on Wall Street.
Reality: The Great Recession could not have happened without the vast web of government subsidies and controls that distorted financial markets.
As with the Great Depression, the causes of the Great Recession remain controversial, even among free-market-leaning economists. What we know for sure is that the free market can’t be blamed, because there was no free market in finance: finance (including the financial side of the housing industry) was one of the most regulated industries in the economy. And we also know that, absent some of those regulations, the crisis could not have occurred.
What Everyone Agrees On
The basic facts aren’t in dispute. During the early to mid 2000s, housing prices soared. At the same time, lending standards started to decline as the government encouraged subprime lending (i.e., lending to borrowers who had a spotty credit history and found it difficult to get conventional mortgages), and as businesses saw profit opportunities in extending loans to riskier borrowers and in offering riskier kinds of loans.
Increasingly, mortgage originators did not keep the loans they made on their own books, but sold them off to Fannie Mae, Freddie Mac, investment banks, or other financial firms, which bundled these loans into mortgage-backed securities (MBSs) and other financial instruments — instruments often rated super-safe by the three government-approved credit ratings agencies — and sold them to investors.
Financial institutions of all kinds invested heavily in housing, often financing these investments with enormous leverage (i.e., far more debt than equity). These investments went bad when housing prices began to decline and the underlying loans began to default at higher rates than expected.
As the value of MBSs and other mortgage-related instruments fell, the financial institutions that held them started to suffer losses, setting off a chain of failures and bailouts by the federal government, and ultimately causing credit markets to freeze up, threatening the entire financial system.
On these points, there is agreement. But why did this happen? What led so many institutions to invest so heavily in housing? Why did they make these investments using extreme amounts of leverage — and why were they able to take on so much debt in the first place? What led credit markets to break down in 2008? And what led the problems in housing and finance to spill over into the rest of the economy, turning a financial crisis into the Great Recession?
As with our discussion of the Great Depression, this is not intended to be a definitive, blow-by-blow account of the crisis. The goal is to lay to rest the myth that our financial system was anything close to free, and to see some of the ways in which government intervention played a role in creating the Great Recession.
The Federal Reserve Makes the Housing Boom Possible
We typically speak of central bankers controlling interest rates. More precisely, they influence interest rates by expanding or contracting the money supply. Recall from our discussion of the Great Depression that central bankers can make two crucial mistakes when it comes to monetary policy: they can be too loose (leading to price inflation or credit booms) or they can be too tight (leading to deflationary contractions).
The best explanation of the root cause of the housing boom is that, during the early 2000s, the Federal Reserve’s monetary policy was too loose, setting off — or at least dramatically magnifying — a boom in housing.
There are various metrics you can look at to assess whether monetary policy is too tight or too expansionary, but they all point in the same direction during this period. Take interest rates. As economist Lawrence H. White points out:
The Fed repeatedly lowered its target for the federal funds interest rate until it reached a record low. The rate began 2001 at 6.25 percent and ended the year at 1.75 percent. It was reduced further in 2002 and 2003; in mid-2003, it reached a then-record low of 1 percent, where it stayed for one year. The real Fed funds rate was negative — meaning that nominal rates were lower than the contemporary rate of inflation — for more than three years. In purchasing power terms, during that period a borrower was not paying, but rather gaining, in proportion to what he borrowed.
As White and others have argued, the Fed’s easy credit found its way (mostly) into the residential home market, where it had two major effects.
First, it helped drive up housing prices, as lower interest rates made buying a home more attractive. A $150,000 mortgage would have cost $2,400 a month at the 18 percent interest rates borrowers faced in 1980. But at the 6 percent rate they could often get during the 2000s that fell to a mere $1,050 a month. Low interest rates, then, made it possible for more people to buy homes, to buy bigger homes, and to speculate in housing, helping spark the boom in housing.
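The effect of rates on affordability falls out of the standard fixed-rate amortization formula. Here is a minimal sketch (the dollar figures quoted above are approximations and may include costs beyond principal and interest, such as taxes and insurance):

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Fixed-rate mortgage payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# A $150,000 30-year mortgage at a 1980-era rate vs. a mid-2000s rate:
at_18_percent = monthly_payment(150_000, 0.18)  # about $2,261/month
at_6_percent = monthly_payment(150_000, 0.06)   # about $899/month
```

Cutting the rate from 18 percent to 6 percent cuts the payment by roughly 60 percent, which is why cheap credit translated so directly into buying power in the housing market.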
Second, the Fed’s policies encouraged riskier lending practices. Partly this was a side effect of the rising price of housing. As long as home prices are rising, the risk that a borrower will default on his mortgage is low, because he can always sell the house rather than quit paying down the debt. But if housing prices stop rising or even fall? Then the home might end up being worth less than what is owed on the mortgage and it can make economic sense for the underwater homebuyer to walk away from the home.
Fed policy also encouraged riskier kinds of loans. One obvious example was the proliferation of adjustable-rate mortgages (ARMs), where borrowers took on the risk that interest rates would rise. ARMs dominated subprime and other non-prime lending by 2006 as borrowers sought to take advantage of the Fed’s ultra-low short-term interest rates, a trend encouraged by then-Fed chairman Alan Greenspan. But the net result was that when interest rates did eventually rise, defaults went, well, through the roof.
And the riskiest kinds of loans — no-money-down loans, interest-only adjustable-rate mortgages, low-doc and no-doc loans? All of them seemed to make sense only because of the boom in housing prices.
Absent cheap money from the Fed, there would have been no crisis. The groundwork for 2008 was laid in 1914, when the Federal Reserve opened for business.
What Role Did Government Housing Policy Play?
During the 1990s and 2000s, the government attempted to increase home ownership, especially by subprime borrowers. Through the Community Reinvestment Act, tax incentives, Fannie Mae and Freddie Mac, and other channels, the government actively sought to put more Americans in homes.
But what role did the government’s housing crusade play in creating the Great Recession? There seem to be at least two important roles.
First, it contributed to the Fed’s easy money becoming concentrated in the housing market. In 1997, the government passed the Taxpayer Relief Act, which eliminated capital gains taxes on home sales (up to $500,000 for a family and $250,000 for an individual). According to economists Steven Gjerstad and Vernon Smith, “the 1997 law, which favored houses over all other investments, would have naturally led more capital to flow into the housing market, causing an increased demand — and a takeoff in expectations of further increases in housing prices.” By the time the Federal Reserve started easing credit in 2001, they argue, the housing market was the most rapidly expanding part of the economy and became a magnet attracting the Fed’s new money.
Second, government housing policy encouraged the lowering of lending standards that further inflated the housing bubble. Two key forces here were the Community Reinvestment Act (CRA) and especially the Government-Sponsored Enterprises (GSEs), Fannie Mae and Freddie Mac, which were the main conduits through which the government pursued its affordable housing agenda.
Starting in 1992, Fannie and Freddie were required to help the government meet its affordable housing goals by repurchasing mortgages made to lower-income borrowers. Over the next decade and a half, the Clinton and Bush administrations would increase the GSEs’ affordable housing quotas, which over time forced them to lower their underwriting standards by buying riskier and riskier mortgages. American Enterprise Institute scholar Peter J. Wallison sums up the role this would ultimately play in the crisis:
By 2008, before the financial crisis, there were 55 million mortgages in the US. Of these, 31 million were subprime or otherwise risky. And of this 31 million, 76% were on the books of government agencies, primarily Fannie and Freddie. This shows where the demand for these mortgages actually came from, and it wasn’t the private sector. When the great housing bubble (also created by the government policies) began to deflate in 2007 and 2008, these weak mortgages defaulted in unprecedented numbers, causing the insolvency of Fannie and Freddie, the weakening of banks and other financial institutions, and ultimately the financial crisis.
To be sure, lending standards would decline industry-wide during the 2000s. In large part this was because other financial institutions could not compete with the GSEs without dropping their own lending standards. And although the government was not the sole force driving increased risk-taking in housing, it was the government that first insisted it was virtuous to exercise less caution if it meant getting more people into homes, and that continued to approve of declining lending standards throughout the housing boom. It started the trend of lower standards, which only later spread to the rest of the market.
Had the government not encouraged the imprudent lending that defined the crisis, it is unlikely the crisis would have occurred.
Government Policy and the Financial Sector
The Fed’s monetary policy and the government’s housing policy helped ensure that there would be a massive malinvestment in real estate. But why did those risks become concentrated and magnified in the financial sector?
The main transmission mechanism was MBSs and other derivatives, which moved mortgage risk from mortgage originators to large financial institutions such as Fannie Mae, Freddie Mac, and the big commercial and investment banks, as well as institutional investors. Not only did these players make big bets on housing, they did so using enormous leverage — often 30 or 40 dollars of debt for every 1 dollar of equity on the eve of the crisis. (Fannie and Freddie were levered even more.)
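The danger of that leverage is simple arithmetic. A hypothetical balance sheet (not any particular firm's) shows how thin the cushion was:

```python
def remaining_equity(assets: float, debt: float, loss_fraction: float) -> float:
    """Equity left after asset values fall by loss_fraction (equity = assets - debt)."""
    return assets * (1 - loss_fraction) - debt

# A firm with $33 of assets funded by $32 of debt and $1 of equity (32-to-1 leverage):
print(remaining_equity(33, 32, 0.00))  # 1.0: solvent
print(remaining_equity(33, 32, 0.03))  # ~0.01: a 3% asset decline nearly wipes out equity
print(remaining_equity(33, 32, 0.05))  # ~-0.65: a 5% decline means insolvency
```

At 30- or 40-to-1 leverage, even a modest decline in mortgage-asset values is enough to erase a firm's equity entirely, which is why falling house prices threatened the solvency of so many institutions at once.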
Why? Was it irrationality and greed run amok? Well, no. Although there was plenty of irrationality and greed, government interference in financial markets once again played a key role in what happened.
Specifically, there are at least three major forces at play here: (1) the credit ratings agencies, (2) bank capital regulation, and (3) government-created moral hazard.
1. The Ratings Agencies
The conventional view is that financiers loaded up on mortgage derivatives because they placed the desire for riches above their fear of risk. The truth is more complex. In large part, mortgage products became so popular because they seemed relatively safe.
Why did they appear safe? One reason is that the credit ratings agencies tasked with evaluating credit instruments said they were safe.
The three credit ratings agencies in the lead-up to the crisis — Moody’s, Standard & Poor’s, and Fitch — were not free-market institutions. By the time of the crisis, they were the only institutions the government permitted to supply official ratings on securities. As political scientist Jeffrey Friedman notes, “A growing number of institutional investors, such as pension funds, insurance companies, and banks, were prohibited from buying bonds that had not been rated ‘investment grade’ (BBB- or higher) by these firms, and many were legally restricted to buying only the highest-rated (AAA) securities.”
Because no one could compete with the ratings agencies, they had virtually no incentive to assess risks accurately. Thanks to bad incentives, incompetence, and honest error, the ratings agencies stamped many mortgage derivatives AAA — as safe as ExxonMobil’s and Berkshire Hathaway’s debt. These products thereby seemed to be safe but relatively high-yielding assets.
Did the buyers of mortgage-backed securities put stock in the quality of the ratings agencies’ assessments? Many did. Research by economist Manuel Adelino found that while investors did not rely on the ratings agencies to assess the riskiness of most investments, they did take AAA ratings at face value. Anecdotal evidence backs Adelino up. For example, a New York Times article from 2008 reported:
When Moody’s began lowering the ratings of a wave of debt in July 2007, many investors were incredulous.
“If you can’t figure out the loss ahead of the fact, what’s the use of using your ratings?” asked an executive with Fortis Investments, a money management firm, in a July 2007 e-mail message to Moody’s. “You have legitimized these things, leading people into dangerous risk.”
But from another perspective, it hardly mattered whether anyone believed the ratings were accurate. The sheer fact that these instruments were rated AAA and AA gave financial institutions an incentive to load up on them, thanks to government-imposed capital regulations.
2. Bank-Capital Regulations
As we saw when we looked at the New Deal’s regulatory response to the Great Depression, at the same time that the government began subsidizing banks through federal deposit insurance, it started regulating banks to limit the risk-taking that deposit insurance encouraged. In particular, the government sought to limit how leveraged banks could be through bank-capital regulations.
Bank capital is a bank’s cushion against risk: chiefly the equity a bank uses to finance its activities, which acts as a shock absorber if its assets decline in value. The greater a bank’s capital, the more its assets can decline in value before the bank becomes insolvent. Prior to the FDIC, it wasn’t unusual for banks’ capital levels to hover around 25 percent of assets. By the time of the 2008 financial crisis, bank capital levels were generally below 10 percent — sometimes well below 10 percent.
Bank-capital regulations forced banks to maintain a certain amount of capital. Until the 1980s, there were no worked-out standards governing capital regulation, but in 1988 the U.S. and other nations adopted the Basel Capital Accord, which became known as Basel I.
Basel is what’s known as risk-based capital regulation. That means that the amount of capital a bank has to have is determined by the riskiness of its assets, so that the riskier an asset, the more a bank has to finance it with equity capital rather than debt. Assets the Basel Committee on Banking Supervision regarded as riskless, such as cash and government bonds, required banks to use no capital. For assets judged most risky, such as commercial loans, banks had to use at least 8 percent of equity capital to fund them. Other assets fell somewhere in between 0 and 8 percent.
What’s important for our story is that securities issued by “public-sector entities,” such as Fannie and Freddie, carried a 20 percent risk weight, less than half the 50 percent weight assigned to conventional home mortgages: a bank could dramatically reduce its capital requirements by buying mortgage-backed securities from Freddie Mac and Fannie Mae rather than making mortgage loans and holding them on its books. In 2001, the U.S. adopted the Recourse Rule, which gave privately issued asset-backed securities rated AAA or AA the same low 20 percent risk weight as securities issued by the GSEs.
The net result was that banks were encouraged by government regulators to make big bets on mortgage derivatives rated AAA or AA by the ratings agencies.
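That incentive can be made concrete with a back-of-the-envelope sketch. Basel-style required capital is the 8 percent base requirement times an asset's risk weight; the weights below are the standard Basel I categories (0 percent for government bonds, 100 percent for commercial loans, 50 percent for whole residential mortgages, and 20 percent for GSE securities and, after the 2001 Recourse Rule, for AAA/AA-rated private securities). The real rules had many more categories:

```python
# Simplified Basel I risk weights (the actual rules had many more categories).
RISK_WEIGHTS = {
    "government_bonds": 0.00,
    "commercial_loans": 1.00,
    "whole_mortgages": 0.50,
    "gse_mbs": 0.20,            # Fannie/Freddie-issued securities
    "rated_private_mbs": 0.20,  # AAA/AA private securities after the 2001 Recourse Rule
}

def required_capital(asset_class: str, exposure: float) -> float:
    """Minimum equity capital: 8% base requirement times the asset's risk weight."""
    return 0.08 * RISK_WEIGHTS[asset_class] * exposure

# $100 million of housing exposure, held three different ways:
print(required_capital("whole_mortgages", 100e6))    # $4.0 million of capital
print(required_capital("gse_mbs", 100e6))            # $1.6 million
print(required_capital("rated_private_mbs", 100e6))  # $1.6 million
```

Swapping whole mortgages for highly rated mortgage securities cut a bank's capital requirement by 60 percent, so the regulation itself pushed banks toward exactly the instruments at the center of the crisis.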
3. Moral Hazard
As we’ve seen, many financial players really did believe that mortgage-related products were relatively safe. Government certainly encouraged that impression. The Federal Reserve was assuring markets that interest rates would remain low and that it would fight any significant declines in securities markets with expansionary monetary policy. The credit ratings agencies were stamping many mortgage derivatives AAA. Congress and the president were touting the health of the mortgage market, which was putting ever-more Americans into homes.
Still, it is puzzling that more people weren’t worried. Contrary to the housing industry’s bromide that home prices never fall, there were examples of housing prices going down — even nationally, as during the Great Depression. There was also evidence that many of the loans underlying the mortgage instruments were of increasingly poor quality. And there were plenty of Cassandras who foresaw the problems to come.
Some market participants did understand how risky mortgage-related derivatives were, but were not overly concerned with those risks because they could pass them on to others. Mortgage originators, for instance, were incentivized to make bad loans because they could pawn off the loans to securitizers such as Fannie and Freddie.
But what about the people ultimately buying the mortgage securities? Why were they willing to knowingly take big risks? Part of the answer is that the moral hazard introduced through government policies including (but not limited to) “too big to fail” had convinced them that they would reap the rewards on the upside, yet would be protected on the downside thanks to government intervention. We’ve seen, after all, how the government had started bailing out financial institutions seen as “too big to fail” decades before the crisis.
Many people resist this hypothesis. It simply doesn’t seem plausible that investors were thinking to themselves, “This could easily blow up, but it’s okay, I’ll get bailed out.” But tweak that thought just a bit: “There’s some risk this will blow up, as there is with every financial investment, but there’s also a good probability the government will step in and shield me from most if not all of the losses.” How could that not influence investor decision-making?
And there is also another, more subtle effect of moral hazard to consider. Over the course of decades, the government had increasingly insulated debt holders from downside risk. Thanks to deposit insurance, “too big to fail,” and other government measures, debt holders simply weren’t hit over the head by the message that they could get wiped out if they weren’t careful.
More generally, the regulatory state had taught people that they need not exercise their own independent judgment about risk. Is your food safe? The FDA has seen to that. Is your airplane safe? The FAA has seen to that. Is your doctor competent? If he weren’t, the government wouldn’t allow him to practice medicine. Is it surprising, then, that even many sophisticated investors thought they didn’t need to check the work of the ratings agencies?
To be clear, I don’t think government regulation fully explains the widespread failure to accurately assess the risks of mortgages. Part of it I chalk up to honest error. It’s easy to see the folly of people’s judgment with hindsight, but people aren’t making decisions with hindsight. I also think there are psychological reasons why many people are vulnerable to speculative bubbles. But moral hazard almost certainly played a role in reducing investors’ sensitivity to risk and in allowing many financial institutions to take on dangerous amounts of leverage.
The Federal Reserve Made Things Worse
Given the massive malinvestment in residential real estate, the declining lending standards, and the concentration of mortgage risks in the financial sector that took place during the 2000s, the bust was inevitable. But was the bust sufficient to explain the economy-wide recession that followed?
No doubt there was going to be a recession as the result of the crisis, but there is a compelling argument that the severity of the recession — the thing that made it the Great Recession — was causally tied to the government’s response. In particular, what turned a crisis into a catastrophe was overly tight monetary policy from the Federal Reserve in response to the crisis.
Tight money, recall, can lead to deflationary spirals: debtors have trouble repaying their debts, putting stress on financial institutions, and output and employment fall as people struggle to adjust to declining prices. The argument is that although the Federal Reserve began easing in mid-2008, it did not ease nearly enough, leading to a monetary contraction and hence to the deflation that turned a financial crisis into the Great Recession.
Judging whether monetary policy is too tight isn’t straightforward. Typically people look to interest rates, but interest rates alone can be deceiving. Although the low interest rates of the early 2000s were associated with easy money, easy money can also lead to high interest rates, as it did in the late 1970s (or in Zimbabwe during its bout of hyperinflation).
But by looking at other, more revealing indicators, a number of economists have concluded that monetary policy tightened substantially during 2008–2009, leading to a decline in total spending in the economy and helping spread the pain in the housing and financial sectors to the rest of the economy.
The Ultimate Lesson
Ayn Rand often stressed that the U.S. isn’t and has never been a fully free, fully capitalist nation. Rather, it’s been a mixed economy, with elements of freedom and elements of control. This means that we cannot, as is so often done, automatically blame bad things on the free element and credit good things to the controlled element. As Rand explained:
When two opposite principles are operating in any issue, the scientific approach to their evaluation is to study their respective performances, trace their consequences in full, precise detail, and then pronounce judgment on their respective merits. In the case of a mixed economy, the first duty of any thinker or scholar is to study the historical record and to discover which developments were caused by the free enterprise of private individuals, by free production and trade in a free market — and which developments were caused by government intervention into the economy.
As we’ve seen, the field of finance has been dominated by government intervention since this country’s founding. In this series, I’ve tried to highlight some of the most important government subsidies and controls affecting the industry, and indicate how they were often responsible for the very problems they were supposedly created to solve.
If you examine the historic and economic evidence carefully, the conclusion that follows is clear: if we value economic stability, our top priority should be to liberate the field of finance from government support and government control.