To Stimulate The Economy, Liberate It

by Yaron Brook | February 14, 2008 | Forbes.com

While some in Washington are quibbling about the details of the economic stimulus package, nearly everyone agrees with its basic idea: that our ailing economy needs Uncle Sam to play doctor and hand out some $150 billion in consumer spending money. But this sort of government intervention is not the cure for our economic troubles. It is the cause.

To understand why, we must first recognize that the key economic activity that causes growth is not consumer spending but production.

Economic growth means an increase in the amount of wealth that exists in a country–and all wealth must be produced. Houses, health care, air-conditioning and transportation do not come ready-made from nature. We have them only to the extent that individuals and businesses bring them into existence.

The focus of today’s stimulus packages on consumer spending is therefore completely backward. Consumption is a consequence of production. This fact is ignored by the Bush plan, which attempts to achieve prosperity through $100 billion in deficit spending. Though this might bring the appearance of prosperity, in the same way that an unemployed man appears prosperous if he goes on a shopping spree with his credit cards, the reality will be the opposite.

The fact is that consumer spending is slowing because production is slowing. There have been massive misallocations of capital–witness, for instance, the housing market–which are now coming home to roost. The resulting financial losses, economic uncertainty and more tenuous job market are all contributing to the American consumer’s inability or unwillingness to spend.

If the Bush spending plan can’t productively stimulate the economy, what government economic plan can? None. Production does not need stimulation from the government; it needs liberation from the government. What a productive, dynamic economy requires of a government is that it restrict itself to protecting property rights from force and fraud, and refrain from interfering in free production and trade.

Now it is of course popular practice to blame economic problems, not on government intervention but on the free market. But observe that all of the most prominent problems today–problems with housing, financial markets, health care, oil–involve some of the least-free sectors of our economy, those with the most government intervention.

Consider the extent of government culpability in the current subprime meltdown. There is the Federal Reserve, which wrought havoc with the markets by manipulating interest rates, first setting them below the rate of inflation and then quintupling them.

The Fed’s initial policy convinced subprime borrowers that if they took out mortgages tied to Fed rates, they could afford homes that they ordinarily couldn’t. The Fed’s artificially low rates fueled a borrowing spree and housing bubble that were instrumental in the subprime meltdown. Then there is the network of entities backed by the government, like Fannie Mae and Freddie Mac, which were big champions of subprime lending and big propagandists for the idea that everyone needs to own a home to live the American Dream. Finally, there is the government’s long-standing policy of assuring large financial institutions that they are “too big to fail,” which encourages short-range, high-risk investments.

Given all these influences, is it any surprise that so many people with poor credit bought expensive homes, that so many financial institutions lent them the money and that all hell broke loose once the unsustainable could no longer be sustained? In an unhampered market, private lenders and borrowers don’t behave this way.

And this is just the tip of the iceberg of how our government today stifles economic productivity through its gargantuan regulatory and welfare state. Try to project the impact on productive businesses of the vast burden of federal and state regulations–regulations that render off-limits a huge range of productive endeavors.

For example: If a fast-growing software company needs to quickly import a dozen eager and talented Indian programmers, it can’t, thanks to our immigration laws. If a company needs to fire a group of incompetent employees to make its workforce more productive, it risks a million-dollar lawsuit. If a developer seeks to offer low-cost housing in the vast, unused tracts of land in expensive California districts, too bad–that would go against environmentalist “open space” laws.

If a health insurance company tries to win more customers with deductibles, coverage and limits that will make insurance far more affordable, the idea is sunk; states dictate the terms of health insurance contracts. If a group of venture capitalists wants to invest in new nuclear power, to supply cheap energy to a new market, it cannot–environmental regulations have prevented any new plants for decades, despite the technology’s stellar safety record.

If the board of a struggling public company wants to hire a top-flight CEO to turn its company around, its job is much harder (and more expensive) thanks to the CEO-repelling climate created by Sarbanes-Oxley, whose vague laws and new criminal penalties make managing a firm much riskier. Even the simple project of building a larger facility to house a growing business can easily be held up for six months, while the owner must glad-hand zoning and permit bureaucrats.

And this is just the smallest indication of the regulatory strangulation that American businesses suffer. Imagine the economic stimulus, the explosion of productivity, that would occur if these regulatory nooses were removed.

For that matter, consider how our government wreaks economic destruction by taxing the wealth of the productive and diverting it unproductively. Americans pay trillions of dollars in taxes annually–the vast majority of which is not for the agencies that protect our rights (police, military and courts), but for regulations and for entitlement programs that transfer wealth from productive individuals who have earned it to those who haven’t.

Over the years, these programs have prevented individuals from investing trillions of dollars in new ventures. It took a million dollars to start Google; if the government hadn’t drained us of millions of dollars, picture what other amazing technologies, products and services we would be enjoying today.

The economic stimulus that would result from drastically cutting government regulation and spending (and thus taxation) is almost unimaginable.

Faced with recession, therefore, we should be asking not, “What can the government do to stimulate the economy?” but “What can it stop doing?” Washington should be debating which disastrous programs to phase out first: Sarbanes-Oxley, or the constellation of agencies that distort the housing market, like Fannie Mae and Freddie Mac. Politicians should be committing to drastically cutting government spending, so that Americans can have real and lasting tax relief.

What our economy needs is not a stimulation package, but a liberation package.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

The Right Vision Of Health Care

by Yaron Brook | January 08, 2008 | Forbes.com

With the primary season in full swing, the presidential candidates are fighting over what to do about the spiraling cost of health care–especially the cost of health insurance, which is becoming prohibitively expensive for millions of Americans.

The Democrats, not surprisingly, are proposing a massive increase in government control, with some even calling for the outright socialism of a single-payer system. Republicans are attacking this “solution.” But although they claim to oppose the expansion of government interference in medicine, Republicans don’t, in fact, have a good track record of fighting it.

Indeed, Republicans have been responsible for major expansions of government health care programs: As governor of Massachusetts, Mitt Romney oversaw the enactment of the nation’s first “universal coverage” plan, initially estimated at $1.5 billion per year but already overrunning cost projections. Arnold Schwarzenegger, who pledged not to raise any new taxes, has just pushed through his own “universal coverage” measure, projected to cost Californians more than $14 billion. And President Bush’s colossal prescription drug entitlement–expected to cost taxpayers more than $1.2 trillion over the next decade–was the largest expansion of government control over health care in 40 years.

Today, nearly half of all spending on health care in America is government spending. Why, despite their lip service to free markets, have Republicans actually helped fuel the growth of socialized medicine and erode what remains of free-market medicine in this country?

Consider the basic factor that has driven the expansion of government medicine in America.

Prior to the government’s entrance into the medical field, health care was regarded as a product to be traded voluntarily on a free market–no different from food, clothing, or any other important good or service. Medical providers competed to provide the best quality services at the lowest possible prices. Virtually all Americans could afford basic health care, while those few who could not were able to rely on abundant private charity.

Had this freedom been allowed to endure, Americans’ rising productivity would have allowed them to buy better and better health care, just as, today, we buy better and more varied food and clothing than people did a century ago. There would be no crisis of affordability, as there isn’t for food or clothing.

But by the time Medicare and Medicaid were enacted in 1965, this view of health care as an economic product–for which each individual must assume responsibility–had given way to a view of health care as a “right,” an unearned “entitlement,” to be provided at others’ expense.

This entitlement mentality fueled the rise of our current third-party-payer system, a blend of government programs, such as Medicare and Medicaid, together with government-controlled employer-based health insurance (itself spawned by perverse tax incentives during the wage and price controls of World War II).

Today, what we have is not a system grounded in American individualism, but a collectivist system that aims to relieve the individual of the “burden” of paying for his own health care by coercively imposing its costs on his neighbors. For every dollar’s worth of hospital care a patient consumes, that patient pays only about 3 cents out-of-pocket; the rest is paid by third-party coverage. And for the health care system as a whole, patients pay only about 14%.

The result of shifting the responsibility for health care costs away from the individuals who accrue them was an explosion in spending.

In a system in which someone else is footing the bill, consumers, encouraged to regard health care as a “right,” demand medical services without having to consider their real price. When, through the 1970s and 1980s, this artificially inflated consumer demand sent expenditures soaring out of control, the government cracked down by enacting further coercive measures: price controls on medical services, cuts to medical benefits, and a crushing burden of regulations on every aspect of the health care system.

As each new intervention further distorted the health care market, driving up costs and lowering quality, belligerent voices demanded still further interventions to preserve the “right” to health care. And Republican politicians–not daring to challenge the notion of such a “right”–have, like Romney, Schwarzenegger and Bush, outdone even the Democrats in expanding government health care.

The solution to this ongoing crisis is to recognize that the very idea of a “right” to health care is a perversion. There can be no such thing as a “right” to products or services created by the effort of others, and this most definitely includes medical products and services. Rights, as our founding fathers conceived them, are not claims to economic goods, but freedoms of action.

You are free to see a doctor and pay him for his services–no one may forcibly prevent you from doing so. But you do not have a “right” to force the doctor to treat you without charge or to force others to pay for your treatment. The rights of some cannot require the coercion and sacrifice of others.

So long as Republicans fail to challenge the concept of a “right” to health care, their appeals to “market-based” solutions are worse than empty words. They will continue to abet the Democrats’ expansion of government interference in medicine, right up to the dead end of a completely socialized system.

By contrast, the rejection of the entitlement mentality in favor of a proper conception of rights would provide the moral basis for real and lasting solutions to our health care problems–for breaking the regulatory chains stifling the medical industry; for lifting the government incentives that created our dysfunctional, employer-based insurance system; for inaugurating a gradual phase-out of all government health care programs, especially Medicare and Medicaid; and for restoring a true free market in medical care.

Such sweeping reforms would unleash the power of capitalism in the medical industry. They would provide the freedom for entrepreneurs motivated by profit to compete with each other to offer the best quality medical services at the lowest prices, driving innovation and bringing affordable medical care, once again, into the reach of all Americans.

The Pakistan Crisis

by Elan Journo | December 29, 2007

The assassination of Benazir Bhutto has, we’re told, upended Washington’s foreign policy. “Our foreign policy has relied on her presence as a stabilizing force. . . . Without her, we will have to regroup,” explained Sen. Arlen Specter (R-Pa.) in the Washington Post. “It complicates life for the American government.”

But in fact U.S. policy was in disarray long before the assassination.

U.S. diplomats have been scrambling for months to do something about the growing power of Islamists in the nuclear-armed nation that Washington hails as a “major non-NATO ally.” Having supported President Musharraf’s authoritarian regime, Washington helped broker the deal to allow Bhutto back into Pakistan, hoping she might create a pro-U.S. regime; it then pushed Musharraf to share power with Bhutto, insisted that he was “indispensable,” and flirted with the idea of backing Bhutto instead.

All this against the backdrop of the creeping Talibanization of Pakistan. Islamist fighters once “restricted to untamed mountain villages along the [Pakistani-Afghan] border,” now “operate relatively freely in cities like Karachi,” according to Newsweek. The Taliban “now pretty much come and go as they please inside Pakistan.” They are easily slipping in and out of neighboring Afghanistan to arm and train their fighters, and foster attacks on the West.

Why has Washington proven so incapable of dealing with this danger to U.S. security? The answer lies in how we embraced Pakistan as an ally.

Pakistan was an improbable ally. In the 1990s its Inter-Services Intelligence agency had helped bring the Taliban to power; Gen. Musharraf’s regime, which began in 1999, formally endorsed the Taliban regime; and many in Pakistan support the cause of jihad (taking to the streets to celebrate 9/11). But after 9/11 the Bush administration asserted that we needed Pakistan as an ally, and that the alternatives to Gen. Musharraf’s military dictatorship were far worse.

If the administration was right about that (which is doubtful), we could have had an alliance with Pakistan under only one condition–treating this supposedly lesser of two evils as, indeed, evil.

It would have required acknowledging the immorality of Pakistan’s past and demanding that it vigorously combat the Islamic totalitarians as proof of repudiating them. Alert to the merest hint of Pakistan’s disloyalty, we’d have had to keep the dictatorial regime at arm’s length. This would have meant openly declaring that both the regime and the pro-jihadists among Pakistan’s people are immoral, that our alliance is delimited to one goal, and that we would welcome and support new, pro-American leaders in Pakistan who actually embrace freedom.

But instead, Washington evaded Pakistan’s pro-Islamist past and pretended that this corrupt regime was good. We offered leniency on Pakistan’s billion-dollar debts, opened up a fire-hose of financial aid, lifted economic sanctions, and blessed the regime simply because it agreed to call itself our ally and pay lip-service to enacting “reforms.” After Musharraf pledged his “full support” and “unstinting cooperation,” we treated the dictator as if he were some freedom-loving statesman, and effectively whitewashed the regime.

Since we did not demand any fundamental change in Pakistan’s behavior as the price of our alliance, we should not have expected any.

Pakistan’s “unstinting cooperation” included help with the token arrests of a handful of terrorists–even as the country became a haven for Islamists. Since 2001, Islamists have established a stronghold in the Pakistani-Afghan tribal borderlands (where bin Laden may be hiding). But our “ally” neither eradicated them nor allowed U.S. forces to do so. Instead, in 2006, Musharraf reached a truce with them: in return for the Islamists’ “promise” not to attack Pakistani soldiers, not to establish their own Taliban-like rule, and not to support foreign jihadists, Pakistan backed off and released 165 captured jihadists.

Far from protesting, President Bush endorsed this appeasing deal, saying: “When [Musharraf] looks me in the eye and says” this deal will stop “the Talibanization of the people, and that there won’t be a Taliban and won’t be al Qaeda, I believe him.”

We have gone on paying Pakistan for its “cooperation,” to the tune of $10 billion in aid. The Islamists, who predictably reneged on the truce, now have a new staging area in Pakistan from which to plot attacks on us (perhaps, one day, with Pakistani nukes).

Why did our leaders evade Pakistan’s true nature? Faced with the need to do something against the totalitarian threat, it was far easier to pretend that Musharraf was a great ally who would help rid us of our problems if we would only uncritically embrace him. To declare Musharraf’s regime evil, albeit the lesser of two evils, would have required a deep moral confidence in the righteousness of our cause. The Bush administration didn’t display this confidence in our own fight against the Taliban, allowing the enablers of bin Laden to flee rather than ruthlessly destroying them. Why would it display such confidence in dealing with Pakistan?

But no matter how much one pretends that facts are not facts, eventually they will rear their heads.

This is why we are so unable to deal with the threat of Pakistan. Our blindness is self-induced.

About The Author

Elan Journo

Senior Fellow and Vice President of Content Products, Ayn Rand Institute

Is Washington With Us?

by Elan Journo | December 13, 2007

Ever since President Bush’s you’re-either-with-us-or-with-the-terrorists speech in 2001, his administration has been regarded as shaping its defense policy according to black-and-white moral judgments. If you haven’t already been convinced that the speech was empty rhetoric, last week offered another depressing piece of evidence.

Washington refused to oppose, or even protest, Libya’s election to a seat on the U.N. Security Council.

A genuine commitment to the principle of justice would entail recognizing that Libya’s character — like that of an individual — is the sum of its words and conduct across years, and that it cannot be transformed overnight.

No one would believe a career thief who claims that he’s renounced his vile behavior and transformed his character instantly. It is the thief’s responsibility to go out of his way to acknowledge and condemn his past actions, to make restitution where possible, and to demonstrate a commitment to the law across years. The burden, in other words, is on him to prove he has reshaped his moral character and is no longer a threat.

Likewise, no one should believe that a vicious regime such as Libya has (as it claims) suddenly transformed itself into a civilized, peace-loving country. Remember, among other heinous attacks, Libya is responsible for the bombing of a Pan Am flight over Lockerbie, Scotland, which killed over 200 passengers.

Before Libya will even deserve a hearing, it must bend over backwards to prove its commitment to peaceful co-existence with other nations, across many years. For example, Libya must reject dictatorship, dethrone Gaddafi and prosecute him. It must exhaustively confess and document the regime’s crimes. In the name of restitution, it must declare that all Islamic terrorists are its enemies, and not simply name a few names, but actively combat other terrorist regimes such as Iran. Such steps would constitute only the very early beginnings of what Libya would have to do to make credible its disavowal of aggression. Before we can regard Libya as a non-hostile nation, it must prove unequivocally, in word and deed, that it has undergone a fundamental transformation.

But when in 2003 Libya promised to end its terrorism and to dismantle its nuclear weapons program, Washington took Gaddafi’s word as golden and decided that, suddenly, Libya was our ally against terrorism. So Washington agreed to restore diplomatic relations and lift economic sanctions.

Has Libya truly renounced terrorism? About a year after its overture to Washington, the regime was discovered to be fomenting a terrorist plot in Saudi Arabia. And although the U.S. State Department had re-affirmed Libya as a sponsor of terrorism as recently as March 2006, only two months later Washington removed Libya from the list of terrorist states. Has Libya diligently tried to make restitution to victims of its terrorism? Hardly. It grudgingly agreed to compensate the families of its victims, and then unrepentantly dragged its feet in making payments.

Libya has not even come close to changing its character — yet the Bush administration warmly opened its arms to the dictatorial regime.

Why? It wants to send Iran the signal that it could similarly resolve the conflict over its nuclear quest and its longstanding terror war against the United States. Explaining the rationale, Secretary of State Rice said: “Libya is an important model as nations around the world press for changes in the behavior by the Iranian and North Korean regimes.” This is America trying to project itself as a morally confident nation that will not tolerate evil regimes . . . by hastily forgiving the evil Libyan regime. It is a pathetic joke, and will do nothing to deter Teheran from its nuclear ambition.

The Bush administration has mocked justice — and thus shown our enemies that they have nothing to fear from us.

If we are to form true alliances and protect ourselves from hostile regimes, America must steer its foreign policy according to the objective requirements of justice.

Deep-Six the Law of the Sea

by Tom Bowden | November 20, 2007

The Law of the Sea Treaty, which awaits a ratification vote in the U.S. Senate, declares most of the earth’s vast ocean floor to be “the common heritage of mankind” and places it under United Nations ownership “for the benefit of mankind as a whole.”

This treaty has been bobbing in the legislative ocean for the past 25 years. After President Ronald Reagan refused to sign it in 1982, repeated attempts at ratification have failed. Last month, however, the Senate Foreign Relations Committee voted 17–4 to send it to the full Senate, where a two-thirds majority is required to ratify.

What’s at stake are trillions of tons of vital minerals such as manganese, nickel, copper, zinc, gold and silver — enough to supply current needs for thousands of years — spread over vast seabeds constituting 41 percent of the planet’s area. Senate ratification would signify U.S. agreement that the International Seabed Authority, a U.N. agency based in Jamaica, should own these resources in perpetuity.

Why should we agree to this?

Like any other hard-to-reach resources, these undersea minerals are completely valueless where they now rest. What is it that makes such resources actually valuable? It is the thinking and action of inventors, engineers, explorers and entrepreneurs who devote their mental energy to the task of finding and retrieving them. These undersea pioneers don’t just find wealth, they create wealth — by bringing a portion of nature’s bounty under human control.

Despite the treaty’s allusion to seabeds as the “common heritage of mankind,” mankind as a whole has done exactly nothing to create value in the deep ocean, which is a remote wilderness, virtually unexploited. Under the proposed treaty, however, the ocean mining companies — whose science, exploration, technology, and entrepreneurship are being counted on to gather otherwise inaccessible riches — are treated as mere servants of a world collective.

In practice, under the treaty’s explicitly socialist approach, mining companies operate as mere licensees who must render hefty application fees as well as continuing payments (read: taxes) and obtain prior approval at every stage of work, under regulations that emerge sluggishly from multinational committees.

Licensees must also enrich a U.N.-operated competitor called, spookily enough, “The Enterprise.” For every square mile of ocean bottom a licensee explores, half must be relinquished to The Enterprise, free of charge — and The Enterprise gets to pick the better half.

Licensees must also make available, on so-called reasonable commercial terms, their technology and know-how, and even train this giant competitor’s personnel. At the end of the day, profits from The Enterprise, along with taxes from licensees, are distributed to U.N. member-nations such as Cuba, Uganda and Venezuela, who contribute nothing to the productive process.

The treaty simply assumes as a self-evident truth that wealth sharing is the moral duty of the haves toward the have-nots, and that the world’s needy nations have a moral claim on the wealth created by undersea miners. But we should pause to challenge both that moral assumption and its legal implications.

Morally, undersea mining operations are entitled to own outright those portions of the ocean floor they exploit, by virtue of the productive effort they expend. Producers in general are morally entitled to live and work for their own sake, keeping the wealth they create without any moral debt to those who didn’t create it. Because nature requires us to be productive in order to live, the businessman’s pursuit of profit is properly regarded as a virtue, not a vice indebting him to a hungry planet.

Legally, this viewpoint is embodied in the American ideals of life, liberty and the pursuit of happiness, secured by private property rights. A historical example of the proper principle in action is the Homestead Act of 1862. Farmers acquired property rights, i.e., private deeds, to 270 million acres of fertile Midwest prairie land by the productive act of farming it, parcel by parcel.

Suppose, instead, that the U.S. government had issued only licenses, not deeds, for the acreage those farmers carved out of wild prairie land. Then suppose the government had transferred half that hard-won acreage to “The Farm,” a giant government-owned competitor whose field hands the farmers would be expected to equip and train. Of course, such a travesty would have been unthinkable in the relatively capitalistic 19th century.

Governments today have legitimate options regarding how to deal with undersea explorers’ need to establish property rights in the deep ocean. But it would be totally improper for America to declare eternal hostility to private property in the ocean floor by ratifying a treaty dedicated on principle to denying such rights.

About The Author

Tom Bowden

Analyst and Outreach Liaison, Ayn Rand Institute

After Ten Years, States Still Resist Assisted Suicide

by Tom Bowden | November 02, 2007

This month marks the tenth anniversary of Oregon’s pathbreaking assisted suicide law. But despite legislative proposals in California and elsewhere, Oregon remains the only state to have provided clear procedures by which doctors can help end their dying patients’ pain and suffering while protecting themselves from criminal prosecution.

For a decade now, Oregon doctors have been permitted to prescribe a lethal dose of drugs to a mentally competent, terminally ill patient who makes written and oral requests, consults two physicians, and endures a mandatory waiting period. The patient’s free choice is paramount throughout this process. Neither relatives nor doctors can apply on the patient’s behalf, and the patient himself administers the lethal dose.

Elsewhere in America, however, the political influence of religious conservatism has thwarted passage of similar legislation, leaving terminal patients with nothing but a macabre menu of frightening, painful, and often violent end-of-life techniques universally regarded as too inhumane for use on sick dogs or mass murderers.

Consider Percy Bridgman, the Nobel Prize-winning physicist who, at 79, was entering the final stages of terminal cancer. Wracked with pain and bereft of hope, he got a gun and somehow found courage to pull the trigger, knowing he was condemning others to the agony of discovering his bloody remains. His final note said simply: “It is not decent for society to make a man do this to himself. Probably this is the last day I will be able to do it myself.”

What lawmakers must grasp is that there is no rational basis upon which the government can properly prevent any individual from choosing to end his own life. When religious conservatives enact laws to enforce the idea that their God abhors suicide, they threaten the central principle on which America was founded.

The Declaration of Independence proclaimed, for the first time in the history of nations, that each person exists as an end in himself. This basic truth — which finds political expression in the right to life, liberty, and the pursuit of happiness — means, in practical terms, that you need no one’s permission to live, and that no one may forcibly obstruct your efforts to achieve your own personal happiness.

But what if happiness becomes impossible to attain? What if a dread disease, or some other calamity, drains all joy from life, leaving only misery and suffering? The right to life includes and implies the right to commit suicide. To hold otherwise — to declare that society must give you permission to kill yourself — is to contradict the right to life at its root. If you have a duty to go on living, despite your better judgment, then your life does not belong to you, and you exist by permission, not by right.

For these reasons, each individual has the right to decide the hour of his death and to implement that solemn decision as best he can. The choice is his because the life is his. And if a doctor is willing (not forced) to assist in the suicide, based on an objective assessment of his patient’s mental and physical state, the law should not stand in his way.

Religious conservatives’ opposition to the Oregon approach stems from the belief that human life is a gift from the Lord, who puts us here on earth to carry out His will. Thus, the very idea of suicide is anathema, because one who “plays God” by causing his own death, or assisting in the death of another, insults his Maker and invites eternal damnation, not to mention divine retribution against the decadent society that permits such sinful behavior.

If a religious conservative contracts a terminal disease, he has a legal right to regard his own God’s will as paramount, and to instruct his doctor to stand by and let him suffer, just as long as his body and mind can endure the agony, until the last bitter paroxysm carries him to the grave. But conservatives have no right to force such mindless, medieval misery upon doctors and patients who refuse to regard their precious lives as playthings of a cruel God.

Rational state legislators should regard the Oregon law’s anniversary as a stinging reminder that 49 of the 50 states have failed to take meaningful steps toward recognizing and protecting an individual’s unconditional right to commit suicide.

About The Author

Tom Bowden

Analyst and Outreach Liaison, Ayn Rand Institute

Be Healthy or Else!

by Yaron Brook and Don Watkins | October 22, 2007

As part of his universal health care proposal, John Edwards would make doctor visits and other forms of preventive care mandatory. In a similar proposal in England, a Tory panel suggested that Britons should be forced to adopt a government-prescribed “healthy lifestyle.” Britons who “cooperate” by quitting smoking or losing weight would receive Health Miles that could be used to purchase vegetables or gym memberships; those who don’t would be denied certain medical treatments.

These paternalistic proposals are offered as solutions to the spiraling costs that plague our respective health care systems. It is unrealistic, states the Tory report, for British citizens “to expect that the state will underwrite the health implications of any lifestyle decision they choose to make.”

But any proposal that expands the government’s power to control our lives — to dictate to us when to go to the doctor or how many helpings of veggies we must eat — cannot be a solution to anything. Instead of debating what coercive measures we should be taking to lower “social costs,” we should be questioning the health care systems that make our lifestyles other people’s business in the first place.

Both the American and British systems, despite their differences, are fundamentally collectivist: they exist on the premise that the individual’s health is not his own responsibility, but “society’s.” Both Britain’s outright socialized medicine and America’s semi-socialized blend of Medicare, Medicaid, and government-controlled, employer-sponsored health plans aim to relieve the individual of the burden of paying for his own health care by coercively imposing those costs on his neighbors.

When the government introduces force into the health care system to relieve the individual of responsibility for his own health, it is inevitably led to progressively expand its control over that system and every citizen’s life.

For example, in a system in which medical care is “free” or artificially inexpensive, with someone else paying for one’s health care, medical costs spiral out of control because individuals are encouraged to demand medical services without having to consider their real costs. When “society” foots the bill for one’s health, it also encourages the unhealthy lifestyles of the short-range mentalities who don’t care to think beyond the next plate of French fries. The astronomical tab that results from all of this causes collectivist politicians to condemn various easy targets (e.g., doctors, insurance companies, smokers, the obese) for taking too much of the “people’s money,” and then to enact a host of coercive measures to control expenses: price controls on medical services, cuts to medical benefits — or, as with the current proposals, attempts to reduce demand for medical services by forcing a “healthy lifestyle” on individuals.

Properly, your health care decisions and expenditures are not anyone’s business but your own — any more than how much you spend on food, cars, or movies is. But under collectivized health care, every Twinkie you eat, every doctor’s visit you cancel, and every lab test you wish to have run becomes subject to other people’s right to question, regulate, and prohibit — because they are paying for it. When “society” collectively bears the costs of health care, the government will inevitably seek to dictate every detail of medical care and, ultimately, every detail of how you live your life.

To protect our health and our freedom, we must reject collectivized health care, and put an end to a system that forces us to pay for other people’s medical care. We must remove government from the system and demand a free market in medicine — one in which the government’s only role is to protect the individual rights of doctors, patients, hospitals, and insurance companies to deal with one another voluntarily, and where each person is responsible for his own health care.

Let’s not allow the land of the free and the home of the brave to become a nation of dependents looking to the nanny state to take care of us and passively following its dictates as to how we should live our lives.

About The Authors

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Don Watkins

Former Fellow (2006-2017), Ayn Rand Institute

It Isn’t Easy Being Green

by Keith Lockitch | October 16, 2007

It isn’t news that environmentalism has gone mainstream in a big way — with organic food in every grocery store, hybrid cars on every freeway, and every mass-market magazine declaring green the “new black.” More than ever before, consumers are buying into environmentalist ideology — and buying products that purport to impact nature less.

One would think that serious environmentalists would be thrilled about this trend — thrilled that the public seems willing to take ecological marching orders and do its duty to the planet. But they aren’t: A backlash against “buying green” has arisen in environmentalist circles, with critics disparaging the new eco-consumers as “light greens,” and condemning the “Cosmo-izing of the green movement.”

Surprising? Not really. Not if one grasps the deeper meaning of environmentalism.

Most people have a mistaken view of environmentalism. They see it as a movement whose goal is to protect the environment so that we, and future generations, may continue to enjoy it. Environmentalists might call for certain sacrifices — like stern priests calling upon us to do penance for our sins — but people take their word for it that those sacrifices will turn out to be for the good of “society.” People feel virtuous in paying more for those organic blueberries and spending time washing out tin cans and nasty cloth diapers, because they see it as a sacrifice for the “greater good.” And although “going green” may demand some cost and effort, it need not — on this view — be too burdensome or demand personal hardships that are too great.

But in fact, the goal of environmentalism is not any alleged benefit to mankind; its goal is to preserve nature untouched — to prevent nature from being altered for human purposes. Observe that whenever there is a conflict between the goals of “preserving nature” and pursuing some actual human value, environmentalists always side with nature against man. If tapping Arctic oil reserves to supply our energy needs might affect the caribou, environmentalists demand that we leave vast tracts of Arctic tundra completely untouched. If a new freeway bypass will ease traffic congestion but might disturb the dwarf wedge mussel, environmentalists side with the mollusk against man. If a “wetland” is a breeding ground for disease-carrying insects, environmentalists fight to prevent it from being drained no matter the toll of human suffering.

It is simply not true that environmentalism values human well-being. It demands sacrifices, not for the sake of any human good, but for the sake of leaving nature untouched. It calls for sacrifice as an end in itself.

Though environmentalists will often claim to be opposed to merely “indiscriminate” or “excessive” consumption of natural resources, their ideology actually drives them to oppose any act of altering nature for human purposes. The environmentalist goal of “preserving nature” unavoidably conflicts with the requirements of human life: Man’s basic means of survival is to reshape nature to serve his ends, to take the raw materials of his environment and use them to produce values. But this requires “touching” nature, not leaving it untouched. Even organic crops require land and water and energy; even hybrid cars are built of metal and plastic and glass, and use up fuel. All human activity, on whatever scale, violates the environmentalist injunction to “leave nature alone.”

This is why it is no surprise that environmentalist leaders would condemn “buying green” as a consumer trend. Says Michael Ableman, an organic farmer and environmental author: “The assumption that by buying anything, whether green or not, we’re solving the problem is a misperception. Consuming is a significant part of the problem to begin with.” In other words, the very act of consuming — i.e., pursuing material values in support of our lives — is a “problem.”

Environmentalists are criticizing “buying green,” because at root they are against buying anything.

Anyone who thinks that it’s easy being “green” — that “eco-chic” is consistent with the principles of environmentalism — had better think harder about the true nature of the ideology they are helping bring into power. Environmentalists’ call for minor sacrifices for the sake of some undefined “greater good” is the first stage in their call for sacrifice as such, for no human benefit whatsoever.

If environmentalists are now confident enough to start attacking “buying green” as superficial and hypocritical, we had better take them at their word and stop buying anything they have to sell, especially their poisonous ideology.

About The Author

Keith Lockitch

Vice President of Education and Senior Fellow, Ayn Rand Institute

The Influence of Atlas Shrugged

by Yaron Brook | October 09, 2007

On the 50th anniversary of its publication, Atlas Shrugged, Ayn Rand’s epic about a group of businessmen who rebel against a society that shackles and condemns them, is everywhere. Hardly a day goes by without a mention of the novel in the media or by some prominent celebrity or businessman as the most significant book he’s read. Meanwhile, Ayn Rand’s novels, including Atlas Shrugged, are being taught in tens of thousands of high schools. And last year sales of the novel in bookstores topped an astonishing 130,000 copies — more than when it was first published.

As executive director of the Ayn Rand Institute, I see the impact of Atlas Shrugged on a daily basis. I’m continually amazed by how many people, from every walk of life and every part of the planet, from high school students to political activists in countries from Hong Kong to Belarus to Ghana, eagerly tell me: “Atlas Shrugged changed my life.”

Scores of business leaders, from CEOs of Fortune 500 companies to young entrepreneurs in Silicon Valley, say they have derived great spiritual fuel from Atlas Shrugged. Many tell me that the novel has motivated them to make the most of their lives, inspiring them to be more ambitious, more productive, and more successful in their work. And many of America’s politicians and intellectuals who claim to fight for economic freedom name Atlas Shrugged as the book that has most inspired them. I have no doubt that the novel has played a considerable role in discrediting socialism as an ideal and in making discussion of capitalism intellectually legitimate.

If you have read Atlas Shrugged and entered the universe of Dagny Taggart, Hank Rearden, and John Galt, you can understand why the novel has inspired so many in this way. Atlas Shrugged portrays great businessmen as heroic, productive thinkers, and it venerates capitalism as the only social system that leaves such minds free to create and produce the material values on which all of our lives depend. It gives philosophic and esthetic expression to the uniquely American spirit of individualism, of self-reliance, of entrepreneurship, of free markets.

But while many people appreciate these elements of Atlas Shrugged on a personal, emotional level, they are often uncomfortable on a moral level with the novel’s arguments in support of business and capitalism.

Ayn Rand’s ethical philosophy of rational selfishness — on which her admiration for successful businessmen and her impassioned defense of capitalism rest — constitutes a radical challenge to the dominant beliefs of our culture. Rejecting the prevailing ideas that morality comes from a supernatural being or from a societal decree, Rand holds that morality is a science that can be proved by reason. Rejecting the altruistic idea that morality consists of selflessly serving something “higher” — whether the Judeo-Christian God or a collectivist society — she maintains that the height of moral virtue is to rationally pursue your own selfish ends.

Socialism as a political ideal is dead. But the morality that spawned it — from each according to his ability, to each according to his need — still haunts us. So long as need and the “public interest” are regarded as moral claim checks on the ability of the productive, the continued growth of the government’s control over the economy and our lives is inevitable.

Those who have read Atlas Shrugged are often struck by the similarity of the events in the novel to the disastrous events reported in the daily news — from the government’s attempt to take over medicine to decaying infrastructure and collapsing bridges to the shackles inflicted on businessmen by Sarbanes-Oxley. The similarity is no accident: the justification for these government programs is the needs of the uninsured, the so-called public interest, and the necessity to curb the selfishness of businessmen. Without a moral revolution, we cannot win true economic or political freedom.

So while Atlas Shrugged has provided millions with inspiration and with some level of appreciation for the virtues of capitalism and the evils of statism, it has not had nearly the influence it could have had, had its underlying ideas gained wider understanding. Though it has changed individual lives, it has not changed the world. But I believe it could — and should.

Imagine a future America guided by the principles found in Atlas Shrugged — a culture of reason, where science is cherished and respected, not banished from biology classrooms and stem-cell research labs — a culture of individualism, in which government is the protector of individual rights, not its primary violator — a culture in which markets are not just regarded as the most effective option of an imperfect lot, but in which laissez-faire capitalism is recognized and venerated as the only moral social system — a culture in which business innovators understand that ambition, productive effort, and wealth creation are not just practical necessities, but moral virtues — a culture in which such innovators, proudly asserting their right to their work, are fully liberated and their productive genius fully applied to the generation of unimaginable economic progress.

This is the world that Atlas Shrugged challenges us to strive for. But in order to get there, the novel’s full philosophic meaning must be grasped. This is precisely why the Ayn Rand Institute exists: to convey Rand’s profound message. And her message is getting out, all the way to professional intellectuals, on campuses and elsewhere across America, who are taking up Ayn Rand’s ideas with a seriousness they have never shown before.

With more and more thinkers giving it the attention it merits, I am confident that the real influence of Atlas Shrugged has yet to be felt.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

The Morality of Moneylending: A Short History

by Yaron Brook | Fall 2007 | The Objective Standard

It seems that every generation has its Shylock—a despised financier blamed for the economic problems of his day. A couple of decades ago it was Michael Milken and his “junk” bonds. Today it is the mortgage bankers who, over the past few years, lent billions of dollars to home buyers—hundreds of thousands of whom are now delinquent or in default on their loans. This “sub-prime mortgage crisis” is negatively affecting the broader financial markets and the economy as a whole. The villains, we are told, are not the borrowers—who took out loans they could not afford to pay back—but the moneylenders—who either deceived the borrowers or should have known better than to make the loans in the first place. And, we are told, the way to prevent such problems in the future is to clamp down on moneylenders and their industries; thus, investigations, criminal prosecutions, and heavier regulations on bankers are in order.

Of course, government policy for decades has been to encourage lenders to provide mortgage loans to lower-income families, and when mortgage brokers have refused to make such loans, they have been accused of “discrimination.” But now that many borrowers are in a bind, politicians are seeking to lash and leash the lenders.

This treatment of moneylenders is unjust but not new. For millennia they have been the primary scapegoats for practically every economic problem. They have been derided by philosophers and condemned to hell by religious authorities; their property has been confiscated to compensate their “victims”; they have been humiliated, framed, jailed, and butchered. From Jewish pogroms whose main purpose was to destroy the records of debt, to the vilification of the House of Rothschild, to the jailing of American financiers — moneylenders have been targets of philosophers, theologians, journalists, economists, playwrights, legislators, and the masses.

Major thinkers throughout history—Plato, Aristotle, Thomas Aquinas, Adam Smith, Karl Marx, and John Maynard Keynes, to name just a few—considered moneylending, at least under certain conditions, to be a major vice. Dante, Shakespeare, Dickens, Dostoyevsky, and modern and popular novelists depict moneylenders as villains.

Today, anti-globalization demonstrators carry signs that read “abolish usury” or “abolish interest.” Although these protestors are typically leftists—opponents of capitalism and anything associated with it—their contempt for moneylending is shared by others, including radical Christians and Muslims who regard charging interest on loans as a violation of God’s law and thus as immoral.

Moneylending has been and is condemned by practically everyone. But what exactly is being condemned here? What is moneylending or usury? And what are its consequences?

Although the term “usury” is widely taken to mean “excessive interest” (which is never defined) or illegal interest, the actual definition of the term is, as the Oxford English Dictionary specifies: “The fact or practice of lending money at interest.” This is the sense in which I use the term throughout this essay.

Usury is a financial transaction in which person A lends person B a sum of money for a fixed period of time with the agreement that it will be returned with interest. The practice enables people without money and people with money to mutually benefit from the wealth of the latter. The borrower is able to use money that he would otherwise not be able to use, in exchange for paying the lender an agreed-upon premium in addition to the principal amount of the loan. Not only do both interested parties benefit from such an exchange; countless people who are not involved in the trade often benefit too—by means of access to the goods and services made possible by the exchange.
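The transaction described above — a principal sum lent for a fixed term and repaid with an agreed-upon premium — can be made concrete with a minimal sketch (an illustration added here, not part of the original essay; the 10 percent rate and one-year term are assumptions):

```python
# Minimal sketch of the basic usury transaction: person A lends person B
# a principal sum for a fixed period, to be repaid with interest.
# Simple (non-compounding) interest is assumed for illustration.

def repayment(principal: float, annual_rate: float, years: float) -> float:
    """Total amount owed to the lender at the end of the term."""
    return principal + principal * annual_rate * years

# B borrows $100 from A for one year at 10% annual interest:
owed = repayment(100.0, 0.10, 1.0)
print(round(owed, 2))  # 110.0
```

The borrower gets the use of $100 he would otherwise not have; the lender earns a $10 premium for parting with his money for a year — the mutual benefit the paragraph above describes.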

Usury enables levels of life-serving commerce and industry that otherwise would be impossible. Consider a few historical examples. Moneylenders funded grain shipments in ancient Athens and the first trade between the Christians in Europe and the Saracens of the East. They backed the new merchants of Italy and, later, of Holland and England. They supported Spain’s exploration of the New World, and funded gold and silver mining operations. They made possible the successful colonization of America. They fueled the Industrial Revolution, supplying the necessary capital to the new entrepreneurs in England, the United States, and Europe. And, in the late 20th century, moneylenders provided billions of dollars to finance the computer, telecommunications, and biotechnology industries.

By taking risks and investing their capital in what they thought would make them the most money, moneylenders and other financiers made possible whole industries — such as those of steel, railroads, automobiles, air travel, air conditioning, and medical devices. Without capital, often provided through usury, such life-enhancing industries would not exist — and homeownership would be impossible for all but the wealthiest people.

Moneylending is the lifeblood of industrial-technological society. When the practice and its practitioners are condemned, they are condemned for furthering and enhancing man’s life on earth.

Given moneylenders’ enormous contribution to human well-being, why have they been so loathed throughout history, and why do they continue to be distrusted and mistreated today? What explains the universal hostility toward one of humanity’s greatest benefactors? And what is required to replace this hostility with the gratitude that is the moneylenders’ moral due?

As we will see, hostility toward usury stems from two interrelated sources: certain economic views and certain ethical views. Economically, from the beginning of Western thought, usury was regarded as unproductive — as the taking of something for nothing. Ethically, the practice was condemned as immoral — as unjust, exploitative, selfish, and contrary to biblical law. The history of usury is a history of confusions, discoveries, and evasions concerning the economic and moral status of the practice. Until usury is recognized as both economically productive and ethically praiseworthy — as both practical and moral — moneylenders will continue to be condemned as villains rather than heralded as the heroes they in fact are.

Our brief history begins with Aristotle’s view on the subject.

Aristotle

The practice of lending money at interest was met with hostility as far back as ancient Greece, and even Aristotle (384–322 b.c.) believed the practice to be unnatural and unjust. In the first book of Politics he writes:

The most hated sort [of moneymaking], and with the greatest reason, is usury, which makes a gain out of money itself, and not from the natural use of it. For money was intended to be used in exchange, but not to increase at interest. And this term Usury which means the birth of money from money, is applied to the breeding of money, because the offspring resembles the parent. Wherefore of all modes of making money this is the most unnatural.1

Aristotle believed that charging interest was immoral because money is not productive. If you allow someone to use your orchard, he argued, the orchard bears fruit every year—it is productive—and from this product the person can pay you rent. But money, Aristotle thought, is merely a medium of exchange. When you loan someone money, he receives no value over and above the money itself. The money does not create more money—it is barren. On this view, an exchange of $100 today for $100 plus $10 in interest a year from now is unjust, because the lender thereby receives more than he gave, and what he gave could not have brought about the 10 percent increase. Making money from money, according to Aristotle, is “unnatural” because money, unlike an orchard, cannot produce additional value.

Aristotle studied under Plato and accepted some of his teacher’s false ideas. One such idea that Aristotle appears to have accepted is the notion that every good has some intrinsic value—a value independent of and apart from human purposes. On this view, $100 will be worth $100 a year from now and can be worth only $100 to anyone, at any time, for any purpose. Aristotle either rejected or failed to consider the idea that loaned money loses value to the lender over time as his use of it is postponed, or the idea that money can be invested in economic activity and thereby create wealth. In short, Aristotle had no conception of the productive role of money or of the moneylender. (Given the relative simplicity of the Greek economy, he may have had insufficient evidence from which to conclude otherwise.) Consequently, he regarded usury as unproductive, unnatural, and therefore unjust.
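The idea Aristotle lacked — that a sum of money today is worth more than the same sum later, because its use can be put off or invested — is captured by the standard present-value calculation. This is a modern illustration added here (with an assumed 10 percent discount rate), not part of the historical discussion:

```python
# Modern illustration of the time value of money -- the concept Aristotle
# either rejected or never considered. A dollar a year from now is worth
# less than a dollar today, because today's dollar can be put to use now.
# Standard compound discounting is assumed.

def present_value(future_sum: float, rate: float, years: int) -> float:
    """Value today of a sum to be received `years` from now,
    discounted at the given annual rate."""
    return future_sum / (1.0 + rate) ** years

# At a 10% rate, $110 received a year from now is worth about $100 today:
pv = present_value(110.0, 0.10, 1)
print(round(pv, 2))  # 100.0
```

On this view, the lender’s 10 percent premium is not “something for nothing”: it compensates him for postponing the use of his own money for a year.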

Note that Aristotle’s conclusion regarding the unjust nature of usury is derived from his view that the practice is unproductive: Since usury creates nothing but takes something—since the lender apparently is parasitic on the borrower—the practice is unnatural and immoral. It is important to realize that, on this theory, there is no dichotomy between the economically practical and the morally permissible; usury is regarded as immoral because it is regarded as impractical.

Aristotle’s economic and moral view of usury was reflected in ancient culture for a few hundred years, but moral condemnation of the practice became increasingly pronounced. The Greek writer Plutarch (46–127 a.d.), for example, in his essay “Against Running In Debt, Or Taking Up Money Upon Usury,” described usurers as “wretched,” “vulture-like,” and “barbarous.”2 In Roman culture, Seneca (ca. 4 b.c.–65 a.d.) condemned usury for the same reasons as Aristotle; Cato the Elder (234–149 b.c.) famously compared usury to murder;3 and Cicero (106–43 b.c.) wrote that “these profits are despicable which incur the hatred of men, such as those of . . . lenders of money on usury.”4

As hostile as the Greeks and Romans generally were toward usury, their hostility was based primarily on their economic view of the practice, which gave rise to and was integrated with their moral view of usury. The Christians, however, were another matter, and their position on usury would become the reigning position in Western thought up to the present day.

The Dark and Middle Ages

The historian William Manchester described the Dark and Middle Ages as

stark in every dimension. Famines and plague, culminating in the Black Death [which killed 1 in 4 people at its peak] and its recurring pandemics, repeatedly thinned the population. . . . Among the lost arts was bricklaying; in all of Germany, England, Holland and Scandinavia, virtually no stone buildings, except cathedrals, were raised for ten centuries. . . . Peasants labored harder, sweated more, and collapsed from exhaustion more often than their animals.5

During the Dark Ages, the concept of an economy had little meaning. Human society had reverted to a precivilized state, and the primary means of trade was barter. Money all but disappeared from European commerce for centuries. There was, of course, some trade and some lending, but most loans were made with goods, and the interest was charged in goods. These barter-based loans, primitive though they were, enabled people to survive the tough times that were inevitable in an agrarian society.6

Yet the church violently opposed even such subsistence-level lending.

During this period, the Bible was considered the basic source of knowledge and thus the final word on all matters of importance. For every substantive question and problem, scholars consulted scripture for answers—and the Bible clearly opposed usury. In the Old Testament, God says to the Jews: “[He that] Hath given forth upon usury, and hath taken increase: shall he then live? he shall not live . . . he shall surely die; his blood shall be upon him.”7 And:

Thou shalt not lend upon usury to thy brother; usury of money; usury of victuals; usury of anything that is lent upon usury.

Unto a stranger thou mayest lend upon usury; but unto thy brother thou shalt not lend upon usury, that the Lord thy God may bless thee in all that thou settest thine hand to in the land whither thou goest to possess it.8

In one breath, God forbade usury outright; in another, He forbade the Jews to engage in usury with other Jews but permitted them to make loans at interest to non-Jews.

Although the New Testament does not condemn usury explicitly, it makes clear that one’s moral duty is to help those in need, and thus to give to others one’s own money or goods without the expectation of anything in return — neither interest nor principal. As Luke plainly states, “lend, hoping for nothing again.”9 Jesus’ expulsion of the moneychangers from the temple vividly conveys the Christian notion that profit is evil, particularly profit generated by moneylending. Christian morality, the morality of divinely mandated altruism, expounds the virtue of self-sacrifice on behalf of the poor and the weak; it condemns self-interested actions, such as profiting — especially profiting from a seemingly exploitative and unproductive activity such as usury.

Thus, on scriptural and moral grounds, Christianity opposed usury from the beginning. And it constantly reinforced its opposition with legal restrictions. In 325 a.d., the Council of Nicaea banned the practice among clerics. Under Charlemagne (768–814 a.d.), the Church extended the prohibition to laymen, defining usury simply as a transaction where more is asked than is given.10 In 1139, the Second Lateran Council in Rome denounced usury as a form of theft, and required restitution from those who practiced it. In the 12th and 13th centuries, strategies that concealed usury were also condemned. The Council of Vienne in 1311 declared that any person who dared to claim that there was no sin in the practice of usury was to be punished as a heretic.

There was, however, a loophole among all these pronouncements: the Bible’s double standard on usury. As we saw earlier, read one way, the Bible permits Jews to lend to non-Jews. This reading had positive consequences. For lengthy periods during the Dark and Middle Ages, both Church and civil authorities allowed Jews to practice usury. Many princes, who required substantial loans in order to pay bills and wage wars, allowed Jewish usurers in their states. Thus, European Jews, who had been barred from most professions and from ownership of land, found moneylending to be a profitable, albeit hazardous, profession.

Although Jews were legally permitted to lend to Christians—and although Christians saw some practical need to borrow from them and chose to do so—Christians resented this relationship. Jews appeared to be making money on the backs of Christians while engaging in an activity biblically prohibited to Christians on punishment of eternal damnation. Christians, accordingly, held these Jewish usurers in contempt. (Important roots of anti-Semitism lie in this biblically structured relationship.)

Opposition to Jewish usurers was often violent. In 1190, the Jews of York were massacred in an attack planned by members of the nobility who owed money to the Jews and sought to erase their debts through violence.11 During this and many other attacks on Jewish communities, accounting records were destroyed and Jews were murdered. As European historian Joseph Patrick Byrne reports:

“Money was the reason the Jews were killed, for had they been poor, and had not the lords of the land been indebted to them, they would not have been killed.”12 But the “lords” were not the only debtors: the working class and underclass apparently owed a great deal, and these violent pogroms gave them the opportunity to destroy records of debt as well as the creditors themselves.13

In 1290, largely as a result of antagonism generated from their moneylending, King Edward I expelled the Jews from England, and they would not return en masse until the 17th century.

From the Christian perspective, there were clearly problems with the biblical pronouncements on usury. How could it be that Jews were prohibited from lending to other Jews but were allowed to lend to Christians and other non-Jews? And how could it be that God permitted Jews to benefit from this practice but prohibited Christians from doing so? These questions perplexed the thinkers of the day. The “solution” of St. Jerome (ca. 347–420) to the conundrum was that it was wrong to charge interest to one’s brothers — and, to Christians, all other Christians were brothers — but it was fine to charge interest to one’s enemy. Usury was perceived as a weapon that weakened the borrower and strengthened the lender; so, if one loaned money at interest to one’s enemy, that enemy would suffer. This belief led Christians to the absurd practice of lending money to the Saracens — their enemies — during the Crusades.14

Like the Greeks and Romans, Christian thinkers viewed certain economic transactions as zero-sum phenomena, in which a winner always entailed a loser. In the practice of usury, the lender seemed to grow richer without effort—so it had to be at the expense of the borrower, who became poorer. But the Christians’ economic hostility toward usury was grounded in and fueled by biblical pronouncements against the practice—and this made a substantial difference. The combination of economic and biblical strikes against usury—with an emphasis on the latter—led the Church to utterly vilify the usurer, who became a universal symbol for evil. Stories describing the moneylenders’ horrible deaths and horrific existence in Hell were common. One bishop put it concisely:

God created three types of men: peasants and other laborers to assure the subsistence of the others, knights to defend them, and clerics to govern them. But the devil created a fourth group, the usurers. They do not participate in men’s labors, and they will not be punished with men, but with the demons. For the amount of money they receive from usury corresponds to the amount of wood sent to Hell to burn them.15

Such was the attitude toward usury during the Dark and early Middle Ages. The practice was condemned primarily on biblical/moral grounds. In addition to the fact that the Bible explicitly forbade it, moneylending was recognized as self-serving. Not only did it involve profit; the profit was (allegedly) unearned and exploitative. Since the moneylender’s gain was assumed to be the borrower’s loss—and since the borrower was often poor—the moneylender was seen as profiting by exploiting the meek and was therefore regarded as evil.

Beginning in the 11th century, however, a conflicting economic reality became increasingly clear—and beginning in the 13th century, the resurgence of respect for observation and logic made that reality increasingly difficult to ignore.

Through trade with the Far East and exposure to the flourishing cultures and economies of North Africa and the Middle East, economic activity was increasing throughout Europe. As this activity created a greater demand for capital and for credit, moneylenders arose throughout Europe to fill the need—and as moneylenders filled the need, the economy grew even faster.

And Europeans were importing more than goods; they were also importing knowledge. They were discovering the Arabic numerical system, double-entry accounting, mathematics, science, and, most importantly, the works of Aristotle.

Aristotle’s ideas soon became the focus of attention in all of Europe’s learning centers, and his writings had a profound effect on the scholars of the time. No longer were young intellectuals satisfied by biblical references alone; they had discovered reason, and they sought to ground their ideas in it as well. They were, of course, still stifled by Christianity, because, although reason had been rediscovered, it was to remain the handmaiden of faith. Consequently, these intellectuals spent most of their time trying to use reason to justify Christian doctrine. But their burgeoning acceptance of reason, and their efforts to justify their ideas accordingly, would ultimately change the way intellectuals thought about everything—including usury.

Although Aristotle himself regarded usury as unjust, recall that he drew this conclusion from what he legitimately thought was evidence in support of it; in his limited economic experience, usury appeared to be unproductive. In contrast, the thinkers of this era were confronted with extensive use of moneylending all around them—which was accompanied by an ever-expanding economy—a fact that they could not honestly ignore. Thus, scholars set out to reconcile the matter rationally. On Aristotelian premises, if usury is indeed unjust and properly illegal, then there must be a logical argument in support of this position. And the ideas that usury is unproductive and that it necessarily consists in a rich lender exploiting a poor borrower were losing credibility.

Public opinion, which had always been against usury, now started to change as the benefits of credit and its relationship to economic growth became more evident. As support for usury increased, however, the Church punished transgressions more severely and grew desperate for theoretical justification for its position. If usury was to be banned, as the Bible commands, then this new world that had just discovered reason would require new, non-dogmatic explanations for why the apparently useful practice is wrong.

Over the next four hundred years, theologians and lawyers struggled to reconcile a rational approach to usury with Church dogma on the subject. They dusted off Aristotle’s argument from the barrenness of money and reasserted that the profit gained through the practice is unnatural and unjust. To this they added that usury entails an artificial separation between the ownership of goods and the use of those same goods, claiming that lending money is like asking two prices for wine—one price for receiving the wine and an additional price for drinking it—one price for its possession and another for its use. Just as this would be wrong with wine, they argued, so it is wrong with money: In the case of usury, the borrower in effect pays $100 for $100, plus another fee, $10, for the use of the money that he already paid for and thus already owns.16

In similar fashion, it was argued that usury generates for the lender profit from goods that no longer belong to him—that is, from goods now owned by the borrower.17 As one Scholastic put it: “[He] who gets fruit from that money, whether it be pieces of money or anything else, gets it from a thing which does not belong to him, and it is accordingly all the same as if he were to steal it.”18

Another argument against usury from the late Middle Ages went to a crucial aspect of the practice that heretofore had not been addressed: the issue of time. Thinkers of this period believed that time was a common good, that it belonged to no one in particular, that it was a gift from God. Thus, they saw usurers as attempting to defraud God.19 As the 12th-century English theologian Thomas of Chobham (1160–1233) wrote: “The usurer sells nothing to the borrower that belongs to him. He sells only time, which belongs to God. He can therefore not make a profit from selling someone else’s property.”20 Or as expressed in a 13th-century manuscript, “Every man stops working on holidays, but the oxen of usury work unceasingly and thus offend God and all the Saints; and, since usury is an endless sin, it should in like manner be endlessly punished.”21

Although the identification of the value of time and its relationship to interest was used here in an argument against usury, this point is actually a crucial aspect of the argument in defense of the practice. Indeed, interest is compensation for a delay in using one’s funds. It is compensation for the usurer’s time away from his money. And although recognition of an individual’s ownership of his own time was still centuries away, this early acknowledgment of the relationship of time and interest was a major milestone.

The Scholastics reached conclusions about usury similar to those of earlier Christian thinkers, but they sought to defend their views not only by reference to scripture, but also by reference to their observational understanding of the economics of the practice. The economic worth of usury—its productivity or unproductivity—became their central concern. The questions became: Is money barren? Does usury have a productive function? What are the facts?

This is the long arm of Aristotle at work. Having discovered Aristotle’s method of observation-based logic, the Scholastics began to focus on reality, and, to the extent that they did, they turned away from faith and away from the Bible. It would take hundreds of years for this perspective to develop fully, but the types of arguments made during the late Middle Ages were early contributions to this crucial development.

As virtuous as this new method was, however, the Scholastics were still coming to the conclusion that usury is unproductive and immoral, and it would not be until the 16th century and the Reformation that usury would be partially accepted by the Church and civil law. For the time being, usury remained forbidden—at least in theory.

Church officials, particularly from the 12th century on, frequently manipulated and selectively enforced the usury laws to bolster the financial power of the Church. When it wanted to keep its own borrowing cost low, the Church enforced the usury prohibition. At other times, the Church itself readily loaned money for interest. Monks were among the earliest moneylenders, offering carefully disguised interest-bearing loans throughout the Middle Ages.

The most common way to disguise loans—and the way in which banking began in Italy and grew to be a major business—was through money exchange. The wide variety of currencies made monetary exchange necessary but difficult, which led to certain merchants specializing in the field. With the rapid growth of international trade, these operations grew dramatically in scale, and merchants opened offices in cities all across Europe and the eastern Mediterranean. These merchants used the complexities associated with exchange of different currencies to hide loans and charge interest. For example, a loan might be made in one currency and returned in another months later in a different location—although the amount returned would be higher (i.e., would include an interest payment), this would be disguised by a new exchange rate. This is one of many mechanisms usurers and merchants invented to circumvent the restrictions. As one commentator notes, “the interest element in such dealings [was] normally . . . hidden by the nature of the transactions either in foreign exchange or as bills of exchange or, frequently, as both.”22 By such means, these merchants took deposits, loaned money, and made payments across borders, thus creating the beginnings of the modern banking system.
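The arithmetic of such a disguised loan is easy to sketch. The following illustration uses invented figures—the currencies, the exchange rate, and the 5 percent term charge are hypothetical, not historical:

```python
# Hypothetical illustration of interest hidden inside a currency exchange.
# A merchant lends 1000 florins in Italy, to be repaid in pounds in
# London three months later. At a fair (spot) rate of 6 florins per
# pound, the loan is worth 1000 / 6 ≈ 166.67 pounds. The contract
# instead fixes repayment at 175 pounds: the surplus is interest, but
# on paper it looks like an exchange-rate outcome.

principal_florins = 1000.0
spot_florins_per_pound = 6.0
contracted_repayment_pounds = 175.0

fair_value_pounds = principal_florins / spot_florins_per_pound
hidden_interest_pounds = contracted_repayment_pounds - fair_value_pounds
implied_rate = hidden_interest_pounds / fair_value_pounds  # over the loan term

print(round(fair_value_pounds, 2))       # 166.67
print(round(hidden_interest_pounds, 2))  # 8.33
print(round(implied_rate, 2))            # 0.05, i.e., 5% for the term
```

The contract itself mentions only sums, places, and dates of repayment; the interest emerges only when the repayment is compared with the fair exchange value—which is precisely why such loans were so difficult to prosecute as usury.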

Although the charges on the merchant credit extended by these early banks were technically interest, and thus usury, both the papal and civic authorities permitted the practice, because the exchange service proved enormously valuable to both. In addition to financing all kinds of trade across vast distances for countless merchants, such lending also financed the Crusades for the Church and various wars for various kings.23 Everyone wanted what usury had to offer, yet no one understood exactly what that was. So while the Church continued to forbid usury and punish transgressors, it also actively engaged in the practice. What the Church deemed moral was apparently not wholly practical, even for the Church, and opportunity became the mother of evasion.

The Church also engaged in opportunistic behavior when it came to restitution. Where so-called “victims” of usury were known, the Church provided them with restitution from the usurer. But in cases where the “victims” were not known, the Church still collected restitution, which it supposedly directed to “the poor” or other “pious purposes.” Clerics were sold licenses empowering them to procure such restitution, and, as a result, the number of usurers prosecuted where there was no identifiable “victim” was far greater than it otherwise would have been. The death of a wealthy merchant often provided the Church with windfall revenue. In the 13th century, the Pope laid claim to the assets of deceased usurers in England. He directed his agents to “inquire concerning living (and dead) usurers and the thing wrongfully acquired by this wicked usury . . . and . . . compel opponents by ecclesiastical censure.”24

Also of note, Church officials regularly ignored the usury of their important friends—such as the Florentine bankers of the Medici family—while demonizing Jewish moneylenders and others. The result was that the image of the merchant usurer was dichotomized into “two disparate figures who stood at opposite poles: the degraded manifest usurer-pawnbroker, as often as not a Jew; and the city father, arbiter of elegance, patron of the arts, devout philanthropist, the merchant prince [yet no less a usurer!].”25

In theory, the Church was staunchly opposed to usury; in practice, however, it was violating its own moral law in myriad ways. The gap between the idea of usury as immoral and the idea of usury as impractical continued to widen as the evidence for its practicality continued to grow. The Church would not budge on the moral status, but it selectively practiced the vice nonetheless.

This selective approach often correlated with the economic times. When the economy was doing well, the Church, and the civil authorities, often looked the other way and let the usurers play. In bad times, however, moneylenders, particularly those who were Jewish, became the scapegoats. (This pattern continues today with anti-interest sentiment exploding whenever there is an economic downturn.)

To facilitate the Church’s selective opposition to usury, and to avoid the stigma associated with the practice, religious and civil authorities created many loopholes in the prohibition. Sometime around 1220, a new term was coined to replace certain forms of usury: the concept of interest.26 Under circumstances where usury was legal, it would now be called the collecting of interest. In cases where the practice was illegal, it would continue to be called usury.27

The modern word “interest” derives from the Latin verb intereo, which means “to be lost.” Interest was considered compensation for a loss that a creditor had incurred through lending. Compensation for a loan was illegal if it was a gain or a profit, but if it was reimbursement for a loss or an expense it was permissible. Interest was, in a sense, “damages,” not profit. Therefore, interest was sometimes allowed, but usury never.

So, increasingly, moneylenders were allowed to charge interest as a penalty for delayed repayment of a loan, provided that the lender preferred repayment to the delay plus interest (i.e., provided that it was seen as a sacrifice). Loans were often structured in advance so that such delays were anticipated and priced, and so the prohibition on usury was avoided. Many known moneylenders and bankers, such as the Belgian Lombards, derived their profits from such penalties—often 100 percent of the loan value.28

Over time, the view of costs or damages for the lender was expanded, and the lender’s time and effort in making the loan were permitted as a reason for charging interest. It even became permissible on occasion for a lender to charge interest if he could show an obvious, profitable alternative use for the money. If, by lending money, the lender suffered from the inability to make a profit elsewhere, the interest was allowed as compensation for the potential loss. Indeed, according to some sources, even risk—economic risk—was viewed as worthy of compensation. Therefore, if there was risk that the debtor would not pay, interest charged in advance was permissible.29

These were major breakthroughs. Recognition of the economic need for advanced calculation of a venture’s risk, and for compensation in advance for that risk, were giant steps in the understanding of and justification for moneylending.
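That risk rationale can be restated in modern break-even terms. The sketch below uses a hypothetical default rate, not a historical one:

```python
# A modern restatement of the medieval risk argument, with invented
# numbers. If a fraction p of borrowers default completely, a lender
# breaks even only when (1 - p) * (1 + r) = 1, i.e. r = p / (1 - p):
# the interest collected on good loans merely offsets losses on bad ones.

def breakeven_rate(default_probability):
    """Interest rate at which expected repayment equals the sum lent."""
    p = default_probability
    return p / (1.0 - p)

# If one borrower in ten defaults, roughly 11% interest is needed
# just to avoid losing money overall:
print(round(breakeven_rate(0.10), 4))  # 0.1111
```

On this logic, interest charged in advance against the risk of non-payment is not a windfall at all—at the break-even rate it is merely the price of staying solvent.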

But despite all these breakthroughs and the fact that economic activity continued to grow during the later Middle Ages, the prohibition on usury was still selectively enforced. Usurers were often forced to pay restitution; many were driven to poverty or excommunicated; and some, especially Jewish moneylenders, were violently attacked and murdered. It was still a very high-risk profession.

Not only were usurers in danger on Earth; they were also threatened with the “Divine justice” that awaited them after death.30 They were considered the devil’s henchmen and were sure to go to Hell. It was common to hear stories of usurers going mad in old age out of fear of what awaited them in the afterlife.

The Italian poet Dante (1265–1321) placed usurers in the seventh rung of Hell, incorporating the traditional medieval punishment for usury, which was eternity with a heavy bag of money around one’s neck: “From each neck there hung an enormous purse, each marked with its own beast and its own colors like a coat of arms. On these their streaming eyes appeared to feast.”31 Usurers in Dante’s Hell are forever weighed down by their greed. Profits, Dante believed, should be the fruits of labor—and usury entailed no actual work. He believed that the deliberate, intellectual choice to engage in such an unnatural action as usury was the worst kind of sin.32

It is a wonder that anyone—let alone so many—defied the law and their faith to practice moneylending. In this sense, the usurers were truly heroic. By defying religion and taking risks—both financial and existential—they made their material lives better. They made money. And by doing so, they made possible economic growth the likes of which had never been seen before. It was thanks to a series of loans from local moneylenders that Gutenberg, for example, was able to commercialize his printing press.33 The early bankers enabled advances in commerce and industry throughout Europe, financing the Age of Exploration as well as the early seeds of technology that would ultimately lead to the Industrial Revolution.

By the end of the Middle Ages, although everyone still condemned usury, few could deny its practical value. Everyone “knew” that moneylending was ethically wrong, but everyone could also see that it was economically beneficial. Its moral status was divinely decreed and appeared to be supported by reason, yet merchants and businessmen experienced its practical benefits daily. The thinkers of the day could not explain this apparent dichotomy. And, in the centuries that followed, although man’s understanding of the economic value of usury would advance, his moral attitude toward the practice would remain one of contempt.

Renaissance and Reformation

The start of the 16th century brought about a commercial boom in Europe. It was the Golden Age of Exploration. Trade routes opened to the New World and expanded to the East, bringing unprecedented trade and wealth to Europe. To fund this trade, to supply credit for commerce and the beginnings of industry, banks were established throughout Europe. Genoese and German bankers funded Spanish and Portuguese exploration and the importation of New World gold and silver. Part of what made this financial activity possible was the new tolerance, in some cities, of usury.

The Italian city of Genoa, for example, had a relatively relaxed attitude toward usury, and moneylenders created many ways to circumvent the existing prohibitions. It was clear to the city’s leaders that the financial activities of its merchants were crucial to Genoa’s prosperity, and the local courts regularly turned a blind eye to the usurious activities of its merchants and bankers. Although the Church often complained about these activities, Genoa’s political importance prevented the Church from acting against the city.

The Catholic Church’s official view toward usury remained unchanged until the 19th century, but the Reformation—which occurred principally in northern Europe—brought about a mild acceptance of usury. (This is likely one reason why southern Europe, which was heavily Catholic, lagged behind the rest of Europe economically from the 17th century onward.) Martin Luther (1483–1546), a leader of the Reformation, believed that usury was inevitable and should be permitted to some extent by civil law. Luther believed in the separation of civil law and Christian ethics. This view, however, resulted not from a belief in the separation of state and religion, but from his belief that the world and man were too corrupt to be guided by Christianity. Christian ethics and the Old Testament commandments, he argued, are utopian dreams, unconnected with political or economic reality. He deemed usury unpreventable and thus a matter for the secular authorities, who should permit the practice and control it.

However, Luther still considered usury a grave sin, and in his later years wrote:

[T]here is on earth no greater enemy of man, after the Devil, than a gripe-money and usurer, for he wants to be God over all men. . . . And since we break on the wheel and behead highwaymen, murderers, and housebreakers, how much more ought we to break on the wheel and kill . . . hunt down, curse, and behead all usurers!34

In other words, usury should be allowed by civil authorities (as in Genoa) because it is inevitable (men will be men), but it should be condemned in the harshest terms by the moral authority. This is the moral-practical dichotomy in action, sanctioned by an extremely malevolent view of man and the universe.

John Calvin (1509–1564), another Reformation theologian, had a more lenient view than Luther. He rejected the notion that usury is actually banned in the Bible: Since the Jews were allowed to charge interest of strangers, he reasoned, God cannot be against usury as such. It would be fantastic, Calvin thought, to imagine that by “strangers” God meant the enemies of the Jews; and it would be most unchristian to legalize discrimination. According to Calvin, usury does not always conflict with God’s law, so not all usurers need to be damned. There is a difference, he believed, between taking usury in the course of business and setting up business as a usurer. If a person collects interest on only one occasion, he is not a usurer. The crucial issue, Calvin thought, is the motive. If the motive is to help others, usury is good, but if the motive is personal profit, usury is evil.

Calvin claimed that the moral status of usury should be determined by the golden rule. It should be allowed only insofar as it does not run counter to Christian fairness and charity. Interest should never be charged to a man in urgent need, or to a poor man; the “welfare of the state” should always be considered. But it could be charged in cases where the borrower is wealthy and the interest will be used for Christian good. Thus he concluded that interest could neither be universally condemned nor universally permitted—but that, to protect the poor, a maximum rate should be set by law and never exceeded.35

Although the religious authorities did little to free usury from the taint of immorality, other thinkers were significantly furthering the economic understanding of the practice. In a book titled Treatise on Contracts and Usury, the French jurist Molinaeus (1500–1566) made important contributions toward liberating usury from Scholastic rationalism.36 By this time, there was sufficient evidence for a logical thinker to see the merits of moneylending. Against the argument that money is barren, Molinaeus observed that everyday experience of business life showed that the use of any considerable sum of money yields a service of importance. He argued, by reference to observation and logic, that money, assisted by human effort, does “bear fruit” in the form of new wealth; the money enables the borrower to create goods that he otherwise would not have been able to create. Just as Galileo would later apply Aristotle’s method of observation and logic in refuting Aristotle’s specific ideas in physics, so Molinaeus used Aristotle’s method in refuting Aristotle’s basic objection to usury. Unfortunately, like Galileo, Molinaeus was to suffer for his ideas: The Church forced him into exile and banned his book. Nevertheless, his ideas on usury spread throughout Europe and had a significant impact on future discussions of moneylending.37

The prevailing view that emerged in the late 16th century (and that, to a large extent, is still with us today) is that money is not barren and that usury plays a productive role in the economy. Usury, however, is unchristian; it is motivated by a desire for profit and can be used to exploit the poor. It can be practical, but it is not moral; therefore, it should be controlled by the state and subjected to regulation in order to restrain the rich and protect the poor.

This Christian view has influenced almost all attitudes about usury since. In a sense, Luther and Calvin are responsible for today’s so-called “capitalism.” They are responsible for the guilt many people feel from making money and the guilt that causes people to eagerly regulate the functions of capitalists. Moreover, the Protestants were the first to explicitly assert and sanction the moral-practical dichotomy—the idea that the moral and the practical are necessarily at odds. Because of original sin, the Protestants argued, men are incapable of being good, and thus concessions must be made in accordance with their wicked nature. Men must be permitted to some extent to engage in practical matters such as usury, even though such practices are immoral.

In spite of its horrific view of man, life, and reality, Luther and Calvin’s brand of Christianity allowed individuals who were not intimidated by Christian theology to practice moneylending to some extent without legal persecution. Although still limited by government constraints, the chains were loosened, and this enabled economic progress through the periodic establishment of legal rates of interest.

The first country to establish a legal rate of interest was England in 1545 during the reign of Henry VIII. The rate was set at 10 percent. However, seven years later it was repealed, and usury was again completely banned. In an argument in 1571 to reinstate the bill, Mr. Molley, a lawyer representing the business interests in London, said before the House of Commons:

Since to take reasonably, or so that both parties might do good, was not hurtful; . . . God did not so hate it, that he did utterly forbid it, but to the Jews amongst themselves only, for that he willed they should lend as Brethren together; for unto all others they were at large; and therefore to this day they are the greatest Usurers in the World. But be it, as indeed it is, evil, and that men are men, no Saints, to do all these things perfectly, uprightly and Brotherly; . . . and better may it be born to permit a little, than utterly to take away and prohibit Traffick; which hardly may be maintained generally without this.

But it may be said, it is contrary to the direct word of God, and therefore an ill Law; if it were to appoint men to take Usury, it were to be disliked; but the difference is great between that and permitting or allowing, or suffering a matter to be unpunished.38

Observe that while pleading for a bill permitting usury—on the grounds that it is necessary (“Traffick . . . hardly may be maintained generally without [it]”)—Molley concedes that it is evil. This is the moral-practical dichotomy stated openly and in black-and-white terms, and it illustrates the general attitude of the era. The practice was now widely accepted as practical but still regarded as immoral, and the thinkers of the day grappled with this new context.

One of England’s most significant 17th-century intellectuals, Francis Bacon (1561–1626), realized the benefits that moneylending offered to merchants and traders by providing them with capital. He also recognized the usurer’s value in providing liquidity to consumers and businesses. And, although Bacon believed that the moral ideal would be lending at 0 percent interest, as the Bible requires, he, like Luther, saw this as utopian and held that “it is better to mitigate usury by declaration than suffer it to rage by connivance.” Bacon therefore proposed two rates of usury: one set at a maximum of 5 percent and allowable to everyone; and a second rate, higher than 5 percent, allowable only to certain licensed persons and lent only to known merchants. The license was to be sold by the state for a fee.39

Again, interest and usury were pitted against morality. But Bacon saw moneylending as so important to commerce that the legal rate of interest had to offer sufficient incentive to attract lenders. Bacon recognized that a higher rate of interest is economically justified by the nature of certain loans.40

The economic debate had shifted from whether usury should be legal to whether and at what level government should set the interest rate (a debate that, of course, continues to this day, with the Fed setting certain interest rates). As one scholar put it: “The legal toleration of interest marked a revolutionary change in public opinion and gave a clear indication of the divorce of ethics from economics under the pressure of an expanding economic system.”41

In spite of this progress, artists continued to compare usurers to idle drones, spiders, and bloodsuckers, and playwrights personified the money-grubbing usurers in characters such as Sir Giles Overreach, Messrs. Mammon, Lucre, Hoard, Gripe, and Bloodhound. Probably the greatest work of art vilifying the usurer was written during this period—The Merchant of Venice by Shakespeare (1564–1616), which immortalized the character of the evil Jewish usurer, Shylock.

In The Merchant of Venice, Bassanio, a poor nobleman, needs cash in order to court the heiress, Portia. Bassanio goes to a Jewish moneylender, Shylock, for a loan, bringing his wealthy friend, Antonio, to stand as surety for it. Shylock, who has suffered great rudeness from Antonio in business, demands as security for the loan not Antonio’s property, which he identifies as being at risk, but a pound of his flesh.42

The conflict between Shylock and Antonio incorporates all the elements of the arguments against usury. Antonio, the Christian, lends money and demands no interest. As Shylock describes him:

Shy. [Aside.] How like a fawning publican he looks!
I hate him for he is a Christian;
But more for that in low simplicity
He lends out money gratis, and brings down
The rate of usance here with us in Venice.
If I can catch him once upon the hip,
I will feed fat the ancient grudge I bear him.
He hates our sacred nation, and he rails,
Even there where merchants most do congregate,
On me, my bargains, and my well-won thrift,
Which he calls interest. Cursed be my tribe,
If I forgive him!43

Shylock takes usury. He is portrayed as the lowly, angry, vengeful, and greedy Jew. When his daughter elopes and takes her father’s money with her, he cries, “My daughter! O my ducats! Oh my daughter!”44 —not sure for which he cares more.

It is clear that Shakespeare understood the issues involved in usury. Note Shylock’s (legitimate) hostility toward Antonio because Antonio loaned money without charging interest and thus brought down the market rate of interest in Venice. Even Aristotle’s “barren money” argument is present. Antonio, provoking Shylock, says:

If thou wilt lend this money, lend it not
As to thy friends,—for when did friendship take
A breed for barren metal of his friend?—
But lend it rather to thine enemy:
Who if he break, thou mayst with better face
Exact the penalty.45

Friends do not take “breed for barren metal” from friends; usury is something one takes only from an enemy.

Great art plays a crucial role in shaping popular attitudes, and Shakespeare’s depiction of Shylock, like Dante’s depiction of usurers, concretized for generations the dichotomous view of moneylending and thus helped entrench the alleged link between usury and evil. As late as 1600, medieval moral and economic theories were alive and well, even if they were increasingly out of step with the economic practice of the time.

The Enlightenment

During the Enlightenment, the European economy continued to grow, culminating with the Industrial Revolution. This growth involved increased activity in every sector of the economy. Banking houses were established to provide credit to a wide array of economic endeavors. Baring Brothers and the House of Rothschild were just the largest of the many banks that would ultimately help fuel the Industrial Revolution, funding railroads, factories, ports, and industry in general.

Economic understanding of the important productive role of usury continued to improve over the next four hundred years. Yet, the moral evaluation of usury would change very little. The morality of altruism—the notion that self-sacrifice is moral and that self-interest is evil—was embraced and defended by many Enlightenment intellectuals and continued to hamper the acceptability of usury. After all, usury is a naked example of the pursuit of profit—which is patently self-interested. Further, it still seemed to the thinkers of the time that usury could be a zero-sum transaction—that a rich lender might profit at the expense of a poor borrower. Even a better conception of usury—let alone the misconception of it being a zero-sum transaction—is anathema to altruism, which demands the opposite of personal profit: self-sacrifice for the sake of others.

In the 17th and 18th centuries, northern Europe was home to a new generation of scholars who recognized that usury served an essential economic purpose, and that it should be allowed freely. Three men made significant contributions in this regard.

Claudius Salmasius (1588–1653), a French scholar teaching in Holland, thoroughly refuted the claims about the “barrenness” of moneylending; he showed the important productive function of usury and even suggested that there should be more usurers, since competition between them would reduce the rate of interest. Other Dutch scholars agreed with him, and, partially as a result of this, Holland became especially tolerant of usury, making it legal at times. Consequently, the leading banks of the era were found in Holland, and it became the world’s commercial and financial center, the wealthiest state in Europe, and the envy of the world.46

Anne-Robert-Jacques Turgot (1727–1781), a French economist, was the first to identify usury’s connection to property rights. He argued that a creditor has the right to dispose of his money in any way he wishes and at whatever rate the market will bear, because it is his property. Turgot was also the first economist to fully understand that the passing of time changes the value of money. He saw the difference between the present value and the future value of money—concepts that are at the heart of any modern financial analysis. According to Turgot: “If . . . two gentlemen suppose that a sum of 1000 Francs and a promise of 1000 Francs possess exactly the same value, they put forward a still more absurd supposition; for if these two things were of equal value, why should any one borrow at all?”47 Turgot even repudiated the medieval notion that time belonged to God. Time, he argued, belongs to the individual who uses it and therefore time could be sold.48
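Turgot’s distinction between present and future value can be made concrete with a small calculation. The sketch below is illustrative only; the 5 percent rate is a hypothetical figure, not one drawn from Turgot:

```python
def present_value(future_sum, rate, years):
    """Discount a sum due in the future to its value today.

    A promise of `future_sum` payable after `years` is worth less than
    the same sum in hand now, because money in hand can be lent out at
    `rate` in the meantime.
    """
    return future_sum / (1 + rate) ** years

# Turgot's point: a promise of 1000 francs in a year is worth less
# than 1000 francs today. At a hypothetical 5% annual market rate:
pv = present_value(1000, 0.05, 1)
print(round(pv, 2))  # 952.38 -- the most a rational lender would pay now
```

The gap between the 952.38 and the 1000 is precisely the interest: the price of time, which Turgot insisted belongs to the individual and can therefore be sold.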

During the same period, the British philosopher Jeremy Bentham (1748–1832) wrote a treatise entitled Defence of Usury. Bentham argued that any restrictions on interest rates were economically harmful because they restricted an innovator’s ability to raise capital. Since innovative trades inherently involved high risk, they could only be funded at high interest rates. Limits on permissible interest rates, he argued, would kill innovation—the engine of growth. Correcting another medieval error, Bentham also showed that restrictive usury laws actually harmed the borrowers. Such restrictions cause the credit markets to shrink while demand for credit remains the same or goes up; thus, potential borrowers have to seek loans in an illegal market where they would have to pay a premium for the additional risk of illegal trading.

Bentham’s most important contribution was his advocacy of contractual freedom:

My neighbours, being at liberty, have happened to concur among themselves in dealing at a certain rate of interest. I, who have money to lend, and Titus, who wants to borrow it of me, would be glad, the one of us to accept, the other to give, an interest somewhat higher than theirs: Why is the liberty they exercise to be made a pretence for depriving me and Titus of ours.49

This was perhaps the first attempt at a moral defense of usury.

Unfortunately, Bentham and his followers undercut this effort with their philosophy of utilitarianism, according to which rights, liberty, and therefore moneylending, were valuable only insofar as they increased “social utility”: “the greatest good for the greatest number.” Bentham famously dismissed individual rights—the idea that each person should be free to act on his own judgment—as “nonsense upon stilts.”50 He embraced the idea that the individual has a “duty” to serve the well-being of the collective, or, as he put it, the “general mass of felicity.”51 Thus, in addition to undercutting Turgot’s major achievement, Bentham also doomed the first effort at a moral defense of usury—which he himself had proposed.

An explicitly utilitarian attempt at a moral defense of usury was launched in 1774 in the anonymously published Letters on Usury and Interest. The goal of the book was to explain why usury should be accepted in England of the 18th century, and why this acceptance did not contradict the Church’s teachings. The ultimate reason, the author argued, is one of utility:

Here, then, is a sure and infallible rule to judge of the lawfulness of a practice. Is it useful to the State? Is it beneficial to the individuals that compose it? Either of these is sufficient to obtain a tolerance; but both together vest it with a character of justice and equity. . . . In fact, if we look into the laws of different nations concerning usury, we shall find that they are all formed on the principle of public utility. In those states where usury was found hurtful to society, it was prohibited. In those where it was neither hurtful nor very beneficial, it was tolerated. In those where it was useful, it was authorized. In ours, it is absolutely necessary.52

And:

[T]he practice of lending money to interest is in this nation, and under this constitution, beneficial to all degrees; therefore it is beneficial to society. I say in this nation; which, as long as it continues to be a commercial one, must be chiefly supported by interest; for interest is the soul of credit and credit is the soul of commerce.53

Although the utilitarian argument in defense of usury contains some economic truth, it is morally bankrupt. Utilitarian moral reasoning for the propriety of usury depends on the perceived benefits of the practice to the collective or the nation. But what happens, for example, when usury in the form of sub-prime mortgage loans creates distress for a significant number of people and financial turmoil in some markets? How can it be justified? Indeed, it cannot. The utilitarian argument collapses in the face of any such economic problem, leaving moneylenders exposed to the wrath of the public and to the whips and chains of politicians seeking a scapegoat for the crisis.

Although Salmasius, Turgot, and Bentham made significant progress in understanding the economic and political value of usury, not all their fellow intellectuals followed suit. The father of economics, Adam Smith (1723–1790), wrote: “As something can everywhere be made by the use of money, something ought everywhere to be paid for the use of it.”54 Simple and elegant. Yet, Smith also believed that the government must control the rate of interest. He believed that unfettered markets would create excessively high interest rates, which would hurt the economy—which, in turn, would harm society.55 Because Smith thought that society’s welfare was the only justification for usury, he held that the government must intervene to correct the errors of the “invisible hand.”

Although Smith was a great innovator in economics, philosophically, he was a follower. He accepted the common philosophical ideas of his time, including altruism, of which utilitarianism is a form. Like Bentham, he justified capitalism only through its social benefits. If his projections of what would come to pass in a fully free market amounted to a less-than-optimal solution for society, then he advocated government intervention. Government intervention is the logical outcome of any utilitarian defense of usury.

(Smith’s idea that there must be a “perfect” legal interest rate remains with us to this day. His notion of such a rate was that it should be slightly higher than the market rate—what he called the “golden mean.” The chairman of the Federal Reserve is today’s very visible hand, constantly searching for the “perfect” rate or “golden mean” by alternately establishing artificially low and artificially high rates.)

Following Bentham and Smith, all significant 19th-century economists—such as David Ricardo, Jean-Baptiste Say, and John Stuart Mill—considered the economic importance of usury to be obvious and argued that interest rates should be determined by freely contracting individuals. These economists, followed later by the Austrians—especially Carl Menger, Eugen von Böhm-Bawerk, and Ludwig von Mises—developed sound theories of the productivity of interest and gained a significant economic understanding of its practical role. But the moral-practical dichotomy inherent in their altruistic, utilitarian, social justification for usury remained in play, and the practice continued to be morally condemned and thus heavily regulated if not outlawed.

The 19th and 20th Centuries

Despite their flaws, the thinkers of the Enlightenment had created sufficient economic understanding to fuel the Industrial Revolution throughout the 19th century. Economically and politically, facts and reason had triumphed over faith; a sense of individualism had taken hold; the practicality of the profit motive had become clear; and, relative to eras past, the West was thriving.

Morally and philosophically, however, big trouble was brewing. As capitalism neared a glorious maturity, a new, more consistent brand of altruism, created by Kant, Hegel, and their followers, was sweeping Europe. At the political-economic level, this movement manifested itself in the ideas of Karl Marx (1818–1883).

Marx, exploiting the errors of the Classical economists, professed the medieval notion that all production is a result of manual labor; but he also elaborated, claiming that laborers do not retain the wealth they create. The capitalists, he said, take advantage of their control over the means of production—secured to them by private property—and “loot” the laborers’ work. According to Marx, moneylending and other financial activities are not productive, but exploitative; moneylenders exert no effort, do no productive work, and yet reap the rewards of production through usury.56 As one 20th-century Marxist put it: “The major argument against usury is that labor constitutes the true source of wealth.”57 Marx adopted all the medieval clichés, including the notion that Jews are devious, conniving money-grubbers.

What is the profane basis of Judaism? Practical need, self-interest. What is the worldly cult of the Jew? Huckstering. What is his worldly god? Money.

Money is the jealous god of Israel, beside which no other god may exist. Money abases all the gods of mankind and changes them into commodities.58

Marx believed that the Jews were evil—not because of their religion, as others were clamoring at the time—but because they pursued their own selfish interests and sought to make money. And Marxists were not alone in their contempt for these qualities.

Artists who, like Marx, resented capitalists in general and moneylenders in particular, dominated Western culture in the 19th century. In Dickens’s A Christmas Carol, we see the money-grubbing Ebenezer Scrooge. In Dostoyevsky’s Crime and Punishment, the disgusting old lady whom Raskolnikov murders is a usurer. And in The Brothers Karamazov, Dostoyevsky writes:

It was known too that the young person had . . . been given to what is called “speculation,” and that she had shown marked abilities in the direction, so that many people began to say that she was no better than a Jew. It was not that she lent money on interest, but it was known, for instance, that she had for some time past, in partnership with old Karamazov, actually invested in the purchase of bad debts for a trifle, a tenth of their nominal value, and afterwards had made out of them ten times their value.59

In other words, she was what in the 1980s became known as a “vulture” capitalist buying up distressed debt.

Under Marx’s influential ideas, and given the culture-wide contempt for moneylenders, the great era of capitalism—of thriving banks and general financial success—was petering out. Popular sentiment concerning usury was reverting to a dark-ages type of hatred. Marx and company put the moneylenders back into Dante’s Inferno, and to this day they have not been able to escape.

The need for capital, however, would not be suppressed by the label “immoral.” People still sought to start businesses and purchase homes; thus usury was still seen as practical. Like the Church of the Middle Ages, people found themselves simultaneously condemning the practice and engaging in it.

Consequently, just as the term “interest” had been coined in the Middle Ages to facilitate the Church’s selective opposition to usury and to avoid the stigma associated with the practice, so modern man employed the term for the same purpose. The concept of moneylending was again split into two allegedly different concepts: the charging of “interest” and the practice of “usury.” Lending at “interest” came to designate lower-premium, lower-risk, less-greedy lending, while “usury” came to mean specifically higher-premium, higher-risk, more-greedy lending. This artificial division enabled the wealthier, more powerful, more influential people to freely engage in moneylending with the one hand, while continuing to condemn the practice with the other. Loans made to lower-risk, higher-income borrowers would be treated as morally acceptable, while those made to higher-risk, lower-income borrowers would remain morally contemptible. (The term “usury” is now almost universally taken to mean “excessive” or illegal premium on loans, while the term “interest” designates tolerable or legal premium.)

From the 19th century onward, in the United States and in most other countries, usury laws would restrict the rates of interest that could be charged on loans, and there would be an ongoing battle between businessmen and legislators over what those rates should be. These laws, too, are still with us.

As Bentham predicted, such laws harm not only lenders but also borrowers, who are driven into the shadows where they procure shady and often illegal loans in order to acquire the capital they need for their endeavors. And given the extra risk posed by potential legal complications for the lenders, these loans are sold at substantially higher interest rates than they would be if moneylending were fully legal and unregulated.

In the United States, demand for high-risk loans has always existed, and entrepreneurs have always arisen to service the demand for funds. They have been scorned, condemned to Hell, assaulted, jailed, and generally treated like the usurers of the Middle Ages—but they have relentlessly supplied the capital that has enabled Americans to achieve unprecedented levels of productiveness and prosperity.

The earliest known advertisement for a small-loan service in an American newspaper appeared in the Chicago Tribune in November 1869. By 1872, the industry was prospering. Loans collateralized by furniture, diamonds, warehouse receipts, houses, and pianos (called chattel loans) were available. The first salary loan office (offering loans made in advance of a paycheck) was opened by John Mulholland in Kansas City in 1893. Within fifteen years he had offices all across the country. The going rate on a chattel loan was 10 percent a month for loans under $50, and 5–7 percent a month for larger loans. Some loans were made at very high rates, occasionally over 100 percent a month.60

Rates were so high because defaults were frequent. With high rates in play, the losses on loans in default could ordinarily be absorbed as a cost of doing business. In this respect, the 19th-century small-loan business was a precursor of the 20th-century “junk” bond business or the 21st-century sub-prime mortgage lender. However, unlike the “junk” bond salesman, who had recourse to the law in cases of default or bankruptcy, these small-loan men operated on the fringes of society—and often outside the law. Because of the social stigmatization and legal isolation of the creditors, legal recourse against a defaulting borrower was generally unavailable to a usurer. Yet these back-alley loans provided a valuable service—one for which there was great demand—and they enabled many people to start their own businesses or improve their lives in other ways.

Of course, whereas most of these borrowers paid off their loans and succeeded in their endeavors, many of them got into financial trouble—and the latter cases, not the former, were widely publicized. The moneylenders were blamed, and restrictions were multiplied and tightened.

In spite of all the restrictions, laws, and persecutions, the market found ways to continue. In 1910, Arthur Morris set up the first bank in America with the express purpose of providing small loans to individuals at interest rates based on the borrower’s “character and earning power.” In spite of the usury limit of 6 percent that existed in Virginia at the time, Morris’s bank found ways, as did usurers in the Middle Ages, to make loans at what appeared to be a 6 percent interest rate while the actual rates were much higher and more appropriate. For instance, a loan for $100 might be made as follows: A commission of 2 percent plus the 6 percent legal rate would be taken off the top in advance; thus the borrower would receive $92. Then he would repay the loan at $2 a week over fifty weeks. The effective compound annual interest rate on such a loan was in excess of 18 percent. And penalties would be assessed for any delinquent payments.61 Such camouflaged interest rates were a throwback to the Middle Ages, when bankers developed innovative ways to circumvent the restrictions on usury established by the Church. And, as in the Middle Ages, such lending became common as the demand for capital was widespread. Consequently, these banks multiplied and thrived—for a while.
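The effective rate on the Morris Plan loan described above can be recovered directly from its cash flows: the borrower walks away with $92 and repays $2 a week for fifty weeks. A minimal sketch (the bisection solver and its tolerances are my own device for the arithmetic, not Morris’s method):

```python
def effective_annual_rate(principal_received, payment, n_payments,
                          periods_per_year=52):
    """Find the periodic rate that equates the present value of the
    payments to the cash actually received, then annualize it."""
    def pv(rate):
        # Present value of the repayment stream at a given periodic rate.
        return sum(payment / (1 + rate) ** k
                   for k in range(1, n_payments + 1))

    lo, hi = 0.0, 1.0            # bracket the periodic (weekly) rate
    for _ in range(100):         # bisect: pv() falls as the rate rises
        mid = (lo + hi) / 2
        if pv(mid) > principal_received:
            lo = mid             # rate too low: payments worth too much
        else:
            hi = mid
    weekly = (lo + hi) / 2
    return (1 + weekly) ** periods_per_year - 1

# $100 face value, $8 taken off the top, repaid at $2/week for 50 weeks:
rate = effective_annual_rate(92.0, 2.0, 50)
print(f"{rate:.1%}")  # about 18.8% -- "in excess of 18 percent"
```

The trick, as the text notes, is that the borrower pays interest on the full $100 while never having use of more than $92, and the outstanding balance shrinks every week; hence a nominal 6 percent becomes nearly 19 percent in effect.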

(Today’s credit card industry is the successor to such institutions. Credit card lenders charge high interest rates to high-risk customers, and penalties for delinquency. And borrowers use these loans for consumption as well as to start or fund small businesses. And, of course, the credit card industry is regularly attacked for its high rates of interest and its “exploitation” of customers. To this day, credit card interest rates are restricted by usury laws, and legislation attempting to further restrict these rates is periodically introduced.)

In 1913, in New York, a moneylender who issued loans to people who could not get them at conventional banks appeared before a court on the charge of usury. In the decision, the judge wrote:

You are one of the most contemptible usurers in your unspeakable business. The poor people must be protected from such sharks as you, and we must trust that your conviction and sentence will be a notice to you and all your kind that the courts have found a way to put a stop to usury. Men of your type are a curse to the community, and the money they gain is blood money.62

This ruling is indicative of the general attitude toward usurers at the time. The moral-practical dichotomy was alive and kicking, and the moneylenders were taking the blows. Although their practical value to the economy was now clear, their moral status as evil was still common “sense.” And the intellectuals of the day would only exacerbate the problem.

The most influential economist of the 20th century was John Maynard Keynes (1883–1946), whose ideas not only shaped the theoretical field of modern economics but also played a major role in shaping government policies in the United States and around the world. Although Keynes allegedly rejected Marx’s ideas, he shared Marx’s hatred of the profit motive and usury. He also agreed with Adam Smith that government must control interest rates; otherwise investment and thus society would suffer. And he revived the old Reformation idea that usury is a necessary evil:

When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. . . . But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to everyone that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.63

Although Keynes and other economists and intellectuals of the day recognized the need for usury, they universally condemned the practice and its practitioners as foul and unfair. Thus, despite widespread recognition that usury is a boon to the economy, when the Great Depression occurred in the United States, the moneylenders on Wall Street were blamed. As Franklin Delano Roosevelt put it:

The rulers of the exchange of mankind’s goods have failed, through their own stubbornness and their own incompetence, have admitted failure, and have abdicated. Practices of the unscrupulous money changers stand indicted in the court of public opinion, rejected by the hearts and minds of men . . . [We must] apply social values more noble than mere monetary profit.64

And so the “solution” to the problems of the Great Depression was greater government intervention throughout the economy—especially in the regulation of interest and the institutions that deal in it. After 1933, banks were restricted in all aspects of their activity: the interest rates they could pay their clients, the rates they could charge, and to whom they could lend. In 1934, the greatest bank in American history, J. P. Morgan, was broken up by the government into several companies. The massive regulations and coercive restructurings of the 1930s illustrate the continuing contempt for the practice of taking interest on loans and the continuing distrust of those—now mainly bankers—who engage in this activity. (We paid a dear price for those regulations with the savings and loan crisis of the 1970s and 1980s, which cost American taxpayers hundreds of billions of dollars.65 And we continue to pay the price of these regulations in higher taxes, greater financial costs, lost innovation, and stifled economic growth.)

The 21st Century

From ancient Greece and Rome, to the Dark and Middle Ages, to the Renaissance and Reformation, to the 19th and 20th centuries, moneylending has been morally condemned and legally restrained. Today, at the dawn of the 21st century, moneylending remains a pariah.

One of the latest victims of this moral antagonism is the business of providing payday loans. This highly popular and beneficial service has been branded with the scarlet letter “U”; consequently, despite the great demand for these loans, the practice has been relegated to the fringes of society and the edge of the law. These loans carry annualized interest rates as high as 1000 percent, because they are typically very short term (i.e., to be paid back on payday). By some estimates there are 25,000 payday stores across America, and it is “a $6 billion dollar industry serving 15 million people every month.”66 The institutions issuing these loans have found ways, just as banks always have, to circumvent state usury laws. Bank regulators have severely restricted the ability of community banks to offer payday loans or even to work with payday loan offices, more than 13 states have banned them altogether, and Congress is currently looking at ways to ban all payday loans.67 This is in spite of the fact that demand for these loans is soaring, that they serve a genuine economic need, and that they provide real value to low-income households. As the Wall Street Journal reports, “Georgia outlawed payday loans in 2004, and thousands of workers have since taken to traveling over the border to find payday stores in Tennessee, Florida and South Carolina. So the effect of the ban has been to increase consumer credit costs and inconvenience for Georgia consumers.”68
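The eye-popping annualized figures follow mechanically from stretching a small short-term charge over a full year. A sketch with hypothetical numbers (the $15-per-$100 fee and 14-day term are illustrative assumptions of mine, not figures from the article):

```python
def annualized_rate(fee_per_dollar, term_days, compound=True):
    """Annualize the premium on a short-term loan.

    fee_per_dollar: e.g. 0.15 for a $15 charge per $100 borrowed.
    """
    periods = 365 / term_days        # how many loan terms fit in a year
    if compound:
        return (1 + fee_per_dollar) ** periods - 1   # compounded
    return fee_per_dollar * periods                  # simple APR

# A hypothetical $15-per-$100 fee on a 14-day loan:
print(f"{annualized_rate(0.15, 14, compound=False):.0%}")  # 391% simple APR
```

A fee most borrowers would describe as modest in dollar terms thus translates into a triple-digit (or, compounded, four-digit) annualized rate, which is the number critics cite.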

A story in the LA Weekly, titled “Shylock 2000”—ignoring the great demand for payday loans, ignoring the economic value they provide to countless borrowers, and ignoring the fact that the loans are made by mutual consent to mutual advantage—proceeded to describe horrific stories of borrowers who have gone bankrupt. The article concluded: “What’s astonishing about this story is that, 400 years after Shakespeare created the avaricious lender Shylock, such usury may be perfectly legal.”69

What is truly astonishing is that after centuries of moneylenders providing capital and opportunities to billions of willing people on mutually agreed upon terms, the image of these persistent businessmen has not advanced beyond that of Shylock.

The “Shylocks” du jour, of course, are the sub-prime mortgage lenders, with whom this article began. These lenders provided mortgages designed to enable low-income borrowers to buy homes. Because the default rate among these borrowers is relatively high, the loans are recognized as high-risk transactions and are sold at correspondingly high rates of interest. Although it is common knowledge that many of these loans are now in default, and although it is widely believed that the lenders are to blame for the situation, what is not well known is, as Paul Harvey would say, “the rest of the story.”

The tremendous growth in this industry is a direct consequence of government policy. Since the 1930s, the U.S. government has encouraged home ownership among all Americans—but especially among those in lower income brackets. To this end, the government created the Federal Home Loan Banks (which are exempt from state and local income taxes) to provide incentives for smaller banks to make mortgage loans to low-income Americans. Congress passed the Community Reinvestment Act, which requires banks to invest in their local communities, including by providing mortgage loans to people in low-income brackets. The government created Fannie Mae and Freddie Mac, both of which have a mandate to issue and guarantee mortgage loans to low-income borrowers.

In recent years, all these government schemes and more (e.g., artificially low interest rates orchestrated by the Fed) led to a frenzy of borrowing and lending. The bottom line is that the government has artificially mitigated lenders’ risk, and it has done so on the perverse, altruistic premise that “society” has a moral duty to increase home ownership among low-income Americans. The consequence of this folly has been a significant increase in delinquent loans and foreclosures, which has led to wider financial problems at banks and at other institutions that purchased the mortgages in the secondary markets.

Any objective evaluation of the facts would place the blame for this disaster on the government policies that caused it. But no—just as in the past, the lenders are being blamed and scapegoated.

Although some of these lenders clearly did take irrational risks on many of these loans, that should be their own problem, and they should have to suffer the consequences of their irrational actions—whether significant financial loss or bankruptcy. (The government most certainly should not bail them out.) However, without the perception of reduced risk provided by government meddling in the economy, far fewer lenders would have been so frivolous.

Further, the number of people benefiting from sub-prime mortgage loans, which make it possible for many people to purchase a home for the first time, is in the millions—and the vast majority of these borrowers are not delinquent or in default; rather, they are paying off their loans and enjoying their homes, a fact never mentioned by the media.

It should also be noted that, whereas the mortgage companies are blamed for all the defaulting loans, no blame is placed on the irresponsible borrowers who took upon themselves debt that they knew—or should have known—they could not handle.

After four hundred years of markets proving the incredible benefits generated by moneylending, intellectuals, journalists, and politicians still rail against lenders and their institutions. And, in spite of all the damage done by legal restrictions on interest, regulation of moneylenders, and government interference in financial markets, whenever there is an economic “crisis,” there is invariably a wave of demand for more of these controls, not less.

Moneylenders are still blamed for recessions; they are still accused of being greedy and of taking advantage of the poor; they are still portrayed on TV and in movies as slick, murderous villains; and they are still distrusted by almost everyone. (According to a recent poll, only 16 percent of Americans have substantial confidence in the American financial industry.)70 Thus, it should come as no surprise that the financial sector is the most regulated, most controlled industry in America today.

But what explains the ongoing antipathy toward, distrust of, and coercion against these bearers of capital and opportunity? What explains the modern anti-moneylending mentality? Why are moneylenders today held in essentially the same ill repute as they were in the Middle Ages?

The explanation for this lies in the fact that, fundamentally, 21st-century ethics is no different from the ethics of the Middle Ages.

All parties in the assault on usury share a common ethical root: altruism—belief in the notion that self-sacrifice is moral and self-interest is evil. This is the source of the problem. So long as self-interest is condemned, neither usury in particular, nor profit in general, can be seen as good—both will be seen as evil.

Moneylending cannot be defended by reference to its economic practicality alone. If moneylending is to be recognized as a fully legitimate practice and defended accordingly, then its defenders must discover and embrace a new code of ethics, one that upholds self-interest—and thus personal profit—as moral.

Conclusion

Although serious economists today uniformly recognize the economic benefits of charging interest or usury on loans, they rarely, if ever, attempt a philosophical or moral defense of this position. Today’s economists either reject philosophy completely or adopt the moral-practical split, accepting the notion that although usury is practical, it is either immoral or, at best, amoral.

Modern philosophers, for the most part, have no interest in the topic at all, partly because it requires them to deal with reality, and partly because they believe self-interest, capitalism, and everything they entail, to be evil. Today’s philosophers, almost to a man, accept self-sacrifice as the standard of morality and physical labor as the source of wealth. Thus, to the extent that they refer to moneylending at all, they consider it unquestionably unjust, and positions to the contrary unworthy of debate.

It is time to set the record straight.

Whereas Aristotle united productiveness with morality and thereby condemned usury as immoral based on his mistaken belief that the practice is unproductive—and whereas everyone since Aristotle (including contemporary economists and philosophers) has severed productiveness from morality and condemned usury on biblical or altruistic grounds as immoral (or at best amoral)—what is needed is a view that again unifies productiveness and morality, but that also sees usury as productive, and morality as the means to practical success on earth. What is needed is the economic knowledge of the last millennium combined with a new moral theory—one that upholds the morality of self-interest and thus the virtue of personal profit.

Let us first condense the key economic points; then we will turn to a brief indication of the morality of self-interest.

The crucial economic knowledge necessary to a proper defense of usury includes an understanding of why lenders charge interest on money—and why they would do so even in a risk-free, noninflationary environment. Lenders charge interest because their money has alternative uses—uses they temporarily forego by lending the money to borrowers. When a lender lends money, he is thereby unable to use that money toward some benefit or profit for himself. Had he not lent it, he could have spent it on consumer goods that he would have enjoyed, or he could have invested it in alternative moneymaking ventures. And the longer the term of the loan, the longer the lender must postpone his alternative use of the money. Thus interest is charged because the lender views the loan as a better, more profitable use of his money over the period of the loan than any of his alternative uses of the same funds over the same time; he estimates that, given the interest charged, the benefit to him is greater from making the loan than from any other use of his capital.71

A lender tries to calculate in advance the likelihood that he will be repaid all his capital plus the interest. The less convinced he is that a loan will be repaid, the higher the interest rate he will charge. Higher rates compensate lenders for their willingness to take greater risks. The practice of charging interest is therefore an expression of the human ability to project the future, to plan, to analyze, to calculate risk, and to act in the face of uncertainty. In a word, it is an expression of man’s ability to reason. The better a lender’s thinking, the more money he will make.
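The risk-pricing logic above can be made concrete with a small numerical sketch (an illustration, not from the original article): assuming a risk-neutral lender who loses his entire principal on default and whose best alternative use of the money yields a known risk-free return, the minimum rate he can accept follows from a simple expected-value calculation.

```python
def break_even_rate(risk_free_rate: float, default_prob: float) -> float:
    """Minimum interest rate at which lending matches the lender's best
    alternative use of the money, assuming a risk-neutral lender and
    total loss of principal on default (zero recovery).

    Expected repayment per dollar lent is (1 - default_prob) * (1 + r).
    Setting this equal to (1 + risk_free_rate) and solving for r gives:
    """
    return (1 + risk_free_rate) / (1 - default_prob) - 1

# A riskless borrower pays only the lender's opportunity cost ...
print(round(break_even_rate(0.05, 0.00), 4))  # 0.05
# ... while a 10% chance of default pushes the break-even rate to ~16.7%.
print(round(break_even_rate(0.05, 0.10), 4))  # 0.1667
```

Under these simplifying assumptions, any rate ceiling below the break-even rate simply prices the riskier borrower out of the market, which is the economic point behind the article's criticism of usury laws.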

Another economic principle that is essential to a proper defense of usury is recognition of the fact that moneylending is productive. This fact was made increasingly clear over the centuries, and today it is incontrovertible. By choosing to whom he will lend money, the moneylender determines which projects he will help bring into existence and which individuals he will provide with opportunities to improve the quality of their lives and his. Thus, lenders make themselves money by rewarding people for the virtues of innovation, productiveness, personal responsibility, and entrepreneurial talent; and they withhold their sanction, thus minimizing their losses, from people who exhibit signs of stagnation, laziness, irresponsibility, and inefficiency. The lender, in seeking profit, does not consider the well-being of society or of the borrower. Rather, he assesses his alternatives, evaluates the risk, and seeks the greatest return on his investment.

And, of course, lent money is not “barren”; it is fruitful: It enables borrowers to improve their lives or produce new goods or services. Nor is moneylending a zero-sum game: Both the borrower and the lender benefit from the exchange (as ultimately does everyone involved in the economy). The lender makes a profit, and the borrower gets to use capital—whether for consumption or investment purposes—that he otherwise would not be able to use.72

An understanding of these and other economic principles is necessary to defend the practice of usury. But such an understanding is not sufficient to defend the practice. From the brief history we have recounted, it is evident that all commentators on usury from the beginning of time have known that those who charge interest are self-interested, that the very nature of their activity is motivated by personal profit. Thus, in order to defend moneylenders, their institutions, and the kind of world they make possible, one must be armed with a moral code that recognizes rational self-interest and therefore the pursuit of profit as moral, and that consequently regards productivity as a virtue and upholds man’s right to his property and to his time.

There is such a morality: It is Ayn Rand’s Objectivist ethics, or rational egoism, and it is the missing link in the defense of usury (and capitalism in general).

According to rational egoism, man’s life—the life of each individual man—is the standard of moral value, and his reasoning mind is his basic means of living. Being moral, on this view, consists in thinking and producing the values on which one’s life and happiness depend—while leaving others free to think and act on their own judgment for their own sake. The Objectivist ethics holds that people should act rationally, in their own long-term best interest; that each person is the proper beneficiary of his own actions; that each person has a moral right to keep, use, and dispose of the product of his efforts; and that each individual is capable of thinking for himself, of producing values, and of deciding whether, with whom, and on what terms he will trade. It is a morality of self-interest, individual rights, and personal responsibility. And it is grounded in the fundamental fact of human nature: the fact that man’s basic means of living is his ability to reason.

Ayn Rand identified the principle that the greatest productive, life-serving power on earth is not human muscle but the human mind. Consequently, she regarded profit-seeking—the use of the mind to identify, produce, and trade life-serving values—as the essence of being moral.73

Ayn Rand’s Objectivist ethics is essential to the defense of moneylending. It provides the moral foundation without which economic arguments in defense of usury cannot prevail. It demonstrates why moneylending is supremely moral.

The Objectivist ethics frees moneylenders from the shackles of Dante’s inferno, enables them to brush off Shakespeare’s ridicule, and empowers them to take an irrefutable moral stand against persecution and regulation by the state. The day that this moral code becomes widely embraced will be the day that moneylenders—and every other producer of value—will be completely free to charge whatever rates their customers will pay and to reap the rewards righteously and proudly.

If this moral ideal were made a political reality, then, for the first time in history, moneylenders, bankers, and their institutions would be legally permitted and morally encouraged to work to their fullest potential, making profits by providing the lifeblood of capital to our economy. Given what these heroes have achieved while scorned and shackled, it is hard to imagine what their productive achievements would be if they were revered and freed.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Bibliography

Buchan, James. Frozen Desire: The Meaning of Money. New York: Farrar, Straus & Giroux, 1997.

Cohen, Edward E. Athenian Economy and Society. Princeton: Princeton University Press, 1992.

Davies, Glyn. A History of Money. Cardiff: University of Wales Press, 1994.

Ferguson, Niall. The Cash Nexus. New York: Basic Books, 2001.

Grant, James. Money of the Mind. New York: The Noonday Press, 1994.

Homer, Sidney. A History of Interest Rates. New Brunswick: Rutgers University Press, 1963.

Le Goff, Jacques. Your Money or Your Life. New York: Zone Books, 1988.

Lewis, Michael. The Money Culture. New York: W. W. Norton & Company, 1991.

Lockman, Vic. Money, Banking, and Usury (pamphlet). Grants Pass, OR: Westminster Teaching Materials, 1991.

Murray, J. B. C. The History of Usury. Philadelphia: J. B. Lippincott & Co., 1866.

Sobel, Robert. Dangerous Dreamers. New York: John Wiley & Sons, Inc., 1993.

Böhm-Bawerk, Eugen von. Capital and Interest: A Critical History of Economical Theory. Books I–III. Translated by William A. Smart. London: Macmillan and Co., 1890.


Endnotes

Acknowledgments: The author would like to thank the following people for their assistance and comments on this article: Elan Journo, Onkar Ghate, Sean Green, John D. Lewis, John P. McCaskey, and Craig Biddle.

1 Aristotle, The Politics of Aristotle, translated by Benjamin Jowett (Oxford: Clarendon Press, 1885), book 1, chapter 10, p. 19.

2 Plutarch, Plutarch’s Morals, translated by William Watson Goodwin (Boston: Little, Brown, & Company, 1874), pp. 412–24.

3 Lewis H. Haney, History of Economic Thought (New York: The Macmillan Company, 1920), p. 71.

4 Anthony Trollope, Life of Cicero (Kessinger Publishing, 2004), p. 70.

5 William Manchester, A World Lit Only by Fire (Boston: Back Bay Books, 1993), pp. 5–6.

6 Glyn Davies, A History of Money: From Ancient Times to the Present Day (Cardiff: University of Wales Press, 1994), p. 117.

7 Ezekiel 18:13.

8 Deuteronomy 23:19–20.

9 Luke 6:35.

10 Jacques Le Goff, Your Money Or Your Life (New York: Zone Books, 1988), p. 26.

11 Edward Henry Palmer, A History of the Jewish Nation (London: Society for Promoting Christian Knowledge, 1874), pp. 253–54. And www.routledge-ny.com/ref/middleages/Jewish/England.pdf.

12 Byrne is here quoting Jacob Twinger of Königshofen, a 14th-century priest.

13 Joseph Patrick Byrne, The Black Death (Westport: Greenwood Press, 2004), p. 84.

14 Sidney Homer, A History of Interest Rates (New Brunswick: Rutgers University Press, 1963), p. 71.

15 Sermon by Jacques de Vitry, “Ad status” 59,14, quoted in Le Goff, Your Money Or Your Life, pp. 56–57.

16 See Thomas Aquinas, Summa Theologica, part II, section II, question 78, article 1.

17 Ibid.

18 Frank Wilson Blackmar, Economics (New York: The Macmillan Company, 1907), p. 178.

19 Le Goff, Your Money Or Your Life, pp. 33–45.

20 Jeremy Rifkin, The European Dream (Cambridge: Polity, 2004), p. 105.

21 Le Goff, Your Money Or Your Life, p. 30.

22 Davies, A History of Money, p. 154.

23 Ibid., pp. 146–74.

24 Robert Burton, Sacred Trust (Oxford: Oxford University Press, 1996), p. 118.

25 Ibid., pp. 118–20.

26 Homer, A History of Interest Rates, p. 73.

27 As Blackstone’s Commentaries on the Laws of England puts it: “When money is lent on a contract to receive not only the principal sum again, but also an increase by way of compensation for the use, the increase is called interest by those who think it lawful, and usury by those who do not.” p. 1336.

28 Homer, A History of Interest Rates, pp. 72–74.

29 Le Goff, Your Money Or Your Life, p. 74.

30 Ibid., pp. 47–64.

31 Dante Alighieri, The Inferno, Canto XVII, lines 51–54.

32 Dorothy M. DiOrio, “Dante’s Condemnation of Usury,” in Re: Artes Liberales V, no. 1, 1978, pp. 17–25.

33 Davies, A History of Money, pp. 177–78.

34 Paul M. Johnson, A History of the Jews (New York: HarperCollins, 1988), p. 242.

35 Eugen von Böhm-Bawerk, Capital and Interest: A Critical History of Economical Theory (London: Macmillan and Co., 1890), translated by William A. Smart, book I, chapter III.

36 Charles Dumoulin (Latinized as Molinaeus), Treatise on Contracts and Usury (1546).

37 von Böhm-Bawerk, Capital and Interest, book I, chapter III.

38 Sir Simonds d’Ewes, “Journal of the House of Commons: April 1571,” in The Journals of all the Parliaments during the reign of Queen Elizabeth (London: John Starkey, 1682), pp. 155–80. Online: http://www.british-history.ac.uk/report.asp?compid=43684.

39 Francis Bacon, “Of Usury,” in Bacon’s Essays (London: Macmillan and Co., 1892), p. 109.

40 Davies, A History of Money, p. 222.

41 Ibid., p. 222, emphasis added.

42 James Buchan, Frozen Desire (New York: Farrar, Straus & Giroux, 1997), p. 87 (synopsis of the play).

43 William Shakespeare, The Merchant of Venice, Act 1, Scene 2.

44 Ibid., Act 3, Scene 2.

45 Ibid., Act 1, Scene 3.

46 von Böhm-Bawerk, Capital and Interest, book I, chapter III.

47 Ibid., book I, p. 56.

48 Ibid., book I, chapter IV.

49 Jeremy Bentham, A Defence of Usury (Philadelphia: Mathew Carey, 1787), p. 10.

50 Jeremy Bentham, The Works of Jeremy Bentham, edited by John Bowring (Edinburgh: W. Tait; London: Simpkin, Marshall, & Co., 1843), p. 501.

51 Ibid., p. 493.

52 Anonymous, Letters on Usury and Interest (London: J. P. Coghlan, 1774).

53 Ibid.

54 Adam Smith, The Wealth of Nations (New York: Penguin Classics, 1986), p. 456.

55 Ibid.

56 For a thorough rebuttal of Marx’s view, see von Böhm-Bawerk, Capital and Interest, book I, chapter XII.

57 Gabriel Le Bras, quoted in Le Goff, Your Money Or Your Life, p. 43.

58 Johnson, A History of the Jews, p. 351.

59 Fyodor M. Dostoevsky, The Brothers Karamazov, translated by Constance Garnett (Spark Publishing, 2004), p. 316.

60 James Grant, Money of the Mind (New York: Noonday Press, 1994), p. 79.

61 Ibid., pp. 91–95.

62 Ibid., p. 83.

63 John Maynard Keynes, “Economic Possibilities for our Grandchildren,” in Essays in Persuasion (New York: W. W. Norton & Company, 1963), pp. 359, 362. Online: http://www.econ.yale.edu/smith/econ116a/keynes1.pdf.

64 Franklin D. Roosevelt, First Inaugural Address, March 4, 1933, http://www.historytools.org/sources/froosevelt1st.html.

65 To understand the link between 1930s regulations and the S&L crisis, see Edward J. Kane, The S&L Insurance Mess: How Did it Happen? (Washington, D.C.: The Urban Institute Press, 1989), and Richard M. Salsman, The Collapse of Deposit Insurance—and the Case for Abolition (Great Barrington, MA: American Institute for Economic Research, 1993).

66 “Mayday for Payday Loans,” Wall Street Journal, April 2, 2007, http://online.wsj.com/article/SB117546964173756271.html.

67 “U.S. Moves Against Payday Loans, Which Critics Charge Are Usurious,” Wall Street Journal, January 4, 2002, http://online.wsj.com/article/SB1010098721429807840.html.

68 “Mayday for Payday Loans,” Wall Street Journal.

69 Christine Pelisek, “Shylock 2000,” LA Weekly, February 16, 2000, http://www.laweekly.com/news/offbeat/shylock-2000/11565/.

70 Wall Street Journal, August 2, 2007, p. A4.

71 For an excellent presentation of this theory of interest, see von Böhm-Bawerk, Capital and Interest, book II.

72 For a discussion of the productive nature of financial activity see my taped course, “In Defense of Financial Markets,” http://www.aynrandbookstore2.com/prodinfo.asp?number=DB46D.

73 For more on Objectivism, see Leonard Peikoff, Objectivism: The Philosophy of Ayn Rand (New York: Dutton, 1991); and Ayn Rand, Atlas Shrugged (New York: Random House, 1957) and Capitalism: The Unknown Ideal (New York: New American Library 1966).

Further Reading

Ayn Rand | 1957
For the New Intellectual

The Moral Meaning of Capitalism

An industrialist who works for nothing but his own profit guiltlessly proclaims his refusal to be sacrificed for the “public good.”
Ayn Rand | 1961
The Virtue of Selfishness

The Objectivist Ethics

What is morality? Why does man need it? — and how the answers to these questions give rise to an ethics of rational self-interest.