Justice Holmes and the Empty Constitution

by Tom Bowden | Summer 2009 | The Objective Standard

On April 17, 1905, Justice Oliver Wendell Holmes Jr. issued his dissenting opinion in the case of Lochner v. New York.1 At a mere 617 words, the dissent was dwarfed by the 9,000 words it took for the Supreme Court’s eight other Justices to present their own opinions. But none of this bothered Holmes, who prided himself on writing concisely. “The vulgar hardly will believe an opinion important unless it is padded like a militia brigadier general,” he once wrote to a friend. “You know my view on that theme. The little snakes are the poisonous ones.”2

Of the many “little snakes” that would slither from Justice Holmes’s pen during his thirty years on the Supreme Court, the biting, eloquent dissent in Lochner carried perhaps the most powerful venom. A dissent is a judicial opinion in which a judge explains his disagreement with the other judges whose majority votes control a case’s outcome. As one jurist put it, a dissent “is an appeal . . . to the intelligence of a future day, when a later decision may possibly correct the error into which the dissenting judge believes the court to have been betrayed.”3 Holmes’s Lochner dissent, though little noticed at first, soon attained celebrity status and eventually became an icon. Scholars have called it “the greatest judicial opinion of the last hundred years” and “a major turning point in American constitutional jurisprudence.”4 Today, his dissent not only exerts strong influence over constitutional interpretation and the terms of public debate, but it also serves as a litmus test for discerning a judge’s fundamental view of the United States Constitution. This means that any Supreme Court nominee who dares to question Holmes’s wisdom invites a fierce confirmation battle and risks Senate rejection. As one observer recently remarked, “The ghost of Lochner continues to haunt American constitutional law.”5

Holmes’s dissent in Lochner blasted the majority opinion endorsed by five members of the nine-man Court. Holmes, as if anticipating the modern era of “sound bites,” littered his dissent with pithy, quotable nuggets that seemed to render the truth of his opinions transparently obvious. Prominent scholars have called the dissent a “rhetorical masterpiece” that “contains some of the most lauded language in legal history.”6 His “appeal to the intelligence of a future day” was a stunning success. So thoroughly did Holmes flay the majority’s reasoning that Ronald Dworkin, a prominent modern legal philosopher, dismisses the majority decision as an “infamous . . . example of bad constitutional adjudication” that gives off a “stench”; and Richard A. Posner, prolific author and federal appellate judge, writes that Lochner is the type of decision that “stinks in the nostrils of modern liberals and modern conservatives alike.”7

What heinous offense did the Lochner majority commit to provoke Holmes’s caustic dissent? It was not the fact that they had struck down a New York law setting maximum working hours for bakers. Holmes personally disapproved of such paternalistic laws and never questioned the Supreme Court’s power to strike down legislation that violated some particular clause in the Constitution.8 No, in Holmes’s eyes the majority’s unforgivable sin did not lie in the particular result they reached, but in the method by which they reached it. The majority interpreted the Constitution as if it embodies a principled commitment to protecting individual liberty. But no such foundational principle exists, Holmes asserted, and the sooner judges realize they are expounding an empty Constitution — empty of any underlying view on the relationship of the individual to the state — the sooner they will step aside and allow legislators to decide the fate of individuals such as Joseph Lochner.

Lochner, a bakery owner whose criminal conviction sparked one of the Supreme Court’s most significant cases, never denied he had violated the New York Bakeshop Act of 1895. Instead, he contended that the statute itself was unconstitutional. The majority agreed with Lochner, and Holmes was moved to dissent — for reasons that are best understood against the background of Progressive Era reform.

The New York Bakeshop Act of 1895

The first decade of the twentieth century was a time of rapid economic and population growth in America. European immigrants streamed into the cities, searching for the upward economic and cultural mobility that defined the American Dream. Of course, they all needed to eat, and the baking industry was one of many that expanded rapidly to meet demand. From the growth pangs of that industry came the legal dispute that eventually took the form of Lochner v. New York.

The great, mechanized bakeries that today produce mass quantities of baked goods had not yet been organized. What few machines had been invented (such as the mechanical mixer, patented in 1880) were not widely owned.9 Thus three-quarters of America’s bread was baked at home, mostly in rural areas.10 But in the fast-growing cities, many people lived in tenement apartments that lacked an oven for home baking. Bread was baked here as it had been in urban environments for centuries, as it had been in ancient Rome — in commercial ovens scattered about the city. Consumers could walk a short distance and buy what they would promptly eat before it went stale (the first plastic wrap, cellophane, was not manufactured in America until 1924).11 In New York City, bakeries were often housed in tenement basements whose solid earth floors could support the heavy ovens.

From the great Midwestern farms came massive railroad shipments of flour, which was packaged and distributed by wagons and trucks to each bakery’s storeroom. Laborers were needed to unload bags and barrels that weighed as much as two hundred pounds; sift the flour and yeast; mix the flour with ingredients in great bowls, troughs, and sifters; knead the dough; fire up the ovens; shove the loaves in and out of the ovens; and clean and maintain the tools and facilities.12 Most urban bakeshops employed four or fewer individuals to perform this work.13 Long hours were typical, as was true generally of labor at the turn of the century, on farms and in factories. Indeed, bakers worked even longer hours than other laborers. Ovens were heated day and night, and bakers worked while others were sleeping, so that customers could buy fresh bread in the morning.14 A baker’s workday might start in the late evening and end in the late morning or early afternoon of the next day.15 A typical workday exceeded 10 hours; workweeks often consumed 70 or 80 hours, and on occasion more than 100 hours.16

These bakeshops did not feature the clean, well-lit, well-ventilated working conditions that mechanization and centralization would later bring to the industry. Urban bakeshops shared dark, low-ceilinged basement space with sewage pipes. Dust and fumes accumulated for lack of ventilation. Bakeshops were damp and dirty, and facilities for washing were primitive.17 In order to entice people to work long hours in these conditions, shop owners had to offer wages high enough to persuade laborers to forgo other opportunities. A typical bakeshop employee would earn cash wages of as much as $12 per week.18 Despite harsh conditions, the mortality rate for bakers did not markedly exceed other occupations.19 And many who had escaped Europe to pursue upward mobility discovered that competing employers — when they could be found — offered nothing better.

No governmental or private coercion required anyone to take a bakery job within the state of New York. Labor contracts were voluntary, and terminable at will. The law left each individual — employer and employee alike — free to make his own decisions, based on his own judgment, and to negotiate whatever terms were offered. But such voluntary arrangements were not satisfactory to the New York legislature in these, the early years of what later became known as the Progressive Era. The hallmark of that political reform movement, which began in the 1890s and ended with World War I, was increased government intervention in the marketplace through such measures as railroad regulation, antitrust legislation, and income taxation. Progressive reformers focused special attention on housing and working conditions and advanced a variety of arguments that laws should limit hours of labor. Some said this would spread jobs and wealth among more people, eliminating unemployment. Others attacked the validity of labor contracts reached between bakeshop owners and laborers. According to one critic, “An empty stomach can make no contracts. [The workers] assent but they do not consent, they submit but they do not agree.”20

The Bakeshop Act of 1895, sponsored by a coalition of prominent powers in New York politics, passed both houses of the state legislature unanimously.21 The Act made it a crime for the owner of a bakeshop to allow a laborer to work more than 10 hours in one day, or more than 60 hours in one week. Bakeshop owners, however, were exempted; only employees’ hours were limited.22 Although similar laws in other states allowed employees to voluntarily opt out, New York’s law included no such “free-contract proviso.”23 The law also provided funds for hiring four deputies to seek out violations and enforce the law.24

New York v. Lochner: Crimes and Appeals

During the first three months after the Bakeshop Act took effect, 150 bakeries were inspected, of which 105 were charged with violations.25 In 1899, inspectors brought about the arrest of Joseph Lochner, a German immigrant whose shop, Lochner’s Home Bakery, was located upstate in Utica.26 Lochner had arrived in America at age 20 and worked for eight years as a laborer before opening his own shop. In contrast to the dreary basement bakeries that furnished the Bakeshop Act’s rationale, Lochner’s bakery (at least, as shown in a 1908 photograph) seems to have been a “relatively airy and mechanized aboveground shop.”27 In any event, Lochner was indicted, arraigned, tried, and convicted of having offended the statute in December 1899, by permitting an employee to work more than 60 hours in one week. To avoid a 20-day jail sentence, Lochner paid the $20 fine.28 Two years later, Lochner was arrested again, for having allowed another employee to work more than 60 hours.29 (Not coincidentally, Lochner had been quarreling for many years with the Utica branch of the journeyman bakers’ union, an avid supporter of the maximum hours regulation.)30 Offering no defense at his 1902 trial, Lochner was sentenced to pay $50, or serve 50 days in jail. This time, however, instead of paying the fine, he appealed his conviction.31 Lochner seems to have been a “hardheaded man who had determined that no one else was going to tell him how to run his business — not the state of New York and especially not the workers or their union.”32

The first New York appellate court to consider Lochner’s case held that the parties’ right to make employment contracts was subordinate to the public’s power to promote health. The court treated the Bakeshop Act as a health law, assuming (without factual findings from the trial court) that working long hours in hot, ill-ventilated areas, with flour dust in the air, “might produce a diseased condition of the human system, so that the employees would not be capable of doing their work well and supplying the public with wholesome food.”33 Rejecting Lochner’s argument that his contract rights were being violated, the court observed that “the statute does not prohibit any right, but regulates it, and there is a wide difference between regulation and prohibition, between prescribing the terms by which the right may be enjoyed, and the denial of that right altogether.”34 In other words, a right is not violated unless it is annihilated.

The next New York appellate court to consider Lochner’s case also treated the Bakeshop Act as a health law that trumped the parties’ right to make labor contracts. The court pointed out that the statute regulated not only bakers’ working hours but a bakeshop’s drainage, plumbing, furniture, utensils, cleaning, washrooms, sleeping places, ventilation, flooring, whitewashing, and walls, even to the point that the factory inspector “may also require the wood work of such walls to be painted.”35 Given the Act’s close attention to such health-related details, the court thought it “reasonable to assume . . . that a man is more likely to be careful and cleanly when well, and not overworked, than when exhausted by fatigue, which makes for careless and slovenly habits, and tends to dirt and disease.”36

New York’s power to regulate for health reasons was grounded, the court held, in the “police power” that state governments possess as part of their sovereignty. While noting the “impossibility of setting the bounds of the police power,” the court held that the Bakeshop Act’s purpose “is to benefit the public; that it has a just and reasonable relation to the public welfare, and hence is within the police power possessed by the Legislature.”37 According to a then-prominent legal treatise cited by the court, the Act’s maximum hours provision was especially necessary to safeguard health against the supposedly mind-muddling effects of capitalism:

If the law did not interfere, the feverish, intense desire to acquire wealth . . . inciting a relentless rivalry and competition, would ultimately prevent, not only the wage-earners, but likewise the capitalists and employers themselves, from yielding to the warnings of nature and obeying the instinct of self-preservation by resting periodically from labor.38

In a concurring opinion, another judge warned that to invalidate the law would “nullify the will of the people.”39

In dissent, however, Judge Denis O’Brien urged that the Bakeshop Act be struck down as unconstitutional. He, too, acknowledged the long-established understanding that the police power authorizes legislation “for the protection of health, morals, or good order,” but he did not believe that the maximum hours provision served any such purpose.40 Instead, he urged that this portion of the law be voided as an unjustified infringement on individual liberty:

Liberty, in its broad sense, means the right, not only of freedom from actual restraint of the person, but the right of such use of his faculties in all lawful ways, to live and work where he will, to earn his livelihood in any lawful calling, and to pursue any lawful trade or avocation. All laws, therefore, which impair or trammel those rights or restrict his freedom of action, or his choice of methods in the transaction of his lawful business, are infringements upon his fundamental right of liberty, and are void.41

In so dissenting, Judge O’Brien was following leads supplied by Supreme Court Justices as to how the Constitution should be interpreted. Justice Stephen Field, dissenting in the Slaughter-House Cases of 1873, had argued that a state monopoly on slaughterhouse work violated the “right to pursue one of the ordinary trades or callings of life.”42 And in Allgeyer v. Louisiana, an 1897 case, the Supreme Court had actually struck down a Louisiana insurance law, holding that the Constitution’s references to “liberty” not only protect “the right of the citizen to be free from the mere physical restraint of his person, as by incarceration” but also “embrace the right of the citizen to be free in the enjoyment of all his faculties . . . to pursue any livelihood or avocation; and for that purpose to enter into all contracts which may be proper.”43

As Joseph Lochner pondered his next step, he found cause for hope in the fact that his conviction had been upheld by the narrowest possible margins (3–2 and 4–3) in New York’s appellate courts. The conflict between “liberty of contract” and the “police power,” like a seesaw teetering near equilibrium, seemed capable of tipping in either direction. Sensing that victory was attainable, Lochner took his fight to the highest court in the land.

Lochner v. New York: The Supreme Court’s Decision

When Lochner’s petition arrived at the Supreme Court, it was accepted for review by Justice Rufus Peckham, a noted opponent of state regulation and author of the Court’s Allgeyer opinion.44 The case was argued over two days in February 1905.45 At first the Court voted 5–4 in private conference to uphold Lochner’s conviction. But then Justice Peckham wrote a sharp dissent that convinced another Justice to change his mind. With a little editing, Peckham’s dissent then became the majority’s official opinion declaring the Bakeshop Act unconstitutional.46

Early in his opinion, Peckham conceded that all individual liberty is constitutionally subordinate to the amorphous “police power”:

There are . . . certain powers, existing in the sovereignty of each state in the Union, somewhat vaguely termed police powers, the exact description and limitation of which have not been attempted by the courts. Those powers, broadly stated, and without, at present, any attempt at a more specific limitation, relate to the safety, health, morals, and general welfare of the public. Both property and liberty are held on such reasonable conditions as may be imposed by the governing power of the state in the exercise of those powers. . . .47

Thus Peckham had to admit that the bulk of the Bakeshop Act, being directed at health hazards curable by better plumbing and ventilation, was valid under the police power. But the Act’s maximum hours provision, Peckham wrote, was not really a health law, because it lacked any “fair ground, reasonable in and of itself, to say that there is material danger to the public health, or to the health of the employees, if the hours of labor are not curtailed.”48

So if the maximum hours provision was not a health law, what was it? In the majority’s view it was a “labor law,” designed to benefit one economic class at another’s expense.49 “It seems to us,” Peckham wrote, “that the real object and purpose were simply to regulate the hours of labor between the master and his employees . . . in a private business, not dangerous in any degree to morals, or in any real and substantial degree to the health of the employees.”50 Finding that the “statute necessarily interferes with the right of contract between the employer and employees,” Peckham concluded that laws such as this, “limiting the hours in which grown and intelligent men may labor to earn their living, are mere meddlesome interferences with the rights of the individual. . . .”51 Four Justices sided with Peckham in holding that the “limit of the police power has been reached and passed in this case,” yielding a five-man majority to strike down the maximum hours portion of the New York Bakeshop Act.52 (Three Justices, not including Holmes, dissented on grounds that the law really was a health measure and therefore valid under the police power.)

At this point — that is, before taking Holmes’s dissent into account — opinions on the Bakeshop Act’s validity had been expressed by some 20 appellate judges (12 in New York and 8 on the Supreme Court). Remarkably, these 20 had split evenly: Ten thought the Act a legitimate exercise of the police power, while ten thought it exceeded that power.53 This is the kind of split opinion one might expect from a jury that has been asked to decide a close question of fact, such as whether the noise from a woodworking shop is loud enough to be classified as an illegal nuisance. In Lochner’s case, a score of highly experienced judges split down the middle while engaged in what they saw as a similar task, namely deciding whether a provision restricting work hours was or was not a health law.

Justice Holmes, by radically reframing the issue over which his brethren had been agonizing, sought to show how this thorny problem could be made to disappear. In essence he asked a much more fundamental question: What if the Constitution contains no limit on the police power? What if the distinction between “health laws” and other types of law is just a red herring? In raising this issue, Holmes was banking on the fact that nobody — not even the five-man Lochner majority — regarded “liberty of contract” as an ironclad principle or claimed to know the precise nature of the states’ constitutional “police powers.” Before he was through, Holmes would call into question not only the majority’s decision to invalidate the Bakeshop Act but the very idea that the United States Constitution embodies principles relevant to such decisions.

Holmes in Dissent: The Empty Constitution

Uninterested in whether or not the Bakeshop Act was a health law, Holmes devoted only a single line of his dissent to the issue: “A reasonable man might think it a proper measure on the score of health.”54 As one commentator noted, he “entirely ignored his colleagues and refused to engage in their debate about how to apply existing legal tests for distinguishing health and safety laws from special interest legislation.”55 Holmes, who has been called “the finest philosophical mind in the history of judging,” had more profound issues on his mind.56

Peckham’s majority opinion had been based on the premise that the Constitution protects individual liberty, including liberty of contract. Holmes attacked that premise outright. How could liberty of contract possibly be a principle capable of yielding a decision in Lochner’s case, Holmes asked, when violations of such liberty are routinely permitted by law? “The liberty of the citizen to do as he likes so long as he does not interfere with the liberty of others to do the same,” Holmes observed, “is interfered with by school laws, by the Post Office, by every state or municipal institution which takes his money for purposes thought desirable, whether he likes it or not.” For good measure, he cited several cases in which the Court had recently approved laws prohibiting lotteries, doing business on Sunday, engaging in usury, selling stock on margin, and employing underground miners more than eight hours a day — each law a clear interference with contractual liberty. “General propositions do not decide concrete cases,” Holmes nonchalantly concluded — and what judge could have shown otherwise, given the state of American jurisprudence at the time?

With “liberty of contract” in tatters, Holmes could casually dismiss it as a mere “shibboleth,” a subjective opinion harbored by five Justices that has no proper role in constitutional adjudication.57 To drive home his contempt for the majority’s approach, Holmes included in his Lochner dissent a snide, sarcastic gem that has become the most quoted sentence in this much-quoted opinion: “The Fourteenth Amendment does not enact Mr. Herbert Spencer’s Social Statics.”58 For a modern reader to grasp the meaning of this reference, some factual background is required. The English author Herbert Spencer (1820–1903) was a prominent intellectual whose most important book, Social Statics, was originally published in 1851 and reissued continually thereafter. “In the three decades after the Civil War,” one historian has written, “it was impossible to be active in any field of intellectual work without mastering Spencer.”59 Central to Spencer’s thinking was a belief that our emotions dictate our moral values, which include an “instinct of personal rights.”60 That “instinct” Spencer defined as a “feeling that leads him to claim as great a share of natural privilege as is claimed by others — a feeling that leads him to repel anything like an encroachment upon what he thinks his sphere of original freedom.”61 This led Spencer to conclude: “Every man has freedom to do all that he wills, provided he infringes not the equal freedom of any other man.”62 Holmes, by coyly denying that Spencer’s “law of equal liberty” had the solemn status of a constitutional principle, masterfully conveyed two points: that any principle of individual liberty must emanate from a source outside the Constitution, not within it — and that the Peckham majority’s “liberty of contract” had the same intellectual status as Spencer’s emotionalist rubbish. “All my life I have sneered at the natural rights of man,” Holmes confided to a friend some years later.63 But in a lifetime of sneering, Holmes never uttered a more damaging slur than this offhand reference to Herbert Spencer’s Social Statics.

In order to mock “liberty of contract” as nothing more than a reflection of the majority’s tastes in popular reading, Holmes had to evade large swaths of evidence tending to show that the Constitution indeed embodies a substantive commitment to individual liberty. In the Declaration of Independence, the Founders clearly stated their intent to create a government with a single purpose — the protection of individual rights to life, liberty, and the pursuit of happiness. Consistent with the Constitution’s Preamble, which declares a desire to “secure the blessings of liberty to ourselves and our posterity,” every clause in the Bill of Rights imposes a strict limit on government’s power over individual liberty and property. In addition, Article I forbids the states to pass any law “impairing the obligation of contracts.”64 And to prevent future generations from interpreting such clauses as an exhaustive list, the Ninth Amendment states: “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”

To be sure, the Constitution’s basic principle was undercut by important omissions and contradictions, the most serious being its toleration of slavery at the state level. But the Civil War tragically and unmistakably exposed the evil of a legal system that allows state governments to violate individual rights.65 Immediately after that war’s end, three constitutional amendments re-defined and strengthened the federal system, elevating the federal government to full sovereignty over the states and extending federal protection to individuals whose rights might be violated by state legislation. Two of these amendments were quite specific: The Thirteenth banned slavery, and the Fifteenth required that blacks be allowed to vote. But the Fourteenth Amendment’s reach was much broader. Not only did it endow individuals with federal citizenship, it also specified that no state government shall “abridge the privileges or immunities”66 of any citizen or deprive any person of “life, liberty, or property, without due process of law.”

In light of this context, no honest jurist in 1905 could deny that the Constitution embodies certain views on the proper relationship between the individual and his government. Reasonable disagreements might concern how that basic framework should guide interpretation of the document’s express language, but no such disagreement could obscure the fact that the Constitution was chock-full of substantive content. Yet it was precisely this fact that Holmes now urged the Court to evade. The same compromises and exceptions that rendered “liberty of contract” an easy target in Holmes’s attack on the Lochner majority also lent plausibility to his wider assault on the notion that America’s Constitution embodies any principles at all. A constitution, he wrote, “is not intended to embody a particular economic theory, whether of paternalism and the organic relation of the citizen to the State or of laissez faire.” As is evident from the two illustrations he chose, Holmes was using “economic theory” to mean a principle defining the individual’s relationship to the state. His first example, “paternalism and the organic relation of the citizen to the State,” refers to the Hegelian view that a nation, in one philosopher’s description, “is not an association of autonomous individuals [but] is itself an individual, a mystic ‘person’ that swallows up the citizens and transcends them, an independent, self-sustaining organism, made of human beings, with a will and purpose of its own.”67 Thus, as Hegel wrote, “If the state claims life, the individual must surrender it.”68 Holmes’s second example, “laissez faire,” refers to unregulated capitalism, a social system in which a nation is an association of autonomous individuals, who appoint government as their agent for defending individual rights (including private property rights) against force and fraud.

In Holmes’s view, a constitution cannot and should not attempt to embody either of these theories, or indeed any particular view on the individual’s relation to the state. Rather, a constitution is “made for people of fundamentally differing views,” any one of which may rightfully gain ascendancy if its adherents compose a sufficiently influential fraction of the electorate. As Holmes put it: “Every opinion tends to become a law,” and the reshaping of law is the “natural outcome of a dominant opinion.”69 In other words, a nation made up of capitalists, socialists, communists, anarchists, Quakers, Muslims, atheists, and a hundred other persuasions cannot reasonably expect its constitution to elevate one political view above all the others. Because opinions vary so widely, a nation that deems one superior to all others risks being torn apart by internal dissensions unable to find outlets in the political process. On this view, a proper constitution averts disaster by providing an orderly mechanism for embodying in law the constantly shifting, subjective opinions of political majorities. As one commentator explained, “Holmes believed that the law of the English-speaking peoples was an experiment in peaceful evolution in which a fair hearing in court substituted for the violent combat of more primitive societies.”70 It did not trouble Holmes that under such a constitution, society might adopt “tyrannical” laws. As he once wrote to a friend, “If my fellow citizens want to go to Hell I will help them. It’s my job.”71 And so Holmes was able to conclude, in his Lochner dissent, “that the word liberty in the Fourteenth Amendment is perverted when it is held to prevent the natural outcome of a dominant opinion.”

So there you have it. In just 617 carefully chosen words, the framework of liberty erected by the Founding Fathers and buttressed by the Civil War amendments had been interpreted out of existence.

According to Holmes, judges who claim to find fundamental principles in the Constitution are merely giving vent to their own personal political beliefs, which make some laws seem “natural and familiar” and others “novel, and even shocking.” But either reaction, in his view, is an “accident” having no proper place in adjudication. A judge’s “agreement or disagreement has nothing to do with the right of a majority to embody their opinions in law,” Holmes wrote, no matter what the judge’s reasons. “Some of these laws embody convictions or prejudices which judges are likely to share,” said Holmes. “Some may not.”72 Thus, it makes no difference whether a judge holds a conviction based on careful reflection, and an understanding of the Constitution’s specific clauses and content, its history and mission — or merely harbors a prejudice based on upbringing, social class, or a desire to please those in power. All such views are personal to the judge and hence irrelevant in adjudication — an interpretive principle to which Holmes made no exception for himself. “This case is decided upon an economic theory which a large part of the country does not entertain,” Holmes wrote in Lochner. “If it were a question whether I agreed with that theory, I should desire to study it further and long before making up my mind. But I do not conceive that to be my duty. . . .”

In short, Holmes believed that the Supreme Court presides over an empty Constitution — empty of purpose, of moral content, of enduring meaning — bereft of any embedded principles defining the relationship between man and the state. This distinctively Holmesian view, novel in 1905, is today’s orthodoxy. It dominates constitutional interpretation, defines public debate, and furnishes a litmus test for evaluating nominees to the Supreme Court. Although judges sometimes close their eyes to its logical implications when their pet causes are endangered, Holmes’s basic argument remains unrefuted by the legal establishment. In his bleak universe, there exists no principled limit on government power, no permanent institutional barrier between ourselves and tyranny — and the government can dispose of the individual as it pleases, as long as procedural niceties are observed. This pernicious Holmesian influence is reflected in the declining stature of America’s judiciary.

Lochner’s Legacy: Empty Robes

Although the Lochner decision was influential for a time, it was ultimately overshadowed by Holmes’s dissent. During the 32-year period (1905–1937) known as “the Lochner era,” the Supreme Court occasionally emulated the Lochner majority by striking down state laws in the name of individual liberty.73 For example, the Court overrode laws setting minimum wages for women, banning the teaching of foreign languages to children, and requiring children to attend public schools.74 But then, in 1937, at the height of the New Deal, the Court “finally ended the Lochner era by upholding a state minimum wage law.”75 A year later, the Court announced that all economic intervention would be presumed valid, unless a “specific prohibition of the Constitution” (for instance, Article I’s ban on export taxes at the state level) said otherwise.76 In effect, any new exercise of government power over the economy was now presumed innocent until proven guilty. As the Supreme Court said in another New Deal case, “A state is free to adopt whatever economic policy may reasonably be deemed to promote the public welfare,” and the “courts are without authority . . . to override it.”77 One scholar summarized the sea change this way: “When the New Deal Court repudiated Lochner after 1937, it was repudiating market freedom as an ultimate constitutional value, and declaring that, henceforth, economic regulation would be treated as a utilitarian question of social engineering.”78 The Lochner majority was last cited approvingly by the Supreme Court in 1941.79

Holmes’s dissent was instrumental in consigning the Lochner decision to legal hell. According to liberal Justice Felix Frankfurter, the dissent was “the turning point” in a struggle against “the unconscious identification of personal views with constitutional sanction.”80 Echoing Holmes, conservative theorist Robert Bork has reviled Lochner as a “notorious” decision that enforced “an individual liberty that is nowhere to be found in the Constitution itself.”81 Added Bork: “To this day, when a judge simply makes up the Constitution he is said ‘to Lochnerize.’ . . .”82 Other commentators agree: “Supreme Court justices consistently use Lochner as an epithet to hurl at their colleagues when they disapprove of a decision declaring a law unconstitutional.”83 “We speak of ‘lochnerizing’ when we wish to imply that judges substitute their policy preferences for those of the legislature.”84 Typical of modern attitudes are the Washington Post’s reference to the “discredited Lochner era”85 and the New York Times’s observation that the era “is considered one of the court’s darkest.”86

With the canonization of Holmes’s Lochner dissent, a miasma of judicial timidity seeped into America’s courtrooms. More than sixty years have elapsed since the Supreme Court last struck down an economic regulation on grounds that it violated unenumerated property or contract rights. And in the noneconomic realm, the Court’s Lochner-esque decision in Roe v. Wade (1973) generated fierce public and professional backlash, discouraging further forays of that type. In Roe, a decision “widely regarded as the second coming of Lochner,” a sharply divided Court held that the Constitution protects a woman’s right to abort her first-trimester fetus.87 Here, one must carefully distinguish the method of that Court’s decision from its specific content. Because the Constitution does not expressly authorize states to ban abortion, the Court was entitled to evaluate the law’s validity in light of the Constitution’s fundamental commitment to protecting individual liberty (including that of women, regardless of any errors the Founders may have made on that score). One can agree with that liberty-oriented approach and yet still acknowledge the Court’s failure to apply it persuasively. (Essentially, the Roe Court recited a grab bag of pro-liberty clauses and precedents and invited the reader to choose a favorite.)88

Predictably, however, conservatives have aimed their critical arrows — dipped in the venom of Holmes’s dissent — straight at Roe v. Wade’s conclusion that the Constitution protects individual liberty. Those arrows struck home. A large segment of the public now believes that any such holding, no matter how firmly grounded in the Constitution’s language and history, is merely rhetorical camouflage for judges’ assumption of extra-constitutional power to impose their own personal opinions on the law.89 Little wonder that recurring public protests and even death threats have dogged the Court ever since. Fear of similar backlash has hindered the administration of justice in other areas as well. For example, the Court needed seventeen years of hand-wringing to finally decide, in Lawrence v. Texas (2003), that the Constitution does not permit gays to be thrown in jail for private, consensual sex.90 Dissenting in that case, Justice Scalia referenced Lochner obliquely, asserting that the Constitution no more protects homosexual sodomy than it does the right to work “more than 60 hours per week in a bakery.”91

Notwithstanding occasional hard-won exceptions, the emasculated Supreme Court now spurns virtually every opportunity to search the Constitution for underlying principles that place limits on state power. A few years ago, when Susette Kelo’s house was seized under the eminent domain power for transfer to a private developer in Connecticut, she took her case to the Supreme Court — only to be told that the Constitution offers her no protection.92 Abigail Burroughs, terminally ill with head and neck cancer, died several years before the Court disdainfully turned its back on her survivors’ plea for a constitutional right to use experimental life-saving medicine unapproved by the Food and Drug Administration.93 And Dr. Harold Glucksberg, a physician whose terminally ill patient sought a painless suicide, lost his case on the grounds that offering voluntary medical assistance at the end of life is not “deeply rooted in this Nation’s history and tradition.”94 Cases such as these have made it painfully clear to Americans that their Constitution — as interpreted by the modern Supreme Court — imposes no principled limits on the state’s power to dispose of their property and lives. If more proof is necessary, observe that both the Bush and Obama administrations, in recent highly publicized legislation, have dramatically expanded government control of the economy and of private businesses without any discernible worry that the Supreme Court will trouble itself over the rampant abrogation of private property and contract rights.

Lochner’s Other Legacy: An Empty Debate

By arguing that the Constitution is nothing but a highly formalized mechanism for molding subjective opinions into law, Holmes shifted the terms of public debate toward discussion of whose subjective opinions count. Beginning in the 1980s, conservatives such as Edwin Meese III, the U.S. attorney general under Ronald Reagan, and Robert Bork, federal judge and failed Supreme Court nominee, successfully framed the alternatives for constitutional interpretation in Lochnerian terms. According to this view, judges have only two options: to emulate the majority in Lochner by brazenly enforcing their own subjective opinions — or to emulate Holmes in dissent by deferring to the subjective opinions of society (as manifested by legislative vote). In today’s parlance, this means judges must choose between “judicial activism” and “judicial restraint.”95 On this basis, Holmesian conservatives routinely condemn Lochner v. New York, Roe v. Wade, Lawrence v. Texas, and similar cases as illegitimate exercises of raw judicial power, “activist” decisions unauthorized by the Constitution and dangerous to the body politic. According to Bork, Lochner “lives in the law as the symbol, indeed the quintessence, of judicial usurpation of power.”96

Today’s liberals generally find themselves on the defensive against such conservative attacks. On the liberal view, a mechanically applied doctrine of “judicial restraint” would improperly tie judges’ hands, allowing legislative majorities unrestrained power to enact any law not expressly forbidden by the Constitution. As Judge Posner has observed, “This would mean that a state could require everyone to marry, or to have sexual intercourse at least once a month, or that it could take away every couple’s second child and place it in a foster home.”97 But as an alternative to the folly of such “judicial restraint,” liberals offer dubious interpretive methods of their own. Rather than refute Holmes’s attack on the Lochner majority, liberals contend that the Constitution “must draw its meaning from the evolving standards of decency that mark the progress of a maturing society.”98 Or, as Al Gore pledged during his 2000 presidential run, “I would look for justices of the Supreme Court who understand that our Constitution is a living and breathing document, that it was intended by our founders to be interpreted in the light of the constantly evolving experience of the American people.”99

In sum, neither conservatives nor liberals have advanced a method of interpretation aimed at objectively identifying and applying constitutional principles that limit the power of government over the individual. Instead, both factions accept the Holmesian model that makes all government action a matter of subjective social opinions. Although the factions differ in detail — conservatives are more likely to venerate freeze-dried opinions from centuries past, whereas liberals prefer a bubbling stew of modern sentiments — the current controversy is nothing but Lochner warmed over. As one legal history states more formally, “The majority and dissenting opinions in Lochner stand today as landmarks in the literature of judicial activism and restraint.”100 So long as Lochner sets the terms of debate, Americans will continue to believe they face a false choice between judicial eunuchs who passively allow legislatures to dominate a helpless populace — and judicial dictators who actively impose their own personal prejudices on that same helpless populace. Given those alternatives, it is no wonder that Holmesian conservatives are winning the public debate. Any citizen who wants to have some slight influence on the “dominant opinion” will more likely prefer an all-powerful legislature beholden to the voting public, as against an all-powerful, life-tenured judiciary beholden to no one.

In recent decades, the bellwether of this struggle between “activism” and “restraint” has been Roe v. Wade — and so it will continue, until that fragile decision is either overruled or placed on a sound constitutional basis.101 For many years now, the addition of a single conservative Justice would have been enough to tip the balance against Roe. If that decision is finally overruled on Holmesian grounds, then the last ragged vestiges of a principled, content-filled Constitution will have succumbed. After that, it may become virtually impossible to hear the voices of the Constitution’s framers above the clamor of pressure groups competing to forge the next “dominant opinion.” Ultimately, the outcome may depend on whether dissenters from the Holmesian consensus continue to be exposed and ostracized at the judicial nomination stage, by means of the Lochner litmus test.

The Lochner Litmus Test

During his lifetime, Holmes took pleasure from the prospect that his work would have enduring influence after his death. He once spoke, with characteristic eloquence, of feeling

the secret isolated joy of the thinker, who knows that, a hundred years after he is dead and forgotten, men who never heard of him will be moving to the measure of his thought — the subtle rapture of a postponed power, which the world knows not because it has no external trappings, but which to his prophetic vision is more real than that which commands an army.102

And indeed, the world is still “moving to the measure of his thought.” Holmes’s dissent is largely responsible for the “modern near-consensus that unelected justices have no mandate ‘to impose a particular economic philosophy upon the Constitution.’”103 Notably, President Obama’s regulatory czar, Cass Sunstein, is a former constitutional law professor who wrote an article, “Lochner’s Legacy,” stating that “for more than a half-century, the most important of all defining cases has been Lochner v. New York.”104 In this post-Lochner world, it is not intellectually respectable to hold that the Constitution embodies any particular view of the relationship between the individual and the state. A judge who dares to suggest otherwise will inevitably be accused of resurrecting Lochner. And a judicial nominee who fails to pledge allegiance to Holmes’s empty Constitution may be grilled and required to recant, on pain of losing a confirmation vote.

Consider two examples. Clarence Thomas, before being nominated to the Supreme Court, had said in a speech that “the entire Constitution is a Bill of Rights; and economic rights are protected as much as any other rights.”105 When Thomas’s nomination reached the Senate, noted liberal constitutional scholar Laurence Tribe opposed confirmation in a New York Times op-ed that said: “Thomas would return the Court to the Lochner (1905) era — an era in which the Court was accused of sacrificing the health and safety of American workers at the altar of laissez-faire capitalism.”106 Thomas later went on the record as rejecting a return to the Lochner approach and endorsing the line of cases that discredited the majority opinion.107 The Senate then confirmed his appointment, but by a razor-thin margin (52–48). Similarly, in another confirmation fight fourteen years later, a young senator (and former law professor) named Barack Obama spoke out against the nomination of California appellate judge Janice Rogers Brown to the federal bench. It seems that Brown, in a public speech, had dared to disagree with Holmes, asserting that his “Lochner dissent has troubled me — has annoyed me — for a long time . . . because the framers did draft the Constitution with a surrounding sense of a particular polity in mind. . . .”108 Obama leaped to the attack: “For those who pay attention to legal argument, one of the things that is most troubling is Justice Brown’s approval of the Lochner era of the Supreme Court.”109 Predictably, Brown backtracked during her confirmation hearings, pledging that she would not really pursue a Lochner approach.110 She was then confirmed, narrowly, by a 56–43 vote.111

As President Obama and the Senate gear up to select a replacement for retiring Justice David Souter, the Lochner litmus test will once again serve as a powerful tool for identifying a nominee’s fundamental approach to construing the Constitution. The alternatives embodied in Lochner will be trotted out once again, and candidates will be invited to condemn the discredited majority approach and endorse the Holmesian view.

But what if the opinions set forth in Lochner do not exhaust the alternatives? What if judges can properly aspire to be, not petty despots or passive rubber stamps, but objective interpreters of a constitution by means of its fundamental principles? The question deserves attention, before the Supreme Court sinks into a timorous lassitude from which it cannot recover.

The Path Not Taken

Justice Holmes took advantage of clashing precedents to claim that the Constitution lacks all content, that the nation’s fundamental law is agnostic on the issue of man’s relation to the state. But Holmes was wrong about the empty Constitution. Not only is the document saturated with substantive content, but the deliberate disregard of that content inevitably left an interpretive vacuum where the Founders’ framework once stood, a vacuum that had to be filled by some other principle of man’s relation to the state. If the Lochner dissent was to be taken seriously, the individual had to be treated on principle as a rightless creature doomed to cringe before the “natural outcome” of society’s “dominant opinion,” and the Constitution had to be regarded on principle as an institutional juggernaut imposing society’s shifting, subjective opinions on recalcitrant individuals. Thus by intellectual sleight of hand, Holmes managed to radically redefine the Constitution’s content while presenting himself as the very soul of content-neutrality. And for more than a century now, we have been “moving to the measure of his thought,” following Holmes’s path into that shadowy, clamorous jungle where pressure groups struggle incessantly for the privilege of imposing their arbitrary “dominant opinions” on others, by force of law — while individuals are legally helpless to resist ever-growing assaults on their lives, liberties, and property. Only by retracing our steps and revisiting the Lochner decision with a different mind-set can we hope to find a clearer road.

The Lochner case arrived at the Supreme Court in the posture of a dispute over whether a restriction on working hours was a health law or not. But in his dissent Holmes highlighted a more fundamental issue: Does the Constitution protect the principle of liberty of contract? If so, then the government’s so-called police power is and must be severely limited — limited by the principle of the inalienable rights of the individual. But if a principle is a general truth that guides action in every case where it applies, brooking no exceptions, then surely neither the “police power” nor “liberty of contract,” as defined by the Court at that time, qualified as a genuine principle. The vague and undefinable “police power” gave society virtually unlimited control over the individual — yet even in Holmes’s view, that power was somehow subordinate to the equally vague “traditions of our people and our law.” On the other hand, “liberty of contract” supposedly protected an individual’s right to dispose of his labor and property — except in the dozens of situations where the police power could override it. How could a judge possibly know when to apply one and not the other? There was no objective basis for choosing.

Despite the lack of clear, consistent principles to govern cases such as Lochner’s, a Supreme Court Justice with Holmes’s penetrating philosophical skills could have explained, even in 1905, why both Holmes and the majority were erring in their approaches to Lochner’s case. That explanation would have had to begin with the realization that every constitution embodies some particular view of the individual’s relation to the state. Although Holmes was wrong to deny that the Constitution has content, the majority was also wrong in its interpretation of that content. On that score, it was surely preposterous for Justice Peckham to concede that individuals’ liberty and property are held in thrall to each state’s “vaguely termed police powers, the exact description and limitation of which have not been attempted by the courts.” After all, the term “police power” is not even mentioned in the Constitution, and nowhere does the document require that states be allowed to legislate for the “safety, health, morals and general welfare of the public,” a shapeless pile of verbiage that could excuse almost any law, regardless of content. Although it is true that the states in a federal system must be recognized as possessing power to enact and enforce laws, there was never any need to define that power in a way that threatened the Constitution’s underlying framework of protection for individual rights. Under a more objective concept of New York’s “police power,” therefore, the Court’s inquiry would have shifted to whether the Bakeshop Act protected Lochner’s rights or violated them.

As to what Lochner’s individual rights entailed, again the Constitution’s content could not properly be ignored. For example, the document’s references to the inviolable “obligation of contracts” (Article I), unenumerated rights “retained by the people” (Ninth Amendment), citizens’ inviolable “privileges or immunities” (Fourteenth Amendment), and individuals’ rights to “life, liberty, or property” (Fourteenth Amendment), all would have been recognized as relevant. Although it would not have been self-evident which clauses might apply to Lochner’s case or precisely how they should be interpreted, the Court could have taken first steps toward limiting the amorphous police power. How? By defining liberty of contract as a principle subsuming an individual’s unassailable freedom to trade his property, his money, and his labor according to his own judgment. Contra Holmes, general propositions can decide concrete cases, if those propositions are objectively defined. But such definition is impossible, at the constitutional level, so long as judges refuse to acknowledge that government exists for any particular purpose.

None of this is to deny that constitutional interpretation can be fraught with difficulty. Reasonable judges can arrive at different interpretations, especially in cases at the intersection of individual rights and legitimate exercises of government power. And even the most incisive interpretations cannot, and should not attempt to, rewrite the Constitution. So, for example, as long as the Post Office clause resides in Article I, the Supreme Court cannot abolish that ponderous government monopoly — even if it violates liberty of contract in obvious ways. Moreover, the Court must pay due respect to precedent, while never allowing an injustice to survive any longer than may be necessitated by innocent reliance on prior erroneous rulings. But in the mind of an objective judge, none of these pitfalls will obscure the fact that the Constitution has content — a specific view of the proper relation between man and the state — which content cannot be ignored without betraying the Court’s duty of objective interpretation. To take the purpose of government into account when interpreting the Constitution’s express language is not a judicial usurpation of power. On the contrary, it is an essential part of objective interpretation, no more in need of special authorization than is the use of concepts or logic.112

Ayn Rand once observed that Justice Holmes “has had the worst philosophical influence on American law.”113 The nihilistic impact of his Lochner dissent alone is enough to justify her claim. But it is not too late for a new generation of jurists to target that influence for elimination, by embarking upon the mission that Holmes and his brethren should have undertaken a century ago. Tomorrow’s jurists will need to honestly confront Lochner, that “most important of all defining cases” in American jurisprudence, with the understanding that neither the majority nor the dissents in that case properly took into account the Constitution’s substantive content. They will need to challenge the false Lochnerian alternatives of “judicial activism” and “judicial restraint.” And they will need to question whether, and on what grounds, Lochner should continue to serve as a litmus test for Supreme Court appointees. Once the “ghost of Lochner” has ceased to haunt American constitutional law, the Supreme Court can assume its proper role as ultimate legal authority on the objective meaning of America’s founding document.


About The Author

Tom Bowden

Analyst and Outreach Liaison, Ayn Rand Institute

“The Objective Standard”

 

Endnotes

Acknowledgments: The author would like to thank Onkar Ghate for his invaluable suggestions and editing, Adam Mossoff and Larry Salzman for their helpful comments on earlier drafts, Peter Schwartz for sharing his thoughts on legal interpretation, and Rebecca Knapp for her editorial assistance.

1 Lochner v. New York, 198 U.S. 45, 65 (1905) (Holmes, J., dissenting).

2 Sheldon M. Novick, Honorable Justice: The Life of Oliver Wendell Holmes (Boston: Little, Brown and Co., 1989), p. 283.

3 Charles Evans Hughes, The Supreme Court of the United States, quoted in Catherine Drinker Bowen, Yankee from Olympus: Justice Holmes and His Family (Boston: Little, Brown and Co., 1943), p. 373.

4 Richard A. Posner, Law and Literature (Cambridge, MA: Harvard University Press, 1998), p. 271; G. Edward White, Justice Oliver Wendell Holmes: Law and the Inner Self (New York: Oxford University Press, 1993), p. 324.

5 David E. Bernstein, review of Michael J. Phillips, The Lochner Court, Myth and Reality: Substantive Due Process from the 1890s to the 1930s, Law and History Review, vol. 21 (Spring 2003), p. 231.

6 Posner, Law and Literature, p. 271; Bernard H. Siegan, Economic Liberties and the Constitution (Chicago: University of Chicago Press, 1980), p. 203.

7 Ronald Dworkin, Freedom’s Law: The Moral Reading of the American Constitution (Cambridge, MA: Harvard University Press, 1997), pp. 82, 208; Richard A. Posner, Overcoming Law (Cambridge, MA: Harvard University Press, 1995), pp. 179–80.

8 Albert W. Alschuler, Law Without Values: The Life, Work, and Legacy of Justice Holmes (Chicago: University of Chicago Press, 2000), p. 63; Posner, Law and Literature, p. 269.

9 Paul Kens, Lochner v. New York: Economic Regulation on Trial (Lawrence: University Press of Kansas, 1998), pp. 7–8.

10 Ibid., p. 6.

11 “DuPont Rid of Cellophane,” New York Times, June 30, 1986, http://www.nytimes.com/1986/06/30/business/du-pont-rid-of-cellophane.html?&pagewanted=print (last accessed May 14, 2009).

12 Kens, Lochner v. New York, p. 13.

13 Ibid., p. 7.

14 73 A.D. 120, 128 (N.Y. App. Div. 1902).

15 Kens, Lochner v. New York, p. 13.

16 Ibid.

17 Ibid., pp. 8–9.

18 Ibid., p. 13. In that era, hourly wages were virtually unknown; laborers were hired by the day, or sometimes by the week.

19 Ibid., p. 10.

20 David Montgomery, Beyond Equality: Labor and the Radical Republicans, 1862–1872 (Urbana, IL: University of Illinois Press, 1981), p. 252 (emphasis in original).

21 Kens, Lochner v. New York, pp. 63–64; Session Laws of New York, 1895, vol. 1, ch. 518.

22 Kens, Lochner v. New York, p. 65.

23 Ibid., p. 21.

24 Ibid., p. 67.

25 Ibid., p. 90.

26 Ibid., p. 89; Peter Irons, A People’s History of the Supreme Court (New York: Penguin, 1999), p. 255.

27 Kens, Lochner v. New York, p. 89.

28 Ibid., p. 90.

29 Ibid., p. 89.

30 Ibid., p. 90.

31 Ibid., pp. 91–92. Ironically, Lochner’s team of appellate lawyers included one Henry Weismann, who had actually lobbied on behalf of the bakers’ union for passage of the Bakeshop Act in 1895.

32 Ibid., p. 89.

33 73 A.D. at 128.

34 73 A.D. at 127.

35 New York v. Lochner, 69 N.E. 373, 376, 378-79 (N.Y. 1904).

36 69 N.E. at 380.

37 69 N.E. at 376, 381.

38 Christopher Gustavus Tiedeman, A Treatise on the Limitations of Police Power in the United States (St. Louis: The F.H. Thomas Law Book Co., 1886), p. 181, quoted in New York v. Lochner, 73 A.D. 120, 126 (N.Y. App. Div. 1902).

39 69 N.E. at 381 (Gray, J., concurring).

40 69 N.E. at 388 (O’Brien, J., dissenting).

41 69 N.E. at 386 (O’Brien, J., dissenting).

42 83 U.S. 36, 88 (Field, J., dissenting).

43 165 U.S. 578, 589 (1897).

44 Kens, Lochner v. New York, p. 117.

45 Novick, Honorable Justice, p. 280.

46 Ibid., p. 281.

47 198 U.S. 53 (emphasis added).

48 198 U.S. at 61.

49 198 U.S. at 57; see also Howard Gillman, The Constitution Besieged: The Rise and Demise of Lochner Era Police Powers Jurisprudence (Durham, NC: Duke University Press, 1993).

50 198 U.S. at 64.

51 198 U.S. at 53, 61.

52 198 U.S. at 58.

53 The grounds on which Judges McLennan and Williams dissented, in the first New York appellate court, are unclear, as they did not deliver written opinions. 73 A.D. at 128.

54 Unless otherwise noted, all quotations of Holmes in this section are from his dissent, 198 U.S. at 74–76.

55 Jeffrey Rosen, The Supreme Court: The Personalities and Rivalries That Defined America (New York: Times Books/Henry Holt and Company, 2007), p. 113.

56 Posner, Overcoming Law, p. 195 (emphasis in original).

57 A “word or saying used by adherents of a party, sect, or belief and usually regarded by others as empty of real meaning.” Merriam-Webster Online, “shibboleth,” http://www.merriam-webster.com/dictionary/shibboleth (last accessed January 28, 2009).

58 A Lexis/Nexis search performed on February 27, 2009, indicated that this sentence had been quoted verbatim in 59 reported appellate cases, 98 news reports, and 338 law review articles.

59 Richard Hofstadter, Social Darwinism in American Thought (New York: George Braziller, Inc., rev. ed., 1959), p. 33.

60 Herbert Spencer, Social Statics: The Conditions Essential to Human Happiness Specified, and the First of Them Developed (New York: Robert Schalkenbach Foundation, 1995), pp. 25, 86.

61 Ibid., p. 86.

62 Ibid., pp. 95–96 (emphasis in original).

63 Richard A. Posner, ed., The Essential Holmes: Selections from the Letters, Speeches, Judicial Opinions, and Other Writings of Oliver Wendell Holmes, Jr. (Chicago: University of Chicago Press, 1992), p. xxv.

64 Article I, section 10, clause 1.

65 See Harry Binswanger, “The Constitution and States’ Rights,” The Objectivist Forum, December 1987, pp. 7–13.

66 This was a 19th-century term of art denoting “fundamental rights” and “substantive liberties” of the individual, to be protected against “hostile state action.” Michael Kent Curtis, No State Shall Abridge: The Fourteenth Amendment and the Bill of Rights (Durham, NC: Duke University Press, 1986), pp. 47–48.

67 Leonard Peikoff, The Ominous Parallels: The End of Freedom in America (New York: Stein and Day, 1982), p. 27.

68 Robert Maynard Hutchins, ed., Great Books of the Western World (Volume 46: Hegel) (Chicago: W. Benton, 1952), p. 123.

69 As Holmes wrote in another dissent years later, concerning liberty of contract, “Contract is not specially mentioned in the text that we have to construe. It is merely an example of doing what you want to do, embodied in the word liberty. But pretty much all law consists in forbidding men to do some things that they want to do, and contract is no more exempt from law than other acts.” Adkins v. Children’s Hospital, 261 U.S. 525, 568 (1923) (Holmes, J., dissenting).

70 Sheldon M. Novick, “Oliver Wendell Holmes,” The Oxford Companion to the Supreme Court of the United States (New York: Oxford University Press, 1992), p. 410.

71 Letter to Harold Laski, March 4, 1920, in Mark deWolfe Howe, ed., Holmes-Laski Letters: The Correspondence of Mr. Justice Holmes and Harold J. Laski 1916–1935, vol. 1 (Cambridge, MA: Harvard University Press, 1953), p. 249.

72 Emphasis added.

73 Some legal historians hold that the Lochner Era actually began in 1897, when the Supreme Court in Allgeyer v. Louisiana struck down a state insurance law that interfered with contractual freedom.

74 Adkins v. Children’s Hospital, 261 U.S. 525 (1923); Meyer v. Nebraska, 262 U.S. 390 (1923); Pierce v. Society of Sisters, 268 U.S. 510 (1925).

75 Adam Cohen, “Looking Back on Louis Brandeis on His 150th Birthday,” New York Times (November 14, 2006), p. A26. In West Coast Hotel v. Parrish, 300 U.S. 379 (1937), the Court upheld a state minimum-wage law for women.

76 United States v. Carolene Products, 304 U.S. 144, 152 n.4 (1938).

77 Nebbia v. New York, 291 U.S. 502, 537 (1934).

78 Bruce Ackerman, We the People: Transformations (Cambridge, MA: Harvard University Press, 2000), p. 401.

79 United States v. Darby, 312 U.S. 100 (1941); see Ackerman, We the People, p. 375.

80 Quoted in White, Justice Oliver Wendell Holmes, p. 362.

81 Robert Bork, “Individual Liberty and the Constitution,” The American Spectator, June 2008, pp. 30, 32.

82 Robert H. Bork, The Tempting of America: The Political Seduction of the Law (New York: Touchstone, 1990), p. 44.

83 Bernstein, review of Phillips, The Lochner Court, Myth and Reality, p. 231.

84 William M. Wiecek, Liberty Under Law: The Supreme Court in American Life (Baltimore: Johns Hopkins University Press, 1988), p. 124.

85 Bruce Fein, “Don’t Run from the Truth: Why Alito Shouldn’t Deny His Real Convictions,” Washington Post (December 18, 2005), p. B1.

86 Adam Cohen, “Last Term’s Winner at the Supreme Court: Judicial Activism,” New York Times (July 9, 2007), p. A16.

87 Posner, Law and Literature, p. 271; 410 U.S. 113 (1973).

88 Roe v. Wade, 410 U.S. 113, 152 (1973).

89 According to a 2005 Pew Research Center public opinion poll, 26 percent of respondents believe the Supreme Court should “completely overturn” its decision in Roe v. Wade: http://people-press.org/questions/?qid=1636990&pid=51&ccid=51#top (last accessed May 4, 2009). A 2008 Gallup poll on the same issue found that 33 percent would like to see the decision overturned: http://www.gallup.com/poll/110002/Will-Abortion-Issue-Help-Hurt-McCain.aspx (last accessed May 4, 2009). On average, about one-third of Americans disapprove of the way the Supreme Court is doing its job: http://www.gallup.com/poll/18895/Public-Divided-Over-Future-Ideology-Supreme-Court.aspx (last accessed May 4, 2009).

90 In the 1986 case of Bowers v. Hardwick, 478 U.S. 186, the Supreme Court held that homosexual conduct between consulting adults in their home could be criminally punished. Not until 2003 did the Court, in Lawrence v. Texas, 539 U.S. 558, strike down a state law that put gays in jail — and then only by a 6–3 vote.

91 539 U.S. 558, 592 (Scalia, J., dissenting).

92 Kelo v. City of New London, 545 U.S. 469 (2005).

93 “Court Declines Experimental Drugs Case,” USA Today, January 14, 2008, http://www.usatoday.com/news/washington/2008-01-14-280098622_x.htm (last accessed April 30, 2009).

94 Washington v. Glucksberg, 521 U.S. 702, 720–21 (1997).

95 Although today’s legal professionals debate such interpretive concepts as “public-meaning originalism,” “living constitutionalism,” and judicial “humility,” it is the activism/restraint dichotomy that continues to dominate public discussion outside the courts and academia.

96 Bork, The Tempting of America, p. 44.

97 Richard A. Posner, Sex and Reason (Cambridge, MA: Harvard University Press, 1992), p. 328.

98 Trop v. Dulles, 356 U.S. 86, 101 (1958).

99 Transcript of Democratic Presidential Debate, Los Angeles, California, March 1, 2000, http://edition.cnn.com/TRANSCRIPTS/0003/01/se.09.html (last accessed April 30, 2009).

100 Ronald M. Labbé and Jonathan Lurie, The Slaughterhouse Cases: Regulation, Reconstruction, and the Fourteenth Amendment (Lawrence: University Press of Kansas, 2003), p. 249.

101 In a 1992 case, Planned Parenthood v. Casey, 505 U.S. 833, a plurality of the Supreme Court singled out the Fourteenth Amendment’s concept of “liberty” as the proper basis for upholding a woman’s qualified right to abortion. However, the Court also reaffirmed Roe’s holding that the states have “their own legitimate interests in protecting prenatal life.” 505 U.S. at 853. Hence this entire line of cases remains vulnerable to the Holmesian critique in Lochner. If the “police power” can be interpreted to have no limits, then why not the state’s “legitimate interests in protecting prenatal life”?

102 Posner, The Essential Holmes, p. 220 (correcting Holmes’s obsolete spelling of “subtle” as “subtile”). In a similar vein, Holmes gave a eulogy in 1891 praising men of “ambition” whose “dream of spiritual reign” leads them to seek the “intoxicating authority which controls the future from within by shaping the thoughts and speech of a later time.” Posner, The Essential Holmes, p. 214.

103 Stuart Taylor Jr., “Does the President Agree with This Nominee?” TheAtlantic.com, May 3, 2005, http://www.theatlantic.com/magazine/archive/2005/05/does-the-president-agree-with-this-nominee/304012/ (last accessed April 30, 2009).

104 Cass R. Sunstein, “Lochner’s Legacy,” Columbia Law Review, vol. 87 (June 1987), p. 873.

105 Quoted in Scott Douglas Gerber, First Principles: The Jurisprudence of Justice Clarence Thomas (New York: NYU Press, 2002), p. 54.

106 Ibid., p. 54.

107 Ibid., pp. 54–55; Dworkin, Freedom’s Law, pp. 308–10.

108 Janice Rogers Brown, “‘A Whiter Shade of Pale’: Sense and Nonsense—The Pursuit of Perfection in Law and Politics,” address to Federalist Society, University of Chicago Law School, April 20, 2000, http://www.communityrights.org/PDFs/4-20-00FedSoc.pdf.

109 “Remarks of U.S. Senator Barack Obama on the nomination of Justice Janice Rogers Brown,” June 8, 2005, http://obamaspeeches.com/021-Nomination-of-Justice-Janice-Rogers-Brown-Obama-Speech.htm (last accessed January 29, 2009).

110 Taylor, “Does the President Agree with This Nominee?” supra.

111 In addition, the late Bernard H. Siegan, a professor at the University of San Diego School of Law, was rejected by the Senate for a seat on the U.S. Court of Appeals based largely on the support for the Lochner decision expressed in his book, Economic Liberties and the Constitution. See Larry Salzman, “Property and Principle: A Review Essay on Bernard H. Siegan’s Economic Liberties and the Constitution,” The Objective Standard, vol. 1, no. 4 (Winter 2006–2007), p. 88.

112 Promising work on objective judicial interpretation is being undertaken by Tara Smith, professor of philosophy, University of Texas at Austin. See “Why Originalism Won’t Die—Common Mistakes in Competing Theories of Judicial Interpretation,” Duke Journal of Constitutional Law & Public Policy, vol. 2 (2007), p. 159; “Originalism’s Misplaced Fidelity,” Constitutional Commentary, vol. 25, no. 3 (forthcoming, August 2009).

113 Quoted in Marlene Podritske and Peter Schwartz, eds., Objectively Speaking: Ayn Rand Interviewed (Lanham, MD: Lexington Books, 2009), p. 60.

America’s Unfree Market

by Yaron Brook and Don Watkins | May 2009

Since day one of the financial crisis, we have been told that the free market has failed. But this is a myth. Regardless of what one thinks were the actual causes of the crisis, the free market could not have been the source because, whatever you wish to call America’s economy post World War I, you cannot call it a free market. America today is a mixed economy — a market that retains some elements of freedom, but which is subject to pervasive and entrenched government control.

The actual meaning of “free market” is: the economic system of laissez-faire capitalism. Under capitalism, the government’s sole purpose is to protect the individual’s rights to life, liberty, property, and the pursuit of happiness from violation by force or fraud. This means a government is limited to three basic functions: the military, the police, and the court system. In a truly free market, there is no income tax, no alphabet agencies regulating every aspect of the economy, no handouts or business subsidies, no Federal Reserve. The government plays no more role in the economic lives of its citizens than it does in their sex lives.

Thus a free market is a market totally free from the initiation of physical force. Under such a system, individuals are free to exercise and act on their own judgment. They are free to produce and trade as they see fit. They are fully free from interference, regulation, or control by the government.

Historically, a fully free market has not yet existed. But it was America’s unsurpassed economic freedom that enabled her, in the period between the Civil War and World War I, to become an economic juggernaut, and the symbol of freedom and prosperity.

That freedom has largely been curtailed. But one sector that remains relatively free is America’s high-tech industry. Throughout the late 20th century, the computer industry had no significant barriers to entry, no licensing requirements, no government-mandated certification tests. Individuals were left free for the most part to think, produce, innovate and take risks: if they succeeded, they reaped the rewards; if they failed, they could not run to Washington for help.

The results speak for themselves.

Between 1981 and 1985, about 6 million personal computers were sold worldwide. During the first half of this decade, that number climbed to 855 million. Meanwhile, the quality of computers surged as prices plummeted. For instance, the cost per megabyte for a personal computer during the early 1980s was generally between $100 and $200; today it’s less than a cent.
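To make the scale of that decline concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the round figures quoted above, so the endpoints are illustrative rather than precise market data:

# Rough scale of the price decline described above, using the article's own
# figures: roughly $100 to $200 per megabyte in the early 1980s versus less
# than one cent per megabyte today. The endpoints are illustrative.
early_1980s_prices = (100.0, 200.0)   # dollars per megabyte, low and high
today_price = 0.01                    # dollars per megabyte (upper bound)

low, high = (p / today_price for p in early_1980s_prices)
print(f"Cost per megabyte fell by a factor of at least {low:,.0f} to {high:,.0f}")
# Prints: Cost per megabyte fell by a factor of at least 10,000 to 20,000

Even on the most conservative of these endpoints, the decline spans four orders of magnitude, which is the scale of improvement the authors attribute to the sector’s relative freedom.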

That is what a free economy would look like: unbridled choice in production and trade with innovation and prosperity as the result.

But this is hardly what the economy looks like today.

The latest federal budget was $3.6 trillion, up from less than $1 billion a century ago. Taxes eat up nearly half of the average American’s income. A mammoth welfare state doles out favors to individuals and to businesses. Hundreds of thousands of regulations direct virtually every aspect of our lives. The Federal Reserve holds virtually unlimited control over the U.S. monetary and banking systems.

All of this represents the injection of government force into the market. And just as the elimination of force makes possible all the tremendous benefits of the free market, the introduction of force into markets undermines those benefits.

Nowhere is this clearer than in the highly controlled U.S. automotive industry and in the housing market.

The U.S. automotive industry is subject to thousands of regulations, but most relevant here are pro-union laws, such as the Wagner Act, which force Detroit to deal with the United Auto Workers (UAW), and the Corporate Average Fuel Economy (CAFE) law. These laws – not some innate inability to produce good cars – put American companies at a severe competitive disadvantage with foreign automakers.

In a free market, individuals would be free to voluntarily form and join unions, while employers would have the freedom to deal with those unions or not. But under current law, the UAW is protected by the coercive power of government. Individuals who wish to work for Detroit auto companies are forced to join the UAW — and Detroit is forced to hire UAW workers. This gives the UAW the ability to command above-market compensation for its members, to the detriment of the auto companies.

Compounding this, CAFE standards force Detroit to manufacture small, fuel-efficient cars in domestic (UAW) factories. These cars are notorious money-losers for American auto companies, swallowing up tens of billions of dollars. But under CAFE, the Big Three are barred by law from focusing on their more profitable lines of larger vehicles, from producing their fuel-efficient fleet overseas, or even from using the threat of offshoring as bargaining leverage.

Imagine if the same sort of anti-market policies imposed on Detroit had been applied to the computer industry. Suppose that in the mid-1980s, as IBM-compatible computers were battling Apple for preeminence, the government had decided that it favored Apple computers and would give tax incentives for computer buyers to purchase them. This would have hobbled and very likely wiped out IBM, Intel, Microsoft, and thousands of other companies. And while today Apple is an innovative, well-managed company, it is so because of the market pressures that required it to shape up or go bankrupt – pressures that would not have existed had Washington lent it a helping hand.

Now turn from the auto industry to housing.

The conventional view of the housing crisis is that it was the result of a housing market free of government control. But, once again, the notion that the housing market was free is a total fantasy.

On a free market, the government would neither encourage nor discourage homeownership. Individuals would be free to decide whether to buy or to rent. Lenders would lend based on their expectation of a profit, knowing that if they make bad loans, they will pay the price. Interest rates would be determined by supply and demand – not by government fiat.

But that is not what happened in our controlled market. Instead, the government systematically intervened to encourage homeownership and real estate speculation. Think: Fannie and Freddie, the Community Reinvestment Act, tax code incentives for flipping homes; the list goes on and on. This was a free market?

Unquestionably, today’s crisis is complex, and to identify its cause is not easy. But the opponents of the free market are not interested in identifying the cause. Their aim since day one has been to silence the debate and declare the matter settled: we had a free market, we had a financial crisis, and therefore, the free market was to blame. The only question, they would have us believe, is how, not whether, the government should intervene.

But they are wrong. There was no free market. And when you look across the American economy, what you see is that the freer parts, like the high-tech industry, are the most productive, and the more controlled parts, like the automotive, banking and housing industries, are in crisis.

Is this evidence that we need more government intervention or more freedom?

About The Authors

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Don Watkins

Former Fellow (2006-2017), Ayn Rand Institute

Energy at the Speed of Thought: The Original Alternative Energy Market

by Alex Epstein | Summer 2009 | The Objective Standard

The most important and most overlooked energy issue today is the growing crisis of global energy supply. Cheap, industrial-scale energy is essential to building, transporting, and operating everything we use, from refrigerators to Internet server farms to hospitals. It is desperately needed in the undeveloped world, where 1.6 billion people lack electricity, which contributes to untold suffering and death. And it is needed in ever-greater, more-affordable quantities in the industrialized world: Energy usage and standard of living are directly correlated.1

Every dollar added to the cost of energy is a dollar added to the cost of life. And if something does not change soon in the energy markets, the cost of life will become a lot higher. As demand increases in the newly industrializing world, led by China and India,2 supply stagnates3 — meaning rising prices as far as the eye can see.

What is the solution?

We just need the right government “energy plan,” leading politicians, intellectuals, and businessmen tell us. Of course “planners” such as Barack Obama, John McCain, Al Gore, Thomas L. Friedman, T. Boone Pickens, and countless others favor different plans with different permutations and combinations of their favorite energy sources (solar, wind, biomass, ethanol, geothermal, occasionally nuclear and natural gas) and distribution networks (from decentralized home solar generators to a national centralized so-called smart grid). But each agrees that there must be a plan — that the government must lead the energy industry using its power to subsidize, mandate, inhibit, and prohibit. And each claims that his plan will lead to technological breakthroughs, more plentiful energy, and therefore a higher standard of living.

Consider Nobel Peace Prize winner Al Gore, who claims that if only we follow his “Repower America” plan — which calls for the government to ban and replace all carbon-emitting energy (currently 80 percent of overall energy and almost 100 percent of fuel energy)4 in ten years — we would be using

fuels that are not expensive, don’t cause pollution and are abundantly available right here at home. . . . We have such fuels. Scientists have confirmed that enough solar energy falls on the surface of the earth every 40 minutes to meet 100 percent of the entire world’s energy needs for a full year. Tapping just a small portion of this solar energy could provide all of the electricity America uses.

And enough wind power blows through the Midwest corridor every day to also meet 100 percent of US electricity demand. Geothermal energy, similarly, is capable of providing enormous supplies of electricity for America. . . . [W]e can start right now using solar power, wind power and geothermal power to make electricity for our homes and businesses.5

And Gore claims that, under his plan, our vehicles will run on “renewable sources that can give us the equivalent of $1 per gallon gasoline.”6

Another revered thinker, Thomas L. Friedman, also speaks of the transformative power of government planning, in the form of a government-engineered “green economy.” In a recent book, he enthusiastically quotes an investor who claims: “The green economy is poised to be the mother of all markets, the economic investment opportunity of a lifetime.”7 Friedman calls for “a system that will stimulate massive amounts of innovation and deployment of abundant, clean, reliable, and cheap electrons.”8 How? Friedman tells us that

there are two ways to stimulate innovation — one is short-term and the other is long-term — and we need to be doing much more of both. . . . First, there is innovation that happens naturally by the massive deployment of technologies we already have [he stresses solar and wind]. . . . The way you stimulate this kind of innovation — which comes from learning more about what you already know and doing it better and cheaper — is by generous tax incentives, regulatory incentives, renewable energy mandates, and other market-shaping mechanisms that create durable demand for these existing clean power technologies. . . . And second, there is innovation that happens by way of eureka breakthroughs from someone’s lab due to research and experimentation. The way you stimulate that is by increasing government-funded research. . . .9

The problem with such plans and claims: Politicians and their intellectual allies have been making and trying to implement them for decades — with nothing positive (and much negative) to show for it.

For example, in the late 1970s, Jimmy Carter heralded his “comprehensive energy policy,” claiming it would “develop permanent and reliable new energy sources.” In particular, he (like many today) favored “solar energy, for which most of the technology is already available.” All the technology needed, he said, “is some initiative to initiate the growth of a large new market in our country.”10

Since then, the government has heavily subsidized solar, wind, and other favored “alternatives,” and embarked on grand research initiatives to change our energy sources — claiming that new fossil fuel and nuclear development is unnecessary and undesirable. The result? Not one single, practical, scalable source of energy. Americans get a piddling 1.1 percent of their power from solar and wind sources,11 and only that much because of national and state laws subsidizing and mandating them. There have been no “eureka breakthroughs,” despite many Friedmanesque schemes to induce them, including conveniently forgotten debacles such as government fusion projects,12 the Liquid Metal Fast Breeder Reactor Program,13 and the Synfuels Corporation.14

Many good books and articles have been written — though not enough, and not widely enough read — chronicling the failures of various government-sponsored energy plans, particularly those that sought to develop “alternative energies,” over the past several decades.15 Unfortunately, the lesson that many take from this is that we must relinquish hope for dramatic breakthroughs, lower our sights, and learn to make do with the increasing scarcity of energy.

But the past failures do not warrant cynicism about the future of energy; they warrant cynicism only about the future of energy under government planning. Indeed, history provides us ample grounds for optimism about the potential for a dynamic energy market with life-changing breakthroughs — because America once had exactly such a market. For most of the 1800s, an energy market existed unlike any we have seen in our lifetimes, a market devoid of government meddling. With every passing decade, consumers could buy cheaper, safer, and more convenient energy, thanks to continual breakthroughs in technology and efficiency — topped off by the discovery and mass availability of an alternative source of energy that, through its incredible cheapness and abundance, literally lengthened and improved the lives of nearly everyone in America and millions more around the world. That alternative energy was called petroleum. By studying the rise of oil, and the market in which it rose, we will see what a dynamic energy market looks like and what makes it possible. Many claim to want the “next oil”; to that end, what could be more important than understanding the conditions that gave rise to the first oil?

Today, we know oil primarily as a source of energy for transportation. But oil first rose to prominence as a form of energy for a different purpose: illumination.

For millennia, men had limited success overcoming the darkness of the night with man-made light. As a result, the day span for most was limited to the number of hours during which the sun shone — often fewer than ten in the winter. Even as late as the early 1800s, the quality and availability of artificial light were little better than they had been in Greek and Roman times — which is to say that men could choose between various grades of expensive lamp oils or candles made from animal fats.16 But all of this began to change in the 1820s. Americans found that lighting their homes was becoming increasingly affordable — so much so that by the mid-1860s, even poor, rural Americans could afford to brighten their homes, and therefore their lives, at night, adding hours of life to their every day.17

What made the difference? Individual freedom, which liberated individual ingenuity.

The Enlightenment and its apex, the founding of the United States of America, marked the establishment of an unprecedented form of government, one based explicitly on the principle of individual rights. According to this principle, each individual has a right to live his own life solely according to the guidance of his own mind — including the crucial right to earn, acquire, use, and dispose of the physical property, the wealth, on which his survival depends. Enlightenment America, and to a large extent Enlightenment Europe, gave men unprecedented freedom in the intellectual and economic realms. Intellectually, individuals were free to experiment and theorize without restrictions by the state. This made possible an unprecedented expansion in scientific inquiry — including the development by Joseph Priestley and Antoine Lavoisier of modern chemistry, critical to future improvements in illumination.18 Economically, this freedom enabled individuals to put scientific discoveries and methods into wealth-creating practice, harnessing the world around them in new, profitable ways — from textile manufacturing to steelmaking to coal-fired steam engines to illuminants.

There had always been a strong desire for illumination, and therefore a large potential market for anyone who could deliver it affordably — but no one had been able to actualize this potential. In the 1820s, however, new scientists and entrepreneurs entered the field with new knowledge and methods that would enable them to harness nature efficiently to create better, cheaper illuminants at a profit. Contrary to those who believe that the government is necessary to stimulate, invest in, or plan the development of new energy sources, history shows us that all that is required is an opportunity to profit.

That said, profiting in the illumination industry was no easy task. The entrenched, animal-based illuminants of the time, whatever their shortcomings, had long histories, good reputations, refined production processes, established transportation networks and marketing channels, and a large user base who had invested in the requisite lamps. In other words, animal-based illuminants were practical. For a new illumination venture to be profitable, it would have to create more value (as judged by its customers) than it consumed. A successful alternative would not only have to be a theoretical source of energy, or even work better in the laboratory; it would have to be produced, refined, transported, and marketed efficiently — or it would be worthless. Unlike today, no government bureaucrats were writing big checks for snazzy, speculative PowerPoint presentations or eye-popping statistics about the hypothetical potential of a given energy source. Thus, scientists and entrepreneurs developed illumination technologies with an eye toward creating real value on the market. They began exploring all manner of potential production materials — animal, vegetable, and mineral — and methods of production and distribution. Many of their attempts failed, such as forays into fish oils and certain plant oils that proved unprofitable for reasons such as unbearable smell, high cost of mass production, and low-quality light.19 But, out of this torrent of entrepreneurial exploration and experimentation, three illumination breakthroughs emerged.

One, called camphene, came from the work of the enterprising scientist Isaiah Jennings, who experimented with turpentine. If turpentine could create a quality illuminant, he believed, the product held tremendous commercial potential as the lowest-cost illuminant on the market: Unlike animal fat, turpentine was neither in demand as a food product nor as a lubricant. Jennings was successful in the lab, and in 1830, he took out a patent for the process of refining turpentine into camphene. The process he patented was a form of distillation — boiling at different temperatures in order to separate different components — a procedure that is vital to the energy industry to this day.

Before camphene could succeed on the market, Jennings and others had to solve numerous practical problems. For example, they discovered that camphene posed the threat of explosion when used in a standard (animal) oil lamp. The initial solution was to design new lamps specifically for use with camphene — but this solution was inadequate because the money saved using camphene would barely defray the expense of a new lamp. So, producers devised methods that enabled customers to inexpensively modify their existing lamps to be camphene-safe. The payoff: In the 1840s, camphene was the leading lamp oil, while use of animal oils, the higher-cost product, as illuminants declined in favor of their use as lubricants. Camphene was the cheapest source of light to date, creating many new customers who were grateful for its “remarkable intensity and high lighting power.”20

Second, whereas Jennings had focused on developing a brand-new source of illumination, another group of entrepreneurs — from, of all places, the Cincinnati hog industry — saw an opportunity to profitably improve the quality of light generated from animal lard, an already widely used source of illumination. At the time, the premium illuminant in the market was sperm whale oil, renowned for yielding a safe, consistent, beautiful light — at prices only the wealthy could afford. In the 1830s, soap makers within the hog industry set out to make traditional lard as useful for illumination as the much scarcer sperm whale oil. They devised a method of heating lard with soda alkali, which generated two desirable by-products that were as good as their sperm equivalents but less expensive: a new lard oil, dubbed stearin oil, for lamps and stearic acid for candles. This method, combined with a solid business model employing Cincinnati’s feedstock of hogs, created a booming industry that sold 2 million pounds of stearin products annually. The price of stearin oil was one third less than that of sperm whale oil, making premium light available to many more Americans.21

Thus camphene and stearin became leaders in the market for lamps and candles — both portable sources of illumination. The third and final new form of illumination that emerged in the early 1800s was a bright, high-quality light delivered via fixed pipes to permanent light fixtures installed in homes and businesses. In the 17th century, scientists had discovered that coal, when heated to extremely high temperatures (around 1600 degrees), turns into a combustible gas that creates a bright light when brought to flame. In 1802, coal gas was used for the first time for commercial purposes in the famous factory of Boulton & Watt, near Birmingham, England.22 Soon thereafter, U.S. entrepreneurs offered coal gas illumination to many industrial concerns — making possible a major extension of the productive day for businesses, and thus increasing productivity throughout American industry. Initially, the high cost of the pipes and fixtures required by gas lighting precluded its use in homes. But entrepreneurs devised more efficient methods of installing pipes in order to bring gas into urban homes, and soon city dwellers in Baltimore, Boston, and New York would get more useful hours out of their days. Once the infrastructure was in place, the light was often cheaper than sperm whale oil, and was reliable, safe, and convenient. As a result, during the 1830s and 1840s, the coal-gas industry grew at a phenomenal rate; new firms sprang up in Brooklyn, Bristol (Rhode Island), Louisville, New Orleans, Pittsburgh, and Philadelphia.23

By the 1840s, after untold investing, risk-taking, thinking, experimentation, trial, error, failures, and success, coal gas, camphene, and stearin producers had proven their products to be the best, most practical illuminants of the time — and customers eagerly bought them so as to bring more light to their lives than ever before.

But this was only the beginning. Because the market was totally free, the new leaders could not be complacent; they could not prevent better ideas and plans from taking hold in the marketplace. Unlike the static industries fantasized by today’s “planners,” where some government-determined mix of technologies produces some static quantity deemed “the energy Americans need,” progress knew no ceiling. The market in the 19th century was a continuous process of improvement, which included a constant flow of newcomers who offered unexpected substitutes that could dramatically alter Americans’ idea of what was possible and therefore what was “needed.”

In the early 1850s, entrepreneurs caused just such a disruption with a now-forgotten product called coal oil.24 Coal oil initially emerged in Europe, which at the time also enjoyed a great deal of economic freedom. Scientists and entrepreneurs in the field of illumination were particularly inclined to look for illuminants in coals and other minerals because of the relative scarcity of animal and vegetable fats, and correspondingly high prices for both. Beginning with the French chemist A. F. Selligue, and continuing with the British entrepreneur James Young, Europeans made great strides in distilling coal at low heat (as against the high heat used to create coal gas) to liquefy it, and then distilling it (as Jennings had distilled turpentine into camphene) to make lamp oil and lubricants that were just as good as those from animal sources. Coal was plentiful, easy to extract in large quantities, and therefore cheap. The primary use of coal oil in Europe, however, was as a lubricant. In North America, the primary use would be as an illuminant.

Beginning in the 1840s, a Canadian physician named Abraham Gesner, inspired by the Europeans, conducted experiments with coal and was able to distill a quantity of illuminating oil therefrom. Gesner conceived a business plan (like so many scientists of the day, he was entrepreneurial), and teamed with a businessman named Thomas Cochrane to purchase a mining property in Albert County, New Brunswick, from which he could extract a form of coal (asphaltum), refine it at high quality, and sell it below the going price for camphene.

But in 1852 the project was aborted — not because the owners lost the means or will to see it through, but because the Canadian government forbade it. The government denied that the subsurface minerals belonged to those who harnessed their value; it held that they were owned by the Crown, which did not approve of this particular use.

Gesner’s experience in Canada highlights a vital precondition of the rapid development of the American illumination energy industry: the security of property rights. All of the industries had been free to acquire and develop the physical land and materials necessary to create the technologies, make the products, and bring them to market based on the entrepreneurs’ best judgment. They had been free to cut down trees for camphene, raise hogs for stearin, and mine coal and build piping for gas lighting, so long as they were using honestly acquired property. And this freedom was recognized as a right, which governments were forbidden to abrogate in the name of some “higher” cause, be it the Crown or “the people” or the snail darter or protests by those who say, “Not in my backyard” about other people’s property. Because property rights were recognized, nothing stopped them from acting on their productive ideas. Had property rights not been recognized, all their brilliant ideas would have been like Gesner’s under Canadian rule: worthless.

Not surprisingly, Gesner moved to the United States. He set up a firm, the New York Kerosene Company, whose coal-oil illuminant, kerosene, was safer and 15 percent less expensive than camphene, more than 50 percent less expensive than coal gas, 75 percent less expensive than lard oil, and 86 percent less expensive than sperm whale oil. Unfortunately, this was not enough for Gesner to succeed. His product suffered from many problems, such as low yields and bad odor, and was not profitable. However, his limited successes had demonstrated that coal’s abundance and ease of refining made it potentially superior to animal and vegetable sources.

That potential was fully actualized by a businessman named Samuel Downer and his highly competent technical partners, Joshua Merrill and Luther Atwood. Downer had devoted an existing company to harnessing a product called “coup oil,” the properties of which rendered it uncompetitive with other oils. Recognizing the hopelessness of coup oil, Downer set his sights on coal-oil kerosene. Downer’s firm made major advances in refining technology, including the discovery of a more efficient means of treating refined oil with sulfuric acid, and of a process called “cracking” — also known as “destructive distillation” — which uses high heat to break down larger molecules into smaller ones, yielding higher amounts of the desired substance, in this case kerosene. (Unbeknownst to all involved, these discoveries would be vital to the undreamed of petroleum industry, which would emerge in the near future.) By 1859, after much effort went into developing effective refining processes and an efficient business model, Downer’s firm was able to make large profits by selling kerosene at $1.35 a gallon — a price that enabled more and more Americans to light their houses more of the time. Others quickly followed suit, and by decade’s end, businessmen had started major coal-oil refineries in Kentucky, Cincinnati, and Pittsburgh. The industry had attracted millions in investment by 1860, and was generating revenues of $5 million a year via coal oil — a growing competitor to coal gas, which was generating revenues of $17 million a year and had attracted $56 million (more than $1 billion in today’s dollars) in investment.25

As the 1850s drew to a close, coal oil and coal gas were the two leading illuminants. These new technologies brightened the world for Americans and, had the evolution of illumination innovation ended here, most Americans of the time would have died content. Their quality of life had improved dramatically under this energy revolution — indeed, so dramatically that, were a comparable improvement to occur today, it would dwarf even the most extravagant fantasies of today’s central planners. This points to a crucial fact that central planners cannot, do not, or will not understand: The source of an industry’s progress is a free market — a market with real economic planning, profit-driven individual planning.

The revolution in illumination was a process of thousands of entrepreneurs, scientists, inventors, and laborers using their best judgment to conceive and execute plans to make profits — that is, to create the most valuable illuminant at the lowest cost — with the best plans continually winning out and raising the bar. As a result, the state of the market as a whole reflected the best discoveries and creativity of thousands of minds — a hyperintelligent integration of individual thinking that no single mind, no matter how brilliant, could have foreseen or directed.

Who knew in 1820 that, of all the substances surrounding man, coal — given its physical properties, natural quantities, and costs of extraction and production — would be the best source for inexpensive illumination? Who knew all the thousands of minute, efficiency-producing details that would be reflected in the operations of the Samuel Downer Company — operations developed both by the company and by decades of trial and error on the market? Consider, then, what it would have meant for an Al Gore or Thomas Friedman or Barack Obama to “plan” the illumination energy market. It would have meant pretending to know the best technologies and most efficient ways of harnessing them and then imposing a “plan.” And, given that neither Gore nor Friedman nor anyone else could possibly possess all the knowledge necessary to devise a workable plan, what would their “plan” consist of? It would consist of what all central planners’ “plans” consist of: prohibition, wealth transfers, and dictates from ignorance. Depending on when the “planners” began their meddling and who was whispering in their ear, they might subsidize tallow candles or camphene, thereby pricing better alternatives out of the market or limiting lighting choices to explosive lamps.

Thankfully, there was no such “planner” — there were only free individuals seeking profit and free individuals seeking the best products for their money. That freedom enabled the greatest “eureka” of them all — from an unlikely source.

George Bissell was the last person anyone would have bet on to change the course of industrial history. Yet this young lawyer and modest entrepreneur began to do just that in 1854 when he traveled to his alma mater, Dartmouth College, in search of investors for a venture in pavement and railway materials.26 While visiting a friend, he noticed a bottle of Seneca Oil — petroleum — which at that time was sold as medicine. People had known of petroleum for thousands of years, but thought it existed only in small quantities. This particular bottle came from an oil spring on the land of Dr. Francis Beattie Brewer in Titusville, Pennsylvania, which was lumber country.

At some point during or soon after the encounter, Bissell became obsessed with petroleum, and thought that he could make a great business selling it as an illuminant if, first, it could be refined to produce a high quality illuminant, and, second, it existed in substantial quantities. Few had considered the first possibility, and most would have thought the second out of the question. The small oil springs or seeps men had observed throughout history were thought to be the mere “drippings” of coal, necessarily tiny in quantity relative to their source.

But Bissell needed no one’s approval or agreement — except that of the handful of initial investors he would need to persuade to finance his idea. The most important of these was Brewer, who sold him one hundred acres of property in exchange for $5,000 in stock in Bissell’s newly formed Pennsylvania Rock Oil Company of New York.

To raise sufficient funds to complete the project, Bissell knew that he would have to demonstrate at minimum that petroleum could be refined into a good illuminant. He solicited Benjamin Silliman Jr., a renowned Yale chemist, who worked with the petroleum, refined it, and tested its properties for various functions, including illumination. After collecting a $500 commission (which the cash-strapped firm could barely put together), Silliman delivered his glowing report: 50 percent of crude petroleum could be refined into a fine illuminant and 90 percent of the crude could be useful in some form or another.

Proof of concept in hand, Bissell raised just enough money to enact the second part of his plan: to see if oil could be found in ample quantities. According to the general consensus, his plan — to drill for oil — was unlikely to uncover anything. (One of Bissell’s investors, banker James Townsend, recalled his friends saying, “Oh, Townsend, oil coming out of the ground, pumping oil out of the earth as you pump water? Nonsense! You’re crazy.”) But Bissell’s organization had reason to suspect that the consensus was wrong — mostly because saltwater driller Samuel Kier had inadvertently found modest quantities of oil apart from known coal deposits, which contradicted the coal-drippings theory. And so Bissell proceeded, albeit with great uncertainty and very little money.

He sent Edwin Drake, a former railroad conductor and jack-of-many-trades, to Titusville to find oil. Drake and his hired hands spent two years and all the funds the company could muster, but after drilling to 69.5 feet with his self-made, steam-powered rig, he found nothing. Fortunately, just as the investors told Drake to wrap up the project, his crew noticed oil seeping out of the rig. Ecstatic, they attempted to pump the oil out of the well — and succeeded. With that, a new industry was born.

That is, a new potential industry was born. In hindsight we know that oil existed in quantities and had physical qualities that would enable it to supplant every other illuminant available at the time. But this was discovered only later by entrepreneurs with the foresight to invest time and money in the petroleum industry.

Bissell and other oilmen faced a difficult battle. They had to extract, refine, transport, and market at a profit this new, little-understood material, whose ultimate quantities were completely unknown — while vying for market share with well-established competitors. Fortunately, they were up to the task, and many others would follow their lead.

When word got out about Drake’s discovery, a “black gold” rush began, a rush to buy land and drill the earth for as much of this oil as possible. For example, upon seeing Drake’s discovery, Jonathan Watson, a lumber worker on Brewer’s land, bought what would become millions of dollars worth of oil land. George Bissell did the same. Participants included men in the lumber industry, salt borers turned oil borers, and others eager to take advantage of this new opportunity.27

Progress in this new industry was messy and chaotic — and staggering. In 1859, a few thousand barrels were produced; in 1860, more than 200,000; and in 1861, more than 2 million.28 Capital poured in from investors seeking to tap into the profits. In the industry’s first five years, private capitalists invested $580 million — $7 billion in today’s dollars.29 Even in the middle of the 19th century, when wealth was relatively scarce, the supposed problem of attracting capital to fund the development of a promising energy source did not exist so long as the energy source was truly promising.

As producers demonstrated that enormous quantities of oil existed, they created a huge profit opportunity for others to build businesses performing various functions necessary to bring oil to market. At first, would-be transporters were hardly eager to build rail lines to Titusville, and would-be refiners were hardly eager to risk money on distillation machines (“stills”) that might not see use. As such, the oil industry was not functioning efficiently, and much of the oil produced in the first three years went to waste. The oil that did not go to waste was expensive to bring to market, requiring wagon-driving teamsters to haul it 20–40 miles to the nearest railroad station in costly 360-pound barrels.30

But once production reached high levels, driving crude oil prices down, the transportation, refining, and distribution of oil attracted much investment and talent. An early, price-slashing solution to transportation problems was “pond fresheting.” Entrepreneurial boatmen on Oil Creek and the Allegheny River, which led to Pittsburgh, determined that they could offer cheaper transportation by strapping barrels of oil on rafts and floating them down the river. But this only worked half the year; the rest of the time, water levels were too low. The ingenious workaround they devised was to pay local dam owners to release water (“freshet”) at certain points in the year in order to raise water levels, thereby enabling them to float their rafts downstream. The method worked, and Pittsburgh quickly became the petroleum refining capital of America.31

Railroads entered the picture as well, building lines to new cities, which could then become refining centers. In 1863, the Lake Shore Railroad built a line to Cleveland, inspiring many entrepreneurs to establish refineries there — including a 23-year-old named John Rockefeller.32 Another innovation in oil transport was “gathering lines” — small, several-mile-long pipelines that connected drilling sites to local storage facilities or railroads. At first, gathering lines were stymied by the Pennsylvania government’s lax enforcement of property rights: the politically influential teamsters would tear down new pipelines, and the government would look the other way. But once rights were protected, gathering lines could be constructed quickly for any promising drilling site, enabling sites to pump oil directly to storage facilities or transportation centers without the loss, danger, and expense of using barrels and teamsters. Still another innovation was the tank car. These special railroad cars could carry far more oil than could normal boxcars loaded with barrels, and, once certain problems were solved (wood cars were replaced by iron cars, and measures were taken to prevent explosion), they became the most efficient means of transportation.33

In the area of refining, innovation was tremendous. Certain industry leaders, such as Joshua Merrill of the Samuel Downer Company and Samuel Andrews of Clark, Rockefeller, and Andrews (later to be named Standard Oil), continuously experimented to solve difficulties associated with the refining process. To refine crude oil is to extract from it one or more of its valuable “fractions,” such as kerosene for illumination, paraffin wax for candles, and gasoline for fuel. The process employs a still to heat crude oil at multiple, increasing temperatures to boil off and separate the different fractions, each of which has a different boiling point. Distillation is simple in concept and basic execution, but to boil off and bottle kerosene was hugely problematic: Impure kerosene could be highly noxious and highly explosive. Additionally, early stills did not last very long, yielded small amounts of kerosene per unit, took hours upon hours to cool between batches, and raised numerous other challenges.
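To make the idea of separating crude oil into its “fractions” by boiling point concrete, here is a minimal illustrative sketch in Python. The fraction names echo the paragraph above, but the cut temperatures and boiling points are hypothetical round numbers chosen only to show the principle of simple distillation, not historical refinery data:

# Simple distillation, conceptually: heat the crude in stages and collect
# whatever boils off within each temperature range ("cut"). All numbers
# below are hypothetical, for illustration only.
crude_components = {
    "naphtha": 90,           # illustrative boiling point, degrees Celsius
    "kerosene": 200,         # the prized illuminant fraction
    "lubricating oil": 330,
    "paraffin wax": 380,
}

def collect_cut(components, low, high):
    """Return the components whose boiling points fall within [low, high)."""
    return [name for name, bp in components.items() if low <= bp < high]

# Heating through successively higher ranges boils off the lighter
# fractions first and leaves the heavier ones behind.
for low, high in [(0, 150), (150, 300), (300, 400)]:
    print(f"{low}-{high} C cut:", collect_cut(crude_components, low, high))

Cracking, by contrast, changes the mixture itself: as described above, it uses high heat to break larger molecules into smaller ones, so that a barrel of crude yields more kerosene than simple separation alone would allow.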

Throughout the 1860s, the leading refiners experimented with all aspects of the refining process: Should stills be shaped horizontally or vertically? How should heat be applied for evenness of temperature? How can the life of the still be maximized? How can the tar residue at the bottom be cleaned quickly and with as little damage to the still as possible? What procedures should one employ to purify the kerosene once distillation has been performed? When the process involves a chemical treatment, how much of that treatment should be used? Is it profitable to “crack” the oil, heating it at high temperature to create more kerosene molecules, which creates more kerosene per barrel but takes longer and requires expensive purification procedures?

The leading refiners progressively asked and answered these questions, and profited immensely from the knowledge they gained. By the end of the 1860s, the basics of refining technology had been laid down,34 though it would not be until the 1870s — the Rockefeller era — that they would be employed industry-wide.

On the marketing and distribution end, kerosene became a widely available good. Refining firms made arrangements with end sellers, most notably wholesale grocers and wholesale druggists, to sell their product. Rockefeller’s firm was a pioneer in international sales, setting up a New York office to sell kerosene all around the world — where it was in high demand thanks to its quality and cheapness, and to the lack of alternatives.35

The pace of growth of the oil industry was truly phenomenal. Within five years of its inception, with no modern communication or construction technology, the industry had made light accessible to even some of the poorest Americans. In 1864, a chemist wrote:

Kerosene has, in one sense, increased the length of life among the agricultural population. Those who, on account of the dearness or inefficiency of whale oil, were accustomed to go to bed soon after the sunset and spend almost half their time in sleep, now occupy a portion of the night in reading and other amusements.36

Within five years, an unknown technology and an unimagined industry had become a source of staggering wealth creation. Had the early days of this industry been somehow filmed, one would see oilmen in every aspect of the business building up an enormous industry, moving as if the film were being fast-forwarded. Almost nothing in history rivals this pace of development, and it is inconceivable today that any construction-heavy industry could progress as quickly. It now takes more than five years just to get a permit to start building an oil derrick, let alone to complete the derrick, much less thousands of them.

But in the mid-1800s, no drilling permits or other government permissions were required to engage in productive activity. This did not mean that oilmen could pollute at will — property rights laws prohibited polluting others’ property (though some governments, unfortunately, were lax in their enforcement of such laws). It did mean that, for the most part, they were treated as innocent until proven guilty; and they knew that so long as they followed clearly defined laws, their projects would be safe.37

Anyone with an idea could implement it as quickly as his abilities permitted. If he thought a forest contained a valuable mineral, he could buy it. If he thought drilling was the best means of extracting the mineral, he could set up a drilling operation. If he thought a railroad or a pipeline was economical, he could acquire the relevant rights-of-way, clear the land, and build one. If he thought he could do something better than others, he could try — and let the market be the judge. And he could do all of these things by right, without delay — in effect, developing energy at the speed of thought.

As one prominent journalist wrote:

It is certain . . . the development [of the petroleum industry] could never have gone on at anything like the speed that it did except under the American system of free opportunity. Men did not wait to ask if they might go into the Oil Region: they went. They did not ask how to put down a well: they quickly took the processes which other men had developed for other purposes and adapted them to their purpose. . . . Taken as a whole, a truer exhibit of what must be expected of men working without other regulation than that they voluntarily give themselves is not to be found in our industrial history.38

Imagine if George Bissell and Edwin Drake were to pursue the idea of drilling for oil in today’s political context. At minimum, they would have to go through a multiyear approval process, conducting environmental impact studies to document the expected effects on every form of local plant and animal life. Then, of course, they would have to contend with zoning laws, massive taxes, and government subsidies handed to their competitors. More likely, the EPA would simply ax the project, declaring Titusville “protected” government land (the fate of one-third of the land in the United States today). More likely still, Bissell would not even seriously consider such a venture, knowing that the government apparatus would wreck it with unbearable costs and delays, or a bureaucratic veto.

The speed of progress depends on two things: the speed at which men can conceive of profitable means of creating new value — and the speed at which they can implement their ideas. Since future discoveries depend on the knowledge and skills gained from past discoveries, delays in market activity retard both the application and the discovery of new knowledge.

In 1865, the oil industry experienced a tiny fraction of the government interference with which the modern industry regularly contends: the Civil War’s Revenue Act of 1865, a $1-per-barrel tax on crude inventory — approximately 13 percent of the price. The Act “slowed drilling to a virtual standstill” and “put hundreds of marginal producers out of business” by eating into businesses’ investment and working capital.39 Remarkably, the damage done by the Act scared the government away from taxing crude and oil products for decades, an effective apology for its previous violation of property rights. Such was the general economic climate of the time.

After the brief but crushing bout of confiscatory taxation, the economic freedom that made possible the rise of the oil industry resumed, as did the industry’s explosive growth. In 1865, kerosene cost 58 cents a gallon — far cheaper than any prior illuminant, and half the price of coal oil.40 But entrepreneurs did not have time to revel in the successes of the past. They were too busy planning superior ventures for the future — knowing that with creativity they could always come up with something better, and that customers would always reward better, cheaper products.

The paragon of this relentless drive to improve was Rockefeller, who developed a new business structure that would bring the efficiency of oil refining — and ultimately, the whole process of producing and selling oil — to new heights. Rockefeller was obsessed with efficiency and with careful accounting of profit and loss. In seeking to maximize efficiency, he had one central realization that steered the fate of his company: Tremendous gains could be achieved through scale. From his first investment in a refinery in 1863, when he built the largest refinery in Cleveland, to his continual borrowing to expand the size of his operations, Rockefeller realized that the more oil he refined, the more he could invest in efficient devices and practices whose high up-front costs could be spread over a large number of units. He created barrel-making facilities that cut his barrel costs from $3 to $1 each. He built large-scale refineries that required less labor per barrel. He purchased a fleet of tank cars and created an arrangement with a railroad that lowered his costs from $900,000 to $300,000 a trip. (Such savings are the real basis of Rockefeller’s much-maligned rebates from railroads.)
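
The arithmetic behind that scale advantage is worth making explicit. The figures in the sketch below are hypothetical, chosen only to show how a fixed investment spread over more barrels drives the cost per barrel down; they are not Standard Oil’s actual numbers.

```python
# Hypothetical figures, for illustration only; not Standard Oil's actual costs.
fixed_investment = 100_000.0      # e.g., a large refinery, tank cars, barrel works ($)
variable_cost_per_barrel = 1.50   # labor, chemicals, crude handling ($ per barrel)

for barrels in (10_000, 50_000, 250_000):
    cost_per_barrel = variable_cost_per_barrel + fixed_investment / barrels
    print(f"{barrels:>7} barrels refined -> ${cost_per_barrel:.2f} per barrel")

# Output:
#   10000 barrels refined -> $11.50 per barrel
#   50000 barrels refined -> $3.50 per barrel
#  250000 barrels refined -> $1.90 per barrel
```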

Rockefeller’s improvements, which could be enumerated almost indefinitely, helped lower the prevailing per-gallon price of kerosene from 58 cents in 1865, to 26 cents in 1870 — a price at which most of his competitors could not afford to stay in business — to 8 cents in 1880. These plummeting prices reflected the continuous breakthroughs that the Rockefeller-led industry was making. Every five years marked another period of dramatic progress — whether through long-distance pipelines that eased distribution or through advances in refining that made use of vast deposits of previously unrefinable oil. Oil’s potential was so staggering that no alternative was necessary. But then someone conceived of one: the electric lightbulb.

Actually, many men had conceived of electric lightbulbs in one form or another; but Thomas Edison, beginning in the late 1870s, was the first to successfully develop one that was practical and potentially profitable. Edison’s lightbulb lasted hundreds of hours, and was conceived as part of a practical distribution network — the Edison system, the first electrical utility and distribution grid. As wonderful as kerosene was, it generated heat and soot and odor and smoke and had the potential to explode; lightbulbs did not. Thus, as soon as Edison’s lightbulb was announced, the stock prices of publicly traded oil refiners plummeted.

Oil, it appeared, was no longer the future of illumination energy; electricity was. This fact, and the competitive pressures it placed on the oil industry, prompted entrepreneurs to figure out whether their product could enjoy comparable consumer demand in any other sphere, inside or outside of the energy industry. They worked to expand the market for oil as a lubricant and as a fuel for railroads and tankers. But the fate of the industry would hinge on the rise of the automobile in the 1890s.41

It is little known that most builders of automobiles did not intend them to run on gasoline. Given the growth and popularity of electricity at the time, many cars were designed to run on electric batteries, while others ran on steam or ethanol. Gasoline’s dominance was by no means a foregone conclusion.

If the market had not been free, the electric car would likely have been subsidized into victory, given the obsession with electricity at the time. But when the technologies were tested in an open market, oil/gasoline won out — because of the incredible efficiency of the Rockefeller-led industry coupled with gasoline’s energy density. Per unit of mass and volume, it could take a car farther than an electric battery or a pile of coal or a vat of ethanol (something that remains true to this day). Indeed, Thomas Edison himself explained this to Henry Ford, in a story told by electricity entrepreneur Samuel Insull.

“He asked me no end of details,” to use Mr. Ford’s own language, “and I sketched everything for him; for I have always found that I could convey an idea quicker by sketching than by just describing it.” When the conversation ended, Mr. Edison brought his fist down on the table with a bang, and said: “Young man, that’s the thing; you have it. Keep at it. Electric cars must keep near to power stations. The storage battery is too heavy. Steam cars won’t do, either, for they require a boiler and fire. Your car is self-contained — carries its own power plant — no fire, no boiler, no smoke and no steam. You have the thing. Keep at it.”. . . And this at a time when all the electrical engineers took it as an established fact that there could be nothing new and worthwhile that did not run by electricity.42
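
To put rough orders of magnitude on that energy-density point, here is a back-of-the-envelope comparison. The specific-energy and efficiency figures are approximate assumptions (modern ballpark values, not measurements from the period), so the exact ratio should not be taken literally.

```python
# Approximate, assumed figures; a rough order-of-magnitude comparison only.
GASOLINE_MJ_PER_KG = 46.0     # approximate specific energy of gasoline
LEAD_ACID_MJ_PER_KG = 0.13    # approximate specific energy of a lead-acid battery

ENGINE_EFFICIENCY = 0.20      # assumed gasoline-engine efficiency (tank to wheels)
ELECTRIC_EFFICIENCY = 0.80    # assumed electric-drivetrain efficiency (battery to wheels)

useful_gasoline_mj = GASOLINE_MJ_PER_KG * ENGINE_EFFICIENCY    # ~9.2 MJ of work per kg
useful_battery_mj = LEAD_ACID_MJ_PER_KG * ELECTRIC_EFFICIENCY  # ~0.10 MJ of work per kg

print(f"Gasoline:  {useful_gasoline_mj:.2f} MJ of useful work per kg carried")
print(f"Lead-acid: {useful_battery_mj:.2f} MJ of useful work per kg carried")
print(f"Ratio: roughly {useful_gasoline_mj / useful_battery_mj:.0f} to 1 in gasoline's favor")
```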

By 1912, gasoline had become a staple of life — and was on the way to changing it even more than kerosene had. A trade journal from 1912, Gasoline — The Modern Necessity, read:

It seems almost unbelievable that there was once a time when the refiners and manufacturers of petroleum products concerned themselves seriously with finding a market for the higher distillates. At the present time it is the higher distillate known as gasoline that is giving not alone the refiners grave concern but modern civilization as well. Then it was how to find an adequate and profitable market for it; now it is how to meet the ever-increasing demand for it.43

Oil was the ultimate alternative energy — first for illumination, then for locomotion. In a mere half century, oil went from being useless black goo to the chief energy source illuminating and mobilizing the world. Young couples filling up their automobiles in 1910 had nary a clue as to how much thought and knowledge went into their ability to power their horseless carriages so cheaply and safely. Nor did most appreciate that all of this depended on a political system in which the government’s recognition and protection of the right to property and contract enabled businessmen to develop the world around them, risk their time and money on any innovation they chose, and profit from the results.

If we compare today’s “planned” energy market to the rights-respecting energy market that brought about the emergence of oil, we can see in concrete fact the practicality of a genuinely free market.

Instead of protecting property rights and unleashing the producers of energy to discover the best forms of energy and determine how best to deploy them (which includes genuine privatization of the electricity grid and other transcontinental development),44 our government randomly dictates what the future is to be. Today, we are told, as if it were written in the stars, that plug-in hybrids powered by solar and wind on a “smart grid” are the way to go — a claim that has no more validity than an 1860s claim that a network of wagon drivers should deliver coal oil nationwide.

What sources of energy are best pursued and how best to pursue them can be discovered only by millions of minds acting and interacting freely in the marketplace — where anyone with a better idea is free to prove it and unable to force others to fund his pursuit. When the government interferes in the marketplace, countless productive possibilities are precluded from coming into existence.

Today’s government as “energy planner” not only thwarts the market by coercively subsidizing the “right” energy technologies; it damages the market by opposing or even banning the “wrong” energy technologies or business models. Today’s energy policy severely restricts the production of every single practical, scalable form of energy: coal, natural gas, oil, and, above all, nuclear.

Nuclear energy deserves special mention because, unlike oil, coal, or natural gas, it has never been allowed to develop in anything resembling a free market, despite tremendous proven potential stemming from its incredible energy density: more than one million times that of any fossil fuel. Thanks to environmentalist hysteria, this proven-safe source of energy has been virtually banned in the United States. And when nuclear plants have been permitted, construction costs and downtime losses have been multiplied many times over by multi-decade regulatory delays. Even in other countries, where nuclear power is much more welcome, it is under the yoke of governments and is therefore progressing at a fraction of its potential.

If the scientists, engineers, and businessmen in the nuclear power industry had been able to pursue their ideas and develop their products in a free market — as oilmen once were able to do — how much better would our lives be today? What further technologies would have blossomed from that fertile foundation? Would automobiles even be running on gasoline? Would coal be used for anything anymore? And if entrepreneurs with other, perhaps even better, energy ideas had been free to put them into practice as quickly as their talents would allow — just as their 19th-century forebears had — might we by now have realized the dream of supplanting nuclear fission with nuclear fusion, which many consider the holy grail of energy potential?

The fact is, we cannot even dream of what innovations would have developed or what torrents of energy would have been unleashed. As the history of the original alternative energy industry illustrates, no one can predict the revolutionary outcomes of a market process. Happily, however, with respect to the future, we can do better than dream: We can see for ourselves what kind of untapped energy potential exists, by learning from the 19th century. We can — and must — remove the political impediments to energy progress by limiting the government to the protection of rights. Then, we will witness something truly spectacular: energy at the speed of 21st-century thought.

About The Author

Alex Epstein

Alex Epstein was a writer and a fellow on staff at the Ayn Rand Institute between 2004 and 2011.

Endnotes

1 Robert Bryce, Gusher of Lies (New York: PublicAffairs, 2008), p. 132.

2 Ibid., pp. 267–70.

3 International Energy Outlook 2008, “Highlights,” Energy Information Administration, U.S. Department of Energy, http://www.eia.doe.gov/oiaf/ieo/pdf/highlights.pdf.

4 Annual Energy Review, “U.S. Primary Energy Consumption by Source and Sector, 2007,” Energy Information Administration, U.S. Department of Energy, http://www.eia.doe.gov/emeu/aer/pecss_diagram.html.

5 “Al Gore’s Challenge to Repower America,” speech delivered July 17, 2008, http://www.repoweramerica.org/about/challenge/.

6 Ibid.

7 Thomas L. Friedman, Hot, Flat, and Crowded: Why We Need a Green Revolution—and How It Can Renew America (New York: Farrar, Straus & Giroux, 2008), p. 172.

8 Ibid., p. 186.

9 Ibid., pp. 187–88.

10 Jimmy Carter, “NATIONAL ENERGY PLAN—Address Delivered Before a Joint Session of the Congress,” April 20, 1977, http://www.presidency.ucsb.edu/ws/index.php?pid=7372.

11 http://www.eia.doe.gov/aer/txt/ptb0103.html.

12 Richard Nixon, “Special Message to the Congress on Energy Policy,” April 18, 1973, http://www.presidency.ucsb.edu/ws/index.php?pid=3817&st.

13 Linda R. Cohen and Roger G. Noll, The Technology Pork Barrel (Washington, DC: The Brookings Institution), pp. 217–18.

14 Ibid., pp. 259–313.

15 In this regard, I recommend Gusher of Lies by Robert Bryce and The Technology Pork Barrel by Linda R. Cohen and Roger G. Noll.

16 Harold F. Williamson and Arnold R. Daum, The American Petroleum Industry 1859–1899: The Age of Illumination (Evanston, IL: Northwestern University Press, 1963), p. 29.

17 Ibid., p. 320.

18 Ibid., p. 28.

19 M. Luckiesh, Artificial Light (New York: The Century Co., 1920), pp. 51–56.

20 Williamson and Daum, The American Petroleum Industry, pp. 33–34.

21 Ibid., pp. 34–36.

22 Ibid., p. 32.

23 Ibid., pp. 32, 38–42.

24 This discussion is based on Williamson and Daum, The American Petroleum Industry, pp. 43–60.

25 Calculated using GDP Deflator and CPI, http://www.measuringworth.com/.

26 This discussion is based on Williamson and Daum, The American Petroleum Industry, pp. 63–81.

27 Ibid., pp. 86–89.

28 Ibid., p. 103.

29 Robert L. Bradley, Oil, Gas, and Government: The U.S. Experience, vol. 1 (London: Rowman & Littlefield, 1996), p. 18.

30 Williamson and Daum, The American Petroleum Industry, pp. 85, 106.

31 Ibid., pp. 165–69.

32 Burton W. Folsom, The Myth of the Robber Barons (Herndon, VA: Young America’s Foundation, 1996), p. 85.

33 Williamson and Daum, The American Petroleum Industry, pp. 183–89.

34 Ibid., pp. 202–31.

35 Alex Epstein, “Vindicating Capitalism: The Real History of the Standard Oil Company,” The Objective Standard, Summer 2008, pp. 29–35.

36 Williamson and Daum, The American Petroleum Industry, p. 320.

37 For a comprehensive account of the existence and decline of economic freedom in the oil industry, see Bradley, Oil, Gas, and Government.

38 Paul Henry Giddens, The Birth of the Oil Industry (New York: The Macmillan Company, 1938), p. xxxix.

39 Bradley, Oil, Gas, and Government.

40 Discussion based on Alex Epstein, “Vindicating Capitalism.”

41 Discussion based on Harold F. Williamson, Ralph L. Andreano, Arnold R. Daum, and Gilbert C. Klose, The American Petroleum Industry, 1899–1959: The Age of Energy (Evanston, IL: Northwestern University Press, 1963), pp. 184–95.

42 Samuel Insull, The Memoirs of Samuel Insull (Polo, IL: Transportation Trails, 1934, 1992), pp. 142–43.

43 Williamson, Andreano, Daum, and Klose, The American Petroleum Industry, p. 195.

44 Raymond C. Niles, “Property Rights and the Electricity Grid,” The Objective Standard, Summer 2008.

No More Green Guilt

by Keith Lockitch | May 01, 2009

Every investment prospectus warns that “past performance is no guarantee of future results.” But suppose that an investment professional’s record contains nothing but losses: failed prediction after failed prediction. Who would still entrust that investor with his money?

Yet, in public policy there is one group with a dismal track record that Americans never seem to tire of supporting. We invest heavily in its spurious predictions, suffer devastating losses, and react by investing even more, never seeming to learn from the experience. The group I’m talking about is the environmentalist movement.

Consider the movement’s track record — like the dire warnings of catastrophic overpopulation. Our unchecked consumption, we were told, was depleting the earth’s resources and would wipe humanity out in a massive population crash. Paul Ehrlich’s 1968 bestseller, The Population Bomb, forecast hundreds of millions of deaths per year throughout the 1970s, to be averted, he insisted, only by mass population control “by compulsion if voluntary methods fail.”

But instead of global-scale famine and death, the 1970s witnessed an agricultural revolution. Despite a near-doubling of world population, food production has continued to grow as technological innovation creates more and more food on each acre of farmland. The U.S., which has seen its population grow from 200 million to 300 million, is more concerned about rampant obesity than about a shortage of food.

The Alar scare of 1989 is another telling example. The NRDC, an environmentalist lobby group, engineered a media frenzy over the baseless assertion that Alar, an apple-ripening agent, posed a cancer threat. The ensuing panic cost the apple industry over $200 million, and Alar was pulled from the market even though it was a perfectly safe and value-adding product.

Or consider the campaign against the insecticide DDT, beginning with Rachel Carson’s 1962 book Silent Spring. The world had been on the brink of eradicating malaria using DDT — but for Carson and her followers, controlling disease-carrying mosquitoes was an arrogant act of “tampering” with nature. Carson issued dire warnings that nature was “capable of striking back in unexpected ways” unless people showed more “humility before [its] vast forces.” She asserted, baselessly, that among other things DDT would cause a cancer epidemic. Her book led to such a public outcry that, despite its life-saving benefits and mountains of scientific evidence supporting its continued use, DDT was banned in the United States in 1972. Thanks to environmentalist opposition, DDT was almost completely phased out worldwide. And while there is still zero evidence of a DDT cancer risk, the resurgence of malaria needlessly kills over a million people a year.

Time and time again, the supposedly scientific claims of environmentalists have proven to be pseudo-scientific nonsense, and the Ehrlichs and Carsons of the world have proven to be the Bernard Madoffs of science. Yet Americans have ignored the evidence and have instead invested in their claims — accepting the blame for unproven disasters and backing coercive, harmful “solutions.”

Today, of course, the Green doomsday prediction is for catastrophic global warming to destroy the planet — something that environmentalists have pushed since at least the early 1970s, when they were also worried about a possible global cooling shifting the planet into a new ice age.

But in this instance, just as with Alar, DDT, and the population explosion, the science is weak and the “solutions” drastic. We are told that global warming is occurring at an accelerating rate, yet global temperatures have been flat for the last decade. We are told that global warming is causing more frequent and intense hurricanes, yet the data do not support such a claim. We are warned of a potentially catastrophic sea-level rise of 20 feet over the next century, but that would require significant melting of the land-based ice in Antarctica and Greenland; Greenland has retained its ice sheet for over 100,000 years despite wide-ranging temperatures, and Antarctica has been cooling moderately for the last half-century.

Through these distortions of science we are again being harangued to support coercive policies. We are told that our energy consumption is destroying the planet and that we must drastically reduce our carbon emissions immediately. Never mind that energy use is an indispensable component of everything we do, that 85 percent of the world’s energy is carbon-based, that there are no realistic, abundant alternatives available any time soon, or that billions of people are suffering today from a lack of energy.

Despite all of that, Americans seem to once again be moving closer to buying the Green investment pitch and backing destructive Green policies. Why don’t we learn from past experience? Do you think a former Madoff investor would hand over money to him again?

It’s not that we’re too stupid to learn; it’s that we are holding onto a premise that distorts our understanding of reality. Americans are the most successful individuals in history — even despite this economic downturn — in terms of material wealth and the quality of life and happiness it brings. We are heirs to the scientific and industrial revolutions, which have increased life expectancy from 30 years to 80 and improved human life in countless, extraordinary ways. Through our ingenuity and productive effort, we have achieved an unprecedented prosperity by reshaping nature to serve our needs. Yet we have always regarded this productivity and prosperity with a certain degree of moral suspicion. The Judeo-Christian ethic of guilt and self-sacrifice leads us to doubt the propriety of our success and makes us susceptible to claims that we will ultimately face punishment for our selfishness — that our prosperity is sinful and can lead only to an apocalyptic judgment day.

Environmentalism preys on our moral unease and fishes around for doomsday scenarios. If our ever-increasing population or life-enhancing chemicals have not brought about the apocalypse, then it must be our use of fossil fuels that will. Despite the colossal failures of past Green predictions, we buy into the latest doomsday scare because, on some level, we have accepted an undeserved guilt. We lack the moral self-assertiveness to regard our own success as virtuous; we think we deserve punishment.

It is time to stop apologizing for prosperity. We must reject the unwarranted fears spread by Green ideology by rejecting unearned guilt. Instead of meekly accepting condemnation for our capacity to live, we should proudly embrace our unparalleled ability to alter nature for our own benefit as the highest of virtues.

It’s time to recapture our Founding Fathers’ admiration for the virtue of each individual’s pursuit of his own happiness.

About The Author

Keith Lockitch

Vice President of Education and Senior Fellow, Ayn Rand Institute

Atlas Shrugged and the Housing Crisis that Government Built

by Yaron Brook | March 2009

Ayn Rand once said that the purpose of her novel Atlas Shrugged — which tells the story of a U.S. economy crumbling under the weight of increasing government control — was “to keep itself from becoming prophetic.” She may not have succeeded. As a number of commentators have noted, the parallels between today’s events and those dramatized in Rand’s 1957 novel are striking.

In a recent Wall Street Journal column, for instance, Stephen Moore observed that “our current politicians are committing the very acts of economic lunacy” that Atlas Shrugged depicted 52 years ago. In the novel, he points out, politicians respond to crises “that in most cases they created themselves” with more controls and regulations. These, in turn, “generate more havoc and poverty,” which spawn more controls, “until the productive sectors of the economy collapse under the collective weight of taxes and other burdens imposed in the name of fairness, equality and do-goodism.”

This certainly seems like an apt description of, say, the housing crisis. For decades, Washington promoted homeownership by people who couldn’t afford it: think Fannie Mae and Freddie Mac, the Community Reinvestment Act, tax incentives to buy homes, housing subsidies for the needy, among other programs. And when people started to default on their mortgages by the truckload? The government didn’t scrap its controls, but instead promised to bail out delinquent homeowners and irresponsible bankers and impose more regulations on all lenders (responsible or not).

But what commentators miss is that Rand’s novel provides the explanation for why this is happening — and the cause is not some inexplicable “lunacy” on the part of politicians. The cause is our very conception of fairness, equality, and the good. “Why,” states the hero of Atlas Shrugged to the people of a crumbling world, “do you shrink in horror from the sight of the world around you? That world is not the product of your sins, it is the product and the image of your virtues.”

In Rand’s novel, government puts the needs of the meek and less fortunate first. For instance, the Anti-dog-eat-dog Rule is passed to protect some long established, less-efficient railroads from better-run competitors. Why? Because it was deemed that those “established railroad systems were essential to the public welfare.” What about the superior railroad destroyed in the process? Its owner needs to be less selfish and more selfless. The Rule fails to stem the crisis, and the country sinks deeper into depression. But, wedded to the ideal that each must be his brother’s keeper, government imposes more burdens and regulatory shackles on productive companies in the name of bailing out the struggling ones — only to drive the country further toward disaster.

Sound familiar? These are the same slogans invoked and implemented today. We must be “unified in service to the greater good,” President Obama tells a cheering nation. We must heed the “call to sacrifice” and “reaffirm that fundamental belief — I am my brother’s keeper, I am my sister’s keeper. . .”

In 2002, pushing for extensive new government programs to “expand home ownership,” President Bush reminded us of our selfless “responsibility . . . to promote something greater than ourselves.”

To implement this goal, Washington allowed Fannie and Freddie to pile up dangerous levels of debt. It used the Community Reinvestment Act to coerce banks into relaxing their lending standards. It used our tax dollars to dole out housing subsidies to otherwise unqualified borrowers. And when it turned out that home buyers who couldn’t afford homes without government help also couldn’t afford them with government help, we still have not abandoned these failed policies. Clinging to the notion that we are our brother’s keeper, everyone today proposes new policies to bail out the “unfortunate.”

While the details of these policies have been debated, no one challenges their goal. No one questions whether it is morally right to be selfless and to sacrifice to “promote something greater than ourselves.”

This is what Atlas Shrugged challenges. Why, it asks, is it morally right to regard some individuals as servants of those in need, rather than as independent beings with their own lives and goals? What is noble about a morality that turns men into beggars and victims — the bailed out and the bailers out?

Atlas Shrugged presents instead a new conception of morality that upholds the right of the individual to exist for his own sake. This, Rand tells us, is the only possible basis for a free country. It’s freedom or service — the pursuit of happiness or of the “public good” — the Declaration of Independence or the endless crises of the welfare state — self-interest or self-sacrifice.

It’s still not too late to make the right choice.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

The Resurgence of Big Government

by Yaron Brook | Fall 2008 | The Objective Standard

Following the economic disasters of the 1960s and 1970s, brought on by the statist policies of the political left, America seemed to change course. Commentators called the shift the “swing to the right” — that is, toward capitalism. From about 1980 to 2000, a new attitude took hold: the idea that government should be smaller, that recessions are best dealt with through tax cuts and deregulation, that markets work pretty effectively, and that many existing government interventions are doing more harm than good. President Bill Clinton found it necessary to declare, “The era of big government is over.”

Today that attitude has virtually vanished from the public stage. We are now witnessing a swing back to the left — toward statism. As a wave of recent articles has proclaimed: The era of big government is back.1

The evidence is hard to miss. Consider our current housing and credit crisis. From day one, it was blamed on the market and a lack of oversight by regulators who were said to be “asleep at the wheel.” In response to the crisis, the government, the policy analysts, the media, and the American people demanded action, and everyone understood this to mean more government, more regulation, more controls. We got our wish.

First came the Fed’s panicked slashing of interest rates. Then the bailout of Bear Stearns. Then the bailout of Freddie Mac. Then a $300 billion mortgage bill, which passed by a substantial margin and was signed into law by President Bush. No doubt more is to come.

All of this intervention, of course, is supported by our presidential candidates. Both blame Wall Street for the current problems and vow to increase the power of financial regulators such as the Fed and the SEC. John McCain has announced that there are “some greedy people on Wall Street that perhaps need to be punished.”2 Both he and Barack Obama envision an ever-growing role for government in the marketplace, each promises to raise taxes in some form or another, and both support more regulations, particularly on Wall Street. Few doubt they will keep these promises.

What do Americans think of all this? A recent poll by the Wall Street Journal and NBC News found that 53 percent of Americans want the government to “do more to solve problems.” Twelve years earlier, Americans said they opposed government interference by a 2-to-1 margin.3

In fact, our government has been “doing more” throughout this decade. While President Bush has paid lip service to freer markets, his administration has engineered a vast increase in the size and reach of government.

He gave us Sarbanes-Oxley, the largest expansion of business regulation in decades. He gave us the Medicare prescription drug benefit, the largest new entitlement program in thirty years. He gave us the “No Child Left Behind Act,” the largest expansion of the federal government in education since 1979. This is to say nothing of the orgy of spending over which he has presided: His 2009 budget stands at more than $3 trillion — an increase of more than $1 trillion since he took office.4 All of this led one conservative columnist to label Bush “a big government conservative.”5 It was not meant as a criticism.

Americans entered the 21st century enjoying the greatest prosperity in mankind’s history. And many agreed that this prosperity was mainly the result of freeing markets from government intervention, not only in America, but also around the world. Yet today, virtually everyone agrees that markets have failed.

Why? What happened?

To identify the cause of today’s swing to the left, we need first to understand the cause and consequences of the swing to the right.

Although the swing to the right was portrayed as an embrace of capitalism, it was primarily a rebellion against the disastrous government policies of the left. Recall that the 1960s and 1970s were a time of incredible government growth — the establishment of the massive welfare state of the so-called Great Society. Medicare and Medicaid were launched, and welfare programs were greatly expanded. Government interference in the economy was at its greatest in U.S. history. Industry was heavily regulated, from how much airlines could charge and what destinations they could service, to the routes trucking companies could use and how much they were allowed to charge for freight, to the commissions that stockbrokers could charge. The Federal Reserve dictated not only the interest rates at which banks could borrow from one another, as it does today, but also the interest rates that banks could pay on savings accounts.

As a consequence of all this, inflation was rampant, in double digits for much of the 1970s. American industry struggled and became less competitive. The stock market, as measured by the Dow Jones Industrials, was basically flat from 1965 to 1982. Economic growth was nonexistent, with the 1970s characterized by repeated recessions. Unemployment was high. And because the combination of inflation, stagnation, and unemployment was a phenomenon unanticipated by economists, a new term was coined: stagflation.

This economic chaos is what Americans rebelled against in 1980 by electing Reagan. However, instead of embracing full, unregulated, laissez-faire capitalism — in which the state is separated from the economy — the Reagan administration and conservative policy makers repealed only some of the disastrous interventions strangling the economy. They lessened our crushing tax burden, rolled back a few of the most burdensome regulations, and undid some of the most destructive controls. It was a step in the right direction, but it was merely a pragmatic solution to an immediate crisis.

The intellectual groundwork for this solution had been laid largely by free-market-leaning economists such as Milton Friedman and Friedrich Hayek. Presenting and building on the work of predecessors such as Adam Smith, Frank Knight, Ludwig von Mises, and many others, they successfully dismantled virtually every economic charge against capitalism and demonstrated, from many aspects and angles, the economic impotence of statism. But on capitalism’s moral superiority — on whether capitalism is a good and just system — they had nothing new or persuasive to say. In fact, most of these advocates of free markets downplayed or denigrated the significance of morality altogether.

To be sure, some figures in the 1980s spoke of economic prosperity in vaguely moral terms. Reagan, for instance, spoke of a new “morning in America.” But neither he nor his supporters could explain why seeking wealth and pursuing one’s own economic well-being, essential characteristics of capitalism, are morally proper.

From a young age, and throughout life, Americans are taught that pursuing self-interest is petty and wrong. The noble, we are told, is the self-sacrificial. Denying the self is good, we are told, especially when it “helps others.” In the noncontroversial words of Senator McCain, “[S]erving only one’s self is a petty and unsatisfying ambition. But serve a cause greater than self-interest and you will know a happiness far more sublime than the fleeting pleasure of fame and fortune.”6

The economic advocates of freer markets did nothing to dislodge this premise; often they did worse than nothing: They reinforced it.

Thus the view remains prevalent in America that although making money is practical, only “giving it back” is moral. Bill Gates might be admired for the business smarts that earned him tens of billions of dollars, but he has routinely been condemned for creating his fortune and is now regarded as “noble” only because he has given away billions to charitable causes.

And not only are self-interest and individual prosperity viewed as morally suspect; so too is the mode of action demanded and protected by a capitalist system: the profit motive. A person pursuing his own profit, conventional morality teaches us, is unprincipled and without scruples and will engage in any form of corruption if it suits his purposes. A free market unleashes greed. Corporations, businesses, capitalists, workers, traders, speculators, individual investors — each acts to make money, to make a profit for himself. Thus, given the contempt for self-interest, the conclusion is inescapable: Capitalism unleashes vicious, not virtuous, action.

Although some on the right try to evade this conclusion, many openly acknowledge it. Instead of searching for new moral principles to defend the selfish nature of capitalism, they bemoan that nature. In a time of economic crisis, these individuals — largely religious conservatives and neoconservatives — proclaim the economic success capitalism brings. But they hate the fact that business is motivated by self-interest and profit-seeking, and they say so.

Because of capitalism’s inherent selfishness, Irving Kristol, a leading neoconservative “defender” of the free market, can muster only two cheers, not three, for capitalism. And the prominent conservative writer Michael Novak, in a speech claiming to defend capitalism morally, could not manage even that.

Capitalism is by no means the Kingdom of God. It is a poor and clumsy human system. Although one can claim for it that it is better than any of its rivals, there is no need to give such a system three cheers. My friend Irving Kristol calls his book Two Cheers for Capitalism. One cheer is quite enough.7

The swing to the right was a swing to avoid economic catastrophe — a “practical” move, not a moral one. And given the lack of moral justification, the swing could not last.

Confronted with the prospect of economic collapse, Americans — the most reality-oriented people on the planet — listened to practical solutions. They were willing to put aside the teachings of their morality to avert disaster, particularly when they could see the ravages to which these teachings led when followed consistently, as they were being followed in the Communist bloc, and when their pragmatic reaction was given a vaguely moral spin, as it was when Reagan spoke of a revival of America’s independent, can-do spirit.

But once the collapse was avoided and some prosperity restored, the meaning and demands of the temporarily repressed morality had to resurface. If it is unfair that some Americans cannot afford homes, to take but one example, then the government should require (i.e., force) banks to lend them money — hence the Community Reinvestment Act (CRA). So what if this will lead to some economic inefficiency or problem down the road? People are in need now. Morally, everyone knows, we cannot turn our backs on the poor.

Some economists who inspired the swing to the right conceded that government intervention is needed when the market does not lead to the results we believe are morally right. And in any case, because most supporters of freer markets could not or would not challenge conventional moral teachings, they had no answer to those who asked, “Why shouldn’t we sacrifice a little bit of economic efficiency to do what’s right?”

The conflict between the alleged immorality and the perceived practicality of capitalism has been and is the bane of liberty in America. If capitalism is so flawed morally, how can we trust it economically?

If selfishness and the profit motive are immoral, then no wonder they are blamed for any and all economic crises. Nor is it any wonder that the government — which we are assured is not self-interested — is posited as the solution to such greed-induced crises. Politicians and bureaucrats, we are told, are working not for their own benefit, but for the “common good” or “public interest.” Thus, economic disasters cannot be their fault; the blame must lie on the shoulders of greedy businessmen.

Because Americans accept the notion that self-interest is morally wrong, they have come to equate businessmen with crooks, on the grounds that both pursue self-interested goals. The argument goes, in effect, like this: Left to his own devices, free from the watchful eye of our public servants in Washington, a businessman will try to make a buck by raiding the cookie jar rather than by producing and selling cookies.

So, although in the wake of the economic disaster that was the 1970s Americans came to think that some freeing of markets was necessary, they were never morally comfortable with capitalism. And almost all of the culture’s voices — on the left and right, Democratic and Republican — told them they were right to be uneasy, that capitalism unleashed greed and destructive “excesses.” As the 1970s faded from memory and prosperity returned, a familiar pattern reemerged: Whenever a new economic problem surfaced, the cause had to be “greed” and “market excess” — and the cure, government intervention. The swing to the right had come to an end, and the pendulum had reversed course.

Consider again the current problems in the housing market. The causes are complex, but the driving force is clearly government intervention: the Fed keeping interest rates around 1 percent for a year, thus encouraging people to borrow and providing the impetus for a housing bubble; the CRA, which forces banks to lend money to low-income and poor-credit households (otherwise known as sub-prime lending); the creation of Fannie Mae and Freddie Mac with government-guaranteed debt, leading to artificially low mortgage rates and the illusion that the financial instruments created by bundling mortgages are low-risk; government licensing of the rating agencies, which has eliminated competition among raters of financial securities and entrenched a suspect business model; deposit insurance and the “too big to fail” doctrine, which have created huge distortions in incentives and risk-taking throughout the financial system; and so on.8

These are the real causes of the housing and financial crisis. But this fact, open to anyone who makes the effort to see, goes unnoticed because commentators, politicians, and policy makers know the popularly accepted “cause” in advance: the “greed” of banks, mortgage brokers, lenders, and borrowers — the “greed” that produced “market excesses.” The solution? Rein in the selfishness of these market participants with new government interventions and more government regulators. Thus the very cause of the housing mess — government interference in the market — is adopted as the solution, and the government’s power to dictate our economic decisions grows and grows.

This was the historical pattern of the 20th century. In every major economic crisis, the evidence implicating government interventions went unnoticed, and blame was laid instead at the feet of the market.

The Great Depression? Despite massive evidence that the Federal Reserve’s and other government policies were responsible for the crash and the inability of the economy to recover, “greedy” investors, speculators, and businessmen were blamed. Consequently, in the aftermath, the government’s power over the economy was not curtailed but dramatically expanded.

The energy crisis of the 1970s? Despite evidence that it was brought on by price controls, fiat currency, and legal restrictions on our capacity to produce energy, “greedy” oil companies were blamed. The prescribed “solution” was for the government to exert even more control.

Time and time again, the failures of statism have been blamed on capitalism and cited as a rationalization for more statism.

So, what we witnessed in the 1980s and 1990s was a period of unshackling markets a little bit in order to prevent economic disaster within a longer-term trend of growing government power over our economic lives. Reversing this longer-term trend requires that capitalism be seen not as an unpleasant and temporary fix, but as the noble ideal and moral solution that it actually is. In other words, reversing the trend requires a profound shift in the nation’s moral convictions. It requires a radical change in Americans’ conception and evaluation of self-interest.

To be self-interested is to be dedicated to the actual requirements of your life and long-term happiness. This, Americans must come to understand, demands much of a person. To be self-interested, one must first figure out which goals and values will in fact advance one’s life and happiness; then one must determine which actions will in fact secure these goals and values. None of this knowledge is trivial or self-evident; it is essential to good living, and comes only from rational thinking.

Americans must come to realize that a person who lies, cheats, and steals does not qualify as self-interested, precisely because he does no such thinking. He does not consider the long-term requirements of his life and happiness, map out a course to achieve them, and then pursue that course with passion and rigor. Rather, he takes the “easy way” by seeking unearned money, ersatz love, and political power that enables him to “get away” with such things. He does not think and act rationally; he does whatever he feels like doing. And regardless of whether he is caught and jailed for his crimes, or whether his trophy wife cheats on him or divorces him and takes him to the cleaners, or whether his political “career” collapses as his indiscretions are aired on national television, he is not and cannot be happy — because happiness is a consequence of rational thought and productive effort, not of evasion and parasitism.

Americans must come to understand that a selfish person is the opposite of the stereotype: A selfish person is a thinker and a creator.

It is true that capitalism unleashes selfishness. But Americans must learn what this actually means: Capitalism unleashes the thinkers and creators of the world. By protecting the individual’s right to life, liberty, property, and the pursuit of happiness, capitalism leaves each individual free to use his mind and produce goods. When you consider some of the giants of American industry — from John D. Rockefeller to JP Morgan to Henry Ford to Sam Walton to Bill Gates — two things stand out: the ideas these men originated and the novel products and business innovations they created. These men created products and services that improved our lives by orders of magnitude — and they were able to originate and produce such goods only to the extent that the government left them alone. Americans must come to realize that our quality of life stands in direct proportion to the freedom of industrialists and businessmen to act selfishly — and that the only way to defend such freedom is to recognize and uphold the morality of self-interest.

Americans must come to realize that we have nothing to fear from businessmen acting selfishly. On the contrary, we can only gain from their rational, productive efforts. We can learn from their example and profit from the opportunity to trade with them. Our unprecedented prosperity and standard of living exist not despite but because of these men. To shackle and tether such individuals with government regulations and interventions — to treat them as potential or actual Al Capones — is both unjust and self-destructive. Where would we be without our cars, medications, and computers?

Liars and cheats and crooks exist in every era and every culture, but under capitalism their opportunities diminish and their “lifestyles” become more difficult. Because capitalism entails a wall of separation between economy and state, their path to power is cut off. In a capitalist society, no businessmen or lobbyists would be skulking around Washington in search of favorable government interventions — whether subsidies for themselves or shackles for their competitors — because the government could not intervene in economic affairs. No politicians would be promising a new prescription drug benefit to be paid for by soaking the rich or by treating the middle class as beasts of burden, because the government would be constitutionally prohibited from “managing” the economy. Fewer business scandals on the order of WorldCom or Enron would arise, because unscrupulous businessmen would not stand a chance competing against fully free businessmen with long-range vision and integrity, men such as JP Morgan and Sam Walton (not to mention that in a capitalist system, actual crooks would be jailed).

Americans must come to understand that appeals to the “common good” and the “public interest” are not moral claims but licenses to evil. Because the American public is just a number of separate individuals, whenever some group trumpets action in the name of the “public interest” — say, a new prescription drug benefit or Social Security scheme — it is declaring that the wishes of some individuals trump the rights and interests of other individuals. But everyone has a moral right to pursue his own happiness, free from coercive interference by others. If it is to have a legitimate meaning, the “public interest” can mean only this: The rights of each and every individual are equally protected by the government.

If Americans want to turn permanently toward a genuinely free market — and thus toward peak prosperity — they will have to reconsider their moral convictions. They will have to discover a new morality, one based on the requirements of human life and backed by detailed arguments and demonstrable facts. This is what Ayn Rand offers in her body of writings. She is the only champion of capitalism who would and could defend capitalism on moral grounds, as indicated by the radical titles of her books Capitalism: The Unknown Ideal and The Virtue of Selfishness. Those who want to fight the trend toward statism — those who want to effect a real and lasting turn toward capitalism — would do well to study her thought.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Endnotes

Acknowledgements: The author would like to thank Don Watkins for his editorial assistance and Onkar Ghate for his suggestions and editing.

1 See, for instance: “Amid Turmoil, U.S. Turns Away From Decades of Deregulation,” July 25, 2008, http://online.wsj.com/article_print/SB121694460456283007.html; “The Return of Big Government,” April 11, 2008, http://www.usnews.com/articles/business/economy/2008/04/11/the-return-of-big-government.html; “How Big Government Got Its Groove Back,” June 9, 2008, http://www.prospect.org/cs/articles?article=how_big_government_got_its_groove_back; “A move to curb capitalism?” May 30, 2008, http://www.washingtontimes.com/news/2008/may/30/a-move-to-curb-capitalism/.

2 “Transcript of Republican presidential debate in Simi Valley,” http://www.baltimoresun.com/news/politics/la-na-transcript-cnn,0,3961241.story?page=10.

3 “Amid Turmoil, U.S. Turns Away From Decades of Deregulation,” July 25, 2008, http://online.wsj.com/article_print/SB121694460456283007.html.

4 “Budget of the United States Government,” Fiscal Year 2009, http://www.gpoaccess.gov/usbudget/fy09/pdf/hist.pdf.

5 “A ‘Big Government Conservatism,’” August 15, 2003, http://www.opinionjournal.com/extra/?id=110003895.

6 “John McCain’s New Hampshire Primary Speech,” January 8, 2008, http://www.nytimes.com/2008/01/08/us/politics/08text-mccain.html.

7 Michael Novak, “The Moral Case for Capitalism,” Wealth & Virtue, February 18, 2004, http://www.nationalreview.com/novak/novak200402180913.asp.

8 See my Forbes.com column, “The Government Did It,” July 18, 2008, for a fuller discussion of the causes of the housing and financial crisis; http://www.forbes.com/2008/07/18/fannie-freddie-regulation-oped-cx_yb_0718brook_print.html.

Vindicating Capitalism: The Real History of the Standard Oil Company

by Alex Epstein | Summer 2008 | The Objective Standard

Who were we that we should succeed where so many others failed? Of course, there was something wrong, some dark, evil mystery, or we never should have succeeded!1

— John D. Rockefeller

The Standard Story of Standard Oil

In 1881, The Atlantic magazine published Henry Demarest Lloyd’s essay “The Story of a Great Monopoly” — the first in-depth account of one of the most infamous stories in the history of capitalism: the “monopolization” of the oil refining market by the Standard Oil Company and its leader, John D. Rockefeller. “Very few of the forty millions of people in the United States who burn kerosene,” Lloyd wrote,

know that its production, manufacture, and export, its price at home and abroad, have been controlled for years by a single corporation — the Standard Oil Company. . . . The Standard produces only one fiftieth or sixtieth of our petroleum, but dictates the price of all, and refines nine tenths. This corporation has driven into bankruptcy, or out of business, or into union with itself, all the petroleum refineries of the country except five in New York, and a few of little consequence in Western Pennsylvania. . . . the means by which they achieved monopoly was by conspiracy with the railroads. . . . [Rockefeller] effected secret arrangements with the Pennsylvania, the New York Central, the Erie, and the Atlantic and Great Western. . . . After the Standard had used the rebate to crush out the other refiners, who were its competitors in the purchase of petroleum at the wells, it became the only buyer, and dictated the price. It began by paying more than cost for crude oil, and selling refined oil for less than cost. It has ended by making us pay what it pleases for kerosene. . . .2

Many similar accounts followed Lloyd’s — the most definitive being Ida Tarbell’s 1904 History of the Standard Oil Company, ranked by a survey of leading journalists as one of the five greatest works of journalism in the 20th century.3 Lloyd’s, Tarbell’s, and other works differ widely in their depth and details, but all tell the same essential story — one that remains with us to this day.

Prior to Rockefeller’s rise to dominance in the early 1870s, the story goes, the oil refining market was highly competitive, with numerous small, enterprising “independent refiners” competing harmoniously with each other so that their customers got kerosene at reasonable prices while they made a nice living. Ida Tarbell presents an inspiring depiction of the early refiners.

Life ran swift and ruddy and joyous in these men. They were still young, most of them under forty, and they looked forward with all the eagerness of the young who have just learned their powers, to years of struggle and development. . . . They would meet their own needs. They would bring the oil refining to the region where it belonged. They would make their towns the most beautiful in the world. There was nothing too good for them, nothing they did not hope and dare.4

“But suddenly,” Tarbell laments, “at the very heyday of this confidence, a big hand [Rockefeller’s] reached out from nobody knew where, to steal their conquest and throttle their future. The suddenness and the blackness of the assault on their business stirred to the bottom their manhood and their sense of fair play. . . .”5

Driven by insatiable greed and pursuing his firm’s self-interest above all else, the story goes, Rockefeller conspired to obtain an unfair advantage over his competitors through secret, preferential rebate contracts (discounts) with the railroads that shipped oil. By dramatically and unfairly lowering his costs, he slashed prices to the point that he could make a profit while his competitors had to take losses to compete. Sometimes he went even further, engaging in “predatory pricing”: lowering prices so much that Standard took a small, temporary loss (which it could survive given its pile of cash) while his competitors took a bankrupting loss.

These “anticompetitive” practices of rebates and “predatory pricing,” the story continues, forced competitors to sell their operations to Rockefeller — their only alternative to going out of business. It was as if he was holding a gun to their heads — and the “crime” only grew as Rockefeller acquired more and more companies, enabling him, in turn, to extract ever steeper rebates from the railroads, which further enabled him to prey on new competitors with unmatchable prices. This continued until Rockefeller acquired an unchallengeable monopoly in the industry, one with the “power” to banish future competition at will and to dictate prices to suppliers (such as crude oil producers) and consumers, who had no alternative refiner to turn to.

Pick a modern history or economics textbook at random and you are likely to see some variant of the Lloyd/Tarbell narrative being taken for granted. Howard Zinn provides a particularly succinct illustration in his immensely popular textbook A People’s History of the United States. Here is his summary of Rockefeller’s success in the oil industry: “He bought his first oil refinery in 1862, and by 1870 set up Standard Oil Company of Ohio, made secret agreements with railroads to ship his oil with them if they gave him rebates — discounts — on their prices, and thus drove competitors out of business.”6

Exhibiting the same “everyone knows about the evil Standard Oil monopoly” attitude, popular economist Paul Krugman writes of Standard Oil and other large companies of the late 19th century:

The original “trusts” — monopolies created by merger, such as the Standard Oil trust, or its emulators in the sugar, whiskey, lead, and linseed oil industries, to name a few — were frankly designed to eliminate competition, so that prices could be increased to whatever the traffic would bear. It didn’t take a rocket scientist to figure out that this was bad for consumers and the economy as a whole.7

The standard story of Standard Oil has a standard lesson drawn from it: Rockefeller should never have been permitted to take the destructive, “anticompetitive” actions (rebates, “predatory pricing,” endless combinations) that made it possible for him to acquire and maintain his stranglehold on the market. The near-laissez-faire system of the 19th century accorded him too much economic freedom — the freedom to contract, to combine with other firms, to price, and to associate as he judged in his interest. Unchecked, economic freedom led to Standard’s large aggregation of economic power — the power flowing from advantageous contractual arrangements and vast economic resources that enabled it to destroy the economic freedom of its competitors and consumers. This power, we are told, was no different in essence than the political power of government to wield physical force in order to compel individuals against their will. In the free market, through unrestrained voluntary contracts and combinations, Standard had allegedly become the equivalent of a king or dictator with the unchallenged power to forbid competition and legislate prices at whim. “Standard Oil,” writes Ron Chernow, author of the popular Rockefeller biography Titan, “had taught the American public an important but paradoxical lesson: Free markets, if left completely to their own devices can wind up terribly unfree.”8

This lesson was and is the logic behind antitrust law, in which government uses its political power to forcibly stop what it regards as “anticompetitive” uses of economic power. John Sherman, the author of America’s first federal antitrust law, the Sherman Antitrust Act of 1890, likely had Rockefeller in mind when he said:

If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life. If we would not submit to an emperor, we should not submit to an autocrat of trade, with power to prevent competition, and to fix the price of any commodity.9

But Rockefeller was no autocrat. The standard lesson of Rockefeller’s rise is wrong — as is the traditional story of how it happened. Rockefeller did not achieve his success through the destructive, “anticompetitive” tactics attributed to him — nor could he have under economic freedom. Rockefeller had no coercive power to banish competition or to dictate consumer prices. His sole power was his earned economic power — which was no more and no less than his ability to refine crude oil to produce kerosene and other products better, cheaper, and in greater quantity than anyone thought possible.

It has been more than one hundred years since Ida Tarbell published her History of the Standard Oil Company. It is time for Americans to know the real history of that company and to learn its attendant and valuable lessons about capitalism.

The “Pure and Perfect” Early Refining Market

Any objective analysis of the nature of Rockefeller’s rise to dominance — Standard Oil had an approximately 90 percent market share in oil refining from 1879 to 189910 — must take into account the context in which he rose. This means taking a thorough look at the market he came to dominate before he entered it.

Traditional accounts of Rockefeller’s ascent, which began in 1863, portray the pre-Rockefeller market as a competitive paradise of myriad “independent refiners” — a paradise that Rockefeller destroyed when he drove his competitors out of business and wrested full “control” of the oil refining business for himself.

This idealized view of the early oil refining market appeals to most readers, who have been taught that a good, “competitive” market is one with as many viable competitors as possible, and that it is “anti-competitive” to have a market with a few dominant participants (“oligopoly”), let alone one dominant participant (“monopoly”). This view of markets was formalized in the 20th century as the doctrine of “pure and perfect” or “perfect” competition, which holds that the ideal market consists of as many distinct producers as possible, each selling equally desirable, interchangeable products. Under “perfect competition,” no one competitor has any independent influence on price, and the profits of each are minimized as much as possible (on some variants of “perfect competition,” prices equal costs and profits are nonexistent). Although advocates of this view acknowledge (or lament) that it cannot exist in reality, they view it as a model market toward which we should at least strive.

By this standard, the early oil refining market was “perfect” in many ways. Many small, “independent,” practically indistinguishable refiners were in business. No one threatened to drive the others out of business, and the market was extremely easy to enter; those with no experience in refining could buy the necessary equipment for three hundred dollars and start making profits almost immediately.11 Some refiners recovered their start-up costs after one batch of kerosene.12

But the traditional perspective ignores the aspect of markets most relevant to their impact on human life: their productivity — how much a market produces, the value of what it produces, and the efficiency with which it produces it. By this standard, the oil refining market was anything but perfect — refiners were at an early, primitive stage of productivity, a stage that, happily, did not last.

This is not a moral criticism of the early oil refining industry. The first five years of that industry, along with the crude production industry, from 1859 to 1864, were full of great achievements. It is almost impossible to overstate the dramatic and near-immediate positive effect of a group of scientists and businessmen discovering that “rock oil,” previously thought to be useless, could be refined to produce kerosene — the greatest, cheapest source of light known to man. In 1858, a year before the first oil well was drilled, only well-to-do families such as that of 11-year-old Henry Demarest Lloyd could afford sperm whale oil at three dollars per gallon to light their homes at night.13 For most, the day lasted only as long as did the daylight. But by 1864, just five years into the industry, a New York chemist observed:

Kerosene has, in one sense, increased the length of life among the agricultural population. Those who, on account of the dearness or inefficiency of whale oil, were accustomed to go to bed soon after the sunset and spend almost half their time in sleep, now occupy a portion of the night in reading and other amusements; and this is more particularly true of the winter seasons.14

Still, the market’s primitive methods of production and distribution at this early stage made it impossible for it to have anywhere near the worldwide impact it would have by the time Lloyd’s famous essay damning Rockefeller was published.

A particularly problematic area was transportation, which was convoluted and extremely expensive. Oil was transported in 42-gallon wood barrels of spotty quality, costing $2.50 each. Each one had to be filled and sealed separately and piled onto a railroad platform (where barrels were prone to leak or fall off) or occasionally onto a barge (where barrels were prone to fall off and start fires).15 The myriad small refiners each could ship only a handful of barrels at a time; this required the railroads to make many separate stops at different destinations for different refiners, which resulted in a lengthy and expensive journey for both railroads and refiners. And for some time, this was the best aspect of the process. In the early days, to get barrels of crude oil from assorted oil spots in northwest Pennsylvania onto railways headed for the refineries, oil was transported by horse and wagon by teamsters, often through roadless territory and waist-high mud, with barrels perpetually bouncing and frequently breaking or falling out. (Because of government intervention, the teamsters had a huge influence in politics and for years prevented the construction of local pipelines — an incomparably superior form of oil transportation.)16

The refining process, the core of the industry, was also at a primitive stage. To refine crude oil is to extract from it one or more of its valuable “fractions,” such as kerosene for illumination, paraffin wax for candles, or gasoline for fuel. The heart of the refining process uses a “still” — a distillation apparatus — to heat crude oil at multiple, increasing temperatures to boil off and separate the different fractions, each of which has a different boiling point. Distillation is simple in concept and basic execution, but to produce quality kerosene and other by-products requires precise temperature controls and various additional purification procedures. Impure kerosene could be highly explosive; death by kerosene was a common phenomenon in the 1860s and even the 1870s, claiming thousands of lives annually. In fact, the spotty quality of much American kerosene is what inspired John Rockefeller to call his company Standard Oil.17

Some refineries in the early 1860s, such as those of famed refiners Joshua Merrill and Charles Pratt, produced safe, high-quality kerosene, but most did not. Tarbell’s exalted “independent refiners” from the Oil Regions of Pennsylvania, incidentally, produced the worst quality kerosene.

“Deluded by petroleum enthusiasts as to the simplicity of refining,” write Williamson and Daum in their comprehensive history of the early petroleum industry, “individuals inexperienced in any form of distillation flocked into the new business. . . .” But, they note,

successful petroleum refining . . . called for the utmost vigilance. . . . Real separation of the various components of crude oil was no objective at all; their major purpose was simply to distill off the gases, gasoline and naphtha fractions as fast as heat and condensation could permit. All condensed liquid that conceivably could be fobbed off as burning oil . . . was recovered and the tar residue was thrown away. . . . Only in the provincial isolation of the Oil Region and nearby locations did such outfits receive serious designations as petroleum refineries.18

In a mature market, such operations, with their inferior, hazardous products, would never succeed. But in the early stages of the market, anyone could succeed, because the overall refining capacity was insufficient to meet the enormous demand for kerosene. Even lower-quality kerosene was spectacularly valuable compared to any other illuminant Americans could buy.

The supply-and-demand equation of kerosene even made it possible for refiners with low efficiency to profit handsomely. In 1865, kerosene cost fifty-eight cents a gallon; at one-fifth the cost of whale oil this was a great deal for consumers — and it was a price at which anyone with a still could make money. Even if the still was very small, requiring much more manpower and other expenses per gallon of output than a larger still; even if the still refined only kerosene and failed to make use of the other 40 percent of crude; even if the still was low-quality and needed frequent repair or replacement — the owner could turn a healthy profit.19

This stage of the industry was necessarily temporary. As more and more people entered the refining industry, attracted by the premium profits, prices inevitably went down — as did profits for those who could not increase their efficiency accordingly.

Such a process, which began in the mid-1860s, was more dramatic than almost anyone expected. Between 1865 and 1870, refining capacity exploded relative to oil production, and prices plummeted correspondingly. In 1865, kerosene cost fifty-eight cents a gallon; by 1870, twenty-six cents.20 Refining capacity was increasing relative to the supply of oil; by 1871 the ratio of capacity to crude production was 2.5:1.21 At this point, those who expected to make a livelihood with three-hundred-dollar stills found the market very inhospitable. A shakeout of the efficient men from the inefficient boys was inevitable. In the mid-1860s, no one imagined that the best of the men, by orders of magnitude, would turn out to be a 24-year-old boy named John Davison Rockefeller.

The Phenomenon

In 1863, the first railroad line was built connecting the city of Cleveland to the Oil Regions in Pennsylvania, where virtually all American oil came from. Clevelanders quickly took the opportunity to refine oil — as had the residents of the Oil Regions, Pittsburgh, New York, and Baltimore. Cleveland had the disadvantage of being one hundred miles22 from the oil fields but the advantage of having far cheaper prices for materials and land (Oil Regions real estate had become extremely expensive), plus proximity to the Erie Canal for shipping.23

In 1863, Rockefeller was running a successful merchant business with his partner, Maurice Clark, when a local man named Samuel Andrews approached the two. A talented amateur chemist, Andrews sought their investment in a refinery. After investigating the industry, Rockefeller convinced Clark that they should invest four thousand dollars.24 Rockefeller was attracted to the substantial — and then stable — profits of the refining industry, in contrast to the production industry, which alternated between incredible booms and busts. (When producers struck a “gusher,” whole towns were built up to the height of 1860s luxury; when they dried up, those towns faded into abject poverty.) He was not, however, impressed with the efficiency with which refiners ran their operations. He believed he could do better.

And he did — immediately. Instead of setting up a shanty refinery, Rockefeller invested enough to create the largest refinery in Cleveland: Excelsior Works. From the beginning, he encouraged Andrews to expand and improve the refinery, which soon produced 505 barrels a day,25 as compared to some refineries in the Oil Regions that produced as few as five barrels a day.26 Additionally, in a highly profitable act of foresight, Rockefeller carefully bought the land for his refinery in a place from which it would be easy to ship by railroad and by water, thus putting shippers in competition for his business; his competitors simply placed their refineries near the new Cleveland rail line and took for granted that it would be their means of transportation.27

Rockefeller’s business background made him well-suited to run a highly efficient firm. His first interest in business had been accounting — the art of measuring profit and loss (i.e., economic efficiency). Rockefeller’s first job had been as an assistant bookkeeper, and for his entire career he revered the practice of careful financial record-keeping. “For Rockefeller,” writes Ron Chernow, “ledgers were sacred books that guided decisions and saved one from fallible emotion. They gauged performance, exposed fraud, and ferreted out hidden inefficiencies.”28

Rockefeller was hardly the only man in the refining industry with a background in accounting or a concern with efficiency. But he was distinguished in this regard by his degree of focus on applying good accounting practices to his new business. Rockefeller, from a young age, exhibited an obsessive, laser-like concentration on whatever he chose as his purpose. At age sixteen he landed an accounting job after six weeks of repeated visits to top firms, shrugging off rejections until he finally convinced one of them, Hewitt and Tuttle, to hire him.29 Applied to the task of minimizing costs and maximizing revenues in his refining operation, Rockefeller’s focus brought Standard Oil phenomenal success.

While other refiners took any given business cost for granted — including the cost of barrels and the cost of crude — Rockefeller put himself and those who worked for him to the task of discovering ways to lower every cost while continuously seeking additional sources of revenue.

Consider the cost of transporting oil in barrels. Barrels were a major expense for everyone in the industry, and barrel makers were notoriously unreliable when it came to delivering barrels on time. Rockefeller at once slashed his costs and solved this reliability problem by having his firm manufacture its own barrels. He purchased forest land, had laborers cut wood, and — in a crucial innovation — had the wood dried in a kiln before using it to transport kerosene. (Others used green wood barrels, which were far heavier and thus more expensive to transport.) With these and other innovations, Rockefeller’s barrel costs dropped from $2.50 a barrel to less than $1 a barrel — and he always had barrels when he needed them.30

Rockefeller further lowered his costs by eliminating the use of barrels altogether in receiving crude oil (barrels would remain in use for shipping refined oil to customers for some time). He did so by investing in “tank cars” — railroad cars fitted with giant tanks — shortly after they came on the market in 1865. By 1869, he owned seventy-eight of them, yielding huge cost savings over his competitors.31

Or consider the cost of buying crude, which most people took as entirely dependent upon current market prices. One way in which he cut this cost was by employing his own purchasing agents, which eliminated the need for paying “jobbers” (purchasing middlemen). A shrewd negotiator, Rockefeller trained his purchasing agents to obtain the best possible prices. Further saving money and improving his negotiating position, Rockefeller built large storage facilities to keep crude in reserve, so that he would not have to pay exorbitant prices in the event of a spike in its price. Accordingly, his purchasing agents developed comprehensive, constantly updated knowledge of the industry so that they could determine the most opportune times to purchase crude.

These improvements, along with many others, reflected a practice that characterized Rockefeller’s firm for his thirty-five years at the helm: vertical integration — incorporating into a company functions that it had previously paid others to do. Time after time, Rockefeller found that, given his and his subordinates’ talent and innovative spirit, many facets of the business could be done more cheaply if his firm undertook them itself.

Rockefeller also lowered costs in the refining process itself. One particularly innovative form of cost-cutting in which he engaged was self-insurance against fires. In the early refining industry, the danger of fire was omnipresent. Even in the 1870s, when safety improved significantly, premiums “varied from 25 per cent down to 5 per cent of valuation. . . .”32 Rockefeller determined that he could save money by self-insuring. He regularly set aside income to handle fire damage, while implementing every safety precaution he and his men could think of. The practice saved the company thousands and, eventually, millions of dollars; over time, its insurance funds grew to the point where they could be used to pay large dividends to shareholders. (In later years, Rockefeller contained the risk of fire even more by multiplying refineries across the country, so that one disaster could do only so much damage.)33

Another refining cost that Rockefeller minimized was the chemical treatment of kerosene. Samuel Andrews was skilled at determining the right quantity of sulfuric acid needed to completely purify distilled kerosene. This was important because sulfuric acid was expensive. Rockefeller saved money by getting ideal results with 2 percent whereas competitors often used up to 10 percent.34

And in building his refineries, Rockefeller used the highest quality materials to get maximum longevity from equipment — thus avoiding the reliability issues of early stills — and he built large facilities so as to lower his labor costs per gallon refined.

Rockefeller also worked to maximize the amount of revenue he could bring in, both by selling by-products of crude besides kerosene and by establishing marketing operations in major consumer states and overseas. Appalled at the idea of wasting the 40 percent of his crude that was not kerosene, Rockefeller extracted and sold the fraction naphtha, and he sold much of the remaining portion of the crude to other refiners who specialized in other non-kerosene fractions, such as paraffin wax and gasoline. (He also used fuel oil from crude to help power his plants, thereby saving money on coal.) Later, Rockefeller’s firm refined and sold all these fractions — becoming what is called a “complete” refinery — but even before that development, he let no cost-cutting or value-creating opportunity go to waste.

In the mid-1860s, Rockefeller set up an office in New York City to focus on overseas sales. The overseas market for kerosene was larger than the American market and presented a great opportunity to Rockefeller since nearly all the world’s known oil at the time was American. Recognizing the importance of having a steady stream of foreign demand, Rockefeller had his brother head the New York operation to keep tabs on the various markets and maximize sales.

These improvements in efficiency and marketing resulted in a company that was staggeringly more productive than most of its rivals, and well on its way to revolutionizing the oil refining industry.

Rockefeller’s obsession with cutting costs has been called “penny-pinching”35 — a term that aptly describes his desire and ability to cut costs to the smallest detail. But insofar as it conjures an image of a miserly businessman, the term does not apply. Rockefeller, by disposition and in action, was anything but averse to spending money; he recognized that spending in the form of investing was vital to the dramatic increases in efficiency he sought and achieved.

Many of Rockefeller’s penny-pinching methods required investments, often large ones. He knew that although these would cut into his cash in the short run, they would prove profitable in the long run — if the company simultaneously invested in its growth. The greater the firm’s output, the more it could leverage economies of scale, achieving greater efficiency by dispersing productivity-increasing costs over a greater number of units. By virtue of its size and output, Rockefeller’s firm was able, for example, to purchase, maintain, and replant forests in order to more efficiently produce barrels — a strategy that would be utterly unprofitable for a small refiner producing, say, fifty barrels a day.

The bigger the company, the more it can invest in efficiency-increasing measures — from tank cars to forests to purchasing agents to self-insurance — when it makes financial sense. Recognizing this, Rockefeller reinvested profits in the business at every opportunity. Whereas other oilmen in the booming 1860s spent almost all of their profits on the premise that current market conditions would endure and therefore future revenue would easily cover their future costs, Rockefeller reinvested as much of the firm’s profit as possible in its growth, efficiency, and durability.
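
To make the economies-of-scale arithmetic concrete, here is a minimal sketch in Python. Every figure in it (the daily cost of a fixed investment, the per-barrel variable cost, the output levels) is a hypothetical assumption chosen for illustration, not a historical number; the point is only that a fixed, efficiency-increasing cost shrinks per unit as output grows.

```python
# A hypothetical illustration of economies of scale: a fixed, efficiency-increasing
# investment (say, a barrel-making operation) spread over different daily outputs.
# All figures are assumptions made up for illustration, not historical data.

FIXED_COST_PER_DAY = 200.0       # assumed daily cost of the fixed investment, in dollars
VARIABLE_COST_PER_BARREL = 0.75  # assumed per-barrel cost once the investment is in place

def cost_per_barrel(barrels_per_day: float) -> float:
    """Per-barrel cost = fixed cost spread over daily output + variable cost."""
    return FIXED_COST_PER_DAY / barrels_per_day + VARIABLE_COST_PER_BARREL

for output in (50, 500, 1500):
    print(f"{output:>5} barrels/day -> ${cost_per_barrel(output):.2f} per barrel")

# Prints:
#    50 barrels/day -> $4.75 per barrel
#   500 barrels/day -> $1.15 per barrel
#  1500 barrels/day -> $0.88 per barrel
```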

Rockefeller also solicited large amounts of capital from outside the company. Early on, he borrowed money frequently, which he could do easily given his lifelong track record of perfect credit. Rockefeller’s penchant for borrowing turned out to be his path to assuming full leadership of the company. His business partner, Maurice Clark, complained routinely during the refinery’s first two years about Rockefeller’s borrowing, and in 1865 he threatened to dissolve the firm. Rockefeller called his bluff, announced the dissolution in the paper, and agreed to bid against him for the refinery business. The 26-year-old Rockefeller won, at a price of $72,500 (the equivalent today of about $820,000).36 Clark thought he had gotten the better end of the deal — but given what Rockefeller was to accomplish in the next five years, Clark would undoubtedly come to think twice.

In 1867, Rockefeller accepted an outside investment of several hundred thousand dollars from Henry Flagler and Stephen Harkness.37 The investment turned out better than anyone could have hoped; Rockefeller gained not only vital capital but also Flagler, who would be his beloved right-hand man for decades to come.

By 1870, the firm of Rockefeller, Andrews, and Flagler was, thanks to Rockefeller’s vision, a super-efficient refining machine, generating more than fifteen hundred barrels a day38 — more than most refineries could produce in a week — at lower cost than anyone else. And in that year, the firm became the Standard Oil Company of Ohio — a joint-stock company, of the type used by railroads, that enabled Rockefeller to more easily acquire other refiners in the coming years.

Reflecting Rockefeller’s profitable investments in efficiency, the Company declared assets including “ . . . sixty acres in Cleveland, two great refineries, a huge barrel making plant, lake facilities, a fleet of tank cars, sidings and warehouses in the Oil Regions, timberlands for staves, warehouses in the New York area, and [barges] in New York Harbor.”39

But Standard’s most important asset was Rockefeller, followed by his close associates. Rockefeller’s ambition for the expansion of the business was only growing, and he talked with Henry Flagler morning, noon, and night about possibilities and plans. Reflecting on the company nearly fifty years later, Rockefeller recalled: “We had vision. We saw the vast possibilities of the oil industry, stood at the center of it, and brought our knowledge and imagination and business experience to bear in a dozen, in twenty, in thirty directions.”40

The days of indistinguishably inefficient refiners were over. And Rockefeller, barely thirty, was just scratching the surface of his productive potential.

Having explored this much of Rockefeller’s hard-earned success, let us turn to his most controversial form of cost savings and efficiency: railroad rebates.

The Virtuous Rebates

Historians overwhelmingly attribute Rockefeller’s success to his dealings with the railroads, dealings that are almost universally viewed as “anticompetitive.”

Here is Ida Tarbell’s description of how Rockefeller advanced ahead of other refiners — as described from their perspective (with which Tarbell agrees).

John Rockefeller might get his oil cheaper now and then . . . but he could not do it often. He might make close contracts for which they [other refiners] had neither the patience nor the stomach. He might have an unusual mechanical and practical genius in [Samuel Andrews]. But these things could not explain all. They believed they bought, on the whole, almost as cheaply as he, and they knew they made as good oil and with as great, or nearly as great, economy. He could sell at no better price than they. Where was his advantage? There was but one place where it could be, and that was in transportation. He must be getting better rates from the railroads than they were.41

This is an unforgivable evasion of Rockefeller’s vast productive superiority over his competitors in the late 1860s. It is possible that some of Rockefeller’s competitors believed this in the 1860s — as Rockefeller, to the extent possible, kept his business methods and the scope of his operations secret — but for Tarbell to write this in the 1900s is absurd.

Also absurd is the implication of the success-by-rebates story: that railroads arbitrarily gifted Rockefeller with rebates so enormous that he was able to bankrupt the competition. No seller of the era (or any era) gave Rockefeller or anyone else unnecessary or unprofitable discounts — certainly not the railroads, which were often struggling financially. Rockefeller earned his rebates by devising ways to make his oil cheaper to ship and by setting shippers in competition with one another so that he could negotiate them down to the best price.

The story of Standard’s first known rebate illustrates the true nature of the phenomenon. In this case, Standard extracted a big discount by dramatically lowering a railroad’s shipping costs.

When the Lake Shore railroad built a connection to Cleveland in 1867, Flagler went to the railroad’s vice president and offered to pay 35 cents a barrel for shipping crude from the Oil Regions to Cleveland, and $1.30 a barrel for kerosene sent to New York (usually for export). In exchange for these discounts, Flagler offered the Lake Shore a major incentive: guaranteed, large, regular shipments. This was a huge boon to the Lake Shore, and its vice-president James H. Devereux readily accepted the deal. As he explained:

[T]he then average time for a round trip from Cleveland to New York for a freight car was thirty days; to carry sixty cars per day would require 1,800 cars at an average cost of $500 each, making an investment of $900,000 necessary to do this business, as the ordinary freight business had to be done; but [research showed] that if sixty carloads could be assured with absolute regularity each and every day, the time for a round trip from Cleveland to New York and return could be reduced to ten days, . . . only six hundred cars would be necessary to do this business with an investment therefore of only $300,000.42

Praising the rebate as a boon to the Lake Shore, Devereux said: “Mr. Flagler’s proposition offered to the railroad company a larger measure of profit than would or could ensue from any business to be carried under the old arrangements. . . .”43
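
Devereux’s own figures can be checked with simple arithmetic. The short Python sketch below uses only the numbers quoted above (sixty carloads a day, $500 per car, and round trips of thirty versus ten days) to show how guaranteed, regular shipments cut the railroad’s required investment in cars by two-thirds.

```python
# Recompute the figures quoted by Devereux: cars required = carloads per day
# multiplied by round-trip days, and investment = cars required * cost per car.

CARLOADS_PER_DAY = 60
COST_PER_CAR = 500  # dollars, as quoted

def required_investment(round_trip_days: int) -> int:
    cars_required = CARLOADS_PER_DAY * round_trip_days
    return cars_required * COST_PER_CAR

print(required_investment(30))  # ordinary freight, 30-day round trip -> 900000 ($900,000)
print(required_investment(10))  # guaranteed regular shipments, 10-day trip -> 300000 ($300,000)
```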

Guaranteed, large shipments were a landmark, cost-cutting innovation in oil transportation — identical in nature to Rockefeller’s use of tank cars or his cost-cutting in barrel production. As economic and antitrust historian Dominick Armentano summarizes, Standard also “furnished loading facilities and discharging facilities at great cost; . . . it provided terminal facilities and exempted the railroads from liability for fire by carrying its own insurance.”44

In addition to lowering railroads’ costs to obtain better prices, Rockefeller’s firm was expert at setting railroads against one another and at cultivating alternative means of shipping, such as waterways, to further lower shipping costs. Having located his first refinery near the Erie Canal and having built up a large capital position, Rockefeller was able to take advantage of the lower rates of shipping by water; because water shipment was slower than shipping by land, it required a company to have, in addition to water access, the capital to handle the longer delay between paying for crude and being paid for kerosene. Of much of his competition, Rockefeller said: “The others had not the capital and could not let the oil remain so long in transit by lake and canal; it took twice as long that way. . . .”45

Rockefeller’s rebates, then, were earned cost savings of the sort that any market competitor — and any consumer — should perpetually seek. The extent to which others could not match the low prices he was able to charge in the 1870s as a result of his many cost-cutting measures, including this one, is simply an instance of productive inferiority; nothing about it is coercive or “anticompetitive.” To say that Rockefeller — by cutting his costs, thus enabling himself to sell profitably for lower prices and win over more customers — was rendering competitors “unfree” is like saying that Google is rendering its competitors unfree by building the most appealing search engine. To call Rockefeller’s actions “anticompetitive” is to say that “competition” consists in no one ever outperforming anyone else. Economic freedom does not mean the satisfaction of anyone’s arbitrary desires to succeed in any market regardless of ability or performance or consumer preferences; it means that everyone is free to produce and trade through voluntary exchange, by mutual consent. If one cannot compete in a certain field or industry, one is free to seek another job — but not to cripple those who are able to compete.

True economic competition — the kind of competition that made kerosene production far cheaper — is not a process in which businessmen are forced by the government to relinquish their advantages, to minimize their profits, to perform at the norm, never rising too far above the mean. Economic competition is a process in which businessmen are free to capitalize on their advantages, to maximize their profits, to perform at the peak of their abilities, to rise as high as their effort and skill take them.

Rockefeller’s meteoric rise and the business practices that made it possible — including his dealings with the railroads — epitomize the beauty of a free market. His story provides a clear demonstration of the kind of life-serving productivity that is the hallmark of laissez-faire competition.

The Missing Context of Standard’s Rise to Supremacy

The 1870s was a decade of gigantic growth for the Standard Oil Company. In 1870, it was refining fifteen hundred barrels per day — a huge amount for the time. By January 1871, it had achieved a 10 percent market share, making it the largest player in the industry. By 1873, it had one-third of the market share, was refining ten thousand barrels a day and had acquired twenty-one of the twenty-six other firms in Cleveland. By the end of the decade, it had achieved a 90 percent market share.

Such figures are used as ammunition by those who believe in the dangers of acquisitions and high market share. These critics believe that Standard’s growth and its ability to acquire so many companies so quickly “must have” come from some sort of “anticompetitive” misconduct — and they point to Standard Oil’s participation in two cartels during the early 1870s as evidence of Rockefeller’s market malice.

But the growing success of Standard did not flow from these attempted cartels — neither of which Standard initiated, and both of which failed miserably in very short order — but from the company’s enormous productive superiority to its competitors, and from the market conditions whose groundwork had been laid in the 1860s. Without understanding these conditions, one cannot understand Rockefeller’s exceptionally rapid rise.

Recall that in 1870 kerosene cost twenty-six cents a gallon, while three-fourths of the refining industry was losing money. A major cause of this was that refining capacity was at 12 million barrels a year, while there were only 5 million barrels to refine,46 a disparity that had an upward effect on the price of the crude that refiners purchased — and a downward effect on the price of the refined oil they sold. On November 8, 1871, a writer for the Titusville Herald estimated that “at present rates the loss to the refiner, on the average, is seventy-five cents per barrel.”47 Rockefeller’s firm, which was engineered to drastically lower production costs, could profit with such prices; few other firms could.

Even if there had not been a major excess of refining capacity, most of the refiners in America would have been unable to survive without drastically transforming their businesses. Rockefeller had raised the industry bar, and was expanding; anyone who hoped to compete with him would have to run a refining operation of comparable scale and efficiency.

Still, the excess capacity exacerbated the trouble for the lesser refiners — many of whom further exacerbated their own trouble by refusing to close or sell their failing businesses. In 1870, the Pittsburgh Evening Chronicle described the “very discouraging” tendency of the industry to increase refining capacity “ad infinitum” even during difficult times.48 One projection in 1871 put the rate of expansion at four thousand barrels per day.49

Refiners hoped that the old prices would come back. But the harsh reality for those refiners was that they could return to profitability only if they could restructure their businesses as modern, technological enterprises with the economies of scale on the order of those achieved by Standard. This reality became increasingly apparent over the decade as prices dropped from 26 cents a gallon in 1870, to 22 cents in 1872, to 10 cents in 1874.50

The failing refiners were neither the first nor the last businesses to be in such a situation. And, like many before and after them, they tried to solve their problems via cartels: agreements among producers to artificially reduce their production in order to artificially raise their prices. Rockefeller, hoping for stability in prices and an end to the irrationality of others refining beyond their means, joined and supported two cartels. This move was disastrous — the worst of Rockefeller’s career.

Cartels are generally viewed as evil, destructive schemes because they are overt attempts by a group of businesses to increase revenues by raising consumers’ prices across an industry. In and of itself, however, seeking higher prices for one’s products is not evil; it is good. The problem with cartels is not that they seek higher profits, but that they shortsightedly attempt to generate them by non-productive means. So long as the economic freedom to offer competing or substitute products exists — as should be the case — such a scheme is bound to fail.

Cartels are more accurately viewed as ineffectual than evil. Cutting off supply in order to effect higher profits rewards those who do not participate in the scheme (as well as cheaters within the cartel) with the opportunity to sell more of their own products at inflated prices. And to attempt a cartel is to invite a boycott and long-term alienation from one’s customers. These truths were borne out by both the South Improvement Company (SIC) scheme and the Pittsburgh Plan.

The Pennsylvania Railroad and its infamous leader, Tom Scott, a master manipulator of the Pennsylvania legislature, initiated the South Improvement Company cartel. Railroads, like oil refiners, were struggling financially; they too had overbuilt given the market. Having less traffic than they had anticipated, they sought to solve the problem by charging above-market prices. Here is the essence of their plan: The railroads would more than double the rates for everyone outside the cartel, including oil producers, to either bring all refiners into the SIC or drive them out of business. In turn, SIC refiners, which could constitute virtually all the refiners on the market, would impose strict limits on their output in order to raise prices. It seemed to be a “win-win” plan: The railroads would get higher rates and more revenue, and SIC refiners would raise prices and start profiting again.

This whole scheme, however, was delusional. For one, it presumed that the oil producers would accept catastrophic rate increases. They did not.

The oil producers — who were also the railroads’ consumers and the refineries’ suppliers — retaliated by placing an embargo on refineries associated with the South Improvement Company. The proposed rate increases were so dramatic and arbitrary that producers were strongly committed to the embargo — and it worked, cutting off Standard’s operations while benefiting those who did not participate. Writes Charles Morris in The Tycoons, “By early March, [1872] the Standard was effectively out of business, and up to 5,000 Cleveland refinery workers were laid off. . . . [In early April] the triumphant producers announced the end of their embargo.”51 The South Improvement Company never collected a rebate.

So much for Standard’s and the SIC’s “monopoly power.”

The other cartel in which Standard participated, the Pittsburgh Plan, was an agreement between oil producers and refiners to inflate their respective prices. While the 1870s began with high crude prices due to low crude supply and excess refinery capacity, a series of gushers soon reduced the price of crude to about $3.50 a barrel. Oil producers wanted to reverse this trend. Again, the idea was to artificially restrict production, raise prices, and reap the profits while competitors and consumers idly complied. The participants agreed that refiners would buy oil at the premium price of $5 a barrel (in some cases $4) so long as the producers substantially limited their production. Refineries, also, would limit production to raise their prices. The deal was wildly illogical; part of it stipulated that producers in the Oil Regions would simply cease new drilling for six months.

The plan dissolved in short order. Producers outside the cartel did not play their assigned role of forgoing profits; instead, they expanded their production to make money — as did cartel members once this started happening. Prices fell — indeed, they fell immediately to the market rate, $3.25; within two months, following more crude discoveries, prices fell again, down to two dollars.52

Historians try to outdo one another in denouncing the oil cartels as immoral. But given the desperation of many in the industry, and the relatively primitive understanding of how such arrangements pan out, it is more valuable to learn from the incidents, to gain a better understanding of the nature of cartels and other attempts to control markets under economic freedom.

Despite a huge percentage of refiners trying collectively to control market prices, they could not do so — because they had no means of forcing consumers to pay their prices or of forcing other producers not to compete by offering lower prices. The only thing they could control was their own production and whether it was the best it could be. Before the cartels, Rockefeller had relied solely on stellar production and efficiency to achieve great success; his participation in the cartels brought him failure and ire and was antithetical to his fundamental goal of expanding production.

In the wake of the South Improvement Company fiasco, Rockefeller claimed that he had never believed the cartel would work and that he had participated in it merely to show failing refiners that the only solution to their problems was to sell their businesses to him. Given his company’s prominent role in the SIC, this is likely overstated. But it is undeniable that while planning the cartel, Rockefeller began an aggressive policy of acquisition and improvement that continued throughout the decade.

From 10 to 90 in Eight Years

Rockefeller had several motives for acquiring competitors. First, other refineries had talent and assets that he wanted — including facilities that produced not only kerosene, but a full range of petroleum products. Second, he wanted to eliminate the industry’s excess refining capacity and its accompanying instability as soon as possible, rather than ride out the storm as the other ships sank.

Rockefeller made his first acquisition in December 1871. He proposed a buyout to Oliver Payne of Clark, Payne & Company, which was his biggest competitor in Cleveland (and which featured the same Clark family that initially had been involved in business with Rockefeller). Payne, suffering from the depressed industry conditions and without much hope of timely relief, was open to the possibility of selling. The decisive moment in the negotiations came when Rockefeller showed Payne Standard’s books. Payne was “thunderstruck” by how much profit the company was making under conditions in which others were flailing.53 Rockefeller bought the company for $400,000 (a price that included a “goodwill” premium of $150,000 over its then-current market value).

After acquiring Clark, Payne & Company, Rockefeller increased his company’s capitalization to $3.5 million and went on an acquisition spree — later dubbed “The Conquest of Cleveland.” By the end of March 1872, he had proposed to buy out all of the other refiners in Cleveland, and twenty-one of twenty-six had already agreed. During 1872, Rockefeller also bought several refineries in New York, a crucial port, at which point he owned 25 percent of the refining capacity there.54

According to many analysts, the rapidity of acquisition “proves” that Rockefeller was involved in devious activities. But it proves nothing of the sort. The basic reason so many sold was that Rockefeller’s propositions made economic sense; if the second leading refiner in Cleveland was “thunderstruck” by the superiority of Standard’s efficiency, imagine the relative economic positions of the smaller, even less efficient refiners.

Another common view is that the “threat” of the proposed South Improvement Company frightened Cleveland refiners into selling to Rockefeller. But, if anything, as has been shown, the SIC provided incentive for refineries to remain independent.

A better explanation of why so many sold to Rockefeller is that they were eager to be bought out; in fact, a problem later surfaced with frauds trying to set up new refineries just to be bought out by Rockefeller. Of course, it took only a handful of acquisition targets, resentful of a market that had superseded them, to make a “devastating exposé” and gain a place in the anti-capitalist canon.

What if a company Rockefeller wanted to buy was not willing to sell? Accounts differ, but one plausible version is that he gave the competitor “a good sweating” (an expression attributed to Flagler) by lowering prices to a point where Standard remained profitable but the competitor would go out of business quickly. This practice is labeled “predatory pricing” — but it is no such thing. If predatory pricing is taken to mean lowering one’s prices below cost to drive a competitor out of business — and then raising those prices to artificially high levels once the competitor has been eliminated — then Rockefeller did not engage in “predatory pricing,” at least not to any significant extent. If he had tried, he would have discovered that, like cartels, this form of attempting to profit through unproductive measures fails. In general, large companies that attempt to profit by this means find that they lose money at alarming rates, because they are selling more units at a loss than their “prey” is selling. If they do manage to destroy an existing company, they have weakened themselves in the process, thus providing an opportunity for more substantial, more able competitors to enter the market.
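
To see why sustained below-cost pricing punishes the dominant seller most, consider a minimal sketch with hypothetical numbers; the per-gallon loss and the daily volumes below are illustrative assumptions, not historical data. Losses scale with the number of units sold at a loss, so a firm selling nine times as much as its “prey” bleeds nine times as fast.

```python
# A hypothetical illustration of why selling below cost hurts the larger seller most:
# the loss is proportional to the number of units sold at a loss.
# All figures below are illustrative assumptions, not historical data.

LOSS_PER_GALLON = 0.02             # assumed loss per gallon sold below cost, in dollars
PREDATOR_GALLONS_PER_DAY = 90_000  # assumed volume for a firm with ~90 percent of sales
PREY_GALLONS_PER_DAY = 10_000      # assumed volume for a small competitor

predator_daily_loss = LOSS_PER_GALLON * PREDATOR_GALLONS_PER_DAY  # $1,800 per day
prey_daily_loss = LOSS_PER_GALLON * PREY_GALLONS_PER_DAY          # $200 per day

print(predator_daily_loss / prey_daily_loss)  # 9.0 -> the would-be predator loses 9x faster
```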

Nothing is inherently wrong — either economically or morally — with temporarily selling at a loss in order to eliminate a rickety competitor. And the phrase “predatory pricing” is a misnomer in any event, because no force is involved in the practice of selling at a loss. But Standard Oil did not need to employ such measures to make its acquisitions. The company was so superior in its efficiency and economies of scale that it could price its product at a level at which it could profit but its competitors could not.

A study by John McGee published in 1958 shows that Standard generally did not lower prices below cost and take a loss; rather, it opted for temporarily smaller gains to demonstrate to unsustainable competitors that they were, indeed, unsustainable and would do well to join Standard and thrive.55 Here is Rockefeller’s description of how competitors came to see the situation:

The point is, that after awhile, when the people, or, at least, the intelligent, saw that we were not crushing or oppressing anybody, they began to listen to our suggestion for a pleasing meeting at which we could quietly talk over conditions and show them the advantage of entering our organizations. One after another they joined us.56

Rockefeller used the Conquest of Cleveland to create the most impressive refining concern ever. He took twenty-four refineries and turned them into six state-of-the-art facilities, selling the unusable parts for scrap. These refineries constituted a “complete” refining operation, which produced not only kerosene but several profitable by-products. In 1873, they produced 10,000 barrels a day.57 At this rate, which would only grow, Rockefeller would create nationwide markets for paraffin wax, petroleum jelly, chewing gum, various medicinal products (later found to be of dubious value), fuel oil, and many other products.

In answering a question about Lloyd’s characterization of him, Rockefeller contrasted Standard with other refiners: “Here were these refiners, who bought crude oil, distilled it, purified it with sulphuric acid, and sold the kerosene. We did that, too; but we did fifty — yes, fifty — other things beside, and made a profit from each one.” And: “. . . every one of these articles I have named to you represents a separate industry founded on crude petroleum. And we made a good profit from each industry. Yet this ‘historian,’ Lloyd, cannot see that we did anything but make kerosene and get rebates and ‘oppress’ somebody.”58

In 1873, Rockefeller began vertically integrating the company to include the acquisition of gathering pipelines for crude oil. These pipelines connected new oil wells to transportation hubs. Managing these with its typical excellence, Standard made its stream of incoming oil more reliable and enabled drillers to quickly find a place to put newfound oil instead of letting it go to waste in an uncontrolled gusher.

Standard was no longer just a kerosene company; it was a full-fledged, integrated oil-refining giant. And, after the Conquest of Cleveland in 1873, Rockefeller, age thirty-three, was still just beginning.

Starting in 1874, Rockefeller focused on acquiring competitors in the rest of the country. He began, as he had in Cleveland, with the major players: Charles Pratt in New York; Atlantic Refining in Philadelphia; and Lockhart, Waring, and Frew in Pittsburgh. He bought out the largest refiners in the Oil Regions, including the refinery of a man named John Archbold, who later became president of Standard when Rockefeller retired.

Rockefeller’s operation was so superior to others in every facet — from its marketing efforts, to its access to supplies of crude, to its ability to generate and profitably sell dozens of by-products — that the acquisitions occurred with relative ease, even when he was acquiring his most sophisticated competitors. Charles Morris writes of buying out the Warden interests in Atlantic Refining: “Warden’s son recalled that his father was invited to examine the Standard’s books and was astonished at its profitability, just as Oliver Payne had been in Cleveland a few years before.”59

The most difficult acquisitions for Rockefeller were in Pennsylvania. The difficulties were not initiated by the refiners but by the Pennsylvania Railroad and its subsidiary, the Empire Transportation Company (ETC). ETC owned extensive gathering pipelines and tank cars in the region, and it attempted to freeze Standard out of the area and acquire a nationwide refining victory of its own by — of all things — lowering its prices and making transportation nearly free for its refiners. This attempt ended in disaster. Rockefeller, who had provided the Pennsylvania with two-thirds of its freight, first tried to convince the Pennsylvania’s Tom Scott to stop his scheme. When that failed, he stopped shipping on the railroad and redirected his domestic and international traffic elsewhere. The Pennsylvania Railroad started hemorrhaging money and, facing terrified shareholders, Scott not only ended the scheme, but he sold ETC to Standard, making Standard’s onloading and offloading transportation network that much more extensive and efficient.

At this point, Rockefeller had earned a 90 percent market share — a share far different in nature from what a 90 percent share would have meant in 1870. Rockefeller owned not a grab bag of mediocre operations, but an integrated, coordinated group of facilities in Cleveland, New York, Baltimore, and Pennsylvania, the likes of which had never been imagined. Near the end of the 1870s, he ran, to use the apt cliché, a well-oiled machine. Standard housed millions of barrels of crude in its storage facilities, transported that crude to its refineries by gathering line and tank car, extracted every ounce of value from that crude using its state-of-the-art refining technologies, and shipped the myriad resulting petroleum products to Standard’s export facilities in New York — where its marketing experts distributed Standard products to every nook and cranny of the world.

Rockefeller oversaw all of this in conjunction with a team of great business minds (many of whom had come to Standard through its acquisitions) that understood every facet of the domestic and international oil market and was always expanding and adjusting operations to meet demand.

It is important to note that as big as Standard was becoming, its leader’s obsession with efficiency remained unabated. Rockefeller had a rare ability to conceive and execute a grand vision for the future, while minding every detail of the present. A story told by Ron Chernow in Titan illustrates this well:

In the early 1870s, Rockefeller inspected a Standard plant in New York City that filled and sealed five-gallon tin cans of kerosene for export. After watching a machine solder caps to the cans, he asked the resident expert: “How many drops of solder do you use on each can?” “Forty,” the man replied. “Have you ever tried thirty-eight?” Rockefeller asked. “No? Would you mind having some sealed with thirty-eight and let me know?” When thirty-eight drops were applied, a small percentage of cans leaked — but none at thirty-nine. Hence, thirty-nine drops of solder became the new standard instituted at all Standard Oil refineries. “That one drop of solder,” said Rockefeller, still smiling in retirement, “saved $2,500 the first year; but the export business kept on increasing after that and doubled, quadrupled — became immensely greater than it was then; and the saving has gone steadily along, one drop on each can and has amounted since to many hundreds of thousands of dollars.”60

Rockefeller and his firm were as active-minded and vigilant as could be, but in the late 1870s one development in the industry took them by surprise: long-distance pipelines.

A group of entrepreneurs successfully started the Tidewater Company, which built the first long-distance pipeline. This posed an immediate threat to the railroads’ oil transportation revenue, because pipelines are a far more efficient, less expensive means of transporting oil. With pipelines of sufficient diameter and number, enormous amounts of oil can be shipped at relatively low cost twenty-four hours a day.

Initially, Rockefeller, the allegedly invincible “monopolist,” aided the railroads in fighting Tidewater (including by using commonly practiced political tactics that should have been beneath him), but he failed. Realizing the superiority of pipelines, he entered the pipeline business in full force himself, creating the National Transit Company.

Describing Rockefeller’s excellent pipeline practices, oil historian Robert L. Bradley Jr. writes:

Right-of-way was obtained by dollars, not legal force. Pipe was laid deep for permanence, and only the best equipment was used to minimize leakage. Storage records reflected “accuracy and integrity.” Innovative tank design reduced leakage and evaporation to benefit all parties. Fire-preventions reflected “systematic administration.” The pricing strategy was to prevent entry by keeping rates low. While these business successes may not have benefited certain competitors, they benefited customers and consumers of the final products.61

By 1879, Rockefeller was the consummate so-called “monopolist,” “controlling” some 90 percent of the refining market. According to antitrust theory, when one “controls” nearly an entire market, he can restrict output and force consumers to pay artificially high prices. Yet output had quadrupled from 1870 to 1880. And as for consumer prices, recall that in 1870 kerosene cost twenty-six cents per gallon and was bankrupting much of the industry; by 1880, Standard Oil was phenomenally profitable, and kerosene cost nine cents per gallon.62 Standard had revolutionized the method of producing refined oil, bringing about an explosion of productivity, profit, and improvement to human life. It had shrunk the cost of light by a factor of 30, thereby adding hours to the days of millions around the world. This is the story Henry Lloyd and Ida Tarbell should have told.

The 1880s and the Peril of the “Monopolist”

If antitrust theory were correct, Standard’s “control” of 90 percent of the oil refining market should have made the 1880s its easiest, least challenging decade — one in which it could coast, pick off competitor fleas with ease, and raise prices into the stratosphere.

In fact, the company struggled mightily in that decade to lower its prices even more — while facing its greatest competitive challenges yet (foreign and domestic), as well as a bedeviling technological problem.

In the mid-1880s, Standard executives, like many others in the industry, feared that the world would run out of oil for them to refine. As late as 1885, there were no significant, well-known oil deposits in America outside of northwest Pennsylvania, and those appeared to be drying up. In 1885, the state geologist of Pennsylvania declared that “the amazing exhibition of oil” for the past quarter century had been only “a temporary and vanishing phenomenon — one which young men will live to see come to its natural end.”63 Some executives at Standard even suggested, of all things, that Standard Oil exit the oil business.

Others did not feel this desperation but did wonder where new oil could possibly come from; Pennsylvania was the only known oil source in America, and prospecting technology was still primitive. In 1885, when top executive John Archbold was told of oil deposits in Oklahoma, he said that the chances of finding a large oil field there “are at least one hundred to one against it” and that if he was wrong, “I’ll drink every gallon produced west of the Mississippi!”64

Rockefeller, however, having seen expectations of an oil apocalypse defied again and again in different parts of Pennsylvania, not only remained in the refining business but also, in a crucial vertical integration involving enormous risk, took Standard Oil into the business of exploration and production.

Happily, by 1887, Standard’s new exploration and production division, along with other oil producers, found an abundant oil supply in Lima, Ohio. But there was a problem: The oil was virtually useless.

Not all crude oil is created equal — different kinds contain different fractions of potential petroleum products, as well as other elements that can make them harder or easier to refine. The oil discovered in Lima was the worst oil known to man. Its kerosene content was lower than that of Pennsylvania oil, and the kerosene that could be produced did not burn well, depositing large amounts of soot in any house in which it was burned. Worse, due to its high sulfur content, the oil emitted a skunk-like odor (it came to be called “skunk oil”). “Even touching this oil,” writes historian Burton Folsom, “meant a long, soapy bath or social ostracism.”65 Obviously, kerosene with even a whiff of skunk smell would not appeal to consumers seeking to light their homes — and no known process could remove the smell from the oil.

Rockefeller was undeterred. He proceeded to pump or purchase millions of barrels of the virtually useless oil, confident that with enough effort and science it would be possible to extract marketable kerosene and other products. (In the meantime, he was able to sell some as cheap fuel oil to railroads carrying cargo, for which the smell was not as prohibitive.)

As Rockefeller bought millions of barrels of oil at fifteen cents per barrel, his board, with whom he always collaborated, began to blanch. At one point, a showdown ensued between Rockefeller and Charles Pratt (the son of the great refiner), who said that they could no longer fund this costly experiment. Rockefeller calmly offered to risk $3 million of his own money, about $65 million in today’s dollars. Pratt acquiesced, but Rockefeller no doubt would have invested the money himself had it been necessary.

Standard had accumulated 40 million barrels of skunk oil when, in 1888, a breakthrough came. On October 13, Rockefeller’s team of scientists, led by Herman Frasch, a famous German chemist he had hired, announced that they had discovered a way to refine the oil.66 This was a landmark in the history of petroleum. Just as previous refiners had discovered how to transform ordinary crude oil from useless glop into black gold, so Standard Oil transformed crude skunk oil into odorless black gold.

At the outset of the 1880s, Standard Oil was known only as a refiner. Thanks to the Lima discovery, Standard would be the leader in crude oil production in the 1890s. In 1888, Standard was responsible for less than 1 percent of crude oil production; by 1891, that number had jumped to 25 percent.67

The triumph at Lima was crucial in providing Standard cheap oil during the late 1880s and the 1890s — which it needed in the face of new, unprecedented competitive challenges from foreign and domestic sources.

As was discovered in the late 1880s, large deposits of oil existed far beyond Pennsylvania and Ohio, most notably in Russia. Locals in Baku, Russia, had known for hundreds of years that some oil was there; in the 1880s, explorers from Russia and abroad discovered that there was a lot of it.

The road to this discovery was paved in the 1870s, when the czar opened up the then state-controlled region to free economic development, and small drillers and refiners got involved. Over time, these men realized that Russia contained oil deposits larger than any known American source — and that the oil was relatively easy to extract. Men from two families, the Nobels and the Rothschilds, having learned from Rockefeller’s example, started two soon-to-be formidable firms. Although these producers faced challenges of their own, they posed a huge challenge to Standard Oil on the international market — which accounted for most of Standard’s customers.

Domestic competitors did not stand still, either. We have already seen how the Tidewater Company challenged Standard in the realm of oil delivery — a challenge that Standard met with the National Transit Company subsidiary and an expansion resulting in three thousand miles of long-distance and gathering pipelines and 40 million barrels of storage capacity.68 But after Lima, Standard was also challenged in the realms of production and refining. The Lima discovery inspired the emergence of competitors who sought similar discoveries in Kansas, Oklahoma, Texas, and California. And these were not the shanty refinery “competitors” of decades past; they were large, vertically integrated, technologically advanced companies.

Rockefeller faced further competition from sources outside the oil industry. Any producer of any product competes not merely with those businesses selling the same type of product he does, but also with any seller of any product that serves a similar purpose and thus can be its substitute.

In 1878, a man entirely outside the oil industry invented a product that would transform the illumination industry. That man was Thomas Edison; his invention was the electric lightbulb. Although the oil market involved many more products than kerosene, kerosene was still its main product and illumination its primary purpose. Thus, as soon as the lightbulb was announced, the stock prices of publicly traded refiners plummeted. The lightbulb would become a cheaper, safer alternative to kerosene, just as kerosene had become a cheaper, safer alternative to whale oil. (Because of the efficiencies Standard had achieved with kerosene, however, it did take more than a decade for Edison and company to improve the lightbulb to the point that it was economically competitive with Rockefeller’s cheapest kerosene.)

Rockefeller’s basic response to these competitive challenges was to continue doing what he had been doing to make his company the world leader: He continued to make Standard as efficient as he could, and he kept a vigilant eye on changes in the market. During the 1880s and into the 1890s, Standard Oil, through its continuing productive achievement, remained dominant in an ever-growing market.

Contrary to the antitrust expectation, Rockefeller did not artificially restrict supply and dictate higher prices. He neither had nor sought such power. But he did have the power to be very profitable by producing an excellent product at low cost and by selling it at low prices. In 1880, kerosene cost 9.33 cents per gallon; in 1885, 8.13 cents; in 1890, 7.38 cents. As for the industry’s total output, it increased steadily throughout the late 1800s; for example, between 1890 and 1897 kerosene production increased 74 percent, lubricating oil production increased 82 percent, and wax production increased 84 percent.69

The fact that Standard Oil faced such stiff competition and was driven to expand output and lower prices even further exposes the myth of Rockefeller’s “control” of the market. Markets are not possessions that one can acquire or control. They are dynamic, evolving systems of voluntary association, in which competing producers have no ability to force customers to buy their product, nor any ability to prevent others from offering their customers superior substitutes. The expression “control a market share,” translated into reality, means simply that at a given time one has persuaded a given group of individuals to buy one’s product — a state of affairs that can quickly change if someone offers a superior substitute.

Standard Oil enjoyed high market share because it produced a highly desirable product and offered it at a price that the vast majority of people were willing to pay. If someone else had made cheaper kerosene or a better illuminant than kerosene, or if Rockefeller had lowered his standards or raised his prices significantly, his customers would have purchased their goods elsewhere. Such is the nature of the so-called “monopolist’s” control. And such is the nature of economic power.

Contrast this with the genuine coercive power commanded by governments — which can create real monopolies by granting certain companies exclusive rights to produce a certain type of product. For example, state governments long gave horse-and-buggy-driving teamsters a monopoly on the local transportation of crude, forbidding the construction of local pipelines — and they long gave railroads a monopoly on long-distance transportation, forbidding the construction of long-distance pipelines. Where Rockefeller’s competitors failed because they could not match his quality and prices, the railroads’ and teamsters’ competitors failed because the government forbade anyone else to offer a higher-quality, lower-priced alternative. If one wants an example of monopoly in the 19th century, this is it — and its lesson is this: Keep political power out of the markets.

People have long regarded Standard Oil’s ability to maintain a 90 percent market share for twenty years as evidence of coercive evil. But if one understands what it took to achieve and maintain that share, one can see that it is evidence only of Rockefeller’s productive virtue.

The Standard Oil Trust and the Science of Corporate Productivity

Standard’s success in the face of the tremendous competitive challenges of the 1880s was made possible by strategic decisions (such as the Lima venture), by continued improvement in the company’s operations, and by Rockefeller’s remarkable leadership.

In 1882, the Standard Oil Company became the Standard Oil Trust. As the company had grown across state lines, it needed a corporate structure that could enable it to function as a unified, national corporation. The Trust — officially combining disparate branches of Standard Oil under common ownership and control — was an ingenious way of achieving such integration. As Dominick Armentano explains:

Choosing an effective legal structure was proving particularly bothersome. Almost all states, including Ohio, did not permit chartered companies to hold the stock of firms incorporated in other states. Yet Standard, by 1880, effectively controlled fourteen different firms, and had a considerable stock interest in about twenty-five others, including the giant National Transit Company. How were these companies to be legally and efficiently managed? In addition, Pennsylvania had just unearthed (with the help of Standard’s competitors and some producers) an old state law that allowed a tax on the entire capital stock of any corporation doing any business within its borders; other states threatened to follow suit. Thus, a new organizational arrangement was mandatory to allow effective control of all owned properties and to escape confiscatory taxation without breaking the law.

Standard chose to resurrect an old common law arrangement known as the trust. In a trust, individuals pool their property and agree to have a trustee or trustee group manage that property in the interests of all the owners. Just as incorporation allows incorporators to pool their property and choose their directors and managers, trusts in the 1880s allowed the same convenience with entire corporate holdings. Thus, a trust was a modern holding company, but frequently without the formalities of legal incorporation and the necessity of any public disclosure.

The Standard Oil Trust was formed in 1882. . . . The forty-two stockholders of the thirty-nine companies associated with Standard agreed to tender their stock to nine designated trustees; in return, the ex-stockholders received twenty trustee certificates per share of stock tendered. The original Standard Oil Trust was capitalized at $70 million, and John D. Rockefeller himself held over 25 percent. Rockefeller, his brother William, Henry Flagler, John D. Archbold, and five others then managed Standard’s entire operations, setting up committees on transportation, export, manufacturing, lubricating, and other affairs to advise the executive committee.70

The Trust, often thought of as an economically destructive device, enabled Standard to achieve still greater productivity — every bit of which the company needed in order to face continuing challenges. Let us examine several important aspects of the Standard Oil Trust to appreciate how productively it functioned.

One cardinal aspect was specialization, the process of assigning employees to areas of special focus where they could concentrate their time and effort to become experts at one thing (rather than masters of none). The more Standard Oil grew, the more specialized Rockefeller made his divisions and employees. The Standard Oil Trust featured separate divisions and personnel for every aspect of the productive process — buying, transporting, refining, marketing — and for the different regions of the business. The company operated on the premise that there are always better ways of doing things, often involving machinery, and Rockefeller had an insatiable thirst for new ideas.

In particular, Standard pioneered and excelled at scientific research and development — the key to successes such as that at Lima. Rockefeller’s investment in Lima became spectacularly profitable and value-creating — but only because Rockefeller had the vision and courage to also invest, heavily, in scientists.

Most historians overlook Rockefeller’s advances in corporate science and focus exclusively on the discounts he received from railroads; this imbalance must be rectified. Today, we take R&D for granted as an inherent aspect of business, but it is not; someone pioneered it, and that someone was Rockefeller. Rockefeller pioneered both integrated, large-scale businesses and the investment of large amounts of capital in scientific research and technological application. As historian Burton Folsom notes:

When Frasch cracked the riddle of Lima crude, he was probably the only trained petroleum chemist in the United States. By the time Rockefeller retired, he had a test laboratory in every refinery and even one on the top floor of 26 Broadway. This was yet another way in which he converted Standard Oil into a prototype of the modern industrial organization, its progress assured by the steady application of science.71

Standard’s focus on science led to many other profitable breakthroughs — including the ability to “crack” crude for maximum gasoline. (“Cracking” changes the molecular structure of crude oil to increase the yield of a given fraction.) In science, as in many other areas, Standard’s internal specialization paid off. Just as specialization under the division of labor makes a society incomparably more productive than one in which each individual has to produce everything for himself, specialization under the division of labor within Standard Oil had similar results for the company.

The heart of Standard’s corporate management structure was its committee system. Its goal was to maximize individual autonomy and creativity, while ensuring that all elements of the company were integrated in the direction Rockefeller chose.

An executive committee comprising Rockefeller’s top associates was in charge of the general direction of the company. This committee oversaw and monitored various specialized subcommittees that dealt with all the different aspects of the business: manufacturing, transportation, purchasing, pipelines, export trade, and so on. And these subcommittees oversaw various subsidiaries in their line of business, giving them basic direction and enabling them to share and grow their knowledge. As Rockefeller expressed the value of this arrangement:

A company of men, for example, were specialists in manufacture. These were chosen experts, who had daily sessions and study of their problems, new as well as old, constantly arising. The benefit of their research, their study, was available for each of the different concerns whose shares were held by these trustees.72

These subsidiaries even competed with one another, circulating their performance figures and constantly striving to improve. As a result, every realm of Standard’s productive process got better and better.

Giving the various aspects of the company both independence and an integrated purpose was vital to Standard’s ability to take on ever more functions of the oil refining industry. A case in point is Standard’s entry into the business of distributing refined oil, a business that it had long left to middlemen.

Standard’s pre-integration approach to distribution was simply to pay three cents a gallon for existing, antiquated distribution methods. Middlemen would remove barrels of kerosene from trains, pile them onto a horse-drawn carriage, and make their rounds selling them to retailers. The efficiency of this process was comparable to the efficiency of transporting crude oil before the advent of tank cars: the capacity, cost-effectiveness, and safety of the arrangement were far lower than they could have been. So Standard invested in and utilized high-capacity tank wagons, delivering kerosene straight to customers in the precise quantities they wanted, cutting out both the middlemen and the barrels.

Taking a swath of the industry that had been the province of others for years and quickly revolutionizing it was a common practice at Standard, one made possible by the organizational system that achieved both autonomy and unity among the company’s employees.

Rockefeller’s management techniques attracted great minds to Standard, for the company gave them work that stimulated their intellect and excited their passions. Rockefeller recognized that nothing mattered more to his organization than talented, thinking men who could generate and execute new ideas. “Has anyone given you the law of these offices?” he asked a new executive. “No? It is this: nobody does anything if he can get anybody else to do it. . . . As soon as you can, get some one whom you can rely on, train him in the work, sit down, cock up your heels, and think out some way for the Standard Oil to make some money.”73

Of his ability to attract and coordinate talent, Rockefeller said: “It is chiefly to my confidence in men and my ability to inspire their confidence in me that I owe my success in life.”74 “I’ve never heard of his equal,” said Thomas Wheeler, one of his oil buyers, “in getting together a lot of the very best men in one team, and inspiring in each man to do his best for the enterprise.”75

A key trait enabled Rockefeller to bring out greatness in his employees: He communicated in every way he could the importance of the work they were doing — its importance to him, to Standard Oil, and therefore, as he always stressed, to the advancement of human life. He paid higher-than-market wages to attract the best employees. He awarded shares in the company to employees, explaining: “I would have every man a capitalist, every man, woman and child. I would have everyone save his earnings, not squander it; own the industries, own the railroads, own the telegraph lines.”76 He called Standard Oil a “family” — and he meant it. Wheeler describes how Rockefeller

sometimes joined the men in their work, and urged them on. At 6:30 in the morning, there was Rockefeller, this billionaire, rolling barrels, piling hoops, and wheeling out shavings. In the oil fields, there was Rockefeller trying to fit 9 barrels on an 8 barrel wagon. He came to know the oil business inside and out and won the respect of his workers. Praise he would give, rebukes he would avoid. “Very well kept, very well indeed,” said Rockefeller to an accountant about his books before pointing out a minor error, and leaving.77

Rockefeller commanded a huge amount of respect, but he did not need to demand it. Burton Folsom tells a story that illustrates how unconcerned Rockefeller was about deference:

One time a new accountant moved into a room where Rockefeller kept an exercise machine. Not knowing what Rockefeller looked like, the accountant saw him, and ordered him to remove it. “Alright,” said Rockefeller, and he politely took it away. Later when the embarrassed accountant found out whom he had chided he expected to be fired. But Rockefeller never mentioned it.78

Everyone in the family was valued, but none more than his leading thinkers, the top managers:

Rockefeller treated his top managers as conquering heroes and gave them praise, rest, and comfort. He knew that good ideas were almost priceless. They were the foundation for the future of Standard Oil. To one of his oil buyers Rockefeller wrote, “I trust you will not worry about the business. Your health is more important to you and to us than the business.” Long vacations at full pay were Rockefeller’s antidotes for his weary leaders. After Johnson M. Camden consolidated the West Virginia/Maryland refiner for Standard Oil, Rockefeller said, “Please feel at perfect liberty to break away 3, 6, 12, 15 months, more or less. Your salary will not cease, however long you decide to remain away from business.” But neither Camden nor the others rested long. They were too anxious to succeed at what they were doing and to please the leader who trusted them so.79

Would you want to work for such a manager? That so many did, and were inspired to be their best, was no doubt indispensable to making the company as innovative and efficient as it was. It is no wonder, then, that many people intimately familiar with Rockefeller’s life and work believe that, in the words of one of his biographers, “Rockefeller must be accepted as the greatest business administrator America has produced.”80 Without such innovative administration, surely no oil company would have achieved anywhere near Standard’s degree of success.

Lessons Not Learned

Given the tenuous, voluntary nature of Standard’s market share, it was inevitable that at some point the market would expand beyond its reach. Given the explosion of possibilities in the oil industry — the rise of the automobile and the need for gasoline, the discovery of oil in all corners of the planet — not even Standard Oil could be the best at everything. It certainly did not help that Rockefeller became progressively less involved in the company’s affairs starting in the 1890s.

The fact that Standard was bound to lose market share did not prevent it from growing. It could and did continue to grow, while others grew, too. Its market percentage shrank, even as its market grew — and changed.

Between 1899 and 1914, the market for kerosene shrank with the rise and continuous improvement of Edison’s lightbulb and with the spread of the automobile. Kerosene dropped from 58 to 25 percent of refined products, whereas gasoline rose from 15 to 48 percent. The age of kerosene, which Standard had dominated, was over.

In the early 1900s, many more competitors came on the scene, some of whom remain household names: Associated Oil and Gas, Texaco, Gulf, Sun Oil, and Union Oil, to name a few. Whereas the number of refineries had once shrunk due to a glut of inefficient ones, new demand across a wide variety of locations, along with better business organization and better technology, led to growth in the number of separate refineries — from 125 in 1908 to 147 in 1911.

Between 1898 and 1906, Standard’s oil production increased, but its market share of oil production declined from 34 to 11 percent. Similarly, in the realm of refining, Standard’s market share declined, while its volume increased steadily from 39 million barrels in 1892 to 99 million in 1911.81

By the early 1900s, Standard Oil had provided the world with an illustration of the magnificent productive achievements that are made possible by economic freedom. It had shown that when companies are free to produce and trade as they choose, to sell to as many willing customers as they can, a man or a company of extraordinary ability can make staggering contributions to human life — in this case, lighting up the world, fueling transportation, and pioneering corporate structures that would make every other industry more productive in the decades to come. And, with the emergence of highly profitable competitors in the early 1900s, the notion that Standard “controlled” the market should have been scrapped once and for all.

Unfortunately, blinded by bad ideas and bad motives, the most prominent reporters on Rockefeller and his company did not see this illustration of the glory of laissez-faire — and did not depict it for others to see. Instead, they painted the false picture that has, to this day, tarnished a great man, a great company, and a great economic system.

In 1902, Ida Tarbell began publishing her History of the Standard Oil Company as a series of articles in McClure’s magazine. Meanwhile, Rockefeller critics in the press and in politics called for an end to this “menacing monopoly.” According to antitrust historian Dominick Armentano, “Between 1904 and 1906, at least twenty-one state antitrust suits were brought against Standard Oil subsidiaries in ten states. And on November 15, 1906, the federal government filed its Sherman Act case and petitioned for the dissolution of Standard Oil of New Jersey.”82

The intellectual and political groundwork for a breakup of Standard Oil — and for preventing potential future Standard Oils from reaching its degree of success — had been laid more than a decade earlier when, in 1890, the Sherman Antitrust Act became law. The act was a fundamental attack on economic freedom, grounded in the premise, as Chernow later put it, that “Free markets, if left completely to their own devices can wind up terribly unfree.” Freedom, on this premise, requires government force.

Consider the key clause of the Sherman Act: “Every contract, combination . . . or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal.”83 This explicitly denies businesses the freedom to associate with other businesses and with customers on terms of their choosing; it means that any voluntary arrangement deemed by the government to be in “restraint of trade” can be stopped and punished. And the standard story of Standard Oil gave (and continues to give) supporters of this law ample ammunition.

Thus it is not surprising that, in 1911, the U.S. Supreme Court ruled that Standard Oil had violated the Sherman Act — and broke up the company into thirty-four pieces. The only problem with the proceeding, most believed and still believe, is that it had not taken place many decades earlier, when Standard was “monopolizing” the market in the 1870s.

But having seen the benevolent, life-giving process that actually constituted this “monopolization,” we should feel intensely relieved that the Sherman Act was not a factor during Rockefeller’s rise. Had it been, his company would have been stunted in its infancy. The original interpretation of the Sherman Act regarded any combination or merger as a “restraint of trade” and thus illegal. Recall that Rockefeller’s investments in science and his ability to hire diverse minds and to deliver the cheapest, highest-quality petroleum products to people across the nation depended on Standard being a national corporation — and for that, given the legal framework at the time, the Trust was necessary. And under today’s interpretation of antitrust law, a company “controlling” more than 30 percent of the market is often considered “anticompetitive” and thus criminal. Standard had a 30 percent market share in the early 1870s, when it had achieved only a fraction of what it would later achieve. Where would we be today if the young genius from Cleveland had had his vision quashed in his youth? How much would corporate efficiency, research and development, and effective management have suffered, not just in the petroleum industry, but in all of American industry? And, most importantly, how unjust would that have been to a man who wanted nothing more than to earn a living by producing kerosene and gasoline as cheaply and plentifully as possible?

All men — including exceptional men such as Rockefeller — have a right to take their enterprises as far as their vision and effort will take them. To throttle an individual because he is a superlative producer who supplies an abundance of life-serving goods to people eager to pay for them is to assault the central requirement of human life: the virtue of productivity.

It is time to bury the myth of Rockefeller the “robber baron” and to replace it with the truth about this paragon of production. And it is time to repeal the assault on such men that is antitrust law and replace it with the full legal recognition of individual rights.

About The Author

Alex Epstein

Alex Epstein was a writer and a fellow on staff at the Ayn Rand Institute between 2004 and 2011.

Endnotes

1 David Freeman Hawke, ed., The William O. Inglis Interview with John D. Rockefeller, 1917–1920 (Westport, CT: Meckler Publishing, 1984), microfiche.

2 Henry Demarest Lloyd, “The Story of a Great Monopoly,” Atlantic Monthly, March 1881, http://www.theatlantic.com/doc/188103/monopoly.

3 Felicity Barringer, “Journalism’s Greatest Hits: Two Lists of a Century’s Top Stories,” New York Times, March 1, 1999, http://query.nytimes.com/gst/fullpage.html?res=9407E3D6123CF932A35750C0A96F958260.

4 Ida Tarbell, The History of the Standard Oil Company (New York, NY: McClure, Phillips & Company, 1904), pp. 36–37.

5 Ibid., p. 37. It is worth noting that in this legendary work of journalism, Tarbell fails to meet basic standards of disclosure by not stating that her father was an oilman in the Oil Regions—and that her brother was the treasurer of the Pure Oil Company, a competitor of Standard Oil that began in the late 1800s.

6 Howard Zinn, A People’s History of the United States: 1492–Present (New York, NY: Perennial Classics, 2003), p. 256.

7 Paul Krugman, “Lifting the Fog From Antitrust,” Fortune, June 8, 1998, http://money.cnn.com/magazines/fortune/fortune_archive/1998/06/08/243492/index.htm.

8 Ron Chernow, Titan (New York, NY: Vintage, 2004), p. 297.

9 http://www.justice.gov/atr/public/speeches/0114.htm.

10 Robert L. Bradley, Oil, Gas, and Government: The U.S. Experience, vol. 1 (London: Rowman & Littlefield, 1996), pp. 1073–74.

11 Harold F. Williamson and Arnold R. Daum, The American Petroleum Industry 1859–1899: The Age of Illumination (Evanston, IL: Northwestern University Press, 1963), p. 212.

12 Charles R. Morris, The Tycoons: How Andrew Carnegie, John D. Rockefeller, Jay Gould, and J.P. Morgan Invented the American Supereconomy (New York, NY: Times Books, 2005), p. 80.

13 Dominick Armentano, Antitrust and Monopoly: Anatomy of a Policy Failure (Oakland, CA: The Independent Institute, 1999), p. 55.

14 Williamson and Daum, American Petroleum Industry, p. 320.

15 Armentano, Antitrust and Monopoly, p. 56.

16 Ibid.

17 Daniel Yergin, The Prize: The Quest for Oil, Money & Power (New York: Free Press, 1992), p. 50.

18 Williamson and Daum, American Petroleum Industry, pp. 203–12.

19 Ibid., p. 211.

20 The Encyclopedia Americana: A Library of Universal Knowledge (New York, NY: Encyclopedia Americana Corp., 1920), p. 479.

21 Morris, Tycoons, p. 82.

22 Folsom, Myth of the Robber Barons, p. 85.

23 Williamson and Daum, American Petroleum Industry, p. 292.

24 Morris, Tycoons, p. 18.

25 Bradley, Oil, Gas, and Government, p. 1070.

26 Williamson and Daum, American Petroleum Industry, p. 212.

27 Morris, Tycoons, p. 18.

28 Chernow, Titan, p. 46.

29 Ibid., pp. 44–45.

30 Burton W. Folsom, The Myth of the Robber Barons (Herndon, VA: Young America’s Foundation, 1996), p. 86.

31 Allan Nevins, Study in Power: John D. Rockefeller, Industrialist and Philanthropist, vol. 1 (New York: Charles Scribner’s Sons, 1953), p. 71.

32 Williamson and Daum, American Petroleum Industry, p. 285.

33 John D. Rockefeller, Random Reminiscences of Men and Events (New York, NY: Doubleday, Page & Company, 1909), pp. 87–88.

34 Williamson and Daum, American Petroleum Industry, p. 225.

35 Armentano, Antitrust and Monopoly, p. 58.

36 Chernow, Titan, pp. 86–87.

37 Williamson and Daum, American Petroleum Industry, p. 302.

38 Ibid., p. 305.

39 Ibid., p. 303.

40 Hawke, William O. Inglis Interview.

41 Tarbell, History of the Standard Oil Company, pp. 44–46.

42 Ibid., p. 306.

43 Ibid.

44 Armentano, Antitrust and Monopoly, p. 62.

45 Hawke, William O. Inglis Interview.

46 Morris, Tycoons, p. 82.

47 Nevins, Study in Power, p. 96.

48 Williamson and Daum, American Petroleum Industry, p. 307.

49 Ibid.

50 Armentano, Antitrust and Monopoly, p. 59.

51 Morris, Tycoons, p. 85.

52 Williamson and Daum, American Petroleum Industry, p. 359.

53 Morris, Tycoons, p. 84.

54 Bradley, Oil, Gas, and Government, p. 1071.

55 John S. McGee, “Predatory Price Cutting: The Standard Oil (N. J.) Case,” Journal of Law and Economics, vol. 1 (October 1958), pp. 137–69.

56 Hawke, William O. Inglis Interview.

57 Williamson and Daum, American Petroleum Industry, p. 367.

58 Hawke, William O. Inglis Interview.

59 Morris, Tycoons, p. 151.

60 Chernow, Titan, pp. 188–89.

61 Bradley, Oil, Gas, and Government, pp. 615–16.

62 Armentano, Antitrust and Monopoly, p. 66.

63 “Natural Gas and Coal Gas,” New York Times, December 28, 1886, http://query.nytimes.com/gst/abstract.html?res=9E0DE7DA133FE533A2575BC2A9649D94679FD7CF.

64 Chernow, Titan, p. 283.

65 Folsom, Myth of the Robber Barons, p. 89.

66 Ibid., p. 90.

67 Bradley, Oil, Gas, and Government, pp. 1073–74.

68 Ibid., p. 615.

69 Armentano, Antitrust and Monopoly, p. 66.

70 Ibid., pp. 64–65.

71 Chernow, Titan, p. 287.

72 Ibid., p. 229.

73 Ibid., p. 179.

74 Michael D. Mumford, Pathways to Outstanding Leadership (Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 2006), p. 165.

75 Folsom, Myth of the Robber Barons, p. 94.

76 Chernow, Titan, p. 227.

77 Folsom, Myth of the Robber Barons, p. 93.

78 Ibid.

79 Ibid., p. 94.

80 Chernow, Titan, p. 228.

81 Armentano, Antitrust and Monopoly, p. 67.

82 Ibid., p. 68.

83 Sherman Antitrust Act, http://www.justice.gov/atr/foia/divisionmanual/ch2.htm.

The Morality of Moneylending: A Short History

by Yaron Brook | Fall 2007 | The Objective Standard

It seems that every generation has its Shylock—a despised financier blamed for the economic problems of his day. A couple of decades ago it was Michael Milken and his “junk” bonds. Today it is the mortgage bankers who, over the past few years, lent billions of dollars to home buyers—hundreds of thousands of whom are now delinquent or in default on their loans. This “sub-prime mortgage crisis” is negatively affecting the broader financial markets and the economy as a whole. The villains, we are told, are not the borrowers—who took out loans they could not afford to pay back—but the moneylenders—who either deceived the borrowers or should have known better than to make the loans in the first place. And, we are told, the way to prevent such problems in the future is to clamp down on moneylenders and their industries; thus, investigations, criminal prosecutions, and heavier regulations on bankers are in order.

Of course, government policy for decades has been to encourage lenders to provide mortgage loans to lower-income families, and when mortgage brokers have refused to make such loans, they have been accused of “discrimination.” But now that many borrowers are in a bind, politicians are seeking to lash and leash the lenders.

This treatment of moneylenders is unjust but not new. For millennia they have been the primary scapegoats for practically every economic problem. They have been derided by philosophers and condemned to hell by religious authorities; their property has been confiscated to compensate their “victims”; they have been humiliated, framed, jailed, and butchered. From Jewish pogroms where the main purpose was to destroy the records of debt, to the vilification of the House of Rothschild, to the jailing of American financiers—moneylenders have been targets of philosophers, theologians, journalists, economists, playwrights, legislators, and the masses.

Major thinkers throughout history—Plato, Aristotle, Thomas Aquinas, Adam Smith, Karl Marx, and John Maynard Keynes, to name just a few—considered moneylending, at least under certain conditions, to be a major vice. Dante, Shakespeare, Dickens, Dostoyevsky, and modern and popular novelists depict moneylenders as villains.

Today, anti-globalization demonstrators carry signs that read “abolish usury” or “abolish interest.” Although these protestors are typically leftists—opponents of capitalism and anything associated with it—their contempt for moneylending is shared by others, including radical Christians and Muslims who regard charging interest on loans as a violation of God’s law and thus as immoral.

Moneylending has been and is condemned by practically everyone. But what exactly is being condemned here? What is moneylending or usury? And what are its consequences?

Although the term “usury” is widely taken to mean “excessive interest” (which is never defined) or illegal interest, the actual definition of the term is, as the Oxford English Dictionary specifies: “The fact or practice of lending money at interest.” This is the sense in which I use the term throughout this essay.

Usury is a financial transaction in which person A lends person B a sum of money for a fixed period of time with the agreement that it will be returned with interest. The practice enables people without money and people with money to mutually benefit from the wealth of the latter. The borrower is able to use money that he would otherwise not be able to use, in exchange for paying the lender an agreed-upon premium in addition to the principal amount of the loan. Not only do both interested parties benefit from such an exchange; countless people who are not involved in the trade often benefit too—by means of access to the goods and services made possible by the exchange.

Usury enables levels of life-serving commerce and industry that otherwise would be impossible. Consider a few historical examples. Moneylenders funded grain shipments in ancient Athens and the first trade between the Christians in Europe and the Saracens of the East. They backed the new merchants of Italy and, later, of Holland and England. They supported Spain’s exploration of the New World, and funded gold and silver mining operations. They made possible the successful colonization of America. They fueled the Industrial Revolution, supplying the necessary capital to the new entrepreneurs in England, the United States, and Europe. And, in the late 20th century, moneylenders provided billions of dollars to finance the computer, telecommunications, and biotechnology industries.

By taking risks and investing their capital in what they thought would make them the most money, moneylenders and other financiers made possible whole industries—such as those of steel, railroads, automobiles, air travel, air conditioning, and medical devices. Without capital, often provided through usury, such life-enhancing industries would not exist—and homeownership would be impossible for all but the wealthiest people.

Moneylending is the lifeblood of industrial-technological society. When the practice and its practitioners are condemned, they are condemned for furthering and enhancing man’s life on earth.

Given moneylenders’ enormous contribution to human well-being, why have they been so loathed throughout history, and why do they continue to be distrusted and mistreated today? What explains the universal hostility toward one of humanity’s greatest benefactors? And what is required to replace this hostility with the gratitude that is the moneylenders’ moral due?

As we will see, hostility toward usury stems from two interrelated sources: certain economic views and certain ethical views. Economically, from the beginning of Western thought, usury was regarded as unproductive—as the taking of something for nothing. Ethically, the practice was condemned as immoral—as unjust, exploitative, against biblical law, selfish. The history of usury is a history of confusions, discoveries, and evasions concerning the economic and moral status of the practice. Until usury is recognized as both economically productive and ethically praiseworthy—as both practical and moral—moneylenders will continue to be condemned as villains rather than heralded as the heroes they in fact are.

Our brief history begins with Aristotle’s view on the subject.

Aristotle

The practice of lending money at interest was met with hostility as far back as ancient Greece, and even Aristotle (384–322 b.c.) believed the practice to be unnatural and unjust. In the first book of Politics he writes:

The most hated sort [of moneymaking], and with the greatest reason, is usury, which makes a gain out of money itself, and not from the natural use of it. For money was intended to be used in exchange, but not to increase at interest. And this term Usury which means the birth of money from money, is applied to the breeding of money, because the offspring resembles the parent. Wherefore of all modes of making money this is the most unnatural.1

Aristotle believed that charging interest was immoral because money is not productive. If you allow someone to use your orchard, he argued, the orchard bears fruit every year—it is productive—and from this product the person can pay you rent. But money, Aristotle thought, is merely a medium of exchange. When you loan someone money, he receives no value over and above the money itself. The money does not create more money—it is barren. On this view, an exchange of $100 today for $100 plus $10 in interest a year from now is unjust, because the lender thereby receives more than he gave, and what he gave could not have brought about the 10 percent increase. Making money from money, according to Aristotle, is “unnatural” because money, unlike an orchard, cannot produce additional value.

Aristotle studied under Plato and accepted some of his teacher’s false ideas. One such idea that Aristotle appears to have accepted is the notion that every good has some intrinsic value—a value independent of and apart from human purposes. On this view, $100 will be worth $100 a year from now and can be worth only $100 to anyone, at any time, for any purpose. Aristotle either rejected or failed to consider the idea that loaned money loses value to the lender over time as his use of it is postponed, or the idea that money can be invested in economic activity and thereby create wealth. In short, Aristotle had no conception of the productive role of money or of the moneylender. (Given the relative simplicity of the Greek economy, he may have had insufficient evidence from which to conclude otherwise.) Consequently, he regarded usury as unproductive, unnatural, and therefore unjust.
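
The first of these two ideas, that postponing the use of money has a cost to the lender, can be stated as a simple present-value calculation. (This is only an illustrative sketch in modern notation, not a formulation found in Aristotle or in the sources discussed in this essay; the 10 percent rate is carried over from the example above.) If a lender could otherwise put his money to productive use at a rate of return r, then a sum promised a year from now is worth less to him today than the same sum in hand:

\[
\text{PV} \;=\; \frac{\text{FV}}{1+r},
\qquad \text{e.g., with } r = 0.10:\quad \frac{\$110}{1.10} \;=\; \$100 .
\]

On this reckoning, the $10 of interest is not something for nothing; it compensates the lender for the year during which he forgoes the use and the investment of his $100.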

Note that Aristotle’s conclusion regarding the unjust nature of usury is derived from his view that the practice is unproductive: Since usury creates nothing but takes something—since the lender apparently is parasitic on the borrower—the practice is unnatural and immoral. It is important to realize that, on this theory, there is no dichotomy between the economically practical and the morally permissible; usury is regarded as immoral because it is regarded as impractical.

Aristotle’s economic and moral view of usury was reflected in ancient culture for a few hundred years, but moral condemnation of the practice became increasingly pronounced. The Greek writer Plutarch (46–127 a.d.), for example, in his essay “Against Running In Debt, Or Taking Up Money Upon Usury,” described usurers as “wretched,” “vulture-like,” and “barbarous.”2 In Roman culture, Seneca (ca. 4 b.c.–65 a.d.) condemned usury for the same reasons as Aristotle; Cato the Elder (234–149 b.c.) famously compared usury to murder;3 and Cicero (106–43 b.c.) wrote that “these profits are despicable which incur the hatred of men, such as those of . . . lenders of money on usury.”4

As hostile as the Greeks and Romans generally were toward usury, their hostility was based primarily on their economic view of the practice, which gave rise to and was integrated with their moral view of usury. The Christians, however, were another matter, and their position on usury would become the reigning position in Western thought up to the present day.

The Dark and Middle Ages

The historian William Manchester described the Dark and Middle Ages as

stark in every dimension. Famines and plague, culminating in the Black Death [which killed 1 in 4 people at its peak] and its recurring pandemics, repeatedly thinned the population. . . . Among the lost arts were bricklaying; in all of Germany, England, Holland and Scandinavia, virtually no stone buildings, except cathedrals, were raised for ten centuries. . . . Peasants labored harder, sweated more, and collapsed from exhaustion more often than their animals.5

During the Dark Ages, the concept of an economy had little meaning. Human society had reverted to a precivilized state, and the primary means of trade was barter. Money all but disappeared from European commerce for centuries. There was, of course, some trade and some lending, but most loans were made with goods, and the interest was charged in goods. These barter-based loans, primitive though they were, enabled people to survive the tough times that were inevitable in an agrarian society.6

Yet the church violently opposed even such subsistence-level lending.

During this period, the Bible was considered the basic source of knowledge and thus the final word on all matters of importance. For every substantive question and problem, scholars consulted scripture for answers—and the Bible clearly opposed usury. In the Old Testament, God says to the Jews: “[He that] Hath given forth upon usury, and hath taken increase: shall he then live? he shall not live . . . he shall surely die; his blood shall be upon him.”7 And:

Thou shalt not lend upon usury to thy brother; usury of money; usury of victuals; usury of anything that is lent upon usury.

Unto a stranger thou mayest lend upon usury; but unto thy brother thou shalt not lend upon usury, that the Lord thy God may bless thee in all that thou settest thine hand to in the land whither thou goest to possess it.8

In one breath, God forbade usury outright; in another, He forbade the Jews to engage in usury with other Jews but permitted them to make loans at interest to non-Jews.

Although the New Testament does not condemn usury explicitly, it makes clear that one’s moral duty is to help those in need, and thus to give to others one’s own money or goods without the expectation of anything in return—neither interest nor principal. As Luke plainly states, “lend, hoping for nothing again.”9 Jesus’ expulsion of the moneychangers from the temple is precisely a parable conveying the Christian notion that profit is evil, particularly profit generated by moneylending. Christian morality, the morality of divinely mandated altruism, expounds the virtue of self-sacrifice on behalf of the poor and the weak; it condemns self-interested actions, such as profiting—especially profiting from a seemingly exploitative and unproductive activity such as usury.

Thus, on scriptural and moral grounds, Christianity opposed usury from the beginning. And it constantly reinforced its opposition with legal restrictions. In 325 a.d., the Council of Nicaea banned the practice among clerics. Under Charlemagne (768–814 a.d.), the Church extended the prohibition to laymen, defining usury simply as a transaction in which more is asked than is given.10 In 1139, the Second Lateran Council in Rome denounced usury as a form of theft, and required restitution from those who practiced it. In the 12th and 13th centuries, strategies that concealed usury were also condemned. The Council of Vienne in 1311 declared that any person who dared claim that there was no sin in the practice of usury was to be punished as a heretic.

There was, however, a loophole in all of these pronouncements: the Bible’s double standard on usury. As we saw earlier, read one way, the Bible permits Jews to lend to non-Jews. This reading had positive consequences. For lengthy periods during the Dark and Middle Ages, both Church and civil authorities allowed Jews to practice usury. Many princes, who required substantial loans in order to pay bills and wage wars, allowed Jewish usurers in their states. Thus, European Jews, who had been barred from most professions and from ownership of land, found moneylending to be a profitable, albeit hazardous, profession.

Although Jews were legally permitted to lend to Christians—and although Christians saw some practical need to borrow from them and chose to do so—Christians resented this relationship. Jews appeared to be making money on the backs of Christians while engaging in an activity biblically prohibited to Christians on punishment of eternal damnation. Christians, accordingly, held these Jewish usurers in contempt. (Important roots of anti-Semitism lie in this biblically structured relationship.)

Opposition to Jewish usurers was often violent. In 1190, the Jews of York were massacred in an attack planned by members of the nobility who owed money to the Jews and sought to erase their debts through violence.11 During this and many other attacks on Jewish communities, accounting records were destroyed and Jews were murdered. As European historian Joseph Patrick Byrne reports:

“Money was the reason the Jews were killed, for had they been poor, and had not the lords of the land been indebted to them, they would not have been killed.”12 But the “lords” were not the only debtors: the working class and underclass apparently owed a great deal, and these violent pogroms gave them the opportunity to destroy records of debt as well as the creditors themselves.13

In 1290, largely as a result of antagonism generated from their moneylending, King Edward I expelled the Jews from England, and they would not return en masse until the 17th century.

From the Christian perspective, there were clearly problems with the biblical pronouncements on usury. How could it be that Jews were prohibited from lending to other Jews but were allowed to lend to Christians and other non-Jews? And how could it be that God permitted Jews to benefit from this practice but prohibited Christians from doing so? These questions perplexed the thinkers of the day. The “solution” offered by St. Jerome (ca. 347–420) was that it was wrong to charge interest to one’s brothers—and, to Christians, all other Christians were brothers—but it was fine to charge interest to one’s enemy. Usury was perceived as a weapon that weakened the borrower and strengthened the lender; so, if one loaned money at interest to one’s enemy, that enemy would suffer. This belief led Christians to the absurd practice of lending money to the Saracens—their enemies—during the Crusades.14

Like the Greeks and Romans, Christian thinkers viewed certain economic transactions as zero-sum phenomena, in which one party’s gain always entailed another’s loss. In the practice of usury, the lender seemed to grow richer without effort—so it had to be at the expense of the borrower, who became poorer. But the Christians’ economic hostility toward usury was grounded in and fueled by biblical pronouncements against the practice—and this made a substantial difference. The combination of economic and biblical strikes against usury—with an emphasis on the latter—led the Church to utterly vilify the usurer, who became a universal symbol for evil. Stories describing the moneylenders’ horrible deaths and horrific existence in Hell were common. One bishop put it concisely:

God created three types of men: peasants and other laborers to assure the subsistence of the others, knights to defend them, and clerics to govern them. But the devil created a fourth group, the usurers. They do not participate in men’s labors, and they will not be punished with men, but with the demons. For the amount of money they receive from usury corresponds to the amount of wood sent to Hell to burn them.15

Such was the attitude toward usury during the Dark and early Middle Ages. The practice was condemned primarily on biblical/moral grounds. In addition to the fact that the Bible explicitly forbade it, moneylending was recognized as self-serving. Not only did it involve profit; the profit was (allegedly) unearned and exploitative. Since the moneylender’s gain was assumed to be the borrower’s loss—and since the borrower was often poor—the moneylender was seen as profiting by exploiting the meek and was therefore regarded as evil.

Beginning in the 11th century, however, a conflicting economic reality became increasingly clear—and beginning in the 13th century, the resurgence of respect for observation and logic made that reality increasingly difficult to ignore.

Through trade with the Far East and exposure to the flourishing cultures and economies of North Africa and the Middle East, economic activity was increasing throughout Europe. As this activity created a greater demand for capital and for credit, moneylenders arose throughout Europe to fill the need—and as moneylenders filled the need, the economy grew even faster.

And Europeans were importing more than goods; they were also importing knowledge. They were discovering the Arabic numerical system, double-entry accounting, mathematics, science, and, most importantly, the works of Aristotle.

Aristotle’s ideas soon became the focus of attention in all of Europe’s learning centers, and his writings had a profound effect on the scholars of the time. No longer were young intellectuals satisfied by biblical references alone; they had discovered reason, and they sought to ground their ideas in it as well. They were, of course, still stifled by Christianity, because, although reason had been rediscovered, it was to remain the handmaiden of faith. Consequently, these intellectuals spent most of their time trying to use reason to justify Christian doctrine. But their burgeoning acceptance of reason, and their efforts to justify their ideas accordingly, would ultimately change the way intellectuals thought about everything—including usury.

Although Aristotle himself regarded usury as unjust, recall that he drew this conclusion from what he legitimately thought was evidence in support of it; in his limited economic experience, usury appeared to be unproductive. In contrast, the thinkers of this era were confronted with extensive use of moneylending all around them—which was accompanied by an ever-expanding economy—a fact that they could not honestly ignore. Thus, scholars set out to reconcile the matter rationally. On Aristotelian premises, if usury is indeed unjust and properly illegal, then there must be a logical argument in support of this position. And the ideas that usury is unproductive and that it necessarily consists in a rich lender exploiting a poor borrower were losing credibility.

Public opinion, which had always been against usury, now started to change as the benefits of credit and its relationship to economic growth became more evident. As support for usury increased, however, the Church punished transgressions more severely and grew desperate for theoretical justification for its position. If usury was to be banned, as the Bible commands, then this new world that had just discovered reason would require new, non-dogmatic explanations for why the apparently useful practice is wrong.

Over the next four hundred years, theologians and lawyers struggled to reconcile a rational approach to usury with Church dogma on the subject. They dusted off Aristotle’s argument from the barrenness of money and reasserted that the profit gained through the practice is unnatural and unjust. To this they added that usury entails an artificial separation between the ownership of goods and the use of those same goods, claiming that lending money is like asking two prices for wine—one price for receiving the wine and an additional price for drinking it—one price for its possession and another for its use. Just as this would be wrong with wine, they argued, so it is wrong with money: In the case of usury, the borrower in effect pays $100 for $100, plus another fee, $10, for the use of the money that he already paid for and thus already owns.16

In similar fashion, it was argued that usury generates for the lender profit from goods that no longer belong to him—that is, from goods now owned by the borrower.17 As one Scholastic put it: “[He] who gets fruit from that money, whether it be pieces of money or anything else, gets it from a thing which does not belong to him, and it is accordingly all the same as if he were to steal it.”18

Another argument against usury from the late Middle Ages went to a crucial aspect of the practice that heretofore had not been addressed: the issue of time. Thinkers of this period believed that time was a common good, that it belonged to no one in particular, that it was a gift from God. Thus, they saw usurers as attempting to defraud God.19 As the 12th-century English theologian Thomas of Chobham (1160–1233) wrote: “The usurer sells nothing to the borrower that belongs to him. He sells only time, which belongs to God. He can therefore not make a profit from selling someone else’s property.”20 Or as expressed in a 13th-century manuscript, “Every man stops working on holidays, but the oxen of usury work unceasingly and thus offend God and all the Saints; and, since usury is an endless sin, it should in like manner be endlessly punished.”21

Although the identification of the value of time and its relationship to interest was used here in an argument against usury, this point is actually a crucial aspect of the argument in defense of the practice. Indeed, interest is compensation for a delay in using one’s funds. It is compensation for the usurer’s time away from his money. And although recognition of an individual’s ownership of his own time was still centuries away, this early acknowledgment of the relationship of time and interest was a major milestone.

The Scholastics came to conclusions about usury similar to those reached by earlier Christian thinkers, but they sought to defend their views not only by reference to scripture, but also by reference to their observational understanding of the economics of the practice. The economic worth of usury—its productivity or unproductivity—became their central concern. The questions became: Is money barren? Does usury have a productive function? What are the facts?

This is the long arm of Aristotle at work. Having discovered Aristotle’s method of observation-based logic, the Scholastics began to focus on reality, and, to the extent that they did, they turned away from faith and away from the Bible. It would take hundreds of years for this perspective to develop fully, but the type of arguments made during the late Middle Ages were early contributions to this crucial development.

As virtuous as this new method was, however, the Scholastics were still coming to the conclusion that usury is unproductive and immoral, and it would not be until the 16th century and the Reformation that usury would be partially accepted by the Church and civil law. For the time being, usury remained forbidden—at least in theory.

Church officials, particularly from the 12th century on, frequently manipulated and selectively enforced the usury laws to bolster the financial power of the Church. When it wanted to keep its own borrowing cost low, the Church enforced the usury prohibition. At other times, the Church itself readily loaned money for interest. Monks were among the earliest moneylenders, offering carefully disguised interest-bearing loans throughout the Middle Ages.

The most common way to disguise loans—and the way in which banking began in Italy and grew to be a major business—was through money exchange. The wide variety of currencies made monetary exchange necessary but difficult, which led to certain merchants specializing in the field. With the rapid growth of international trade, these operations grew dramatically in scale, and merchants opened offices in cities all across Europe and the eastern Mediterranean. These merchants used the complexities associated with exchange of different currencies to hide loans and charge interest. For example, a loan might be made in one currency and returned in another months later in a different location—although the amount returned would be higher (i.e., would include an interest payment), this would be disguised by a new exchange rate. This is one of many mechanisms usurers and merchants invented to circumvent the restrictions. As one commentator notes, “the interest element in such dealings [was] normally . . . hidden by the nature of the transactions either in foreign exchange or as bills of exchange or, frequently, as both.”22 By such means, these merchants took deposits, loaned money, and made payments across borders, thus creating the beginnings of the modern banking system.

Although the merchant credit extended by these early banks was technically interest, and thus usury, both the papal and civic authorities permitted the practice, because the exchange service proved enormously valuable to both. In addition to financing all kinds of trade across vast distances for countless merchants, such lending also financed the Crusades for the Church and various wars for various kings.23 Everyone wanted what usury had to offer, yet no one understood exactly what that was. So while the Church continued to forbid usury and punish transgressors, it also actively engaged in the practice. What was seen as moral by the Church apparently was not seen as wholly practical by the Church, and opportunity became the mother of evasion.

The Church also engaged in opportunistic behavior when it came to restitution. Where so-called “victims” of usury were known, the Church provided them with restitution from the usurer. But in cases where the “victims” were not known, the Church still collected restitution, which it supposedly directed to “the poor” or other “pious purposes.” Clerics were sold licenses empowering them to procure such restitution, and, as a result, the number of usurers prosecuted where there was no identifiable “victim” was far greater than it otherwise would have been. The death of a wealthy merchant often provided the Church with windfall revenue. In the 13th century, the Pope laid claim to the assets of deceased usurers in England. He directed his agents to “inquire concerning living (and dead) usurers and the thing wrongfully acquired by this wicked usury . . . and . . . compel opponents by ecclesiastical censure.”24

Also of note, Church officials regularly ignored the usury of their important friends—such as the Florentine bankers of the Medici family—while demonizing Jewish moneylenders and others. The result was that the image of the merchant usurer was dichotomized into “two disparate figures who stood at opposite poles: the degraded manifest usurer-pawnbroker, as often as not a Jew; and the city father, arbiter of elegance, patron of the arts, devout philanthropist, the merchant prince [yet no less a usurer!].”25

In theory, the Church was staunchly opposed to usury; in practice, however, it was violating its own moral law in myriad ways. The gap between the idea of usury as immoral and the idea of usury as impractical continued to widen as the evidence for its practicality continued to grow. The Church would not budge on the moral status, but it selectively practiced the vice nonetheless.

This selective approach often correlated with the economic times. When the economy was doing well, the Church, and the civil authorities, often looked the other way and let the usurers play. In bad times, however, moneylenders, particularly those who were Jewish, became the scapegoats. (This pattern continues today with anti-interest sentiment exploding whenever there is an economic downturn.)

To facilitate the Church’s selective opposition to usury, and to avoid the stigma associated with the practice, religious and civil authorities created many loopholes in the prohibition. Sometime around 1220, a new term was coined for certain forms of usury: interest.26 Under circumstances where usury was legal, it would now be called the collecting of interest. In cases where the practice was illegal, it would continue to be called usury.27

The modern word “interest” derives from the Latin verb intereo, which means “to be lost.” Interest was considered compensation for a loss that a creditor had incurred through lending. Compensation for a loan was illegal if it was a gain or a profit, but if it was reimbursement for a loss or an expense it was permissible. Interest was, in a sense, “damages,” not profit. Therefore, interest was sometimes allowed, but usury never.

So, increasingly, moneylenders were allowed to charge interest as a penalty for delayed repayment of a loan, provided that the lender preferred repayment to the delay plus interest (i.e., provided that it was seen as a sacrifice). Loans were often structured in advance so that such delays were anticipated and priced, and so the prohibition on usury was avoided. Many known moneylenders and bankers, such as the Belgian Lombards, derived their profits from such penalties—often 100 percent of the loan value.28

Over time, the view of costs or damages for the lender was expanded, and the lender’s time and effort in making the loan were permitted as a reason for charging interest. It even became permissible on occasion for a lender to charge interest if he could show an obvious, profitable alternative use for the money. If, by lending money, the lender suffered from the inability to make a profit elsewhere, the interest was allowed as compensation for the potential loss. Indeed, according to some sources, even risk—economic risk—was viewed as worthy of compensation. Therefore, if there was risk that the debtor would not pay, interest charged in advance was permissible.29

These were major breakthroughs. Recognition of the economic need to calculate a venture’s risk in advance, and to be compensated in advance for that risk, was a giant step in the understanding of and justification for moneylending.

But despite all these breakthroughs and the fact that economic activity continued to grow during the later Middle Ages, the prohibition on usury was still selectively enforced. Usurers were often forced to pay restitution; many were driven to poverty or excommunicated; and some, especially Jewish moneylenders, were violently attacked and murdered. It was still a very high-risk profession.

Not only were usurers in danger on Earth; they were also threatened with the “Divine justice” that awaited them after death.30 They were considered the devil’s henchmen and were sure to go to Hell. It was common to hear stories of usurers going mad in old age out of fear of what awaited them in the afterlife.

The Italian poet Dante (1265–1321) placed usurers in the seventh circle of Hell, incorporating the traditional medieval punishment for usury, which was eternity with a heavy bag of money around one’s neck: “From each neck there hung an enormous purse, each marked with its own beast and its own colors like a coat of arms. On these their streaming eyes appeared to feast.”31 Usurers in Dante’s Hell are forever weighed down by their greed. Profits, Dante believed, should be the fruits of labor—and usury entailed no actual work. He believed that the deliberate, intellectual choice to engage in such an unnatural action as usury was the worst kind of sin.32

It is a wonder that anyone—let alone so many—defied the law and their faith to practice moneylending. In this sense, the usurers were truly heroic. By defying religion and taking risks—both financial and existential—they made their material lives better. They made money. And by doing so, they made possible economic growth the likes of which had never been seen before. It was thanks to a series of loans from local moneylenders that Gutenberg, for example, was able to commercialize his printing press.33 The early bankers enabled advances in commerce and industry throughout Europe, financing the Age of Exploration as well as the early seeds of technology that would ultimately lead to the Industrial Revolution.

By the end of the Middle Ages, although everyone still condemned usury, few could deny its practical value. Everyone “knew” that moneylending was ethically wrong, but everyone could also see that it was economically beneficial. Its moral status was divinely decreed and appeared to be supported by reason, yet merchants and businessmen experienced its practical benefits daily. The thinkers of the day could not explain this apparent dichotomy. And, in the centuries that followed, although man’s understanding of the economic value of usury would advance, his moral attitude toward the practice would remain one of contempt.

Renaissance and Reformation

The start of the 16th century brought about a commercial boom in Europe. It was the Golden Age of Exploration. Trade routes opened to the New World and expanded to the East, bringing unprecedented trade and wealth to Europe. To fund this trade, to supply credit for commerce and the beginnings of industry, banks were established throughout Europe. Genoese and German bankers funded Spanish and Portuguese exploration and the importation of New World gold and silver. Part of what made this financial activity possible was the new tolerance, in some cities, of usury.

The Italian city of Genoa, for example, had a relatively relaxed attitude toward usury, and moneylenders created many ways to circumvent the existing prohibitions. It was clear to the city’s leaders that the financial activities of its merchants were crucial to Genoa’s prosperity, and the local courts regularly turned a blind eye to the usurious activities of its merchants and bankers. Although the Church often complained about these activities, Genoa’s political importance prevented the Church from acting against the city.

The Catholic Church’s official view toward usury remained unchanged until the 19th century, but the Reformation—which occurred principally in northern Europe—brought about a mild acceptance of usury. (This is likely one reason why southern Europe, which was heavily Catholic, lagged behind the rest of Europe economically from the 17th century onward.) Martin Luther (1483–1546), a leader of the Reformation, believed that usury was inevitable and should be permitted to some extent by civil law. Luther believed in the separation of civil law and Christian ethics. This view, however, resulted not from a belief in the separation of state and religion, but from his belief that the world and man were too corrupt to be guided by Christianity. Christian ethics and the Old Testament commandments, he argued, are utopian dreams, unconnected with political or economic reality. He deemed usury unpreventable and thus a matter for the secular authorities, who should permit the practice and control it.

However, Luther still considered usury a grave sin, and in his later years wrote:

[T]here is on earth no greater enemy of man, after the Devil, than a gripe-money and usurer, for he wants to be God over all men. . . . And since we break on the wheel and behead highwaymen, murderers, and housebreakers, how much more ought we to break on the wheel and kill . . . hunt down, curse, and behead all usurers!34

In other words, usury should be allowed by civil authorities (as in Genoa) because it is inevitable (men will be men), but it should be condemned in the harshest terms by the moral authority. This is the moral-practical dichotomy in action, sanctioned by an extremely malevolent view of man and the universe.

John Calvin (1509–1564), another Reformation theologian, had a more lenient view than Luther. He rejected the notion that usury is actually banned in the Bible. Since Jews are allowed to charge interest to strangers, God cannot be against usury. It would be fantastic, Calvin thought, to imagine that by “strangers” God meant the enemies of the Jews; and it would be most unchristian to legalize discrimination. According to Calvin, usury does not always conflict with God’s law, so not all usurers need to be damned. There is a difference, he believed, between taking usury in the course of business and setting up business as a usurer. If a person collects interest on only one occasion, he is not a usurer. The crucial issue, Calvin thought, is the motive. If the motive is to help others, usury is good, but if the motive is personal profit, usury is evil.

Calvin claimed that the moral status of usury should be determined by the golden rule. It should be allowed only insofar as it does not run counter to Christian fairness and charity. Interest should never be charged to a man in urgent need, or to a poor man; the “welfare of the state” should always be considered. But it could be charged in cases where the borrower is wealthy and the interest will be used for Christian good. Thus he concluded that interest could neither be universally condemned nor universally permitted—but that, to protect the poor, a maximum rate should be set by law and never exceeded.35

Although the religious authorities did little to free usury from the taint of immorality, other thinkers were significantly furthering the economic understanding of the practice. In a book titled Treatise on Contracts and Usury, the French jurist Molinaeus (1500–1566) made important contributions toward liberating usury from Scholastic rationalism.36 By this time, there was sufficient evidence for a logical thinker to see the merits of moneylending. Against the argument that money is barren, Molinaeus observed that everyday experience of business life showed that the use of any considerable sum of money yields a service of importance. He argued, by reference to observation and logic, that money, assisted by human effort, does “bear fruit” in the form of new wealth; the money enables the borrower to create goods that he otherwise would not have been able to create. Just as Galileo would later apply Aristotle’s method of observation and logic in refuting Aristotle’s specific ideas in physics, so Molinaeus used Aristotle’s method in refuting Aristotle’s basic objection to usury. Unfortunately, like Galileo, Molinaeus was to suffer for his ideas: The Church forced him into exile and banned his book. Nevertheless, his ideas on usury spread throughout Europe and had a significant impact on future discussions of moneylending.37

The prevailing view that emerged in the late 16th century (and that, to a large extent, is still with us today) is that money is not barren and that usury plays a productive role in the economy. Usury, however, is unchristian; it is motivated by a desire for profit and can be used to exploit the poor. It can be practical, but it is not moral; therefore, it should be controlled by the state and subjected to regulation in order to restrain the rich and protect the poor.

This Christian view has influenced almost all attitudes about usury since. In a sense, Luther and Calvin are responsible for today’s so-called “capitalism.” They are responsible for the guilt many people feel from making money and the guilt that causes people to eagerly regulate the functions of capitalists. Moreover, the Protestants were the first to explicitly assert and sanction the moral-practical dichotomy—the idea that the moral and the practical are necessarily at odds. Because of original sin, the Protestants argued, men are incapable of being good, and thus concessions must be made in accordance with their wicked nature. Men must be permitted to some extent to engage in practical matters such as usury, even though such practices are immoral.

In spite of its horrific view of man, life, and reality, Luther and Calvin’s brand of Christianity allowed individuals who were not intimidated by Christian theology to practice moneylending to some extent without legal persecution. Although moneylenders were still limited by government constraints, the chains were loosened, and this enabled economic progress through the periodic establishment of legal rates of interest.

The first country to establish a legal rate of interest was England in 1545 during the reign of Henry VIII. The rate was set at 10 percent. However, seven years later it was repealed, and usury was again completely banned. In an argument in 1571 to reinstate the bill, Mr. Molley, a lawyer representing the business interests in London, said before the House of Commons:

Since to take reasonably, or so that both parties might do good, was not hurtful; . . . God did not so hate it, that he did utterly forbid it, but to the Jews amongst themselves only, for that he willed they should lend as Brethren together; for unto all others they were at large; and therefore to this day they are the greatest Usurers in the World. But be it, as indeed it is, evil, and that men are men, no Saints, to do all these things perfectly, uprightly and Brotherly; . . . and better may it be born to permit a little, than utterly to take away and prohibit Traffick; which hardly may be maintained generally without this.

But it may be said, it is contrary to the direct word of God, and therefore an ill Law; if it were to appoint men to take Usury, it were to be disliked; but the difference is great between that and permitting or allowing, or suffering a matter to be unpunished.38

Observe that while pleading for a bill permitting usury—on the grounds that it is necessary (“Traffick . . . hardly may be maintained generally without [it]”)—Molley concedes that it is evil. This is the moral-practical dichotomy stated openly and in black-and-white terms, and it illustrates the general attitude of the era. The practice was now widely accepted as practical but still regarded as immoral, and the thinkers of the day grappled with this new context.

One of England’s most significant 17th-century intellectuals, Francis Bacon (1561–1626), realized the benefits that moneylending offered to merchants and traders by providing them with capital. He also recognized the usurer’s value in providing liquidity to consumers and businesses. And, although Bacon believed that the moral ideal would be lending at 0 percent interest, as the Bible requires, he, like Luther, saw this as utopian and held that “it is better to mitigate usury by declaration than suffer it to rage by connivance.” Bacon therefore proposed two rates of usury: one set at a maximum of 5 percent and allowable to everyone; and a second rate, higher than 5 percent, allowable only to certain licensed persons and lent only to known merchants. The license was to be sold by the state for a fee.39

Again, interest and usury were pitted against morality. But Bacon saw moneylending as so important to commerce that the legal rate of interest had to offer sufficient incentive to attract lenders. Bacon recognized that a higher rate of interest is economically justified by the nature of certain loans.40

The economic debate had shifted from whether usury should be legal to whether and at what level government should set the interest rate (a debate that, of course, continues to this day, with the Fed setting certain interest rates). As one scholar put it: “The legal toleration of interest marked a revolutionary change in public opinion and gave a clear indication of the divorce of ethics from economics under the pressure of an expanding economic system.”41

In spite of this progress, artists continued to compare usurers to idle drones, spiders, and bloodsuckers, and playwrights personified the moneygrubbing usurers in characters such as Sir Giles Overreach, Messrs. Mammon, Lucre, Hoard, Gripe, and Bloodhound. Probably the greatest work of art vilifying the usurer was written during this period—The Merchant of Venice by Shakespeare (1564–1616), which immortalized the character of the evil Jewish usurer, Shylock.

In The Merchant of Venice, Bassanio, a poor nobleman, needs cash in order to court the heiress, Portia. Bassanio goes to a Jewish moneylender, Shylock, for a loan, bringing his wealthy friend, Antonio, to stand as surety for it. Shylock, who has suffered great rudeness from Antonio in business, demands as security for the loan not Antonio’s property, which he identifies as being at risk, but a pound of his flesh.42

The conflict between Shylock and Antonio incorporates all the elements of the arguments against usury. Antonio, the Christian, lends money and demands no interest. As Shylock describes him:

Shy. [Aside.] How like a fawning publican he looks!
I hate him for he is a Christian;
But more for that in low simplicity
He lends out money gratis, and brings down
The rate of usance here with us in Venice.
If I can catch him once upon the hip,
I will feed fat the ancient grudge I bear him.
He hates our sacred nation, and he rails,
Even there where merchants most do congregate,
On me, my bargains, and my well-won thrift,
Which he calls interest. Cursed be my tribe,
If I forgive him!43

Shylock takes usury. He is portrayed as the lowly, angry, vengeful, and greedy Jew. When his daughter elopes and takes her father’s money with her, he cries, “My daughter! O my ducats! O my daughter!”44—unsure for which he cares more.

It is clear that Shakespeare understood the issues involved in usury. Note Shylock’s (legitimate) hostility toward Antonio because Antonio loaned money without charging interest and thus brought down the market rate of interest in Venice. Even Aristotle’s “barren money” argument is present. Antonio, provoking Shylock, says:

If thou wilt lend this money, lend it not
As to thy friends,—for when did friendship take
A breed for barren metal of his friend?—
But lend it rather to thine enemy:
Who if he break, thou mayst with better face
Exact the penalty.45

Friends do not take “breed for barren metal” from friends; usury is something one takes only from an enemy.

Great art plays a crucial role in shaping popular attitudes, and Shakespeare’s depiction of Shylock, like Dante’s depiction of usurers, concretized for generations the dichotomous view of moneylending and thus helped entrench the alleged link between usury and evil. As late as 1600, medieval moral and economic theories were alive and well, even if they were increasingly out of step with the economic practice of the time.

The Enlightenment

During the Enlightenment, the European economy continued to grow, culminating with the Industrial Revolution. This growth involved increased activity in every sector of the economy. Banking houses were established to provide credit to a wide array of economic endeavors. The Baring Brothers and the House of Rothschild were just the largest of the many banks that would ultimately help fuel the Industrial Revolution, funding railroads, factories, ports, and industry in general.

Economic understanding of the important productive role of usury continued to improve over the ensuing centuries. Yet the moral evaluation of usury would change very little. The morality of altruism—the notion that self-sacrifice is moral and that self-interest is evil—was embraced and defended by many Enlightenment intellectuals and continued to hamper the acceptability of usury. After all, usury is a naked example of the pursuit of profit—which is patently self-interested. Further, it still seemed to the thinkers of the time that usury could be a zero-sum transaction—that a rich lender might profit at the expense of a poor borrower. Even a better conception of usury—as an exchange to mutual advantage, let alone the misconception of it as a zero-sum transaction—is anathema to altruism, which demands the opposite of personal profit: self-sacrifice for the sake of others.

In the mid-17th century, northern Europe was home to a new generation of scholars who recognized that usury served an essential economic purpose, and that it should be allowed freely. Three men made significant contributions in this regard.

Claudius Salmasius (1588–1653), a French scholar teaching in Holland, thoroughly refuted the claims about the “barrenness” of moneylending; he showed the important productive function of usury and even suggested that there should be more usurers, since competition between them would reduce the rate of interest. Other Dutch scholars agreed with him, and, partially as a result of this, Holland became especially tolerant of usury, making it legal at times. Consequently, the leading banks of the era were found in Holland, and it became the world’s commercial and financial center, the wealthiest state in Europe, and the envy of the world.46

Anne Robert Jacques Turgot (1727–1781), a French economist, was the first to identify usury’s connection to property rights. He argued that a creditor has the right to dispose of his money in any way he wishes and at whatever rate the market will bear, because it is his property. Turgot was also the first economist to fully understand that the passing of time changes the value of money. He saw the difference between the present value and the future value of money—concepts that are at the heart of any modern financial analysis. According to Turgot: “If . . . two gentlemen suppose that a sum of 1000 Francs and a promise of 1000 Francs possess exactly the same value, they put forward a still more absurd supposition; for if these two things were of equal value, why should any one borrow at all?”47 Turgot even repudiated the medieval notion that time belonged to God. Time, he argued, belongs to the individual who uses it and therefore could be sold.48
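Turgot’s point about present and future value can be illustrated with a simple discounting calculation. The following sketch is merely illustrative; the 5 percent rate and one-year term are assumptions chosen for the example, not figures drawn from Turgot:

    # A minimal sketch of present vs. future value (Python).
    # The 5 percent annual rate and one-year term are illustrative assumptions.
    def present_value(future_sum, annual_rate, years):
        """Value today of a sum to be received 'years' from now."""
        return future_sum / (1 + annual_rate) ** years

    promise = 1000.0  # francs promised one year from now
    print(round(present_value(promise, 0.05, 1), 2))  # about 952.38 francs today

At any positive rate of interest, 1000 francs in hand is worth more than a promise of 1000 francs a year hence, which is precisely why, as Turgot observed, anyone borrows at all.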

During the same period, the British philosopher Jeremy Bentham (1748–1832) wrote a treatise entitled Defence of Usury. Bentham argued that any restrictions on interest rates were economically harmful because they restricted an innovator’s ability to raise capital. Since innovative trades inherently involved high risk, they could only be funded at high interest rates. Limits on permissible interest rates, he argued, would kill innovation—the engine of growth. Correcting another medieval error, Bentham also showed that restrictive usury laws actually harm the borrowers. Such restrictions cause the credit markets to shrink while demand for credit remains the same or rises; thus, potential borrowers have to seek loans in an illegal market, where they must pay a premium for the additional risk of illegal trading.

Bentham’s most important contribution was his advocacy of contractual freedom:

My neighbours, being at liberty, have happened to concur among themselves in dealing at a certain rate of interest. I, who have money to lend, and Titus, who wants to borrow it of me, would be glad, the one of us to accept, the other to give, an interest somewhat higher than theirs: Why is the liberty they exercise to be made a pretence for depriving me and Titus of ours.49

This was perhaps the first attempt at a moral defense of usury.

Unfortunately, Bentham and his followers undercut this effort with their philosophy of utilitarianism, according to which rights, liberty, and therefore moneylending, were valuable only insofar as they increased “social utility”: “the greatest good for the greatest number.” Bentham famously dismissed individual rights—the idea that each person should be free to act on his own judgment—as “nonsense upon stilts.”50 He embraced the idea that the individual has a “duty” to serve the well-being of the collective, or, as he put it, the “general mass of felicity.”51 Thus, in addition to undercutting Turgot’s major achievement, Bentham also doomed the first effort at a moral defense of usury—which he himself had proposed.

An explicitly utilitarian attempt at a moral defense of usury was launched in 1774 in the anonymously published Letters on Usury and Interest. The goal of the book was to explain why usury should be accepted in England of the 18th century, and why this acceptance did not contradict the Church’s teachings. The ultimate reason, the author argued, is one of utility:

Here, then, is a sure and infallible rule to judge of the lawfulness of a practice. Is it useful to the State? Is it beneficial to the individuals that compose it? Either of these is sufficient to obtain a tolerance; but both together vest it with a character of justice and equity. . . . In fact, if we look into the laws of different nations concerning usury, we shall find that they are all formed on the principle of public utility. In those states where usury was found hurtful to society, it was prohibited. In those where it was neither hurtful nor very beneficial, it was tolerated. In those where it was useful, it was authorized. In ours, it is absolutely necessary.52

And:

[T]he practice of lending money to interest is in this nation, and under this constitution, beneficial to all degrees; therefore it is beneficial to society. I say in this nation; which, as long as it continues to be a commercial one, must be chiefly supported by interest; for interest is the soul of credit and credit is the soul of commerce.53

Although the utilitarian argument in defense of usury contains some economic truth, it is morally bankrupt. Utilitarian moral reasoning for the propriety of usury depends on the perceived benefits of the practice to the collective or the nation. But what happens, for example, when usury in the form of sub-prime mortgage loans creates distress for a significant number of people and financial turmoil in some markets? How can it be justified? Indeed, it cannot. The utilitarian argument collapses in the face of any such economic problem, leaving moneylenders exposed to the wrath of the public and to the whips and chains of politicians seeking a scapegoat for the crisis.

Although Salmasius, Turgot, and Bentham made significant progress in understanding the economic and political value of usury, not all their fellow intellectuals followed suit. The father of economics, Adam Smith (1723–1790), wrote: “As something can everywhere be made by the use of money, something ought everywhere to be paid for the use of it.”54 Simple and elegant. Yet, Smith also believed that the government must control the rate of interest. He believed that unfettered markets would create excessively high interest rates, which would hurt the economy—which, in turn, would harm society.55 Because Smith thought that society’s welfare was the only justification for usury, he held that the government must intervene to correct the errors of the “invisible hand.”

Although Smith was a great innovator in economics, philosophically, he was a follower. He accepted the common philosophical ideas of his time, including altruism, of which utilitarianism is a form. Like Bentham, he justified capitalism only through its social benefits. If his projections of what would come to pass in a fully free market amounted to a less-than-optimal solution for society, then he advocated government intervention. Government intervention is the logical outcome of any utilitarian defense of usury.

(Smith’s idea that there must be a “perfect” legal interest rate remains with us to this day. His notion of such a rate was that it should be slightly higher than the market rate—what he called the “golden mean.” The chairman of the Federal Reserve is today’s very visible hand, constantly searching for the “perfect” rate or “golden mean” by alternately establishing artificially low and artificially high rates.)

Following Bentham and Smith, all significant 19th-century economists—such as David Ricardo, Jean Baptiste Say, and John Stuart Mill—considered the economic importance of usury to be obvious and argued that interest rates should be determined by freely contracting individuals. These economists, followed later by the Austrians—especially Carl Menger, Eugen von Böhm-Bawerk, and Ludwig von Mises—developed sound theories of the productivity of interest and gained a significant economic understanding of its practical role. But the moral-practical dichotomy inherent in their altruistic, utilitarian, social justification for usury remained in play, and the practice continued to be morally condemned and thus heavily regulated if not outlawed.

The 19th and 20th Centuries

Despite their flaws, the thinkers of the Enlightenment had created sufficient economic understanding to fuel the Industrial Revolution throughout the 19th century. Economically and politically, facts and reason had triumphed over faith; a sense of individualism had taken hold; the practicality of the profit motive had become clear; and, relative to eras past, the West was thriving.

Morally and philosophically, however, big trouble was brewing. As capitalism neared a glorious maturity, a new, more consistent brand of altruism, created by Kant, Hegel, and their followers, was sweeping Europe. At the political-economic level, this movement manifested itself in the ideas of Karl Marx (1818–1883).

Marx, exploiting the errors of the Classical economists, professed the medieval notion that all production is a result of manual labor; but he also elaborated, claiming that laborers do not retain the wealth they create. The capitalists, he said, take advantage of their control over the means of production—secured to them by private property—and “loot” the laborers’ work. According to Marx, moneylending and other financial activities are not productive, but exploitative; moneylenders exert no effort, do no productive work, and yet reap the rewards of production through usury.56 As one 20th-century Marxist put it: “The major argument against usury is that labor constitutes the true source of wealth.”57 Marx adopted all the medieval clichés, including the notion that Jews are devious, conniving money-grubbers.

What is the profane basis of Judaism? Practical need, self-interest. What is the worldly cult of the Jew? Huckstering. What is his worldly god? Money.

Money is the jealous god of Israel, beside which no other god may exist. Money abases all the gods of mankind and changes them into commodities.58

Marx believed that the Jews were evil—not because of their religion, as others were clamoring at the time—but because they pursued their own selfish interests and sought to make money. And Marxists were not alone in their contempt for these qualities.

Artists who, like Marx, resented capitalists in general and moneylenders in particular dominated Western culture in the 19th century. In Dickens’s A Christmas Carol, we see the moneygrubbing Ebenezer Scrooge. In Dostoyevsky’s Crime and Punishment, the disgusting old lady whom Raskolnikov murders is a usurer. And in The Brothers Karamazov, Dostoyevsky writes:

It was known too that the young person had . . . been given to what is called “speculation,” and that she had shown marked abilities in the direction, so that many people began to say that she was no better than a Jew. It was not that she lent money on interest, but it was known, for instance, that she had for some time past, in partnership with old Karamazov, actually invested in the purchase of bad debts for a trifle, a tenth of their nominal value, and afterwards had made out of them ten times their value.59

In other words, she was what in the 1980s became known as a “vulture” capitalist buying up distressed debt.

Under Marx’s influential ideas, and given the culture-wide contempt for moneylenders, the great era of capitalism—of thriving banks and general financial success—was petering out. Popular sentiment concerning usury was reverting to a dark-ages type of hatred. Marx and company put the moneylenders back into Dante’s Inferno, and to this day they have not been able to escape.

The need for capital, however, would not be suppressed by the label “immoral.” People still sought to start businesses and purchase homes; thus usury was still seen as practical. Like the Church of the Middle Ages, people found themselves simultaneously condemning the practice and engaging in it.

Consequently, just as the term “interest” had been coined in the Middle Ages to facilitate the Church’s selective opposition to usury and to avoid the stigma associated with the practice, so modern man employed the term for the same purpose. The concept of moneylending was again split into two allegedly different concepts: the charging of “interest” and the practice of “usury.” Lending at “interest” came to designate lower-premium, lower-risk, less-greedy lending, while “usury” came to mean specifically higher-premium, higher-risk, more-greedy lending. This artificial division enabled the wealthier, more powerful, more influential people to freely engage in moneylending with the one hand, while continuing to condemn the practice with the other. Loans made to lower-risk, higher-income borrowers would be treated as morally acceptable, while those made to higher-risk, lower-income borrowers would remain morally contemptible. (The term “usury” is now almost universally taken to mean “excessive” or illegal premium on loans, while the term “interest” designates tolerable or legal premium.)

From the 19th century onward, in the United States and in most other countries, usury laws would restrict the rates of interest that could be charged on loans, and there would be an ongoing battle between businessmen and legislators over what those rates should be. These laws, too, are still with us.

As Bentham predicted, such laws harm not only lenders but also borrowers, who are driven into the shadows where they procure shady and often illegal loans in order to acquire the capital they need for their endeavors. And given the extra risk posed by potential legal complications for the lenders, these loans are sold at substantially higher interest rates than they would be if moneylending were fully legal and unregulated.

In the United States, demand for high-risk loans has always existed, and entrepreneurs have always arisen to service the demand for funds. They have been scorned, condemned to Hell, assaulted, jailed, and generally treated like the usurers of the Middle Ages—but they have relentlessly supplied the capital that has enabled Americans to achieve unprecedented levels of productiveness and prosperity.

The earliest known advertisement for a small-loan service in an American newspaper appeared in the Chicago Tribune in November 1869. By 1872, the industry was prospering. Loans collateralized by furniture, diamonds, warehouse receipts, houses, and pianos (called chattel loans) were available. The first salary loan office (offering loans made in advance of a paycheck) was opened by John Mulholland in Kansas City in 1893. Within fifteen years he had offices all across the country. The going rate on a chattel loan was 10 percent a month for loans under $50, and 5 to 7 percent a month for larger loans. Some loans were made at very high rates, occasionally over 100 percent a month.60
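To see what such monthly rates amount to in annual terms, one can compound them over twelve months. This is only an illustrative calculation under an assumed monthly-compounding convention; actual loan terms and repayment schedules varied:

    # Illustrative annualization of the monthly rates quoted above (Python).
    # Assumes monthly compounding over twelve months; historical terms varied.
    def annualized(monthly_rate, months=12):
        return (1 + monthly_rate) ** months - 1

    for m in (0.05, 0.07, 0.10):
        print(f"{m:.0%} per month -> about {annualized(m):.0%} per year")
    # 5% per month -> about 80% per year
    # 7% per month -> about 125% per year
    # 10% per month -> about 214% per year

Even a 5 percent monthly rate compounds to roughly 80 percent a year.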

Rates were so high because so many of these loans went into default. With high rates in play, the losses on loans in default could ordinarily be absorbed as a cost of doing business. In this respect, the 19th-century small-loan business was a precursor of the 20th-century “junk” bond business or the 21st-century sub-prime mortgage lender. However, unlike the “junk” bond salesman, who had recourse to the law in cases of default or bankruptcy, these small-loan men operated on the fringes of society—and often outside the law. Because of the social stigmatization and legal isolation of the creditors, legal recourse against a defaulting borrower was generally unavailable to a usurer. Yet these back-alley loans provided a valuable service—one for which there was great demand—and they enabled many people to start their own businesses or improve their lives in other ways.

Of course, whereas most of these borrowers paid off their loans and succeeded in their endeavors, many of them got into financial trouble—and the latter cases, not the former, were widely publicized. The moneylenders were blamed, and restrictions were multiplied and tightened.

In spite of all the restrictions, laws, and persecutions, the market found ways to continue. In 1910, Arthur Morris set up the first bank in America with the express purpose of providing small loans to individuals at interest rates based on the borrower’s “character and earning power.” In spite of the usury limit of 6 percent that existed in Virginia at the time, Morris’s bank found ways, as did usurers in the Middle Ages, to make loans at what appeared to be a 6 percent interest rate while the actual rates were much higher and more appropriate. For instance, a loan for $100 might be made as follows: A commission of 2 percent plus the 6 percent legal rate would be taken off the top in advance; thus the borrower would receive $92. Then he would repay the loan at $2 a week over fifty weeks. The effective compound annual interest rate on such a loan was in excess of 18 percent. And penalties would be assessed for any delinquent payments.61 Such camouflaged interest rates were a throwback to the Middle Ages, when bankers developed innovative ways to circumvent the restrictions on usury established by the Church. And, as in the Middle Ages, such lending became common as the demand for capital was widespread. Consequently, these banks multiplied and thrived—for a while.
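The “in excess of 18 percent” figure can be checked by finding the weekly rate at which fifty $2 payments are worth the $92 the borrower actually receives, and then compounding that rate over a year. The sketch below does this with a simple bisection search; the method is just one convenient way to solve for the rate, not a reflection of how such loans were actually priced:

    # Sketch (Python): effective annual rate on a Morris-style loan.
    # Borrower receives $92 up front and repays $2 a week for 50 weeks ($100 total).
    def annuity_pv(payment, weekly_rate, n):
        """Present value of n equal weekly payments at the given weekly rate."""
        return sum(payment / (1 + weekly_rate) ** t for t in range(1, n + 1))

    lo, hi = 0.0, 0.05
    for _ in range(100):  # bisection: find the rate that makes the payments worth $92
        mid = (lo + hi) / 2
        if annuity_pv(2.0, mid, 50) > 92.0:
            lo = mid  # payments still worth more than $92, so the rate is too low
        else:
            hi = mid

    weekly = (lo + hi) / 2
    print(f"weekly rate ~ {weekly:.4%}, annual rate ~ {(1 + weekly) ** 52 - 1:.1%}")
    # Prints a weekly rate of about 0.33% and an annual rate of about 18.8%,
    # consistent with the "in excess of 18 percent" figure above (before any
    # delinquency penalties).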

(Today’s credit card industry is the successor to such institutions. Credit card lenders charge high interest rates to high-risk customers, and penalties for delinquency. And borrowers use these loans for consumption as well as to start or fund small businesses. And, of course, the credit card industry is regularly attacked for its high rates of interest and its “exploitation” of customers. To this day, credit card interest rates are restricted by usury laws, and legislation attempting to further restrict these rates is periodically introduced.)

In 1913, in New York, a moneylender who issued loans to people who could not get them at conventional banks appeared before a court on the charge of usury. In the decision, the judge wrote:

You are one of the most contemptible usurers in your unspeakable business. The poor people must be protected from such sharks as you, and we must trust that your conviction and sentence will be a notice to you and all your kind that the courts have found a way to put a stop to usury. Men of your type are a curse to the community, and the money they gain is blood money.62

This ruling is indicative of the general attitude toward usurers at the time. The moral-practical dichotomy was alive and kicking, and the moneylenders were taking the blows. Although their practical value to the economy was now clear, their moral status as evil was still common “sense.” And the intellectuals of the day would only exacerbate the problem.

The most influential economist of the 20th century was John Maynard Keynes (1883–1946), whose ideas not only shaped the theoretical field of modern economics but also played a major role in shaping government policies in the United States and around the world. Although Keynes allegedly rejected Marx’s ideas, he shared Marx’s hatred of the profit motive and usury. He also agreed with Adam Smith that government must control interest rates; otherwise investment and thus society would suffer. And he revived the old Reformation idea that usury is a necessary evil:

When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. . . . But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to everyone that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.63

Although Keynes and other economists and intellectuals of the day recognized the need for usury, they universally condemned the practice and its practitioners as foul and unfair. Thus, despite widespread recognition of the fact that usury is a boon to the economy, when the Great Depression occurred in the United States, the moneylenders on Wall Street were blamed. As Franklin Delano Roosevelt put it:

The rulers of the exchange of mankind’s goods have failed, through their own stubbornness and their own incompetence, have admitted failure, and have abdicated. Practices of the unscrupulous money changers stand indicted in the court of public opinion, rejected by the hearts and minds of men . . . [We must] apply social values more noble than mere monetary profit.64

And so the “solution” to the problems of the Great Depression was greater government intervention throughout the economy—especially in the regulation of interest and the institutions that deal in it. After 1933, banks were restricted in all aspects of their activity: the interest rates they could pay their clients, the rates they could charge, and to whom they could lend. In 1934, the greatest bank in American history, J. P. Morgan, was broken up by the government into several companies. The massive regulations and coercive restructurings of the 1930s illustrate the continuing contempt for the practice of taking interest on loans and the continuing distrust of those—now mainly bankers—who engage in this activity. (We paid a dear price for those regulations with the savings and loan crisis of the 1970s and 1980s, which cost American taxpayers hundreds of billions of dollars.65 And we continue to pay the price of these regulations in higher taxes, greater financial costs, lost innovation, and stifled economic growth.)

The 21st Century

From ancient Greece and Rome, to the Dark and Middle Ages, to the Renaissance and Reformation, to the 19th and 20th centuries, moneylending has been morally condemned and legally restrained. Today, at the dawn of the 21st century, moneylending remains a pariah.

One of the latest victims of this moral antagonism is the business of providing payday loans. This highly popular and beneficial service has been branded with the scarlet letter “U”; consequently, despite the great demand for these loans, the practice has been relegated to the fringes of society and the edge of the law. These loans carry annualized interest rates as high as 1000 percent, because they are typically very short term (i.e., to be paid back on payday). By some estimates there are 25,000 payday stores across America, and it is “a $6 billion dollar industry serving 15 million people every month.”66 The institutions issuing these loans have found ways, just as banks always have, to circumvent state usury laws. Bank regulators have severely restricted the ability of community banks to offer payday loans or even to work with payday loan offices, more than 13 states have banned them altogether, and Congress is currently looking at ways to ban all payday loans.67 This is in spite of the fact that demand for these loans is soaring, that they serve a genuine economic need, and that they are of real value to low-income households. As the Wall Street Journal reports, “Georgia outlawed payday loans in 2004, and thousands of workers have since taken to traveling over the border to find payday stores in Tennessee, Florida and South Carolina. So the effect of the ban has been to increase consumer credit costs and inconvenience for Georgia consumers.”68
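The four-digit figures follow directly from the arithmetic of annualizing a flat fee on a two-week loan. The fee amounts below are hypothetical, chosen only to illustrate the calculation, not data on any particular lender:

    # Illustration (Python): how a flat fee on a two-week loan annualizes.
    # The $100 principal and the fee amounts are hypothetical examples.
    def simple_apr(fee, principal, term_days):
        """Annualize the fee as simple interest over a 365-day year."""
        return (fee / principal) * (365 / term_days)

    for fee in (15, 25, 40):  # dollars of fee per $100 borrowed for 14 days
        print(f"${fee} fee -> about {simple_apr(fee, 100, 14):.0%} APR")
    # $15 fee -> about 391% APR
    # $25 fee -> about 652% APR
    # $40 fee -> about 1043% APR

Quoted as an annual percentage rate, even a modest flat fee on a loan of a week or two thus reaches into the hundreds or thousands of percent, although the borrower’s actual dollar cost remains the fee itself.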

A story in the LA Weekly, titled “Shylock 2000”—ignoring the great demand for payday loans, ignoring the economic value they provide to countless borrowers, and ignoring the fact that the loans are made by mutual consent to mutual advantage—proceeded to describe horrific stories of borrowers who have gone bankrupt. The article concluded: “What’s astonishing about this story is that, 400 years after Shakespeare created the avaricious lender Shylock, such usury may be perfectly legal.”69

What is truly astonishing is that after centuries of moneylenders providing capital and opportunities to billions of willing people on mutually agreed upon terms, the image of these persistent businessmen has not advanced beyond that of Shylock.

The “Shylocks” du jour, of course, are the sub-prime mortgage lenders, with whom this article began. These lenders provided mortgages designed to enable low-income borrowers to buy homes. Because the default rate among these borrowers is relatively high, the loans are recognized as high-risk transactions and are sold at correspondingly high rates of interest. Although it is common knowledge that many of these loans are now in default, and although it is widely believed that the lenders are to blame for the situation, what is not well known is, as Paul Harvey would say, “the rest of the story.”

The tremendous growth in this industry is a direct consequence of government policy. Since the 1930s, the U.S. government has encouraged home ownership among all Americans—but especially among those in lower income brackets. To this end, the government created the Federal Home Loan Banks (which are exempt from state and local income taxes) to provide incentives for smaller banks to make mortgage loans to low-income Americans. Congress passed the Community Reinvestment Act, which requires banks to invest in their local communities, including by providing mortgage loans to people in low-income brackets. The government created Fannie Mae and Freddie Mac, both of which have a mandate to issue and guarantee mortgage loans to low-income borrowers.

In recent years, all these government schemes and more (e.g., artificially low interest rates orchestrated by the Fed) led to a frenzy of borrowing and lending. The bottom line is that the government has artificially mitigated lenders’ risk, and it has done so on the perverse, altruistic premise that “society” has a moral duty to increase home ownership among low-income Americans. The consequence of this folly has been a significant increase in delinquent loans and foreclosures, which has led to wider financial problems at banks and at other institutions that purchased the mortgages in the secondary markets.

Any objective evaluation of the facts would place the blame for this disaster on the government policies that caused it. But no—just as in the past, the lenders are being blamed and scapegoated.

Although some of these lenders clearly did take irrational risks on many of these loans, that should be their own problem, and they should have to suffer the consequences of their irrational actions—whether significant financial loss or bankruptcy. (The government most certainly should not bail them out.) However, without the perception of reduced risk provided by government meddling in the economy, far fewer lenders would have been so reckless.

Further, the number of people benefiting from sub-prime mortgage loans, which make it possible for many to purchase a home for the first time, is in the millions—and the vast majority of these borrowers are not delinquent or in default; rather, they are paying off their loans and enjoying their homes, a fact never mentioned by the media.

It should also be noted that, whereas the mortgage companies are blamed for all the defaulting loans, no blame is placed on the irresponsible borrowers who took upon themselves debt that they knew—or should have known—they could not handle.

After four hundred years of markets proving the incredible benefits generated by moneylending, intellectuals, journalists, and politicians still rail against lenders and their institutions. And, in spite of all the damage done by legal restrictions on interest, regulation of moneylenders, and government interference in financial markets, whenever there is an economic “crisis,” there is invariably a wave of demand for more of these controls, not less.

Moneylenders are still blamed for recessions; they are still accused of being greedy and of taking advantage of the poor; they are still portrayed on TV and in movies as slick, murderous villains; and they are still distrusted by almost everyone. (According to a recent poll, only 16 percent of Americans have substantial confidence in the American financial industry.)70 Thus, it should come as no surprise that the financial sector is the most regulated, most controlled industry in America today.

But what explains the ongoing antipathy toward, distrust of, and coercion against these bearers of capital and opportunity? What explains the modern anti-moneylending mentality? Why are moneylenders today held in essentially the same ill repute as they were in the Middle Ages?

The explanation for this lies in the fact that, fundamentally, 21st-century ethics is no different from the ethics of the Middle Ages.

All parties in the assault on usury share a common ethical root: altruism—belief in the notion that self-sacrifice is moral and self-interest is evil. This is the source of the problem. So long as self-interest is condemned, neither usury in particular, nor profit in general, can be seen as good—both will be seen as evil.

Moneylending cannot be defended by reference to its economic practicality alone. If moneylending is to be recognized as a fully legitimate practice and defended accordingly, then its defenders must discover and embrace a new code of ethics, one that upholds self-interest—and thus personal profit—as moral.

Conclusion

Although serious economists today uniformly recognize the economic benefits of charging interest or usury on loans, they rarely, if ever, attempt a philosophical or moral defense of this position. Today’s economists either reject philosophy completely or adopt the moral-practical split, accepting the notion that although usury is practical, it is either immoral or, at best, amoral.

Modern philosophers, for the most part, have no interest in the topic at all, partly because it requires them to deal with reality, and partly because they believe self-interest, capitalism, and everything they entail, to be evil. Today’s philosophers, almost to a man, accept self-sacrifice as the standard of morality and physical labor as the source of wealth. Thus, to the extent that they refer to moneylending at all, they consider it unquestionably unjust, and positions to the contrary unworthy of debate.

It is time to set the record straight.

Whereas Aristotle united productiveness with morality and thereby condemned usury as immoral based on his mistaken belief that the practice is unproductive—and whereas everyone since Aristotle (including contemporary economists and philosophers) has severed productiveness from morality and condemned usury on biblical or altruistic grounds as immoral (or at best amoral)—what is needed is a view that again unifies productiveness and morality, but that also sees usury as productive, and morality as the means to practical success on earth. What is needed is the economic knowledge of the last millennium combined with a new moral theory—one that upholds the morality of self-interest and thus the virtue of personal profit.

Let us first condense the key economic points; then we will turn to a brief indication of the morality of self-interest.

The crucial economic knowledge necessary to a proper defense of usury includes an understanding of why lenders charge interest on money—and why they would do so even in a risk-free, noninflationary environment. Lenders charge interest because their money has alternative uses—uses they temporarily forego by lending the money to borrowers. When a lender lends money, he is thereby unable to use that money toward some benefit or profit for himself. Had he not lent it, he could have spent it on consumer goods that he would have enjoyed, or he could have invested it in alternative moneymaking ventures. And the longer the term of the loan, the longer the lender must postpone his alternative use of the money. Thus interest is charged because the lender views the loan as a better, more profitable use of his money over the period of the loan than any of his alternative uses of the same funds over the same time; he estimates that, given the interest charged, the benefit to him is greater from making the loan than from any other use of his capital.71

A lender tries to calculate in advance the likelihood or unlikelihood that he will be repaid all his capital plus the interest. The less convinced he is that a loan will be repaid, the higher the interest rate he will charge. Higher rates compensate lenders for their willingness to take greater risks. The practice of charging interest is therefore an expression of the human ability to project the future, to plan, to analyze, to calculate risk, and to act in the face of uncertainty. In a word, it is an expression of man’s ability to reason. The better a lender’s thinking, the more money he will make.
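
As a rough illustration of this pricing logic, consider a simplified model (an assumption of this sketch, not a claim from the text) in which a defaulted loan repays nothing. The break-even rate is then the rate at which the expected repayment just matches what the lender could earn from a safe alternative use of the same funds; the numbers below are hypothetical.

```python
# Minimal sketch, hypothetical numbers: the break-even rate at which the expected
# repayment on a risky loan matches a safe alternative return.
# Simplifying assumption: a defaulted loan repays nothing at all.

def break_even_rate(repayment_probability, alternative_return):
    """Rate r satisfying: repayment_probability * (1 + r) = 1 + alternative_return."""
    return (1 + alternative_return) / repayment_probability - 1

print(f"{break_even_rate(0.98, 0.05):.1%}")  # ~7.1% when 98% of loans are expected to repay
print(f"{break_even_rate(0.85, 0.05):.1%}")  # ~23.5% when only 85% are expected to repay
```

Any rate above the break-even figure is where the lender’s profit begins; the less likely repayment appears, the higher that threshold climbs.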

Another economic principle that is essential to a proper defense of usury is recognition of the fact that moneylending is productive. This fact was made increasingly clear over the centuries, and today it is incontrovertible. By choosing to whom he will lend money, the moneylender determines which projects he will help bring into existence and which individuals he will provide with opportunities to improve the quality of their lives and his. Thus, lenders make themselves money by rewarding people for the virtues of innovation, productiveness, personal responsibility, and entrepreneurial talent; and they withhold their sanction, thus minimizing their losses, from people who exhibit signs of stagnation, laziness, irresponsibility, and inefficiency. The lender, in seeking profit, does not consider the well-being of society or of the borrower. Rather, he assesses his alternatives, evaluates the risk, and seeks the greatest return on his investment.

And, of course, lent money is not “barren”; it is fruitful: It enables borrowers to improve their lives or produce new goods or services. Nor is moneylending a zero-sum game: Both the borrower and the lender benefit from the exchange (as ultimately does everyone involved in the economy). The lender makes a profit, and the borrower gets to use capital—whether for consumption or investment purposes—that he otherwise would not be able to use.72

An understanding of these and other economic principles is necessary to defend the practice of usury. But such an understanding is not sufficient to defend the practice. From the brief history we have recounted, it is evident that all commentators on usury from the beginning of time have known that those who charge interest are self-interested, that their activity is, by its very nature, motivated by personal profit. Thus, in order to defend moneylenders, their institutions, and the kind of world they make possible, one must be armed with a moral code that recognizes rational self-interest and therefore the pursuit of profit as moral, and that consequently regards productivity as a virtue and upholds man’s right to his property and to his time.

There is such a morality: It is Ayn Rand’s Objectivist ethics, or rational egoism, and it is the missing link in the defense of usury (and capitalism in general).

According to rational egoism, man’s life—the life of each individual man—is the standard of moral value, and his reasoning mind is his basic means of living. Being moral, on this view, consists in thinking and producing the values on which one’s life and happiness depend—while leaving others free to think and act on their own judgment for their own sake. The Objectivist ethics holds that people should act rationally, in their own long-term best interest; that each person is the proper beneficiary of his own actions; that each person has a moral right to keep, use, and dispose of the product of his efforts; and that each individual is capable of thinking for himself, of producing values, and of deciding whether, with whom, and on what terms he will trade. It is a morality of self-interest, individual rights, and personal responsibility. And it is grounded in the fundamental fact of human nature: the fact that man’s basic means of living is his ability to reason.

Ayn Rand identified the principle that the greatest productive, life-serving power on earth is not human muscle but the human mind. Consequently, she regarded profit-seeking—the use of the mind to identify, produce, and trade life-serving values—as the essence of being moral.73

Ayn Rand’s Objectivist ethics is essential to the defense of moneylending. It provides the moral foundation without which economic arguments in defense of usury cannot prevail. It demonstrates why moneylending is supremely moral.

The Objectivist ethics frees moneylenders from the shackles of Dante’s inferno, enables them to brush off Shakespeare’s ridicule, and empowers them to take an irrefutable moral stand against persecution and regulation by the state. The day that this moral code becomes widely embraced will be the day that moneylenders—and every other producer of value—will be completely free to charge whatever rates their customers will pay and to reap the rewards righteously and proudly.

If this moral ideal were made a political reality, then, for the first time in history, moneylenders, bankers, and their institutions would be legally permitted and morally encouraged to work to their fullest potential, making profits by providing the lifeblood of capital to our economy. Given what these heroes have achieved while scorned and shackled, it is hard to imagine what their productive achievements would be if they were revered and freed.

About The Author

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Bibliography

Buchan, James. Frozen Desire: The Meaning of Money. New York: Farrar, Straus & Giroux, 1997.

Cohen, Edward E. Athenian Economy and Society. Princeton: Princeton University Press, 1992.

Davies, Glyn. A History of Money. Cardiff: University of Wales Press, 1994.

Ferguson, Niall. The Cash Nexus. New York: Basic Books, 2001.

Grant, James. Money of the Mind. New York: The Noonday Press, 1994.

Homer, Sidney. A History of Interest Rates. New Brunswick: Rutgers University Press, 1963.

Le Goff, Jacques. Your Money or Your Life. New York: Zone Books, 1988.

Lewis, Michael. The Money Culture. New York: W. W. Norton & Company, 1991.

Lockman, Vic. Money, Banking, and Usury (pamphlet). Grants Pass, OR: Westminster Teaching Materials, 1991.

Murray, J. B. C. The History of Usury. Philadelphia: J. B. Lippincott & Co., 1866.

Sobel, Robert. Dangerous Dreamers. New York: John Wiley & Sons, Inc., 1993.

von Böhm-Bawerk, Eugen. Capital and Interest: A Critical History of Economical Theory. Books I–III. William A. Smart, translator. London: Macmillan and Co., 1890.


Endnotes

Acknowledgments: The author would like to thank the following people for their assistance and comments on this article: Elan Journo, Onkar Ghate, Sean Green, John D. Lewis, John P. McCaskey, and Craig Biddle.

1 Aristotle, The Politics of Aristotle, translated by Benjamin Jowett (Oxford: Clarendon Press, 1885), book 1, chapter 10, p. 19.

2 Plutarch, Plutarch’s Morals, translated by William Watson Goodwin (Boston: Little, Brown, & Company, 1874), pp. 412–24.

3 Lewis H. Haney, History of Economic Thought (New York: The Macmillan Company, 1920), p. 71.

4 Anthony Trollope, Life of Cicero (Kessinger Publishing, 2004), p. 70.

5 William Manchester, A World Lit Only by Fire (Boston: Back Bay Books, 1993), pp. 5–6.

6 Glyn Davies, A History of Money: From Ancient Times to the Present Day (Cardiff: University of Wales Press, 1994), p. 117.

7 Ezekiel 18:13.

8 Deuteronomy 23:19–20.

9 Luke 6:35.

10 Jacques Le Goff, Your Money Or Your Life (New York: Zone Books, 1988), p. 26.

11 Edward Henry Palmer, A History of the Jewish Nation (London: Society for Promoting Christian Knowledge, 1874), pp. 253–54. And www.routledge-ny.com/ref/middleages/Jewish/England.pdf.

12 Byrne is here quoting Jacob Twinger of Königshofen, a 14th-century priest.

13 Joseph Patrick Byrne, The Black Death (Westport: Greenwood Press, 2004), p. 84.

14 Sidney Homer, A History of Interest Rates (New Brunswick: Rutgers University Press, 1963), p. 71.

15 Sermon by Jacques de Vitry, “Ad status” 59,14, quoted in Le Goff, Your Money Or Your Life, pp. 56–57.

16 See Thomas Aquinas, Summa Theologica, part II, section II, question 78, article 1.

17 Ibid.

18 Frank Wilson Blackmar, Economics (New York: The Macmillan Company, 1907), p. 178.

19 Le Goff, Your Money Or Your Life, pp. 33–45.

20 Jeremy Rifkin, The European Dream (Cambridge: Polity, 2004), p. 105.

21 Le Goff, Your Money Or Your Life, p. 30.

22 Davies, A History of Money, p. 154.

23 Ibid., pp. 146–74.

24 Robert Burton, Sacred Trust (Oxford: Oxford University Press, 1996), p. 118.

25 Ibid., pp. 118–20.

26 Homer, A History of Interest Rates, p. 73.

27 As Blackstone’s Commentaries on the Laws of England puts it: “When money is lent on a contract to receive not only the principal sum again, but also an increase by way of compensation for the use, the increase is called interest by those who think it lawful, and usury by those who do not.” p. 1336.

28 Homer, A History of Interest Rates, pp. 72–74.

29 Le Goff, Your Money Or Your Life, p. 74.

30 Ibid., pp. 47–64.

31 Dante Alighieri, The Inferno, Canto XVII, lines 51–54.

32 Dorothy M. DiOrio, “Dante’s Condemnation of Usury,” in Re: Artes Liberales V, no. 1, 1978, pp. 17–25.

33 Davies, A History of Money, pp. 177–78.

34 Paul M. Johnson, A History of the Jews (New York: HarperCollins, 1988), p. 242.

35 Eugen von Böhm-Bawerk, Capital and Interest: A Critical History of Economical Theory (London: Macmillan and Co., 1890), translated by William A. Smart, book I, chapter III.

36 Charles Dumoulin (Latinized as Molinaeus), Treatise on Contracts and Usury (1546).

37 von Böhm-Bawerk, Capital and Interest, book I, chapter III.

38 Sir Simonds d’Ewes, “Journal of the House of Commons: April 1571,” in The Journals of all the Parliaments during the reign of Queen Elizabeth (London: John Starkey, 1682), pp. 155–80. Online: http://www.british-history.ac.uk/report.asp?compid=43684.

39 Francis Bacon, “Of Usury,” in Bacon’s Essays (London: Macmillan and Co., 1892), p. 109.

40 Davies, A History of Money, p. 222.

41 Ibid., p. 222, emphasis added.

42 James Buchan, Frozen Desire (New York: Farrar, Straus & Giroux, 1997), p. 87 (synopsis of the play).

43 William Shakespeare, The Merchant of Venice, Act 1, Scene 2.

44 Ibid., Act 3, Scene 2.

45 Ibid., Act 1, Scene 3.

46 von Böhm-Bawerk, Capital and Interest, book I, chapter III.

47 Ibid., book I, p. 56.

48 Ibid., book I, chapter IV.

49 Jeremy Bentham, A Defence of Usury (Philadelphia: Mathew Carey, 1787), p. 10.

50 Jeremy Bentham, The Works of Jeremy Bentham, edited by John Bowring (Edinburgh: W. Tait; London: Simpkin, Marshall, & Co., 1843), p. 501.

51 Ibid., p. 493.

52 Anonymous, Letters on Usury and Interest (London: J. P. Coghlan, 1774).

53 Ibid.

54 Adam Smith, The Wealth of Nations (New York: Penguin Classics, 1986), p. 456.

55 Ibid.

56 For a thorough rebuttal of Marx’s view, see von Böhm-Bawerk, Capital and Interest, book I, chapter XII.

57 Gabriel Le Bras, quoted in Le Goff, Your Money Or Your Life, p. 43.

58 Johnson, A History of the Jews, p. 351.

59 Fyodor M. Dostoevsky, The Brothers Karamazov, translated by Constance Garnett (Spark Publishing, 2004), p. 316.

60 James Grant, Money of the Mind (New York: Noonday Press, 1994), p. 79.

61 Ibid., pp. 91–95.

62 Ibid., p. 83.

63 John Maynard Keynes, “Economic Possibilities for our Grandchildren,” in Essays in Persuasion (New York: W. W. Norton & Company, 1963), pp. 359, 362. Online: http://www.econ.yale.edu/smith/econ116a/keynes1.pdf.

64 Franklin D. Roosevelt, First Inaugural Address, March 4, 1933, http://www.historytools.org/sources/froosevelt1st.html.

65 To understand the link between 1930s regulations and the S&L crisis, see Edward J. Kane, The S&L Insurance Mess: How Did it Happen? (Washington, D.C.: The Urban Institute Press, 1989), and Richard M. Salsman, The Collapse of Deposit Insurance—and the Case for Abolition (Great Barrington, MA: American Institute for Economic Research, 1993).

66 “Mayday for Payday Loans,” Wall Street Journal, April 2, 2007, http://online.wsj.com/article/SB117546964173756271.html.

67 “U.S. Moves Against Payday Loans, Which Critics Charge Are Usurious,” Wall Street Journal, January 4, 2002, http://online.wsj.com/article/SB1010098721429807840.html.

68 “Mayday for Payday Loans,” Wall Street Journal.

69 Christine Pelisek, “Shylock 2000,” LA Weekly, February 16, 2000, http://www.laweekly.com/news/offbeat/shylock-2000/11565/.

70 Wall Street Journal, August 2, 2007, p. A4.

71 For an excellent presentation of this theory of interest, see von Böhm-Bawerk, Capital and Interest, book II.

72 For a discussion of the productive nature of financial activity see my taped course, “In Defense of Financial Markets,” http://www.aynrandbookstore2.com/prodinfo.asp?number=DB46D.

73 For more on Objectivism, see Leonard Peikoff, Objectivism: The Philosophy of Ayn Rand (New York: Dutton, 1991); and Ayn Rand, Atlas Shrugged (New York: Random House, 1957) and Capitalism: The Unknown Ideal (New York: New American Library 1966).

Neoconservative Foreign Policy: An Autopsy

by Yaron Brook | Summer 2007 | The Objective Standard

The Rise and Fall of Neoconservative Foreign Policy

When asked during the 2000 presidential campaign about his foreign policy convictions, George W. Bush said that a president’s “guiding question” should be: “What’s in the best interests of the United States? What’s in the best interests of our people?”1

A president focused on American interests, he made clear, would not risk troops’ lives in “nation-building” missions overseas:

I don’t think our troops ought to be used for what’s called nation-building. I think our troops ought to be used to fight and win war. I think our troops ought to be used to help overthrow the dictator when it’s in our best interests. But in [Somalia] it was a nation-building exercise, and same with Haiti. I wouldn’t have supported either.2

In denouncing “nation-building,” Bush was in line with a long-standing animus of Americans against using our military to try to fix the endless problems of other nations. But at the same time, he was going against a major contingent of conservatives, the neoconservatives, who had long been arguing for more, not less, nation-building.

By 2003, though, George W. Bush had adopted the neoconservatives’ position. He sent the American military to war in Iraq, not simply to “overthrow the dictator,” but to build the primitive, tribal nation of Iraq into a “democratic,” peaceful, and prosperous one. This “Operation Iraqi Freedom,” he explained, was only the first step of a larger “forward strategy of freedom” whose ultimate goal was “the end of tyranny in our world”3 — a prescription for worldwide nation-building. All of this, he stressed, was necessary for America’s “national interest.”

President Bush’s dramatic shift in foreign policy views reflected the profound impact that September 11 had on him and on the American public at large.

Before 9/11, Americans were basically satisfied with the existing foreign policy. They had little desire to make any significant changes, and certainly not in the direction of more nation-building. The status quo seemed to be working; Americans seemed basically safe. The Soviet Union had fallen, and America was the world’s lone superpower. To be sure, we faced occasional aggression, including Islamic terrorist attacks against Americans overseas — but these were not large enough or close enough to home for most to lose sleep over, let alone to demand fundamental changes in foreign policy.

Everything changed on that Tuesday morning when nineteen members of a terrorist network centered in Afghanistan slaughtered thousands of Americans in the name of an Islamic totalitarian movement supported by states throughout the Arab-Islamic world. What once seemed like a safe world was now obviously fraught with danger. And what once seemed like an appropriate foreign policy toward terrorism and its state supporters was now obviously incapable of protecting America. Prior to 9/11, terrorism was treated primarily as a problem of isolated gangs roaming the earth, to be combated by police investigations of the particular participants in any given attack; our leaders turned a blind eye to the ideology driving the terrorists and to the indispensable role of state support for international terrorist groups. State sponsors of terrorism were treated as respected members of the “international community,” and, to the extent their aggression was acknowledged, it was dealt with via “diplomacy,” a euphemism for inaction and appeasement. Diplomacy had been the dominant response in 1979, when a new Islamist Iranian regime supported a 444-day hostage-taking of fifty Americans — as part of an Islamic totalitarian movement openly committed to achieving Islamic world domination, including the destruction of Israel and America. Diplomacy had been the response when the terrorist agents of Arab-Islamic regimes killed marines in Lebanon in 1983 — and bombed a TWA flight in 1986 — and bombed the World Trade Center in 1993 — and bombed the Khobar towers in 1996 — and bombed the U.S. embassies in Kenya and Tanzania in 1998 — and bombed the USS Cole in 2000. Diplomacy had also been the response when Iran issued a death decree on a British author for “un-Islamic” writings, threatening American bookstores and publishers associated with him, and thus denying Americans their sacred right to free speech. Throughout all of this, Americans had accepted that our leaders knew what they were doing with regard to protecting America from terrorism and other threats. On 9/11, Americans saw with brutal clarity that our actions had been somewhere between shortsighted and blind. The country and its president were ripe for a dramatic departure from the policies that had guided and failed America pre-9/11.

The only prominent group of intellectuals that offered a seemingly compelling alternative claiming to protect America in the modern, dangerous world (a standard by which neither pacifists nor Buchananite xenophobes qualify) was the neoconservatives.

Neoconservatives had long been critics of America’s pre-9/11 foreign policy, the technical name for which is “realism.” “Realism” holds that all nations are, in one form or another, “rational” actors that pursue common interests such as money, power, and prestige. Given such common goals among nations, “realists” hold that, no matter what another nation’s statements or actions toward the United States, there is always a chance for a diplomatic deal in which both sides make concessions; any other nation will be “rational” and realize that an all-out military conflict with superpower America is not in its interest. Thus, America pursuing its “national interest” means a constant diplomatic game of toothless resolutions, amorphous “pressure,” and dressed-up bribery to keep the world’s assorted threatening nations in line. The only time “realists” are willing to abandon this game in favor of using genuine military force against threatening regimes is in the face of some catastrophic attack. Otherwise, they regard it as not in our “national interest” to deal with other nations by military means. Why take such a drastic step when a successful deal may be just around the corner?

In the 1980s and 1990s, as “realism” dominated foreign policy, neoconservatives criticized it for having a false view of regimes, and a “narrow,” shortsighted view of the “national interest” in which only tangible, immediate threats to American security warranted military action. They rightly pointed out that “realism” was a shortsighted prescription for long-range disaster — a policy of inaction and appeasement in the face of very real threats, and thus a guarantor that those threats would grow bolder and stronger. A neoconservative essay published in 2000 expresses this viewpoint:

The United States, both at the level of elite opinion and popular sentiment, appears to have become the Alfred E. Newman of superpowers — its national motto: “What, me worry?” . . . [T]here is today a “present danger.” It has no name. It is not to be found in any single strategic adversary. . . . Our present danger is one of declining military strength, flagging will and confusion about our role in the world. It is a danger, to be sure, of our own devising. Yet, if neglected, it is likely to yield very real external dangers, as threatening in their way as the Soviet Union was a quarter century ago.4

In place of “realism,” neoconservatives advocated a policy often called “interventionism,” one component of which calls for America to work assertively to overthrow threatening regimes and to replace them with peaceful “democracies.” Bad regimes, they asserted vaguely, were responsible for threats like terrorism; such threats could never emerge from “democracies.” “Interventionism,” they said, took a “broad” and ultimately more realistic view of America’s “national interest,” by dealing with threats before they metastasized into catastrophes and by actively replacing threatening governments with “democracies” that would become our allies. In place of a series of “realist” responses to the crisis of the moment, they claimed, they were offering a long-range foreign policy to protect America now and in the future.

After 9/11, the neoconservatives felt intellectually vindicated, and they argued for “interventionism” with regard to state sponsors of terrorism. An editorial in the leading neoconservative publication, The Weekly Standard, called for a “war to replace the government of each nation on earth that allows terrorists to live and operate within its borders.”5 The replacement governments would be “democracies” that would allegedly ensure that new threatening regimes would not take the place of old ones.

These ideas exerted a major influence on President Bush immediately after 9/11, an influence that grew in the coming years. On September 20, 2001, influenced by neoconservative colleagues and speechwriters, he proclaimed a desire to end state sponsorship of terrorism: “Every nation, in every region, now has a decision to make: either you are with us, or you are with the terrorists. . . . From this day forward, any nation that continues to harbor or support terrorism will be regarded by the United States as a hostile regime.”6 His neoconservative deputy secretary of defense, Paul Wolfowitz, publicly called for “ending states who sponsor terrorism”7 (though the “realists” in the State Department caused the administration to partially recant).

Soon thereafter, President Bush made clear that he wanted to replace the state sponsors of terrorism with “democracies,” beginning with Afghanistan. When he dropped bombs on that country, he supplemented them with food packages and a tripling of foreign aid; he declared the Afghan people America’s “friend” and said that we would “liberate” them and help them establish a “democracy” to replace the terrorist-sponsoring Taliban.

The full influence of neoconservatism was evident by the time of the Iraq War. Prior to 9/11, the idea of democratic “regime change” in Iraq with the ultimate aim of “spreading democracy” throughout the Arab-Islamic world was unpopular outside neoconservative circles — dismissed as a “nation-building” boondoggle waiting to happen. After 9/11, George W. Bush became convinced — and convinced Americans — that such a quest was utterly necessary in today’s dangerous world, and that it could and would succeed. “Iraqi democracy will succeed,” he said in 2003, “and that success will send forth the news, from Damascus to Teheran — that freedom can be the future of every nation. The establishment of a free Iraq at the heart of the Middle East will be a watershed event in the global democratic revolution.”8

Thus, the neoconservative foreign policy of “regime change” and “spreading democracy” had become the American foreign policy — and the hope of Americans for protecting the nation.

As neoconservative columnist Charles Krauthammer wrote in 2005:

What neoconservatives have long been advocating is now being articulated and practiced at the highest levels of government by a war cabinet composed of individuals who, coming from a very different place, have joined . . . the neoconservative camp and are carrying the neoconservative idea throughout the world.9

At first, Operation Iraqi Freedom — and thus our new neoconservative foreign policy — seemed to most observers to be a success. The basic expectation of the war’s architects had been that by ousting a tyrant, “liberating” Iraqis, and allowing them to set up a “democracy” in Iraq, we would at once be deterring future threats from Iran and Syria, setting up a friendly, allied regime in Iraq, and empowering pro-American influences throughout the Middle East. And when the American military easily took Baghdad, when we witnessed Kodak moments of grateful Iraqis hugging American soldiers or razing a statue of Saddam Hussein, when President Bush declared “major combat operations in Iraq have ended,”10 neoconservatives in particular thought that everything was working. Their feeling of triumph was captured on the back cover of The Weekly Standard on April 21, 2003, in which the magazine parodied prominent Iraq war critics by printing a fake apology admitting that their opposition to “Operation Iraqi Freedom” reflected stupidity and ignorance. “We’re Idiots. We Admit It,” the parody read. “We, the Undersigned, Agree that We Got this Whole War in Iraq Business Spectacularly Wrong. We didn’t see that it was a war of liberation, not a war of colonization. . . . We thought the Iraqi people would resent American troops. We thought the war would drag on and on. . . . We wanted to preserve the status quo.”11 Future cover stories of The Weekly Standard featured inspiring titles such as “Victory: The Restoration of American Awe and the Opening of the Arab Mind” and “The Commander: How Tommy Franks Won the Iraq War.”

But the luster of the Iraq War quickly wore off as American troops faced an insurgency that the Bush team had not anticipated; it turned out that many of the lovable, freedom-loving Iraqis we had heard about prewar were in fact recalcitrant, dictatorship-seeking Iraqis. Still, even through 2005, many viewed the Iraq War as a partial success due to the capture of Saddam Hussein and such alleged milestones as a “transfer of power” in 2004, an election in January 2005, a constitution ratified by referendum in October 2005, and another election in December 2005 — events that were heralded even by many of the President’s most dependable critics, such as the New York Times.

Now, however, in mid-2007, the Iraq War is rightly regarded by most as a disaster that utterly failed to live up to its promise. The Bush-neoconservative vision of deterred enemies, a friendly Iraq, and the inspiration of potential allies around the world has not materialized. Instead, for the price of more than 3,200 American soldiers, and counting, we have gotten an Iraq in a state of civil war whose government (to the extent it has one) follows a constitution avowedly based on Islamic law and is allied with Iran; more-confident, less-deterred regimes in Iran and Syria; and the increasing power and prestige of Islamic totalitarians around the world: in Egypt, in the Palestinian territories, in Saudi Arabia, in Lebanon. And all of this from a policy that was supposed to provide us with a clear-eyed, farsighted view of our “national interest” — as against the blindness and short-range mentality of our former “realist” policies.

How have we managed to fail so spectacularly to secure our interests in the perfect neoconservative war? The state of affairs it has brought about is so bad, so much worse than anticipated, that it cannot be explained by particular personalities (such as Bush or Rumsfeld) or particular strategic decisions (such as insufficient troop levels). Such a failure can be explained only by fundamental flaws in the policy.

On this count, most of the President’s critics and critics of neoconservatism heartily agree; however, their identification of neoconservatism’s fundamental problems has been abysmal. The criticism is dominated by the formerly discredited “realists,” who argue that the Iraq War demonstrates that “war is not the answer” to our problems — that the United States was too “unilateralist,” “arrogant,” “militaristic” — and that we must revert to more “diplomacy” to deal with today’s threats. Thus, in response to Iran’s ongoing support of terrorism and pursuit of nuclear weapons, to North Korea’s nuclear tests, to Saudi Arabia’s ongoing financing of Islamic Totalitarianism — they counsel more “diplomacy,” “negotiations,” and “multilateralism.” In other words, we should attempt to appease the aggressors who threaten us with bribes that reward their aggression, and we should allow our foreign policy to be dictated by the anti-Americans at the United Nations. These are the exact same policies that did absolutely nothing to prevent 9/11 or to thwart the many threats we face today.

If these are the lessons we draw from the failure of neoconservatism, we will be no better off without that policy than with it. It is imperative, then, that we gain a genuine understanding of neoconservatism’s failure to protect American interests. Providing this understanding is the purpose of this essay. In our view, the basic reason for neoconservatism’s failure to protect America is that neoconservatism, despite its claims, is fundamentally opposed to America’s true national interest.

What Is the “National Interest”?

When most Americans hear the term “national interest” in foreign policy discussions, they think of our government protecting our lives, liberty, and property from foreign aggressors, today and in the future. Thus, when neoconservatives use the term “national interest,” most Americans assume that they mean the protection of American lives and rights. But this assumption is wrong. To neoconservatives, the “national interest” means something entirely different than the protection of American individual rights. To understand what, we must look to the intellectual origins of the neoconservative movement.

The movement of “neoconservatives” (a term initially used by one of its critics) began as a group of disillusioned leftist-socialist intellectuals. Among them were Irving Kristol, the widely acknowledged “godfather” of neoconservatism and founder of the influential journals The Public Interest and The National Interest; Norman Podhoretz, longtime editor of Commentary; Nathan Glazer, a Harvard professor of sociology; and Daniel Bell, another Harvard sociologist.

The cause of the original neoconservatives’ disillusionment was the massive failure of socialism worldwide, which had become undeniable by the 1960s, combined with their leftist brethren’s response to it.

In the early 20th century, American leftists were professed idealists. They were true believers in the philosophy of collectivism: the idea that the group (collective) has supremacy over the individual, both metaphysically and morally — and therefore that the individual must live in service to the collective, sacrificing for “its” sake. Collectivism is the social-political application of the morality of altruism: the idea that individuals have a duty to live a life of selfless service to others. The variety of collectivism that leftists subscribed to was socialism (as against fascism). They sought to convert America into a socialist state in which “scientific” social planners would coercively direct individuals and “redistribute” their property for the “greater good” of the collective. Many leftists believed, in line with socialist theory, that this system would lead to a level of prosperity, harmony, and happiness that the “atomistic,” “unplanned” system of capitalism could never approach.

The Left’s vision of the flourishing socialist Utopia collapsed as socialist experiment after socialist experiment produced the exact opposite results. Enslaving individuals and seizing their production led to destruction wherever and to whatever extent it was implemented, from the Communist socialism of Soviet Russia and Red China, to the National Socialism (Nazism) of Germany, to the disastrous socialist economics of Great Britain. At this point, as pro-capitalist philosopher Ayn Rand has observed, the Left faced a choice: Either renounce socialism and promote capitalism — or maintain allegiance to socialism, knowing full well what type of consequences it must lead to.

Most leftists chose the second. Knowing that they could no longer promise prosperity and happiness, they embraced an anti-wealth, anti-American, nihilist agenda. Whereas the Old Left had at least ostensibly stood for intellectualism, pro-Americanism, and prosperity-seeking, the New Left exhibited mindless hippiedom, anti-industrialization, environmentalism, naked egalitarianism, and unvarnished hatred of America’s military. Despite incontrovertible evidence of the continuous atrocities committed by the Union of Soviet Socialist Republics, American leftists continued to support that regime while denouncing all things American.

The soon-to-be neoconservatives were among the members of the Old Left who opposed the New Left. Irving Kristol and his comrades felt increasingly alienated from their former allies — and from the socialist policies they had once championed. They had come to believe that some variant of a free economy, not a command-and-control socialist state, was necessary for human well-being. And they recognized, by the 1960s, that the Soviet Union was an evil aggressor that threatened civilization and must be fought, at least intellectually if not militarily.

But this “neoconservative” transformation went only so far. Kristol and company’s essential criticism of socialism pertained to its practicality as a political program; they came to oppose such socialist fixtures as state economic planning, social engineering of individuals into collectivist drones, and totalitarian government. Crucially, though, they did not renounce socialism’s collectivist moral ideal. They still believed that the individual should be subjugated for the “greater good” of “society” and the state. They just decided that the ideal was best approximated through the American political system rather than by overthrowing it.

One might ask how America’s form of government can be viewed as conducive to the ideals of thoroughgoing collectivists — given that it was founded on the individual’s right to life, liberty, and the pursuit of happiness. The answer is that the America neoconservatives embraced was not the individualistic America of the Founding Fathers; it was the collectivist and statist post-New Deal America. This modern American government — which violated individual rights with its social security and welfare programs and its massive regulation of business all in the name of group “rights” and had done so increasingly for decades — was seen by the neoconservatives as a basically good thing that just needed some tweaking in order to achieve the government’s moral purpose: “the national interest” (i.e., the alleged good of the collective at the expense of the individual). The neoconservatives saw in modern, welfare-state America the opportunity to achieve collectivist goals without the obvious and bloody failures of avowedly socialist systems.

There was a time in American history when the individualism upon which America was founded was advocated, albeit highly inconsistently, by American conservatives — many of whom called for something of a return to the original American system. (Individualism is the view that in social issues, including politics, the individual, not the group, is the important unit.) The best representative of individualism in conservatism in the past fifty years was Barry Goldwater, who wrote: “The legitimate functions of government are actually conducive to freedom. Maintaining internal order, keeping foreign foes at bay, administering justice, removing obstacles to the free interchange of goods — the exercise of these powers makes it possible for men to follow their chosen pursuits with maximum freedom.”12

The neoconservatives, however, openly regarded an individualistic government as immoral. The “socialist ideal,” writes Irving Kristol, is a “necessary ideal, offering elements that were wanting in capitalist society — elements indispensable for the preservation, not to say perfection, of our humanity.” Socialism, he says, is properly “community-oriented” instead of “individual-oriented”; it encourages individuals to transcend the “vulgar, materialistic, and divisive acquisitiveness that characterized the capitalist type of individual.”13 Criticizing the original American system, Kristol writes: “A society founded solely on ‘individual rights’ was a society that ultimately deprived men of those virtues which could only exist in a political community which is something other than a ‘society.’” Such a society, he says, lacked “a sense of distributive justice, a fund of shared moral values, and a common vision of the good life sufficiently attractive and powerful to transcend the knowledge that each individual’s life ends only in death.”14

Translation: Individuals’ lives are only truly meaningful if they sacrifice for some collective, “higher” purpose that “transcends” their unimportant, finite selves. That “higher” purpose — not individuals’ lives, liberty, and property — is the “national interest.”

For traditional socialists, that purpose was the material well-being of the proletariat. But as Kristol’s comments demeaning “materialism” indicate, the “higher” purpose of the neoconservatives is more concerned with the alleged moral and spiritual well-being of a nation. (One reason for this difference is that the neoconservatives are strongly influenced by the philosophy of Plato.) In this sense, neoconservatism is more a nationalist or fascist form of collectivism than a socialist one.

Ayn Rand highlights this difference between fascism and socialism in her essay “The Fascist New Frontier”:

The basic moral-political principle running through [fascism and socialism] is clear: the subordination and sacrifice of the individual to the collective.

That principle (derived from the ethics of altruism) is the ideological root of all statist systems, in any variation, from welfare statism to a totalitarian dictatorship. . . .

The socialist-communist axis keeps promising to achieve abundance, material comfort and security for its victims, in some indeterminate future. The fascist-Nazi axis scorns material comfort and security, and keeps extolling some undefined sort of spiritual duty, service and conquest. The socialist-communist axis offers its victims an alleged social ideal. The fascist-Nazi axis offers nothing but loose talk about some unspecified form of racial or national “greatness.”15

For neoconservatives, such nationalistic pursuit of “national greatness” is the “national interest” — the interest, not of an individualistic nation whose purpose is to protect the rights of individual citizens, but of an organic nation whose “greatness” is found in the subjugation of the individuals it comprises.

Fittingly, during the late 1990s, “national greatness” became the rallying cry of top neoconservatives. In an influential 1997 Wall Street Journal op-ed, neoconservatives William Kristol (son of Irving Kristol) and David Brooks called directly for “national greatness conservatism.” They criticized “the antigovernment, ‘leave us alone’ sentiment that was crucial to the Republican victory of 1994. . . . Wishing to be left alone isn’t a governing doctrine.”16 (Actually, it was exactly the “governing doctrine” of the Founding Fathers, who risked their lives, fortunes, and families to be left alone by the British, and to establish a government that would leave its citizens alone.) Brooks and Kristol pined for leaders who would call America “forward to a grand destiny.”17

What kind of “grand destiny”? Brooks explained in an article elaborating on “national greatness”:

It almost doesn’t matter what great task government sets for itself, as long as it does some tangible thing with energy and effectiveness. . . . [E]nergetic government is good for its own sake. It raises the sights of the individual. It strengthens common bonds. It boosts national pride. It continues the great national project.18

Brooks and Kristol bemoaned America’s lack of a task with which to achieve “national greatness.” They got it with 9/11, which necessitated that America go to war.

In an individualistic view of the “national interest,” a war is a negative necessity; it is something that gets in the way of what individuals in a society should be doing: living their lives and pursuing their happiness in freedom. Not so for the neoconservatives.

Consider the following passage from the lead editorial of the neoconservative Weekly Standard the week after 9/11, the deadliest foreign attack ever on American soil. Remember how you felt at that time, and how much you wished you could return to the seemingly peaceful state of 9/10, when you read this:

We have been called out of our trivial concerns. We have resigned our parts in the casual comedy of everyday existence. We live, for the first time since World War II, with a horizon once again. . . . [There now exists] the potential of Americans to join in common purpose — the potential that is the definition of a nation. . . . There is a task to which President Bush should call us . . . [a] long, expensive, and arduous war. . . . It will prove long and difficult. American soldiers will lose their lives in the course of it, and American civilians will suffer hardships. But that . . . is what real war looks like.19

Why is the Weekly Standard practically celebrating the slaughter of thousands of Americans? Because the slaughter created “the potential of Americans to join in common purpose — the potential that is the definition of a nation.” Even if a “long, expensive, and arduous war” were necessary to defeat the enemy that struck on 9/11 — and we will argue that it is not — it is profoundly un-American and morally obscene to treat such a war as a positive turn of events because it generates a collective purpose or “horizon.” Observe the scorn with which this editorial treats the normal lives of individuals in a free nation. Pursuing our careers and creative projects, making money, participating in rewarding hobbies, enjoying the company of friends, raising beloved children — these are desecrated as “trivial concerns” and “parts in the casual comedy of everyday existence.” The editorial makes clear that its signers think the exalted thing in life is “the potential of Americans to join in common purpose” — not the potential of individual Americans to lead their own lives and pursue their own happiness. This is the language of those who believe that each American is merely a cog in some grand collective machine, to be directed or discarded as the goal of “national greatness” dictates.

Americans sacrificing for the “higher” good of the nation and its “greatness” is what the neoconservatives mean by the “national interest.” And in foreign policy, this is the sort of “national interest” they strive to achieve.

An Altruistic Nationalism

Today’s neoconservative foreign policy has been formulated and advocated mostly by a younger generation of neoconservatives (though supported by much of the old guard) including the likes of William Kristol, Robert Kagan, Max Boot, Joshua Muravchik, and former deputy secretary of defense Paul Wolfowitz. It holds that America’s “national interest” in foreign policy is for America to establish and maintain a “democratic international order”20 that promotes the long-term security and well-being of all the world’s peoples.

Neoconservatives, in keeping with their altruist-collectivist ideals, believe that America has no right to conduct its foreign policy for its own sake — that is, to focus its military energies on decisively defeating real threats to its security, and otherwise to stay out of the affairs of other nations. Instead, they believe, America has a “duty” to, as leading neoconservatives William Kristol and Robert Kagan put it, “advance civilization and improve the world’s condition.”21 Just as neoconservatives hold that the individual should live in service to the American collective, so they hold that America should live in service to the international collective. And because America is the wealthiest and most powerful of all nations, neoconservatives say, it has the greatest “duty” to serve. In doing its duty to the world, Kristol and Kagan say, America will further its “national greatness,” achieving a coveted “place of honor among the world’s great powers.”22

In this view of nationalism and “national greatness,” neoconservatives are more consistently altruistic than other nationalists. Most nationalist nations are altruistic in that they believe their individual citizens are inconsequential, and should be sacrificed for the “higher cause” that is the nation. But they are “selfish” with regard to their own nation; they believe that their nation is an end in itself, that it is right to sacrifice other nations to their nation’s needs; hence the expansionist, conquering designs of fascist Italy and Nazi Germany.

The neoconservatives’ brand of “nationalism” does not regard America as an end in itself. It believes that America has a duty to better the condition of the rest of the world (i.e., other nations). It is an altruistic nationalism.

Neoconservatives do not put it this way; Kristol and Kagan come out for “a nationalism . . . of a uniquely American variety: not an insular, blood-and-soil nationalism, but one that derived its meaning and coherence from being rooted in universal principles first enunciated in the Declaration of Independence.”23

One might wonder how neoconservatives square their views with the universal principles of the Declaration — which recognize each American’s right to live his own life and pursue his own happiness and which say nothing about a duty to bring the good life to the rest of the world.

Neoconservatives attempt to reconcile the two by holding that freedom is not a right to be enjoyed, but a duty to be given altruistically to those who lack it. They do not mean simply that we must argue for the moral superiority of freedom and tell the Arabs that this is the only proper way for men to live — and mail them a copy of our Constitution for guidance — but that we must give up our lives and our freedom to bring them freedom.

Thus, after 9/11, the neoconservatives did not call for doing whatever was necessary to defeat the nations that sponsor terrorism; rather, they championed a welfare war in Iraq to achieve their longtime goal of “Iraqi democracy.” Just a few weeks after 9/11, Max Boot wrote:

This could be the chance . . . to show the Arab people that America is as committed to freedom for them as we were for the people of Eastern Europe. To turn Iraq into a beacon of hope for the oppressed peoples of the Middle East: Now that would be a historic war aim.24

For those familiar with the history of the 20th century, the international collectivist goals of the neoconservative foreign policy should not seem new; they are nearly identical to those of the foreign policy school of which President Woodrow Wilson was the most prominent member, the school known in modern terms as “Liberal Internationalism” or just “Wilsonianism.”

According to Wilsonianism, America must not restrict itself to going to war when direct threats exist; it must not “isolate” itself from the rest of the world’s troubles, but must instead “engage” itself and work with others to create a world of peace and security — one that alleviates suffering, collectively opposes “rogue nations” that threaten the security of the world as a whole, and brings “democracy” and “self-determination” to various oppressed peoples around the world. It was on this premise that both the League of Nations and its successor, the United Nations, were formed — and on this premise that America entered World War I (“The world,” Wilson said, “must be made safe for democracy”).25

The Wilsonian-neoconservative view of America’s “national interest” is in stark contrast to the traditional, individualistic American view of America’s national interest in foreign policy. Angelo Codevilla, an expert on the intellectual history of American foreign policy, summarizes the difference. Before the 20th century,

Americans, generally speaking, wished the rest of the world well, demanded that it keep its troubles out of our hemisphere, and hoped that it would learn from us.

By the turn of the 20th century, however, this hope led some Americans to begin to think of themselves as the world’s teachers, its chosen instructors. This twist of the founders’ views led to a new and enduring quarrel over American foreign policy — between those who see the forceful safeguarding of our own unique way of life as the purpose of foreign relations, and those who believe that securing the world by improving it is the test of what [Iraqi “democracy” champion] Larry Diamond has called “our purpose and fiber as a nation.”26

As to how to “secure the world by improving it,” Wilsonianism and neoconservatism have substantial differences. Wilsonianism favors American subordination to international institutions and “diplomacy,” whereas neoconservatism favors American leadership and more often advocates force in conjunction with diplomacy. Traditional Wilsonians are not pacifists (Wilson, after all, brought America into World War I), but they tend to believe that almost all problems can be solved by peaceful “cooperation” among members of world bodies to paper over potential conflicts or “isolate” aggressive nations that go against the “international community.” Neoconservatives openly state that their ambitious foreign-policy goals — whether removing a direct threat or stopping a tribal war in a faraway land — require the use of force.

Some neoconservatives, such as Max Boot, embrace the term “Hard Wilsonianism,” not only to capture their intense affinity with Woodrow Wilson’s liberal international collectivism, but also to highlight their differences in tactics:

[A] more accurate term [than “neoconservatism”] might be “hard Wilsonianism.” Advocates of this view embrace Woodrow Wilson’s championing of American ideals but reject his reliance on international organizations and treaties to accomplish our objectives. (“Soft Wilsonians,” a k a liberals, place their reliance, in Charles Krauthammer’s trenchant phrase, on paper, not power.)27

Not only must “power, not paper” (to reverse Krauthammer’s expression) be used more often in achieving the desired “international order” than Wilsonians think, say neoconservatives, but America must lead that order. It must not subordinate its decision-making authority to an organization such as the U.N., nor cede to other countries the “responsibilities” for solving international problems.

America must lead, they say, because it is both militarily and morally the preeminent nation in the world. America, they observe, has on many occasions come to the rescue of other nations, even at its own expense (such as in World War I or Vietnam) — the ultimate proof of altruistic virtue. (According to the neoconservatives, “Americans had nothing to gain from entering Vietnam — not land, not money, not power. . . . [T]he American effort in Vietnam was a product of one of the noblest traits of the American character — altruism in service of principles.”)28 By contrast, they observe, other nations, including many in Europe, have not even shown willingness to defend themselves, let alone others.

The cornerstone policy of the neoconservatives’ American-led, “hard” collectivist foreign policy is the U.S.-led military “intervention”: using the American military or some military coalition to correct some evil; give “humanitarian” aid; provide “peacekeeping”; and, ideally, enact “regime change” and establish a new, beneficial “democracy” for the formerly oppressed.

Given the desired “international order” and America’s “responsibility” to “improve the world’s condition,” the obligation to “intervene” goes far beyond nations that threaten the United States. And when America is “intervening” in a threatening nation, the “intervention” cannot simply defeat the nation and render it non-threatening; it must seek to benefit the nation’s inhabitants, preferably by furnishing them with a new “democracy.”

Throughout the past decade and a half, neoconservatives have called for major “interventions” in remote tribal wars in Bosnia, Somalia, Kosovo, Darfur, and Liberia — none of which entailed a direct threat to the United States. And when they have called for responses to real threats, their focus has been on “liberating” the Afghans, Iraqis, and Iranians — not on breaking the hostile inhabitants’ will to keep supporting and sponsoring Islamist, anti-American causes.

Endorsing this broad mandate for “intervention,” William Kristol and Robert Kagan write, in their seminal neoconservative essay “National Interest and Global Responsibility,” that America must be “more rather than less inclined to weigh in when crises erupt, and preferably before they erupt”; it must be willing to go to war “even when we cannot prove that a narrowly construed ‘vital interest’ of the United States is at stake.” In other words, to use a common phrase, America must be the “world’s policeman” — and not just any policeman, either: It must be a highly active one. In the words of Kristol and Kagan: “America cannot be a reluctant sheriff.”29

Despite their differences, Wilsonianism and “Hard Wilsonianism” agree entirely on a key aspect of the means to their goals: Any mission must involve substantial American sacrifice — the selfless surrender of American life, liberty, and property for the sake of other nations.

When Woodrow Wilson asked Congress for a declaration of war to enter World War I, he said:

We have no selfish ends to serve. We desire no conquest, no dominion. We seek no indemnities for ourselves, no material compensation for the sacrifices we shall freely make. We are but one of the champions of the rights of mankind. We shall be satisfied when those rights have been made as secure as the faith and the freedom of nations can make them. . . .

[W]e fight without rancor and without selfish object, seeking nothing for ourselves but what we shall wish to share with all free peoples. . . .30

Similarly, President Bush extolled the (alleged) virtue of “sacrifice for the freedom of strangers” in his decision to invade Iraq. Later that year, in a landmark speech at the National Endowment for Democracy, Bush said:

Are the peoples of the Middle East somehow beyond the reach of liberty? Are millions of men and women and children condemned by history or culture to live in despotism?. . . I, for one, do not believe it. I believe every person has the ability and the right to be free. . . .

Securing democracy in Iraq is the work of many hands. American and coalition forces are sacrificing for the peace of Iraq. . . .31

Why must America “sacrifice for the freedom of strangers”? By what right do the problems of barbarians overseas exert a claim on the life of an American twenty-year-old, whose life may be extinguished just as it is beginning?

Both neoconservatives and Wilsonians have a dual answer: It is morally right and practically necessary for America to sacrifice for the international collective.

The moral component of this is straightforward. In our culture, it is uncontroversial that a virtuous person is one who lives a life of altruism — a life of selfless service to others, in which he puts their well-being and desires above his own. This is the premise behind our ever-growing welfare state and every socialist and semi-socialist country. Max Boot applies this premise logically to sacrificing for other nations: “Why not use some of the awesome power of the U.S. government to help the downtrodden of the world, just as it is used to help the needy at home?”32

And this help is not just money — it is also blood. For example, several years ago, when President Clinton finally succumbed to pressure from neoconservatives and liberal internationalists to attack Serbia in an attempt to force its surrender of Kosovo, the neoconservatives condemned him morally — because Clinton decided to forgo sending ground troops, which may have minimized Kosovar casualties, in favor of bombing, which would spare American lives. To quote Max Boot: “It is a curious morality that puts greater value on the life of even a single American pilot — a professional who has volunteered for combat — than on hundreds, even thousands, of Kosovar lives.”33

This moral argument is crucial to appeals for sacrifice — but it is not sufficient. Imagine if neoconservatives or Wilsonians openly said: “We believe that Americans should be sent to die for the sake of other nations, even though it will achieve no American interest.” Americans would rebel against the naked self-sacrifice being demanded.

Thus, a crucial component of the neoconservative call for international self-sacrifice is the argument that it is ultimately a practical necessity — that it is ultimately in our self-interest — that the sacrifice is ultimately not really a sacrifice.

Does National Security Require International Sacrifice?

Nearly every moral or political doctrine in history that has called on individuals to sacrifice their well-being to some “higher” cause has claimed that their sacrifices are practical necessities and will lead to some wonderful long-term benefit, either for the sacrificers or for their fellow citizens or descendants.

For example, calls to sacrifice one’s desires for the sake of the supernatural are coupled with the threat of burning in hell and promises of eternal bliss in heaven. (In the militant Muslim form of this, calls to sacrifice one’s life along with as many others as possible are coupled with promises of seventy-two virgins.) Environmentalist calls to sacrifice development and industrial civilization for nature are coupled with promises to stave off some ecological apocalypse (currently “global warming”) and to reach some future ecological paradise. Calls to sacrifice for the Socialist dictatorship of the proletariat were coupled with claims about the inevitable collapse of capitalism and promises that the sacrificers’ children and grandchildren would live in a Utopia where the state had withered away.

The argument always takes the same form. Our well-being depends on “higher cause” X — nature, “God,” “Allah,” the proletariat — and therefore we must sacrifice for its sake if we are to avoid disaster and procure some necessary benefit. The “higher cause” is always viewed as metaphysically superior to the individuals being sacrificed: Religionists view man as helpless in comparison to their supernatural being of choice; environmentalists view man in relation to Mother Nature in much the same way; and collectivists view man as metaphysically inferior to the collective as a whole. If we refuse to subordinate ourselves to this cause, they believe, only disaster can result — and if we do subordinate ourselves, something positive must follow.

Fittingly, both neoconservatism and Wilsonianism promise an ultimate, self-interested payoff to Americans for their acts of international sacrifice: a level of security that is unachievable by any other means. Both promise that when we toil and bleed to “make the world safe for democracy” or to create a “democratic international order,” we will ultimately bring about a world in which we achieve new heights of peace and security — in which the collective will of the various “democracies” will make war or terrorism virtually impossible. World War I was called “the war to end all wars.”

Instead of the dangerous, threatening world we live in today, the argument goes, a world in which aggressors are willing to threaten us without hesitation, the “international order” would feature an array of friendly, peace-loving “democracies” that would not even think of starting wars, that would inspire the backward peoples of the world to set up similar governments, and that would eagerly act collectively, when necessary, to halt any threats to the “international order.” This was the basic argument behind Bush’s sending soldiers to bleed while setting up voting booths in tribal Iraq, a sacrifice that was ultimately supposed to lead to “the end of tyranny” — including international aggression — “in our world.”

What if, instead, we refuse to sacrifice for foreign peoples and resolve to use our military only to protect our own security? We will fail, the collectivists say, because our security depends on the well-being of other nations and on “international order.” If we let other peoples remain miserable and unfree on the grounds that it is not our problem, they argue, that will give comfort to dictators and breed hatred in populations that will ultimately lead to attacks on the United States.

In their essay “National Interest and Global Responsibility,” William Kristol and Robert Kagan write that America should

act as if instability in important regions of the world, and the flouting of civilized rules of conduct in those regions, are threats that affect us with almost the same immediacy as if they were occurring on our doorstep. To act otherwise would . . . erode both American pre-eminence and the international order . . . on which U.S. security depends. Eventually, the crises would appear at our doorstep.34

After 9/11, neoconservatives argued that the case of Afghanistan proved the necessity of “interventions” to resolve foreign crises and spread “democracy.” Max Boot writes that many thought that, after we abandoned Afghanistan, our former ally against the Soviets, we could

let the Afghans resolve their own affairs . . . if the consequence was the rise of the Taliban — homicidal mullahs driven by a hatred of modernity itself — so what? Who cares who rules this flyspeck in Central Asia? So said the wise elder statesmen. The “so what” question has now been answered definitively; the answer lies in the rubble of the World Trade Center.35

What should we have done in Afghanistan? Boot says we should have kept troops there throughout the 1980s and 1990s:

It has been said, with the benefit of faulty hindsight, that America erred in providing the [mujahedeen] with weapons and training that some of them now turn against us. But this was amply justified by the exigencies of the Cold War. The real problem is that we pulled out of Afghanistan after 1989. . . .

We had better sense when it came to the Balkans.36

In President Bush’s second inaugural address, he clearly summarized his agreement with the neoconservatives’ position regarding the threat of Islamic terrorism: American security requires us to bring “democracy” to all corners of the earth.

We have seen our vulnerability — and we have seen its deepest source. For as long as whole regions of the world simmer in resentment and tyranny — prone to ideologies that feed hatred and excuse murder — violence will gather, and multiply in destructive power, and cross the most defended borders, and raise a mortal threat. There is only one force of history that can break the reign of hatred and resentment, and expose the pretensions of tyrants, and reward the hopes of the decent and tolerant, and that is the force of human freedom.

We are led, by events and common sense, to one conclusion: The survival of liberty in our land increasingly depends on the success of liberty in other lands. The best hope for peace in our world is the expansion of freedom in all the world.37

But does American security really require that we “sacrifice for the freedom of strangers”? Is every poor, miserable, unfree village on earth the potential source of another 9/11, and is it thus incumbent on America to become not only the world’s policeman, but also its legislator?

Absolutely not.

The idea that we depend on the well-being of other nations for our security — or, more specifically, that “the survival of liberty in our land increasingly depends on the success of liberty in other lands” — is given plausibility by the fact that free nations do not start wars and are not a threat to other free nations, including America. But it is false. It evades the fact that innumerable unfree nations are in no way, shape, or form threats to America (e.g., most of the nations of Africa) because their peoples and leaders have no ideological animus against us, or, crucially, because their leaders and peoples fear initiating aggression against us.

America does not require the well-being of the whole world to survive and thrive; it is not a mere appendage or parasite of an international organism that cannot live without its host. America is an independent nation whose well-being requires not that all nations be free, prosperous, and happy, but simply that they be non-threatening. And this can be readily achieved by instilling in them fear of the consequences of any aggression whatsoever against America.

Thomas Sowell, one of America’s most astute and historically knowledgeable cultural commentators, cites 19th-century England as having such a policy: “There was a time when it would have been suicidal to threaten, much less attack, a nation with much stronger military power because one of the dangers to the attacker would be the prospect of being annihilated. . . .” Sowell elaborates, citing the instructive case of the Falkland Islands war:

Remember the Falkland Islands war, when Argentina sent troops into the Falklands to capture this little British colony in the South Atlantic?

Argentina had been claiming to be the rightful owner of those islands for more than a century. Why didn’t it attack these little islands before? At no time did the British have enough troops there to defend them. . . . [but] sending troops into those islands could easily have meant finding British troops or bombs in Buenos Aires.38

If a pipsqueak nation’s leader knows that instigating or supporting anti-American aggression will mean his extermination, he will avoid doing so at all costs. If a people know that supporting a movement of America-killing terrorists will lead to their destruction, they will run from that movement like the plague.

Politicians and intellectuals of all stripes continuously express worries that some policy or other, and especially the use of the American military, will engender hatred for America, and that such hatred will “radicalize” populations and leaders who will then become greater threats to us. But America need not fear other people’s hatred of us. For a nation, movement, or individual to pose a threat to America, some form of hatred or animus is always a necessary condition, but it is never a sufficient condition. For hatred to translate into attacks on America, it must be accompanied by hope of success: hope that the would-be attacker’s values, including his movement or cause, will be advanced by anti-American aggression. When all such hope is lost, the respective movements and causes die; their former adherents no longer find glory in dying for them and thus lose interest in doing so. Consequently, the crucial precondition of American security is our declaration, in word and deed, to all nations and movements, that there is no hope for any movement or nation that threatens America.

Let us apply this to the case of Islamic Totalitarianism, the state-supported, ideological movement that terrorizes us. If America had memorably punished the Iranian regime once it took fifty Americans hostage in 1979, then other nations would have feared lifting a trigger finger in America’s direction. We are a target today because hostile nations do not fear us, but rather have contempt for us, since we have shown time and again that we are a paper tiger who will rarely punish — and never fully punish — aggressor nations.

If we had made clear that any association with Islamic Totalitarianism and/or Islamic terrorism would mean a regime’s annihilation, it is extremely unlikely that the Taliban would ever have risen in Afghanistan. And if it had, a foreign policy of true American self-interest would have taken care of that threat as soon as it became a threat — as soon as it demonstrated the ability and willingness to attack America. Once the Taliban rose in Afghanistan, openly proclaiming its goal of Islamic world domination, and started providing safe harbor to Islamic terrorists such as Osama Bin Laden, it was a threat and should have been immediately defeated. This was especially true once Afghanistan became the launching pad for terrorist attacks against American embassies in Africa and against the USS Cole.

It would have been absurdly sacrificial to do as Boot suggests and plant troops in Afghanistan from 1989 on to prevent something bad from happening or to facilitate the “self-determination” of Afghans. Who knows how many American lives would have been sacrificed and how much American wealth would have been wasted in such a debacle — let alone if Boot’s principle of “intervening” in “flyspeck regions” had been applied consistently.

The proven way of ending present threats and effectively deterring future threats in any era is to respond to real threats with moral righteousness and devastating power.

America’s true national interest in response to 9/11 was to use America’s unequaled firepower not to “democratize” but to defeat the threatening countries — those countries that continue to support the cause of Islamic Totalitarianism — and to make an example of them to deter other countries. We should have made clear to the rest of the world that we do not care what kinds of governments other nations adopt, so long as those governments do not threaten us.

There was and is no practical obstacle to such a policy; America’s military and technological prowess relative to the rest of the world, let alone to the piddling Middle East, has never been greater. Nor is there any obstacle in terms of knowledge: America’s ability to destroy enemy regimes is not some secret of history; everyone knows how we got Japan to surrender and then covet our friendship for sixty-two years and counting.

America could have responded to 9/11 by calling for devastating retaliation against the state sponsors of terrorism, so as to demoralize the Islamists and deter future aggressors from thinking they can get away with attacking America. But, under the sway of the neoconservatives, it did not. The neoconservatives never even considered such a response an option — because they believe that going all-out to defeat America’s enemies would be immoral.

Consider this typical neoconservative response to 9/11, from The Weekly Standard’s editorial immediately following the attacks: “There is a task to which President Bush should call us. It is the long, expensive, and arduous war to replace the government of each nation on earth that allows terrorists to live and operate within its borders.”39

There is no practical reason why a war between superpower America and piddling dictatorships need be “long, expensive, and arduous.” It would be easy to make terrorist nations today feel as terrified to threaten us as the Argentineans felt with regard to 19th-century Britain. Potential aggressors against America should be in awe of our power and should fear angering us, but they are not and do not. Why?

Because, per the neoconservatives’ prescriptions, America has placed the full use of its military capabilities off-limits. The neoconservatives have taken all-out war — real war — off the table.

The reason is their basic view of the goal of foreign policy: the altruistic “national interest.” In this view, the justification of America using its military supremacy is ultimately that it will do so to “improve the world’s condition” — not that it has an unqualified right to defend itself for its own sake.

The right to self-defense rests on the idea that individuals have a moral prerogative to act on their own judgment for their own sake; in other words, it rests on the morality of egoism. Egoism holds that a nation against which force is initiated has a right to kill whomever and to destroy whatever in the aggressor nation is necessary to achieve victory.40 The neoconservatives, true to their embrace of altruism, reject all-out war in favor of self-sacrificial means of combat that inhibit, or even render impossible, the defeat of our enemies. They advocate crippling rules of engagement that place the lives of civilians in enemy territory above the lives of American soldiers — and, by rendering victory impossible, above the lives of all Americans.

In Afghanistan, for instance, we refused to bomb the known hideouts of many top Taliban and Al Qaeda leaders for fear of civilian casualties; thus these men were left free to continue killing American soldiers. In Iraq, our hamstrung soldiers are not allowed to smash a militarily puny insurgency; instead, they must suffer an endless series of deaths at the hands of an enemy that operates only because America permits it to. Neoconservatives are avid supporters of such restrictions and of the altruistic theory they are based on: “Just War Theory.” To act otherwise would be to contradict the duty of selfless service to others that is allegedly the justification and purpose of America using its military might. (For a thorough explanation of this viewpoint, see our essay “‘Just War Theory’ vs. American Self-Defense” in the Spring 2006 issue of TOS.)

Following the invasion of Iraq — in which American soldiers began the half-measures that eventually enabled pitifully armed Iraqis to take over cities and kill our soldiers by the thousands — neoconservative Stephen Hayes wrote glowingly of the “Just War” tactics of our military:

A war plan that sought to spare the lives not only of Iraqi civilians, but of Iraqi soldiers. Then, liberation. Scenes of jubilant Iraqis in the streets — praising President Bush as “The Hero of the Peace.” A rush to repair the damage — most of it caused not by American bombs, but by more than three decades of tyranny.41

Such is the behavior, not of a self-assertive nation committed to defending itself by any means necessary, but of a self-effacing nation that believes it has no right to exist and fight for its own sake.

The idea that America must become the world’s democratizer is not the mistaken product of an honest attempt to figure out the most advantageous way to defend America. Neoconservatives have not evaluated our options by the standard of defending America and then concluded that using our overwhelming firepower to defeat our enemies is inferior to timidly coaxing the entire Middle East into a free, pro-American society. Rather, they have chosen policies by the standard of their altruistic conception of the “national interest” and have tried to rationalize this as both consistent with and necessary to America’s security from threats. But sacrifice and self-interest are opposites. To sacrifice is to surrender one’s life-serving values — to willingly take an action that results in a net loss of such values. By definition, this cannot be practical; on the contrary, it is deadly.

Bloodshed was the necessary result of Wilsonianism in the early 20th century, just as it is the result of neoconservatism today. Given the destructive history of Wilsonianism (unfortunately unknown to most Americans), the neoconservatives’ calls for international self-sacrifice for a “higher” cause that would ultimately somehow secure America should have been ominous. Wilsonianism demonstrated the logical consequences of America sacrificing for some “higher” cause that our well-being allegedly depends on. The sacrifice — Americans toiling and dying for the sake of foreign peoples — is never followed by the alleged payoff — American security.

Thomas Sowell illuminated this point in January 2003, before President Bush had officially decided to go to war for “Iraqi Freedom” but while neoconservatives were clamoring for such a war. For neoconservatives to place themselves “in the tradition of Woodrow Wilson,” he wrote, “is truly chilling”:

Many of the countries we are having big trouble with today were created by the Woodrow Wilson policies of nation-building by breaking up empires, under the principle of “self-determination of nations.” Such trouble spots as Iraq, Syria, and Lebanon were all parts of the Ottoman Empire that was dismembered after its defeat in the First World War.

The Balkan cauldron of nations was created by dismembering the defeated Austro-Hungarian Empire. That dismemberment also facilitated Adolph Hitler’s picking off small nations like Czechoslovakia and Austria in the 1930s, without firing a shot, because they were no longer part of a defensible empire.

The track record of nation-building and Wilsonian grandiosity ought to give anyone pause. The very idea that young Americans are once again to be sent out to be shot at and killed, in order to carry out the bright ideas of editorial office heroes, is sickening.42

All of this is true.

But the editorial office heroes denied that they would bring about new debacles — whether in Iraq or in their broader quest to create an “international order” of “democracies.” This time, international collectivism would work; this time, the sacrifices would be worth it, and the desired “international order” would materialize. The reason it would work this time is that these editorial office heroes were “Hard Wilsonians.”

Soft and Deluded Wilsonianism in Iraq

Since neoconservatism counsels military action, not merely in response to threats to America, but also in response to threats to the “international order” — with the aim of improving that “order” and the lives of foreign peoples — it imposes an effectively unlimited obligation on Americans to sacrifice for the “international order” until we achieve the neoconservatives’ triumph of “international democracy” or Bush’s “the end of tyranny in our world.” It would seem straightforward that this would involve years upon years of nation-building exercises, and thus years upon years of terrible burdens borne by Americans.

But the neoconservatives claimed that the burdens of their policy would not be all that great. They thought that their desired “international order” could be brought about without too much sacrifice on the part of Americans — sacrifice that would allegedly be paid for many times over by the ultra-secure world we would achieve thereby. “Hard Wilsonianism,” they said, was an eminently practical policy. Why? Because, they said, with the willingness to use force, and American leadership, “democratic regime change” is far easier than the “cynics” claim — and because successful “interventions” and the spread of “democracy” will deter future aggressors and inspire freedom fighters around the world.

In 2000, Kristol and Kagan wrote of their entire foreign policy of Iraq-like missions that

to create a force that can shape the international environment today, tomorrow, and twenty years from now will probably require spending . . . about three and a half per cent of GDP on defense, still low by the standards of the past fifty years, and far lower than most great powers have spent on their militaries throughout history.43

They conclude this thought by asking, rhetorically: “Is the aim of maintaining American primacy not worth a hike in defense spending from 3 to 3.5 per cent of GDP?” — as if their policies, fully implemented, would not cost many multiples of that — and as if money, and not the irreplaceable lives lost, were the only value being spent.44

Part of the way the neoconservatives and President Bush justify their belief in the ease of “democratic regime change” is to cite the successful American occupations of Japan and Germany. When commentators criticized the viability of Bush’s plan to “democratize” Iraq, the Middle East, and ultimately the whole world, the president pointed to the example of Japan, which previous generations of commentators once said was unfit for proper government. Max Boot uses this same example when he writes that “we need to liberalize the Middle East. . . . And if this requires occupying Iraq for an extended period, so be it; we did it with Germany, Japan and Italy, and we can do it again.”45

But in fact, the examples of Germany and Japan do not vindicate the neoconservative foreign policy; they highlight its crucial vice. Note that these occupations were entirely different from the Iraq “liberation” occupation — the type prescribed by neoconservatism — in both ends and means. Their purpose was to render non-threatening the hostile populations of those countries; it was to serve America’s true “national interest,” not neoconservative “national greatness.” And the most important means by which those occupations produced their desired result was the utter destruction and resulting demoralization that the Allies brought upon the Germans and Japanese.46

Contrast this to the altruistic policy of neoconservatism, which seeks to “liberate,” not defeat, hostile regimes. In the Iraq war, we treated hostile Iraqis with kid gloves and made it our mission to let them elect whatever government they chose, no matter how hostile to America or how friendly to Islamic Totalitarians. To try transforming an enemy nation without first defeating and demoralizing its complicit inhabitants is to invite those inhabitants both to rise up and rebel against America and to feel no fear in empowering even more anti-American leaders.

Another reason neoconservatives cite for the practicality of their policies is that each “intervention” will have a deterrent effect on future threats to America and to other evils — as well as an inspiring effect on good people — so “interventions” become progressively less necessary. Our “intervention” in Iraq, for example, was supposed to deter Iran and Syria and to inspire alleged masses of latent freedom-loving Muslims to democratize their whole region.

The alleged deterrent effect of “interventionism” is one reason Kristol and Kagan write that “a foreign policy premised on American hegemony, and on the blending of principle with material interest, may in fact mean fewer, not more, overseas interventions. . . .”47 Now if an “intervention” means decisively defeating a real threat, then that certainly has a deterrent effect on potential threats, just as appeasement has an emboldening effect. But neoconservatives argue for the deterrent effect of altruistic missions fought with pulled punches.

In the 1990s, neoconservatives made this deterrence argument in favor of a policy of “intervention” in the conflicts in Bosnia and Kosovo. Many opponents of the war objected to intervention because these conflicts, involving the slaughter of ethnic minorities by Serbs, posed no threat to America. But the neoconservatives claimed that in Kosovo

allowing a dictator like Serbia’s Slobodan Milosevic to get away with aggression, ethnic cleansing, and mass murder in Europe would tempt other malign men to do likewise elsewhere, and other avatars of virulent ultra nationalism to ride this ticket to power. Neoconservatives believed that American inaction would make the world a more dangerous place, and that ultimately this danger would assume forms that would land on our own doorstep.48

So, while Milosevic was no direct threat to the United States, the argument goes, it was necessary to deal with him to deter those who are or might become threats to America.

But how was America’s use of its limited military resources to go after a random dictator who posed absolutely no threat to it — treating him as a far greater priority than Iran, North Korea, the Taliban, or the like — supposed to deter Iran or North Korea or the Taliban or Bin Laden or Hussein? No such explanation was given, because none was possible. One does not deter a genuine enemy by picking a weak, irrelevant adversary to beat up on while leaving the genuine enemy be. Such conduct emboldens him, because he concludes that we are not strong enough, or courageous enough, to go after him. If Iran is a real threat, then attacking Serbia suggests to our enemies our lack of focus as well as our lack of moral backbone in going after our real enemies. The way to deter potential threats is to make clear that there is nothing to gain and indeed everything to lose from anti-American aggression (including supporting terrorism or spreading Islamic Totalitarianism).

To say that welfare missions such as our foray into Kosovo deter terrorist nations is like saying that going on a mission to “liberate” South Africa pre-World War II would have prevented the attack on Pearl Harbor or Hitler’s march across Europe.

No practical benefits for American self-defense can materialize from a policy whose central pursuit is American self-sacrifice. If one understands that the neoconservative foreign policy is a self-sacrificial “nationalism” — the goal of which is for Americans to sacrifice, to take a loss for some “higher purpose” — then it should be no surprise that, by the standard of the interests of individual Americans, a war conceived on this philosophy turned out to be a failure. The key thing to understand, however, is that by the standard of neoconservatism, the war has been a success.

Guided by neoconservative altruist-collectivist values, the Bush administration sought and fought a war of self-sacrifice — a war that necessarily failed to accomplish the only thing that can end threats to America: the thorough defeat of the enemies that threaten us. This war instead devoted us to the “national greatness” of endless “sacrifice for the [alleged] freedom of strangers.”

Given the nature of the Islamic Totalitarian threat, a war in Iraq did not have to be self-sacrificial. Iraq, after all, was no Kosovo. It was run by an avowed enemy of the United States who broke his terms of surrender, sponsored anti-American terrorists, and heavily sponsored suicide bombers against our vital strategic ally in the Middle East, Israel.

A war to defeat that regime could have served a valid purpose as a first step in ousting the terrorist-sponsoring, anti-American regimes of the Middle East and thus rendering the region non-threatening. For example, it could have been used to create a strategic base for taking on Iran, our most important enemy to defeat. But such a goal would entail rendering enemy regimes non-threatening, which is not the same as making them free or “democratic.”

But if one’s standard of value is an altruist-collectivist ideal such as the “international order” — or if one seeks to police the “flouting of civilized rules of conduct”49 — then it is possible to do what President Bush did, which is to make Iraq a top priority, to evade the major threat that is Iran, and to set goals that were not oriented toward American self-defense.

Bush went to war with neoconservative, thus altruistic, ends and means. He thereby necessitated a disaster.

In the run-up to the war, President Bush stated not one but three goals in invading Iraq: 1) ending the threat to the United States posed by Saddam Hussein’s support of terrorists, his apparent possession of chemical and biological weapons, and his apparent pursuit of nuclear weapons; 2) “restoring” the “integrity of the U.N.”, which Saddam Hussein had allegedly tarnished by violating seventeen U.N. resolutions; and 3) “liberating” Iraq from the evil tyrant Hussein and furnishing the Iraqi people with a peaceful, prosperous new “democracy.”

In the view of President Bush and the neoconservatives, this combination of self-interested and altruistic goals was ideal; it was an act of selfless service to the world that would also supposedly protect America. But in fact, it was disastrous, because it did not focus America on identifying and eliminating the actual threat in Iraq; rather, it tore us between the contradictory goals of ending a threat and empowering Iraqis to do whatever they want. Combined with tactics designed to protect Iraqis at the expense of American lives, this contradictory combination guaranteed the fiasco that we are witnessing today.

Is it any wonder that our sacrificial objectives and sacrificial tactics have neither deterred our enemies nor inspired freedom-seeking allies — but instead have inspired large populations to elect our enemies into political power? We have seen a definite trend in the rise of Islamic Totalitarianism, the ideology that motivates Islamic terrorists and their strongest supporters — for example, the rise of Hamas in the Palestinian territories, Hezbollah in Lebanon, Ahmadinejad in Iran, and the Muslim Brotherhood in Egypt. Our enemies who were militant before 9/11 are now even more so. Iran and Syria, for instance, continue to support the slaughter of American soldiers in Iraq without fear of consequence, and Iran pursues nuclear weapons to bolster its policy of worldwide Islamic terror.

Given the neoconservative foreign policy’s altruistic ends and means, a war based on them would have to be a disaster. (In the run-up to and early aftermath of the Iraq War, the authors went on record on various occasions predicting this.) No president or secretary of defense or number of troops can make a policy of self-sacrifice yield anything but self-destruction.

But another component of the neoconservative foreign policy has made the Iraq war even more self-destructive — a component that made us pursue the particularly absurd altruistic mission we set ourselves regarding Iraqi governance. It is one thing, on the premise of altruism, to provide a foreign people with “humanitarian” aid or even to kill their reigning dictator in hopes that someone better comes along. It is quite another to try to make a primitive, tribal country into a modern “democracy” — and to expect that mission to protect us by inspiring the other primitive, tribal countries in the region to embrace “democracy” as well.

If one knows anything about the people to whom we are supposed to bring “the end of tyranny in our world,” if one looks at the endless warring tribes and religious factions raised on a philosophy of faith, mindless obedience, and coercion, and if one knows anything about the meaning and preconditions of freedom — one sees that the “Hard Wilsonian” policy is a prescription for endless welfare wars and countless American casualties, not a mere half-percentage-point hike in defense spending.

To the credit of the neoconservatives’ opponents, many of them ridiculed the idea of an easily achieved, thriving Iraqi “democracy” that would inspire spontaneous “democratic” uprisings in the Middle East. And given the results — the triumph of Islamists in Iraq, Afghanistan, Egypt, Lebanon, and the Palestinian Authority — they were right.50

To understand how the neoconservatives were so deluded about Iraq, we must grasp the essence of their political philosophy. For neoconservatives (who are influenced by the writings of philosopher Leo Strauss), politics is the central force influencing and guiding a culture. Thus, the regime a country has is the dominant cause of the direction its culture takes. Consistent with their view of individuals as metaphysically inferior to the collective, neoconservatives believe that the individual is necessarily an ineffectual product of the regime he is brought up in.

Bad regimes, they argue, inculcate in a people bad behavior and norms. If you take the same people and place them under a good regime (i.e., a “democracy”), they will become radically better people. The regime changes the culture. Thus, it is the governing elite, not the people, who ultimately determine the regime and the culture in a given country. If we replace the elite through regime change and help to establish a better elite that is pro-“democracy,” a new, better culture will be born.

Ultimately, according to the neoconservatives, the foundations for any good culture, the sort that a regime must strive to foster, lie in a respect for tradition and a strong role for religion. These are the forces that restrain individuals in every society from pursuing their own “passions” and thus from immorality and anarchy.51

By this standard, Iraq was a promising yet troubled country in need of assistance. It was a tradition-based, religion-oriented society that for decades had been ruled by a cruel, inhumane elite — the Ba’ath Party and Saddam Hussein — an elite that had not been chosen by the people. Do away with that elite, cultivate the local traditions and the religious leaders, and Iraq was ripe for “democracy.” Once Iraqis experienced the wonders of electing their own leaders, once they participated in writing their own constitution, the neoconservatives postulated, Iraq would be transformed. The euphoria they displayed after the January 2005 Iraqi elections and the subsequent approval of a new constitution reflected their sincere belief that Iraq had fundamentally changed for the better. The new regime and the new practices in “democracy” would bring out the best in the Iraqis. Not only would this approach lead to political freedom in Iraq; it would also lead to economic prosperity through the adoption of free markets and to the peaceful coexistence of Iraq with its neighbors. And — in the ultimate payoff for America — this new Iraq would become our ally in the Middle East; it would help us reshape the region and destroy the threat of terrorism forever.

The neoconservative view of the relationship between individuals and regimes — which President Bush holds in an even stronger form, believing that freedom is “written on the soul of every human being” — also explains the plausibility of the idea that bringing “democracy” militarily to one country will likely set off a chain of “democracies” in other countries due to overwhelming civilian demand — thus lessening the need for future military “interventions.” As Bush put it, to raucous applause at the National Endowment for Democracy in late 2003: “Iraqi democracy will succeed — and that success will send forth the news, from Damascus to Teheran — that freedom can be the future of every nation. The establishment of a free Iraq at the heart of the Middle East will be a watershed event in the global democratic revolution.”52

But the view of individuals and regimes that all of this is based on is false.

The truth is that the entrenched philosophy of a people is fundamental to what type of government those people can live under, and a government based on tradition and religion is in total opposition to freedom. (Contrary to the claims of conservatives, America was founded in complete opposition to centuries of religious and statist tradition — opposition that included its revolutionary separation of religion and state.) For example, it would be impossible for Americans, especially 18th-century Americans, to accept the rule of Saddam Hussein. Our forefathers did not submit to tyranny; they courageously rebelled against it in the name of individual rights, against far greater odds than Iraqis faced under Hussein. By contrast, today’s Iraqis, with the primacy they place on mystical dogma and tribal allegiances, are utterly incapable of the respect for the individual and individual rights that define a free society. Their religion and traditions do not facilitate respect for freedom; they make such respect impossible.

The Iraqis are essentially similar in this regard to the other peoples of the Middle East who are subjugated under terrorist states. It is no accident that the Islamic Totalitarian movement that terrorizes us enjoys widespread support throughout the Arab-Muslim world. If it did not, hostile governments would not be able to rally their populations with appeals to that cause.

(As for claims about freedom being “written on the soul of every human being,” this is false. There is no inherent belief in either freedom or anti-freedom — though one could make a far stronger case for an innate hostility toward freedom. Freedom is incredibly rare historically — because its root, a rational, individualistic philosophy, has been so rare.)

As a result of the neoconservatives’ false view of regimes, they take lightly the colossal task of replacing a barbaric nation with a civilized one — in fact, they do not even acknowledge the nation as barbaric. The pitiful peoples of oppressed nations are portrayed as mere victims of bad actors — victims who need merely be “liberated” to go from being members of terrorist states to being good neighbors.

The neoconservatives’ false belief that regimes, not philosophy, are fundamental in determining human action is made worse by the political system they advocate: “democracy.”

When President Bush and the neoconservatives use the term “democracy,” they act as if the term refers more or less to the type of government we have in the United States. Thus, the term “Iraqi democracy,” at least prior to its implementation, conjured up images of a nation with civilized courts, rule of law, respect for individual rights (including those of racial minorities), a prosperous, free-market economy, separation of church and state, and so on.

But the literal meaning of “democracy” — and the meaning applied in the actual carrying out of “Iraqi democracy” — is unlimited majority rule. “Democracy” refers to the system by which ancient Athenians voted to kill Socrates for voicing unpopular ideas. In 1932, the German people “democratically” voted the Nazi Party into the largest bloc in their parliament, clearing the path for Adolf Hitler’s chancellorship. “Democracy” and liberty are not interchangeable terms; they are in fact antithetical. The distinctively American, pro-liberty principle of government is the principle of individual rights — which, to be upheld in a given society, requires a constitution that specifically protects these rights against the tyranny of the majority.53

Neoconservatives are unabashed promoters of “democracy,” while knowing that it is not America’s system of government and that it was opposed by the Founding Fathers of this country. As Joshua Muravchik writes, “This is the enthusiasm for democracy. Traditional conservatives are more likely to display an ambivalence towards this form of government, an ambivalence expressed centuries ago by the American founders. Neoconservatives tend to harbor no such doubts.”54

The practical justification for “spreading democracy” is that “democracies don’t start wars,” and thus to promote “democracy” is to promote our long-term security. But that idea is a dangerous half-truth. “Democracies,” in the literal sense, do attack other countries. To take a modern example, observe the elected Hamas government whose fundamental goal is to exterminate Israel. Or observe the triumph of the Supreme Council for the Islamic Revolution in Iraq and of Moqtada al-Sadr in Iraq’s “democratic” political process.

What gives plausibility to the notion that “democracies don’t start wars” is the fact that free nations do not start wars. This truth was elaborated by Ayn Rand in her landmark essay, “The Roots of War,” reprinted in her anthology Capitalism: The Unknown Ideal.55 But a free society is not simply one that holds elections — it is one that holds elections as a delimited function to select officials who must carry out, and cannot contradict, a constitution protecting individual rights.

To the extent that it is necessary for America’s national security to occupy a given country, an understanding about the relationship between voting, freedom, and aggression is imperative. Because the neoconservatives and President Bush lack such an understanding, we have been treated to the spectacle of an Iraqi “democracy” in which “Islam is a basic source of legislation” and “No law may contradict the undisputed principles of Islam.”56 We have a “democracy” that is dangerously close to being a puppet or clone of the theocracy of Iran — an enemy we will have created on the grounds that “democracies don’t start wars.”

Holding the false view that freedom equals “democracy,” and clinging to the fiction of the noble Mideast Muslims, we have abetted and applauded these freedom haters as they have voted themselves toward terrorist theocracy. And we have promoted elections around the Middle East as the solution to the threats these nations pose, as if the people are civilized and friendly toward America but just “happen” to be under despotic rule. The results of these elections, which have empowered Islamic Totalitarians or their close allies in the Palestinian territories, Egypt, and Lebanon, are a testament to how deluded the neoconservative advocacy of “spreading democracy” is.

To add offense to this destruction, in responding to criticisms of Mideast “democracy,” Bush administration members and neoconservative intellectuals have the gall to counter that American “democracy” has had problems, too. “Working democracies always need time to develop, as did our own,” says Bush, who calls on us to be “patient and understanding as other nations are at different stages of this journey.” Thus, the American “stage” of the Jefferson-Hamilton debates and the Iraqi “stage” of Sadr vigilante executions are rendered equivalent: two peas in a “democratic” pod.

The Realistic Moral Alternative: A Morality of Self-Interest

The basic reason for the failure of the neoconservative foreign policy is that it is a thoroughly altruistic, self-sacrificial foreign policy, and American self-defense is incompatible with self-sacrifice. Importantly, however, this analysis is not limited to the policy of the neoconservatives; it applies equally to the allegedly opposite policy of “realism.”

In our earlier discussion of “realism,” we focused on that doctrine’s view of nations as “rational” actors and its view of “diplomacy” as the foreign policy cure-all. Part of this policy’s failure derives from its short-range mentality, which views America’s “national interest” within the time frame of politicians’ terms in office, a mentality that is always willing to kick the can down the road. But equally important is the policy’s thoroughly altruistic moral base. This may seem strange to those familiar with “realism,” because one tenet of that doctrine is that a nation should reject moral considerations in foreign policy and instead concern itself solely with its “vital interests.” In the “realist” view, moral considerations — moral ideals, moral restrictions, moral judgments of good and evil — get in the way of dealing with “practical reality.”

For the “realist,” in any given situation, everything is theoretically on the table, to be accepted or rejected depending on whether it will “work” to achieve the “national interest.” Moral principles cannot be permitted to get in the way; one must be “pragmatic,” not an “ideologue.”

But this is nonsense. To pursue “practicality” divorced from morality is impossible. Any claim that a course of action is “practical” presupposes some basic end that the course of action aims to achieve. For example, any claim that “diplomacy” with Iran is practical, or that democratic “regime change” is practical, presupposes some basic goal — whether achieving the approval of others, or establishing “stability” in the Middle East, or winning “hearts and minds,” or fulfilling our duty to “improve the world’s condition,” or maintaining the status quo, or eliminating the Iranian threat. The question of what basic ends one should pursue in foreign policy cannot be escaped in any judgment of practicality — and it is a moral question.

Because “realism” rejects the need for moral evaluation, and because the need for moral evaluation cannot be escaped, its advocates necessarily take certain goals for granted as “obviously” practical — and reject others as “obviously” impractical. Which goals do they take for granted as good? Goals consistent with altruism and collectivism — such as winning positive “world opinion,” or building coalitions as an end in itself.

They will not consider any truly self-interested goals or means of achieving them — for example, ending state sponsorship of terrorism through devastating military action. To propose such an alternative to them would bring a flood of practical rationalizations — “We can’t go to war with the whole world”; “What about the allies?”; “That will just ‘radicalize’ more potential terrorists.” But all such objections evade the fact that such wars historically have ended the threats. This fact is ignored by the so-called “realists” because their opposition to such wars is rooted in their acceptance of altruism.

Take the example of former secretary of state Colin Powell, a prominent “realist” about whom we wrote in “‘Just War Theory’ vs. American Self-Defense”:

Does he call for America’s unequivocal, uncompromising self-defense using its full military might, since that would be eminently practical in achieving America’s self-interest? No. Instead, when he ran the State Department, he sought to avoid war, to appease any and every enemy, to court “world opinion,” to build coalitions, to avoid civilian casualties — while at the same time somehow to protect America. In other words, he did everything that pacifism and Just War Theory would have him do. While Powell and his ilk may say that they eschew moral analysis in matters of foreign policy and war, altruism nevertheless shapes what they think and seek to do.57

Since “realists” cannot conceive of doing what is truly practical in regard to threats, and since they reject explicitly altruistic missions of the neoconservative variety, they are left with only the option of ignoring or appeasing threats. This dereliction of responsibility makes more plausible the neoconservative idea that we need to be the world’s policeman — since the most prominent alternative is to be a negligent, passive, doughnut-munching American policeman.

But this is a false alternative.

The antidote to both of these disastrous options is to truly embrace the virtues that the neoconservatives claim to embrace — such as thinking long range, wide range, and morally about America’s interests — but to make our moral standard American self-interest — that is, the individual rights of Americans. If America is to have a future of freedom and security, this must be the supreme and ruling goal of American defense. (What such a standard means and why it is morally correct was a major theme of our essay “‘Just War Theory’ vs. American Self-Defense.”) This is the moral perspective needed to defeat Islamic Totalitarianism — a moral perspective that truly values American lives and liberty.

So long as we evaluate the question of where to go in foreign policy by the standards of the two leading altruist foreign policies — in terms of how many more troops, or whom to hold talks with, or how many U.N. Resolutions to pass — we will continue to lose. We need to jettison the corrupt moral framework of “realism” and neoconservatism, and adopt one in which American self-defense is the sole concern and standard of value — in which we take a long-range, principled, selfish approach to our self-defense.

In the wake of neoconservatism’s fall from grace, we must make clear that there is another alternative. Our true national interest, our lives and our freedom, depend on it.


Endnotes

Acknowledgment: The authors would like to thank Onkar Ghate, senior fellow of the Ayn Rand Institute, for his invaluable editorial assistance with this project.

1 George W. Bush, Second Presidential Debate, October 11, 2000, http://www.debates.org/pages/trans2000b.html.

2 Ibid.

3 Office of the Press Secretary, “State of the Union: A Strong America Leading the World,” January 31, 2006, http://www.whitehouse.gov/news/releases/2006/01/20060131-8.html.

4 William Kristol and Robert Kagan, “Introduction: National Interest and Global Responsibility,” Present Dangers: Crisis and Opportunity in American Foreign Policy (San Francisco: Encounter Books, 2000), p. 4.

5 J. Bottum, for the Editors, “A Nation Mobilized,” Weekly Standard, September 24, 2001, p. 8 (PDF edition).

6 George W. Bush, Address to a joint session of Congress, September 20, 2001, http://www.whitehouse.gov/news/releases/2001/09/20010920-8.html.

7 http://www.pbs.org/newshour/bb/terrorism/july-dec01/wide_war.html.

8 George W. Bush, Forward Strategy of Freedom speech—President Bush Discusses Freedom in Iraq and Middle East at the 20th Anniversary of the National Endowment for Democracy, U.S. Chamber of Commerce, Washington, DC, November 6, 2003, http://www.whitehouse.gov/news/releases/2003/11/20031106-2.html.

9 Charles Krauthammer, “The Neoconservative Convergence,” Commentary, July/August 2005.

10 “Bush calls end to ‘major combat,’” CNN.com, May 2, 2003, http://www.cnn.com/2003/WORLD/meast/05/01/sprj.irq.main/.

11 Weekly Standard, April 21, 2003, p. 40.

12 Barry Goldwater, The Conscience of a Conservative (Shepherdsville, KY: Victor Publishing Co., 1960; reprint, Washington, DC: Regnery Gateway, Inc., 1990), p. 11 (page reference is to reprint edition).

13 Irving Kristol, Reflections of a Neoconservative: Looking Back, Looking Ahead (New York: Basic Books, 1983), p. 116; Kristol, Two Cheers for Capitalism (New York: Basic Books, 1979), p. 119.

14 Irving Kristol, “Socialism: An Obituary for an Idea,” Reflections of a Neoconservative: Looking Back, Looking Ahead (New York: Basic Books, 1983), pp. 116–17.

15 Ayn Rand, “The Fascist New Frontier,” speech delivered at the Ford Hall Forum, 1962; reprinted in The Ayn Rand Column, p. 99.

16 William Kristol and David Brooks, “What Ails Conservatism,” Wall Street Journal, September 15, 1997.

17 Ibid.

18 David Brooks, “A Return to National Greatness: A Manifesto for a Lost Creed,” The Weekly Standard, March 3, 1997.

19 J. Bottum, for the Editors, “A Nation Mobilized,” Weekly Standard, September 24, 2001, p. 8 (PDF edition).

20 Kristol and Kagan, “Introduction,” p. 4.

21 Ibid., p. 23.

22 Ibid.

23 Kristol and Kagan, Present Dangers, p. 83.

24 Max Boot, “The Case for American Empire,” Weekly Standard, October 15, 2001, p. 30.

25 Woodrow Wilson speech to Congress, April 2, 1917, http://historymatters.gmu.edu/d/4943/.

26 Angelo M. Codevilla, “Some Call it Empire,” Claremont Review of Books, Fall 2005, http://www.claremont.org/publications/crb/id.842/article_detail.asp.

27 Max Boot, “What the Heck Is a ‘Neocon’?” Wall Street Journal, December 30, 2002.

28 Mark Gerson, The Neoconservative Vision: From the Cold War to the Culture Wars (Lanham: Madison Books, 1997), p. 181.

29 Kristol and Kagan, “Introduction,” p. 15.

30 Wilson speech, April 2, 1917.

31 Speech delivered by President Bush at the National Endowment for Democracy on November 6, 2003. http://www.whitehouse.gov/news/releases/2003/11/20031106-2.html.

32 Max Boot, The Savage Wars of Peace (New York: Basic Books, reprint ed., 2003), p. 350.

33 Ibid., p. 342.

34 Kristol and Kagan, “Introduction,” p. 16.

35 Boot, “American Empire,” pp. 27–28.

36 Ibid., p. 27.

37 George W. Bush, State of the Union Address, February 2, 2005.

38 Thomas Sowell, “Pacifists vs. Peace,” RealClearPolitics, July 21, 2006, http://www.realclearpolitics.com/articles/2006/07/pacifists_versus_peace.html.

39 J. Bottum, for the Editors, “A Nation Mobilized,” Weekly Standard, September 24, 2001, p. 8 (PDF edition).

40 For further elaboration and explanation on this point, see Yaron Brook and Alex Epstein, “‘Just War Theory’ vs. American Self-Defense,” The Objective Standard, Spring 2006, p. 44.

41 Stephen Hayes, “Beyond Baghdad,” The Weekly Standard, April 21, 2003, p. 14.

42 Thomas Sowell, “Dangers ahead—from the Right,” editorial, Jewish World Review, January 6, 2003, http://www.jewishworldreview.com/cols/sowell010603.asp.

43 Kristol and Kagan, “Introduction,” p. 15.

44 Ibid.

45 Max Boot, “‘Neocon’.”

46 For an excellent elaboration on this point, see John Lewis, “No Substitute for Victory: The Defeat of Islamic Totalitarianism,” The Objective Standard, Winter 2006.

47 Kristol and Kagan, “Introduction,” p. 13.

48 Joshua Muravchik, “The Neoconservative Cabal,” Commentary, September 2003.

49 Kristol and Kagan, “Introduction,” p. 16.

50 For a detailed discussion of Bush’s failed “Forward Strategy for Freedom,” see Yaron Brook and Elan Journo, “The Forward Strategy for Failure,” The Objective Standard, Spring 2007.

51 For a discussion of this point, see Irving Kristol, “An Autobiographical Memoir,” in Neoconservatism: The Autobiography of an Idea (Chicago: Elephant Paperbacks, 1999), p. 8.

52 Bush, Forward Strategy of Freedom speech.

53 For further discussion of this point, see Yaron Brook and Elan Journo, “The Forward Strategy for Failure,” The Objective Standard, Spring 2007.

54 Muravchik, “Neoconservative Cabal.”

55 Ayn Rand, “The Roots of War,” Capitalism: The Unknown Ideal (New York: Signet, 1967), pp. 35–44.

56 Full Text of Iraqi Constitution, courtesy of the Associated Press, October 12, 2005, http://www.washingtonpost.com/wp-dyn/content/article/2005/10/12/AR2005101201450.html.

57 Yaron Brook and Alex Epstein, “‘Just War Theory’ vs. American Self-Defense,” The Objective Standard, Spring 2006, p. 44.

The “Forward Strategy” for Failure

by Yaron Brook and Elan Journo | Spring 2007 | The Objective Standard

Authors’ note: This essay is partially based on the lecture “Democracy vs. Freedom” that Yaron Brook delivered on September 12, 2006, in Irvine, CA, and on October 22, 2006, at the Ford Hall Forum in Boston, MA.

A Strategy for Security?

The attacks of 9/11 exposed the magnitude of the threats we face, and, ever since then, one question has become a depressing fixture of our lives: Are we safe? Scarcely two years ago, many Americans believed that our salvation was imminent, for the means of achieving our security was at hand; no longer would we have to live in dread of further catastrophic attacks. These people were swept up in euphoric hope inspired by the Bush administration’s new strategy in the Middle East. The strategy promised to deliver permanent security for our nation. It promised to eradicate the fundamental source of Islamic terrorism. It promised to make us safe.

The strategy’s premise was simple: “[T]he security of our nation,” President Bush explained, “depends on the advance of liberty in other nations”;1 we bring democracy to the Middle East, and thereby make ourselves safer. To many Americans, this sounded plausible: Western nations, such as ours, are peaceful, since they have no interest in waging war except in self-defense; their prosperity depends on trade, not on conquest or plunder; and the more such nations in the world, the better off we would be. Informally, Bush called this idea the “forward strategy for freedom.”2

By January 2005, an early milestone of this strategy was manifest to all. Seemingly every news outlet showed us the images of smiling Iraqis displaying their ink-stained fingers. They had just voted in the first elections in liberated Iraq. Those images, according to breathless pundits, symbolized a momentous development.

Commentators saw reason to believe Bush’s grandiose prediction of 2003, when he declared: “Iraqi democracy will succeed — and that success will send forth the news, from Damascus to Teheran — that freedom can be the future of every nation. The establishment of a free Iraq at the heart of the Middle East will be a watershed event in the global democratic revolution.”3 At the summit of the Arab League in 2004, according to Reuters, Arab heads of state had “promised to promote democracy, expand popular participation in politics, and reinforce women’s rights and civil society.”4 By the spring of 2005, several Arab regimes had announced plans to hold popular elections.

Even confirmed opponents of Bush applauded the strategy. An editorial in the New York Times in March 2005, for example, declared that the “long-frozen political order seems to be cracking all over the Middle East.” The year so far had been full of “heartening surprises — each one remarkable in itself, and taken together truly astonishing [chief among them being Iraq’s elections and the prospect of Egyptian parliamentary elections]. The Bush administration is entitled to claim a healthy share of the credit for many of these advances.”5 Senator Edward Kennedy (of all people) felt obliged to concede, albeit grudgingly, that “What’s taken place in a number of those [Middle Eastern] countries is enormously constructive,” adding that “It’s a reflection the president has been involved.”6

Washington pursued the forward strategy with messianic zeal. Iraq has had not just one, but several popular elections, as well as a referendum on a new constitution written by Iraqi leaders; with U.S. endorsement and prompting, the Palestinians held what international monitors declared were fair elections; and Egypt’s authoritarian regime, under pressure from Washington, allowed the first contested parliamentary elections in more than a decade. Elections were held as well in Lebanon (parliamentary) and Saudi Arabia (municipal). In sum, these developments seemed to indicate a salutary political awakening. The forward march toward “liberty in other nations” seemed irresistible and “the security of our nation,” inevitable.

But has the democracy crusade moved us toward peace and freedom in the Middle East — and greater security at home?

Consider three elections and their implications for the region.

The elections in Iraq were touted as an outstanding success for America, but the new Iraqi government is far from friendly. It is dominated by a Shiite alliance led by the Islamic Daawa Party and the Supreme Council for Islamic Revolution in Iraq (SCIRI). The alliance has intimate ties with the first nation to undergo an Islamic revolution, Iran. Both Daawa and SCIRI were previously based in Iran, and SCIRI’s leader has endorsed Lebanese Hezbollah, a terrorist proxy for Iran.7 Teheran is thought to have a firm grip on the levers of power within Iraq’s government, and it actively arms and funds anti-American insurgents. The fundamental principle of Iraq’s new constitution — as of Iran’s totalitarian regime — is that Islam is inviolable.

Instead of embracing pro-Western leaders, Iraqis have made a vicious Islamic warlord, Moqtada al-Sadr, one of the most powerful men in Iraqi politics. Although Sadr has not run for office, his bloc holds thirty seats in Iraq’s assembly, controls two ministries, and wields a decisive swing vote: Iraq’s current prime minister, Nuri al-Maliki, and his predecessor, Ibrahim Jaafari, both owe their jobs to Sadr’s support. Sadr (who is wanted by Iraqi authorities for murder) is vociferously anti-American, favors Iranian-style theocratic rule, and has vowed to fight in defense of Iran.

Sadr has a private militia, the Mahdi Army, through which he has repeatedly attacked American forces. One of the fiercest encounters was in 2004 in Najaf. Confronted by U.S. forces, Sadr’s militiamen entrenched themselves in a holy shrine. But the standoff ended when Grand Ayatollah Sistani, the leading Shiite cleric in Iraq, interceded on Sadr’s behalf. Washington capitulated for fear of upsetting Shiites and let the militia go (officials no longer talk of arresting Sadr for murder). Since that standoff, the Mahdi Army has swollen nearly threefold to an estimated fifteen thousand men and, according to a Pentagon report, it has surpassed Al Qaeda in Iraq as “the most dangerous accelerant” of the sectarian violence.8

Emancipated from Hussein’s tyranny, a large number of Iraqis embraced the opportunity to tyrannize each other by reprising sadistic feuds (both sectarian and ethnic) — and to lash out at their emancipators, the American forces. The insurgency, which has attracted warriors from outside Iraq, is serving as a kind of proving ground where jihadists can hone their skills. According to news reports, Lebanese Hezbollah has been training members of the Mahdi Army in Lebanon, while some Hezbollah operatives have helped with training on the ground in Iraq.9 The new Iraq has become what the old one never was: a hotbed of Islamic terrorism. It is a worse threat to American interests than Saddam Hussein’s regime ever was.

Consider the election results in the Palestinian territories. For years, Bush had asked Palestinians “to elect new leaders, . . . not compromised by terror.”10 And, finally, in the U.S.-endorsed elections of January 2006, the Palestinians did turn their backs on the cronies of Yasser Arafat; they rejected the incumbent leadership of Fatah — and elected the even more militant killers of Hamas: an Islamist group notorious for suicide bombings. Hamas won by a landslide and now rules the Palestinian territories.

Refusing to recognize Israel’s legitimacy, Hamas is committed to annihilating that state and establishing a totalitarian Islamic regime. In the previous year, Hezbollah took part in the U.S.-endorsed elections in Lebanon, formed part of that country’s cabinet for the first time, and won control of two ministries.11 In the summer of 2006, the Iranian-backed Hamas and Hezbollah killed and kidnapped Israeli soldiers — and precipitated a month-long war in the region. Since the ceasefire that ended the war, Hezbollah has continued to amass weapons and foment terrorism, emboldened by its popular electoral support.

Consider, as a final example of the trend, the 2005 parliamentary elections in Egypt, the Arab world’s most populous country. The group that scored the most impressive gains was the Muslim Brotherhood — the intellectual origin of the Islamist movement, whose offshoots include Hamas and parts of Al Qaeda. The Brotherhood’s founding credo is “Allah is our goal; the Koran is our constitution; the Prophet is our leader; Struggle is our way; and death in the path of Allah is our highest aspiration.”12

The Brotherhood’s electoral success was staggering. Although the group is officially banned in Egypt, its candidates won eighty-eight seats — about 20 percent — in Egypt’s assembly, and became the largest opposition bloc the body has ever had.13 This was all the more significant considering the regime’s brutal attempts to protect its grip on power. During one round of voting, the New York Times reports, “police officers in riot gear and others in plainclothes and armed civilians working for the police began blocking polling stations, preventing supporters of the Brotherhood from casting their votes.” Dozens were injured, and several people died from gunshots to the head.14 Some observers reckon that the Brotherhood could have won even more power if it had not limited itself to running 125 candidates (it did so, presumably, to avoid an even tougher government crackdown).

The Muslim Brotherhood, Hamas, Lebanese Hezbollah, the Islamist regime in Iran, the Mahdi Army, Al Qaeda — these are all part of an ideological movement: Islamic Totalitarianism. Although differing on some details and in tactics, all of these groups share the movement’s basic goal of enslaving the entire Middle East, and then the rest of the world, under a totalitarian regime ruled by Islamic law. The totalitarians will use any means to achieve their goal — terrorism, if it proves effective; all-out war, if they can win; and politics, if it can bring them power over whole countries.

Bush’s forward strategy has helped usher in a new era in the Middle East: By its promotion of elections, it has paved the road for Islamists to grab political power and to ease into office with the air of legitimacy and without the cost of bombs or bullets. Naturally, totalitarians across the region are encouraged. They exhibit a renewed sense of confidence. The Iran-Hamas-Hezbollah war against Israel last summer is one major symptom of that confidence; another is Iran’s naked belligerence through insurgent proxies in Iraq, and its righteously defiant pursuit of nuclear technology.

The situation in the Middle East is worse for America today than it was in the wake of 9/11. Iraq is a bloody fiasco. The chaos in Iraq makes it a haven for anti-American terrorists. Iran’s influence in Iraq and in the region is growing. Saudi Arabia, along with five other Arab states, announced its intention to pursue nuclear technology. In Lebanon, thousands of people have taken part in massive street demonstrations demanding greater power for Hezbollah in the government. The Hamas regime, though starved of Western aid, remains in power, and Palestinians continue to fire rockets at Israeli towns.

A further effect of the elections in the region has been the invigoration of Islamists in Afghanistan. Legions of undefeated Taliban and Al Qaeda warriors in that country have regrouped and renewed their jihad. Flush with money, amassing recruits, and armed with guns, rockets, and explosives, they are fighting to regain power. They have mounted a string of massive suicide bombings and rocket attacks against American and NATO forces; more U.S. troops died in Afghanistan during 2005 and 2006 than during the peak of the war.15 With astounding boldness, the Taliban have assassinated clerics and judges deemed friendly to the new government, and fired rockets at schools for using “un-Islamic” books. The Taliban have effectively taken over certain regions of the country.16

Jihadists continue to carry out and plot mass-casualty atrocities against the West. In 2004 they bombed commuter trains during rush hour in Madrid. The next summer, suicide bombers blew themselves up on London’s underground. In August 2006, British police foiled a plot to set off a wave of bombings on trans-Atlantic airliners. British authorities recently disclosed that they were tracking two hundred cells involving more than sixteen hundred individuals who were “actively engaged in plotting or facilitating terrorist acts here and overseas.”17 The question now is not if there will be another catastrophic attack, but only when.

By any objective assessment, the forward strategy is a dismal failure. What went wrong?

Some commentators, particularly so-called “realists” in foreign policy, have condemned the strategy as intrinsically unworkable. In January 2007, Dimitri Simes, publisher of The National Interest, argued: “The debacle that is Iraq reaffirms the lesson that there is no such thing as a good crusade. This was true a thousand years ago when European Christian knights tried to impose their faith and way of life on the Holy Land, . . . and it is equally true today. Divine missions and sensible foreign policy just don’t mix.” Inspiring the Bush administration’s crusade is the (purported) “true calling of spreading liberty throughout the world, even at the barrel of a gun.”18 On this view, Bush’s strategy was driven by an ideal — spreading democracy — and that idealism is what made it impractical. This complaint was also voiced early on in the war. About six months before Iraq’s first elections were held, amid continuing insurgent attacks, Anthony Cordesman, a defense analyst writing in the New York Times, bluntly summed up this line of thinking: “What we need now is pragmatism, not ideology.”19

The “realist” critique flows from a rejection of “the assumption that state behavior is a fit subject for moral judgment” (as diplomat George Kennan once noted of this outlook).20 The ideology and character of a regime are irrelevant to how we should act toward it; distinguishing between friends and foes is pointless. Practicality (i.e., achieving U.S. security) requires amoral diplomatic deals. That implies that we should talk and make deals with any regime, however monstrous or hostile.

“Realists” urge action divorced from moral principles, but history demonstrates that such a policy is suicidal. Recall that, in compliance with “realism,” Washington backed jihadist forces, despite their perverse ideals, in the fight against the Soviets in Afghanistan — jihadists who, in keeping with their ideology, later turned their sights on the United States. The same amoralism animated the British in the 1930s. Britain disregarded Hitler’s stated ambition and his vicious ideology (set out in Mein Kampf and broadcast at mass rallies throughout Germany), and agreed to a “land for peace” deal. Given Hitler’s goals, the deal predictably encouraged his belligerence, and so the Nazi war machine proceeded to enslave and exterminate millions of human beings.

The disasters of “realism” underscore the need for moral ideals in foreign policy, and they show that the “realist” explanation for the failure of Bush’s strategy is false.

Consider another, increasingly prevalent, explanation for what went wrong — the idea that Bush’s strategy is a good idea that was poorly implemented. Proponents of this view believe that the problem is not Bush’s goal of spreading democracy, which they regard as a noble ideal worth pursuing, but rather the administration’s failure to pursue this goal properly. For example, Michael Rubin of the American Enterprise Institute laments that “Instead of securing Iraq’s borders, the Bush administration accepted Syrian and Iranian pledges of non-interference.”21 Max Boot, a columnist for the Los Angeles Times and a fellow at the Council on Foreign Relations, is a supporter of Bush’s strategy, but acknowledges numerous ways in which the mission was botched, including: “the lack of pre-invasion diplomacy, the lack of post-invasion planning, the lack of ground troops, the lack of intelligence, the lack of coordination and oversight, the lack of armor, the lack of electricity. . . .”22

The concrete means were supposedly inadequate or badly implemented. The strategy could be made to work, if we could shrewdly tinker with troop levels, border security, the training of Iraqi police, and so on, and if we could install a competent secretary of Defense to see to it that the strategy is implemented properly.

But none of these adjustments, nor any others, would have averted the disaster wrought by the forward strategy. The problem does not lie with a shortage of resources or blunders in executing the strategy. The problem lies with the strategy’s basic goal, whose legitimacy critics fail to challenge.

The strategy has failed to make us safer because making us safer was never its real goal. Its actual goal is mandated by the corrupt moral ideal driving the strategy.

What, then, is the actual goal of the strategy?

A Forward Strategy for . . . What End?

Let us begin by considering what the strategy’s putative goal would have required.

Suppose that on September 12, 2001, Bush’s strategists had asked themselves the following: What steps are necessary to make American lives safe, given the lethal threat of Islamic terrorism?

The rational answer: We must defeat the enemy.

When foreign aggressors are diligently working to slaughter Americans, our government is obligated to use retaliatory force to eliminate the threat permanently. This is what it must do to completely restore the protection of the individual rights of Americans. Defeating the enemy is necessary to bring about a return to normal life — life in which Americans are free to produce and thrive without the perpetual dread of terrorist atrocities.

Making the enemy permanently non-threatening is the objective measure of success in war. Recall, for example, our last indisputably successful war — World War II. By 1945, the air attacks ended, ground combat ended, naval battles ended; the war was over — because the Allied powers defeated Nazi Germany and Imperialist Japan. The threat was over. People in the West rejoiced and began returning to their normal lives.

The Allied powers achieved victory because they committed themselves to crushing the enemy. They understood that the enemy was Nazism and Japanese imperialism and that the political manifestations of these ideologies had to be stopped. They also understood, at least implicitly, that merely assassinating Hitler or Japan’s emperor Hirohito would not be enough, because the people of Germany and Japan supported the goals of their regimes (after all, Hitler was democratically elected and Hirohito was a venerated ruler). Victory required a punishing military onslaught not only to stop the enemy’s war machine, but also to demoralize its supporters.

The Allies inflicted the pain of war so intensely that the enemy laid down its arms and abandoned its cause — permanently. They flattened German cities, pulverized factories and railroads, devastated the country’s infrastructure. The campaign against Japan likewise sought to break the enemy’s will to fight. On one day of extremely fierce combat, for example, U.S. bombers dropped five-hundred-pound incendiary clusters every fifty feet. “Within thirty minutes,” one historian writes,

a 28-mile-per-hour ground wind sent the flames roaring out of control. Temperatures approached 1,800 degrees Fahrenheit. . . . [General Curtis LeMay] wished to destroy completely the material and psychological capital of the Japanese people, on the brutal theory that once civilians had tasted what their soldiers had done to others, only then might their murderous armies crack. Advocacy for a savage militarism from the rear, he thought, might dissipate when one’s house was in flames. People would not show up to work to fabricate artillery shells that killed Americans when there was no work to show up to. . . . The planes returned with their undercarriages seared and the smell of human flesh among the crews. Over 80,000 Japanese died outright; 40,918 were injured; 267,171 buildings were destroyed. One million Japanese were homeless.

The fire in Tokyo, the empire’s center, burned for four days; the glow of the inferno could be seen from one hundred and fifty miles away.23

To defeat Japan thoroughly, however, required even more: To cut short the war and save untold thousands of American lives, the United States dropped atomic bombs on Hiroshima and Nagasaki. The bombs laid waste to vast tracts of land, killed thousands of Japanese — and demonstrated that if Japan continued to threaten America, thousands more Japanese would suffer and die.

That overwhelming and ruthless use of force achieved its intended purpose. It ended the threat to the lives of Americans and returned them to safety — by demonstrating to the Germans and the Japanese that any attempt to implement their vicious ideologies would bring them only destruction. Defeated materially and broken in spirit, these enemies gave up. Since then Nazism and Japanese imperialism have withered as ideological forces.

Today, American self-defense requires the same kind of military action.

We are not in some “new kind of conflict” that must drag on for generations. As in World War II, the enemy we must defeat is an ideological movement: Islamic Totalitarianism. Just as the Nazis sought to dominate Europe and then the world, so the Islamists dream of imposing a global caliphate. To them, Western secularism — and America in particular — constitutes an obstacle to the expansion of Islam’s dominion and must be extirpated by force. The attacks of 9/11 were the culmination, so far, of a long succession of deadly strikes against us. The supposedly “faceless, stateless” terrorists are part of the totalitarian movement. They are motivated to fight and able to kill, because they are inspired and armed by regimes that back the movement and embody its ideal of Islamic domination. Chief among them are Iran and Saudi Arabia. Without Iran’s support, for example, legions of Hamas and Hezbollah jihadists would be untrained, unarmed, unmotivated, impotent.24

Victory today requires destroying regimes that provide logistical and moral support for Islamic Totalitarianism. An overwhelming show of force against Iran — and the promise to repeat it against other hostile regimes — would do much to end support for the Islamist movement, for it would snuff out the movement’s beacon of inspiration. Nearly thirty years after its Islamic revolution, Iran is brazenly chasing nuclear weapons and threatening the world’s most powerful nation. To many Muslims, Iran symbolizes the power of totalitarian Islam to overcome irreligious regimes (such as that of the Shah) and to reshape the geopolitical landscape.

A war against Islamic totalitarians must target not just the leadership of hostile regimes; it must also demoralize the movement and its many supporters, so that they, too, abandon their cause as futile. The holy warriors are able to train, buy arms, hide their explosives, plan and carry out their attacks, only because vast numbers of Muslims agree with their goals. These supporters of jihad against the West who cheer when Americans die; who protect, support, and encourage the terrorists lusting to kill us; who are accomplices to mass murder; who urge their children to become “martyrs” in the path of Allah — they must experience a surfeit of the pain that their jihad has visited upon us. We must demonstrate to them that any attempt to perpetuate their cause will bring them personal destruction; they must be made to see that their cause is manifestly unachievable, hopelessly lost.25

This is how the Japanese were forced to renounce their cause. Having been abjectly humiliated, they did not rampage in the streets nor launch an insurgency; by crushing them, we did not create new enemies. An observation by General Douglas MacArthur, the commander in charge of occupied Japan, points to the reason why. At the end of the war, the Japanese

suddenly felt the concentrated shock of total defeat; their whole world crumbled. It was not merely the overthrow of their military might — it was the collapse of a faith, it was the disintegration of everything they had believed in and lived by and fought for. It left a complete vacuum, morally, mentally, and physically.

That collapse, disintegration, and vacuum is what we need to effect among the myriad supporters of Islamic Totalitarianism in the Middle East. (Notice that the vacuum left by Japan’s defeat cleared the way for the country to embrace rational values and build a new, peaceful regime. The Japanese, observes one writer, were “in a mood to question everything to which they had been loyal,” while “everything the Americans did was food for thought.”)26

There are many tactical options in prosecuting such a war, but, whatever the specifics, such a war is necessary to defeat Islamic Totalitarianism and end the threat it presents to American lives.

The forward strategy of freedom, however, called for something completely different. At no point — not even in the wake of 9/11 — did Bush declare his willingness to inflict serious damage on our enemies in the Middle East (whom Washington evasively calls “terrorists”). At every opportunity he took pains to assuage the grumbling of the “Arab Street” and the international community by affirming that our quarrel is only with the (allegedly) tiny minority of “radicals,” not with the vast majority of Muslims who (supposedly) reject the jihadists. Instead of aiming to defeat our enemies, the strategy’s fanciful goal was to replace the Taliban and Saddam Hussein with democratic regimes. Explaining his strategy, President Bush stated:

[W]e’re advancing our security at home by advancing the cause of freedom across the world, because, in the long run, the only way to defeat the terrorists is to defeat their dark vision of hatred and fear by offering the hopeful alternative of human freedom. . . . [T]he security of our nation depends on the advance of liberty in other nations.27

Pared to its essentials, the strategy’s rationale comes to this. We have just two options: Either we bring elections to them (and somehow become safer) — or they annihilate us.

Imagine that this strategy had guided America in World War II. Suppose that after the attack on Pearl Harbor, Americans were told that security would come, not by fighting the enemy until its unconditional surrender, but by deposing Hitler and Hirohito, and then setting up elections for their formerly enslaved people. What do you think would have happened? Would there have been any reason to believe that the Germans would not have elected another Nazi, or the Japanese another imperialist?

Precisely because this approach was not taken, and precisely because the Allied forces waged a vigorously assertive war to destroy the enemy, World War II was won decisively within four years of Pearl Harbor. Yet more than five years after 9/11, against a far weaker enemy, our soldiers still die every day in Iraq. And now the Islamists confidently believe that their ideal is more viable than ever, partly because we have helped them gain political power.

Of course, Bush’s hope was that elections would bring to power pro-democracy, pro-American leaders; that establishing a democracy in Iraq would set off a chain reaction in which the entire Middle East would be transformed politically. Newly elected regimes would allegedly dry out the metaphorical swamps wherein a “dark vision of hatred and fear” apparently infects Muslims and impels them to slaughter Westerners. The desire for liberty, Bush assured us, was universal (“I believe that God has planted in every human heart the desire to live in freedom”28); therefore, if only we could break the chains that oppress the peoples of the Middle East, they would gratefully befriend and emulate the West, rather than loathe and war against it. If we were to topple Saddam Hussein and deliver ballot boxes, Iraqis would be exposed to liberty and (just like any freedom-loving people) seek to realize this innate, though long suppressed, ideal. By implication, an all-out war to defeat the enemy was unnecessary, because U.S.-engineered elections would bring us long-term security.

Was Bush’s hope about who would win, a hope shared by commentators and other politicians, an honest mistake? Could genuinely pro-Western leaders be elected in the culture of the Middle East? Any objective assessment of the region would have dispelled that hope.

The region’s culture is and has been dominated by primitive tribalism, by mysticism, by resentment toward any ideas that challenge Islam. Popular reactions to real or imagined slights to the religion express widespread hostility to freedom. In 1989, The Satanic Verses, a novel by Salman Rushdie that allegedly insults Islam, was met with a brutal response. Few if any Muslims bothered to read the book, judge it firsthand, and reach an independent assessment of it — but they fervidly demanded that Rushdie be executed. Muslim reaction to the publication of Danish cartoons of Mohammed in 2006 again underscored, in blood, the animus to freedom of speech. Muslim rioters, who attacked Western embassies in the Middle East, demanded the beheading of the cartoonists whose drawings were deemed unholy. And observe that censorship is routine not only in highly religious regimes such as Saudi Arabia, but also in ostensibly “moderate” ones such as Egypt, where the public sometimes clamors for it.

This opposition to freedom is not an accidental feature of particular regimes. Its source is a cultural antipathy to the values on which freedom depends. Political freedom in America is the product of the 18th-century Enlightenment, an epoch that venerated the individual and the sovereignty of his rational mind. And freedom can arise only in a culture that recognizes the irreplaceable value of a man’s life and that grasps the life-sustaining value of worldly, scientific learning. But the Arab-Islamic world today rejects the values necessary for freedom to take root (let alone flourish).

The endemic contempt for the individual is apparent in the deep-seated worship of family and clan. The individual is seen as possessing neither sovereignty over his own life nor independent value; he is regarded as merely a subordinate cell in a larger organism, which can and does demand his sacrifice. The group’s members kill to preserve family “honor”; brothers butcher sisters merely suspected of “disgracing” the family name. Yoked to the unchosen bonds of whatever tribe claims him, man must bow to its authority over his life regardless of what he believes to be true or good. Hence the custom of arranged marriages and the shunning of those who dare find a mate outside the tribe.

Whereas a Muslim who renounces material goods and memorizes the Koran is esteemed for his devotion, an individual who values progress and pursues secular knowledge is resented as disloyal to religious tradition. Accordingly, the modern Islamic world has given rise to only a minuscule number of scientists or innovators, who have produced nothing of significance. Intellectual giants (such as Newton and Einstein), innovators (such as Thomas Edison), and entrepreneurs (such as Bill Gates) are non-existent in the Middle East. Such men cannot develop in a culture that denigrates worldly knowledge, isolates itself from the books of the West, and wallows in self-righteous irrationality. A culture that deems self-assertion — whether in pursuit of scientific enlightenment, untraditional values, or individual happiness — an offense against tradition and the tribe has no reason to embrace the rights of the individual. Individual rights are precisely the principles defining and protecting man’s freedom to act on his own goals according to his own judgment.

Most Muslims in the Middle East are not the people that Bush would like us to believe they are: They do not have a repressed love of freedom; they are not lovers of prosperity and individual fulfillment; they are not our friends. Vast numbers of them are rabidly anti-American. To believe otherwise is to evade the jubilant Palestinians, Egyptians, and Iraqis celebrating the attacks of 9/11; the street demonstrations across the Islamic world lionizing Osama Bin Laden; the popular glorification of “martyrs” on posters and in videos; the dedicated support for totalitarian organizations such as the Muslim Brotherhood and Lebanese Hezbollah. What so many Muslims harbor is not hopeful aspiration for, but savage hostility toward, rational ideals.

The relevant facts about the region are universally available and incontestable. Given an opportunity to choose their leaders, it is clear whom Muslims in the region would bring to power. Yet the Bush administration wishes to believe that Iraqis crave freedom and prosperity, that they are just like Americans (except for the misfortune of living under a dictator); it wishes to believe in this notion so much that the administration embraces it in flagrant defiance of reality.

Through various elections, however, the voices of the people in the region have been heard. The people have demanded rule by Hamas in the Palestinian territories, rule by the Muslim Brotherhood in Egypt, rule by Hezbollah in Lebanon, rule by SCIRI and Daawa in Iraq. (We are likely to see the pattern continue as well in Jordan, when that country holds elections at Washington’s urging.) Such results have exposed the forward strategy for the sheer fantasy it is and always was.

Bush’s strategy was concocted and advocated dishonestly: It is a product of evasion. The facts are plain, but Bush has refused to accept them. Bush’s plan is not some sophisticated alternative means of achieving victory — a means that somehow sidesteps a self-assertive war; victory was never its purpose. Rather, his plan is a rejection of the very goal of defeating the enemy. If not victory, what is the ultimate goal of the forward strategy?

The Crusade for Democracy

Although Bush’s strategy is called the “forward strategy for freedom,” this designation is a vicious fraud. The strategy has nothing to do with political freedom. An accurate title would have been the “forward strategy for democracy” — for unlimited majority rule — which is what it actually endorsed. There is a profound — and revealing — difference between advocating for freedom and advocating for democracy.

In today’s intellectual chaos, these two terms are regarded as equivalents; in fact, however, they are antithetical. Freedom is fundamentally incompatible with democracy. Political freedom means the absence of physical coercion. Freedom is premised on the idea of individualism: the principle that every man is an independent, sovereign being; that he is not an interchangeable fragment of the tribe; that his life, liberty, and possessions are his by moral right, not by the permission of any group. It is a profound value, because in order to produce food, cultivate land, earn a living, build cars, perform surgery — in order to live — man must think and act on the judgment of his own rational mind. To do that, man must be left alone, left free from the initiation of physical force by the government and by other men.

Since freedom is necessary for man to live, a proper government is one that protects the freedom of individuals. It does that by recognizing and protecting their rights to life, liberty, property, and the pursuit of happiness. It must seek out and punish those — whether domestic criminals or foreign aggressors — who violate the rights of its citizens. Above all, the government’s own power must be strictly and precisely delimited, so that neither the government nor any mob seeking to wield the state’s power can abrogate the freedom of citizens. This kind of government renders the individual’s freedom untouchable, by putting it off-limits to the mob or would-be power lusters. A man’s life remains his own, and he is left free to pursue it (while reciprocally respecting the freedom of others to do the same). This is the system that the Founding Fathers created in America: It is a republic delimited by the Constitution of the United States and the Bill of Rights. It is not a democracy.

The Founders recognized that a democracy — a system that confers unlimited power on the majority — is antithetical to freedom. Democracy rests on the primacy of the group. The system’s supreme principle is that the will — the desire — of the collective is the proper standard regarding political matters; thus, the majority can arrogate to itself the power to exploit and tyrannize others. If your gang is large enough, you can get away with whatever you want. James Madison observed that in a system of unlimited majority rule

there is nothing to check the inducements to sacrifice the weaker party or an obnoxious individual. Hence it is that such democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths.29

Democracy is tyranny by the mob.

Accordingly, the constitutional framework of the United States prohibits the majority from voting away the rights of anyone. It is intended to prevent the mob from voting to execute a Socrates, who taught unconventional ideas. It is intended to prevent the majority from democratically electing a dictator such as Hitler or Robert Mugabe, who expropriates and oppresses a minority (e.g., Jews in Germany, white farmers in Zimbabwe) and devastates the lives of all. By delimiting the power that government is permitted to exercise, even if a majority demands that it exercise that power, the U.S. Constitution serves to safeguard the freedom of individuals.

The original American system is the system of political freedom — and it is incompatible with democracy. What Bush’s strategy advocates globally, however, is democracy.

Granted, the political system that Washington wishes to spread in the Middle East differs from the original direct democracy of ancient Athens in form, and it has some of the trappings of American political institutions; but it nevertheless enshrines the will of the majority. In Iraq, it puts the whims of Iraqi mobs first. Iraqis drafting the country’s new constitution were unconcerned with safeguarding the rights of the individual; instead, we bore witness to the ugly spectacle of rival pressure groups, representing ethnic and sectarian factions, wrangling to assert themselves as the voice of the collective will. Recall the protracted and histrionic clashes among those factions while they divvied up ministries in Iraq’s government and selected a prime minister. And Washington’s commitment to this perverse system is unambiguous.

Given the invasion of Iraq, if self-defense were part of the goal of the forward strategy, then one would logically expect that, for the sake of protecting American lives, Washington would at least insist on ensuring that the new regime be non-threatening, so that we do not have to face a resurgent threat. But Bush proclaimed all along that America would never determine the precise character of Iraq’s (or Afghanistan’s) new regime. The Iraqis were left to contrive their own constitution. Whatever the Iraqis chose, whomever they elected — Washington promised to endorse. The decision was entirely theirs. When asked whether the United States would acquiesce to an Iranian-style militant regime in the new Iraq, Bush said yes. Why should America help create a new hostile regime, a worse threat to our security than Saddam Hussein was? Because, Bush explained, “democracy is democracy. . . . If that’s what the people choose, that’s what the people choose.”30

To appreciate just how serious Washington is about putting the will of Iraqi mobs above the rights of Americans, consider how it conducted the war.

From the outset, Washington committed us to a war of liberation. Just as we toppled the Taliban, to liberate the Afghans, so we toppled Saddam Hussein, to liberate the Iraqis. The campaign in Iraq, after all, was called Operation Iraqi Freedom, not Operation American Defense. “Shock and awe” — the supposedly merciless bombing of Baghdad — never materialized. The reason is that Washington’s goal precluded devastating Iraq’s infrastructure and crushing whatever threat the Hussein regime posed to us; its goal was to provide welfare services and hasten the arrival of elections.

Bush had promised that America would “stand ready to help the citizens of a liberated Iraq. We will deliver medicine to the sick, and we are now moving into place nearly three million emergency rations to feed the hungry.”31 And, indeed, the fighting had hardly begun when Washington launched the so-called reconstruction. Our military was ordered to commit troops and resources (which were needed to defend our personnel) to the tasks of reopening schools, printing textbooks, renovating hospitals, repairing dams. This was a Peace Corps, not an Army corps, mission. Washington doled out food and medicine and aid to Iraqis, but it tied the hands of our military.

The U.S. military was ordered to tiptoe around Iraq. “We have a very, very deliberate process for targeting,” explained Brigadier General Vincent Brooks, deputy director of operations for the United States Central Command, at a briefing in 2003. “It’s unlike any other targeting process in the world. . . . [W]e do everything physically and scientifically possible to be precise in our targeting and also to minimize secondary effects, whether it’s on people or on structures.”32 So our forces refrained from bombing high-priority targets such as power plants, or in some cases even military targets located in historic sites. Troops were coached in all manner of cultural-sensitivity training, lest they offend Muslim sensibilities, and ordered to avoid treading in holy shrines or firing at mosques (where insurgents hide). The welfare of Iraqis was placed above the lives of our soldiers, who were thrust into the line of fire but prevented from using all necessary force to win the war or, tragically, even to defend themselves. (No wonder an insurgency has flourished, emboldened by Washington’s self-crippling policies.) Treating the lives of our military personnel as expendable, Washington wantonly spills their blood for the sake of democracy-building.33

In the run-up to the war, Bush promised that

The first to benefit from a free Iraq would be the Iraqi people themselves. . . . [T]hey live in scarcity and fear, under a dictator who has brought them nothing but war, and misery, and torture. Their lives and their freedom matter little to Saddam Hussein — but Iraqi lives and freedom matter greatly to us.34

Their lives did matter greatly to Washington — regardless of the cost to the lives or security of Americans.

The forward strategy dictates that we shower Iraqis with food, medicine, textbooks, billions in aid, a vast reconstruction, so that they can hold elections. We must do this even if it means the deaths of thousands of U.S. troops and a new Iraqi regime that is more hostile than the one it replaced.

This policy, and the calamity it has produced, are too much for (most) Americans to stomach. But such is the zealous commitment of Bush and others to the spread of democracy, that they conceive of Iraq as merely the beginning. Leaving aside whether they now think it is politically feasible, their goal was a far larger campaign.

In his State of the Union address in 2006, Bush proclaimed that America is “committed to an historic long-term goal: To secure the peace of the world, we seek the end of tyranny in our world.”35 To carry out this global crusade for democracy is supposedly to fulfill our nation’s “destiny,” our moral duty as a noble people. On another occasion, Bush asserted that “the advance of freedom is the calling of our time.”36 This mission, he claimed in another speech, was conferred upon us by God: “[H]istory has an author who fills time and eternity with his purpose. . . . We did not ask for this mission, yet there is honor in history’s call.”37

Picture what such an undertaking would entail, in practical terms, given the nightmare that Iraq has become. About 140,000 troops are stuck in the quagmire there today. At the peak of the fighting, as many as 300,000 of our military were involved. According to one estimate, Operation Iraqi Freedom has cost about $318 billion.38 So far in Iraq more than 3,000 Americans have died. About 22,000 have come home missing arms, limping, burned, blinded, deafened, psychologically scarred, brain damaged. Although the casualties of the war may be largely unseen, the carnage is all too real. What the advocates of a wider democracy crusade are calling for is morally obscene: not just one hellish ordeal like Iraq (which is horrendous in itself), but dozens of new campaigns that grind up American troops and flush away our nation’s lifeblood, campaigns that drag on indefinitely as we fulfill the open-ended “calling of our time.”

There is no conceivable reason to believe that a strategy so contemptuous of American lives is at all concerned with our self-defense. When its deceptions and lies are peeled away, what remains is a pernicious strategy that puts nothing above the goal of spreading democracy. Such dedication is mandated by the fundamental premise that serves as the strategy’s justification.

Observe what the dogged advocates of the strategy reject as illegitimate: a self-assertive war to defend America. It is unimaginable to them that America should fight a war against Islamic Totalitarianism with the intensity and righteousness that we did in World War II; that the United States should seek to demoralize the enemy; that we should, as Winston Churchill put it, “create conditions intolerable to the mass of the [enemy] population”;39 that the United States should seek victory, for its own self-defense. All of this is off the table. But sending young American men to die in order to bring Iraqis the vote is deemed virtuous, a noble imperative.

Advocates of the forward strategy are fervently committed to spreading democracy, because they are guided by a moral ideal. The principle shaping their thinking is the idea that pursuing values for your own benefit is evil, but selfless service to others is good. Virtue, on this morality, is self-sacrifice.

This ethical ideal dominates our culture. We are told by secular and religious authorities, left and right, that to be moral is to give up our values selflessly. We are bombarded with a seemingly endless variety of slogans that inculcate this same droning message: Put other people first; renounce your goals for the sake of others; don’t be selfish. As Bush puts it: “Serve in a cause larger than your wants, larger than yourself”; “our duty is fulfilled in service to one another.”40

In the religious variant of this moral code, the ideal is personified in Jesus. He suffered the agonies of crucifixion and perished on the cross — for the sake of all mankind. It is the self-abnegation of Jesus that Christian moralists enjoin their followers to emulate, and accordingly, over the centuries, Christian saints have been extolled for their asceticism and self-effacement. Though religion is the main propagator of this moral standard, it has also been propounded in various secularized forms.

The morality of self-sacrifice is today almost universally equated with morality per se. It is the standard of right and wrong that people accept unquestioningly, as if it were a fact of nature. Observe that even non-religious people regard Mother Teresa as a moral hero, because she devoted her life to ministering to the sick and hungry. Conversely, the achievements of a productive businessman like Bill Gates are deemed to have (at best) no moral significance. On this view, whoever pursues profits is plainly looking out for himself; he is being “selfish.” This creed teaches men to damn profit-seeking and, more widely, to suspect any self-assertion toward one’s goals. It teaches that only acting for the sake of others is virtuous.

The injunction to selfless service is addressed to the “haves” — those who have earned a value and have something to give up — because on this moral code they have no right to keep their wealth, to enjoy their freedom, to pursue justice, to protect themselves. Whom must they serve? Whoever has failed or never bothered to achieve a value: the “have-nots.” Their lack of a value, regardless of its cause, is taken as a moral claim on the productive and able. Because America is a “have” — strong, wealthy, prosperous — it has no right to prosecute a war to destroy Islamic Totalitarianism. It must instead renounce the pursuit of justice — which is what the Christian version of this morality counsels the innocent victim to do: “[R]esist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also,” and “Love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you.”41

But the oppressed, impoverished, primitive Iraqis are definitely “have-nots.” They have no food, no electricity, no freedom. It is they, not we, who have a moral claim to our time and money and lives, because they have not earned those values. Deep-pocketed America must therefore jettison the goal of its security and engage in selfless missions to bring succor to these destitute people. This comports perfectly both with Christian precepts and with the dogmas of secular moralists on the left (and the right) who demand that America embark on global “humanitarian” adventures rather than unleash its powerful military and defend itself.

The principle of self-sacrifice implies that the desires of Iraqis, however irrational and destructive, must be accorded moral legitimacy. Instead of prevailing upon them to adopt a new, secular regime that will be non-threatening to America, we must efface that goal and respect the whims of tribal hordes. It is their moral right to pen a constitution enshrining Islam as the supreme law of the land and to elect Islamists to lead their nation. They are the suffering and “needy,” we the prosperous and wealthy; on the ethics of self-sacrifice, the productive must be sacrificed to the unproductive.

But, as we have seen, carrying out the injunctions of this ideal contradicts the needs of U.S. security. It is self-destructive — and that is why it is regarded as noble. As Bush has stated approvingly, we Americans know how to “sacrifice for the liberty of strangers.”42

A sacrifice is the surrender of a value for the sake of a lesser value or a non-value; it entails a net loss. Giving up something trivial for the sake of a big reward at the end, for instance, scrimping and saving today in order to buy a car next year, is not a sacrifice. But giving up your savings to a random stranger is. It is no sacrifice to enlist in the military and risk your life in order to defend your cherished freedom. It is a sacrifice to send American soldiers to Iraq not to defend their own liberty and ours, but to ensure that Iraqis have functioning sewers. The ideal of self-sacrifice constitutes a fundamental rejection of man’s right to exist for his own sake.

The forward strategy is a faithful application of this moral code. America is the innocent victim of Islamist aggression, but on this code such victims have no right to exist or to defend their freedom. To mount a military campaign against the enemy in defense of our lives would be self-interested. Our duty, on this morality, is to renounce our self-interest. Ultimately, the goal of the crusade for democracy is not to destroy the threats arrayed against us; the goal is for America to sacrifice itself.

What, then, are we to make of the Bush administration’s contradictory rhetoric? It claims, seemingly in earnest, to be seeking America’s security. We were told, for instance, that “any nation that continues to harbor or support terrorism will be regarded by the United States as a hostile regime”;43 and that “We will not waver; we will not tire; we will not falter; and we will not fail” in the face of the enemy.44 At the same time, we were deluged with maudlin, though sincere, talk about sacrificing in order to lift the Middle East out of misery, poverty, and suffering. And what are we to make of the claims about the strategy’s chances of making us safer — in defiance of all facts to the contrary?

The rhetoric is not solely, or even primarily, aimed at winning over the American public; it serves as a rationalization for a profoundly irrational moral ideal.

Our leaders recognize that we face a mortal threat, but they shrink from the kind of action necessary for our self-defense. We can see that in the initial reaction to 9/11. For a brief period after the attacks, the American public and its leaders did feel a genuine and profound outrage — and their (healthy) response was to demand retaliation. The nation was primed to unleash its full military might to annihilate the threat. Symbolizing that righteous indignation was the name chosen for the military campaign against the Taliban regime in Afghanistan: Operation Infinite Justice. The prevailing mood conveyed a clear message: America was entitled to defend itself. But this reaction was evanescent.

Even before the immediacy of the attacks had faded, the flush of indignation subsided — as did the willingness to fight for our self-defense. In deference to the feelings of Muslims, Operation Infinite Justice was renamed Operation Enduring Freedom. After drawing a line in the sand separating regimes that are with us from those that are with our enemies, Washington hastened to erase that line by inviting various sponsors of terrorism, including Iran and Syria, to join a coalition against terrorism. The initial confidence that our leaders felt, the sense that they had right on their side, petered out.

They recoiled from that goal of self-defense when they considered it in the light of their deeply held moral premises. Their moral ideal told them that acting to defend America would be “selfish” and thus immoral. They could not endorse or pursue such a course of action. But their ideal further told them that to act selflessly is virtuous: the forward strategy, therefore, is obviously a moral and noble strategy, because its aim is self-sacrificial ministration to the needs of the meek and destitute. This was a policy that the Bush administration could endorse and act on.

The claims about ensuring the well-being of Iraqis reflect the strategy’s fundamental moral impetus. The claims about defending America are necessary for the advocates of the strategy to delude themselves and the American public. They need to make themselves believe that they can pursue a self-sacrificial policy and America’s security — that they can pursue both and need not choose between the two.

In essence, what our leaders want to believe is that their self-destructive policy is actually in our self-interest, that somehow it is a self-sacrifice (hence noble and moral) and yet not a self-sacrifice (hence capable of achieving American security). This is why the administration insists that “Helping the people of Iraq is the morally right thing to do — America does not abandon its friends in the face of adversity. Helping the people of Iraq, however, is also in our own national interest.”45 This delusion involves two intertwined self-deceptions: that the forward strategy will occasion benefits in practice (i.e., security for America); and that a self-assertive war on the model of World War II, though apparently a practical way to make us safe, is ultimately suicidal.

Portraying Iraqis, and others in the Muslim Middle East, as our “friends in the face of adversity” was key to this delusion. They were characterized as latent friends who, when unshackled, would freely express their goodwill and form regimes that would become our allies. Proponents of the strategy had to tell themselves, and the public, that the vast majority of Muslims just want a better life, “moderate” leaders, and peace — not totalitarian rule by Islamists (which, in fact, so many of them do want). If one pretends that the Iraqis are pining for liberty, it seems that they would welcome our troops with flowers and candy, and that the mission would be a cakewalk. If one evades the truth and insists on the self-deception that we would effectively be lending a hand to allies who share our values, then it seems vaguely plausible that the loss of American lives on this mission is not actually a loss, because major benefits redound to America: By selflessly liberating Iraqis (and Afghans), we would, incidentally, be attaining the eventual security of our nation.

Bound up with that self-deception was another one. Its purpose was to discredit assertive war (such as we waged in World War II) as counterproductive and therefore impractical. Evading logic and the lessons of history, advocates of the forward strategy wished to believe that a war driven by self-interest (hence ignoble and immoral) would, after all, undermine our self-interest (by making us more, not less, vulnerable). This kind of war, the rhetoric would have us believe, is not an option worth entertaining. The delusion relied, again, on the rosy portrayal of the lovable peoples of the Middle East. The killers are said to be a scattered group lurking in shadows and operating across borders. For targeting the dispersed jihadists, the methods of past wars, such as carpet bombing, are far too coarse and would fail to kill off the scattered enemy. Further, if we were to use such means to flatten the Iraqi city of Fallujah to quell the insurgency, for instance, or to topple Iran’s totalitarian regime, we would thereby ignite the fury of Muslims caught up in the bombing. And that would turn the entire region against us.

Part of the alleged impracticality of a self-assertive war is that it would profoundly and adversely alter the ideological landscape in the region. Bush and other advocates of the forward strategy claim (preposterously) that all human beings are endowed with an “innate” love of liberty, that the multitude of Muslims in the region are “moderates” who passionately long for political freedom, and that if we nourish their longing for freedom, they will cease to hate or threaten us. But, we are told, if America were to deploy overwhelming military force against their hostile regimes and demoralize large numbers of Muslims, we would thereby deliver the masses to the hands of Bin Laden and other Islamic totalitarians. Demoralizing them would somehow overturn their purportedly “innate” desire for freedom and make them long to be enslaved under a sharia regime. By acting on our self-interest, so the rationalization goes, we will create new enemies and undermine our self-interest.

But there is no reason to believe that a devastating war — such as the one we waged against Japan — could fail to achieve its purpose: to destroy the enemy. We need to inflict widespread suffering and death, because the enemy’s supporters and facilitators are many and widespread. We need to demoralize them, because (as noted earlier) people demoralized in this fashion are motivated to doubt the beliefs and leaders that inspired their belligerence, promised them triumph, yet brought them a shattering defeat. Demoralizing Muslims who endorse and perpetuate the jihad would indeed overturn their ideas — their chosen allegiance to the vicious ideology of totalitarian Islam. Seeing that their cause is hopeless, they would be driven to abandon, not to intensify or renew, their fight. Scenarios predicting our doom if we dare assert ourselves are factually groundless and incoherent.

The rhetoric emanating from the Bush administration and its supporters is unavoidably contradictory. Our leaders need to make themselves, and us, believe that the inherently self-destructive forward strategy is the only way to proceed. It is (on their terms) the only moral path, but it must also be made to seem practical, that is, to our advantage.

The forward strategy is like umpteen other policies and doctrines that, in various ways, portray self-sacrifice as beneficial to the victim. The moral injunction to selflessness would hardly have the power that it does, were it not for the (empty) promise of some kind of eventual reward. For example, during certain periods Christianity (and Islam) stressed the everlasting rewards that a believer can hope to attain in Heaven, if he dutifully serves God’s will, effaces his desire for wealth and sexual pleasure, and selflessly ministers to the suffering. Heaven on earth — a workers’ paradise of limitless leisure and fulfillment — was Communism’s promise to the proletariat who renounced personal gain and labored selflessly for the sake of the collective (“from each according to his ability, to each according to his need”).

The forward strategy belongs in the same category as another contemporary scheme, which commands near-universal support: foreign aid. Both liberal and conservative advocates of foreign aid claim that by (selflessly) doling out billions of dollars in aid, America will somehow find itself better off. When no such gains ever materialize, we are rebuked for having been too stingy — and commanded to give more, much more, to give until it hurts, since doing so is allegedly in our interest. For example, in the wake of the suicide bombings in London two years ago, Bush and British Prime Minister Tony Blair claimed that we must give away $50 billion to feed Africans impoverished by their tribal wars and anti-capitalist ideas and to prop up the anti-Western terrorist regime in the Palestinian Territories. Why? Presumably, the suicide bombings were proof that we simply had not given enough, and, as Blair claimed (speaking for Bush and other supporters of aid), more aid would help us triumph over terrorism, someday.46 The forward strategy can be viewed as a perverse continuation of foreign aid: Since money alone was insufficient, we should selflessly bring elections to the world’s oppressed and sink billions in vast reconstruction projects for their sake.

Just as bribes in the form of foreign aid are supposed to make us safer but have not and cannot, so too sacrificing U.S. lives to spread democracy in the Middle East, and ultimately across the globe, is supposed to make us safer but has not and cannot. The actual effects are unmistakable.

The forward strategy is an immoral policy, and so its consequences are necessarily self-destructive. You cannot sacrifice your military strength and defend yourself in the face of threats.

The contradiction becomes harder to evade as more Americans die in Iraq and as more Islamist terror plots are uncovered. While the evidence mounts, however, the Bush administration remains undeterred, responding only with further evasions. Despite the unequivocal results of elections in the region that empowered Islamists; despite the overwhelming Muslim celebration of the war initiated by the Iran-Hamas-Hezbollah axis against Israel; despite the violent return of the Taliban in Afghanistan; despite the raging civil war in Iraq; despite the wholesale refutation of Bush’s predictions and hopes to date, he claimed in January 2007: “From Afghanistan to Lebanon to the Palestinian Territories, millions of ordinary people are sick of the violence, and want a future of peace and opportunity for their children.” Despite the reality that no Iraqis to speak of have made a choice for freedom, and although an overwhelming number of them remain openly hostile both to America and to freedom, Bush promised to stand with “the Iraqis who have made the choice for freedom.” Iraq’s fledgling democracy had yet to flower, he claimed, because it was being impeded by a lack of security. “So America will change our strategy to help the Iraqis carry out their campaign to put down sectarian violence and bring security to the people of Baghdad.” (Emphasis added.) Bush vowed to deploy a “surge” of some twenty thousand additional U.S. troops in Iraq.47

Although characterized as a change in strategy, this is just a change in means, not ends. Spreading democracy remains the unquestioned, self-delusional end, for which more troops and a push for security are the means. Before the war, Saddam Hussein’s regime was the obstacle that had to be removed so that Iraqis could have democracy. Now, it is the utter chaos of insurgency and civil war that obstructs the realization of an Iraqi democracy. In both cases, American men and women in uniform lay down their own lives for the sake of Iraqis. Amid the inevitable results of a democracy’s mob rule and the predictable sectarian war, the Bush administration looks on with purposefully unseeing eyes and rededicates America to a “surge” of senseless sacrifices. The multiplying evasions enable Bush and other advocates of the strategy to fool themselves, along with any remaining Americans who still believe in it.

But no amount of self-delusion can erase the facts.

After five years of war, America now faces a threat that we helped to make stronger. This is precisely the result to which the forward strategy had to lead, given its moral premises and the evasions of its proponents. The strategy entails sacrificing American lives, not merely for the sake of indifferent strangers, but for the sake of our enemy.

We have galvanized the undefeated enemy. The forward strategy has taught jihadists everywhere a profoundly heartening lesson: that America fights so that Muslims can assert their desire for Islamist leadership; that America does not destroy those who threaten the lives of its citizens; that America renounces its self-interest on principle, because we do not believe we have a right to defend ourselves. A “paper tiger” is how Osama Bin Laden characterized America prior to 9/11, and he thereby inspired many Muslims to join the jihad. Nothing that we have done since 9/11 has contradicted the jihadists’ view; the forward strategy and America’s other policies have confirmed it. Note Teheran’s glee at the grotesque spectacle of America’s suicidal policy. To say that Iran in particular feels invulnerable or that jihadists in general are encouraged is to understate matters.

Meanwhile, Bush’s strategy has drained not only the material strength of the United States, but also our will to fight. The nation is in retreat. Sickened and demoralized by the debacle in Iraq, many Americans dismiss the possibility of success. This ethos is typified by the Iraq Study Group, whose purpose was to offer forward-looking remedies for the deteriorating situation in Iraq and the region. In December 2006, the group issued a report specifying seventy-nine recommendations, but these boil down to a maddeningly limited range of options: Stick with the failed idealistic strategy (i.e., send more troops to engage in democracy-building); or abandon it (pull out the troops); and, either way, concede the “realist” claim that moral principles are a hindrance to foreign policy (and, of course, appease the hostile regimes in Syria and Iran). Ruled out from consideration is a self-assertive war to defeat the enemy, because, tragically, many misconstrue the forward strategy as epitomizing, and thus discrediting, that option.

Consequently, the amoralists, seemingly vindicated by the Iraq Study Group, are winning a larger audience. They advise us to check our principles at the door, pull up a seat at the negotiating table, and hash out a deal: “It’s not appeasement to talk to your enemies,” James Baker reassures us, referring to the prospect of diplomatic talks with North Korea (Baker is cochairman of the Iraq Study Group). Nicholas Kristof, a columnist for the New York Times, joins the chorus: “We sometimes do better talking to monsters than trying to slay them. . . .”48

But the correct lesson to draw is that we must reject neither idealism nor war, but the particular moral ideal driving the forward strategy. A campaign guided by the ideal of self-sacrifice and renunciation cannot bring us victory. We need a different ideal.

An Unknown Ideal

Our leaders insist that the forward strategy is indispensable to victory. They claim that only this strategy is noble, because it is self-sacrificial, and that only it is practical, because it will somehow protect our lives. We are asked to believe that in slitting our own throats, we will do ourselves no real harm, that this is actually the cure for our affliction. What the strategy’s advocates would like us to believe is that there is no alternative to the ideal of selflessness.

But we are confronted by a choice — and the alternatives are mutually exclusive. The choice is between self-sacrifice — and self-interest. There is no middle ground. There is no way to unite these alternatives. We must choose one or the other. If we are to make our lives safe, we must embrace the ideal of America’s self-interest.

Though largely unknown and misconceived, this moral principle is necessary to the achievement of America’s national security. Let us briefly consider what it stands for and what it would mean in practice.49

If we are to pursue America’s self-interest, we must above all be passionate advocates for rational moral ideals. We need to recognize that embracing the right ideals is indispensable to achieving our long-term, practical goal of national security. Key to upholding our national self-interest is championing the ideal of political freedom — not crusading for democracy.

Freedom, as noted earlier, is a product of certain values and moral premises. Fundamentally, it depends on the moral code of rational egoism. Whereas the morality of self-sacrifice punishes the able and innocent by commanding them to renounce their values, egoism, as defined in the philosophy of Ayn Rand, holds that the highest moral purpose of man’s life is achieving his own happiness. Egoism holds that each individual has an unconditional moral right to his own life, that no man should sacrifice himself, that each must be left free from physical coercion by other men and the government. Politically, this entails a government that recognizes the individual as a sovereign being and upholds his inviolable right to his life and possessions. This is the implicit moral basis of the Founders’ original system of government as the protector of the rights of individuals.

If we take the ideal of freedom seriously, then we must staunchly defend ourselves from foreign aggression. The liberty of Americans cannot endure unless our government takes the military steps necessary to protect our right to live in freedom. As a necessary first step, we should proclaim our commitment to this ideal, and promote it as a universal value for mankind. By demonstrating that we hold our own ideals as objectively right, as standards for all to live up to, we evince confidence in our values. The knowledge that America upholds its own ideals and will defend them to the death is a powerful deterrent in the minds of actual and would-be enemies.

A derivative benefit is that we can encourage the best among men, wherever they may be, to embrace the ideal of political freedom. To free nations, to nations moving toward freedom, and to genuine freedom fighters, we should give our moral endorsement, which is a considerable, if often underappreciated, value. For example, we should endorse the Taiwanese who are resisting the claims of authoritarian China to rule the island state. We should declare that a rights-respecting system of government is morally right and that an authoritarian one is morally wrong.

Although America should be an intellectual advocate for freedom, this advocacy does not extend to devoting material resources to “spread freedom.” It is only proper for our government to provide such resources to help a true ally, and only when doing so is necessary for the protection of the rights of Americans. It is never moral for America to send its troops for the sole purpose of liberating a people and then pile sacrifice upon sacrifice for the sake of nation-building. It is never moral to send our troops on selfless missions or to fight wars in which America’s security is not directly at stake. Such wars — and the forward strategy itself — violate the rights of our troops and of all other American citizens by imperiling their freedom and security.

In order to protect the freedom of Americans, we must be able to distinguish friends from foes; we must, in other words, judge other regimes and treat them accordingly. The criterion for evaluating other regimes is the principle that government is properly established to uphold: freedom. A country that does respect the individual rights of its citizens has a valid claim to sovereignty. By befriending such a country, we stand to gain a potential trading partner and ally. Because tyrannies violate, instead of protect, the rights of their citizens, such regimes are illegitimate and have no right to exist whatsoever. Tyrannies are by their nature potential threats to America (and any free nation). History has repeatedly shown that a regime that enslaves its own citizens will not hesitate to plunder and murder beyond its borders. Rather than treating tyrannies as peace partners whom we can tame with the right mix of bribes, we should shun and denounce them loudly — and, when necessary, defend ourselves militarily against their aggression.

Determining which course of action, strategy, or foreign policy actually serves America’s self-interest requires rational deliberation and reference to valid principles. It cannot be achieved by wishful thinking (e.g., by pretending that our self-immolation can lead to our future benefit), nor by whatever our leaders pray will be expedient for the range of the moment (e.g., by “talking with monsters” and appeasing enemies). Instead, the goal of securing the enduring freedom of Americans, which includes security from foreign threats, requires figuring out rationally what constitutes a present threat; what is the most efficient means of permanently eliminating a given threat; which regimes are our allies and which our enemies (among other issues). What is required, in short, is a commitment to the full awareness of the relevant facts. That necessitates following the facts wherever they lead — and not being led by the corrupt delusions that the morality of self-sacrifice entails.

Today, the facts tell us that Islamic Totalitarianism is waging war on America. This movement commands wide support and is nourished principally by Iran and Saudi Arabia. The moral ideal of rational egoism counsels an unequivocal response: Defend the lives of Americans. That goal requires, as argued earlier, an overwhelming military campaign to destroy the enemy, leaving it permanently non-threatening. We need to wage as ruthless, unrelenting, and righteous a war as we waged sixty years ago against Germany and Japan. Only that kind of war can make us safe and enshrine our freedom. It is the only moral response, and therefore the only practical one.50

As to what America should do once it has defeated this enemy, again, the guiding moral principle should be that of our national self-interest.

It might be in our interest to install a free political system in a Middle Eastern country that we have defeated — if we have good reason to believe that we can create a permanently non-threatening regime and do so without sacrificing U.S. wealth or lives. And if we were to choose such a course, the precise character of the new regime would have to be decided by America. For instance, in contrast to Bush’s selfless approach to the constitutions of Iraq and Afghanistan, in post-war Japan the United States did not give the Japanese people a free hand to draw up whatever constitution they wished, nor to bring to power whomever they liked. We set the terms and guided the creation of the new state, and in part because this is how Japan was reborn, it became an important friend to America. (Observe that the Japanese were receptive to new political ideals only after they were thoroughly defeated in war; Iraqis were never defeated and, on the contrary, were encouraged to believe that their tribalism and devotion to Islam were legitimate foundations for a new government.)

But we have no moral duty to embroil ourselves in selfless nation-building. In a war of retaliation against a present threat, we are morally entitled to crush an enemy regime because we are innocent victims defending our unconditional right to be free. Our government’s obligation is to protect the lives of Americans, not the welfare of people in the Middle East. The responsibility for the suffering or death of people in a defeated regime belongs to those who initiated force against us. If it proves to be in our national self-interest to withdraw immediately after victory, leaving the defeated inhabitants to sift through the rubble and rebuild on their own, then we should do exactly that. In doing so, we must instill in them the definite knowledge that, whatever new regime they adopt, it too will face devastation if it threatens America.

If Islamic totalitarians and their many followers know without a doubt that the consequence of threatening us is their own demise, the world will be a peaceful place for Americans. And that, ultimately, is the end for which our government and its policies are the means: to defend our freedom so that we can live and prosper.

The struggle to defend our freedom depends fundamentally on an ideological battle. The clash is one that our leaders persist in evading and obscuring, but which cannot be escaped. At issue is the moral principle that shapes America’s foreign policy. The conflict comes down to this: Do Americans have a duty to sacrifice themselves for strangers — or do we have a moral right to exist and pursue our individual happiness? This is the battle that we must fight and win in America if we are to triumph over Islamic Totalitarianism.


About The Authors

Yaron Brook

Chairman of the Board, Ayn Rand Institute

Elan Journo

Senior Fellow and Vice President of Content Products, Ayn Rand Institute

Endnotes

Acknowledgment: The authors would like to thank Dr. Onkar Ghate, senior fellow of the Ayn Rand Institute, for his invaluable editorial assistance with this project.

 

1 President Addresses American Legion, Discusses Global War on Terror, February 24, 2006, http://georgewbush-whitehouse.archives.gov/news/releases/2006/08/20060831-1.html.

2 The administration has referred to it both as a “forward strategy of freedom” and as a “forward strategy for freedom.” For instance, President Bush, State of the Union address, January 20, 2004; and, President Addresses American Legion, February 24, 2006.

3 President Bush Discusses Freedom in Iraq and Middle East, November 6, 2003, http://georgewbush-whitehouse.archives.gov/news/releases/2003/11/20031106-2.html.

4 Reuters, “Most Arab Leaders Survive to See Another Summit,” New York Times, March 27, 2006.

5 “Mideast Climate Change,” New York Times, March 1, 2005.

6 Tyler Marshall, “Changes in Mideast Blunt Bush’s Critics,” Los Angeles Times (published in Boston Globe), March 7, 2005, http://www.boston.com/news/world/articles/2005/03/07/changes_in_mideast_blunt_bushs_critics/.

7 Ari Z. Weisbard, “Militants at the Crossroads,” The Nation (web edition), April 24, 2003, http://www.thenation.com/article/militants-crossroads.

8 Jeffrey Bartholet, “How Al-Sadr May Control U.S. Fate in Iraq,” Newsweek, December 4, 2006; “Pentagon: Militia More Dangerous Than al Qaeda in Iraq,” CNN, December 19, 2006.

9 Michael R. Gordon and Dexter Filkins, “Hezbollah Said to Help Shiite Army in Iraq,” New York Times, November 28, 2006.

10 President Bush Calls for New Palestinian Leadership, June 24, 2002, http://georgewbush-whitehouse.archives.gov/news/releases/2002/06/20020624-3.html.

11 Ramsay Short, “Key Job for ‘Terrorist’ Hizbollah in Lebanon’s New Cabinet,” Daily Telegraph, July 20, 2005.

12 Efraim Karsh, Islamic Imperialism: A History (New Haven: Yale University Press, 2006), p. 209.

13 Joshua Muravchik, “Jihad or Ballot-Box?” Wall Street Journal, December 13, 2005.

14 Michael Slackman, “Egyptians Rue Election Day Gone Awry,” New York Times, December 9, 2005.

15 See the statistics compiled by icasualties.org using data from CENTCOM and the U.S. Department of Defense.

16 Carlotta Gall, “Taliban Threat Is Said to Grow in Afghan South,” New York Times, May 3, 2006; Carlotta Gall, “Attacks in Afghanistan Grow More Frequent and Lethal,” New York Times, September 27, 2006; Carlotta Gall and Abdul Waheed Wafa, “Taliban Truce in District of Afghanistan Sets Off Debate,” New York Times, December 2, 2006.

17 Associated Press, “U.K. Tracking 30 Terror Plots, 1,600 Suspects,” MSNBC.com, November 10, 2006, http://www.msnbc.msn.com/id/15646571/.

18 Dimitri Simes, “No More Middle East Crusades,” Los Angeles Times, January 9, 2007.

19 Anthony Cordesman, “Al Qaeda’s Small Victories Add Up,” New York Times, June 3, 2004.

20 Lawrence Kaplan, “Springtime for Realism,” The New Republic, June 21, 2004.

21 Michael Rubin, “Right War, Botched Occupation,” USA Today, November 27, 2006.

22 Max Boot, “Defending and Advancing Freedom: A Symposium,” Commentary, vol. 120, no. 4, November 2005, p. 24.

23 Victor Davis Hanson, The Soul of Battle (New York: The Free Press, 1999), p. 2.

24 On the motivation of totalitarian Islam, see Elan Journo, “Jihad on America,” The Objective Standard, vol. 1, no. 3, Fall 2006.

25 For a detailed consideration of what’s required to defeat the enemy, see John David Lewis, “‘No Substitute for Victory’: The Defeat of Islamic Totalitarianism,” The Objective Standard, vol. 1, no. 4, Winter 2006–2007; see also Yaron Brook and Alex Epstein, “‘Just War Theory’ vs. U.S. Self-Defense,” The Objective Standard, vol. 1, no. 1, Spring 2006.

26 MacArthur and the writer, Theodore Cohen, are quoted in Joshua Muravchik, Exporting Democracy: Fulfilling America’s Destiny, revised paperback ed. (Washington, DC: AEI Press, 1992), pp. 101–102.

27 President Addresses American Legion, February 24, 2006.

28 President Bush, State of the Union address, January 20, 2004.

29 Alexander Hamilton, James Madison, and John Jay, The Federalist Papers, edited by Clinton Rossiter (New York: Mentor, 1999), p. 49.

30 Associated Press, “Bush Doesn’t See Longtime U.S. Presence in Iraq,” FoxNews.com, October 19, 2004.

31 President Discusses the Future of Iraq, February 26, 2003, http://georgewbush-whitehouse.archives.gov/news/releases/2003/02/20030226-11.html.

32 Thom Shanker and Eric Schmitt, “American Planners Stick With the Scalpel Instead of the Bludgeon,” New York Times, March 27, 2003.

33 For more on the self-destructive rules of engagement governing U.S. forces, see Brook and Epstein, “Just War Theory.”

34 President Discusses the Future of Iraq, February 26, 2003.

35 State of the Union address, January 31, 2006.

36 President Bush, Address to the Nation on U.S. Policy in Iraq, January 10, 2007.

37 President Bush Speaks to United Nations, November 10, 2001.

38 Amy Belasco, The Cost of Iraq, Afghanistan, and Other Global War on Terror Operations Since 9/11 (CRS Report for Congress [RL33110], Congressional Research Service, Library of Congress), updated September 22, 2006, p. 11.

39 Churchill quoted in Michael Walzer, Just and Unjust Wars: A Moral Argument with Historical Illustrations, 3rd ed. (New York: Basic Books, 2000), p. 261.

40 President Bush, Swearing-In Ceremony, January 20, 2005; Inaugural Address, January 20, 2001.

41 The Holy Bible, King James Version (New York: American Bible Society, 1999; Bartleby.com, 2000), The Gospel According to St. Matthew.

42 President Bush, State of the Union, January 28, 2003, http://georgewbush-whitehouse.archives.gov/news/releases/2003/01/20030128-19.html.

43 President Bush, Address to a Joint Session of Congress and the American People, September 20, 2001, http://georgewbush-whitehouse.archives.gov/news/releases/2001/09/20010920-8.html.

44 George Bush, Presidential Address to the Nation, October 7, 2001, http://georgewbush-whitehouse.archives.gov/news/releases/2001/10/20011007-8.html.

45 National Strategy for Victory in Iraq, National Security Council, November 2005; http://georgewbush-whitehouse.archives.gov/news/releases/2005/11/20051130-2.html.

46 See, for example, “Blair Says Hope Can Fight Terror,” BBC News online, July 8, 2005.

47 President Bush, Address to the Nation on U.S. Policy in Iraq, January 10, 2007. Quotations as recorded in the transcript of the New York Times, January 11, 2007.

48 Nicholas Kristof, “Talking With the Monsters,” New York Times, October 10, 2006. Baker’s comment is quoted in this article.

49 We here offer a non-exhaustive discussion of the subject; for a detailed treatment of it, see Peter Schwartz’s book The Foreign Policy of Self-Interest: A Moral Ideal for America. See also Brook and Epstein, “Just War Theory.”

50 On the practicality of such a campaign, see Lewis, “No Substitute for Victory.”

 

Further Reading

Ayn Rand, “The Moral Meaning of Capitalism,” from For the New Intellectual (1957): An industrialist who works for nothing but his own profit guiltlessly proclaims his refusal to be sacrificed for the “public good.”

Ayn Rand, “The Objectivist Ethics,” from The Virtue of Selfishness (1961): What is morality? Why does man need it? And how do the answers to these questions give rise to an ethics of rational self-interest?