Government Debt: Jefferson and Gallatin Were Right

Chris Edwards

The world economy is getting rattled this week by the consequences of excessive government debt. Greece may be cut off from its international creditors, and Puerto Rico announced that it cannot make full payments on its massive debt. In both cases, years of excessive spending are sadly dealing a crushing blow to the living standards of millions of average citizens.

These jurisdictions have fallen into the abyss, but debt has risen to dangerous levels in many places around the world, including in our federal government. The root of the problem is Keynesian economics, which has taught governments since the 1930s that deficit spending is good for the economy. That message has been fiscal catnip for politicians, who have eagerly run deficits year after year, and built up debt to massive levels. To compound the problem, some economists—such as Paul Krugman—have been falsely recommending that we not worry too much about rising debt because it is “money we owe to ourselves.”

The effects of Keynesianism can be seen in federal budget data. From 1791 to 1929, the federal government balanced its budget in 68 percent of the years. But from 1930 to 2015, the government balanced its budget in just 15 percent of the years. The result is that federal debt has risen to levels unprecedented in our peacetime history.


Economist James M. Buchanan pointed his finger squarely at Keynesianism for the decline in beneficial “Victorian fiscal morality,” which had constrained the political incentive to deficit-spend in our early history. With the rise in Keynesianism, the “modern era of profligacy” was born, he said. Looking ahead, official projections show federal debt soaring in coming decades unless we get the profligacy under control.

Battles over federal government debt go back to the beginning of our nation, as I discuss in recent testimony to the House Budget Committee. On one side in the 1790s were Treasury Secretary Alexander Hamilton and other Federalists, who favored a perpetual federal debt, believing that it would create economic and political benefits.

On the other side were Thomas Jefferson and Albert Gallatin, who were appalled by high debt, and led the opposition to Hamilton’s fiscal policies. They believed that government debt was economically dangerous and politically corruptive. And they argued that debt enriched the financial elite at the expense of the people, while unjustly imposing burdens on future generations. They were right on all counts.

Fortunately for the nation, Jefferson’s election to the White House in 1800 was the beginning of the end for the big-government Federalists. Jefferson and his Treasury Secretary Gallatin substantially cut the debt before the War of 1812 intervened. After the war, Jeffersonian leaders pushed once again to run surpluses and pay down the debt. That anti-debt drive succeeded with the complete extinction of federal debt under President Andrew Jackson in the mid-1830s.

Jefferson’s anti-debt views more or less held sway in national politics through to the 1920s. The government racked up debt during wars, but it always paid it down in subsequent years. The rise of Keynesianism in the 1930s ended all that, reviving misguided Hamiltonian ideas in favor of government spending, debt, and central planning. Those ideas have caused a great deal of damage in recent decades, and have led to perpetually unbalanced federal budgets.

The Treasury recently announced plans to replace Hamilton on the $10 bill. Hamilton was a brilliant man and an important Founding Father, but he was on the wrong side of the crucial debt issue. If he is going to be replaced, we should swap him out for Albert Gallatin. Gallatin’s absence on any of our coins or notes is a major oversight given that he was a highly accomplished Treasury Secretary for 13 years, serving much longer in that post than Hamilton.

Gallatin was also a congressman, senator, foreign minister, a founder of New York University, and an expert on Native American languages. Gallatin was a stellar public servant, and he was on the right side of the government debt issue. He also favored transparency in government finances, and worked to present full and accurate Treasury accounts to the public. Furthermore, Gallatin played a leading role in the Whiskey Rebellion, opposing an unfair excise tax scheme pushed by Hamilton.

With the perils of government debt now more clear than ever, it would be a good time to give Gallatin his due and feature him on our currency. That would honor a man who was every bit as smart as Hamilton, but showed more foresight in recognizing that politicians and credit markets are a toxic combination.

Chris Edwards is editor of DownsizingGovernment.org at the Cato Institute.

Six Humpty Dumptys Playing Calvinball

Michael F. Cannon

In King v. Burwell, all nine Supreme Court Justices agreed on one thing. The King challengers claimed the Patient Protection and Affordable Care Act (ACA) authorizes the Internal Revenue Service to issue tax credits and impose the related penalties only “through an Exchange established by the State,” and not through exchanges established by the federal government. “Petitioners’ arguments about the plain meaning of Section 36B are strong,” Chief Justice John Roberts wrote, and their interpretation is “the most natural reading of the pertinent statutory phrase.” Justice Antonin Scalia agreed, finding the meaning of that phrase “so obvious there would hardly be a need for the Supreme Court to hear a case about it.”

There was no dissent about the plain meaning of the phrase “through an Exchange established by the State.” All seven of the other Justices joined one of those two opinions. Nor was there dissent about the fact that that phrase, used repeatedly in the statute, is the only provision of the Act that speaks directly to the question presented. Not a single Justice lent credence to the government’s assertions that this was a meritless case, or one that the Court should never have accepted. Nor was there dissent about the consequences of that provision’s plain meaning in the face of broad state resistance to the ACA. All agreed that withholding tax credits in the thirty-four states with federal exchanges could lead to adverse selection in those states, with premiums climbing higher and higher in a “death spiral.”


Where disagreement emerged was over the question of whether the former should alter the latter — whether the potential for adverse consequences “compels” the Court to disregard the universally acknowledged meaning of the operative text. The Court split six to three in favor of rewriting plain text, and rendering the requirement “established by the State” a nullity. The Chief Justice wrote for the majority, Scalia for the dissent. The effect of the ruling is that this and future administrations must do what everyone agrees the plain meaning of the operative text does not permit: spend hundreds of billions of dollars and tax 70 million employers and individuals in those thirty-four states.

Others can speak with greater authority about what this ruling means for textualism, contextualism, purposivism, and other legal doctrines. I can speak as one who relied on the plain meaning of that provision; who encouraged states not to establish exchanges because of the power that provision gave them to affect the course of the ACA; who helped lay the groundwork for King and three related cases; who can reasonably claim to have done more research on this aspect of the statute and its legislative history than any living human being; and who followed King all the way to the Supreme Court.

A number of things stand out about the opinion of the Court.

First, the Chief Justice would have benefited from spending more time with the statute. One of the factors that leads him to conclude “the Act may not always use the phrase ‘established by the State’ in its most natural sense [and] the meaning of that phrase may not be as clear as it appears” has to do with the Act’s definition of “qualified individuals.” He writes:

[Section 1312] defines the term “qualified individual” in part as an individual who “resides in the State that established the Exchange.”…And that’s a problem: If we give the phrase “the State that established the Exchange” its most natural meaning, there would be no “qualified individuals” on Federal Exchanges. But the Act clearly contemplates that there will be qualified individuals on every Exchange… It would be odd indeed for Congress to write such detailed instructions about customers on a State Exchange, while having nothing to say about those on a Federal Exchange.

As Jonathan Adler and I explain elsewhere, Roberts discovers a tension that simply isn’t there. Congress naturally referred to “the State that established the Exchange” because in Section 1312, Congress was speaking to the states and under the presumption that they would comply with the directive in the previous section (Section 1311) that each state “shall” establish an Exchange. Roberts is also incorrect that the Act “ha[s] nothing to say” about qualified individuals on federal exchanges. Just two sections later, indeed the instant Congress finished laying out the rules for state-established exchanges, Section 1321 drops the presumption of state cooperation and provides that if a state doesn’t establish its own exchange, “the Secretary shall take such actions as are necessary to implement such [a] requirement[].”

Then again, Roberts managed to conclude that “by the State” could be read to mean “by the federal government,” even though he acknowledged Congress explicitly defined “State” in a way “that does not include the Federal Government.” So perhaps spending more time with the statute would not have helped.

Second, Roberts’s opinion might benefit from basic logic. “The conclusion that Section 36B is ambiguous,” he writes, “is further supported by several provisions that assume tax credits will be available on both State and Federal Exchanges.” But do they? He cites two provisions in the section of the Act where Congress directs states to establish exchanges and requires them to furnish information about “the tax credits [available] under section 36B.” He cites a third that directs state and federal exchanges to report the information on people who do and don’t receive tax credits, and “aggregate amount of any advance payment of such credit,” if there was one. Petitio principii.

Third, for all of Roberts’s appeals to context, his opinion might benefit from spending more time with the broader context of the Act. Roberts concludes that Congress could not have meant “established by the State” literally because “Congress passed the Affordable Care Act to improve health insurance markets, not to destroy them.” Oh? As Roberts acknowledged the last time he adopted a saving construction of the ACA, Congress included in the Act a Medicaid expansion that would have destroyed Medicaid in any state that did not participate. The consequences of that part of the Act would have been even more jarring. Health coverage for the state’s most vulnerable residents would have disappeared.

Congress enacted both a Medicaid expansion and a system of Exchanges that allowed states to destroy what Congress sought to create. Congress did so for the same reason both times: it didn't have the votes for anything else. Until today, I guess.

Fourth, his opinion might benefit from spending more time with the ACA’s legislative history. Roberts maintains that while Congress might have been willing to tolerate adverse selection in “comparatively minor programs,” it is “implausible” that Congress would have been willing to do so in “the very heart of the Act.” Yet all sides acknowledge that another leading bill advanced by Senate Democrats did exactly that.

In 2009, Democrats on the Senate's Health, Education, Labor, and Pensions (HELP) Committee reported to the full Senate a bill that would have withheld exchange subsidies in any state that failed to implement that bill's employer mandate. The HELP bill was ultimately merged with a Finance Committee-reported bill to produce the ACA. All sides agree that that condition sat like a dagger pointed at the heart of the Act.

Indeed, the HELP bill's community-rating price controls were even tighter than the ACA's. In other words, many ACA supporters were willing to tolerate an even greater risk of adverse selection than would exist under a plain-meaning interpretation of the ACA, which Roberts was "compel[led] to reject" because Congress supposedly would never tolerate such a risk.

Embarrassingly, Roberts even cites testimony from a 2009 HELP Committee hearing to show that Congress understood (and did all it could to avoid) the danger of adverse selection, while ignoring legislation the HELP Committee produced that shows ACA supporters rejected the very lesson Roberts thinks Congress internalized.

Fifth, the Court likely forestalled any challenges to other ways the IRS is illegally implementing the ACA’s premium tax credits and employer mandate.

University of Iowa professor Andy Grewal has identified several additional instances where the IRS expanded the availability of tax credits beyond the clear and unambiguous eligibility limits Congress imposed. Somewhat ironically, the agency “effectively provides the largest tax credits to persons who don’t satisfy the statutory criteria.” Some of these eligibility expansions would trigger penalties against employers under the ACA’s employer mandate, and therefore “could face judicial challenge.”

If six Justices were willing to rewrite a congressionally defined term like "State," however, there may not be five votes on the Court to enforce a numerical eligibility limit like the requirement that tax-credit recipients have household income above one hundred percent of poverty. I mean, issuing tax credits to people below the poverty line advances the goal of "ensur[ing] that anyone who wanted to buy health insurance could do so," right? Even if five votes do somehow exist, this opinion hid them so well I doubt any plaintiff would be willing to go to the trouble. The IRS may understandably see itself as having free rein to expand and contract both the ACA's premium subsidies and its mandate penalties. Because democracy.

Sixth, speaking of democracy, the Chief Justice’s majority opinion in King v. Burwell, like his controlling opinion in NFIB v. Sebelius, has further protected the Act from democratic accountability.

MIT health economist Jonathan Gruber infamously explained that architects of the ACA deliberately designed it to hide its taxes and transfers and thereby skirt democratic accountability. If those taxes were transparent, Gruber explained, public opposition would have prevented it from passing. Put differently, the ACA’s authors knew that voters would have rejected the ACA if they understood how it works, so they hid what they were doing. The ACA was an undemocratic enterprise from the start.

In NFIB v. Sebelius, Roberts positively gutted the most important constraint the Constitution imposes on Congress — democratic accountability — to save the ACA. He reasoned that if the individual mandate could fairly be viewed as a tax, it was the Court’s duty to uphold it as an exercise of Congress’s taxing power. How Congress described the mandate (i.e., whether it invoked its taxing power) was not relevant. What mattered was whether any of Congress’s enumerated powers could produce such a measure.

The so-called "magic words" doctrine has merit, but not when it allows Congress to defeat democratic accountability. In this case, Congress deliberately chose not to create the mandate using Congress's taxing power because doing so would have prevented the bill from passing. Just as the enumerated powers doctrine prevented Congress from enacting an individual mandate under the Commerce Clause, the Constitution's democratic constraints prevented Congress from imposing a tax on Americans who don't buy insurance. It is difficult to imagine a clearer instance where democratic accountability prevented Congress from using an enumerated power.

When Roberts applied the “magic words” doctrine to the ACA’s individual mandate, he allowed Congress to blow right through the constitutional constraint of democratic accountability. Enabling Congress to evade this constraint is no more defensible than allowing it to evade the other. Roberts helped Congress enact a tax that Congress never could have imposed by itself.

Three years later, the Court is presented with the fact, and agrees, that the plain terms of the ACA give states the ability to veto the Act's premium-assistance tax credits, cost-sharing subsidies, employer mandate, and (largely) its individual mandate. If enough states exercise those vetoes, the cost of ACA coverage would become transparent to such an extent that Congress would be forced to reopen the Act. Thirty-four states exercised those vetoes. A critical mass if ever there was one.

Indeed, ACA opponents swept into state office in 2010, 2011, and 2012 on waves of public opposition to that law. When the Obama administration chose to implement the disputed taxes and subsidies in those states anyway, it disenfranchised its political opponents in the states. It made the ACA appear more workable than, as all nine Supreme Court Justices acknowledge, the plain language of the statute would suggest. It insulated itself, and its allies in Congress, from democratic accountability for their support for the ACA. By promising to spend, and then spending tens of billions of dollars, it created a constituency with funds that, as nine Supreme Court Justices agree, the plain text of the ACA does not authorize it to spend. The Obama administration and its allies gained tremendous political advantage by departing from the plain meaning of the ACA. By baptizing the administration’s departure from the text of the ACA to its own political advantage — all in service of the ACA’s underlying goals, of course — the Court has once again protected the ACA from democratic accountability.

What it all means is that the ACA’s opponents haven’t failed. They succeeded. They read the bill. They stopped implementation in thirty-four states. Under the plain terms of the Act, that means they exposed the true cost of “ObamaCare” coverage. They forced Congress to reopen the statute. They won, and without abrogating any laws. They won using good, clean democracy.

Seventh, unfortunately, winning only gets you so far when you're up against six Humpty Dumptys playing Calvinball. When he blew past Congress's express definition of the term "State," Roberts inadvertently omitted his cite to Lewis Carroll:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master — that’s all.”

When he rewrote “established by the State” to include federal Exchanges and then glossed over the absurdities that interpretation creates elsewhere in the Act by claiming the phrase “mean different things in different places,” he likewise omitted his citation to Calvin and Hobbes:

The only permanent rule in Calvinball is that you can’t play it the same way twice.

Michael F. Cannon is director of health policy studies at the libertarian Cato Institute.

Scalia’s Obamacare Argument Is Stronger than Roberts’

Ilya Shapiro

This Obamacare case was supposed to be different. King v. Burwell was not a constitutional challenge that threatened or promised to roll back federal power. It merely asked the U.S. Supreme Court to interpret four simple words, “established by the state,” that recur multiple times in myriad variations to distinguish among the different kinds of exchanges through which people can buy insurance under the Affordable Care Act: state, federal, territorial, private, other.

Pretty esoteric and sui generis — as many exercises in interpreting complex statutes are — although at stake were subsidies flowing to millions of people and mandates/penalties constricting millions of others.

And yet the result was the same: Chief Justice John Roberts, who had been the apple of George W. Bush's eye, twisted key words to vindicate the administration of Barack Obama.


Take this sentence from one of the opinions: “If we give the phrase ‘the State that established the Exchange’ its most natural meaning, there would be no ‘qualified individuals’ on Federal Exchanges.” You’d think that I pulled that line from Justice Antonin Scalia’s dissenting opinion. After all, it takes the statutory text on its face, with the interpreted result that Congress gave states the incentive to create exchanges by making their citizens eligible for tax credits if they do.

But you'd be wrong. It comes from the majority opinion, in which the chief justice admits, as he did three years ago in the individual mandate case of National Federation of Independent Business v. Sebelius, that those challenging the administration are correct on the law. Nevertheless, again as he did before, Roberts contorted himself to ignore that "natural meaning" and rewrite Congress's "inartful" scheme, this time such that "exchange established by the state" means exchange established by anyone. Scalia rightly called this novel interpretation "absurd."

Of course, Roberts explained his twistification by finding it "implausible that Congress meant the Act to operate in this manner," to deny tax credits for health insurance as part of legislation intended to expand coverage. Yet it's hardly implausible to think that a statute that still says that states "shall" set up exchanges — the drafters forgot to fix this bit after lawyers pointed out that Congress can't command states to enforce federal law — would effectively give states an offer nobody thought they'd refuse.

It was supposed to be a win-win: States would run health care exchanges (yay federalism!) and everyone who needed subsidies to afford Obamacare’s more expensive policies would get them (yay universal health care!).

But a funny thing happened on the way to utopia, and only 16 states (plus the District of Columbia) took that bargain, perhaps having been burned too many times already by the regulations that accompany any pots of “free” federal money. And so we were stuck with way too many people in the federal exchange, covering the states that didn’t opt in. Obamacare the reality didn’t accomplish Obamacare the dream. And the law as written said that subsidies for health insurance — in the form of tax credits — were to be given only to people in state exchanges, not the federal one.

That may not be the absolutely correct reading of the Affordable Care Act story. But it’s nothing if not plausible.

And it should’ve been the end of the legal analysis: The law’s clear text produces a plausible result, so courts should enforce that “natural meaning.” As Scalia put it, “Words no longer have meaning if an Exchange that is not established by a State is ‘established by the State.’”

“Under all the usual rules of interpretation,” he continued, explaining the obvious, “the Government should lose this case.” Alas, “normal rules of interpretation seem always to yield to the overriding principle of the present Court: The Affordable Care Act must be saved.”

The best that can be said about the ruling is that it disclaimed any reliance on the so-called Chevron doctrine, which tells courts almost always to defer to executive agency interpretations of statutes. (A more recent case expanded Chevron even to questions of the agencies’ own authority.)

That pernicious bit of jurisprudence has enabled the administrative state to explode and, had Roberts relied on it, would have allowed the executive branch to legislate without any judicially enforceable limit. Instead, again like three years ago, we have a horrendous bit of wordplay that subverts the rule of law to preserve the operation of an unpopular program that has done untold damage to the economy — but is really a ticket good for the ACA train only.

No, instead of allowing agencies to rewrite laws, the chief justice on Thursday gave himself that power. Scalia renamed the law at issue “SCOTUScare,” but really it deserves the moniker RobertsCare.

Ilya Shapiro is a senior fellow in constitutional studies at the Cato Institute, a libertarian think tank, and editor-in-chief of the Cato Supreme Court Review.

Top Dozen Villains in Greek Soap Opera: Who Is to Blame as Greece and Euro Stagger toward the Brink?

Doug Bandow

Negotiations in Brussels to resolve the Greek fiscal crisis appear deadlocked, with Athens heading toward default on Tuesday. German Chancellor Angela Merkel insisted that Greece make a deal before the markets open Monday: Germany "will not be blackmailed." Greek Prime Minister Alexis Tsipras responded by denouncing "blackmails and ultimatum" and scheduling a referendum on the deal on July 5.

The European Union was supposed to create a de facto United States of Europe. Although the original constitution was rejected by Dutch and French voters, the Eurocratic elite forged ahead with a treaty, which did not require popular ratification. Only the Irish voted, and they first said no. But under pressure from virtually every establishment individual and institution across the continent, the Irish did as they were told and voted yes the second time.

In 2009 the Lisbon Treaty finally took effect. The result was supposed to be a new Weltmacht, a putative superpower with a president, foreign minister, and parliament, moving toward ever greater centralization. The Europeans prided themselves on answering Henry Kissinger, who so many years ago sarcastically asked for Europe’s phone number.

Alas, after last January's Greek election it was obvious that whoever answers that line does not speak for Greece. Indeed, it isn't clear if the EU's leaders, many of them appointees confirmed by parliamentarians elected by national voters primarily using their ballots as protest votes, represented anyone in Europe other than themselves and the Eurocrats, an amalgam of bureaucrats, academics, journalists, businessmen, politicians, and lobbyists who dominate Brussels.

The European story is reaching its climax and no one knows how it is going to turn out.

To most EU leaders common people are an impediment. The Eurocrats reflexively intone “more Europe” in answer to every question, but voters increasingly are supporting protest parties, some populist, some worse. In countries like the Netherlands the rabble-rousers seem destined for government. In London it is the government, led by Prime Minister David Cameron, which is seeking to weaken Brussels’ control, after which it will hold a referendum on continuing membership in the body.

The most fundamental problem remains the "democratic deficit," of which then-Czech President Vaclav Klaus spoke. The EU began as a forum for economic cooperation, mainly to help integrate West Germany back into Europe. The later Common Market created a relatively free trade zone for member states, breaking down import barriers. But a forum for increased economic liberty offered little to the Eurocrats, who, like most denizens of Washington, D.C., are by inclination and profession meddlers. Their objective is to coercively reorder society. That requires a strong central government. Hence creation of the European Union.

Advocates weren’t shy about their ambitions. In 1992 German Chancellor Helmut Kohl predicted “creation of what the founding fathers of modern Europe dreamed of after the war, the United States of Europe.” Of course, Kohl and his fellow Eurocrats were about the only people with such a fantasy. But that didn’t matter, since they had the power. Or so they thought.

After all, they surmounted the embarrassing defeat of the European constitution. The Euro was another step in the unification process. The commercial union added a monetary union, currently joining 19 nations. The drafters recognized that leaving fiscal policy to individual members created a dangerously unbalanced system. But for the Eurocrats this flaw actually was a benefit: they believed further political integration would be necessary in response. A more powerful EU was expected, one that really would be more like the United States of America.

Alas, the European Union exalts process, not substance. The EU has three presidents, who compete for attention and authority. The parliament has only passing connection to the people of Europe. No one salutes the flag. Obnoxious, meddling rules pour forth from Brussels and, like those issued by Washington, D.C., do more to obstruct than advance daily life. Most important, virtually no one feels loyal to the new behemoth emerging one dictate at a time. When it comes to football (soccer), no one identifies with Europe, but with Germany, Spain, Portugal, England, and every other individual country. There are no EU patriots. No European would die for Brussels, not even the Belgians, who barely averted the break-up of their badly divided nation.


But the EU carries on as a conglomeration rather than a unit, secure in the support of the vast majority of the continent’s elite, including national governments. The Euro crisis has shaken this political foundation, however. The EU was sold as an organization of equals, which promised its peoples there would be no bail-outs. It turned into a transfer union. In wealthy nations, most notably Germany, people have tired of subsidizing their neighbors. In deeply indebted countries people have tired of economic sacrifices imposed to secure outside aid. And everywhere Europeans increasingly resent the growing internationalism and cosmopolitanism imposed by the center.

Until January the popular revolt was contained. But then radical left-wing Syriza, many leaders just a step or two away from communism, won a near majority. The new government rejected the Brussels consensus, overturned previously negotiated austerity, and demanded debt relief. However, the other European nations, especially those which had implemented their own wrenching reform programs, were not inclined to toss more good money after bad to those who tore up past agreements. Tsipras and his colleagues spent the last five months frittering away what little good will with which they were greeted. Now default threatens, which most likely would mean a Greek exit from the Eurozone (“Grexit”) and possibly even from the EU.

However the crisis—running so long that it should be referred to as a regularly scheduled soap opera—is resolved, the march toward ever greater power in Brussels appears over. Euroskeptic parties are on the rise, and other radical left and independent groups have followed Syriza's example in Italy and Spain. If London wins concessions to keep it within the EU, other nations likely will follow with their own demands, leaving the European project something less.

Nevertheless, the ongoing drama continues to entertain. Who are the dozen most spectacular villains?

  1. Helmut Kohl. The first chancellor of a united Germany, Kohl agreed to sacrifice the legendary German Mark for the Euro even though he knew he was creating a monetary union without a fiscal union. He assumed that the resulting problems would push Europe toward greater fiscal coordination and central control. Instead, the Eurozone risks a crack-up.
  2. The Greek political establishment. Both the dominant parties, Pasok and New Democracy, profited from the sclerotic and venal state which they created, filled with clients dependent on government hand-outs. Athens lied about its fiscal status to meet the Euro’s criteria and then borrowed wildly to throw a grand party for a people more inclined to security and leisure than entrepreneurship and productivity.
  3. Greece’s creditors. They lent money at near-German interest rates to a nation as unlike Germany as exists. After all, what could possibly go wrong? Athens had been inducted into the sacred Eurozone. Many European banks, including in Germany, went in deep, risking their own solvency. Thus, the entire bail-out program was more a rescue of banks than of Greeks, who ended up saddled with more debt, though with longer maturities and at lower interest rates.
  4. The International Monetary Fund. Long one of the world’s least popular organizations, the IMF used to spark riots in foreign nations when its officials arrived to negotiate a new “adjustment” program. Originally created to support a system of fixed exchange rates, it transformed itself into a “development” bank, subsidizing socialist and authoritarian regimes and creating long-term financial dependency around the globe. Since then it has become one of the primary financiers of troubled European nations, most notably Greece.
  5. Valéry Giscard d’Estaing. One of the primary architects of the initial, and later rejected, European constitution, he pushed forward with the Lisbon Treaty, constructed to leapfrog the European people in creating a more powerful European Union, a consolidated government more characteristic of a traditional nation state. He exemplifies the corrupt establishment which benefited most from Brussels’ rise.
  6. Alexis Tsipras and Syriza. This disparate movement of the left believed in fiscal alchemy—more government spending, taxing, and regulating would somehow yield a roaring economy. Filled with the chutzpah of deadbeat borrowers, Greece’s new government believed that the plurality which brought it to power could overrule the electorates in 27 other countries which would pay for additional bail-outs. Perhaps worse, in the high-stakes personal negotiations the Greek government believed that studied insult would generate concessions rather than contempt.
  7. The European Central Bank. With the traumatic memory of post-World War I hyper-inflation still haunting Berlin, Germany agreed to cede monetary authority to a common central bank only after receiving due assurances of fiscal propriety. However, the ECB shifted from economics to politics when it began buying the bonds of deeply indebted European states, most notably Greece, providing a sub rosa subsidy to the improvident. Like America’s Federal Reserve, the European Central Bank believed itself to be responsible for protecting elites rather than the people ultimately stuck paying the bill.
  8. The Greek people. Life long seemed wonderful in Greece, which gloried in its system of relaxed but corrupt inefficiency. Life became even better with Euro-subsidized borrowing, which allowed people to prosper despite an economy littered with bloated bureaucracies and privileged cartels, and hamstrung by debilitating regulations and profiteering politicians. When the bill finally came due, they reacted with shocked outrage and insisted that someone somewhere else had to be responsible.
  9. British Prime Minister Tony Blair. The heir to the world’s longest and most influential parliamentary democracy, this exemplar of “New Labour” sacrificed national sovereignty in ratifying the Lisbon Treaty. Now the British people are inclined to reverse course (“Brexit”) absent major concessions, and the Eurocrats are reluctantly retreating, promising change. To Britain’s great gain, Blair was unable to drag his nation into the Eurozone; then-Chancellor and later Prime Minister Gordon Brown helped block that fool’s errand.
  10. The French political elite. France has one of the most venerable and celebrated histories in Europe. However, its people suffer from the stultifying state controls and high taxes for which their nation is famous. Avowedly conservative Nicolas Sarkozy proved that the right could be as statist as the left, while socialist Francois Hollande, who took power in a close election, doubled down. He was forced to reverse course as the economy crashed and burned. So poorly is he regarded that the National Front’s Marine Le Pen could edge him out for the likely run-off with the conservative candidate, expected to be Sarkozy again, in the 2017 presidential contest.
  11. The European Parliament. Nominally the one democratic arm of the EU, the EP has been pressing for more authority. Yet it may be the one legislative assembly in the world which no one votes for on its own merit. The best predictor of EP election results is the status of ruling parties in individual nations. More often than not voters use MEP contests as an opportunity to protest against unpopular rulers. The exact parliamentary make-up therefore results more from political happenstance than conscious choice.
  12. Angela Merkel. She has thrice convinced German voters to place their government in her supposedly safe hands. However, she represents the most insipid and disappointing qualities in a politician. Blessed with responsibility for governing Europe’s most prosperous and populous nation, she has spent her years in office doing as little of consequence as possible. The liberalish economic reforms which enable Germany to prosper today were put in place by the last Social Democratic Chancellor, Gerhard Schroeder. When ruling with the business-friendly Free Democratic Party, Merkel allowed policy to drift. Her most important legacy will have been to force her people to bail out virtually everyone in Europe. But she is a Eurocrat to the core. Opined Merkel: “We have a common currency, but no common political and economic union. And this is exactly what we must change. To achieve this—therein lies the opportunity of the crisis.”

What an opportunity!

The European story is reaching its climax and no one—since the show is live, not prerecorded—knows how it is going to turn out. Almost certainly the European confederation is going to loosen. The Eurocrats no longer can stop the retreat. Worried an overwrought Merkel: “If the Euro fails it’s not just the currency that fails, but Europe and the idea of European unification.”

Actually, the bigger threat to the grand European Project is rising popular resistance. At the time the Lisbon Treaty was ratified, polls indicated that majorities in half of the EU members would have rejected the pact. Since the treaty was not put to a vote, however, the Eurocrats were able to ram the changes through. But populist forces are gaining ground in national and EP elections. Even in Germany opposition to the Euro has spawned a new party, Alternative for Germany. France’s Le Pen, who has an outside chance of winning the French presidency in 2017, called the Greek election a “victory for the people against European oligarchy.” Earlier this week she declared: “Today we’re talking about Grexit, tomorrow it will be Brexit, and the day after tomorrow it will be Frexit.”

Greece and the European establishment might yet come to terms and the Euro might stagger along. But this almost certainly is not the Eurozone’s last crisis. And it certainly is the end of the easy assumption that the incantation of “more Europe” will solve all problems. It appears that many Europeans have had just about as much Europe as they can stand. In coming months and years the debate is likely to be over how much and how fast they can roll back “Europe.”

Doug Bandow is a Senior Fellow at the Cato Institute and a former Special Assistant to President Ronald Reagan.

Taper Talk, and the $10 Bill

Steve H. Hanke

Since May 2013, Fed taper talk has fluctuated between hot and cold. When it’s hot, the markets anticipate a monetary tightening and prices become volatile.

Recently, speculation about just when the Fed will increase interest rates has reared its head again. Since early 2013, I have said that the Fed would not act until late 2015. Well, that date is now approaching, and I think the Fed will act, but later rather than earlier.

The U.S. monetary stance remains schizophrenic and tight. In consequence, the U.S. remains in a growth recession — growing, but below the trend rate.”

The U.S. monetary stance remains schizophrenic and tight. In consequence, the U.S. remains in a growth recession — growing, but below the trend rate. The CFS Divisia M4 — the most important measure of the money supply for those of us who embrace a monetarist approach to national income determination — is growing at an anemic year-over-year rate of 2.8 percent.

How could this be? After all, over the past few years, the Fed has been engaged in the largest quantitative easing program in its history. To find explanations, we must revert to John Maynard Keynes at his best. Specifically, we must look at his two-volume 1930 work, A Treatise on Money. Keynes separates money into two classes: state money and bank money.

State money is the high-powered money (the so-called monetary base) that is produced by central banks. Bank money is produced by commercial banks through deposit creation.

Today, bank money accounts for about 80 percent of the total U.S. money supply, measured by M4. Anything that affects bank money dominates the production of money. So, we must look at bank regulations — courtesy of the Basel regulatory procedures and the Dodd-Frank legislation. These new regulations have been ill-conceived, procyclical, and fraught with danger. Indeed, bank money, the elephant in the room, has been struggling under a very tight monetary policy regime since the financial crisis of 2008–2009. This has forced the Fed to keep state money on an ultra-loose leash. The net result of this schizophrenic, tight monetary policy stance has been a sluggish growth rate in broad money and a continued growth recession, absent inflation, in the U.S. The accompanying chart tells that tale. But that’s not the only tale swirling around Washington, D.C.


In 2013, the U.S. government decided that the greenback needed a facelift. That didn’t raise eyebrows. But Treasury Secretary Jack Lew’s June 17th announcement did: Alexander Hamilton (1755-1804) — the first and foremost Treasury secretary — would be demoted and share the ten-dollar bill with a yet-unnamed woman. That the move shocked so many shows just how political currency can be.

Just how great was Hamilton? A recent scholarly book by Robert E. Wright and David J. Cowen, Financial Founding Fathers: The Men Who Made America Rich, begins its pantheon of greats with a chapter on Alexander Hamilton, aptly titled “The Creator.” But I am getting ahead of the story.

A graduate of King’s College (now Columbia University), Hamilton began his professional career as George Washington’s Chief of Staff during the Revolutionary War. During his varied career, Hamilton was also a prolific journalist. His most famous journalistic project was a series of 85 opinion pieces calling for ratification of the Constitution. These essays, known as The Federalist Papers, are the sources most frequently cited by the U.S. Supreme Court. They were published in 1787 and 1788 in New York City’s Independent Journal. These important essays — written under the pseudonym “Publius” by Alexander Hamilton, James Madison, and John Jay — were of very high quality and made the case for the Constitutional Convention’s handiwork. In passing, it is worth mentioning that Hamilton organized the project, wrote most of the essays, and, of all the Founding Fathers, performed the most intellectual work for the least historical credit. That said, two notable economists have given Hamilton his due. Lionel Robbins thought The Federalist Papers were “the best book on political science and its broad practical aspects written in the last thousand years.” And if that were not enough, Milton Friedman wrote in 1973 that Federalist No. 15, written by Hamilton, “contains a more cogent analysis of the European Common Market than any I have seen from the pen of a modern writer.”

Hamilton’s prowess as a writer and journalist wasn’t a one-shot affair. He drafted a large part of George Washington’s famous Farewell Address, which was published in the American Daily Advertiser. And only three years before his untimely death, resulting from a wound inflicted in a duel with Aaron Burr, Hamilton founded the New-York Evening Post.

Alexander Hamilton was also a distinguished lawyer. He took on many famous cases out of principle. After the Revolutionary War, the state of New York enacted harsh measures against Loyalists and British subjects. These included the Confiscation Act (1779), the Citation Act (1782) and the Trespass Act (1783). All involved the taking of property. In Hamilton’s view, these acts illustrated the inherent tension between popular democracy and the rule of law. Even though the acts were widely popular, they flouted fundamental principles of property law. Hamilton carried his views into action and successfully defended — in the face of enormous public hostility — those who had property taken under the three New York state statutes.

After the Constitution was ratified and George Washington was elected President, the new federal government lacked credibility. Public finances hung like a threatening cloud over the government. Recall that paper money and debt were innovations of the colonial era, and that once the Revolutionary War began, Americans used these innovations to the maximum. As a result, the United States was born in a sea of debt. A majority of the public favored a debt default. Alexander Hamilton, acting as Washington’s Secretary of the Treasury, was firmly against default. As a matter of principle, he argued that the sanctity of contracts was the foundation of all morality. And as a practical matter, Hamilton argued that good government depended on its ability to fulfill its promises.

Hamilton won the argument and set about digging the country out of its financial debacle. Among other things, Hamilton was — what would today be called — a first-class financial engineer. He established a federal sinking fund to finance the Revolutionary War debt. He also engineered a large debt swap in which the debts of individual states were assumed by the newly created federal government. By August 1791, federal bonds sold above par in Europe, and by 1795, all foreign debts had been paid off. Hamilton’s solution for America’s debt problem provided the country with a credibility and confidence shock.

Hamilton’s legacy is — like it or not — the U.S. federal system of government. It follows rather closely the outlines laid out by Hamilton and his fellow Federalists. This, of course, probably makes Hamilton’s lifelong nemesis, fellow Founding Father and President Thomas Jefferson, turn in his grave. And Jefferson is not alone. Indeed, Hamilton was a lightning rod in his day and has remained one ever since. In consequence, his fame has experienced ups and downs.

When Hamilton died in 1804, Jeffersonians seized the opportunity to launch a smear campaign to diminish the reputation of their rival. This campaign was effective. Hamilton’s reputation slumped from his death until the 1860s.

Once the Civil War broke out, Hamilton’s star rose. His popularity began to surge in the North because he had been an active abolitionist. His fame would peak during the Gilded Age (1873-1900), a time when America experienced unprecedented industrialization and the spread of high finance — both Hamiltonian features.

With the U.S. stock market crash in 1929 and the Great Depression, Hamilton’s star plummeted, as Hamilton’s name was associated with Wall Street, banking, and high finance. It is not surprising that Hamilton proved to be an inviting target. Indeed, Franklin Delano Roosevelt, in his 1932 Commonwealth Club Address, lambasted Hamilton.

Hamilton would stay down until 1982, when the Federalist Society was founded during the first Reagan administration. Hamilton’s image remained on the upswing until the Great Recession of 2008–2009.

The current crisis brought with it anti-Wall Street, anti-bank, and anti-capitalist sentiments. And, yes, Hamilton’s fame took a hit once again. To relegate the great Hamilton to a bit part on the $10 bill goes too far, however. But when it comes to currencies, we should never be surprised by what politicians can serve up.

Steve H. Hanke is a professor of Applied Economics at The Johns Hopkins University in Baltimore and a Senior Fellow at the Cato Institute in Washington, D.C. You can follow him on Twitter: @Steve_Hanke

Libertarians Have Long Led the Way on Marriage

David Boaz

As the Supreme Court prepares for a possibly historic ruling, most of the country now supports gay marriage. Libertarians were there first. Indeed John Podesta, a top adviser to Bill Clinton, Barack Obama, and Hillary Clinton and founder of the Center for American Progress, noted in 2011 that you probably had to have been a libertarian to have supported gay marriage 15 years earlier.

Just seven years ago, in the 2008 presidential campaign, Barack Obama, Joe Biden, and Hillary Clinton all opposed gay marriage. The Libertarian Party endorsed gay rights with its first platform in 1972 — the same year the Democratic nominee for vice president referred to “queers” in a Chicago speech. In 1976 the Libertarian Party issued a pamphlet calling for an end to antigay laws and endorsing full marriage rights.

That’s no surprise, of course. Libertarians believe in individual rights for all people and equality before the law. Of course they recognized the rights of gay people before socialists, conservatives, or big-government liberals.

As long as marriage is licensed by government, same-sex couples are entitled to equal legal rights.”

The Declaration of Independence promised life, liberty, and the pursuit of happiness to Americans. Of course, not everybody enjoyed those rights at first. But eventually those ideas took root and led to the abolition of slavery and later to civil rights and women’s rights. It took even longer for people to take seriously the idea of homosexual activity as a matter of personal freedom and to recognize gays and lesbians as a group deserving of rights.

It was the classical liberals, the ancestors of libertarians, who first came to that recognition. From Montesquieu and Adam Smith in the 18th century to the Nobel Prize–winning economist F.A. Hayek in 1960, it was libertarians who insisted that (in Hayek’s words) “private practice among adults, however abhorrent it may be to the majority, is not a proper subject for coercive action for a state whose object is to minimize coercion.”

Historians have often noted the general danger to minorities of a powerful and expansive government. In his book Christianity, Social Tolerance, and Homosexuality, the Yale historian John Boswell wrote that “gay people were actually safer under the [Roman] Republic, before the state had the authority or means to control aspects of the citizenry’s personal lives. Any government with the power, desire, and means to control such individual matters as religious belief may also regulate sexuality, and since gay people appear to be always a minority, the chance that their interests will carry great weight is relatively slight.” In Intimate Matters: A History of Sexuality in America, John D’Emilio and Estelle Freedman noted that a growing commitment to freedom in 18th-century America brought about “an overall decline in state regulation of morality and a shift in concerns from private to public moral transgressions.”

Despite the broad influence of liberalism in the world, governments have continued to meddle in sexuality. As recently as the 1960s, homosexual relations were illegal in almost all states, and 13 states still had such laws on the books until the Supreme Court struck them down in 2003. When these laws were vigorously enforced, they drove gay people underground and created much misery. Gays and lesbians could not be open about their lives. If they were, they risked being fired, being thrown out of their homes, and even being beaten or killed. Once gay people stood up for their rights, social attitudes began to change and governments backed away from enforcing the laws. However, until the court ruling, sodomy laws were still used, for instance, to deny gay parents custody of their children.

Today, Libertarians believe, as John Stuart Mill famously wrote, that “over himself, over his own body and mind, the individual is sovereign.” That applies to gay people and to everyone else. Thus Libertarians continue to oppose laws criminalizing any consensual sexual activity among adults, in the United States and elsewhere.

Many Libertarians argue for the complete privatization of marriage, making marriage a matter of individual contract and — for those who want it — a religious ceremony, thus removing any need for state recognition of marriages. As long as marriage is licensed by government, however, same-sex couples are entitled to equal legal rights. The same rule applies to other government programs, from tax laws to Social Security to adoption. Libertarians would like to get government out of most areas, but as long as government is involved, it must treat citizens equally. The Supreme Court may be about to agree.

David Boaz is executive vice president of the Cato Institute and author of The Libertarian Mind, just published by Simon & Schuster.

Who Would Not Favor Economic Growth?

Michael D. Tanner

One of the enduring fault lines in American politics has been over how to best increase economic growth. Conservatives and libertarians have generally argued for lower taxes and fewer regulations. Liberals have called for government investment in infrastructure and measures designed to boost consumer spending. Supply-side versus demand-side stimulus gets debated, and the impact of trade and immigration policies cleaves party lines. Even issues like unemployment insurance and education reform are argued about in terms of their contribution to a growing economy.

They may have wildly different ideas about how to get there, but all sides have agreed on the basic destination: a growing economy. Until recently.

Lately, a new strain of thought has arisen on the Left that questions whether growth is a good thing after all. “Growth shouldn’t be any president’s economic goal,” writes former labor secretary Robert Reich. Reich complains that “almost all the gains from growth have gone to the richest 1 percent.” He goes on to suggest that rather than growing the economy, the government should be concerned with creating more jobs, even if that means sacrificing innovation or efficiency that might benefit the economy as a whole.

Some on the Left are questioning whether growth is a good thing after all.”

There is a story, perhaps apocryphal, about Milton Friedman. While touring China, he came upon a team of nearly 100 workers building an earthen dam with shovels. Friedman pointed out that with a bulldozer, a single worker could create the dam in an afternoon. A Communist official replied, “Yes, but think of all the unemployment that would create.”

“Oh,” said Friedman, “I thought you were building a dam. If it’s jobs you want, then take away their shovels and give them spoons.”

Reich clearly has joined the spoon brigade.

Speaking of China: That also seems to be the attitude of the Left’s new intellectual hero, Thomas Piketty. In his now-famous book, Capital in the Twenty-First Century, Piketty laments that economic growth in China has led to rising inequality; he is apparently indifferent to the fact that this economic growth has lifted hundreds of millions of people out of poverty. Better, he seems to be saying, that we be equally poor than unequally rich.

It is one thing when these ideas are bandied about by pundits or academics, but they are also making their way into the political arena. Vermont senator and Democratic presidential candidate Bernie Sanders is also critical of economic growth. He was asked directly: If the policies he was advocating were to “result in a more equitable distribution of income, but less economic growth, is that trade-off worth making?” Sanders replied forthrightly, “Yes.”

In fact Sanders has a host of reasons for opposing economic growth. Taking a Reich-like line, he believes that growth serves little purpose because most of the gains go to the rich. “If 99 percent of all the new income goes to the top 1 percent, you could triple it, it wouldn’t matter much to the average middle-class person. The whole size of the economy and the GDP doesn’t matter if people continue to work longer hours for low wages.”

Moreover, “you can’t just continue growth for the sake of growth in a world in which we are struggling with climate change and all kinds of environmental problems.” Besides, economic growth, according to Sanders, both stems from and leads to mindless consumerism. “You don’t necessarily need a choice of 23 underarm spray deodorants or of 18 different pairs of sneakers when children are hungry in this country.”

So far, Hillary Clinton still talks about her desire for a growing economy. But with Sanders attracting large and enthusiastic crowds, and coming within ten points of Clinton in one poll in New Hampshire, one wonders how long that commitment will last. Indeed, already her policies have moved left; she has started espousing more redistribution, higher taxes, and increased regulation of business in ways that would almost certainly slow economic growth.

To be sure, the Republican candidates remain focused on economic growth. Jeb Bush says that he seeks an economy growing at least 4 percent a year. Marco Rubio calls for “a growing economy, not a growing government.” Scott Walker says that “nationally we need to have more talk about growth.” And Rand Paul explains that his new tax plan is “pro-growth.”

But even among Republicans, there are voices — especially among opponents of free trade and open immigration — that seem to put jobs or cultural values ahead of economic growth.

Economic growth has benefited us all, rich and poor alike, enormously. It’s easy to forget how different life was just a century ago, when only 20 percent of American homes had electricity, life expectancy was not even 55, and one out of ten children died before the age of 15. We no longer devote nearly half of our days simply to putting food on the table. Women, minorities, and the poor in particular are better off because we are a wealthier nation.

The difference between redistribution and growth is as stark as the difference between Venezuela and Hong Kong. It may well be a difference that we see debated in 2016.

Michael Tanner is a Senior Fellow at the Cato Institute and author of Leviathan on the Right: How Big-Government Conservatism Brought Down the Republican Revolution.

Supreme Court Dries Price Controls Like a Raisin in the Sun

Ilya Shapiro and Randal Meyer

How would you like it if, after a year of planting, growing, and harvesting a crop, you learned the government would take 47 percent of your yield without any compensation or guarantee of future return? That was the dilemma Marvin and Laura Horne faced when federal trucks showed up at their farm to haul away tons of raisins. Unlike most growers, however, the Hornes decided not to let the government simply take the literal fruits of their labor, but instead to fight all the way to the Supreme Court—twice.

The Agricultural Marketing Agreement Act (AMAA) of 1937, a cockamamie New Deal program that nobody’s bothered to repeal, allows the U.S. secretary of agriculture to create commissions for certain agricultural products in certain geographical areas, and to approve “marketing orders” from those commissions. These marketing orders allocate a portion of the designated crop from all producers in the covered region for government control and disposition. In other words, they set aside part of farmers’ crops to do with as bureaucrats decide.

The Hornes’ crop was controlled by—we’re not making this up—the Raisin Administrative Committee. In 2002–2003, this committee ordered raisin growers to turn over 47 percent of their crop; the following year, 30 percent. The Hornes decided not to comply with the order when the federal government came to collect its “free” raisins. For their troubles, they received a fine in the amount of the “fair market value” of the raisins—nearly half a million dollars—plus a $200,000 penalty for their disobedience.

Agriculture despots used to be able to seize farmers’ produce at will. With a new Supreme Court decision in favor of two raisin farmers, that may change.”

What the Raisin Decision Says

Horne v. Department of Agriculture first came to the Supreme Court on the question of whether the Hornes even had the right to claim that the government action was indeed a “taking” for “public use,” requiring the government to give the Hornes “just compensation” under the Fifth Amendment’s Takings Clause. The Supreme Court unanimously sided with the Hornes and remanded the case for lower-court review of that takings issue.

Now, two years after permitting the Hornes to escape a byzantine administrative purgatory and have their day in court, the Supreme Court sided with them again, declaring that raisins are “private property—the fruit of the growers’ labor—not ‘public things subject to the absolute control of the state.’ Any physical taking of them for public use must be accompanied by just compensation.”

Moreover, as Chief Justice John Roberts wrote for all of his colleagues save Justice Sonia Sotomayor, “[n]othing in the text or history of the Takings Clause, or our precedents, suggests that the rule is any different when it comes to appropriation of personal property [as distinct from real estate]. The Government has a categorical duty to pay just compensation when it takes your car, just as when it takes your home.”

What the Decision Means

Today’s Supreme Court decision has far-reaching implications for the continuation of all the New Deal-era agricultural price controls—which seem so bizarre that the Hornes’ case attracted more media attention than your typical regulatory challenge (or perhaps Jon Stewart just has a thing for nature’s candy).

Indeed, marketing orders exist across a cornucopia of agricultural products, including almonds, apricots, avocados, cherries (sweet and tart), citrus (Florida and Texas), cranberries, dates, grapes, hazelnuts, kiwifruit, olives, onions (four geographic designations), pears, pistachios, plums/prunes, potatoes (five geographic areas), spearmint oil, tomatoes, and walnuts—nearly 30 bureaucracies in total! The AMAA also permits the federal government to reach into the hops and honeybee industries. All of these schemes are at a minimum constitutionally suspect now, and likely would be invalidated if some brave plaintiff followed the Hornes’ example, stood in the way of government trucks, and fought the resulting fines in court.

The ruling also has implications for state and federal dairy schemes. The AMAA also regulates dairy production, with marketing orders reaching more than half the country. Dairy controls have been particularly contentious; they were even the underlying subject of the infamous United States v. Carolene Products, the 1938 case that bifurcated our rights and allowed governments at all levels to run roughshod over economic and property rights.

After Horne, some 80 years since the start of the New Deal, the government’s agriculture technocracy is finally drying up like a raisin in the sun.

Ilya Shapiro is a senior fellow in constitutional studies at the Cato Institute, which filed an amicus brief in the Horne case and where Randal Meyer is a legal associate.

The Real World Is Better Off than Pope Francis’s Gloomy Vision of It

Marian L. Tupy

In Laudato Si’, which was released officially on Thursday, Pope Francis draws a grim, even apocalyptic picture of humanity’s future. No doubt, his views on anthropogenic global warming will be hotly debated. But his gloom is unwarranted.

The pope, as the Independent sums up the encyclical, asserts that “the world’s poorest are the biggest victims of a web of environmental, human, financial and ethical degradation that puts the entire planet at risk,” and the document “lambasts rich countries for ‘looting’ the world, … warns that the world is facing widespread crop failure, economic ruin … [and avers that] warming caused by the enormous consumption of some rich countries has repercussions in the poorest places on earth, especially in Africa, where the increase in temperature, combined with drought, has had disastrous effects on the performance of crops.”

Paradoxically, the pope’s encyclical comes in the midst of some excellent news. The world population is likely to peak in 2050 and then start falling, as was explained recently in an article in the New York Times. Also, the highly respected Harvard University professor Steven Pinker published a bestselling book, The Better Angels of Our Nature, showing that, in stark contrast with our violent past, the age in which we now live is one of unprecedented peace and security.

Contrary to Francis’s assertions, poverty is falling precipitously throughout the world. According to the Brookings Institution, absolute poverty (i.e., people living on less than $1.25 a day) has declined from 52.2 percent of the world’s population in 1981 to 14 percent today. In sub-Saharan Africa, the world’s poorest region, absolute poverty declined from 53 percent to 47 percent — no small achievement given that the region’s population grew from 393 million in 1981 to 937 million in 2013.

Incomes in Africa, Asia, and South America have never been higher. Increasing prosperity means lower infant and maternal mortality, higher calorie consumption, greater access to sanitation and information, etc. Surprisingly, rising prosperity in the developing countries has narrowed global income inequality.

Let us turn to the “looting of the planet.” The surest way to ascertain whether humanity is running out of natural resources is to look at their prices. If the price of a commodity increases over a long period of time, it is becoming scarcer. If it declines, the commodity is becoming more plentiful.

The World Bank has been measuring an exhaustive array of commodity prices since 1960. Of the 15 commodity indexes measured, eight fell in price. Food, for example, was 8 percent cheaper in 2014 than in 1960. This is all the more remarkable given that the world’s population increased by 135 percent over the same time period. In five commodity indexes, prices rose but at a lower rate than income did. Only two indexes, energy and precious metals, showed an increase greater than the increase in income.

Overall, the average across all 15 commodity indexes rose by 88 percent. Excluding energy and precious metals, it declined by 3 percent. Assuming that an average person spent the same fraction of her income on the World Bank’s list of commodities in 1960 as in 2014, she would be better off in either case, because her income rose by 161 percent over the same period.
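The affordability arithmetic here can be sketched in a few lines. The figures (an 88 percent rise in the all-commodity average, a 3 percent decline excluding energy and precious metals, and 161 percent income growth) come from the article; the helper function is my own illustrative construction, not World Bank methodology:

```python
def relative_price(price_change_pct, income_change_pct):
    """Change in a commodity's price relative to income, as a percentage.
    A negative result means the commodity became more affordable."""
    price_factor = 1 + price_change_pct / 100
    income_factor = 1 + income_change_pct / 100
    return (price_factor / income_factor - 1) * 100

# All-commodity average: +88% nominal, against 161% income growth.
print(round(relative_price(88, 161), 1))   # -28.0: ~28% cheaper relative to income

# Average excluding energy and precious metals: -3% nominal.
print(round(relative_price(-3, 161), 1))   # -62.8: ~63% cheaper relative to income
```

In other words, even the commodities whose nominal prices rose grew cheaper relative to the average person’s income.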

[Chart: Commodity Index Average (without Energy and Precious Metals) Relative to Population and Income, 1960–2014]

Next, let us look at the state of agriculture in general and of African agriculture in particular. As previously mentioned, despite population growth, global food prices are lower today than they were 54 years ago. That is in large part due to massive increases in agricultural productivity. In 1866, for example, American farmers produced 24 bushels of corn per acre; in 2012, 122 bushels.

One reason why African calorie consumption rose from 2,150 calories per person per day in 1990 to 2,430 in 2013 is that African agriculture is becoming more efficient. Yields of cereals, citrus, pulses, and vegetables are at record highs.

What does this mean for the world? Jesse H. Ausubel of the Rockefeller University points out that

if the world farmer reaches the average yield of today’s U.S. corn grower during the next 70 years, 10 billion people eating as people now on average do will need only half of today’s cropland. The land spared exceeds Amazonia. This will happen if farmers sustain the yearly 2 percent worldwide yield growth of grains achieved since 1960, in other words if social learning continues as usual.
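A quick check of the compounding behind Ausubel’s projection (this calculation is my own illustration, not his): 2 percent yearly yield growth sustained for 70 years roughly quadruples output per acre.

```python
# Compound growth: 2% per year sustained over 70 years.
growth_rate = 0.02
years = 70
yield_multiple = (1 + growth_rate) ** years
print(round(yield_multiple, 2))  # 4.0: yields roughly quadruple
```

A fourfold rise in yields comfortably outpaces the projected growth of the world population to 10 billion, which is why the cropland requirement can fall by half even as food output rises.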

Contrary to Francis’s vision of a dire future, the world is, in many ways, getting better. Those improvements ought to be celebrated, not mourned. The pope is a good man who cares passionately about the plight of the least fortunate, but he finds it impossible to reconcile his ideological presuppositions with the real world that surrounds him.

Marian L. Tupy is a senior policy analyst at the Cato Institute’s Center for Global Liberty and Prosperity and editor of

CA Uber Ruling Is a Worrying Sign for Sharing Economy Fans

Matthew Feeney

Earlier this month the California Labor Commissioner’s Office ruled that a former Uber driver is owed around $4,000 in expenses because she was, contrary to what Uber claimed, an employee rather than an independent contractor. The ruling could have devastating implications for the rideshare company.

Uber claims that the ruling, which it is appealing, is non-binding and relates only to one driver. Regardless, many in Uber’s San Francisco headquarters will undoubtedly be concerned about what the ruling means for the future of the company. If Uber is eventually required to consider all California drivers employees, it will have to drastically change its very popular business model, offering drivers a range of benefits it currently does not provide.

To treat Uber drivers and other sharing economy providers as if they are employees is a categorical error that will harm a popular sector of the economy.”

Uber’s ridesharing service works by connecting passengers and drivers, who use their own vehicles and drive whenever they want. Uber does provide drivers with technology, carries out background checks via a third party, and requires that a driver applicant’s vehicle not be too old.

According to the California Labor Commissioner’s Office, Uber is “involved in every aspect of the operation.” But this is an overly broad understanding of the word “every” considering that Uber drivers set their own schedules and are using their own vehicles.

The ruling highlights how lawmakers and regulators have struggled to keep up with technological changes. Uber and other sharing economy companies don’t fit well into many regulatory regimes, which were often designed around market incumbents such as taxi companies and hotels, the very firms the sharing economy now competes against.

Taxi companies might not be happy with the rise of Uber, and hotels might prefer for Airbnb to close shop, but Uber is not a taxi company and Airbnb is not a hotel chain. Instead of treating sharing economy companies as identical to their competitors, we ought to understand them as means of solving information problems.

Many people have skills and assets that are not used to their fullest extent. A full-time baker, accountant, software engineer, or airline pilot might want to rent a spare bedroom or provide rides on weekends, but potential customers need information about where these people are. Companies like Uber and Airbnb make finding this information very easy.

Uber, like other sharing economy players such as Lyft, Airbnb, EatWith, and TaskRabbit, does carry out a vetting process before letting providers onto its platform. But that vetting, and the fact that these companies allow consumers and providers to use their technology, hardly warrants classifying sharing economy workers like Uber drivers as traditional employees.

The sharing economy is here to stay, but its growth could be hampered if companies like Uber are required to classify drivers as employees. To treat Uber drivers and other sharing economy providers as if they are employees is a categorical error that will harm a popular sector of the economy.

Matthew Feeney is a policy analyst at the Cato Institute.