Whistleblower Retaliation: a Governmental Accountability and National Security Crisis

Patrick G. Eddington

The role of an Inspector General (IG) office in a federal agency or department is to root out waste, fraud, and abuse, and where necessary refer criminal conduct to the Justice Department for prosecution. But what happens when the IG itself is corrupt, especially in a national security context where secrecy can be used to conceal malfeasance?

Austrian filmmaker Fritz Moser’s documentary, “A Good American,” released in February 2017, explores a real-world case of IG corruption and misconduct before and after the September 11, 2001 attacks. The film tells the tale of a small group of National Security Agency employees who, prior to 9/11, developed a revolutionary intelligence collection and analysis capability, code-named THINTHREAD. Had THINTHREAD been fully deployed operationally even a few months before the attacks, it likely would have detected most or all of the would-be hijackers before they had a chance to act, as the THINTHREAD team lays out in the movie.

The documentary poses and answers another major, relevant question. What happens when conscientious government employees in the national security establishment report wrongdoing that costs American lives and billions in taxpayer dollars?

More often than not, their careers are destroyed by senior bureaucrats who will seemingly stop at nothing to bury the truth.

The THINTHREAD core team consisted of NSA’s leading crypto-mathematician, William Binney; analyst Kirk Wiebe; and computer whiz Ed Loomis. They were supported in their efforts by a single GOP House Intelligence Committee staffer, Diane Roark, who helped get the fledgling program money in its earliest days and was a tireless, but ultimately unsuccessful, champion of the program.

The single biggest obstacle to THINTHREAD’s success was then-NSA director Air Force Gen. Michael Hayden, who became so incensed with the THINTHREAD team’s Capitol Hill lobbying that he threatened each team member with disciplinary action. Hayden had his own pet program to promote at THINTHREAD’s expense: a Science Applications International Corporation (SAIC)-sponsored program called TRAILBLAZER, which cost more than $1 billion compared to THINTHREAD’s $3 million. TRAILBLAZER became a classic Washington defense-contractor fiasco, wasting huge sums while never producing a single piece of intelligence for NSA. (Jim Risen’s Pay Any Price also covers this episode in some detail and is worth reading.)

There is a reason this story has not been widely told before now: neither the Congressional Joint Inquiry into 9/11, nor the 9/11 Commission, nor either congressional intelligence committee followed up on the scandal, despite each being approached by THINTHREAD’s developers in the aftermath of the attacks.

I know, because when I worked as senior policy advisor to then-Rep. Rush Holt (D-NJ), I spoke at length to THINTHREAD’s developers about the controversy and the subsequent waste, fraud, and abuse complaint they filed with the Defense Department Inspector General’s (IG) office in 2002, and how that one act altered their lives forever.

After Gen. Hayden killed THINTHREAD in the weeks before 9/11, Binney, Wiebe, Loomis, and Roark agreed that the waste, fraud, and abuse from the TRAILBLAZER program, along with the lost opportunity to stop the attacks via THINTHREAD, required a real investigation. Subsequent inquiries resulted in one major DoD IG report, issued in December 2004; a partially declassified version was obtained via the Freedom of Information Act (FOIA) by the Project on Government Oversight in 2011. Some 100 paragraphs of the report remained redacted, including 80 that were marked “Unclassified/For Official Use Only” (U/FOUO).

The portions that were readable gave a sense of the debacle that was TRAILBLAZER, but the most damaging portions of the report were withheld from the public.

In the seven years between the IG report’s publication and its partial release, Binney, Wiebe, Loomis, Roark, and a fifth colleague, NSA Senior Executive Service member Tom Drake, had all been investigated by the FBI for leaking information about the controversy to the Baltimore Sun. None had revealed classified information, and Drake was in fact the source of the stories about TRAILBLAZER’s massive cost overruns and ineffectuality. The government went so far as to charge Drake under the Espionage Act, but the felony case against him fell apart in 2011. Drake ultimately pleaded guilty to a misdemeanor charge of misusing a government computer, his government career destroyed and his personal finances wrecked by the legal battle.

I followed all of this from Rep. Holt’s office, even after Holt rotated off the House Intelligence Committee in 2011. When I finally had the chance in 2013 to spend more time with the THINTHREAD team and learn the full details of their experience, it became clear that someone in the DoD IG’s office had falsely accused one or more of them of leaking classified information. I became determined to learn who had been responsible for railroading the THINTHREAD team.

By the summer of 2013, I had the original, classified 2004 DoD IG report in my hands. Reading it made my blood boil. It was the most damning report of its kind I had seen in more than 25 years in Washington. And it confirmed the core allegations the THINTHREAD team had made in their original complaint.

Unfortunately, the relevant Congressional commission had no appetite to reopen the issue, as its tenure was drawing to a close and its report for Congress was largely complete. Within months, Holt would announce his retirement from Congress, and I too left the House of Representatives. But having seen the 2004 report, and other investigative documents as well, I was more determined than ever to continue pressing for the declassification of all relevant THINTHREAD and TRAILBLAZER documents.

In early 2015, I filed an extensive FOIA request seeking every available document on both programs, but was essentially led around in circles. In late January 2017, with the help of the Chicago-based firm of Loevy and Loevy and the Government Accountability Project, I filed suit in federal district court to try to get answers. But the THINTHREAD team’s experience is, unfortunately, just one example of the kinds of integrity problems plaguing the DoD and NSA IG offices.

As outlined below, there are similar investigations now underway, looking into other whistleblower retaliation complaints against the DoD IG and NSA’s IG office—complaints that raise the specter of other unexamined government surveillance and national security programs that threaten citizens’ rights while wasting still more taxpayer money.

In March 2016, the Office of Special Counsel announced that it had uncovered evidence that DoD IG personnel had destroyed documents related to the Drake prosecution, finding a “substantial likelihood” that they had violated the law. The case was referred to the Justice Department for possible prosecution, where it remains under review.

Those allegations received additional support when former DoD Assistant Inspector General John Crane went public in May 2016 with allegations that he had witnessed retaliation against Drake while working in the DoD IG’s office.

And in July 2016, former DoD IG ombudsman Dan Meyer formally alleged that he had experienced retaliation for exposing attempts by DoD IG officials to manipulate the final version of an investigative report into allegations that then-Defense Secretary Leon Panetta “had leaked classified information to the makers of the film ‘Zero Dark Thirty.’” (For the last several years, Meyer has headed the Intelligence Community Inspector General’s whistleblower protection unit.)

If the head of the entire Intelligence Community’s whistleblower protection operation is under attack, how can an average CIA, NSA or other intelligence officer possibly hope to report waste, fraud, abuse, or criminal conduct without fear of retaliation?

On December 13, 2016, The Intercept reported that the Government Accountability Office (GAO) had “quietly launched an investigation into the ‘integrity’ of the Pentagon’s whistleblower protection program.” Whether Drake’s case is one of the subjects of the GAO probe is unknown, but the fact that the entire Pentagon Inspector General operation is now the subject of an external investigation is virtually unprecedented.

And just three days after The Intercept’s story on the GAO inquiry broke, Government Executive reported that NSA Director Adm. Mike Rogers had recommended NSA IG George Ellard be terminated for whistleblower retaliation, acting on the findings of a three-person external IG review panel established under an Obama-era presidential directive, PPD-19.

Indeed, two lawyers who represent whistleblowers recently argued that PPD-19 works: “It is only through cases like Ellard’s that senior officials will be forced to realize that reprisal comes with consequences and that seniority will have no bearing on an investigation’s outcome.” This is magical thinking.

The fact that the Obama administration felt compelled to issue PPD-19 in the first place was a tacit admission that the DoD and NSA IGs were broken and corrupt. Additionally, PPD-19 covers only IC employees, not IC contractors. Thus, IC contractors like Edward Snowden had no protection under PPD-19. They still have none.

Finally, PPD-19 can be rescinded by President Trump, just like any other executive action taken by his predecessors. Given Trump’s obsession with leaks he views as damaging to him politically, it’s simply a matter of time before PPD-19 is history.

Whether Ellard was involved in retaliation against Drake or other THINTHREAD team members is unknown, but learning the truth about the level of corruption in these two critically important internal Pentagon watchdog units is the core reason I filed my FOIA suit in the first place. Whether Michael Hayden or any of his subordinates at NSA (including former SAIC executive turned NSA senior manager Bill Black) engaged in contract steering for TRAILBLAZER at the expense of an internally developed NSA program that might have prevented the 9/11 attacks is just one question for which the families of the 9/11 victims deserve an answer.

And the rest of us need to know that corruption in government agencies will be rooted out, and that anyone working in the IC, whether government employee or contractor, can safely report waste, fraud, abuse, or unconstitutional conduct without fear of retaliation or improper prosecution.

Through its long-running indifference to these episodes, Congress has effectively encouraged the deep flaws that appear to be rampant in the Pentagon’s internal oversight offices. The failure to properly investigate the THINTHREAD-TRAILBLAZER controversy, as well as other surveillance overreaches, has clearly contributed to what appears to be extraordinary corruption in parts of the Intelligence Community’s own oversight apparatus. In our system of government, this is the kind of problem Congress should fix. Absent a groundswell of public outrage over such abuses, it’s not likely to happen quickly, if at all.

Patrick Eddington is a Policy Analyst in Homeland Security and Civil Liberties at the Cato Institute.

Is Manufacturing Employment the Only Thing That Counts?

Daniel R. Pearson

The Trump administration seems obsessed with stemming the downtrend in the number of U.S. factory jobs. Posted on the White House website is a statement called “Trade Deals That Work For All Americans.” Among other things, it asserts that “blue-collar towns and cities have watched their factories close and good-paying jobs move overseas, while Americans face a mounting trade deficit and a devastated manufacturing base.”

It’s true that employment in U.S. manufacturing facilities has been sliding. The Bureau of Labor Statistics reports that the number of factory workers peaked in 1979 at 19.4 million and has fallen to 12.3 million today. Sounds like a textbook example of a sector in decline, right?

Well, it depends on the perspective. Let’s compare with another sector — agriculture — that has gone through a major structural adjustment. Employment on U.S. farms peaked around 1910 at 11.8 million, which accounted for 31 percent of the entire U.S. workforce. Since then the number of farm operators and hired workers has declined precipitously to somewhere in the neighborhood of 2.5 million people, or 1.6 percent of total employment.

Does U.S. agriculture represent a failure because it now employs only a fraction of its former workforce? Hardly. Mechanization and sophisticated production techniques for crops and livestock have greatly boosted agricultural productivity. Fewer farmers now produce much more output. The United States has become the world’s largest agricultural exporting country. The employment trend in farming may well continue downward, but the trend toward greater agricultural innovation is rising. American agriculture is an irrepressible success story.

Now, back to manufacturing. Even though factory employment has fallen, the economic value added by the sector has continued to rise. According to the Bureau of Economic Analysis, the United States set an all-time record for value added in manufacturing in 2015 of $2.2 trillion. Value added in manufacturing has risen every year since the recession ended in 2009. The United States is a competitive producer of a wide range of factory products, and ranks third as a manufacturing exporter behind China and Germany.

Given that the sector is growing year by year and is a major exporter, has the manufacturing base really been — in the words of the White House — “devastated”? An unbiased observer likely would conclude instead that — as in agriculture — fewer workers are doing a fine job of producing more goods of higher value.

Has the downtrend in manufacturing employment been driven primarily by globalization? No. An analysis in 2015 by the Center for Business and Economic Research at Ball State University showed that trade has, indeed, had a modest effect on manufacturing employment. The study found that roughly 13 percent of manufacturing job losses between 2000 and 2010 were due to international competition.

The other 87 percent of the decline, though, has come from greater automation — robots and computers are reducing the number of workers required on factory floors. Just as in farming, productivity gains allow manufacturing employees to generate far more output than in the past. Many people would see this as progress.

The White House apparently sees imported goods as the enemy of U.S. manufacturers. The situation is rather more nuanced. Some imports may take sales away from U.S. manufacturers, potentially reducing their profitability and leading to layoffs. However, half of all imported goods are used as inputs by U.S. manufacturers. Cross-border supply chains serve to strengthen the U.S. manufacturing sector. They allow labor-intensive components to be made overseas, while more highly skilled operations are carried out in this country. So instead of undermining the manufacturing economy, imports have helped it to remain competitive and to grow.

Finally, it’s important to view manufacturing employment in the context of the broader U.S. economy. The Bureau of Labor Statistics reports that 152 million people now are employed in the United States, more than at any time in history. That number has grown since 2009 by 14 million people, an increase of 10 percent. The 12.3 million factory workers constitute an important portion — 8 percent — of overall employment.

As long as the economy is growing, employment can be expected to grow along with it. This is reassuring. If manufacturing employment continues its gradual decline, other jobs should become available elsewhere in the economy.

Those who seek to prevent further reductions in manufacturing employment would be well advised instead to embrace the sector’s ongoing evolution, and to focus on helping displaced factory workers make successful transitions to other careers.

The U.S. manufacturing sector is alive and well. It generates greater economic activity than ever before. Its workers are, on average, better educated, better paid and use more-advanced equipment. The decline over time in output of shirts and shoes has been more than offset by increased production of airplanes and other high-value items. There is a lot of good news in manufacturing.

Daniel R. Pearson is a senior fellow at the Cato Institute and the former chairman of the U.S. International Trade Commission.

Lincoln Was Wrong on Trade

Daniel R. Pearson

In his address to a joint session of Congress this week, President Trump quoted President Lincoln’s views on international trade: “The first Republican President, Abraham Lincoln, warned that the ‘abandonment of the protective policy by the American government [will] produce want and ruin among our people.’ Lincoln was right.”

Unfortunately, Lincoln was wrong on trade. He was a great president, a strong supporter of preserving the Union, and a wonderful wordsmith. But he never had a chance to absorb the concepts undergirding free trade that had been developed just a few decades earlier in the United Kingdom by Adam Smith and David Ricardo.

The economy of the United States in Lincoln’s time was very different than it is today. The most significant factor driving robust economic growth was not international trade, but rather free trade within the United States. The country was in an expansionary mode. New states were joining the country, which meant new resources were being added to the national economy.

The Constitution helpfully prevented states from imposing tariffs against products coming from other states. New York, for instance, was not allowed to restrict the importation of wheat from Ohio. Products that could be produced efficiently in one part of the country could be sold anywhere else in the country, as long as transportation costs were low enough. With so many opportunities to expand trade domestically, trading with foreign countries simply wasn’t as important.

Today the situation is very different. The United States still has a freely trading internal economy, but it no longer is expanding by adding large tracts of new territory. The best way for the U.S. economy to grow is to connect itself closely to parts of the world that are growing most rapidly. And that only can be done through international trade.

Lincoln likely saw Adam Smith’s “invisible hand” of the marketplace to be doing a fine job of allocating resources within the United States, so may not have perceived much additional benefit from allowing it to work across national borders. He likely never contemplated David Ricardo’s concept of “comparative advantage,” which explains that neither individuals nor nations should seek self-sufficiency, because not everyone or every country can do everything well. The better approach is for people to specialize in activities at which they are most productive, then trade to obtain other needed goods and services.

Nonetheless, Lincoln’s protectionist tendencies set the template for Republican thinking over the following decades. When Republicans controlled the government, they were inclined to set tariffs at relatively high levels. When Democrats took control, they would reduce tariffs. Democrats saw tariffs as a tax on everyday working men and women, as well as a form of crony capitalism that benefitted protected manufacturers at the expense of consumers. So tariffs went up and down throughout the latter half of the 19th century and the first years of the 20th century, depending on which party controlled Congress and the White House.

Then in 1930, things got out of hand. The Republican chairmen of the Senate Finance and House Ways and Means Committees, Sen. Reed Smoot of Utah and Rep. Willis Hawley of Oregon, sponsored legislation that set tariffs at their highest levels in over 100 years. President Herbert Hoover signed the infamous “Smoot-Hawley” Tariff Act. Other countries adopted the same beggar-thy-neighbor approach and retaliated. Global trade responded by falling roughly two-thirds from its peak in 1929 to its low in 1934. Smoot-Hawley is generally blamed for helping to deepen and lengthen the Great Depression.

Interestingly, torpedoing international trade proved not to be an effective political strategy. Both Smoot and Hawley were defeated in their reelection bids in 1932. Hoover also lost that year to Franklin Roosevelt.

President Roosevelt set about repairing the damage to international trade. His secretary of state, Cordell Hull, actively pursued reciprocal tariff reductions with other countries. Those efforts laid the groundwork for trade liberalization in the post-WWII era.

President Truman joined with 22 other nations to create the General Agreement on Tariffs and Trade (GATT) in 1947. The GATT accomplished eight rounds of tariff reductions, the most recent being the Uruguay Round, which established the World Trade Organization (WTO) in 1995. This conscious effort to liberalize trade helped to boost the value of global merchandise exports by nearly 300 times, from an inflation-adjusted $58 billion in 1948 to $16.5 trillion in 2015.

Lincoln was a thoughtful man. If he were alive today, his views on trade likely would have evolved to reflect the economic experience of the intervening years. Perhaps he would even support the position articulated by another great Republican president, Ronald Reagan, in his 1983 State of the Union address: “As the leader of the West and as a country that has become great and rich because of economic freedom, America must be an unrelenting advocate of free trade.”

Daniel R. Pearson is a senior fellow in trade policy studies at the Cato Institute. He served a two-year term as chairman of the U.S. International Trade Commission during the George W. Bush administration.

Alcohol and Caffeine Created Civilization

Chelsea Follett

No two drugs have defined human civilization the way alcohol and caffeine have.

Nature created both to kill creatures much smaller than us — plants evolved caffeine to poison insect predators, and yeasts produce ethanol to destroy competing microbes.

True to its toxic origins, alcohol kills 3.3 million people each year, accounting for about 5.9% of all deaths and 25% of deaths among people aged 20 to 39. Alcohol causes liver disease, many cancers, and other devastating health and social problems.

On the other hand, research suggests that alcohol may have helped create civilization itself.

Alcohol consumption could have given early Homo sapiens a survival edge. Before we could properly purify water or prepare food, the risk of ingesting hazardous microbes was so great that the antiseptic qualities of alcohol made it safer to consume than non-alcoholic alternatives — despite alcohol’s own risks.

Even our primate ancestors may have consumed ethanol in decomposing fruit. Robert Dudley, who created the “drunken monkey” hypothesis, believes that modern alcohol abuse “arises from a mismatch between prehistoric and contemporary environments.”

At first, humans obtained alcohol from wild plants. Palm wine, still popular in parts of Africa and Asia today, may have originated in 16,000 BC. A Chilean alcoholic drink made from wild potatoes may date to 13,000 BC. Researchers now believe the desire for a stable supply of alcohol could have motivated the beginnings of agriculture and non-nomadic civilization.

Residue on pottery at an archeological site in Jiahu, China, proves that humanity has drunk rice wine since at least 7,000 BC. Rice was domesticated in 8,000 BC, but the people of Jiahu made the transition to farming later, around the time we know that they drank rice wine.

“The domestication of plants [was] driven by the desire to have greater quantities of alcoholic beverages,” claims archeologist Patrick McGovern. It used to be thought that humanity domesticated wheat for bread, and beer was a byproduct. Today, some researchers, like McGovern, think it might be the other way around.

Alcohol has been with us since the beginning, but caffeine use is more recent. Chinese consumption of caffeinated tea dates back to at least 3,000 BC. But the discovery of coffee, with its generally far stronger caffeine content, seems to have occurred in 15th-century Yemen.

Before the Enlightenment, Europeans drank alcohol throughout the day. Then, through trade with the Arab world, a transformation occurred: coffee, rich with caffeine, a stimulant, swept across the continent and replaced alcohol, a depressant.

As writer Tom Standage put it, “The impact of the introduction of coffee into Europe during the seventeenth century was particularly noticeable since the most common beverages of the time, even at breakfast, were weak ‘small beer’ and wine. Both were far safer than water, which was liable to be contaminated … Coffee … provided a new and safe alternative to alcoholic drinks. Those who drank coffee instead of alcohol began the day alert and stimulated, rather than relaxed and mildly inebriated, and the quality and quantity of their work improved … Western Europe began to emerge from an alcoholic haze that had lasted for centuries.”

Coffeehouses quickly became important social hubs, where patrons debated politics and philosophy. Adam Smith frequented a coffeehouse on Cockspur Street and another called the Turk’s Head while working on The Wealth of Nations.

After the Boston Tea Party, most Americans opted for coffee over tea, raising their caffeine intake. Thomas Jefferson called coffee “the favorite drink of the civilized world.” Even today, Americans consume three times more coffee than tea. In the words of historian Mark Pendergrast, “The French Revolution and the American Revolution were planned in coffeehouses.”

The Enlightenment and Industrial Revolution saw an explosion of innovation and new ideas. Living standards skyrocketed. New forms of government arose. More recently, globalization took the classical liberal ideal of peaceful exchange to a new scale and reduced worldwide inequality.

Today, despite population growth, fewer people live in poverty than ever before. People live longer lives, are better educated, and are increasingly likely to live in a liberal democracy instead of a dictatorship.

Caffeine is the most widely consumed psychoactive drug worldwide. Alcohol gave civilization its start, and it certainly helped the species drown its sorrows during the grinding poverty of much of human history. But it was caffeine that gave us the Enlightenment and helped us achieve prosperity.

Chelsea Follett is managing editor of HumanProgress.org, a project of the Cato Institute.

Why It Would Be Madness to Produce All Our Own Food

Ryan Bourne

In a somewhat bizarre report, the UK supermarket Morrisons has pushed the idea that Britain would be better off if the country grew more of its own food. According to the supermarket, more domestic production would bring significant benefits, not least insulating consumers from global price volatility.

This is a remix of an old protectionist tune: the idea that opening up markets to global trade makes an economy considerably more volatile and “risky”. It is a variation of the claim that we desire “food security” or “energy security” — the capacity to fulfil all our wants and needs through domestic production alone. It often manifests itself in support for “buying local” or “buying British” or, more recently, “Buy American”, as articulated by the new President.

No doubt this argument has more resonance given the recent iceberg lettuce shortage in the UK, following unusual weather in Spain. If only the UK produced its own lettuce, and did not depend on those unreliable Spaniards, it would surely enjoy security of supply?

When considering reasoning such as this, it always makes sense to test whether the idea is scalable, up or down. Suppose that rather than saying “Britain should become more self-sufficient in food production”, we said, “Ryan Bourne’s family should become more self-sufficient in food production”.

Rather than exchanging cash for food products in a supermarket, in this world I would have to produce all my own food. Perhaps I would keep a herd of cattle, rent part of an allotment, use a greenhouse, invest in tools for my garden and vegetable patch, and start growing a whole range of different foodstuffs.

Let’s put aside the one-off capital purchases. The first thing to admit is that I would be hopeless at it. I don’t have a clue how to grow anything. Diverting resources into growing my own crops would probably be extremely inefficient with low yields for substantial effort.

Given that I would be tending to my food production, I would also spend far less time doing other things that I am far better at, not least writing these kinds of articles.

If substantial numbers of people were cajoled into producing their own foodstuff, then these costs — the inefficiency plus the loss of production elsewhere — would add up substantially across the whole economy.

The same would be true if we decided all goods should be produced locally in my home town of Gillingham, in Kent, or even within the county itself. If significant inputs to production had to be diverted from service industries and apple-growing in order to produce all agricultural foodstuffs locally, irrespective of the costs of doing so, then there would be a huge loss of overall production.

So why do we believe that things would be different if we restricted production to a particular nation and decided we would only buy goods produced nationally?

Not only would certain products that simply cannot be produced in the UK become unavailable, but the prices of other goods would be higher owing to less competition, and total production would be much smaller because the economy as a whole would no longer be concentrating resources in the industries and products in which we have comparative advantages.

In other words, protection, or restricting trade to local trade, would hurt both consumers and overall production. The only theoretical beneficiaries would be some producers within protected product markets who saw import-substitution demand rise as a result of import restrictions. And even here the absence of more competition is likely to reduce productivity improvements over time.

Take, as an example, the recently discussed New Zealand farming reforms of the 1980s. Far from protection enhancing well-being, the evidence shows that subsidies undermined the productive potential of the sector. Practically all forms of assistance to New Zealand’s farmers were withdrawn over a period of five years in the 1980s.

Big changes occurred — the sheep stock halved and beef and sheep farms fell by a third. But larger herd sizes and increases in lambing rates made the remaining farms much more productive, while production of fruits and wines grew sharply and a venison industry developed. The country now has a healthy and more productive agricultural sector, highly responsive to global demands and trading at world prices.

This is an essential insight about trade that has been known since the days of Adam Smith. We can increase the size of the economy through specialisation and the division of labour — with people producing those things that they are relatively efficient at doing.

This process is enhanced when we remove barriers to trade within a nation or between nations. Protectionism is costly. There’s a reason why, in times of war, countries have blockaded others. Hint: no country blockades another in the expectation that doing so will boost the blockaded economy.

But the costs do not just stop there. Contrary to the Morrisons report, deliberately seeking to shift to “local production” would not produce more certainty or security either. Returning to Ryan Bourne’s independent food production story, suppose that a bad harvest or a plant disease wiped out a substantial part of my food production in a given year.

Absent the ability or willingness to trade, I would simply go without, and would not be able to consume those foodstuffs which I enjoy. In an attempt to improve security through all local production, I would have maximum insecurity of consumption. The same can be seen in the Spanish lettuce example. Were Spain to implement a “buy local lettuce” law, then the failing harvest of lettuces would result in skyrocketing prices and lower sales.

The UK had some experience of this attempt to have “energy security” with its attempts to protect the coal industry through the 1970s and early 1980s. Far from ensuring “security of supply”, protection led to the monopoly power of the mining industry and the unions, resulting in strikes and constant threats of strikes that made energy supplies less, not more, secure.

When the sector was liberated and support withdrawn, energy prices fell significantly as consumers were able to import much cheaper natural gas. Since then, supplies have been much steadier as they have been more diverse.

A reliance on imports does not make an economy more at risk, because markets provide security in the same way that they provide other attributes of products which consumers consider valuable.

If customers want their supply of any given product to be “secure” through continuous availability then supermarkets have to diversify their supply arrangements with a range of producers from across countries, which will be reflected through the prices of the goods on supermarket shelves. They might also invest in extra freezing and refrigeration units, for example.

What about instances where governments react to rising international prices or supply shocks by imposing export restrictions to keep prices lower in domestic markets? Surely we can have the best of both worlds: trading freely in normal times but then protecting consumers when a crisis hits?

Evidence in fact suggests that when one country starts doing this, all countries do, exacerbating the initial supply shock and leading to spiralling overall prices. This should not surprise us, since prices set freely provide signals about the shortages or surpluses of products, which lead to adjustments in behaviour.

The only semi-feasible reason to seek domestic self-sufficiency would be a belief that there is a high possibility of a mass-mobilisation war.

Yet a substantial empirical literature has shown that trade and the interdependence it generates actually makes conflict less likely. And, frankly, if World War III happens then we’d have bigger problems than the availability of lettuce.

Ryan Bourne occupies the R. Evan Scharf Chair for the Public Understanding of Economics at the Cato Institute.

Neil Gorsuch: Judicial Humility and Religious Pluralism

Thomas Berry

When you decide cases for a living, it can be difficult to admit that you don’t know something. But from the very beginning of his career, Supreme Court nominee Neil Gorsuch has shown a respect for the limits of what individuals — even judges — can truly understand about each other’s deeply held beliefs.

This trait has served him well in resolving difficult conflicts over moral and religious questions, and would continue to serve him well on the high court.

The earliest evidence of this approach came not in a judicial opinion, but in a work of moral philosophy on the ethics of euthanasia. In that work, Gorsuch scrutinized the “balancing tests” that had frequently been proposed in response to the problem of assisted suicide. In one typical example of such a “balancing” approach, the philosopher Ronald Dworkin had written that “there are dangers both in legalizing and refusing to legalize [assisted suicide]; the rival dangers must be balanced, and neither should be ignored.”

Though such tests may seem fair in theory, Gorsuch rejected them as fundamentally impossible to apply, expressing skepticism of our ability to truly understand the deeply-held interests of others. “How,” he asked, “can one possibly compare, for instance, the interest the rational adult seeking death has in dying with the danger of mistakenly killing persons without their consent?”

After becoming a judge, Gorsuch carried this same skepticism with him, most frequently applying it in cases concerning religious exemptions under the Religious Freedom Restoration Act (RFRA). The most famous such case is Hobby Lobby v. Sebelius, in which the plaintiffs believed that their religion required a complete disassociation from health insurance policies that provided certain contraceptives.

In a concurring opinion, Gorsuch readily acknowledged that these “religious convictions are contestable. Some may even find [these] beliefs offensive.” But this was no reason to give them less weight, he wrote. Judges are forbidden, under RFRA, from diminishing the importance of sincerely held religious beliefs simply because the judge himself does not understand why others would hold them. Through this rule, the statute “does perhaps its most important work in protecting unpopular religious beliefs.”

In his Hobby Lobby concurrence, Gorsuch showed an understanding of why it is dangerous for judges to attempt to “weigh” the importance of others’ religious beliefs. Studies have confirmed Gorsuch’s intuition, that our ability to grasp what really matters to others is far worse than we assume, especially when others’ value systems are much different from our own.

As the moral psychologist Jonathan Haidt has shown, the instincts and desires of socially “conservative” and socially “liberal” minds (to use broad and imprecise terms) can diverge dramatically. Conservatives, for example, tend to place a higher value on a moral paradigm called “sanctity/degradation,” which encompasses a desire for “purity” and, crucially, a strong need for disassociation from activities seen as impure. As one manifestation of the differing attitudes toward this paradigm, a survey revealed that conservatives were much more likely to say they would never receive a clean, disease-free blood transfusion from a convicted child-molester, a desire that many liberals (and libertarians) can’t help but find baffling.

It would be a dangerous world if those with “conservative” minds frequently had the power to weigh the desires of those with “liberal” minds, and vice-versa. But inevitably, that is what happens when personal convictions are subjected to judicial tests. This is why under RFRA, as Gorsuch put it in another of his religious-liberty opinions, courts “lack any license to decide the relative value of a particular exercise to a religion.”

But if Gorsuch joins the High Court, not all of his colleagues will share his skepticism, or his faithful approach in applying RFRA. The Hobby Lobby case was appealed to the Supreme Court and affirmed by only a 5–4 vote. In dissent, Justice Ruth Bader Ginsburg performed exactly the scrutiny of other peoples’ values that RFRA forbids, writing that “the connection between the families’ religious objections and the contraceptive coverage requirement is too attenuated.” Further critiquing the beliefs of others, her dissent then attempted to draw objective moral lines between purchasing contraceptives and directing money into funds that pay for them, and between directing a woman to use contraceptives and merely providing the means for her to use them.

This is a path that Judge Gorsuch, wisely, has consistently declined to go down. When it comes to judicial wisdom, sometimes the most important virtue is knowing what shouldn’t be decided.

Thomas Berry is legal associate in the Cato Institute’s Center for Constitutional Studies.

Overgrown Wall Street Regulation Needs a Trim in 2017

Thaya Brook Knight

Well-functioning capital markets are the lifeblood of progress; without capital, companies cannot develop new communication technologies, safer cars, better pharmaceuticals, or any of the things that make modern life as comfortable and safe as it is.

While markets need rules of the road to facilitate trading, these rules don’t have to be government-created, and they certainly don’t need to be as lengthy and complex as they are. It’s time to reevaluate the existing laws and rip out the overgrowth.

Effective capital market regulation does three things: (1) creates rules of the road for exchanges (although exchanges can do this themselves); (2) deters and punishes fraud; and (3) facilitates price discovery.

It does not punish small family businesses. It does not protect investors from stupid decisions. It certainly does not promote social causes. Current government regulation attempts to do all of these things, but it does each of them poorly and imposes needless costs on the system as a whole.

One of the most confounding things about securities regulation is how difficult it can be for a company to know it’s even selling securities. Take a young chef starting a new restaurant. If this chef asks a few friends to “go in on” her new business and offers to share the profits with them, does she know she’s probably conducting an illegal securities offering? Probably not.

These types of informal securities offerings happen all the time. When a regulation is routinely broken, with no discernable harm, that regulation is probably a bad one.

Attempts at investor protection fare no better under existing law. To the extent that investor protection is a legitimate goal of securities regulation, its focus should be on deterring fraud and facilitating disclosure, not preventing a bad investment, but current regulations go much further than this.

For example, average investors are legally barred from buying some of the most attractive stocks because of rules that restrict investment to only individuals who are rich. It’s as though Neiman Marcus were required by law to lock its doors against anyone earning less than $200,000 a year, even if shoppers had money to buy and Neiman wanted to sell.

The rationale behind this restriction is that people of average means are less able to understand sophisticated investments than their richer compatriots, and are less able to withstand financial loss. The consequence is that those everyday investors are kept out of the upside benefits too. Now, nothing about being a publicly traded company makes it immune from ruin, or from being a poor investment.

If the dot-com bubble of the 1990s taught us anything, it’s that an initial public offering (IPO) doesn’t protect a company from going down or from taking its investors down, too. But even when a public company isn’t a good investment, it is not the government’s place to restrict people from doing dumb things with their money. I may want to spend $2,000 on a designer handbag; should the government tell me I can’t, even if I have the money in the bank?

Regulations aimed at social causes may be the most harmful, not only because they manipulate the securities laws into doing something entirely outside of their intended use, but because they require financial regulators to wade into unfamiliar territory, with often disastrous results.

Under one Dodd-Frank rule, for example, public companies must disclose the supply chain for certain minerals. Its stated purpose is to alleviate the humanitarian crisis in the Democratic Republic of the Congo by limiting the funds available to warlords.

But instead of reducing warfare, the rule may have instead reduced investment in a developing country desperate for growth, as companies have steered clear of the region for fear of making inaccurate disclosures about the supply chain.

The SEC recently began a review of this rule, hopefully leading to its repeal, but the only way to prevent future misguided rulemaking is for lawmakers to refrain from shoehorning social causes (admirable though they may be) into entirely unsuitable regulatory regimes.

If federal securities regulation exists, it should be carefully cabined to ensure that it hews to the three principles outlined above. Dodd-Frank added 2,300 pages of statute and more than 22,000 pages of regulations (and counting) to an already overgrown area of federal regulation. It’s time for the pruning shears.

Thaya Brook Knight is associate director of financial regulation studies at the Cato Institute.

Advice for the President on NAFTA Renegotiation: Don’t Fix What Ain’t Broke

Daniel J. Ikenson

Scapegoating trade for problems real and imagined has been a prominent part of American electoral politics for 25 years. So, during the campaign, when candidate Donald Trump referred to the North American Free Trade Agreement as “the worst trade deal ever negotiated,” his rhetoric wasn’t especially alarming.

But President Trump’s recent announcement that his administration will reopen and renegotiate NAFTA is cause for deep concern. It’s not that NAFTA is a perfect agreement that wouldn’t benefit from some updating. The concern is that Trump will reach for a sledgehammer instead of a scalpel.

The president claims that NAFTA has been a failure, and cites America’s $60 billion bilateral trade deficit with Mexico as the evidence. In his mistaken view, exports are Team America’s points and imports are the foreign team’s points, so the deficit means we are losing. And the reason the United States is losing is because U.S. negotiators were outsmarted in the early 1990s or our trading partners—especially Mexico—have been cheating with impunity. Outside of the Trump administration, one would be hard pressed to find an economist who believes that the trade balance is a measure of the efficacy of trade policy.

NAFTA went into effect in 1994 and provided for the gradual elimination of almost all tariffs and many other impediments to trade among the North American countries. As a share of aggregate GDP, the value of U.S.-Mexico trade doubled between 1994 and 2015. Beyond leading to lower prices and more choices for consumers, trade barrier elimination delivered the conditions necessary to permit transnational specialization in production. Essentially, the removal of barriers allowed the factory floor to break through its walls and cross borders, so that tasks spanning the spectrum from product conception to production to consumption could be performed in the places where it made the most economic sense to perform those tasks.

The result was the emergence of a globally competitive, integrated North American production platform, in industries from agriculture and food processing to automobile and machinery manufacturing.

But the nature of production and commerce has changed considerably in the internet age that has emerged and evolved over the past quarter century. NAFTA lacks rules dealing with 21st-century issues, such as e-commerce, business data transmissions, and trade in services industries that didn’t exist 25 years ago, and it includes provisions that might be worth reconsidering. While there is certainly scope for carefully considered reforms, a massive overhaul that dramatically changes the rules and incentives undergirding the North American production platform would be enormously disruptive, and potentially disastrous.

Of course, NAFTA has its critics who would like nothing more than to blow up the status quo. Peter Navarro, who heads Trump’s newly established National Trade Council, wants to see those transnational production and supply chains dismantled and situated entirely in the United States. He and incoming Commerce Secretary Wilbur Ross (whom Trump designated to lead the NAFTA renegotiation) believe that imports detract from economic growth and that the United States should be more self-sufficient.

According to the Financial Times, “Mr. Navarro said one of the administration’s trade priorities was unwinding and repatriating the international supply chains on which many US multinational companies rely, taking aim at one of the pillars of the modern global economy.” Of course those views are consistent with Trump’s threatening tweets, in which he has warned U.S. companies with supply chains running through Mexico that their products, when imported back into the United States, will face penalties.

Ever since Ross Perot’s 1992 warning of “a giant sucking sound” coming from Mexico to vacuum up U.S. investment, factories, and jobs, NAFTA has been a symbol of corporate free trade agreements run amok. Despite an abundance of evidence to the contrary, the view that NAFTA killed U.S. manufacturing jobs is alive and kicking in the age of Trump.

Bureau of Labor Statistics data show that in the 14 years between 1979 (the year in which U.S. manufacturing jobs peaked at 19.4 million) and 1993 (the last year before NAFTA implementation), the manufacturing sector shed 2.7 million jobs. In the 14 years between 1993 and 2007, manufacturing shed 2.9 million jobs. In other words, the pace of job decline in manufacturing was virtually unchanged between the periods. It’s worth mentioning that manufacturing jobs actually increased by 800,000 in the first five years following NAFTA’s implementation.

University of California economic historian Brad DeLong — not a raging free trader — estimates that NAFTA may be responsible for net job losses of about 0.1 percent of the U.S. workforce, which amounts to fewer jobs than are added to payrolls in an average month. Other factors, especially the increase in output attributable to productivity gains, explain much of the reduction in manufacturing jobs.

The president’s unorthodox views and impulsive tendencies are causing trepidation and uncertainty. Threats of 35 percent tariffs, fear of being called out by name in a menacing tweet, and fear of political retribution have kept much of the business community cowering in silence. The uncertainty has undoubtedly deterred, deferred, and reversed cross-border investment decisions. Expect that to continue until the purpose and scope of the NAFTA renegotiation become clearer.

Dan Ikenson is director of the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies.

Neil Gorsuch and the Structural Constitution

Ilya Shapiro and Frank Garrison

The Framers designed a system whereby the primary method of protecting individual rights lay in dividing the power of government both vertically and horizontally (federalism and the separation of powers, respectively). This innovation, applying a blend of ancient and Enlightenment-era political philosophy, would prevent anybody in the ruling class from gaining too much power over the people.

But our constitutional jurisprudence has not always reinforced this structure. Indeed, over the past century we have seen more and more power transferred from the states to the federal government — and from the judicial and legislative branches to the executive. The main protection for freedom became what the Founders originally considered a redundant afterthought, the Bill of Rights (which, as the late Justice Scalia liked to say, most tin-pot banana republics have). With the nomination of Judge Neil Gorsuch to the Supreme Court, however, there is renewed hope for a renaissance in enforcing the Constitution’s structure as the means for securing and protecting ordered liberty.

Like Justice Scalia — whose seat Gorsuch is tapped to fill — the nominee applies the Constitution’s original meaning to these structural provisions, and he recognizes the importance of limiting government to protecting rights. But Gorsuch has been more willing than Scalia was to take them seriously and not just defer to executive agencies, because he recognizes the damage that the modern administrative state has wrought on individual liberty.

In United States v. Nichols (2015), Gorsuch confronted the delegation of the legislative power to the executive branch. And so he considered the “non-delegation doctrine,” which comes directly from Article I of the Constitution: “All legislative powers herein granted shall be vested in a Congress of the United States.” That seems pretty clear, but the Supreme Court, in nodding toward the practicalities of modern government, has allowed congressional delegation of the lawmaking power if there is an “intelligible principle” for the executive to follow. In practice, Congress gives very few principles, much less intelligible ones; instead, it passes vague statutes with little guidance for how to implement them. This gives the executive free rein to promulgate rules that have the binding force of law.

In his Nichols dissent, Gorsuch invoked the Framers to explain the importance of keeping legislative power in Congress:

The framers of the Constitution thought the compartmentalization of legislative power not just a tool of good government or necessary to protect the authority of Congress from the encroachment by the Executive but essential to the preservation of the people’s liberty… . By separating the lawmaking and law enforcement functions, the framers sought to thwart the ability of an individual or group to exercise arbitrary or absolute power.

Gorsuch’s opinions have also questioned the constitutional implications of granting deference to administrative agencies through judge-made doctrines. These doctrines — named for the cases from which they derive, such as Auer, Chevron, and Brand X — require courts to defer to agency interpretations of ambiguous statutes and regulations. Chevron and Auer deference allow agency “experts” to fill gaps in these ambiguities to craft policy — essentially letting the executive write legislation. Brand X, for its part, requires courts to defer to post-hoc executive interpretations of statutes even after a federal court has already construed the statutes’ meaning — conferring on the executive the judicial power to have the final say on “what the law is” (to quote Chief Justice John Marshall in the foundational case of Marbury v. Madison). Gorsuch wrote a much-heralded opinion (and separate concurrence!) in Gutierrez-Brizuela v. Lynch (2016) analyzing whether these doctrines violate the Constitution’s separation of powers.

But it is another case — which he says “would make James Madison’s head spin” — that stands out for his consideration of how these doctrines affect individual liberty. In De Niz Robles v. Lynch (2015), an executive-agency adjudication cited Chevron and Brand X to essentially overrule federal court precedent interpreting an immigration statute, applying it retroactively even though the defendant had relied on the courts’ previous interpretation of the law.

The government argued that it could do this because the law was ambiguous. Gorsuch, writing for the majority in striking down the government’s ruling, pointed out that even if these doctrines are not seen as violations of our Constitution’s structure under current precedent, they can combine to infringe on “second-order constitutional protections sounding in due process and equal protection.”

Of course, even as Gorsuch’s administrative-law jurisprudence shows a devotion to the Constitution’s original design, no judge is perfect. When it comes to the dormant commerce clause — the idea that states can’t impose regulations that impede interstate commerce even if Congress hasn’t expressly forbidden them to do so — Gorsuch, in our view, gets it wrong. In two recent opinions, Gorsuch questioned the doctrine’s constitutional foundation (as did Justice Scalia, we hasten to add). While the commerce clause has been invoked since the New Deal as a warrant for nearly unlimited federal power, its inverse actually seems more faithful to a founding document concerned with the free flow of commerce throughout the nation.

That’s a complicated and highly technical legal dispute that generally cuts across jurisprudential lines. But it just goes to show that while not everyone will agree with Judge Gorsuch’s analysis in every area of law, he has shown a willingness to respect the judicial duty and enforce the Constitution’s structural protections against federal overreach. That approach will make a welcome addition to the Supreme Court.

Ilya Shapiro is a senior fellow in constitutional studies at the Cato Institute. Frank Garrison is a legal associate at the Cato Institute.

How One Company’s Perfidy Makes Your Cell Phone More Expensive Than It Should Be

Ike Brannon

An aggressive monopolist doesn’t just content itself with monopoly profits in the market it controls; where possible, it leverages that advantage to gain market power in additional markets as well, where regulators may be less vigilant and the players in the target market are vulnerable.

Nowhere is this more evident than in the various markets of the dynamic mobile technology ecosystem. And as a result of such behavior, consumers are paying more than they should for their cell phones.

Perhaps the most egregious example of a monopolist exploiting this two-step maneuver has been Qualcomm, a southern California firm that makes most of its money from its intellectual property. The company’s wielding of its cache of standard-essential patents (SEPs) against other mobile technology industry players has given it a nearly untouchable position of control within the baseband processor market. Baseband processors are devices that enable cellular communications.

SEPs help standardize technology and allow products from different manufacturers to interoperate. A seamless exchange of signals and data is particularly vital within mobile technology because communication is a core utility. However, Qualcomm imposes sharply higher licensing fees on customers that do not also buy its baseband processor. In essence, it bundles its SEP licenses together with their baseband chips.

Bundling is a common form of price discrimination, in which a seller charges different customers different prices for a similar good. Bundling is not illegal per se, but regulators tend to get involved when a firm uses the legal monopoly conferred by a patent to create a monopoly in another market — especially if it is a market that would otherwise be relatively competitive. These excessive licensing fees trickle down to consumers in what some have come to call the “Qualcomm Tax.”

The baseband processor market is exactly such a market. Qualcomm’s share of that heretofore robust market surged from half to two-thirds in just two years, making it the dominant actor. Its rapid market gains have resulted in no small part from leveraging its SEPs.

The various standards boards that deem which patents are essential to each standard require that holders of such intellectual property license it on fair, reasonable, and non-discriminatory (FRAND) terms. Of course, “fair” is a nebulous term, but few would dispute that a pricing structure designed to expand the realm of a company’s monopoly does not comport with any notion of fairness. The FRAND licensing obligation makes SEPs different from other patents: since almost every phone manufacturer will need to license them, they create a tremendous licensing revenue upside for SEP holders.

Qualcomm essentially offers the customers for its patents a conditional price — a low one if they buy the Qualcomm baseband processor but a much higher fee if they refuse. In some cases Qualcomm refuses to license these SEPs altogether — and without them a cellphone would be fundamentally incompatible with standards like 3G, 4G, or LTE, rendering it all but worthless.

The Federal Trade Commission recently concluded that Qualcomm has been abusing its SEP power to benefit its chipset business, and last month it sued the company in an attempt to stop these practices. Apple filed its own lawsuit against the company for the same reason shortly thereafter. In December 2016 South Korea fined Qualcomm $850 million for its actions in the baseband processor market as well.

Monopolies are not illegal — our antitrust law permits a company that builds a better mousetrap, and holds the intellectual property that made its improved mousetrap a success, to reap the rewards of its innovation, and it gives the firm wide latitude to charge what the market will bear. The protection is temporary, and the hope is that any outsize profits will encourage more innovation in this — and other — markets.

Nor is price discrimination illegal, whether it is done via bundling or some other method. Price discrimination may create more profit for businesses, but it also typically lets them reach more customers, enhancing social welfare through increased sales along with greater consumer and producer surplus. That is a good thing, regardless of how markets divide that surplus.

However, neither of these describes what Qualcomm is doing, which is merely leveraging its SEPs to create and sustain another monopoly. The new monopoly does not incentivize more innovation, and its price discrimination does not boost social welfare; it merely transfers money from its immediate customers — and everyone who buys a cellphone — into its own coffers.

The FTC is right to act to try to put a stop to this abuse of market power, and its actions hold the potential to reduce price pressures in the cell phone market, thereby saving consumers billions of dollars. Monopolies should always and everywhere be kept to as narrow a market as possible.

Ike Brannon is president of Capital Policy Analytics and a visiting fellow at the Cato Institute.