Scorecard: Trump’s First Six Months

Michael D. Tanner

The first six months of the Trump presidency have been dominated
by tweets, insults, and investigations. But obscured by all the
noise have been important questions of policy. Let us, therefore,
put aside issues of style and look more closely at the substance.
What has President Trump accomplished?

There have clearly been successes. At the very top of the list
is Supreme Court Justice Neil Gorsuch, who gives every sign of
being the brilliant originalist he was advertised to be. Trump has
been slower in nominating judges to lower courts, but those he has
put up, in general, appear to be excellent choices.

On the legislative front, Trump’s biggest victory may have been
a bill making it easier to fire incompetent employees at the
Department of Veterans Affairs, and protecting whistleblowers in
the agency. He has also signed some 15 bills repealing all or parts
of Obama-era regulations. Few have been earthshaking, but most have
been steps in the right direction. And while his withdrawal from
the Paris climate accords was as much symbolism as substance (as
were the accords themselves), it was an important signal that
America is going to prioritize economic growth.

Nor should we ignore addition by subtraction, so to speak. There
are all the regulations that the Trump administration has not
enacted, especially compared with what a Clinton administration
probably would have done. By some measures, the Trump
administration has been the least regulatory presidency since
Reagan’s. That’s not nothing.

But the president has mostly struck out on bigger items. Even if
Republicans eventually cobble together some sort of health-care
bill, full repeal of Obamacare is, by all accounts, not going to
happen. Tax reform remains nothing more than a one-page outline and
is unlikely to pass this year. The budget remains stalled,
entitlement reform is off the table, and deficits are rising.
Congress, of course, shares the blame for these failures. But
Trump’s distraction, disengagement, and vacillation helped turn bad
situations into true disasters.

Then again, we should probably be grateful that many of Trump’s
other initiatives, such as Ivanka’s paid family-leave and
child-care programs, the trillion-dollar infrastructure boondoggle,
and, of course, the wall are not going anywhere.

And if you want to see a complete policy train wreck, look no
further than the president’s travel ban, originally intended to
bar entry for 90 days for applicants from seven Muslim-majority
countries. Setting aside that the president managed to insult an
entire religion and cause enormous personal hardship to innocent
people, or that the ban does nothing to make America safer, one
can’t overlook that the whole exercise ended up bogged down in the
courts for longer than the order was originally supposed to be in
effect.

Meanwhile, on foreign policy, Trump’s flubs and snubs have
obscured the fact that he has mostly carried on a pretty
traditional approach to most issues. His rhetoric might be more
bellicose, but his actual policies are not much different from what
a President Hillary Clinton probably would have done.

Under other circumstances, one might consider these six months
as perfectly mediocre, not as bad as critics feared, but no great
shakes either. But circumstances are hardly normal. It’s all but
impossible to separate Trump on policy from Trump’s character. From
the point of view of his many critics, his petty feuds, continuing
misogyny, and relentless assault on the truth have tarnished those
things he has accomplished.

Polls show that Trump’s support among voters is at a record low
for this point in a presidency, but he retains nearly all the
support of his base. For some of them, it’s enough that he appears
to speak for them against the bipartisan Washington establishment.
Others are enthralled by the way he drives liberals, critics, and
the media crazy. For still others, not being Hillary or Obama
carries him a long way. Besides, the Democrats are hardly offering
much of an alternative.

But if we are looking for real solutions to the serious problems
facing this country, the Trump administration is a long way from
winning.

Michael Tanner is a senior fellow at the Cato Institute and the
author of Going for Broke: Deficits, Debt, and the Entitlement
Crisis.

Americans Should Impeach Presidents More Often

Gene Healy

Impeachment talk in the nation’s capital rose from a murmur to a
dull roar in mid-May, thanks to a week jam-packed with Nixonesque
“White House horrors.” On Tuesday, May 9, President
Donald Trump summarily fired FBI director James Comey; on Thursday,
Trump admitted the FBI investigation into “this Russia
thing”—attempts to answer questions about his campaign’s
links with Moscow—was a key reason for the firing; Friday
found Trump warning Comey he’d “better hope that there are no
‘tapes’ of our conversations”; and the following
Tuesday The New York Times reported the existence of a
Comey memo on Trump’s
efforts to get the FBI director to “let this go.” Along
the way, Trump may have “jeopardized a critical source of
intelligence on the Islamic State” while bragging to Russian
diplomats about his “great intel,” according
to The Washington Post.

Still, the Beltway discussion of impeachment remained couched in
euphemism, as if there were something vaguely profane and
disreputable about the very idea. “The elephant in the
room,” an NPR story observed, “is the big ‘I’
word—impeachment”; “the ‘I’ word that I think we
should use right now is ‘investigation,’” House Judiciary
Committee member Rep. Eric Swalwell (D-Calif.) told CNN’s Wolf
Blitzer.

We don’t call it “the v-word” when the
president signals he might veto a bill. Yet somehow, when it comes
to the constitutional procedure for ejecting an unfit president,
journalists and Congress members—grown-ups,
ostensibly—are reduced to the political equivalent of
h-e-double-hockey-sticks.

What’s really obscene is America’s record on presidential
impeachments. We’ve made only three serious attempts in our entire
constitutional history: Andrew Johnson in 1868, Bill Clinton in
1998—both of whom were impeached but escaped
removal—and Richard Nixon, who quit in 1974 before the House
could vote on the issue. Given how many bastards and clowns we’ve
been saddled with over the years, shouldn’t we manage the feat more
than once a century?

A ‘National Inquest Into the Conduct of Public Men’

Impeachments “will seldom fail to agitate the passions of
the whole community, and to divide it into parties,” Alexander
Hamilton predicted in the Federalist. That’s how
it played out during our last national debate on the subject,
during the Monica Lewinsky imbroglio of the late ’90s.

The specter of Bill Clinton’s removal from office for perjury
and obstruction of justice drove legal academia to new heights of
creativity. Scads of concerned law professors strained to come up
with a definition of “high Crimes and Misdemeanors”
narrow enough to let Bill slide. In a letter delivered to Congress
as the impeachment debate began, over 430 of them warned that
unless the House of Representatives wanted to “dangerously
weaken the office of the presidency for the foreseeable
future” (heaven forfend), the standard had to be “grossly
heinous criminality or grossly derelict misuse of official
power.”

Some of the academy’s leading lights, not previously known for
devotion to original intent, proved themselves stricter than the
strict constructionists and a good deal more original than the
originalists. The impeachment remedy
was so narrow, Cass Sunstein insisted, that if
the president were to up and “murder someone simply because he
does not like him,” it would make for a “hard case.”
Quite so, echoed con-law superprof Laurence Tribe: An impeachable
offense had to be “a grievous abuse of official power,”
something that “severely threaten[s] the system of
government.”

Just killing someone for sport might not count—after all,
Tribe pointed out, when Vice President Aaron Burr left a gutshot
Alexander Hamilton dying in Weehawken after their July 1804 duel,
he got to serve the remaining months of his term without getting
impeached. Still, Tribe generously allowed, in the modern era
“there may well be room to argue” that a murdering
president could be removed without grave damage to the
Constitution.

In the unlikely event that Donald Trump orders one of his
private bodyguards to whack Alec Baldwin, it’s a relief to know
that Laurence Tribe will entertain the argument for impeachment.
But does constitutional fidelity really require us to put up with
anything short of “grievous,” “heinous,”
existential threats to the body politic?

The Framers borrowed the mechanism from British practice, and
there it wasn’t nearly so narrow. The first time the phrase
appeared, apparently, was in the 1386 impeachment of the Earl of
Suffolk, charged with misuse of public funds and negligence in
“improvement of the realm.” The Nixon-era House Judiciary
Committee staff report Constitutional Grounds for Presidential
Impeachment described the English precedents
as including “misapplication of funds, abuse of official
power, neglect of duty, encroachment on Parliament’s prerogatives,
[and] corruption and betrayal of trust.”

As Hamilton explained in the Federalist, “the
true spirit of the institution” was “a method of national
inquest into the conduct of public men,” the sort of inquiry
that could “never be tied down by such strict rules…as in
common cases serve to limit the discretion of courts.”

Among those testifying alongside Sunstein and Tribe in 1998 was
Northwestern’s John O. McGinnis, a genuine originalist, who argued
that the Constitution’s impeachment provisions should be viewed in
terms of the problem they were designed to address: “how to
end the tenure of an officer whose conduct has seriously undermined
his fitness for continued service and thus poses an unacceptable
risk of injury to the republic.”

Contra Tribe, who’d compared impeachment to “capital
punishment,” McGinnis pointed out that the constitutional
penalties for unfitness—removal and possible disqualification
from future office holding—went “just far enough,”
and no further than necessary, “to remove the threat
posed.” In light of the structure and purpose of impeachment,
he argued, “high Crimes and Misdemeanors” should be
understood, in modern lay language, roughly as “objective
misconduct that seriously undermines the official’s fitness for
office…measured by the risks, both practical and symbolic, that the
officer poses to the republic.”

Today, even the president’s political enemies tend to set the
bar far higher. Donald Trump has acted in a way that is
“strategically incoherent,” “incompetent,” and
“reckless,” Democratic leader Rep. Nancy Pelosi said in
February, but “that is not grounds for impeachment.”

But incoherence, incompetence, and recklessness are evidence of
unfitness, and when we’re talking about the nation’s most powerful
office they can be as damaging as actual malice. It would be a
pretty lousy constitutional architecture that only provided the
means for ejecting the president if he’s a crook or a vegetable,
but left us to muddle through anything in between.

Luckily, Pelosi is wrong: There is no constitutional barrier to
impeaching a president who demonstrates gross incompetence or
behavior that makes reasonable people worry about his proximity to
nuclear weapons.

Impeachable Ineptitude

When Barack Obama was president, Trump once asked, “Are you
allowed to impeach a president for gross incompetence?”
Earlier this year, Daily Show viewers found that
tweet funny enough to merit the “Greatest Trump Tweet of All
Time” award. Still, it’s a valid question.

The conventional wisdom says no, largely on the basis of a
snippet of legislative history from the Constitutional Convention.
As James Madison’s notes recount, when Virginia’s George Mason
moved to add “maladministration” to the Constitution’s
impeachable offenses, Madison objected: “So vague a term will
be equivalent to a tenure during pleasure of the Senate.”
Mason yielded, substituting “other high crimes &
misdemeanors.”

But the Convention debates were held in secret, and Madison’s
notes weren’t published until half a century later. Furthermore,
the language Mason substituted was understood from British practice
to incorporate “maladministration.” Nor did Madison
himself believe mismanagement and incompetence to be clearly
off-limits, having described impeachment as the necessary remedy
for “the incapacity, negligence, or perfidy of the chief
Magistrate.”

Thus far, the Trump administration has been a rolling Fyre
Festival of negligence and maladministration, from holding a
nuclear strategy session with Japan’s prime minister in the crowded
dining room of a golf resort to having the former head
of Breitbart News draft immigration orders
without the assistance of competent lawyers. Near as I can tell,
James Comey’s verbal incontinence had a bigger impact on the 2016
election than Russian espionage, but liberals hold out hope for a
“smoking gun” of collusion that’s unlikely ever to
emerge. Meanwhile, the Trump administration was apparently clueless
that firing the FBI director in the midst of the Russia
investigation would be a big deal, and Trump himself was unaware
that admitting he did it in hopes of quashing the inquiry was a
stupid move.

As the Comey story emerged, pundits and lawbloggers debated
whether, on the known facts, the president’s behavior would support
a federal felony charge for obstruction of justice. But that’s the
wrong standard. As the Nixon Impeachment Inquiry staff report
pointed out: “the purpose of impeachment is not personal
punishment. Its purpose is primarily to maintain constitutional
government.” Even if, to borrow a phrase from Comey, “no
reasonable prosecutor” would bring a charge of obstruction on
these facts, the House is free to look at the president’s entire
course of conduct and decide whether it reveals unfitness
justifying impeachment.

A Rhetorical Question?

The Nixon report identified three categories of misconduct held
to be impeachable offenses in American constitutional history:
“exceeding the constitutional bounds” of the office’s
powers, using the office for “personal gain,” and, most
important here, “behaving in a manner grossly incompatible
with the proper function and purpose of the office.”

When Trump does something to spark cries of “this is not
normal,” the behavior in question often involves his Twitter
feed. The first calls to impeach Trump over a tweet came up in
March, when the president charged, apparently without evidence,
that Obama had his “wires tapped” in Trump Tower.

The tweet was an “abuse of power,” “harmful to
democracy,” and potentially impeachable, Harvard Law’s Noah
Feldman proclaimed: “He’s threatening somebody with the
possibility of prosecution.” Laurence Tribe, of all people,
agreed. Murder may have been a hard case, but slander? Easy call.
Trump’s charge qualified “as an impeachable offense whether
via tweet or not.”

I confess it wasn’t the utterly speculative threat to Barack
Obama that disturbed me about Trump’s Twitter feed that day in
March; it was that a mere two hours after lobbing that grenade,
Trump turned to razzing Arnold Schwarzenegger for his
“pathetic” ratings as host of Celebrity Apprentice. The
Watergate tapes exposed much more than a
simple abuse of power. They revealed a fragile, petty, paranoid
personality of the sort you’d be loath to entrust with the vast
authority of the presidency. And Nixon didn’t imagine that the
whole world would be listening. Trump’s Twitter feed is like having
the Nixon tapes running in real time over social media, with the
president desperate for an even bigger audience.

As it happens, there’s precedent for impeaching a president for
bizarre behavior and “conduct unbecoming” in his public
communications. The impeachment of Andrew Johnson gets a bad rap,
in part because most of the charges against him really were bogus.
The bulk of the articles of impeachment rested on Johnson’s
violation of the Tenure of Office Act, a measure of dubious
constitutionality that barred the president from removing Cabinet
officers without Senate approval.

But the 10th article of impeachment against Johnson, based on
different grounds, has gotten less coverage. It charged the
president with “a high misdemeanor in office” based on a
series of “intemperate, inflammatory, and scandalous
harangues” against Congress. In a series of speeches in the
summer of 1866, Johnson had accused Congress of, among other
things, “undertak[ing] to poison the minds of the American
people” and having “substantially planned” a race
riot in New Orleans that July. Such remarks, according to Article
X, were “peculiarly indecent and unbecoming in the Chief
Magistrate” and brought his office “into contempt,
ridicule and disgrace.”

‘Peculiar Indecencies’

From a 21st century vantage point, the idea of impeaching the
president for insulting Congress seems odd, to say the least. But
as Jeffrey Tulis explained in his seminal work The Rhetorical
Presidency, “Johnson’s popular rhetoric
violated virtually all of the nineteenth-century norms”
surrounding presidential oratory. Johnson stood “as the stark
exception to general practice in that century, so demagogic in his
appeals to the people” that he resembled “a parody of
popular leadership.” The charge, approved by the House but not
voted on in the Senate, was controversial at the time, but besides
skepticism about whether it reached the level of a high
misdemeanor, “the only other argument offered by congressmen
in Johnson’s defense was that he was not drunk when giving the
speeches.”

It’s impressive that Trump—a teetotaler—manages to
pull off his “peculiar indecencies” while stone cold
sober. Since his election, Trump has used Twitter to rail against
restaurant reviews, Saturday Night Live skits,
“so-called judges,” and America’s nuclear-armed rivals.
The month before his inauguration, apropos of nothing, Trump
announced via the social network that the U.S. “must greatly
strengthen and expand its nuclear capability,” following up
the next day on Morning Joe with “we will
outmatch them at every pass and outlast them all.”

As Charles Fried, Reagan’s solicitor general, observed,
“there are no lines for him…no notion of, this is
inappropriate, this is indecent, this is unpresidential.” If
the standard is “unacceptable risk of injury to the
republic,” such behavior just may be impeachable. An
impeachment on those grounds wouldn’t just remove a bad president
from office; it would set a precedent that might keep future
leaders in line.

Gene Healy is a vice president at the Cato Institute, author of
The Cult of the Presidency: America’s Dangerous Devotion to
Executive Power (Cato 2008), and a columnist at the Washington
Examiner.

Four Reasons Obamacare Lived to Plague Republicans Another Day

Michael D. Tanner

Republican hopes to repeal Obamacare are all but officially
dead, at least for now. This isn’t just a failure; it is an epic
failure. It is the legislative failure by which all future
legislative failures will be judged.

But how did it come to this? When Republicans took power in
January, they controlled both houses of Congress and the
presidency, Obamacare was hugely unpopular with voters, and the
health care law was spiraling into failure. Yet somehow, Obamacare
not only survives, it is now more popular than ever.

So what went wrong?

1. It’s Hard Taking Things Away from People:
One thing Democrats have always understood is that there is no down
escalator for the welfare state. As we witness every election
cycle, Democrats accuse Republicans of throwing grandma off a
cliff for merely discussing Social Security or Medicare reform. No
matter how unsustainable or unrealistic promised benefits are, you
are still taking away something that people feel they were
promised. Santa Claus is always more popular than the Grinch, even
if the Grinch understands math.

Republicans tried hard to pretend that there were no losers
under their proposals, but the public understood that, if you
slowed the growth of Medicaid or reduced subsidies, some people
would either pay more or get less. And because they don’t trust
politicians, they didn’t want to take any chances that the person
paying more or getting less would be them. That means it was always
going to be hard for Republicans to repeal or replace Obamacare
even if they got everything else right. As we saw, they didn’t.

2. Institutional Barriers: Because Democrats
were unified in opposition to any Republican plan, Republicans were
forced to rely on a complex procedure known as “reconciliation” to
avoid a filibuster in the Senate. Among other things,
reconciliation requires that all provisions in a bill have a direct
budgetary impact. Thus, proposals like allowing the sale of
insurance across state lines couldn’t be included in the bill. But
those provisions were not only among the most popular Republican
ideas, they were also important for making insurance more
affordable.

3. No Plan: For 7 years, every Republican
running for president or Congress (or any other office for that
matter) campaigned on opposition to Obamacare. Congress even voted
some 50 times to repeal all or part of the health care law. But
once the stakes became real rather than symbolic this year, it
quickly became apparent that Republicans had no actual plan for
what would replace Obamacare. This wasn’t just a question of
negotiating the final details either. They didn’t even understand
the basics. It was obvious that very few Republicans had given much
thought to how the health care system works or what a free market
health care plan might look like.

Without a base of understanding to start from, the negotiations
over the Republican alternative quickly became obsessive efforts to
find a plan that could pass, rather than one that would work. Thus
Republicans tried to keep seemingly popular provisions of
Obamacare, like preventing medical underwriting of people with
preexisting conditions, while repealing unpopular provisions like
the individual mandate. They ended up with a proposal that
increasingly veered toward incoherence. It somehow managed the
difficult feat of taking all the problems with Obamacare and making
them worse.

4. No Message: As Republicans became increasingly
obsessed with process and the tantalizing question of whether they
could pass anything, they almost completely stopped
talking about why they should pass their bill. Almost no
one talked about why this was a good bill, or why it was better
than Obamacare. The average American had no idea what the
Republican bill would do to their premiums, their coverage, or
their ability to see the doctor of their choice. There is a
compelling
case to be made for how free market health care reform can bring
down costs, while improving quality and choice. No one ever made
that case.

No one was more derelict in this regard than President Trump.
Say what you will about how President Obama sold Obamacare, but he
did sell it. By some estimates Obama discussed health care on more
than 150 occasions in his speeches, press conferences, and town
halls. Even by generous standards, President Trump spoke about
health care less than a dozen times in the first six months of his
presidency, often just a passing reference sandwiched amidst other
issues.

The Republican failure to repeal Obamacare suggests that the
rest of their agenda, from tax reform to the budget, is in trouble
too. None of the dynamics are going to change. Democrats, firmly in
“resist” mode, will remain adamantly against anything Republicans
propose. President Trump will remain distracted and disengaged (not
to mention increasingly unpopular). Republicans will remain divided
and afraid. Not exactly a recipe for success.

The question, then, is whether the president and congressional
Republicans have learned anything from this defeat. So far, there’s
no evidence that they have.

Michael Tanner is a senior fellow at the Cato Institute.

Straight from DPRK: Traveling to NK: Brave or Crazy?

Doug Bandow

Pyongyang, North Korea — In the popular mind, there may be
no more forbidding destination on earth. I’ve never had so many
people ask if I was serious as when I mentioned I was heading to
the Democratic People’s Republic of Korea last month.

In fact, I had no worries. I was going as a guest of the
Institute for American Studies of the Foreign Ministry. I also
understood what not to do.

Failing at the latter has proved to be the undoing of a number
of Americans, most spectacularly collegian Otto Warmbier, who died
after being released by North Korea in a coma. Three other
Americans remain in custody, along with several South Koreans and
other foreign nationals. But their plight, though tragic, is not a
good reason to ban travel to the DPRK.

Some attributed Warmbier’s release to the Trump
administration’s efforts, though it had no more leverage than
its predecessor. While in the North I asked if the government sent
Warmbier home as a conciliatory gesture to Washington. The
unequivocal response was that it was strictly a humanitarian
matter.

Otto Warmbier’s family blamed the Obama administration for
failing to win his release, but the decision always was
Pyongyang’s. Why the DPRK released him was impossible to know
for sure: perhaps Kim Jong-un decided that holding a comatose
prisoner was a political liability.

The cases of Warmbier and other Americans, some going back
years, are uniformly awful: people punished for actions that should
not be considered criminal. But the DPRK is not alone in penalizing
foreigners for dubious offenses. The main difference may be that
Pyongyang, more than most other “hostile” states, sees
potential political value in jailed Americans.

Still, a thousand Americans visit annually and don’t get
arrested. Young Pioneer Tours, which organized the trip on which
Warmbier traveled, pointed out that it had brought in more than
8,000 other travelers without incident.

On my plane entering North Korea I sat next to a British citizen
who was making his third tourist visit. The worst trouble he had
was being told to delete photos deemed inappropriate.

A number of humanitarian groups, some explicitly religious, work
in the officially atheist nation. I met several NGO staffers and
volunteers in the midst of a lengthy sojourn providing medical
care. None had ever ended up in jail.

In fact, arrests aren’t random but, in North Korea’s
view, for cause. DPRK officials say they punish intentional, not
accidental, rules violations.

I chatted with the head of a Western NGO active in the North who
said her group had looked into the cases of those jailed: all had
committed some illegal act. Obviously that doesn’t mean their
conduct warranted punishment. But they put themselves under the
DPRK’s authority, to ill effect.

Warmbier’s case looks extreme even by North Korean
standards. Some knowledgeable Westerners suggested that there was
more to his case, perhaps involving an insult to the North Korean
system and Supreme Leader Kim Jong-un. The poster incident merely
became the cover story.

Some in Congress want to ban travel to the North. But a free
society should protect the liberty to travel and explore. This
right shouldn’t be limited without compelling
justification.

Visiting the DPRK has educational value. Those who spend time in
North Korea are more likely to understand it. Since the U.S.
government lacks a diplomatic presence, American visitors are the
best alternative.

Going to the North also causes those living in free societies to
better appreciate their systems. I left thankful that I lived in a
society which, however imperfectly, protected individual
liberty.

Watching, meeting, and especially working with people who
don’t fit the official stereotype provide North Koreans with
an education as well. Knowledge is transmitted, curiosity is
aroused. Engagement is no panacea, but is more likely than
isolation to encourage Pyongyang’s positive evolution.

Banning Americans from visiting the North would be especially
perverse when the rest of the world remains free to go. Congress
should think about how best to transform the North’s people as
well as its government over the long term.

We may never know what happened to Otto Warmbier. His tragic
case reminds us that visiting North Korea requires special caution.
But that’s no reason to block outsiders from going.

They have much both to learn and teach. Until the DPRK changes,
individual travelers may end up being the most important, and
perhaps only, ambassadors to North Korea from democratic countries
around the world.

Doug Bandow is
a senior fellow at the Cato Institute and a former Special
Assistant to President Ronald Reagan.

Saudi Arabia and United Arab Emirates Pay High Price for Botched Attack on Qatar

Doug Bandow

The pampered petro-states of Saudi Arabia and the United Arab
Emirates expected a quick victory after imposing a quasi-blockade
on neighboring Qatar. Past crises in relations had been peacefully
resolved, but this time Qatar’s antagonists demanded its
virtual surrender, particularly abandonment of an independent
foreign policy. They believed they had Washington behind them.

Alas, the intervening weeks have not been kind to Riyadh and the
UAE. Secretary of State Rex Tillerson and Defense Secretary Jim
Mattis signaled their support for Doha. Tillerson demonstrated
obvious impatience with demands he viewed as extreme and not even
worth negotiating, and called Qatar’s positions “very
reasonable.”

More than a few critics observed that Riyadh and Abu Dhabi are
even guiltier than Qatar of funding terrorism. One of them was
Senate
Foreign Relations Committee Chairman Bob Corker, who complained
that “The amount of support for terrorism by Saudi Arabia
dwarfs what Qatar is doing.” Doha took the opportunity to
ink an agreement with the U.S. on targeting terrorist financing,
which none of Qatar’s accusers had done.

Moreover, George Washington University Professor Marc Lynch
observed that “The extremist and sectarian rhetoric which
external forces brought to the Syrian insurgency was a problem
extending far beyond Qatar.” The demand to shut Al Jazeera, made
by nations which have no free press and have even criminalized the
simple expression of sympathy for Qatar, was denounced globally.

Then came reports that U.S. intelligence concluded the UAE had
hacked Qatar’s official news agency website a couple of months
ago, creating the incendiary posts allegedly quoting Qatar’s emir
which helped trigger the crisis. In contrast, Bahrain and Egypt,
which joined the anti-Doha bandwagon, looked like mere hirelings,
doing as they were told by states which provided them financial
and military aid.
Having initiated hostilities without a back-up plan, the anti-Qatar
coalition cannot easily escalate against U.S. wishes or retreat
without a huge loss of face. But staying the course looks little
better. Saudi Arabia and the UAE caused Qataris to rally behind their
royal family, wrecked the Gulf Cooperation Council, eased
Iran’s isolation, pulled Turkey directly into Gulf affairs,
and challenged Washington. Quite an achievement.

The experience has yielded several important lessons.

President Donald Trump huffs and puffs, but
doesn’t have much to do with U.S. foreign policy.

Despite having criticized Saudi Arabia in the past, he flip-flopped
to become Riyadh’s de facto lobbyist in Washington. However,
his very public preferences have had little impact on U.S. policy,
which ended up tilting strongly against the UAE and Saudi Arabia. He
recently acknowledged that he and Secretary Tillerson “had a
little bit of a difference, only in terms of tone.”

Saudi Arabia proved to be more paper tiger than regional
leader.
It spent lavishly on weapons, subsidized other
Muslim states, sought the overthrow of Syria’s Assad regime,
and launched a brutal war against Yemen, but had no response
prepared when Qatar dismissed Riyadh’s demands. Then
Secretary Tillerson effectively blocked any escalation. With the
expiration of the Saudi-UAE ultimatum two weeks ago, some observers
feared that Saudi Arabia and the UAE would impose additional
sanctions, expel Qatar from the GCC, or even invade their
independent neighbor. But all of those steps now would be more
difficult if not impossible in practice.

Indeed, the secretary’s shuttle diplomacy last week to
support the Kuwaiti mediation attempt even forced Qatar’s
accusers to effectively negotiate what they had termed
nonnegotiable. UAE Minister of State Noura al-Kaabi said, “We
need a diplomatic solution. We are not looking for an
escalation.” No wonder Saudis, who once believed they had
coopted America’s president, now complain that
America’s secretary of state is backing Doha.

Saudi Arabia’s expensive overseas diplomacy has
been of dubious value, gaining the Kingdom few friends.

Riyadh and Abu Dhabi organized an inconsequential coalition
featuring dependents Bahrain and Egypt, the international nullity
of the Maldives, and one of the contending governments in
fractured Libya. Since then
the group has failed to win meaningful support from any other
state. The problem? The real issue isn’t terrorism, but far
more selfish concerns, such as support for domestic political
opponents.

The reputation of the accusers has tanked.
Discussion of the controversy almost inevitably resulted in more
attention to the misbehavior of Riyadh and Abu Dhabi, particularly
their brutal repression of any political and religious dissent at
home, Saudi Arabia’s lavish funding for the extremist and
intolerant Wahhabist strain of Islam, and the UAE’s initiation of
cyber-hostilities against Doha. Tom Wilson of the London-based
Henry Jackson Society published a report calling Riyadh the
“foremost” funder of terrorism in the United Kingdom
and citing concerns that “the amount of funding for religious
extremism coming out of countries such as Saudi Arabia has actually
increased in recent years.” While Qatar was vulnerable to
criticism over its backing for some radical groups, Riyadh and Abu
Dhabi had been subject to even harsher U.S. attacks for the same
reason.

Iran continued to gain more from the actions of its
antagonists than from its own efforts.
Doha and Tehran are
linked by a shared natural gas field. Their relationship is one of
Saudi Arabia’s chief complaints. Iran is a malign actor, but
Riyadh, a totalitarian Sunni dictatorship, is worse. Saudi Arabia
intervened militarily in Bahrain to sustain the Sunni monarchy
against the Shia majority and backed radical insurgents to oust
Syrian President Bashar al-Assad. The reckless new Crown Prince,
Mohammed bin Salman, orchestrated the murderous, counterproductive
war in Yemen and diplomatic/economic attack on Qatar in order to
achieve Gulf hegemony. Now, without firing a shot, Iran helped
thwart Riyadh’s latest scheme, won the gratitude of Qataris,
and put a reasonable face on the Islamist regime.

Secretaries Tillerson and Mattis deserve special credit. By
ignoring President Trump’s misdirected enthusiasm for the
Saudi monarchy, they helped shift public attention back to Riyadh
and Abu Dhabi. Neither government has demonstrated sufficient
interest in cutting terrorist funding.

For instance, in a lengthy cable dated December 30, 2009,
released by WikiLeaks, the State Department criticized Qatar and
UAE, but was toughest on Saudi Arabia: “it has been an
ongoing challenge to persuade Saudi officials to treat terrorist
financing emanating from Saudi Arabia as a strategic
priority.” Moreover, “donors in Saudi Arabia constitute
the most significant source of funding to Sunni terrorist groups
worldwide.” The kingdom “remains a critical financial
support base for al-Qaeda” and other terrorist organizations.
Despite Riyadh’s policies, “groups continue to send
money overseas and, at times, fund extremism overseas.”

If Saudi Arabia and the UAE cared about terrorism, they would
look inward first. And Riyadh would stop funding Wahhabism, an
intolerant Islamic teaching which demonizes those who believe
differently. Wilson charged that “a growing body of evidence
has emerged that points to the considerable impact that foreign
funding has had on advancing Islamist extremism in Britain and
other Western countries.” The consequences of this funding
may be more long-lasting than payments to the terrorist group du
jour. Norwegian anti-terrorism analyst Thomas Hegghammer observed
“If there was going to be an Islamic reformation in the 20th
century, the Saudis probably prevented it by pumping out
literalism.”

What really bothers Saudi Arabia and the UAE is Doha’s
support for opposition groups. For instance, both Riyadh and Egypt
fear the Muslim Brotherhood, which challenges their ruling regimes
with a flawed but serious political philosophy—and,
incidentally, does not promote terrorism. The Saudi royals are
insecure because a kleptocratic, totalitarian monarchy holds little
appeal to anyone other than the few thousand princes who live
lavishly at everyone else’s expense. Saudi Arabia and the
Emirates similarly despise the TV channel Al Jazeera, which has
criticized both regimes.

Riyadh also wants to conscript Qatar in its campaign to isolate
Iran. Ironically, the Kingdom so far has applied no pressure on
the UAE, which, like Qatar, has maintained ties with the Islamist
regime.
Anyway, it would be far better to promote long-term change by
continuing to draw Iran’s population westward in opposition
to Islamist elites. By playing host to groups as diverse as the
Taliban and Hamas, Doha actually has drawn controversial
organizations away from more radical governments, such as
Iran’s, and enabled the West to have unofficial contact with
groups with which it is officially at odds, such as the
Taliban.

Riyadh and Abu Dhabi have sown the wind. Now they will reap the
whirlwind. Their attack on Qatar further destabilized the Middle
East, unsettling several of Washington’s closest allies. The
Saudis and Emiratis ended up in a global cul-de-sac, isolating
themselves more than Qatar. The latter has little incentive to
yield, while the former face humiliation if they abandon their
claims. Other governments increasingly expect a lengthy stand-off.
Secretary Tillerson predicted that the “ultimate resolution
may take quite a while.”

That will benefit no one, other than Iran, perhaps. Not Qatar.
Not America. And certainly not Saudi Arabia and the UAE.

The U.S. can’t impose a settlement on its dubious allies.
But Washington can recognize that “there are no clean hands
here,” as a State Department spokesman recently observed. The
Trump administration should place full responsibility for the
current stand-off where it belongs, on Riyadh and Abu Dhabi.

Doug Bandow is
a Senior Fellow at the Cato Institute, former Special Assistant to
President Ronald Reagan, and a Senior Fellow in International
Religious Persecution with the Institute on Religion and Public
Policy.

Criminal Justice on a Hunch

Jonathan Blanks

On Wednesday, Attorney General Jeff Sessions announced the
reinstitution of a federal program that allows local police
officers to seize personal property without so much as a criminal
charge. The program is intended to increase the use of civil asset
forfeiture in the never-ending War on Drugs. Federal “adoption,” as
it’s referred to, allows local police to seize property without
criminal charge — which is forbidden or limited under some
state laws — and turn it over to the federal government.
Then, under what is known as the “equitable sharing” provision, up
to 80 percent of the value of that seized property is returned
directly to the local law enforcement agency for certain purposes
such as paying for overtime or buying law enforcement equipment.
Attorney General Eric Holder suspended most federal adoptions in
2015 because of stories of abuse — one department used funds
to buy a margarita machine — and of innocent owners losing
their property to overzealous police departments.

Today’s announcement stands in stark contrast to bipartisan
efforts on the state and federal levels to curb this too
often-abusive practice. Although the attorney general paid lip
service to protections for innocent property owners, the
reinstitution of federal adoption incentivizes police to employ
tactics that will likely ensnare presumptively innocent people and
place burdens on them to prove their property is legal. Moreover,
this may have a disparate impact on ethnic minorities by
incentivizing racial profiling and skewing police priorities away
from public safety.

Today, asset forfeiture is a process by which the government
seizes property — cash, automobiles, real estate, etc.
— that allegedly was produced by, or used in, the furtherance
of a crime. For example, if a person led an investment scam for
several years and made millions of dollars from that fraud, the
state could seize his home, bank accounts, and other proceeds that
can be tied to the underlying fraud. Assets connected to drug
transactions can likewise be seized, such as the car the offender
was driving, any cash in the car, or the house from which the drugs
were alleged to have been sold.

Before the 1980s, the most common form of forfeiture used by
domestic law enforcement was criminal asset forfeiture. In a
criminal
forfeiture case, the asset must have been related to a crime that
was proven in court, either in a trial or admitted in a guilty
plea. Importantly, in criminal forfeiture, the burden is on the
government to prove that the seized property had a direct
connection to the underlying criminal act.

However, in 1984, Congress amended the Comprehensive Drug Abuse
Prevention and Control Act of 1970 and established the Department
of Justice (DOJ) Asset Forfeiture Fund (AFF), which allowed the DOJ
to keep the funds it seized, sparking the resuscitation of the
once-arcane practice of civil asset forfeiture. Many states
followed suit with similar provisions that allowed their agencies
to self-fund through property seizures, and they too saw an
expansion of civil asset forfeiture.

Unlike criminal forfeiture, civil forfeiture requires no arrest
or criminal proceeding for the government to seize and liquidate
property that the government claims is connected to a crime.
While there are administrative procedures that must run their
course between the time the property is seized and when the
government may liquidate the asset, the burden is usually on the
property owner to prove that the asset is licit and not tied to a
criminal act, turning due process completely upside down.

In many jurisdictions, police don’t have to assert more than a
hunch to meet the probable cause standard to take a person’s money
under civil forfeiture. Officers have seized cash not because there
were drugs or contraband present, but because it was “way, way”
more than a “normal person would carry.” The amount of money does
not have to be large, however. A simple wad of
cash in a person’s pocket can be confiscated under some state laws
if the officer says he suspects drug activity. That person then has
to go to court to get it back. The process to reclaim the money or
property can be time-consuming and expensive, making it truly not
worth it for many individuals — particularly poor people
— to go to court to recover the asset that was taken by a
police officer. Thus, in too many cases, calling civil asset
forfeiture “highway robbery” is not hyperbole.

According to the Institute for Justice’s extensive study of
state and federal forfeiture practices, between 2000 and 2013, the
DOJ paid state and local agencies $4.7
billion in all forfeiture proceeds, the vast majority of which were
obtained through civil forfeiture. To put the growth of federal
forfeiture in context, the AFF’s net assets were $93.7 million in
1986. In 2014, the number was $4.5 billion: a 4,667 percent
increase.

In 2012, the City of Tenaha and Shelby County, Texas, settled an
ACLU class-action lawsuit that alleged that the
department was stopping drivers and, under civil asset forfeiture
law, coercing them to sign over cash and property or face arrest on
baseless charges. Officers threatened parents with taking custody
of their children for non-cooperation, and on one occasion, seized
a 16-month-old child from a restaurateur who refused to sign away
his rights to $50,000 in cash he was carrying to make a legitimate
business purchase. Horrifying as this is, the practice is worsened
because the restaurateur and most of the other people the agencies
were shaking down were black and Hispanic.

Although the ACLU suit is a particularly egregious example, the
racial disparity in stopping presumably innocent drivers with the
intent to search them is not limited to Texas. Virtually
everywhere police stops are counted and measured demographically,
black and/or Hispanic drivers are over-represented in those pulled
over and subsequently searched for contraband. The vast majority
of searches of drivers across ethnicities come up empty, and
statistics show that black and Hispanic drivers who
are searched are less likely to be carrying contraband than whites
who are similarly searched.

Stopping drivers to search for drugs and drug proceeds is much
cheaper than developing leads and building cases against large drug
organizations through buy-and-bust operations or long-term stings,
making interdiction through traffic stops all the more appealing.
For that reason, while the disparity in stops almost certainly
exists independent of asset forfeiture laws, increasing the use of
forfeiture will likely result in an increase in racial
profiling.

Due to the challenges of data collection and the lack of
transparency about collection practices, it is impossible to know
the full extent to which asset forfeiture drives aggressive
policing. But profit motives certainly can distort priorities,
perpetuating these disparities. One investigative report in
Tennessee uncovered that drug
interdiction task force officers were ten times more likely to
stop, search, and seize drivers on the westbound side of an
east-west highway. This seemingly innocuous detail is relevant
because the officers were apparently set up to catch the cash from
allegedly Mexican-connected drug couriers. That is, instead of
setting up on the eastbound lanes where they could try to catch the
drugs and guns before they got into the community, the police would
look for the cash after the transaction took place.
Waiting until the guns and drugs have entered the community to
interdict trafficking is the opposite of policing for public
safety; it is the definition of “policing for profit.”

Recognizing some of these problems, a growing number of state
legislatures have been trying to rein in civil asset forfeiture
abuse, curbing officers’ ability to seize property under state law
without a conviction, or limiting seizures to high dollar amounts
in order to protect
the state’s most vulnerable citizens from the arbitrary
confiscation of their property. But the new DOJ guidance provides
an end-run around some state limitations on asset forfeiture and
incentivizes departments with direct payments of cash.

Expanding the already profligate use of civil asset forfeiture
is a giant step in the wrong direction for effective criminal
justice policy. Indeed, civil asset forfeiture incentivizes police
abuse of innocent people — abuse that falls
disproportionately on the poor and racial minorities — and
undermines good police work and public safety. In light of today’s
news, Congress should move to end federal civil asset forfeiture
entirely and make sure that federal law enforcement officers secure
criminal convictions before seizing property from individuals they
suspect of criminal wrongdoing. “Innocent until proven guilty” is
the touchstone of the American criminal justice system. It’s about
time our government lived up to it.

Jonathan Blanks is a research associate at the Cato Institute’s Project on Criminal Justice and managing editor of PoliceMisconduct.net.

Why You Shouldn’t Knock ‘Sweatshops’ If You Care about Women’s Empowerment

Chelsea Follett

Factories producing Ivanka Trump-brand clothing have recently
drawn “sweatshop” accusations. Of course, the United
States had its own sweatshops once, often with worse conditions
than factories in poor countries today.

Those who imagine Industrial Revolution factory work in the
United States as a dark and oppressive moment in history might
benefit from reading the words of those who lived through it.
“Farm to Factory: Women’s Letters, 1830-1860,”
published by Columbia University Press, provides a collection of
first-hand accounts revealing a more nuanced reality.

The letters do indeed reveal abject misery, but much of that
misery comes from nineteenth-century farm life. To many women,
factory work was an escape from this backbreaking agricultural
labor. Consider this excerpt from a letter a young woman on a New
Hampshire farm wrote to her urban factory-worker sister in 1845.
(The spelling and punctuation are modernized for readability.)

Between my housework and dairying, spinning, weaving and raking
hay I find but little time to write … This morning I fainted away
and had to lie on the shed floor fifteen or twenty minutes for any
comfort before I could get to bed. And to pay for it tomorrow I
have got to wash [the laundry], churn [butter], bake [bread] and
make a cheese and go … blackberrying [blackberry-picking].

By contrast, cities often offered somewhat better living
standards. Far more women sought factory work than there were
factory jobs available.

Factory Work Could Mean Freedom

A closer look at the letters in the book reveals the incredibly
varied lives of the “factory girls.” Consider the life
of Delia Page. With a substantial inheritance, she was never in
need of money. But at age 18, Delia decided to move away from her
rural home and work in a factory in New Hampshire. She did that
despite the dangers of factory work. A mill in nearby
Massachusetts collapsed and burned, killing 88 people and seriously
injuring more than 100 others. Delia’s foster family
wrote to her about the tragedy and their fears for her wellbeing.
But she defiantly continued factory work for several years.

What led well-to-do Delia to seek out factory work in spite of
the danger and long hours? The answer is social independence. In
their letters, her foster family repeatedly urges her to break off
what they saw as an indecent affair with a scandal-ridden man,
implores her to attend church and subtly suggests she come home.
But by working in a factory, Delia was free to live on her own
terms. To her, that was worth it.

The unique story of Emeline Larcom also emerges from the
letters. Emeline’s background could not have been more
different from Delia’s. Her father died at sea, and her
mother, widowed with twelve children, struggled to support the
family. Emeline and three of her sisters found gainful employment
at a factory and sent money home to support their mother and other
siblings. Emeline, the oldest of the four Larcom factory girls,
essentially raised the other three. One of them, Lucy, went on to
become a noted poet, professor, and abolitionist. Her own memoirs
cast mill work in a positive light.

Of the diverse personalities captured in the letters, only one
openly despises her work in the mill. Mary Paul was a restless
spirit. She moved from town to town, sometimes working in
factories, sometimes trying her hand at other forms of employment
such as tailoring, but never staying anywhere for long. She loathed
factory work, but it enabled her to save up enough money to pursue
her dream: buying entry into a Utopian agricultural community that
operated on proto-socialist principles.

She enjoyed living at the “North American Phalanx” and working only three
hours a day—while it lasted. But as with all such
communities, it ran into money problems, exacerbated by a barn
fire, and she had to leave. She eventually settled down, married a
shopkeeper, and—her letters seem to hint—became
involved in the early “temperance” movement to ban
alcohol (another ultimately ill-fated venture).

Factory Work Is a First Step Towards a Better Future

Delia, Emeline, and Mary provide a glimpse of the different ways
that factory work affected women during the Industrial Revolution.
Wealthy Delia gained the social independence she sought and Emeline
was able to support her family. Even Mary, who detested factories,
was ultimately only able to chase her (ill-advised) dream through
factory work.

Although the Industrial Revolution is commonly vilified, it was
an important first step toward increasing women’s
socioeconomic mobility and ultimately brought about prosperity
unimaginable in the pre-industrial world. The pace of industrial
economic development may even be speeding up. In South Korea, Taiwan, Hong Kong,
and Singapore, the process of moving from sweatshops to First World
living standards took less than two generations as opposed to a
century in the United States.

Today, across the developing world, factory work continues to
serve as a path out of poverty and an escape from agricultural
drudgery, with particular benefits for women seeking economic
independence. In China, many women move on from factories to
white-collar careers or start their own small businesses. Very few choose to return to subsistence farming.

In poorer Bangladesh, factory work has increased
women’s educational attainment while lowering
rates of child marriage. The country’s garment industry has
also softened the norm of purdah, or seclusion, that traditionally
prevented women from working or even walking outside unaccompanied
by a male guardian.

Women factory workers are often thought of as “undifferentiated,
homogenous, faceless and voiceless” passive victims, but even
a cursory examination of their words and lives reveals unique
individuals with agency. Today, just as in the nineteenth century,
industrialization not only spurs economic development and reduces
poverty, but also expands women’s options.

Chelsea Follett is the managing editor of HumanProgress.org, a
project of the Cato Institute.

Unintended Impacts of Regulations on the Quality of Schooling Options

Corey A. DeAngelis

When the first random-assignment study ever to find a negative
effect from a voucher program was released more than two years ago,
a debate broke out over what role, if any, the Louisiana
Scholarship Program’s regulations played. Some argued that
Louisiana’s onerous regulatory environment — particularly its
open admissions requirement and state test mandate —
deterred better-performing private schools from participating in
the program. Others dismissed such claims, arguing instead
that such regulations were necessary to guarantee quality in the
long run.

The debate has reignited with last month’s release of the
third-year LSP reports, which found no statistically significant
difference between voucher students and the control group.
Louisiana’s superintendent of education, John White, argued that the results proved that concerns
about over-regulation were unfounded. Whereas “conservative
ideologues paraded around the idea that regulation is somehow
anathema to choice, and is driving away the elite schools that
otherwise would have magically served these kids better than the
schools that participated,” White argued that instead,
“it may very well be the regulation itself — the
accountability system — that is the thing that has promoted
the performance.”

In fact, the available evidence suggests that regulations did
indeed drive away higher-performing private schools. One of the
three reports released by the School Choice Demonstration Project
at the University of Arkansas addresses exactly this question.

These findings strongly
suggest that more onerous regulations are more likely to drive away
better schools.

In Supplying Choice, my colleagues and I
examined the quality levels of private schools that decided to
participate in voucher programs in Indiana, D.C., and Louisiana. We
found a consistent negative relationship between several proxies
for school quality and private schools’ likelihood of
participating in the voucher program. Moreover, we found that
private schools in D.C. and Louisiana, the two jurisdictions with
higher regulatory burdens, were less likely to participate in
voucher programs. While these findings are not conclusive, they do
offer compelling evidence that regulations deterred better schools
from participating in the voucher programs.

Theory

The theory is rooted in basic economics. Private school leaders
decide whether to partake in a given voucher program based on the
costs and benefits associated with participation. The benefit comes
from additional funding which is limited in Louisiana to a maximum
of 20 percent of total enrollment for private schools that have
been in operation for under two years. The costs associated with
participation come in the form of red tape. Participating private
schools in Louisiana must administer the state standardized test,
prohibit parental copay for families using vouchers, report
finances to the government, and surrender their admissions process
over to the state. As an American Enterprise Institute survey found, private schools are concerned
that such regulations could threaten their character or identity,
would force them to change what and how they teach, and bog them
down in paperwork. In other words, they feared that the regulations
would hamper their ability to carry out their educational
missions.

If the expected benefits exceed the expected costs, a given
private school will participate in the program. The schools most
likely to find that the additional benefits exceed the additional
costs are those that value financial resources more than they
value their autonomy. Of course, schools
desperate for enrollment and funding will have a stronger incentive
to accept the state requirements. Indeed, if a school is about to
close down due to financial constraints, it would likely choose to
participate regardless of the magnitude of the costs.

Consequently, we expected to find the strongest negative
association between quality and participation in the most-regulated program: Louisiana.
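
To make the cost-benefit logic concrete, here is a minimal sketch
in Python. The decision rule is the one described above; the dollar
figures and names are hypothetical, invented for illustration
rather than drawn from the report.

```python
# Hypothetical illustration of the participation decision described above.
# All figures are invented for the example, not taken from the LSP report.

def will_participate(voucher_revenue: float, compliance_cost: float,
                     autonomy_cost: float) -> bool:
    """A school joins the program only if expected benefits exceed costs."""
    return voucher_revenue > compliance_cost + autonomy_cost

# A school desperate for enrollment values the revenue highly and
# accepts the red tape.
print(will_participate(voucher_revenue=400_000, compliance_cost=50_000,
                       autonomy_cost=100_000))   # True: participates

# A high-demand school gains little revenue relative to the autonomy
# it would surrender.
print(will_participate(voucher_revenue=60_000, compliance_cost=50_000,
                       autonomy_cost=100_000))   # False: opts out
```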

Results

As shown in Figure 1 below, we observed that only a third of
eligible private schools in the state decided to participate in the
LSP, while between 70 and 78 percent participated in D.C. and
Indiana. This fits intuition, as the regulatory costs are highest
in the LSP.

In our main analyses, we used enrollment and tuition levels as
proxies for school quality. Economists would view tuition level as
the price of schooling and enrollment as the quantity demanded.
These two measures are the strongest measures of quality that
exist, as they capture the sum of all of the quality-based
decisions of individual parents.

If we believe price to be the most informative measure, we would
not want to control for anything in the analysis. Such an analysis
produces the expected result: higher quality private schools are
the least likely to participate in highly regulated voucher
environments.

As shown in Figure 2 below, a one-thousand-dollar increase in
tuition is associated with a 3.5-percentage-point decrease in the
likelihood of participating in the LSP. The association is not
statistically different from zero for Indiana, the least regulated
program, and is only marginally significant in D.C.

When we controlled for factors such as school racial
composition, grades served, and religiosity, the coefficient on
tuition became insignificant in Louisiana, but the coefficient on
enrollment remained highly significant. As shown in Table 5 of the
original report, a 100-student increase in enrollment was
associated with a 28 percent decrease in the likelihood of
participation in the LSP. The enrollment coefficient was not
statistically different from zero for D.C. or Indiana.
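
For readers who want to see the shape of such an analysis, the
sketch below runs a linear probability model on synthetic data. It
is only an illustration of the kind of regression described above,
with invented variable names and simulated coefficients; it is not
the study’s data or code.

```python
# Illustrative linear probability model on synthetic data. This sketches the
# kind of analysis described above; it is not the study's data or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
schools = pd.DataFrame({
    "tuition_thousands": rng.uniform(2, 12, n),     # quality proxy (price)
    "enrollment_hundreds": rng.uniform(0.5, 6, n),  # quality proxy (demand)
    "religious": rng.integers(0, 2, n),             # example control
})
# Simulate a world in which higher-quality schools participate less often.
p = np.clip(0.9 - 0.035 * schools["tuition_thousands"]
                - 0.05 * schools["enrollment_hundreds"], 0, 1)
schools["participate"] = rng.binomial(1, p)

X = sm.add_constant(
    schools[["tuition_thousands", "enrollment_hundreds", "religious"]])
result = sm.OLS(schools["participate"], X).fit(cov_type="HC1")  # robust SEs
print(result.summary())  # negative coefficients on both quality proxies
```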

These findings strongly suggest that more onerous regulations
are more likely to drive away better schools. Researchers and
policymakers should take them into account when considering what
sort of regulatory environment to construct for school choice
policies.

Even well-intended policies can produce negative consequences.
White and others like him no doubt have noble intentions when they
support imposing regulations on private schools. They want to make
it impossible, or at least very difficult, for disadvantaged
families to make bad choices. It remains possible that the
regulation-heavy approach they prefer will lead over time to
quality improvements among private schools that decide to
participate in choice programs. However, our research suggests that
the same regulations can also reduce the quality of educational
options available to children in need by leading some schools not
to participate at all.

Corey A. DeAngelis is an education policy analyst at the Cato
Institute’s Center for Educational Freedom.


This Group Hopes to Push America toward Regime Change in Iran

Ted Galen Carpenter

American policymakers and pundits have an unfortunate history of
embracing odious foreign political movements that purport to be
democratic. During the Cold War, embarrassing episodes included
Washington’s support for the Nicaraguan Contras and Jonas Savimbi’s
National Union for the Total Independence of Angola. The post-Cold
War era provides ample evidence that influential Americans have not
learned appropriate lessons from those earlier blunders. The
Clinton administration made common cause with the Kosovo Liberation
Army, which proceeded to commit numerous war crimes during—and
following—its successful war of secession against Serbia.
Both the Clinton and George W. Bush administrations allied with
Ahmed Chalabi’s Iraqi National Congress (INC). The INC’s false
intelligence regarding Saddam Hussein’s alleged weapons of mass
destruction, which the New York Times and other prominent
media outlets reflexively circulated, was one of the major
factors that prompted the United States to launch its ill-starred
military intervention in Iraq.

There is mounting danger that the Trump administration is
flirting with committing a similar blunder—this time in Iran. Secretary of State Rex Tillerson was
asked explicitly by Rep. Ted Poe whether the United States
supported a policy of regime change in Iran when he testified before
the House Foreign Affairs Committee in June 2017. Poe argued that
“there are Iranians in exile all over the world. Some are
here. And then there’s (sic) Iranians in Iran who don’t
support the totalitarian state.” Tillerson replied that the administration’s policy
toward Iran was still “under development,” but that
Washington would work with “elements inside Iran” to
bring about the transition to a new government. In other words,
regime change is now official U.S. policy regarding Iran.

President Trump should
learn from the follies of his predecessors who backed the agendas
of foreign groups that purported to be democratic but turned out to
be nothing of the sort.

That strategy entails numerous problems. An especially troubling
one is that the most intense opposition force (inside and
especially outside Iran) is the Mujahedeen Khalq (MEK). Although
Tillerson did not explicitly mention the MEK, any U.S. promotion of
dissidents would almost certainly have to include that faction.
More moderate reformists have repeatedly rejected an American
embrace, justifiably concerned that such an association would
destroy their domestic credibility. Indeed, a significant segment
of Iranian moderates endorsed President Hassan Rouhani and was a
major factor in his decisive reelection victory over a hard-line
opponent in the 2017 election.

The MEK’s history should cause any sensible U.S.
administration to stay very, very far away from that organization.
The MEK is a weird political cult built around the husband-and-wife
team of Massoud and Maryam Rajavi. It has been guilty of numerous
terrorist acts and was on the U.S. government’s formal list of
terrorist organizations until February 2012. The group did not even
originate as an enemy of Iran’s clerical regime. It began long before that regime came to power, and its
original orientation seemed strongly Marxist. The MEK was founded
in 1965 by leftist Iranian students opposed to the Shah of Iran,
who was one of Washington’s major strategic allies. And the United
States was very much in the MEK’s crosshairs during its early
years. During the late 1960s and throughout the 1970s, the MEK
directed terrorist attacks that killed several
Americans working in Iran.

The MEK’s worrisome track record has not deterred
prominent Americans from endorsing the organization. In the months
preceding the State Department’s decision to delist the MEK,
dozens of well-known advocates—primarily but not exclusively
conservatives—lobbied on behalf of the group. Vocal supporters included former CIA directors R. James
Woolsey Jr. and Porter Goss, former FBI director Louis J. Freeh, as
well as Tom Ridge and Michael Mukasey, both cabinet secretaries in
George W. Bush’s administration. Several members of Congress,
including Rep. Dana Rohrabacher, were also prominent advocates.
Rohrabacher stated confidently that the MEK seeks “a
secular, peaceful, and democratic government.” Other
proponents included former New York City Mayor Rudy Giuliani,
former House Speaker Newt Gingrich and Sen. John McCain. Gingrich
has been especially enthusiastic about the MEK over the years,
describing it as the vanguard of “a
massive worldwide movement for liberty in Iran.” More
recently, Gingrich showed up along with former Democratic senator
and vice-presidential nominee Joe Lieberman at a conference in
Paris to laud the MEK.

Such enthusiasm has increased since the MEK’s delisting as a terrorist
organization. The House Foreign Affairs Committee even invited
Maryam Rajavi to testify at a hearing on strategies for defeating
ISIS. The decision to give Rajavi a platform for her broader agenda
was not that surprising. Many of the committee’s members
(especially GOP members) are staunch advocates of a regime-change
strategy toward Iran. The MEK serves the same function for such
hawks as Chalabi and the INC did in the prelude to the U.S.
invasion of Iraq.

Americans have reason to be wary when prominent advocates of an
extremely hard-line policy toward Iran also want “vigorous
support for Iran’s opposition, aimed at regime change in
Tehran,” as former U.S. ambassador to the United Nations
John Bolton recommends. Given his vocal
cheerleading for the MEK in recent years, there is little doubt
that he is not referring to the moderate, anti-clerical
“Green coalition” inside that country, but to the
MEK.

Therein lies the principal danger of Tillerson’s embrace
of a regime-change strategy toward Iran. Granted, he referred to
U.S. support for peaceful regime change, but the MEK’s
American backers show no signs of making that distinction. The MEK
has spent hundreds of thousands of dollars cultivating their
support, and such gullible (or venal) Americans continue to tout
the organization as a genuine democratic movement with strong
support inside Iran. The extent of the financial entanglements is
deeply troubling. Many prominent American supporters have accepted
fees of $15,000 to $30,000 to give speeches to the group. They also
have accepted posh, all-expenses-paid trips to attend MEK events in
Paris and other locales. Former Pennsylvania Gov. Ed Rendell
confirmed in March 2012 that the MEK had paid him a total of
$150,000 to $160,000, and it appeared that other
“A-list” backers had been rewarded in a similar fashion. Needless to say,
accepting such largesse from a highly controversial foreign
political organization—and one that was still listed as a
terrorist organization at the time—should raise justifiable
questions regarding the judgment, if not the ethics, of the
recipients.

U.S. opinion leaders are playing a dangerous and morally
untethered game by flirting with the likes of the MEK. Daniel
Larison, a columnist for the American Conservative,
recently highlighted the problem with their approach. “I have
marveled at the willingness of numerous former government
officials, retired military officers, and elected representatives
to embrace the MEK,” he wrote. “There’s no question
that they are motivated by their loathing of the Iranian
government, but their hostility to the regime has led them to
endorse a group that most Iranians loathe.” The last point is
not mere speculation. The MEK aided Saddam Hussein’s war
against Iran in the 1980s, and even Iranians who detest the
clerical regime regard the MEK as a collection of odious
traitors.

President Trump should learn from the follies of his
predecessors who backed the agendas of foreign groups that
purported to be democratic but turned out to be nothing of the
sort. There are ample warning signs about the real nature of the
MEK. The administration needs to avoid that organization like the
plague.

Ted Galen Carpenter, a senior fellow at the Cato Institute and a
contributing editor at the National Interest, is the author of ten
books, the contributing editor of ten books, and the author of more
than 650 articles on defense, foreign policy, and civil-liberties
issues.


The Market Doesn’t Corrupt Morals – Socialism Does

Ryan Bourne

Delivering the Keith Joseph Memorial Lecture last week, Matt
Ridley highlighted a common, yet unfounded, attack on free markets:
that they encourage us to be greedy and selfish, and erode moral
values.

This has been a frequent lament from the Left, who pivoted in
the 1980s from claiming Margaret Thatcher and Ronald Reagan’s
free market agenda would slow growth to saying that it encouraged
us to want too much.

The US philosopher Michael Sandel has argued that the infusion
of markets into many areas of life has led to the crowding out of
virtues such as altruism, generosity and solidarity. The Pope has
said that “libertarian individualism … minimises the
common good”. Bizarrely, even UK Conservatives appear to
agree. The 2017 Conservative manifesto declared: “We do not
believe in untrammelled free markets. We reject the cult of selfish
individualism”.

There appears to be a
robust and strong relationship between levels of prosperity and
economic freedom.

The weird thing about all these assertions is that no hard
evidence is ever offered to prove that free markets encourage
greed. That may be because the evidence — and logic —
suggest that the opposite is true.

First, let’s state an obvious truth. There appears to be a
robust and strong relationship between levels of prosperity and
economic freedom. Natural experiments, such as East and West
Germany, North and South Korea, and Hong Kong and mainland China,
have shown that market economies tend to be much more prosperous
than non-market economies. This results in more resources for
compassionate causes, whether through individual activity or
socialised through the collection of tax revenue. It can be said
quite clearly that free market economies facilitate the opportunity
to be more compassionate. As a British Prime Minister once said, “no one would remember
the Good Samaritan if he’d only had good intentions; he had
money as well.”

But opportunities need not be taken, of course. So what does the
empirical literature suggest on whether markets facilitate greed
and lead to selfish immoral behaviour?

In a famous paper in 2013,
Armin Falk and Nora Szech purported to show that market norms did
in fact damage us. They ran experiments involving cash, giving
participants the option of paying to save a mouse from being
killed. They found that people were more likely to
enable the killing when the decision came about as a result of
bargaining between buyers and sellers (which made the mouse a third
party), rather than someone making an individual decision based on
the mouse-cash trade-off alone. They concluded that “market
interaction displays a tendency to lower moral values, relative to
individually stated preferences,” perhaps because of the
ability to spread the guilt between trading parties, or because of
the “competition” for money.

This study went around the world as “proof” that
markets eroded our humanity. But closer examination of the results
suggested something quite different. In this game, there was no
clear good being traded, except the abstract thought of a mouse
dying. In the real world, most transactions resemble the
individual judgment rather than the bargaining. We walk into a
store or market as price-takers and decide whether or not to buy a
product. This suggests that the interpretation given by Falk and
Szech could be the opposite of what the results actually show. As Breyer and
Weimann concluded in their critique of the original paper,
“in typical market situations, moral norms play a more
prominent role than in non-market bargaining situations” that
tend to be zero-sum.

This alternative interpretation is backed up by the experimental
work of Herbert Gintis, who has analysed the behaviours of 15 tribal
societies from around the world, including “hunter-gatherers,
horticulturalists, nomadic herders, and small-scale sedentary
farmers – in Africa, Latin America, and Asia.” Playing a host
of economic games, Gintis found that societies exposed to voluntary
exchange through markets were more highly motivated by
non-financial fairness considerations than those which were not.
“The notion that the market economy makes people greedy,
selfish, and amoral is simply fallacious,” Gintis
concluded.

This makes sense. Considering the broad sweep of history, one
can observe that the rise of market economies and the greater
material wealth they have brought has largely coincided with a
greater tolerance of others, including less willingness to exploit.
As Gintis again summarises, “movements for religious and
lifestyle tolerance, gender equality, and democracy have flourished
and triumphed in societies governed by market exchange, and nowhere
else.”

In other words, we might expect greed, cheating and intolerance
to be more prevalent in societies where individuals can only fulfil
selfish desires by taking from, overpowering or using dominant
political or hierarchical positions to rule over and extort from
others. Markets actually encourage collaboration and exchange
between parties that might otherwise not interact. This
interdependency discourages violence and builds trust and
tolerance.

Now, at this stage, I’m sure that many people who consider
themselves moderate socialists would object. Of course, they might
say, there is a role for markets. But modern economies are mixed,
comprising some relatively free markets and other areas with
extensive government intervention.

Most countries have different cultures too, so comparing whether
more “free market” or “socialistic”
countries are likely to promote and encourage greed is very
difficult. Gintis’s experiments are interesting, but do they
really inform us about whether shifting the balance from markets
towards state provision would lead to negative effects in terms of
a less trusting or more greedy society?

Well, we cannot say for sure. But sometimes natural experiments
arise which give us suggestive insights, and the most obvious
recent example was the split of Germany into a broadly capitalist
West and the socialist East.

In a 2014 paper,
economists tested Berlin residents’ willingness to cheat in a
simple game involving die rolls, whereby self-reported scores
could lead to small monetary pay-offs. Participants presented
passports and ID cards to the researchers, which allowed them to
assess their backgrounds. The results were clear: participants from
an East German family background were far more likely to cheat than
those from the West. What is more, the “longer individuals
were exposed to socialism, the more likely they were to
cheat.”
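
The inference behind such die-rolling designs is easy to sketch: no
single self-report can be proven false, but a group’s reports can
be tested against the fair-die distribution. The snippet below is a
generic illustration of that logic on simulated data, not the 2014
paper’s actual design or code.

```python
# Generic sketch of die-rolling honesty experiments: individual lies are
# undetectable, but a group's self-reports can be compared with a fair die.
# Simulated data for illustration only; not the 2014 paper's code.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(42)
n = 500
true_rolls = rng.integers(1, 7, n)        # what participants actually roll

# Suppose 20 percent of participants report a 6 (the top payoff) regardless.
cheaters = rng.random(n) < 0.20
reported = np.where(cheaters, 6, true_rolls)

observed = np.bincount(reported, minlength=7)[1:]  # counts of faces 1..6
stat, pvalue = chisquare(observed)  # tests against a uniform fair die
print(f"chi-square = {stat:.1f}, p = {pvalue:.2g}")  # small p: excess sixes
```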

All of which suggests that the conventional trendy wisdom is
wrong. Free markets do not make us greedy and immoral. But
embracing socialism may well do.

Ryan Bourne holds the R. Evan Scharf Chair for the Public
Understanding of Economics at the Cato Institute.