Joe Alterman’s Timeless Jazz

Nat Hentoff and Nick Hentoff

Although I have been writing about jazz for over 70 years, I don’t like to think of myself as a jazz historian. Being described as a “historian” implies that what you are writing about is dead and in the past. When it comes to writing about jazz, nothing could be further from the truth.

I first came to New York City in 1953 as a civil war was raging among the so-called jazz critics. The traditionalists, known then as “moldy figs,” thought that jazz had died if not with Louis Armstrong, then not too long after. The music of Parker, Dizzy and the young Miles Davis was not even considered jazz at all by many of the writers.

The “moldy figs” are still among us today, applying the same labels and temporal limits to jazz. The only difference is that they now insist that real jazz died with Parker, Dizzy and the elderly Miles Davis.

While it is true that most of the musicians I used to write about in the ’50s and ’60s are gone, real jazz is timeless. It lives on in new musicians, like my young friend Joe Alterman, an accomplished jazz pianist who is making his debut performance at NYC’s iconic Birdland jazz club on April 7.

In a 2013 profile of Joe for The Wall Street Journal, I recalled my reaction upon hearing his music for the first time: “Talk about the joy of jazz! It’s a pleasure to hear this music.”

In the early ’50s I was a regular at the original Birdland, located on Broadway, just north of West 52nd Street in Manhattan; “the jazz corner of the world,” as the neon sign reminded me each time I entered its doorway and descended the steep staircase. All the musicians who are now considered jazz legends played there: Charlie Parker, Dizzy Gillespie, Thelonious Monk, Miles Davis, Lester Young, Count Basie, Oscar Peterson and Duke Ellington, among others.

Alterman would have been able to hold his own jamming with any of them.

“Joe Alterman has tremendous taste and a passionate respect for swing and space,” Marc Myers wrote in a JazzWax.com review of Alterman’s 2012 recording “Simple Life.”

“His touch on the keyboard is reminiscent of pianists from earlier years who listened carefully, felt expressively and actually cared about what the listener heard. Joe is a remarkable swinger and poet.”

Alterman, a white kid from Atlanta, graduated with a bachelor’s and a master’s degree in music from New York University. He has been mentored by the 81-year-old black saxophonist Houston Person, whom he met while Person was teaching a master class at NYU. According to Person, “Joe has a great sense of what is most meaningful in the history and tradition of our music, and a real solid musical vision of where to take it.”

Person, who played on four of the tracks on “Simple Life,” was also the producer of Alterman’s latest recording, “Georgia Sunset,” which was selected as a Downbeat Editor’s Pick and reached No. 6 on the JazzWeek radio charts. All of Alterman’s recordings, along with his calendar of appearances, can be found on his website: JoeAltermanMusic.com.

Alterman has already headlined some impressive jazz venues, but his debut at Birdland has particular significance for him. In 2006 Joe was still a senior in high school when he and his father flew to New York City to see Oscar Peterson at Birdland, in what turned out to be the legendary jazz pianist’s final New York appearance. It was a transformative experience.

“I’ll never forget how silent and warm the entire room felt as soon as the lights dimmed and the announcer came on the speakers,” Alterman recalled.

“From the moment Mr. Peterson emerged from backstage, his enormous charisma and aura seemed to envelop the whole room. I’d never been around such a powerful presence before in my life. Ever since that evening, it’s been a dream of mine to perform at Birdland. For a long time I figured it was impossible. How could I possibly perform in the same room as the great Oscar Peterson? And now, that dream is coming true.”

If you are in NYC, and can catch Alterman’s gig on April 7, expect to hear the ghosts of the original Birdland on that stage. After the lights dim, and you hear the announcer on the speakers, you will be listening to the timeless personification of the past, present and future of jazz.

Nat Hentoff is a nationally renowned authority on the First Amendment and the Bill of Rights. He is a member of the Reporters Committee for Freedom of the Press and a senior fellow at the Cato Institute. Nick Hentoff is a criminal defense and civil liberties attorney in New York City.

The Unpredictable Trump Doctrine

Emma Ashford

Donald Trump has finally given us greater insight into his approach to foreign policy. Last week, he not only conducted interviews with the Washington Post and New York Times, but also revealed his long-promised list of foreign policy advisers and addressed the annual AIPAC conference. His remarks led some to note that a restrained or realist worldview was implicit in Trump’s statements.

Unfortunately for those who seek a more restrained foreign policy, there is little reason to celebrate. Even a stopped clock is right twice a day. And as the man’s own remarks show, the Trump Doctrine isn’t actually about restraint; it’s about unpredictability. And there’s every reason to believe that is exactly what the man would deliver as president.

In a political landscape dominated by neoconservative and hawkish candidates, some of Trump’s statements are certainly appealing. He admits that the invasion of Iraq was a mistake, a view now shared by a majority of Americans. He decries the costs of nation-building and consistent military intervention abroad.

Trump has also criticized American military spending to defend U.S. allies in NATO and elsewhere around the world. The United States pays the lion’s share of both NATO’s budget and its operational costs. It is little wonder, perhaps, that Trump’s point resonates with many Americans who wonder why they should be picking up the tab for some of our wealthiest allies.

Yet though Trump sometimes advocates more restrained foreign policy ideas, he frequently also expresses extremely hawkish ones. In his AIPAC speech, Trump pledged to prevent regional aggression by Iran and promised to dismantle the Obama administration’s nuclear deal. He promises to “knock the hell out of ISIS,” and has proposed sending 20,000 to 30,000 troops to fight the group.

Other statements simply make no sense. Trump has repeatedly called for U.S. troops to seize Middle Eastern oil by force, a proposal that would require either a major permanent military occupation, or some miraculous advance in drilling technology. He appears to believe that hostages formed a part of the Iranian nuclear deal. And he refuses to answer questions on whether he would use nuclear weapons against ISIS.

Indeed, it is nearly impossible to tell whether he actually believes these statements, or is simply monumentally ill-informed. Based on his comments to the Washington Post, Trump is apparently unaware of European sanctions on Russia, of the fact that Iran and ISIS oppose each other, and believes that America’s GDP is “essentially zero.”

If we step back from substantive issues, however, another pattern emerges: unpredictability. Trump has flip-flopped on issues ranging from Syria to Afghanistan to visa policy. When confronted with these inconsistencies, he has denied his prior comments, obfuscated, and even praised his own flexibility.

Unlike many politicians who moderate between the primary and general election, Trump actually touts his unpredictability as a foreign policy virtue. As early as June last year, Trump promised that he had a secret plan to defeat ISIS, which he could not reveal for fear of giving too much away. And in the Post interview, Trump similarly refused to detail his strategy for dealing with China, arguing that other countries cannot know what he will do once he is president.

As a foreign policy doctrine, this is highly problematic. For voters, the uncertainty cuts both ways: more hawkish Americans cannot tell whether President Trump would withdraw from NATO or strike a grand bargain with Russia, while restraint-minded Americans cannot tell whether he would commit us to further Middle East wars.

At the same time, other countries are watching and worrying, unsure what sort of foreign policy will emerge from a Trump White House. Such unpredictability, combined with Trump’s erratic and thin-skinned personality, is surely a recipe for unexpected conflict.

Nor do Trump’s newly announced advisers offer much reassurance. Campaign advisers often act as a signal, illustrating the kind of foreign policy a candidate is expected to pursue. But Trump’s advisers are all over the map, including two minor Bush administration officials, a recent graduate and energy consultant, and a Lebanese-born pundit with ties to militia groups in that country.

Not only do Trump’s advisers present no clear picture of whether or not he would pursue a more restrained foreign policy, the list is so short and so scattershot that it seems likely Trump is still having substantial difficulties attracting experienced foreign policy advisers.

Unfortunately, greater exposure to Trump’s ideas has not substantially increased our understanding of his foreign policy views. His consistent unpredictability is problematic for the effective and rational conduct of foreign policy, even before you factor in the odious comments on Muslims and Mexicans or his incitement of political violence.

So when Trump says something you like on foreign policy, remember that tomorrow he will most likely change his mind. For all our disagreements, hawks and doves can certainly agree how dangerous such unpredictability would be.

Emma Ashford is a visiting research fellow at the Cato Institute.

Is it Time for America to Quit NATO?

Ted Galen Carpenter

NATO will celebrate its sixty-seventh anniversary in April. Instead of being an occasion for the usual expression of mind-numbing clichés about the alliance’s enduring importance both to U.S. security and world peace, it should become an opportunity for a long overdue assessment of whether the NATO commitment truly serves America’s best interests in the twenty-first century. There is mounting evidence that it does not.

The creation of NATO in 1949 represented the most explicit break with America’s traditional policy of avoiding foreign alliances and generally charting a noninterventionist course. Being drawn into two major wars in little more than a generation—and especially the psychologically devastating attack on Pearl Harbor—had struck a fatal blow to a noninterventionist foreign policy. Even prominent noninterventionists such as Sen. Arthur Vandenberg (R-MI) now conceded that the world had changed and that “isolationism” (a poisonous misnomer) was no longer an appropriate policy for the United States. Joining NATO, the ultimate “entangling alliance” with European powers, confirmed the extent of the shift in Washington’s policy and American attitudes.

It was hard to dispute the argument that the world had changed. There was no semblance of a European or a global balance of power in the late 1940s or early 1950s. Central and Eastern Europe were under the domination of the Soviet Union, a ruthless totalitarian power that posed an expansionist threat of unknown dimensions. Western Europe, although mostly democratic, was badly demoralized from the devastation of World War II and the looming Soviet menace. Even a prominent noninterventionist such as Sen. Robert A. Taft (R-OH) was not prepared to cut democratic Europe loose in such a strategic environment. He merely wanted to offer the Europeans a security guarantee without having the United States take the fateful step of joining NATO and assuming virtually unlimited leadership duties for an unknown length of time. Subsequent developments validated his wariness.

NATO partisans insisted that the world had changed with World War II and that the new paradigm required extensive U.S. leadership. The problem with their analysis, especially as the decades have passed, is that they seem to assume that change is a single major event and everything thereafter operates within the new paradigm. But that assumption is totally false. Change is an ongoing process. Today’s Europe is at least as different from the Europe of 1949 as that Europe was from pre–World War II Europe. Yet the institutional centerpiece—NATO—and much of the substance of U.S. policy remain the same.

The entire security environment is different. Instead of being a collection of demoralized, war-ravaged waifs, the European democracies are now banded together in the European Union, with a population and a collective gross domestic product larger than those of the United States. Although they are troubled by the turbulence in the Middle East and the occasional growls of the Russian bear, they are capable of handling both problems. Indeed, Vladimir Putin’s Russia is a pale shadow of the threat once posed by the Soviet Union. The European Union has three times the population of Russia and an economy nearly ten times the size of Russia’s.

The primary reason that the EU countries have not done more to manage the security of their own region is that the United States has insisted on taking the leadership role—and paying a large portion of the costs. As a result, the United States spends nearly 4 percent of its GDP on the military; for NATO Europe, the figure is barely 1.6 percent. That disparate economic burden is only one reason why we need to conduct a comprehensive review of whether the NATO commitment serves America’s interests any longer, but it is an important one.

The European security environment has changed in another significant way since NATO’s creation. During the early decades of the alliance, Washington’s goal was to preserve the security of major players, such as West Germany, Italy, France and Britain. Since the collapse of the Soviet Union in 1991, though, U.S. leaders have pushed for the expansion of the alliance into Central and even Eastern Europe, adding marginal allies with the casual attitude that some people add Facebook friends.

But unlike Facebook, military alliances are deadly serious enterprises. NATO, with its Article 5 commitment pledging that an attack on one member will be considered an attack on all, could easily entangle the United States in an armed conflict that has little or nothing to do with America’s own security. The absurdity of NATO in the twenty-first century may have reached its zenith in February 2016 when, with Washington’s enthusiastic backing, the alliance admitted tiny Montenegro as a member. In the post–World War II decade, when the United States broke with its traditional noninterventionist policy, proponents of the new approach argued that alliances would enhance America’s security. How a microstate like Montenegro could augment America’s already vast military power and economic strength would seem to be a great mystery.

But at least Montenegro does not have any great-power enemies. The same could not be said of three other small members, the Baltic republics of Estonia, Latvia and Lithuania, which were admitted a decade ago. They are on frosty terms with Russia, and a leading think-tank study indicates they are so vulnerable that Russian forces could overrun them in a matter of days. So, we have gone from an alliance commitment to protect crucial economic and strategic players against an ominous totalitarian superpower to putting America’s credibility, and perhaps its very existence, on the line to protect a collection of tiny players on Russia’s border. Indeed, the buildup of U.S. forces on Russia’s western frontier has contributed significantly to the deterioration of bilateral relations.

It would seem that NATO partisans are the ones who don’t appreciate the role of change in international affairs. To them, preserving the alliance, not maximizing America’s security and well-being, is the highest priority. We should not accept such static thinking. Sixty-seven years is a long time for any policy to remain intact and try to remain relevant. America’s NATO policy is increasingly failing the most basic tests of relevance and prudence. It is well past time to conduct a comprehensive review and consider even the most drastic option: U.S. withdrawal from the alliance.

Ted Galen Carpenter, a senior fellow at the Cato Institute and a contributing editor at the National Interest, is the author of ten books on international affairs, including several books on NATO.

Obamacare Again?

Ilya Shapiro and Josh Blackman

In what has become a spring tradition, Obamacare returns to the Supreme Court this month, the fourth time in five years. Fortunately for the religious nonprofits challenging the law’s contraceptive mandate—including the Little Sisters of the Poor, a monastic order that cares for the impoverished elderly—the results of the Court’s second and third encounters with the act can together answer their prayer for relief.

Two years ago, in Burwell v. Hobby Lobby, the justices ruled 5-4 that the government could not force owners of closely held corporations to provide morally objectionable contraceptives to employees. (Hobby Lobby’s owners believe devices that can prevent the implantation of fertilized eggs, such as morning-after pills and IUDs, violate the Christian prohibition on abortion.) The decision was based on the Religious Freedom Restoration Act (RFRA), which bars the government from imposing a “substantial burden” on religious liberty unless it’s “the least restrictive means” of advancing a “compelling government interest.”

Then last year, in King v. Burwell, the Court refused to defer to the IRS’s reading of the phrase “established by the State” in determining whether Obamacare’s tax credits applied to plans bought through federal, not just state-created, exchanges. Chief Justice John Roberts’s majority opinion declared the bureaucracy lacked the requisite authority and “expertise” to interpret this “central” part of the act. Administrative agencies cannot give themselves the power to answer questions of profound “economic and political significance.” (In its 6-3 decision, the Court found another way to rule in favor of the administration.)

What do these cases have in common? They demonstrate that it’s Congress’s duty to craft delicate religious accommodations to protect conscience. The bureaucracy simply doesn’t have the ability—meaning both authority and know-how—to create legal rules in this area.

The Little Sisters of the Poor, whose case has been consolidated with several others under Zubik v. Burwell and will be heard March 23, argue that the Obama administration’s “accommodation” still violates their religious free exercise. Houses of worship are exempt from the contraceptive mandate, but nonprofits the government considers insufficiently religious to merit that exemption—such as educational institutions and social-service providers—must structure their insurance coverage in a way that still fulfills the mandate. So another RFRA battle looms, which in the absence of Justice Antonin Scalia seems destined for a 4-4 deadlock that’s not just unsatisfying but impracticable: Given the disparate lower-court rulings that would stand with a tie, the mandate would survive in some parts of the country but not others.

Conveniently, there’s an alternate argument, based on the Hobby Lobby and King rulings, that could command a majority opinion: The agencies lack both the expertise and power to exempt some religious groups while forcing others—deemed “less” religious—to be complicit in what they consider sin. By rejecting this bureaucratic assertion of executive authority, Zubik can thus be resolved without further politically fraught haggling over RFRA.

To better understand this elegant solution that sidesteps the culture-war debate over reproductive rights and what constitutes an abortifacient, let’s step back and look at the history of the mandate at issue.

Congress didn’t actually enact a contraceptive mandate. Obamacare’s statutory text only requires that insurance cover, “with respect to women, such additional preventive care … as provided for in comprehensive guidelines supported by the Health Resources and Services Administration.” Congress did not define what constitutes “preventive care.” That subsidiary agency within the Department of Health and Human Services recommended that “preventive care” be interpreted to include all federally approved contraceptives. HHS agreed.

Facing a wave of public outrage, HHS belatedly acknowledged that its interpretation would force millions of religious believers to violate the teachings of their various faiths. In response, it worked with the Departments of Labor and Treasury to adjust the relevant regulations. They exempted certain religious employers—houses of worship and their auxiliaries—from the mandate altogether. Religious nonprofits the agencies deemed insufficiently religious to qualify for the exemption would receive an “accommodation” allowing them to discharge the mandate another way: They could notify either HHS or their insurance companies of their objection, and insurers would then offer contraceptive coverage directly to their employees at no cost.

HHS doesn’t say that RFRA compels the exemption or the alternative-compliance mechanism. Instead, it asserts that the relevant Obamacare provisions give the agency the authority to decide which religious groups should be exempted and which “accommodated.” Still, the government concedes that the accommodation imposes at least a “minimal” burden on religious free exercise.

The alternative-compliance regulation, however, is not authorized by the text of Obamacare. No provision of that statute empowers any administrative agency to distinguish among religious nonprofits, exempting some while burdening others. The statute doesn’t authorize HHS or any other department to burden the free exercise of anyone. To paraphrase Chief Justice Roberts’s opinion in King: “It is especially unlikely that Congress would have delegated this decision,” without clear statutory guidance, to an agency that “has no expertise in crafting” religious accommodations. Or as Justice Anthony Kennedy wrote a decade ago in Gonzales v. Oregon, in which the Justice Department attempted to trump a state drug-dispensing law: “The idea that Congress gave the [executive branch] such broad and unusual authority through an implicit delegation is not sustainable.”

The Obama administration’s justifications for discriminating among religious groups reflect its unprecedented home-brewed approach to protecting religious exercise. The agencies concocted an exemption for churches but not associated religious organizations based on the assertion that employees of the latter are “less likely” than those of the former “to share their employer’s … faith.” That HHS refused to exempt people who work for the Little Sisters of the Poor—a group of nuns who vow obedience to the pope!—illustrates how far out of its league it was in evaluating religiosity.

Congress has expressly exempted nonprofits, including all the Zubik plaintiffs, from the antidiscrimination provisions of federal employment law. The Little Sisters of the Poor can hire exclusively people of their own faith. Yet administrative agencies, with no legal basis, issued a blanket judgment that all religious nonprofits have employees less likely to share their employers’ religious beliefs. (At the same time, they removed the regulatory requirement that houses of worship primarily employ people who share their faith to avail themselves of the exemption.) There was not even an option for a case-by-case judgment.

Such haphazard and unauthorized guesswork by anonymous civil servants, in the face of longstanding congressional policy to the contrary, cannot justify an infringement of religious freedom. That HHS, Labor, and Treasury’s rulemaking was premised not on health, labor, or financial criteria, but on the departments’ subjective evaluation of which employees more closely adhere to the religious views of their employers, confirms that the authority claimed by these agencies is, to again quote Gonzales v. Oregon, “beyond [their] expertise and incongruous with the statutory purposes and design.”

Earnest and profound questions regarding “the mystery of human life,” as the Supreme Court has discussed in its abortion jurisprudence, are the quintessential issues of “significance” that the Constitution does not intend agencies to resolve absent clear delegation. The administration’s attempt to force religious nonprofits to violate religious teaching regarding the start and nature of human life lays claim to an extravagant statutory power affecting fundamental liberties—one that Obamacare simply does not grant.

The combined holdings of Hobby Lobby and King present a result that most of the justices should be able to support: The administrative state overstepped its bounds, and religious nonprofits deserve at least the same exemption that many for-profit employers now enjoy. In addition to avoiding the ideologically charged battle over RFRA, this alternate path will allow the Court to set down an important limitation on executive power that will bind the next president, whoever he or she may be.

Ilya Shapiro is a senior fellow in constitutional studies at the Cato Institute. Josh Blackman is a constitutional law professor at South Texas College of Law in Houston. They filed a brief on Cato’s behalf supporting the plaintiffs in Zubik v. Burwell.

Is Cruz and Trump’s Islamophobia the New McCarthyism?

Patrick G. Eddington

If there was any remaining doubt that the GOP will nominate an overtly anti-Muslim candidate for president, the front-runners dispelled it in their responses to this week’s horrific ISIS-inspired attacks in Belgium.

On his Facebook page, Senator Ted Cruz (R-Texas) said, “We need to empower law enforcement to patrol and secure Muslim neighborhoods before they become radicalized.”

His chief rival, Donald Trump, not only endorsed the Cruz proposal but reiterated his own intention to bring back the use of torture should he be elected.

Speaking of captured Belgian attacker Salah Abdeslam, Trump told CNN’s Wolf Blitzer, “Well you know he may be talking, but he’ll talk a lot faster with the torture.”

The Cruz proposal to effectively “ghetto-ize” Arab- and Muslim-American communities into police and surveillance-saturated zones seems lifted from some dystopian fiction. But instead of a fictionalized due process-free United States in which Arab- and Muslim-Americans are presumed to be terrorists until proven otherwise, Cruz would bring that nightmare to life for over 3.5 million Arab- and Muslim-Americans.

And a President Trump would ensure that the scenario’s torture scenes were anything but make-believe. Given how readily federal agencies went along with torture under the Bush administration—despite its clear prohibition under U.S. and international law—Trump’s threat is a credible one.

Indeed, the detention of U.S. citizens by the American military for alleged terrorism involvement was made far easier under a 2012 change in federal law.

Given Trump’s strongly stated intention to keep the U.S. military prison at Guantánamo Bay, Cuba, open indefinitely and send even more alleged or actual ISIS-inspired individuals there, Arab- and Muslim-Americans may well find themselves facing the kind of U.S. government-sponsored discrimination and forced segregation suffered by Japanese-Americans in World War II.

And other Americans who previously had ties to Arab- and Muslim-American groups may sever those links out of a fear of being swept up in government dragnets aimed at members of those communities.

The Islamophobia-baiting contest that Trump and Cruz are currently engaged in echoes the worst excesses of the McCarthy-era anti-Communist witch hunts, albeit this time with overtly racist and ethnic overtones. What makes their proposals all the more chilling is that under the Obama administration, the governmental machinery for effectuating a new, ethnocentric form of McCarthyism has already been slowly taking shape.

Couched under the very reasonable-sounding label of “countering violent extremism” (CVE), these programs emphasize identifying and ferreting out alleged political extremists within the Arab- and Muslim-American community. Since at least 2011, at the direction of the White House, the Departments of Justice, Homeland Security and State have been spending tens of millions of dollars on CVE programs, at home and abroad.

However, the very concept of CVE is based on two false premises: that there are discrete, identifiable indicators of radicalization, and that those signs are found most often among Arabs/Muslims. As multiple, published, peer-reviewed studies have found, there is no “conveyor belt to radicalization” among members of those ethnic and religious groups.

Almost a decade ago, the British Security Service (MI5) issued a report that analyzed “several hundred individuals known to be involved in, or closely associated with, violent extremist activity” and concluded that those who ultimately became terrorists “are a diverse collection of individuals, fitting no single demographic profile, nor do they all follow a typical pathway to violent extremism.” The same holds true in the Arab and Muslim world as a whole, according to a 2013 United States Institute for Peace report on violent extremism.

Despite those facts, the FBI recently launched a CVE-related website that embraces discredited ideas about how and why people engage in politically motivated terrorism.

For example, it flags anyone “spending a lot of time reading violent extremist information online, including in chat rooms and password-protected websites”—a prime activity of counterterrorism researchers. Or someone “using several different cellphones and private messaging apps”—in an age when most Americans already own multiple electronic devices loaded with multiple messaging apps.

Also included are people “talking about traveling to places that sound suspicious”—a vague and nonsensical extremism indicator.

Another group of potential extremists consists of those “researching or training with weapons or explosives”—activities engaged in by law-abiding gun owners, those in the shooting sports, state and local first responders who deal with explosives or hazmat incidents, and chemistry students and teachers, among others.

Also potentially on the pathway to extremism are people “studying or taking pictures of potential targets (like a government building)”—something tourists visiting Washington, D.C., do every day.

Other potential “terrorists” include those “looking for ways to disrupt computers or other technology”—the very activities that privacy/civil liberties analysts, security researchers and members of the public concerned about the security of their personal information engage in to prevent Americans from becoming victims of identity theft or other kinds of cybercrime.

Discredited, constitutionally illegitimate counterterrorism proposals, backed by millions in taxpayer dollars and enabled by multiple federal departments with the power to detain—and potentially torture—alleged terrorism suspects at the direction of the president of the United States: this is the scenario that Election 2016 has made possible.

Patrick Eddington is a policy analyst in homeland security and civil liberties at the Cato Institute. His most recent project is an interactive timeline of political surveillance and repression in the United States over the past century.

Millennials Like Socialism — Until They Get Jobs

Emily Ekins

Millennials are the only age group in America in which a majority views socialism favorably. A national Reason-Rupe survey found that 53 percent of Americans under 30 have a favorable view of socialism compared with less than a third of those over 30. Moreover, Gallup has found that an astounding 69 percent of millennials say they’d be willing to vote for a “socialist” candidate for president — among their parents’ generation, only a third would do so. Indeed, national polls and exit polls reveal about 70 to 80 percent of young Democrats are casting their ballots for presidential candidate Bernie Sanders, who calls himself a “democratic socialist.”

Yet millennials tend to reject the actual definition of socialism — government ownership of the means of production, or government running businesses. Only 32 percent of millennials favor “an economy managed by the government,” while, similar to older generations, 64 percent prefer a free-market economy. And as millennials age and begin to earn more, their socialistic ideals seem to slip away.

So what does socialism actually mean to millennials? Scandinavia. Even though countries such as Denmark aren’t socialist states (as the Danish prime minister has taken great pains to emphasize) and Denmark itself outranks the United States on a number of economic freedom measures such as less business regulation and lower corporate tax rates, young people like that country’s expanded social welfare programs.

Coming of age during the Great Recession, millennials aren’t sure if free markets are sufficient to drive income mobility and thus many are comfortable with government helping to provide for people’s needs. Indeed, a Reason-Rupe study found that 69 percent of millennials favor a government guarantee for health insurance and 54 percent support a guarantee for a college education. Perhaps most striking is that millennials favor a bigger government that provides more services — 52 percent of them do, compared with 38 percent of the nation overall.

So, will it last? Are millennials ushering in a sea change of public opinion? Do they signal the transformation of the United States into a Scandinavian social democracy?

It depends. There is some evidence that this generation’s views on activist government will stick. However, there is more reason to expect that support for their Scandinavian version of socialism may wither as they age, make more money and pay more in taxes.

The expanded social welfare state Sanders thinks the United States should adopt requires everyday people to pay considerably more in taxes. Yet millennials become averse to social welfare spending if they foot the bill. As they reach the threshold of earning $40,000 to $60,000 a year, the majority of millennials come to oppose income redistribution, including raising taxes to increase financial assistance to the poor.

Similarly, a Reason-Rupe poll found that while millennials still on their parents’ health-insurance policies supported the idea of paying higher premiums to help cover the uninsured (57 percent), support flipped among millennials paying for their own health insurance, with 59 percent opposed to higher premiums.

When tax rates are not explicit, millennials say they’d prefer larger government offering more services (54 percent) to smaller government offering fewer services (43 percent). However, when larger government offering more services is described as requiring high taxes, support flips and 57 percent of millennials opt for smaller government with fewer services and low taxes, while 41 percent prefer larger government.

Millennials wouldn’t be the first generation to flip-flop. In the 1980s, the same share (52 percent) of baby boomers also supported bigger government, and so did Generation Xers (53 percent) in the 1990s. Yet, both baby boomers and Gen Xers grew more skeptical of government over time and by about the same magnitude. Today, only 25 percent of boomers and 37 percent of Gen Xers continue to favor larger government.

Many conservatives bemoan millennials’ increased comfort with the idea of “socialism.” But conservatives aren’t recognizing that in the 20th-century battle between free enterprise and socialism, free enterprise already won. In contrast with the 1960s and ’70s, college students today are not debating whether we should adopt the Soviet or Maoist command-and-control regimes that devastated economies and killed millions. Instead, the debate today is about whether the social welfare model in Scandinavia (which is essentially a “beta-test,” because it hasn’t been around long) is sustainable and transferable.

Millennials like free markets, and most already accept that free markets have done more to lift the world out of poverty than any other system. Instead, what this generation has to decide is whether higher education and health-care innovation, access, and high quality can best be achieved through opening these sectors to more free-market reforms or through increased government control. This is a debate we should be glad to have.

Emily Ekins is a research fellow at the Cato Institute. Her research focuses primarily on American politics, public opinion, political psychology, and social movements, with an emphasis in survey and quantitative methods.

Time to Fess Up and Walk Back Our Paris Pledge

Paul C. “Chip” Knappenberger

At last December’s UN climate conference, President Obama pledged that the US would slash its greenhouse gas emissions by 26-28 percent between 2005 and 2025. Since then, harsh realities have conspired against him, making that target infeasible if not downright impossible. Rather than pay the rest of the world to look the other way, the president should revise, or better yet, rescind that promise.

And now is the time to do that, before the grand signing ceremony of the Paris Climate Agreement that is scheduled for April 22, Earth Day, at the UN’s New York headquarters. Putting our name on a promise that we know we can’t keep would be a disingenuous act, painting the Paris Agreement not as a serious undertaking, but as a global publicity stunt.

It’s becoming all too clear that what the president was peddling at the UN climate conference—his leadership in producing great strides in reducing US GHG emissions and laying the groundwork for a series of ever-more stringent policies going forward—was a bill of goods.

Consider his boasting that “[o]ver the last seven years, we’ve made…ambitious reductions in our carbon emissions.” Turns out that new scientific findings indicate that the EPA has been underestimating U.S. emissions of the powerful greenhouse gas methane, so much so that whereas the EPA has been reporting a decline in methane emissions over the past decade, observations indicate a sharp rise. The EPA has now admitted that its past estimates were too low and is in the process of trying to fix them. Taking into account the new scientific findings, the recent decline in overall US greenhouse gas emissions highlighted by the president is lessened by nearly one-third—with much of what remains a result of the Great Recession and natural gas replacing coal in power generation, not Obama’s climate policies.

Or consider that the president told the UN assembly that “we’ve said yes to the first-ever set of national standards limiting the amount of carbon pollution our power plants can release into the sky” all the while knowing that virtually every analyst who had looked at the EPA’s Clean Power Plan knew that it stretched elements of the Clean Air Act to the point of breaking and was going to face a stiff, uphill legal battle. Barely two months later, the Supreme Court stayed the Clean Power Plan pending the outcome of the challenges. Even including the greenhouse gas reductions promised by the Clean Power Plan, the path to Obama’s Paris pledge was uncertain; without it, there is no chance.

But that doesn’t keep the Obama administration from trying to make it seem otherwise. What it can’t achieve through emissions reductions, it is attempting to achieve through creative accounting. In the State Department’s Second Biennial Report Under the United Nations Framework Convention on Climate Change, the Obama administration significantly increased its estimates of how much carbon dioxide US forests were expected to absorb over the next 10 years. In its “optimistic” scenario, the one which gets closest to, but still doesn’t quite reach, its Paris goals, the State Department projects the US forest carbon dioxide sink will expand by more than 33 percent. This seems highly implausible, considering that over the past 10 years, the US carbon sink has actually declined a small amount. But, by projecting the sink to change course and expand considerably, the administration reduces the pressure for Obama to find more emissions reductions, and thereby makes his emissions targets easier to reach.

Put it all together—a smaller observed decline in greenhouse gas emissions, a roadblock to additional emissions reductions going forward, overly optimistic expectations—and add in cheap gas (more driving) and a growing economy that’s still tightly tied to fossil fuel use, and you are left with the stark realization that we are not going to come close to meeting the pledges Obama made to the international community in Paris last year.

The one promise, though, that the president has been able to keep is his pledge to fund the U.N.’s Green Climate Fund. Last week he handed over $500 million to the Fund to show “that the United States stands squarely behind our international climate commitments.” Perhaps that’ll be enough hush money to keep the rest of the world from complaining too loudly that the U.S. is overpromising its emissions commitments.

A successful signing day in New York this April will be a clear indication that the money transfer component of the Paris Agreement is more important than the climate change mitigation component. For those on the receiving end of this arrangement, this may be a good deal, but for us on the other end, we’re paying a lot, accomplishing little, and hoping to get kudos for “doing something.” Let’s just end the charade and say no to signing the Paris Climate Agreement.

Paul C. “Chip” Knappenberger is assistant director of the Center for the Study of Science at the Cato Institute.

The Lessons from El Niño

Patrick J. Michaels

As noted in a March 18 Reuters article by Karen Braun, the very strong El Niño event is showing signs that it is rapidly unwinding, and when it does, there could be some major changes in global temperature.

El Niño is a periodic weakening—or even a reversal, as is the case with this one—of the easterly trade winds across the tropical Pacific. In their more common configuration, those winds push massive amounts of water away from the coast of South America, forcing a huge upwelling of cold water, which is one of the reasons that Lima, Peru, is one of the coolest tropical cities on the planet. By changing the winds, El Niños keep all the chilly water under the surface. As a result, global temperatures often show a profound peak when El Niño rages.

El Niños have come and gone for eons, and contrary to the breathless reporting about their horrors, they are pretty compatible with the biosphere. Yes, El Niños are associated with flooding rains in California. They’re also a pretty nifty way to replenish the snowpack out there, and many desert seeds will only germinate after being jostled around by the large-scale overland flooding that they often cause. They literally make the desert bloom.

The wind reversal tends to peter out after about a year, and then all the cold water that should have surfaced appears with a vengeance, and average global temperature tanks, often to values lower than before the event began. That situation is called La Niña, which tends to associate with particularly nasty winters in eastern North America.

The current El Niño is about as strong as one gets, comparable to the big one that peaked in 1998, and a look at the lower atmospheric satellite data may be revealing what is about to happen. You can find it here.

The similarities are striking. The 1998 event was so toasty that Tom Karl, the head federal climatologist, proclaimed it a “change point,” saying that from then on, global warming would occur at a much faster rate than before. In reality—until he changed the data last summer—global warming promptly stopped.

You can still see the “pause” in the satellite data that provides the closest match to calibrated weather balloon readings taken twice daily. Even with the current peak, there’s no statistically significant warming trend over the last 19+ years.

Note what happened after the 1998 peak. Global temperatures dropped nearly a degree Celsius (1.8°F) in less than a year. There was another El Niño that peaked in early 2010 in the satellite data, followed by a rapid fall of about 0.7°C (1.3°F). These are highlighted in the graph linked above. In fact, each of the strong El Niños of the last 35 years was followed by a big temperature drop.

There’s been a lot of debate in climate circles about where temperatures are going to settle in the next year or two. If they drop back down to where they were before it all began, will our beneficent federal scientists find something else “wrong” with the data? Or, will temperatures settle down at a higher level (but with no trend going forward)? There’s some evidence—and no one has a clue why this is so—that global warming really isn’t smooth at all, but is characterized by jumps after a strong El Niño-La Niña cycle. Or will a statistically robust warming trend finally appear after La Niña?

Climate models, which are very poor at representing El Niños, have long said that the kind of warming produced by this last one is what should have been happening all along.

But the lesson from El Niño is that they have it all wrong. Apparently the only thing they are good for is to serve as the basis for the US and the UN’s plans to dramatically change the way we live.

Patrick J. Michaels is a senior fellow at the Cato Institute and author of Climate of Extremes: Global Warming Science They Don’t Want You to Know. David Wojick is head of DEWA, a cognitive science and policy analysis consultancy.

Physicians Face Moral Dilemma in Conscription into the War on Drugs

Jeffrey A. Singer

America’s physicians have been conscripted as law enforcement agents in the never-ending War on Drugs, and it puts us in a moral dilemma.

As media attention has turned to the recent national surge in prescription-opioid and heroin abuse, politicians feel compelled to be ready with “solutions.” The Obama Administration last summer announced $100 million in new funding for drug-addiction centers, and has recently announced new opioid training programs for federal government physicians. In a recent debate, Presidential candidate Hillary Clinton, exclaiming, “Lives are being lost,” proposed a $10 billion criminal justice initiative including increased grants to states for drug treatment centers, as well as training and equipping first responders to administer heroin overdose antidotes. As a doctor, I react to these reports with great apprehension, because public policy will inevitably impact my profession and me.

Lessons From the First Drug War

With the passage of the Harrison Narcotics Act in 1914, opiates and cocaine for the first time were prohibited to the general public without a doctor’s prescription. The Surgeon General reassured doctors that this was intended only as a means for the government to gather information. But when doctors began writing morphine prescriptions for patients (many of whom were affluent middle aged women at the time) as a means of helping them cope with their chronic addiction, they suddenly found themselves in violation of the fine print of the law: the doctor may prescribe “in the course of his professional practice only.” This was interpreted by law enforcement to mean that these drugs could not be prescribed simply to help the patients avoid the pains of withdrawal from their addiction, and doctors risked indictment if they prescribed narcotics for this reason. The first War on Drugs was underway, and physicians found themselves caught in the crossfire. 

Six weeks after the Harrison Narcotics Act’s passage, the New York Medical Journal warned in an editorial that the new law would have ominous consequences, including “the failure of promising careers, the disrupting of happy families, the commission of crimes that will never be traced to their real cause, and the influx into hospitals for the mentally disordered of many who would otherwise live socially competent lives.”

Critics of the War on Drugs like to use alcohol’s prohibition and its subsequent re-legalization as a teaching tool for making their case. Alcohol is an extremely dangerous drug. Overdosing on alcohol can lead to coma and respiratory arrest. Long-term addiction can cause liver failure, gastrointestinal hemorrhage, cardiomyopathy and heart failure, pancreatitis, cancer of the stomach and esophagus, cognitive disorders, encephalopathy, and dementia. It didn’t take long for the public to learn, however, that the destruction to society wrought by alcohol prohibition far outweighed the harmful effects of alcohol on the segment of society who could not use this drug in a safe and healthy way.

In the government’s new war on opiates, physicians and their patients find themselves caught in the crossfire.

Fortunately, a doctor’s prescription was never required for people to obtain alcohol. Such a requirement would have created a real moral dilemma for the physician: should he help the patient avoid the pains of alcohol withdrawal by writing the prescription? Would law enforcement consider prescribing alcohol for that reason appropriate? Furthermore, would prescribing the drug contribute to the patient’s harm over the long term and thus violate professional ethics?

Opiates, by comparison, are much safer than alcohol. Long-term addiction can impair gastrointestinal motility and cause digestive problems, and research suggests it might slightly impair the immune system and promote mild hormonal dysfunction. Some studies have shown chronic use increases the risk of clinical depression, and might make users withdraw socially. There is no conclusive evidence that it can cause dementia or cognitive disorders. There is an honest disagreement among health care practitioners over just how harmful long-term opiate use can be.

So it would appear that prescribing opiates to an addict to help him avoid withdrawal would present less of a professional ethical dilemma than with alcohol. And the practitioner who doesn’t feel it is ethical to subject the patient to the risks of long-term opiate use—even with the patient’s informed consent—can always refer the patient to a doctor who doesn’t see an ethical problem. Alas, that’s not how things worked out.

Doctors began to cut their patients off from narcotics, fearing federal prosecution. Patients would “doctor shop,” feigning painful illnesses, and when that didn’t work, would turn to the streets to buy their opiates in the burgeoning illegal market. At first they purchased morphine on the street. But after heroin (diacetyl-morphine) was outlawed entirely in America in 1924 (it remains legal and is used in hospitals in Britain and other European countries under the name “diamorphine”), drug dealers pushed heroin over morphine. By the close of the 1920s, the great majority of opium addicts were now heroin addicts.

Opiophobia Onset

As the drug war intensified in the 1970s and onward, doctors became ever more leery of prescribing narcotics. And patients in pain became more fearful of taking them as they heard more horror stories about addiction. By the late 1990s, a new term was coined, opiophobia, to describe an irrational fear of opiate prescription and use by doctors and patients.

As professional and patient advocacy groups became more enlightened on the topic, however, patients were encouraged to overcome their fear of addiction, and doctors were exhorted to show more compassion and prescribe more liberally. By the dawn of the 21st century, narcotic prescription—and narcotic addiction—began to rise again. 

In the past few years, a surge in opioid prescription use and opioid addiction has been noted with alarm by public health authorities. In response, the U.S. Drug Enforcement Administration (DEA) has partnered with state medical and pharmacy licensing boards and state health authorities in an effort to curb opioid prescription and root out “pill mill” practices.

Prescription Drug Monitoring Programs (PDMP) now track the prescribing patterns of health care practitioners as well as monitor the frequency and amounts of prescriptions filled by patients. Doctors are provided with periodic “report cards,” comparing their prescribing patterns with those of their peers. In some states legislation is being considered to require prescribers to check on their patients through the PDMP before writing any opioid prescription. And law enforcement, often using undercover agents, has severely cracked down on providers it believes are over-prescribing.

This has chilled the behavior of many prescribers, who are beginning to revert to the old practice of cutting patients off.

History Repeating

According to the Centers for Disease Control and Prevention (CDC), heroin use in the U.S. has increased 63 percent over the past decade, while prescription-opioid abuse has also risen. In fact, 45 percent of heroin addicts are also prescription opioid addicts, the report claimed.

Addiction rates are up among both the affluent and people with health insurance. The CDC found that people in these groups tend to move on to heroin after being cut off from prescription opioids. (Sound familiar?)

Bree Watzak of the Texas A&M College of Pharmacy states in a 2015 report: “We see that people tend to move on to street drugs after they’ve lost access to prescription opioids. It’s a progression.”

Thomas Frieden, director of the CDC, said in a July 2015 interview with NPR that people who abuse prescription opioids are 40 times more likely to abuse or become dependent on heroin. He also lamented that heroin is more available than ever on the streets, and often far cheaper than prescription narcotics. In fact he estimates heroin to be one-fifth the cost of prescription drugs. All of this comes more than 40 years after President Nixon declared the second “War on Drugs.”

So 102 years after the passage of the Harrison Narcotics Act, and 92 years after the banning of heroin in the U.S., here we are.

Short of ending the War on Drugs, there are steps that can be taken in the right direction. One is called “harm reduction.” If a heroin addict is unwilling or unable to detox and undergo rehab, then at least provide clean needles and pharmaceutical-grade heroin so as to avoid the spread of disease and enable the person to lead a more productive life. Programs like this in Switzerland, the U.K., and other countries have been successful, and many addicts have been thus able to resume their occupations and a relatively conventional lifestyle. They no longer have to spend their days looking for the drug and they take just enough to be able to perform their jobs without experiencing withdrawal symptoms. Many, after returning to a conventional lifestyle, gradually taper themselves off the drug and voluntarily detox.

Another smart move would be to “decommission” doctors as agents of law enforcement. Allow doctors to prescribe opioids without fear of prosecution. A physician who encounters a patient with a dependency problem should have a frank discussion with that patient, inform the patient of the potential long-term health consequences of the addiction, and encourage treatment of the addiction. If the patient refuses treatment, then the physician can continue to write the opioid prescriptions in the interest of harm reduction—it certainly is preferable to patients going to the street for heroin and dirty needles.

It has been over a century since the government began its first War on Drugs. It set in motion a series of destructive unintended consequences affecting every one of us, and the medical profession has not been spared. We learned before that the harmful consequences of alcohol prohibition were worse than the drug itself. It’s time we learn to apply that same insight to the other drugs, including narcotics.

Dr. Jeffrey A. Singer practices general surgery in metropolitan Phoenix and is an adjunct scholar at the Cato Institute.

The New Propaganda Wars

Richard W. Rahn

Berlin, Germany — Even though the Berlin Wall was destroyed more than a quarter-century ago and the city rebuilt, new propaganda wars are being waged here and elsewhere by the major and some minor powers in a more sophisticated way than the Soviets ever imagined.

Disinformation has always been a staple of states and spies, but now the world’s airwaves are being swamped with state-controlled TV “news” stations. Of the 83 TV channels I have access to in my hotel room, roughly 32 of the 51 non-German channels are largely owned or controlled by various governments. This includes the Russian, Chinese, Qatari (Al Jazeera), Thai, Vietnamese, Armenian, Turkish, and Cuban governments. The Russian, Chinese, Qatari, Japanese and French governments have full-time English-language channels here in Berlin, clearly designed to reach an audience outside of their home-country nationals who may be living in or visiting Berlin.

These are almost entirely commercial-free channels, paid for by the taxpayers of their respective countries, and are now found in most major cities in the world, at some considerable cost. The question is: Why are they doing it?

State-owned global TV networks have become key tools of political spin.

The world’s oldest and largest broadcast company is the British Broadcasting Company, started back in 1923 with radio. It was the first global radio and then TV broadcaster. Before the British Empire was dissolved, the BBC was a way for the far-flung British colonies to get a constant British perspective on the news, as well as a way to disseminate British culture and “standard” English. Although exhibiting a leftist bias, the BBC was widely praised for its quality and accuracy.

The United States had no equal to the BBC. But during the Cold War, it created Radio Free Europe and others, which were explicitly designed to provide those in the Soviet Union and elsewhere with the “real” global news. With the collapse of the Soviet Union, these efforts were largely defunded by Congress. The Soviets and other communist countries operated their own radio networks directed at people in the West. The efforts on both sides were somewhat effective and served to undermine the legitimacy of all governments.

In 2005, the government of Vladimir Putin in Russia decided to build its own global TV network to serve as a propaganda tool for the Russian government. Originally named Russia Today, it is now known as RT, and it broadcasts primarily in English, as well as in Spanish and Arabic. The Russians knew that they had to make the bulk of the programming both credible and interesting. Most of the news is done in a straight manner without evidence of bias, left or right. But when it comes to interests that the Kremlin believes to be important, RT carries the water.

Mikhail Lesin, who had been highly effective in forcing the previously independent media in Russia to bow to the wishes of the Kremlin, was selected by Mr. Putin to create RT, which is viewed as a great success. In 2009, Lesin moved on to other jobs, ending up as the senior media and lobbying person in the Russian energy giant Gazprom. Lesin spent considerable time in the United States, where his adult children had permanently moved. He managed to acquire a number of multimillion-dollar homes in the Los Angeles area and a huge and very expensive yacht, among other things.

On Nov. 5 last year, Lesin was found dead in a Dupont Circle hotel in Washington, D.C. At the time, the claim was made by his family and Russian sources that he died of a heart attack — which many doubted. Last week, the official coroner’s report was released, which said that the official cause of his death was blunt force trauma to his head, neck, torso and extremities — meaning that he apparently had been beaten to death. It has also been reported that he was under investigation for a number of financial crimes — and that he may have been cooperating with the FBI to save himself. In addition, it is widely believed that he had a falling out with Mr. Putin back in 2014, so naturally, the speculation is that the Kremlin was behind his murder, which may or may not be true.

Mikhail Lesin was obviously privy to many secrets, and the reports of his cooperation with the FBI would probably not only upset the folks in the Kremlin, but many others, including those who were recipients of funds from Russia — laundered through Gazprom and Rosneft, an oil company owned by the Russian government. Last year, there were a number of articles in the press about how major environmental organizations and foundations with political agendas had been recipients of Russian money secretly distributed by Kremlin-controlled energy companies through offshore shell companies. All of these groups certainly had a strong interest in Lesin not revealing what he knew — particularly in this very political year.

Governments have interests in promoting their image and influence to the outside world in pursuit of a variety of goals. In addition to the age-old tools of buying influence and threatening and discrediting opponents, the state-owned global TV network has become the most important propaganda tool. Watchers beware.

Richard W. Rahn is a senior fellow at the Cato Institute and chairman of the Institute for Global Economic Growth.