Monthly Archives: July 2014

“Zionazi” Meme is Vile Propaganda

The “Zionazi” trope and the accusation that Israel is committing genocide akin to the Holocaust are outright lies perpetrated by terrorists and their defenders, which should be rejected and confronted by any responsible media outlet. The falsehoods are plain from even a cursory comparison.

The contrasts between Israel’s actions in Gaza and the Holocaust are too many and too obvious to list exhaustively. Two core distinctions suffice – the scale of the casualties and the intention of the parties.

First, the Holocaust was horror on an incomprehensible scale. The Nazis murdered 6,000,000 Jews and roughly 11,000,000 people overall in the Holocaust. Those figures count only targeted non-combatants; they do not include the tens of millions of military and civilian war casualties of World War II. Depending on when you date the beginning of the Holocaust, roughly 4,000 Holocaust victims died per day.

In the almost 70 years since Arab irregulars invaded pre-state Israel in 1947, Egypt, Jordan, Lebanon, Syria, Iraq, Saudi Arabia and the Palestinians in aggregate have suffered about 85,000 killed in conflicts with Israel, and Israel has suffered just under 30,000 killed. These figures include both civilian and military casualties, and cover all phases of the Israeli/Arab conflict — the War of Independence, the 1956 Sinai war, the Six-Day War, the Yom Kippur War, the 1980s Lebanon war, two intifadas, the 1990s terror war, the Second Lebanon War and two large-scale Gaza battles since Israel withdrew in 2005, as well as terror attacks and counter-operations.

As of this writing, around 190 Palestinians have been killed in the current, eight-day Hamas-Israel war. That’s about twenty-four people per day – including combatants – compared to 4,000 innocents killed per day in the Holocaust. Every civilian death is terrible, but the Nazi analogy is void and inapplicable.

Second, the distinguishing and lasting horror of the Holocaust was the conscious, concerted effort to root out and annihilate a population totally unrelated to the war. Certainly many Holocaust victims were killed ad hoc at the front. However, millions of Jews and other “undesirables” were herded into ghettos where they were left to die en masse of disease and hunger. Those who survived the ghettos or were found elsewhere were shipped by railroad cattle cars to concentration camps where they were gassed to death, worked to death, or again died of hunger, disease and exposure. Whether murdered as targets of opportunity collateral to actual battle, or in bulk far in the rear, Holocaust victims were not unintended victims of otherwise legitimate military operations.

In short, the Nazis’ intentional, systematic extermination of undesirables served no military rationale; the extermination itself was the goal.

The differences are absolute. Hamas and Israel are at war – Hamas has fired around 1,000 rockets from Gaza at Israeli population centers in the last week and Israel has bombed military targets in Gaza. All of the casualties in Gaza have been either military personnel or unintended victims of attacks on military targets. Israel pursuing valid military aims and causing civilian casualties in the process bears no comparison to the Holocaust.

Further, there is zero evidence that Israel intends to harm civilians. Far from targeting civilians, Israel makes every possible effort to prevent civilian casualties. Israeli pilots are authorized to call off strikes in progress if civilians are present, and have done so. Even the U.N. has acknowledged that Israel warns civilians of forthcoming strikes on nearby military targets so that civilians can evacuate the danger zone. Col. Richard Kemp testified in 2012 to Israel’s historically unprecedented efforts to avoid civilian casualties.

In addition, Hamas itself contributes to Palestinian civilian casualties. Hamas maximizes the overlap between military and civilian areas by placing military bases, munitions and rocket launchers in population centers. A recent Hamas video asked Gaza residents not to post photographs of Hamas fighters firing rockets “from the middle of town.” When Israel warns the civilian population of imminent attacks on military targets, Hamas has civilians mass and form human shields at the targeted infrastructure. Using human shields in this manner is, of course, a war crime, but Hamas hopes to either ward off Israeli strikes or inflate civilian casualties for propaganda purposes.

For Hamas and its supporters and defenders, propaganda is the key. Comparing Israel to the Nazis, and the Israeli war effort to the Holocaust, evokes a powerful sense not only of good versus evil, but of the evil being even greater, even viler and crueler, as the former victim becomes the perpetrator.

But it’s a lie. Without even delving into Hamas’s terrorist nature (it is a designated terrorist organization in the U.S., Canada, the European Union, Japan, Jordan and Egypt), the origins of the current conflagration or the history of Israel’s withdrawal from Gaza, it is obvious from objective facts that accusations of genocide are pure falsehood.

John Boehner and John Roberts, Meet John Marshall

Originally posted at American Thinker

If and when House Speaker John Boehner sues the President, John Roberts and his colleagues on the Supreme Court will confront a challenge that has bedeviled the judiciary for more than 200 years. When the executive disregards constitutional limitations on its own power, what can the Court do about it?

When confronted with the issue in Marbury v. Madison in 1803, then-Chief Justice John Marshall enhanced the Court’s power and prestige by declining to issue an order the executive might refuse to obey. Today’s Court will have to decide whether the Obama Administration’s recidivist disregard for constitutional and legislative limits warrants issuing an order the executive might simply ignore.

Marbury arose under political circumstances as volatile and acrimonious as anything we see today. From 1789 to 1801 Federalists controlled the presidency under George Washington and John Adams. From 1797 to 1801 the Federalists held both chambers of Congress.

But at the turn of the 19th century, Federalist dominance failed. In the 1790s, the French Revolution deeply divided American politics, and opposition to Federalists’ latent monopoly on government coalesced when Washington declared the United States neutral in Europe’s general war of the 1790s and Adams undertook an undeclared naval war against the French. The overwhelmingly unpopular Alien and Sedition Acts, allegedly enacted to bolster national security against French infiltrators and supporters, were seen by many as an effort to suppress anti-Federalist sentiment. Then in the election of 1800, the Federalist ticket split as Alexander Hamilton campaigned against the incumbent John Adams.

In the resulting “Revolution of 1800,” the Democratic-Republican party swept into power. Due to a constitutional quirk since rectified, the two D-R candidates, Thomas Jefferson and Aaron Burr, deadlocked in the Electoral College, requiring the lame-duck Federalists in the House of Representatives to break the tie. The Federalists generally supported Burr, but Hamilton worked tirelessly against him, contributing to a prolonged deadlock in the House. It took thirty-six ballots, but eventually Delaware Federalist James Bayard switched his vote from Burr to Jefferson, giving Jefferson the election. Burr killed Hamilton in a duel less than four years later.

Before the new president and Congress were sworn in, the Federalists packed the courts. First, the Federalists enacted the Judiciary Act of 1801, creating new District Courts and Circuit Courts, adding judges to the Circuits, reducing the number of Supreme Court justices and empowering the president to appoint federal judges and Justices of the Peace. On March 3, 1801, the day before leaving office, Adams appointed what became known as the “Midnight Judges” to fill the new positions. On March 4, the last day of the Adams presidency and the 6th Congress, the Senate approved the Midnight Judges appointments.

John Marshall was a staunch Federalist and Adams ally. He was Adams’ Secretary of State from June, 1800, until the Federalists left office on March 4, 1801. He was also Chief Justice of the Supreme Court from January 31, 1801, to July 6, 1835, making him both Secretary of State and Chief Justice at the time of the Midnight Judges appointments.

In his role as Secretary of State, Marshall was charged with actually delivering the commissions to the Justices of the Peace approved by the outgoing Federalist Senate. Perhaps serving in two of the most important roles in the country simultaneously was too much, as Secretary Marshall did not deliver all of the commissions and left a number of them in a desk in the Secretary of State’s office.

Once sworn in, Jefferson set about undoing the Federalists’ court-packing scheme. One of his first steps was ordering Levi Lincoln, the stand-in Secretary of State until James Madison arrived in the District of Columbia, not to deliver the neglected commissions. Then the Judiciary Act of 1802 reversed the Judiciary Act of 1801, so the courts’ governance reverted to the Judiciary Act of 1789. The Judiciary Act of 1802 also cancelled the Supreme Court’s entire term for the second half of 1802.

One of the commissions that Secretary Marshall neglected to deliver and that was subsequently withheld by Jefferson, Lincoln, and, once he arrived on the scene, Madison, was slated for William Marbury. So Marbury sued Madison.

And Marbury sued Madison directly in the United States Supreme Court. Under the Judiciary Act of 1789 — again in force thanks to the D-Rs reversing the Judiciary Act of 1801 — the Supreme Court had original jurisdiction over writs of mandamus. Generally, a writ of mandamus requires a government entity to do something or forbids it from doing so. Marbury sought a writ of mandamus requiring delivery of his commission.

Federalist Marbury was asking Federalist Marshall to order D-R Madison to deliver the commission Secretary Marshall himself had failed to deliver; the D-Rs controlled the entire legislative function, had already prorogued an entire Supreme Court term, and by all indications were willing to hamstring the Federalist-controlled courts altogether.

Chief Justice Marshall responded with arguably the most important decision in U.S. judicial history, addressing two critical questions. First, the Court held that a political officer cannot refuse to perform a specific, legally defined obligation, notwithstanding any Presidential order to the contrary. In Marbury, once the commission had been signed, the Secretary of State’s duty was to stamp it and deliver it:

This is not a proceeding which may be varied if the judgment of the Executive shall suggest one more eligible, but is a precise course accurately marked out by law, and is to be strictly pursued. It is the duty of the Secretary of State to conform to the law . . . He acts, in this respect . . . under the authority of law, and not by the instructions of the President.

As a result, Madison was not entitled to withhold Marbury’s commission.

Second, though, the Court held that it did not have the power to order Madison to deliver Marbury’s commission because the Court lacked original jurisdiction over Marbury’s suit. Without too much of a foray into the minutiae of constitutional syntax, Chief Justice Marshall held that the Constitution precludes expanding the Court’s original jurisdiction, and therefore the Judiciary Act clause purporting to grant the Court original jurisdiction over complaints seeking a writ of mandamus was void. Since the Supreme Court lacked jurisdiction over Marbury’s suit, the Court could not issue the requested writ.

Chief Justice Marshall is often credited with creating judicial review by voiding an act of Congress in Marbury, but that is not entirely accurate. Judicial review existed in English common law and was discussed favorably at the Constitutional Convention. However, Marbury is the first time the Supreme Court negated an act of Congress, thereby giving judicial review firm precedential footing.

Establishing judicial review was a major victory for Chief Justice Marshall and for the Court. That goal explains two peculiarities about Marshall’s decision. First, jurisdiction over a case is usually and properly determined at the beginning of a decision, since jurisdiction is a predicate to deciding the merits of a matter. In Marbury, only the holding that the Court lacked jurisdiction is binding, and the discussion of executive officers’ obligations is non-binding dicta. Chief Justice Marshall likely opined on Madison’s lawlessness first because he wanted to support his fellow Federalists, even though he then rendered his opinion moot.

Second, if Marshall wanted to help his fellow traveler and redeem his own mistake, why did he cast off his own power to do so by negating the Court’s original jurisdiction? There are many good reasons, actually. The constitutional interpretation in Marbury is objectively and generally accepted as the correct one; Chief Justice Marshall probably did not want long, tedious original jurisdiction matters diluting the Court’s appellate powers; and securing judicial review required voiding the offending clause of the Judiciary Act of 1789, and so sacrificing Mr. Marbury’s commission.

In considering a potential Boehner suit, Chief Justice Roberts may be thinking of another reason Chief Justice Marshall disclaimed the Court’s jurisdiction in Marbury. The Court has no independent means of enforcing its orders. It relies on the executive. Had Marshall issued a writ of mandamus ordering Madison to deliver Marbury’s commission, Madison might have refused. Marbury could have turned to the executive, but Thomas Jefferson was not predisposed to empower a Federalist at Madison’s expense. Issuing an order that Madison and Jefferson might ignore risked undermining the Court, the Constitution and the republic.

Now recall that Speaker Boehner’s proposed suit would allege precisely that the executive is ignoring another branch’s constitutional powers. A court order would either require or forbid the President’s doing something, whether it be unwinding past encroachments or refraining from future ones. But the Court is no more able to enforce such a decision today than it was two hundred years ago. President Obama is abusing the executive’s exclusive ability to change facts on the ground, and the judiciary cannot force him to stop or undo his overreach.

Nevertheless, there are good reasons for the Court to risk today what Chief Justice Marshall refused to risk in 1803. Chief Justice Marshall’s concern for the Court’s well-being is moot in 2014, as judicial review and the Court’s prestige are well and deeply established. While having an order ignored would highlight the Court’s limitations, it would not undermine the institution as Chief Justice Marshall might have feared. Indeed, if President Obama ignored a Supreme Court order, it would damage him and his party far more than it would damage the judiciary.

In addition, the merits of a potential Boehner suit leave little room for maneuver. Chief Justice Roberts has been pilloried for his 2012 decision in National Federation of Independent Business v. Sebelius, which upheld Obamacare by transmogrifying a fine into a tax. Consistent with precedent and deference to the more democratic branches of government, that decision reflects the Court’s reluctance to interfere with legislation when there is any Constitutional basis upon which to uphold it. But while Sebelius may have required strained reasoning, there is no reasoning upon which Mr. Obama could constitutionally rewrite the explicit deadlines in Obamacare or negate provisions in immigration law. Chief Justice Roberts’ deference in Sebelius even enhances his credibility in a Boehner suit.

Finally, the Court has the power and duty to interpret and uphold the Constitution. The Constitution provides that “[t]he judicial power shall extend to all cases, in law and equity, arising under this Constitution, [and] the laws of the United States.” As Chief Justice Marshall wrote in Marbury, “[i]t is emphatically the province and duty of the Judicial Department to say what the law is.” President Obama re-writes and ignores legislation as he sees fit, derogating from Congress’s Article I powers. If the separation of executive and legislative powers is to have any substance, the Court must assert itself and perform its duty to say what the law is, no matter the practical or political challenges.

Incidentally, Marbury never received his commission. Madison succeeded Jefferson as President, elected in 1808.

Jonathan Levin is an attorney.

Carnegie Report On U.S. Policy Toward Corruption Abroad Misses The Mark

A recent study from The Carnegie Endowment For International Peace rightly concludes that the United States should make anti-corruption a significant element of U.S. foreign policy, but for the wrong reasons. The Carnegie report drastically oversimplifies corruption’s innumerable varieties and overstates the United States’ ability to stop corruption abroad. In particular, the report’s eye-catching assertion that al-Qaeda and its Islamist allies recruit on the basis of U.S. support for corrupt regimes misconstrues the relevant sense of “corruption” and the potential impact of a more robust anti-corruption policy. Instead of relying on this contorted self-interest argument, the U.S. should bolster anti-corruption efforts as a moral imperative, as part and parcel of democratic governance, and as the leader of the free world.

The thesis of the study, “Corruption[:] The Unrecognized Threat to International Security,” is that corruption contributes to lawlessness, undermines the rule of law and the state itself, and predisposes the population to indoctrination and recruitment by violent organizations. At the heart of the report are two charts: The first reflects a correlation between a nation’s position on Transparency International’s “Corruption Perceptions Index” (CPI) and the Fund for Peace’s “Failed States Index;” the second shows a similar correlation between “Absence of Corruption” as measured by the World Justice Project and “Political Stability and Absence of Violence/Terrorism” as measured by the World Bank’s World Governance Indicators. Hence the conclusion that corruption is a contributing factor in state failure, violence and terrorism.

There are two critical problems with this basic tenet. First, correlation does not establish causation, as the report implicitly concedes. Yet the report proceeds under the assumption that reduced corruption would likewise produce reduced violence, terrorism or lawlessness, as the case may be. That assumption is a logical fallacy.

Second, corruption is a broad notion used in different ways in different places and under different circumstances. Black’s Law Dictionary, the go-to source for legalese, defines corruption narrowly as “Illegality; a vicious and fraudulent intention to evade the prohibitions of the law. The act of an official or fiduciary person who unlawfully and wrongfully uses his station or character to procure some benefit for himself or for another person, contrary to duty and the rights of others.” This approximates the common notion of official corruption.

Merriam-Webster’s definition is broader and more literary: “dishonest or illegal behavior especially by powerful people (such as government officials or police officers),” “the act of corrupting someone or something,” “something that has been changed from its original form,” “impairment of integrity, virtue, or moral principle,” “depravity; decay; decomposition,” “inducement to wrong by improper or unlawful means (as bribery),” “a departure from the original or from what is pure or correct.” This definition incorporates the concept of a corrupting influence.

The instances of corruption abroad accordingly vary enormously, and the report fails to adequately address that diversity. The Carnegie report categorizes countries as either structurally corrupt or lacking central control, then shoehorns countries into one or the other, grouping together countries with little or nothing in common.

Structurally corrupt countries have “been repurposed to serve . . . the personal enrichments of ruling networks.” The unifying principle is that a group of elites grows rich on the backs of the population. North Koreans starve as the Kims amass billions. Fidel Castro lived as a king while his people barely subsisted. Central Asia, much of the Arab Middle East, North Africa and vast reaches of sub-Saharan Africa suffer under more or less exploitative governments. South America is vastly improved in recent decades, but political advancement and surprising wealth accumulation still correlate. In China, a public crackdown on mid-level corruption is in progress, but masks pervasive corruption at the top and bottom of the governmental structure.

But the report also puts Saudi Arabia and Russia into this category, countries whose problems do not reflect the same sense of “corruption.” Saudi Arabia and its fellow Arabian Peninsula monarchies in particular have little in common with the rest of the group. The Saudi government enriches the monarch and royal family, but it is in fact a monarchy. Enriching the monarch is not corrupt according to most of the dictionary definitions, though it offends Western notions of integrity, virtue and moral principle. Further, if the nature of the Saudi government is inherently corrupt then the implications of opposing “corruption” are outlandish.

By contrast, Russia fits neatly into any definition of corruption. A putatively democratic system has been undermined by Vladimir Putin and his cronies, who control the legislature and security organs, and intimidate the judiciary to prevent substantive oversight and silence opponents. Mr. Putin has forced the sale of private businesses at depressed rates so his supporters can snap them up and reap massive profits. This is corruption on a grand scale, as Mr. Putin unlawfully and wrongfully uses his station and individual influence to procure benefits. There is no identifiable commonality between corruption in Russia and Saudi Arabia, or between either and North Korea or Cuba.

The nations deemed corrupt for lack of central control are similarly disparate. Corruption for lack of central control arises where local officials, militias or other unrestrained and abusive authorities extort the population through theft, graft or rents. Within this category the report cites nations as diverse as Ivory Coast and South Sudan (little central government to speak of and wracked by internecine violence among competing warlords, claimants or fiefdoms), and Colombia and India (relatively strong central government failing to control local officials). Again, there are few if any common elements to effective anti-corruption efforts in South Sudan and India.

The failure to establish a functional working definition of corruption also undermines the report’s treatment of Islamic radicalism. The report credits Islamic terrorists’ accusation that the United States supports corruption in the Middle East as an effective recruitment tool. This accusation is indeed common in extremist literature, but the report’s interpretation is incorrect. First, extremists’ accusations do not refer to the monetary “corruption” that is the subject of the report, but rather to the perceived moral or religious corruption of Islamic leaders who fail to abide by Islamists’ interpretation of Islam. Extremist groups recruit by reference to the Saudi royal family’s corruption, but this refers to their drinking alcohol, carousing and gallivanting in Monte Carlo instead of seeing to their Islamic obligations.

Second, Islamist recruitment literature is mere propaganda, and reinvigorating U.S. anti-corruption measures would have doubtful practical impact. To the extent the sort of fiscal corruption the report refers to is a factor in Islamist recruiting at all, a campaign to end such corruption might easily backfire. Recruiters might cite U.S. pressure as evidence of Saudi or other foreign corruption, and any continuing U.S. aid would be spun as evidence of duplicity and hypocrisy. At bottom, Islamists hate the U.S. because they believe their religious obligations are irreconcilable with liberal, Western values, and U.S. anti-corruption efforts or lack thereof are not a substantive consideration.

Likewise, the Carnegie report repeatedly cites Mubarak’s Egypt as an example of a U.S.-supported state where systematic corruption led to instability and violence, and criticizes President Obama for calling Mubarak a “force for stability and good in the region.” Mubarak was a stabilizing force, though. He succeeded an assassinated president (after Sufi Abu Taleb’s eight-day caretaker tenure), ruled for almost thirty years, and was followed by a nascent Islamist theocracy overthrown by a military coup. Unlike his predecessors, Mubarak did not invade Israel and did not conspire to create a pan-Arab power. U.S. support for Mubarak was always offered despite his corruption and because the alternative was potentially far worse. History may judge Mubarak’s tenure as a relatively benign calm between Egypt’s militant-socialist era and its current upheavals.

In addition, the U.S. and allied Western countries already prohibit corrupt practices abroad, and the Department of Justice and Securities and Exchange Commission have recently indicated increased focus on the Foreign Corrupt Practices Act, which generally forbids bribes by U.S. companies to foreign officials. The FCPA has the salutary effect of privatizing anti-corruption measures by removing companies’ incentives to pay bribes and creating a market for private corruption detection.

Nevertheless, there is room for argument that the FCPA could be extended to sanction the foreign brokers and fixers that often pay bribes to foreign officials without their U.S. principal’s knowledge or authorization. As the law stands now, the principal is liable for the third party’s crime, while the foreign third party often escapes punishment. The U.S. could expand the FCPA to ban the third party from any business with the U.S. banking and financial sectors, including transaction clearing, a tactic deployed with great success against Iranian and terrorist assets.

And some of the report’s recommendations are appropriate, while others are myopic and unrealistic. Intelligence agencies should collect information on corruption because it is relevant, at least, to assessing foreign government stability, reliability and susceptibility to illicit influence. Diplomats should consider foreign corruption for the same reasons. The executive should not meet and lend credibility to corrupt foreign counterparts. Assistance money from the U.S. government, NGOs or multi-lateral organizations like the World Bank or World Trade Organization should be strictly monitored and conditioned, and terminated if diverted to private accounts. Where possible, foreign assistance for infrastructure or other tangible projects could be administered directly by the donor rather than remitted to the host nation.

On the other hand, the U.S. must be careful about interfering in other countries’ domestic affairs. Sponsoring civil anti-corruption efforts abroad borders on subversion or espionage; likewise systematically monitoring foreign officials’ finances. Denying visas risks creating a diplomatic rift and sets potentially troublesome precedent.

The most glaring omission is the report’s failure to invoke the United States’ moral obligations. As the leading global democracy, the United States should be pursuing a robust freedom agenda abroad. Corruption is inherently oppressive, as it interferes with or interrupts social and economic mobility, and so it derogates from that agenda.

The Carnegie report fails to address critical questions, undermining its conclusions and usefulness. This is unfortunate, because the United States has the prestige and moral obligation to combat corruption abroad, and an objective assessment of the U.S.’s limited tools would be invaluable. Nevertheless, the U.S. should oppose foreign corruption, not necessarily in hopes of tamping down anti-Americanism abroad, but in the name of expanding free and democratic governance.