NATO’s deployment of air power against Muammar el-Qaddafi’s forces in Libya has been called—by some with hope, by others with alarm—the first exercise of the “Responsibility to Protect.” This new principle, which calls for international military action against “genocide, war crimes, ethnic cleansing and crimes against humanity,” was endorsed at the 2005 “high-level plenary meeting” of the UN General Assembly. It is so contemporary that it has been given a textable, Tweetable acronym—R2P.
In truth, while it sounds cutting-edge, R2P has a pedigree that is old, some even say ancient. An acquaintance with this history is essential to assessing whether R2P is likely to prove a boon or a bane to the human condition and to American interests—or whether it is likely to make much difference at all.
The more traditional name for this principle is “humanitarian intervention.” I first encountered it as a graduate assistant in the late 1970s. The professor for whom I worked, Georgetown University’s much beloved William V. O’Brien, was an expert on war, international law, and the relation between the two. The importance of the concept was that it legitimated the use of force under certain circumstances. International law is quite restrictive of the right of states to go to war, all the more so since the adoption of the UN Charter, which allows states to take military action only at the behest of the UN Security Council or in the exercise of individual or collective self-defense.
But traditionally, authorities on international law had recognized another ground for lawful war-making, “humanitarian intervention.” Scholars have identified expressions of this idea in texts as old as Hugo Grotius’s 1625 De Jure Belli ac Pacis, generally taken as the starting point of international law, and even in the writings of classical philosophers and theologians on whom Grotius and his co-thinkers drew.
The concept was not hard to grasp. Although sovereignty has been a powerful principle of international law at least since the birth of the state system, moral intuition suggested that it could not be absolute. When a government’s depredations against its own subjects far exceeded the level of brutality that is all too common, then it in effect forfeited its sovereignty and others might rightfully send combatants to protect the victims. No one ever succeeded in defining the threshold, but no one doubted that it existed. Who would have objected to forceful action to stop Hitler’s Holocaust of the Jews or Pol Pot’s “auto-genocide” of the Khmer on the grounds that foreign intervention was illegal?
But of course no one did intervene to save the Jews or the Khmer. This was not only tragic; it also created a legal question. Those who asserted that such a provision existed—and they included L. F. L. Oppenheim, Hersch Lauterpacht, and other luminaries in the field of international law—held it to be part of “customary” law. (“Custom,” which is roughly analogous to common law within the British legal tradition, is a major source of international law.) My Georgetown mentor William O’Brien, however, writing on the laws of war, was not convinced. He pointed out that “custom” self-evidently derives from what states actually do—not from what many think they ought to have done but failed to do. To test his skepticism against the opinions of those who propounded the tenet, he tasked me with searching for concrete examples of humanitarian intervention.
The contemporary literature on the subject mostly pointed to one latter-day instance. During the Congo crisis of the early 1960s, a joint American-Belgian task force was airlifted in to rescue Westerners who were being held hostage. This was not very satisfying. The extrication of white people from African mayhem did not make a uniquely morally compelling tale. And the number of lives at stake—just shy of a thousand—was not on the order of the Holocaust or Pol Pot’s rampage. The same number of blacks might vanish in an episode of African fratricide any week of the year without outsiders taking much notice.
O’Brien and I thought we had at last hit upon a more solid example in another African event, the 1979 overthrow of Ugandan dictator Idi Amin Dada by the army of neighboring Tanzania. Even by the standards of a day when all of Africa groaned under the dictatorial rule of so-called “big men,” Amin’s bloody reign stood out. He is estimated to have executed one hundred thousand of his countrymen, and he reportedly tortured to death Uganda’s Anglican archbishop with his own hands—even, by some accounts, feasting on the remains.
However, when I interviewed officials of Tanzania’s embassy in Washington, they were adamant that their forces had ousted Amin because of the invasion of Tanzanian territory by his soldiers, which had in fact occurred. They insisted that their government had not acted against Amin because of his cruelties against his own people, and that it would not have done so, since it was faithful to the central principle of the Organization of African Unity (later succeeded by the African Union): absolute non-interference in one another’s internal affairs, a policy in which each of these autocracies had an obvious self-interest.
Our quest having proved fruitless, O’Brien treated the matter with scholarly skepticism in the book he was writing. “The need for humanitarian intervention to save a people from its own government has not coincided with the availability of a power or group of powers capable and willing to intervene,” he said, which made the whole concept “problematic.”
The situation changed, however, over the next two decades, in ways that led to the emergence of R2P as a principle of US foreign policy. In 1986, the US Senate gave its consent to ratification of the Convention on the Prevention and Punishment of the Crime of Genocide, which had come into force among other nations some decades earlier. It bound signatories to “undertake to prevent and to punish” genocide. This meant that at least in some instances, humanitarian intervention might be based on something stronger than customary law, namely, treaty obligation. It gave the whole matter more weight.
A second, more momentous change—the ending of the Cold War in 1989—made humanitarian interventions more feasible. During the Cold War, any movement of military forces into a new territory by one superpower, whatever the reason or pretext, was seen by the other as the ominous advance of a pawn on the global chessboard. At a minimum, it heightened tensions, and often it provoked a counter-response.
Now there was greater freedom of action, and for the United States the demands of national security felt less urgent. No longer burdened by the exigencies of parrying an existential threat from the Soviets, America could give greater rein to its moral sensibilities.
Crises that engaged America’s values or principles more than its security took center stage in international politics for a stretch of years in the 1990s when Yugoslavia disintegrated, famine overtook Somalia, and inter-tribal bloodletting reached epic proportions in Rwanda. The latter two posed only humanitarian concerns, while the Yugoslav case entwined these with more practical ones. Serbia’s attacks on Slovenia, Croatia, and especially Bosnia-Herzegovina challenged a basic rule against cross-border aggression in which America (and others) had a security stake, the same rule that had been invoked against Iraq’s invasion of Kuwait, while the mass murder and rape of civilians added moral issues.
President Bill Clinton, eager to “focus like a laser” on the domestic economy during his first term, deferred to the UN to handle these crises. In the end, both the world body and Washington covered themselves in shame by their ineptitude and lack of urgency in the face of wholesale atrocities, preparing the ground from which R2P sprang.
The Somalia events had sprung from the decision of Clinton’s predecessor, George H. W. Bush, in his last days in office, to send Marines to stem a famine in that country that had been caused less by natural events than by the disappearance of law and order and the commandeering of food by armed gangs. The Americans set up well-protected feeding stations and saved perhaps a million lives. But this left the question of how they could extricate themselves without tragedy returning.
The UN decided on an ambitious nation-building project and persuaded Clinton to leave several thousand US soldiers in the country as the backbone of a UN force to shield it. When, in October 1993, eighteen US Army Rangers were killed in a shootout in Mogadishu, the president, at a loss to explain to the public why US forces were in combat in Somalia, ordered a hasty withdrawal.
Six months later, ethnic strife in Rwanda exploded into the first unambiguous episode of genocide since the Holocaust. Rather than reinforce UN peacekeepers stationed in the country due to earlier outbreaks of violence, UN officials encouraged them to flee. When some Security Council members sought action to stanch the bloodshed, the US took the lead in blocking it. Once he had left office, Clinton offered the lamest of apologies to Rwanda. “The international community . . . must bear its share of responsibility for this tragedy,” he said, without acknowledging that in this case the “international community” was first and foremost himself.
He had even gone so far as to order members of his administration to avoid using the word “genocide” while the Rwanda killings were under way, lest this invoke America’s obligation under the genocide convention to attempt “to prevent” it. As a result, administration representatives would only reluctantly concede that “acts of genocide may have occurred.”
When war had broken out in Bosnia-Herzegovina in April 1992, President Bush had called it a “hiccup” and sat on his hands. As a candidate, Clinton criticized this policy, but as president he continued it. The UN role actually made things worse. The Muslim victims of “ethnic cleansing” at Serb hands were invited to take refuge in six “safe areas” under UN protection, provided they turned over their weapons. One of these was Srebrenica, which was overrun by Serb forces in July 1995. As the Serbs moved in, UN forces refused to protect the Muslims or to return their guns so they could try to protect themselves. Some seven or eight thousand males of or near military age were rounded up and slaughtered en masse, the only such massacre in Europe since World War II. This prompted Clinton finally to order air strikes, which ended the war quickly and easily.
Embarrassed by the long delay in taking action in Bosnia, during which 100,000 to 250,000 people, mostly civilians, had perished, and alarmed by the strains the situation had caused within NATO, Washington and its European allies responded with alacrity when Kosovo heated up in 1998. A bombing campaign in 1999 forced the withdrawal of Serb forces.
NATO’s action may have been morally justifiable, but it had no plausible basis in international law. The use of force had not been approved by the Security Council because Moscow, which holds a veto, stood with Belgrade. Absent a Security Council vote, military action can sometimes be legitimated under the rubric of “collective self-defense,” but unlike Bosnia-Herzegovina, which was an independent country, Kosovo was indisputably a province of Serbia, so no issue of cross-border aggression arose.
A few NATO governments, but not most, justified their offensive as an exercise in humanitarian intervention, but this stretched the concept beyond all meaning. There is no doubt that Serbs persecuted the Albanian population of Kosovo, but persecution occurs in many places. The number of Albanians killed by Serbs by the time the bombing began probably did not exceed double digits. Tragic though it was, this did not nearly rise to the extraordinary level of violence that had always been seen as the threshold for humanitarian intervention. Without any legal right, NATO acted above all out of the regret that it had failed to act in the earlier crises in Bosnia-Herzegovina and in Rwanda.
Unease over what was done in Kosovo—and done or not done in Bosnia, Rwanda, and Somalia—prompted the government of Canada, with backing from UN officials and funding from major foundations, to create an International Commission on Intervention and State Sovereignty. Its deliberations fructified in a report issued in December 2001 that coined the term “Responsibility to Protect.” This in turn was incorporated into the recommendations, released in 2004, of the so-called High-Level Panel on Threats, Challenges and Change, appointed by then Secretary General Kofi Annan to spearhead UN reform. This was then codified the following year at a special meeting of the General Assembly, the “world summit.”
For some American liberals, and no doubt for like-minded Europeans, the embrace of humanitarian intervention, now rechristened as Responsibility to Protect, appeared to entail some paradoxical reasoning.
During the humanitarian crises of the early 1990s, three schools of thought could be discerned. One, composed mostly of neoconservatives, advocated armed intervention. Another, composed mostly of more traditional conservatives, opposed involvement on the grounds that our sentiments might be touched but our interests were not. A third group, mostly liberals, wanted to do something to stop the bloodshed but were chary of the use of force.
In the years following the crises, this last group exhibited second thoughts, as exemplified by Clinton’s Rwanda apology. Without abandoning wholesale their distrust of military action, some of these liberals seemed now to feel more comfortable with war for humanitarian ends than for national self-interest, which, as they see it, can too easily slide into self-aggrandizement (a distrust of American purposes that still lingers from Vietnam).
It was these voices, exemplified by Samantha Power, author of a widely acclaimed book about genocide, who now serves on the staff of the National Security Council, that were seen to have triumphed over the “realists” in the Obama administration in persuading the president to undertake the Libya mission. However, the self-doubt that seems to inhere in liberal hawkishness was expressed in Obama’s decision to end US participation in the Libya bombing campaign, in favor of NATO, almost before it had begun.
Moreover, the embrace of humanitarian intervention still left the newly fledged liberal hawks—or if not them, then their foreign counterparts—uneasy on one score. The only country with the capacity to use force decisively in most violent crises was the United States. Was Washington now to be given a free hand in the name of humanitarianism? Might not America’s war hawks exploit such a loophole for their own purposes?
Ironically, even as the end of the Cold War had allowed greater focus on humanitarian crises, it had also stoked dismay over American power. Throughout the Cold War, Western Europe and many countries elsewhere had sheltered under that power against the depredations of the Soviet Union and its surrogates. When the sudden Soviet collapse created a unipolar world, America was no longer needed as a protector, and America’s singular status began to seem ominous, even to allies. Hubert Védrine, French foreign minister, coined the term hyper-puissance to liken the muscularity of the US to a malady of international politics. “The Americans, in the absence of limits put to them by anybody or anything, act as if they own a kind of blank check in their McWorld,” wrote Germany’s leading magazine, Der Spiegel.
These anxieties gave rise to insistent demands from statesmen and commentators for obedience to the principle that the use of force always required the approval of the Security Council. In the Kosovo campaign, this issue was ignored by some of these selfsame luminaries. But it was asserted afresh in 2002 in the run-up to the invasion of Iraq by America and its allies, an invasion that was branded “illegal” by UN Secretary General Kofi Annan because it had proceeded in the absence of such approval.
Thus, in enshrining the principle of R2P, the UN world summit affirmed that any such action must be taken “through the Security Council,” thereby safeguarding the world against any self-appointed policing on the part of the United States.
There is, however, a deep problem with the UN Charter’s conferral on the Security Council of a monopoly of the rightful use of force. The Charter creates a kind of social compact among nations, analogous to the compact among individuals in Lockean political philosophy. Under Article 2(4), which outlaws “the threat or use of force against the territorial integrity or political independence of any state,” members forgo the autonomy of action they enjoyed under customary law in exchange for the protection they will receive from the Security Council. That protection, as spelled out in Chapter VII of the Charter, will be furnished by a massive international military apparatus the Security Council will deploy against any miscreant state that threatens or attacks another. But this entire apparatus is a mirage.
The only two occasions in the UN’s sixty-six-year history on which it performed the function envisioned as its main purpose, that is, to thwart an aggressor, were in Korea in 1950 and in Kuwait in 1991. On both occasions, the Security Council, having no forces of its own, invited members to form a posse under the leadership of the United States. In effect, it acted under Article 51, which acknowledges “the inherent right of individual or collective self-defense.” This article was designed for occasions when the Security Council fails to act. Yet it turned out to be the only article under which it could act.
What clearer confession could there be that the compact at the heart of the Charter is broken? Thus, to repose all authority for the use of force in the Security Council is absurd and dangerous. This applies to humanitarian disasters as well as breaches of the peace.
The UN was deeply involved in Somalia, Rwanda, and Yugoslavia, and its actions mostly made things worse. This prompted Kofi Annan, who headed the body’s peacekeeping department during these crises before rising to become secretary general, to declaim: “Peacekeepers must never again be deployed into an environment in which there is no cease-fire or peace agreement.” In other words, UN forces might play a post-conflict role in restoring normalcy but are generally helpless where one party or another wants to continue killing.
Does the UN Security Council’s authorization of NATO’s air campaign over Libya give grounds for revising this assessment? Does it establish a precedent that is likely to be followed by other life-saving interventions under UN aegis? The answer is no. The cardinal feature of the Libyan episode is that Qaddafi is a nut job who has alienated almost all governments except a few in Africa that he has bribed but that wield little clout.
The clearest measure of Qaddafi’s extraordinary isolation was the unprecedented resolution of the Arab League endorsing intervention in Libya, which paved the way for the Security Council’s vote. But when Syrian dictator Bashar al-Assad, no less a tyrant than Qaddafi, sent tanks and snipers into cities to mow down hundreds of peaceful demonstrators, the Arab League endorsed his action and the Security Council refused to pass so much as a resolution offering mere verbal condemnation.
It is hard to think of regimes as friendless as Qaddafi’s. Perhaps that of North Korea, but since it has nuclear bombs and an ally in Beijing, no action is contemplated even though the Kims have far exceeded Qaddafi in abusing their own subjects. The military junta that rules Burma has been widely criticized, but its fellow members of ASEAN, the Association of Southeast Asian Nations, generally coddle it, and military intervention was never considered when it slaughtered Buddhist monks marching peacefully in 2007.
In short, the Libyan case will prove no more of a precedent than the Security Council’s authorization of the use of force against North Korea in 1950, thanks to an ill-conceived boycott by the Soviet delegate who thus was unable to cast a veto. The UN cover was handy, but the United States would have fought to defend South Korea regardless, probably joined by the same collection of allies.
R2P at best will be a flawed principle of moral action because it cannot be applied even-handedly. No matter what the regime in Beijing, for example, does to its own citizens, the use of outside military force to protect them is unimaginable. Who will invade China? Nonetheless, for this principle to deserve to be taken seriously, it should be applied as uniformly as possible. The situation in Syria is not the same as in Libya: for one thing, the Syrian people have not called for outside armed help. But if Assad goes on a mass killing spree (as his father did in the city of Hama in 1982) and the Syrian dissidents do call for outside help, then what? It is likely that the Obama administration would shed its perverse solicitude for that regime. But it is inconceivable that the UN—i.e., Moscow and Beijing—would.
The world has mostly enjoyed peace since 1945, but that owes nothing to the UN and everything to American power, exercised mostly in the form of guarantees to Japan, NATO, and other allies, rather than in shooting wars. In this era when violence within states is far more common than between them, cases of extreme abuse will sometimes cry out for outside intervention. But the traditional doctrine of humanitarian intervention, invoked by the United States and other democracies at their own discretion, is likely to offer a more usable basis for such action than the shiny new version called R2P, which places all authority in the paralytic hands of the United Nations Security Council.
Joshua Muravchik is a fellow at the Foreign Policy Institute at Johns Hopkins University's School of Advanced International Studies. His upcoming book, How the World Turned Against Israel, will come out next year.