When UN Secretary General Kofi Annan delivered his annual address to the General Assembly in 1999, he bluntly reminded the gathered heads of state of the UN’s failure to act to stop ethnic cleansing earlier that year in Kosovo—a failure that had in turn provoked NATO to initiate an air war over Serbia without Security Council approval. That episode, Annan asserted, “has revealed the core challenge to the Security Council and the United Nations as a whole in the next century: to forge unity behind the principle that massive and systematic violations of human rights—wherever they may take place—should not be allowed to stand.” Annan declared that the UN must embrace the “developing international norm in favour of intervention to protect civilians from wholesale slaughter.”
The norm Annan cited had, of course, begun to take root in the aftermath of the genocide in Rwanda, which the world had witnessed with little more than a cry of anguished guilt, and the ethnic cleansing and mass death in Bosnia, to which the UN Security Council had replied with a humanitarian mission protected by ill-equipped and outnumbered peacekeepers. Annan had been sounding this theme for the previous eighteen months, prompting hope across the West that the UN, and the Security Council, might eventually discover their moral purpose.
That was in the West. In much of “the global South”—the Third World—the question of “humanitarian intervention” had a much different coloration. And after Annan and several others spoke, Algerian President Abdelaziz Bouteflika, an old lion of the developing world and the chairman of the Organization of African Unity, stood at the podium and offered up a litany of complaints against the industrialized world, which, he said, used its dominance of global institutions to enhance its power at the expense of the poor. He asked if this norm of intervention would be deployed “only in weak or weakened states” or in “all states without distinction.” Bouteflika made it plain that the former possibility was by far the likelier. Humanitarian intervention was but the latest tactic of neo-colonialism. “We firmly believe,” he said, “that interference in internal affairs may take place only with the consent of the state in question.”
With his speech, Annan did not so much open as reveal a fault line of global politics. In trying to convert an incipient practice into a worldwide norm, Annan had forced into the open a rancorous argument over the significance, and the salience, of sovereignty. Annan felt that he had to instigate the debate in order to gain consensus on the question. And one could argue that he succeeded handsomely, since heads of state gathered for another such meeting six years later unanimously approved the doctrine of “the responsibility to protect,” a re-formulated descendant of humanitarian intervention.
But that’s too optimistic a view. “Sovereignty” has become an inflamed concept, and not only in matters of intervention. The battle lines seem, if anything, more entrenched today than they were nine years ago; some former neutrals have joined the camp of the sovereign absolutists. Maybe a President Barack Obama will defuse some of the tensions. But the problem existed well before George W. Bush made it worse.
The idea that sovereignty does not confer upon the sovereign an absolute right to do as he wishes with his citizens, or with others who happen to fall under his sway, greatly predates the 1990s. The first Geneva Convention, signed in 1864, obliged states to extend certain protections to wounded soldiers, whatever their nationality. World War II, and above all the Holocaust, put an end to the principle of absolute sovereignty that had dominated political theory and practice since the Peace of Westphalia in 1648. First the UN Charter, and then the Universal Declaration of Human Rights, explicitly asserted that the state has an obligation to protect and advance individual rights. The Convention on the Prevention and Punishment of the Crime of Genocide in 1948 made the inadmissibility of genocidal violence a matter of international law.
But the idea of limited or conditional sovereignty was just that—an idea. In practice, the UN was governed by Article 2(7) of the Charter, which stipulates that “nothing contained in the present Charter shall authorize the United Nations to intervene in matters which are essentially within the domestic jurisdiction of any state.” (Defenders of sovereign rights tend to forget about the admonition, in the ensuing clause, that “this principle shall not prejudice the application of enforcement measures under Chapter VII,” which authorizes the Security Council to respond to aggression.) Anything contained within a state’s borders, including the most heinous violations of human rights, was understood to fall into the realm of domestic jurisdiction. The UN had been created as a globalized mutual-defense pact; it had become, over the years, the locus classicus of the principle of sovereignty, a place where all states were equal, and equally inviolable.
And then the Cold War ended. As impoverished and enfeebled states were abandoned by their one-time patrons, the phenomenon of the “failed state” emerged—in Somalia, Sierra Leone, Haiti, Afghanistan, and elsewhere. The chief threat both to the security of ordinary people and to a stable world order was not the aggressive designs of states, as it had been from the time of the founding of the UN, but the violent implosion of these dying stars. This very rapid change in the nature of war led to an equally swift change in the realm of ideas, for the protections traditionally afforded victims of inter-state aggression had become a kind of Maginot Line. Citizens would henceforth have to be protected from their own rulers.
The idea of humanitarian intervention was popularized in the late 1980s by Bernard Kouchner, a founder of Médecins Sans Frontières and then France’s minister for humanitarian affairs. Kouchner argued for a droit d’ingérence, or right of intervention, in the face of atrocities, though at the time he had in mind interventions conducted not by armies but by humanitarian organizations. And in 1991, the UN Security Council authorized a massive humanitarian campaign, backed by military force, to protect Kurds inside Saddam Hussein’s territory from the consequences of Iraq’s campaign of forced removal.
Then came the sickening failures in Somalia, Rwanda, and Bosnia, when the world did nothing, or much too little and much too late. In Somalia, leading NGOs, including Oxfam America and CARE, argued for an armed intervention to protect the massive humanitarian effort. In Bosnia, advocates took the case yet one step further, declaring that the only way to halt Serb depredations was to stop treating the situation as a humanitarian crisis and embrace the need for military action. Bosnia, and then Rwanda, turned humanitarian intervention into the great moral issue of international affairs, bringing together figures from the European left such as Kouchner and Joschka Fischer, American “liberal interventionists” like Paul Berman and Christopher Hitchens (not then a conservative), and national “greatness conservatives” like William Kristol and Robert Kagan.
The argument for humanitarian intervention finally crystallized when Slobodan Milosevic extended to Kosovo his ruinous campaign to redraw the Balkan map. Western elites—and, to a lesser extent, Western publics—demanded a forceful response. Annan, strikingly, refused to condemn the NATO air war, though the lack of Security Council approval rendered it illegal by the standards of international law. A month into the war, British Prime Minister Tony Blair gave a speech in Chicago in which he defended the campaign as “a just war, based not on any territorial ambitions but on values,” and then sought to reconceive just-war principles for a new, globalized world. He wound up sounding very much like Kofi Annan. “The most pressing foreign policy problem we face,” Blair said, “is to identify the circumstances in which we should get actively involved in other people’s conflicts.” While we should not “jettison too readily” the doctrine of non-interference, that principle “must be qualified in important respects. Acts of genocide can never be a purely internal matter.”
But humanitarian intervention was only the most forceful expression of the challenge to sovereign absolutism. When we think of the peacekeeping missions of the time, we mostly recall the feckless efforts in the Balkans and elsewhere. In fact, such missions grew rapidly in size and ambition, blurring the sharp line between the consensual and the coercive, and thus between bolstering sovereignty and confronting it. As early as 1994, the Security Council agreed to dispatch an American-led force to Haiti in order to restore the country’s democratically elected president—a political, rather than a humanitarian, intervention. In 1999, the government of Indonesia agreed, under immense international pressure, to accept a heavily armed peacekeeping force to halt ongoing violence in East Timor. By the end of the decade, peacekeeping was no longer an exercise in separating warring states from one another, as it had been since the mid-1950s; it was a means of stopping civil wars.
In a yet broader sense, the Westphalian notion of sovereignty increasingly gave ground in the post–Cold War era to new human rights principles governing the rights of women, of children, and of refugees and displaced persons; to new mechanisms of enforcement of such principles, including the International Criminal Court and the doctrine of universal jurisdiction over mass crimes; and to new species of monitoring, whether in the form of UN or regional organs or, perhaps more importantly, non-governmental bodies like Human Rights Watch or Médecins Sans Frontières. In each case, the rights of citizens were understood to take precedence over the rights of states. The burgeoning human rights movement accepted both fundamental claims of the individual against the state and the obligation of outsiders to act to protect those claims. And the UN was very much the site for this act of moral and political rebalancing.
After the stinging reception of his 1999 General Assembly speech, Annan retired from the fray and let others advance this “developing international norm.” The following year, the Canadian government established the International Commission on Intervention and State Sovereignty to try to find a way out of the thicket into which Annan had stumbled. The commission, which recruited members equally from the developed and developing worlds, reformulated the underlying doctrine. All states, the ICISS report concluded, had a “responsibility to protect” their own citizens from atrocities; should they prove “unable or unwilling” to do so, that responsibility moved to the international community, acting through the UN Security Council. Moreover, the essence of this responsibility was to prevent atrocities rather than to react once they had occurred; and the actions required for prevention would often be non-lethal, non-coercive, and even non-urgent. The shift from a right of others to intervene to the responsibility of all to protect, and from a focus on military response to non-military prevention, was designed, as Gareth Evans, the former foreign minister of Australia, wrote, “to find new ground on which to constructively engage”—i.e., to mollify and persuade Third World critics of humanitarian intervention.
It worked. A number of developing countries in Africa and Latin America (fewer in Asia) became advocates of the responsibility to protect. In December 2004, the UN’s High-Level Panel on Threats, Challenges and Change, which Kofi Annan had appointed, released a report that included, among many other proposals, a recommendation that the UN embrace the “emerging norm” of the responsibility to protect. Annan included the proposal in his own 2005 report. Very little from that document survived the ensuing months of debate, as emissaries from the developing world pulled in one direction, and those of the West—above all, John Bolton, the refusenik who then served as U.S. Ambassador to the UN—pulled in another. But the responsibility to protect, remarkably, emerged nearly intact. In September of that year, the heads of state gathered at the General Assembly for the so-called World Summit celebrating the UN’s sixtieth anniversary accepted language stipulating that “each individual State has the responsibility to protect its populations from genocide, war crimes, ethnic cleansing and crimes against humanity,” and that states are “prepared to take collective action, in a timely and decisive manner . . . should peaceful means be inadequate and states manifestly fail” to protect their own populations.
Why so controversial a doctrine made it through so formidable a gauntlet is not quite clear. In remarks to the General Assembly prior to the World Summit, quite a few ambassadors from the developing world stated frankly that their governments did not actually accept the responsibility to protect, or at least its coercive aspects. These included not only perennial spoilers like Cuba, Venezuela, Iran, and Syria, but also Pakistan, Egypt, Brazil, El Salvador, Indonesia, Malaysia, and India. Many of these same countries were even then blocking efforts to fortify the UN’s toothless Human Rights Commission, on the grounds that a human rights body that could single out individual countries for censure constituted an infringement of sovereignty. Certainly the support of countries victimized by atrocities, like Rwanda, and of Third World democracies like South Africa or Chile, helped the responsibility to protect doctrine gain altitude. But the principle may ultimately have won approval, as many of the proposed reforms in the human rights body did not, because states viewed the former as a harmless concession to Western preoccupations, destined to remain a mere exhortation.
Indeed, by the time the World Summit Outcome document was signed, sovereignty had long since become a neuralgic issue in Security Council deliberations. And here a good deal of the blame accrues to the Bush administration, which from its first months insisted that it would not be constrained by international law, and would not accept the legitimacy of international pacts. Perhaps the administration’s single most provocative decision was to withdraw, very loudly and publicly, from the International Criminal Court, and then to demand that states sign bilateral agreements making American citizens immune from prosecution. Here was sovereign absolutism hitched to superpower status. And even as the U.S. held itself immune from external judgment, the White House’s enthusiasm for regime change in countries whose policies it opposed, whether the “Axis of Evil” or such lesser evils as Venezuela, implied a deep nonchalance toward the sovereignty of others.
And then, in the fall and winter of 2002–03, came the long, agonizing melodrama of the UN deliberations over war in Iraq, a debate that the administration made plain would have little or no bearing on its own conduct. The debate did not address directly the question of sovereignty. Washington described Saddam Hussein as a threat to international peace and security. But both President Bush, and, to a much greater extent, Tony Blair, sought to justify the war on humanitarian grounds. This claim, on behalf of a war widely seen, rightly or not, as unprincipled and unnecessary (unlike Kosovo), offered supreme vindication to those, like President Bouteflika, who saw humanitarian intervention as an instrument of neo-colonial control. Here was a rogue American administration eager to clothe its geopolitical aspirations in universalist principles. Beyond that, American unilateralism and high-handedness—the transparent wish to turn the UN into an extension of national policy—poisoned the atmosphere and provided a point of solidarity amidst the varied and conflicting interests of the developing world. That world—the Non-Aligned Movement, to use the archaic language still in vogue at the UN—would have to defend itself from American hegemonism. And sovereignty would be the first line of the defense.
Since that time, the argument over sovereignty has regularly brought the wheels of the Human Rights Council, the International Criminal Court, and the Security Council grinding to a halt. The most vivid example has been Darfur. The Council’s failure in 2004 and thereafter to take action against the mass killings in Darfur was overdetermined; not the least of the causes was the unwillingness of those very Western states that made the most noise on the subject to take forceful action. But at least Washington, Paris, and London wanted to do something. Yet their every effort to apply, or even threaten, sanctions against the Sudanese regime was shut down in the Council by Russia and China and non-permanent members from the developing world. The African Union and the Arab League insisted that the campaign of terror sponsored by Khartoum was, if not a strictly domestic matter, then one that only fellow Africans had the right to address. Sudanese President Omar al-Bashir depicted his country as a martyr to the Western crusade to crush Islam; he had little trouble finding takers. Like Robert Mugabe in Zimbabwe, and the military regime in Burma, Bashir has been able to continue perpetrating his brutalities thanks to the cordon sanitaire of sovereignty.
The World Summit turned out to constitute the high-water mark of the campaign against sovereign absolutism. For several years thereafter, the question of R2P, as it came to be known, went into eclipse. There were no fresh outbreaks of mass atrocity and no appetite to apply the doctrine to such pre-existing nightmares as Darfur or the Congo. The skeptics had little to react to; R2P was indeed a few paragraphs on a piece of paper. That, however, began to change in late 2007, when the new UN Secretary General, Ban Ki-moon, attempted to create an office dedicated to advancing, and operationalizing, the new norm. Using the UN’s budgetary process as their cudgel, opponents blocked any funding for the office, and even refused to permit the official whom Ban hired—and had to pay for himself—to be called “the special advisor for the responsibility to protect.” (He would simply be known as “the special advisor.”) As R2P began to look like more than an idle wish, the Egyptians insisted that the General Assembly, not the Security Council, should decide when a situation had crossed the threshold of atrocity. Cuba claimed that the doctrine had not, in fact, established any new responsibility. South Africa, a prominent former advocate, switched sides, though it now appears to be in the process of switching back.
And the sovereign absolutists rallied on behalf of the alleged perpetrators of atrocities. Kouchner’s suggestion that the Security Council consider forcible action to deliver humanitarian supplies to Burma in the aftermath of Cyclone Nargis met with denunciations from Burma’s neighbors far harsher than those inflicted on the regime itself. And when, in the aftermath of a disputed election last spring, Robert Mugabe’s security forces began systematically hunting down opposition members and barring access to humanitarian organizations, Russia vetoed an American-sponsored Security Council resolution calling for sanctions and an arms embargo. The measure, Moscow complained, constituted an unwarranted intrusion into Zimbabwe’s domestic affairs—although several months later the Russians adopted a notably more liberal view of the subject, invoking R2P to justify their war with Georgia.
How, then, shall we understand the sovereignty backlash? The reaction plainly has a great deal to do with the Bush administration. Today almost any proposed course of action intending to advance American or Western values, whether sanctions in Sudan or democracy promotion in the Middle East, is routinely rebutted with a single word: Iraq. But Iraq also serves as a convenient excuse for those who opposed such measures long before George W. Bush ever became president. The same regimes that railed against the doctrine of humanitarian intervention express doubts today about the responsibility to protect. The change of language has persuaded some skeptics, and may persuade more in the months and years to come, but it has scarcely converted those who regard sovereignty as sacrosanct. In fact, the word “backlash” may be misleading. The Algerias of the world have not suddenly discovered the virtues of Westphalian first principles; they have reacted to what they see as the endangerment of those principles.
As to why autocracies like Russia and China, or Iran and Cuba, oppose the responsibility to protect, there is no mystery here: to embrace any of these doctrines or practices would be to jeopardize their own mechanisms of control. Such countries do not accept—save in official rhetoric—the principle that individuals have rights which supersede those of the state, and certainly not the Lockean premise that sovereignty originates with the citizen. A state may have an obligation to protect its citizens, but those citizens have no right to redress from others on this count. Thus, Cuba’s UN ambassador recently explained that, while the doctrine that each state has an obligation to protect its citizens from atrocities was self-evident, “We do not agree with the notion that [state sovereignty] is outmoded.” This was not, of course, because citizens deserved no protections, but rather because “without this principle, small and weak nations will be cast down at the mercy of the big and the strong.” (John Bolton, another sovereign absolutist, argued in 2005 that no state can be obliged to come to the aid of citizens elsewhere.)
India and South Africa, however, are not autocracies but thriving democracies; indeed, India conducted a kind of humanitarian intervention in Bangladesh—then East Pakistan—in 1971, while the current rulers of South Africa loudly and successfully urged the UN to impose sanctions on the apartheid regime. What is more, these states accept the obligation to apply human rights standards to their own citizens, and accept as well the responsibility to permit outsiders—election monitors, human rights investigators—to examine and report on their practices.
Why, then, do these states balk at the idea of acting to protect citizens elsewhere? One explanation—and perhaps the least creditable—is generational and political: The ruling establishments in many of these nations were shaped either by anti-colonial struggles or by the “anti-imperialist” politics of the sixties and seventies. (In a recent conversation, the ambassador to the U.S. of a major Asian nation ascribed his country’s overall posture at the UN to precisely this ideological reflex.) This, in turn, seems to have determined the politics of the Non-Aligned Movement, in whose deliberations countries like Cuba seem to carry far more weight than does, say, Costa Rica. Nations ambitious for Third World leadership status—India and South Africa, to pick two—make a point of proving their anti-Western bona fides.
But tired ideology explains only just so much. Most developing countries, democratic or not, have had no more than fifty or sixty years of sovereign status, which in most cases they wrested from colonial masters. They can scarcely be expected to treat sovereignty with the nonchalance of Western nations who regard such protections as strictly ceremonial, like cannons rusting in an armory basement. And nations of much older lineage, like Mexico or Brazil, have scarcely forgotten the long legacy of U.S. intrusion in the domestic affairs of Latin American states—a form of meddling that includes the occasional coup. That may be history, but the Bush administration provided an unwelcome reminder when it recognized the regime that briefly overthrew the democratically elected, if deeply unpleasant, Hugo Chavez of Venezuela.
Finally, the pride of the developing world bridles at the supposed universality of Western principles. As major developing countries come to account for a growing share of the world’s wealth, its growth, and its consumption, the willingness to take intellectual dictation from the West diminishes accordingly. In The New Asian Hemisphere, Kishore Mahbubani, a veteran Singaporean diplomat, delineates the process of “De-Westernization” by which, he asserts, nations like India and China have disencumbered themselves of Western influence. Mahbubani writes witheringly of the Western inability to recognize its own eclipse, or the loss of its claims to moral superiority: thus advocates of the responsibility to protect vowed “never again” but “remained silent . . . when Lebanese civilians were killed” by Israeli troops in 2006. This line of reasoning sounds like special pleading on behalf of authoritarianism, or at least of a policy of not-very-benign neglect, yet its tone resonates with Third World elites.
What can the West—and, specifically, President Barack Obama—do to reduce the inflammation around the idea of sovereignty? Plainly, Obama must demonstrate that the universalist human rights principles espoused by the U.S. apply above all to itself. A U.S. that arrogates to itself the right either to ignore the Geneva Conventions on the treatment of enemy combatants, or to interpret them in such a way as to render them meaningless, has no standing to insist that others abide by the same rules. Nor should the U.S. balk when, for example, a UN human rights rapporteur wants to examine conditions in our jails: Conditional sovereignty means not only respecting individual rights but being transparent about the means by which one safeguards those rights.
We also need to accept the legitimacy of complaints about the global distribution of power. If states agree to forfeit some traditional aspects of sovereignty, they have a right to ask: “To whom?” And the answer cannot be: “To us, the West.” We ought to accept that the G8 is a historical relic and empower a more inclusive G20 in its stead. As we rebuild international financial institutions in the wake of global economic crisis, we should take another glance at voting rights in the IMF and the World Bank. Finally, we should commit ourselves to expanding the permanent membership of the Security Council. This is the most hopeless of hopeless causes; and yet if the Obama administration committed itself to cutting this Gordian knot, the impasse might come to an end. We could offer no better proof of our willingness to incorporate the developing world into the highest levels of decision making.
There would have to be a substantial quid pro quo: Those newly enfranchised citizens, India and Brazil and South Africa and Nigeria (or perhaps Egypt), would have to put aside the rhetoric of anti-colonialism and Third World solidarity. They would have to add to, not subtract from, the quotient of political will to be mustered in the face of abhorrent actions by fellow members of the Non-Aligned Movement. And they would have to accept that the principles that undergird the responsibility to protect are not the hobbyhorse of a Western elite, but rather the property of all mankind.
James Traub is a contributing writer for The New York Times Magazine and director of policy at the Global Centre for the Responsibility to Protect. His most recent book is The Freedom Agenda: Why America Must Spread Democracy.