Issues in Data Collection: International Conflict
Summary and Keywords
Conflict data sets can shed light on how different ways of measuring conflict (or any other international relations phenomenon) result in different conclusions. Data collection procedures affect our efforts to answer key descriptive questions about war and peace in the world and their relationship to other features of interest. Moreover, empirical data or information can answer some pointed questions about world politics, such as, “Has there been a decline in conflict in the international system?” The development of data on characteristics relevant to the study of international relations has undeniably allowed a great deal of progress to be made on many research questions. However, trying to answer seemingly simple descriptive questions about international relations often shows how data rarely speak entirely for themselves. The specific ways in which we pose questions or try to reach answers will often influence our conclusions. Likewise, the specific manner in which the data have been collected will often have implications for our inferences. In turn, proposed answers to descriptive questions are often contested by other researchers. Many empirical debates in the study of international relations, upon closer inspection, often hinge on assumptions and criteria that are not made fully explicit in studies based on empirical data.
Many of the central questions of interest to researchers and the general public about world politics are inherently descriptive. Some prominent examples include: Has there been a decline in conflict in the international system? Was the 19th century more peaceful than the 20th century? How common are democratic institutions around the world, and how has the extent of democracy changed over time? Does there appear to be any relationship between such changes or “waves of democracy” and conflict in the international system?
Descriptive questions like these should in principle be answerable from empirical data or information about the relevant features. The turn to greater use of scientific methods in the study of international relations, exemplified by the Scientific Study of International Processes section of the International Studies Association, is premised on how attention to the canons of scientific inquiry, systematic data collection, and empirical testing of propositions can help improve our understanding of world politics. Following the behavioral revolution in the social sciences in the 1960s, several databases have been developed that attempt to take stock of core features relevant to international relations, such as conflict between states and democratic institutions. Given the development of such data sources, one might expect that answering the questions posed above should be relatively straightforward, and that clear and uncontroversial answers to these questions could be reached simply by looking at the data at hand.
The development of data on characteristics relevant to the study of international relations has undeniably allowed a great deal of progress to be made on many research questions. However, trying to answer seemingly simple descriptive questions such as those posed above will often alert us to how data rarely speak entirely for themselves. The specific ways in which researchers pose questions or try to reach answers will often influence conclusions. Likewise, the specific manner in which the data have been collected will often have implications for inferences. In turn, proposed answers to core descriptive questions such as those outlined above are often contested by other researchers. Many empirical debates in the study of international relations, upon closer inspection, often hinge on assumptions and criteria that are not made fully explicit in studies based on empirical data.
This essay reviews conflict datasets and the literature about them, seeking to shed light on how different ways of measuring conflict (or any other international relations phenomenon) result in different conclusions. It examines how data collection procedures affect efforts to answer key descriptive questions about war and peace in the world and their relationship to other features of interest. The article starts with a specific example, namely the different datasets on the frequency of conflict after the Cold War and the different conclusions about global conflict that emerge in the resulting scholarly literature. The second section draws on this example to highlight some critical issues regarding how the data are collected and the consequences for the goal of building generalizable knowledge. The third section then analyzes a wider range of data collection procedures for conflict datasets, engaging with these core issues. The article concludes with some practical recommendations for data collection efforts.
Approaches to Post–Cold War Conflict Data: Same World, Disparate Conclusions
Few questions can be of greater interest to conflict studies than the issue of how common conflict is in the world and whether there are any discernible trends in the frequency of conflict over time. Many prominent scholars and policy makers originally claimed that the end of the Cold War would unleash a period of instability with a heightened risk of new conflicts (see, e.g., Mearsheimer, 1990 for a particularly pessimistic view; Mueller, 1994 provides an interesting review of such predictions). Although such ideas were—and continue to be—widespread, an early empirical analysis of the so-called Uppsala Armed Conflict Data (ACD) project collecting information on conflict incidence for the years 1989–1994 actually indicated that there appeared to be a decline in new conflicts and fewer ongoing conflicts in the world after the Cold War (see Wallensteen & Sollenberg, 1995). This finding of a so-called post–Cold War dip in the frequency of armed conflict has been replicated by other researchers, with different data sources on conflict incidence (Gurr, 2000), and has gradually become accepted among many researchers (see, e.g., Goldstein, 2011; Mueller, 2004; Pinker, 2011). The post–Cold War dip in conflict also provides a rare example of an empirical finding that has managed to get disseminated beyond the confines of academia and conflict studies (see Goldstein, 2002; Mack, 2002).
This decline in the number of conflicts noted in the ACD project is undoubtedly an empirical finding, which can be derived directly from these data by simply counting the number of conflicts that have been assigned proper names. This is not to say, however, that all researchers accept the inference that there is less conflict in the world (see, for example, Harrison & Wolf, 2012; Kaplan, 2006). Some researchers have disputed the existence of such a post–Cold War dip in conflict, seemingly also basing their conclusions on empirical evidence and data. Sarkees et al. (2003, p. 49), for example, argue that data on conflict “reflect a disquieting constancy in warfare” over the past one hundred and fifty years. Hewitt et al. (2007) question the significance of the observed decline in the number of conflicts, as conflicts during the supposed dip seem particularly extensive in scope measured by number of participants. Harrison and Wolf (2012, p. 1055) assert that “[w]ars are becoming more frequent …, the frequency of bilateral militarized conflicts among independent states has risen steadily over 131 years from 1870 to 2001.” Others argue that “new wars” and forms of violence excluded from traditional analyses of war, such as terrorism and genocide, have replaced traditional forms of interstate conflict (Kaldor, 2006), although systematic studies using other data lend no support to claims that genocide and terrorism have become more widespread or severe over time (e.g., Clauset et al., 2007; Rummel, 1995; Kalyvas, 2001).
The fact that researchers can draw such disparate conclusions about a seemingly simple descriptive question on the extent of conflict is bound to puzzle many observers. Upon closer inspection, it is possible to show that these conclusions often depend on obvious differences in research design as well as more subtle differences in assumptions about how to analyze the available data and the specific manner in which these data have been collected. For example, Wallensteen and Sollenberg (1995) find a dip in the number of conflicts while counting the number of unique conflicts in the ACD data. By contrast, Sarkees et al. (2003) argue for an alarming constancy of war, since there is no clear linear trend in a regression of conflict fatalities on time in the Correlates of War data, whereas Hewitt et al.’s (2007) measure of conflict participants does not decline as fast as the number of conflicts, since many international interventions in recent conflicts such as Kosovo and Afghanistan have involved large international coalitions. Harrison and Wolf (2012) in their discussion equate wars with Militarized Interstate Disputes (MIDs), although the latter include threats and need not entail any lethal violence. These are clearly different types of data on violence, and there is no inherent reason why studies using different criteria must yield similar conclusions about how common conflict is in the world.
However, the controversy over the trends in conflict should impress upon us how answers to seemingly descriptive questions almost inevitably require a number of important additional assumptions, and how it rarely will be possible to simply look directly at available data and reach answers that will be universally accepted as valid. The argument here is not that it is inherently impossible to answer such questions, or that all approaches are arbitrary and equally valid ways of answering a question. For example, one may question whether it is reasonable to use the number of participants (by very inclusive criteria, including UN operations) as a measure of the scope of conflict, especially when the number of participants in the “coalition of the willing” (36) in the Iraq War is “larger” than World War I (32) (see Gleditsch, 2008). It is also problematic to equate Militarized Interstate Disputes with “wars” and use such information to assert that the frequency of wars is increasing, especially given the challenges in recording minor conflicts in a consistent and exhaustive manner as we go back in time (see Gleditsch & Pickering, 2014). However, a deeper point is that there will often be no natural or universally valid empirical measures, and any debates on such questions must clarify the potentially contentious issues in collecting and analyzing data. Researchers must be as explicit as possible about the assumptions entailed in their answers and be prepared to defend these before there can be a meaningful evaluation of differences in their conclusions and their validity.
Core Issues in Conflict Data Collection
This section proceeds from the example of post–Cold War conflict datasets to more general considerations arising in collecting data for the study of international conflict and for the study of international relations more generally. It will by necessity have to be somewhat selective in the specific issues covered, and will focus on problems of collecting data on core features that lie at the heart of the discipline, such as how to identify states and conflict between them. However, many of the issues discussed are rather general and likely to be relevant for a wider range of data collection projects and research questions in international relations. Note that the section is concerned with issues that may hinder accurate inferences by the researcher and thus focuses on instances in which different measurements produce different conclusions. When scholars use different measures and arrive at the same conclusion, being able to demonstrate robustness is generally encouraging.
In general, data should be collected in a manner that maximizes content validity and objectivity. Empirical measures have content validity to the extent that the resulting observable indicators reflect the theoretical concepts that researchers are interested in (see Campbell & Stanley, 1963). For example, if the intensity of violence is to be measured, and bodily harm is believed to be an important manifestation of the degree of violence, then the number of fatalities in an event may have greater content validity than the number of participants. One way to think of objectivity here is as the degree to which the explicit operational definition can be applied in an intersubjective manner. Data become more subjective if two researchers may disagree on how individual observations should be classified or how the criteria should be applied to actual cases. For example, although it is relatively easy to establish whether a person is alive or dead (in the sense of not breathing) in an uncontroversial manner, it may be much more difficult for observers to agree upon cause of death (e.g., did death occur due to a respiratory infection or should HIV be considered the underlying cause?). Likewise, two researchers may be working with a seemingly objective operational definition defining a type of conflict based on a minimum threshold of direct fatalities caused by battles, but have different views on what should be considered direct battle-related fatalities.
Moreover, data must allow comparisons that are as valid as possible between individual observations or countries and over time (e.g., King et al., 2003; Summers & Heston, 1991). There may be many instances where it would be undesirable to insist on using the same operational criteria over time and space. To use a simple example, comparisons of energy consumption per capita as a measure of economic capacity over time may be questionable if technological change implies that more can be produced with the same amount of input, and comparisons across units at the same point in time may be inappropriate if these have very different energy needs for heating or different incentives to seek efficient energy use. As another example, prices vary, and a given amount of money in nominal exchange rates will suffice for very different quantities of a commodity in different markets. Therefore, it is often not ideal to use nominal exchange rates—calculated the same way across time and countries—as an economic indicator when making comparisons, and for this reason economists frequently use purchasing power parity–adjusted figures when comparing wealth between countries. However, it is important to be attuned to how changing operational criteria may influence conclusions from comparisons across time and space. Stated differently, it is cause for concern if data produce relative measures or rankings of observations that either fail obvious construct validity tests or lead to changes or rankings that may be in part artifacts of the operational measures.
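The exchange-rate example can be made concrete with a toy calculation. Every figure below is invented for illustration and is not real economic data:

```python
# Toy comparison of incomes across two hypothetical countries, A and B.

# Annual income in local currency units
income_local = {"A": 40_000.0, "B": 200_000.0}
# Market exchange rates: local currency units per US dollar
exchange_rate = {"A": 1.0, "B": 25.0}

# Nominal comparison: convert incomes at market exchange rates
nominal_usd = {c: income_local[c] / exchange_rate[c] for c in income_local}

# Suppose a fixed basket of goods costs 2,000 USD in the reference economy,
# 2,000 local units in A, but only 20,000 local units in B. The PPP
# conversion factor is the local basket price divided by the reference price.
basket_price_local = {"A": 2_000.0, "B": 20_000.0}
ppp_factor = {c: basket_price_local[c] / 2_000.0 for c in basket_price_local}

# PPP comparison: convert by relative purchasing power instead
ppp_usd = {c: income_local[c] / ppp_factor[c] for c in income_local}

print(nominal_usd)  # A appears five times richer than B
print(ppp_usd)      # at PPP, the gap narrows to a factor of two
```

The substantive comparison (is A five times or twice as rich as B?) changes entirely with the conversion rule, which is exactly why the choice of operational measure must be made explicit.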
The different goals of objectivity, content validity, and maximum comparability can sometimes present very real trade-offs in data collection efforts. Coding schemes that may be difficult to apply in an intersubjective manner, such as the definition of a state based on relative autonomy or the definition of a crisis based on perception of threat, are often designed to increase the content validity and maximize the ability to draw meaningful comparisons across units. That is, apparent ease of coding consistency is not of much help in itself unless the measures correspond well to the underlying theoretical constructs. When choosing coding rules, one must try to maximize both objectivity and content validity, even though it may be difficult to fully preserve all the desired goals at the same time.
Finally, just as important as potential problems in data collection are the problems arising from unobserved or nonrandomly missing data in comparisons (Rubin, 1976) and what are called denominator effects (Firebaugh, 1992; Dixon & Boswell, 1996), which occur when measures are shares or proportions and thus potentially take on different meanings in different subsets of the population. In particular, one should be very skeptical of descriptive claims if there are likely to be major problems with observing the phenomenon of interest in particular circumstances, or if descriptive claims are based on shares or proportions where the denominator may leave out important relevant actors or individuals. For example, it is difficult to reliably assess the frequency of famine in the 19th century, since reliable information is not available and such incidents are likely to be undercounted in many parts of the world in the recorded data.
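How nonrandom missingness can manufacture a spurious trend is easy to see in a minimal sketch. The counts and reporting shares below are invented assumptions, not historical estimates:

```python
# Toy illustration: suppose the true number of famines is constant at 10 per
# decade, but the share of events that make it into the historical record
# grows over time as reporting improves.
decades = range(1800, 2000, 10)
true_count = 10
report_share = {d: min(1.0, 0.2 + 0.004 * (d - 1800)) for d in decades}

# Expected recorded counts: a flat truth, but a rising record
recorded = {d: true_count * report_share[d] for d in decades}

print([round(recorded[d], 1) for d in decades])
# rises from 2.0 in the 1800s to 9.6 by the 1990s, although the underlying
# frequency never changed
```

A naive reading of the recorded series would conclude that famines became almost five times more frequent, when the only thing that changed was the probability of observation.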
Data Collection Procedures in International Conflict Datasets
The remainder of this essay will consider how the core data collection issues may arise with regard to key questions in some often-used conflict datasets.
The Units: How Many States or Actors?
Identifying the set of the relevant actors or units in the field could be seen as the most fundamental issue in collecting international relations data. The traditional approach in international relations theory is to take the state as the key actor or fundamental unit. Hence, to answer questions about the relative frequency of wars between states, it is necessary to first identify the population of states that might enter into conflict with one another.
The literature on the state offers a number of conceptual definitions. An early influential example is Weber’s (2004) definition of the state as “a relation of men dominating men” with a monopoly on the legitimate use of violence. More recently, North (1981, p. 21) defines the state as “an organization with a comparative advantage in violence, extending over a geographic area whose boundaries are determined by its power to tax constituents.” Krasner’s (1995–1996) discussion of the possible changing and variable nature of the state highlights territoriality and autonomy as the defining characteristics of the Westphalian state.
Such efforts to develop conceptual definitions of the state rarely discuss how one might operationalize these defining characteristics in practice. Probably the best-known effort to identify states in the international system empirically is the so-called Correlates of War (COW) list, first developed by Russett et al. (1968). This list does not follow common definitions of the state that emphasize features such as territorial control and autonomy, but rather focuses on a set of criteria based on external recognition and a minimum population size, where the specific criteria for identifying states change over time. In brief, prior to 1920, the criteria for inclusion are (1) recognition as a state by the United Kingdom and France and (2) a half-million minimum population threshold. After 1920, units are considered states if (1) they are members of the League of Nations or the United Nations, or (2) they exceed the half-million population threshold and receive diplomatic missions from two major powers.
Whether this is a valid approach to identifying states depends upon the specific research question one has in mind. Gleditsch and Ward (1999) criticize the content validity of the COW list criteria, since their operational definitions exclude a number of important actors with clear autonomy and territoriality in the 19th century when these were not recognized by either the United Kingdom or France. Moreover, the fact that different criteria are used over time means comparisons across time based on the COW list may be of questionable validity. Examples of questionable exclusion prior to the Treaty of Versailles in 1919 include China and Iran, which were never colonized by any other state, as well as countries that were de facto autonomous in internal affairs but chose to let another state retain influence over foreign policy, such as the former British dominions Australia and Canada. Lemke (2002) notes that the COW list excludes many regionally important actors. In particular, many Latin American states appear in the COW list substantially later than their conventionally recognized dates of independence, and this makes it difficult to study their interaction if relying on the COW list to identify states. More recently, the COW list has come to encompass a number of microscopic states, by virtue of UN membership, such as Palau (population 18,100), whose role as actors in world politics is somewhat questionable.
Gleditsch and Ward (1999) suggest alternative criteria that emphasize autonomy and territorial control rather than major-power recognition, which may be more appropriate for researchers interested in making statements about independent states than the major-power-centered conception of the international system guiding the construction of the COW list. As can be seen from Figure 1, the Gleditsch and Ward data yield a substantially larger number of states in the 19th century than the COW list, and a lower number of states following the rush of many microstates to join the UN in the 1990s. Especially during the 19th century, the COW list leaves large parts of the globe excluded from the population. As a consequence, any measure based on the number of states in the denominator could differ considerably depending on the definitions used.
Bremer and Ghosn (2003) criticize Gleditsch and Ward for relying on subjective criteria to identify states. However, using the terminology and concepts introduced above, the costs of seemingly more subjective criteria that may be difficult to apply in an intersubjective manner must be considered relative to the potential problems of content validity of alternative criteria that may appear more objective. As the debate over the status of Kosovo highlights, external recognition can reflect political considerations as much as whether states have the characteristics that we expect states to have such as autonomy or territorial control. Objections from particular states can, for example, delay or prevent recognition by international organizations, or states may push for recognition of states that may have only limited autonomy or territorial control. This aside, even if the seemingly objective nature of the criteria might seem an attractive feature of the COW list, the stated rules are not actually sufficient to replicate the list. Gleditsch and Ward (1999) point out how Singer and Small excluded a number of states that met their pre-1920 criteria and included others that did not, based on various ad hoc decisions outside the explicit criteria. Ukraine and Belarus, for example, were independent UN members during the Cold War and signatories to the original UN charter, yet neither was considered a system member in the COW list. Gleditsch and Ward do not include these states, as their independence from the Soviet Union is judged to be questionable (or largely fictitious). The more general point emphasized here is that the COW list relies on similar subjective judgments supplementing or modifying the explicit operational criteria.
Identifying Conflict between States
This section focuses on how differences in definitions of conflicts used in various conflict data collection efforts will influence inferences on trends in conflict and peace. Decisions about what to consider a state will have obvious implications for conclusions about conflicts between them. Gleditsch (2004b) identifies 22 additional wars between states in the Gleditsch and Ward list that satisfy the COW project’s criteria for what constitutes an interstate war, but which were excluded from the COW war data by Small and Singer (1982) because at least one of the states involved was not recognized as a system member. However, since most efforts to collect data on conflict tend to use a particular definition of the population of states, it is generally difficult to ascertain how changing the criteria for states might influence conclusions.
Before making any inferences about trends, it is necessary to define what is meant by conflict and peace. Here again, there is a rich conceptual literature offering many possible definitions (see, e.g., the extensive review in Most & Starr, 1989). Boulding (1963, p. 5), for example, proposes that conflict can be defined “as a situation of competition […] [where] parties are aware of the incompatibility of their potential future position and […] [each] wishes to occupy a position that is incompatible with the position of the other.” The emphasis on perceived conflict here allows the exclusion of alleged forms of conflict that may not be understood by the actors, such as structural violence or certain forms of Marxist exploitation (Roemer, 1982; Høivik & Galtung, 1971). However, it is still far from clear how one would apply Boulding’s definition in efforts to collect empirical data (see, e.g., Gleditsch, 2002).
Despite the emphasis on incompatibilities as the defining characteristics in most conceptual definitions, most empirical conflict data collection efforts emphasize manifest forms of conflict or events involving the use of violence or perceived crisis. Incompatibilities may be enduring and take on the character of constant or regular features of relations—Spain, for example, continues to dispute UK sovereignty over Gibraltar, although it no longer acts in ways designed to enforce its claims. It would be nearly impossible to make an exhaustive catalog of all such latent incompatibilities. By contrast, violent events and spectacular crises that are clearly departures from normal relations are much easier to observe and record.
The first data collection efforts on conflict emphasized the use of violence as a distinguishing characteristic of events (Richardson, 1960), and this approach was later adopted by the COW project (Singer & Small, 1972). The COW project in essence defined wars to be events causing more than 1,000 battle deaths and provided data for the period following the Congress of Vienna in 1814–1815. The COW war definition has the advantage of separating wars or large events from minor controversies or quibbles. Moreover, the core definition appears—perhaps deceptively—simple and easy to apply. Upon closer inspection, however, many ambiguities arise with regard to how one would identify wars from this definition (see Reiter, Stam, & Horowitz, 2014). Should one count deaths of active combatants or formal government soldiers, or should we include battle deaths of civilians? How do we determine if deaths were directly caused by conflict? How do we identify the start and end dates of a conflict?
There has been considerable confusion over the specific operational criteria for identifying wars in the COW project, fueled in part by how the criteria have changed over time and doubt as to whether the criteria have been applied consistently over time (Gleditsch, 2004b; Sambanis, 2004). Moreover, although deaths in principle clearly can be counted, determining whether the death threshold has been met involves relying on estimates of battle deaths, which in practice vary widely for many conflicts (see Lacina & Gleditsch, 2005). The paucity of information on many parts of the world and time periods implies that the quality of data on casualties is likely to be very low for many violent incidents (see Shirkey & Weisiger, 2012). Hence, despite the apparent objective nature of the criteria, the resulting list of wars encompasses many subjective judgments, and it would be difficult—if at all possible—to replicate the list without additional information on the sources and subjective estimates used. Reiter et al. (2014) question whether many of the wars included in the COW data reach the battle death threshold.
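The sensitivity of a threshold-based war definition to estimate uncertainty can be illustrated with a minimal sketch. The conflicts and figures below are entirely invented for the example:

```python
# Toy illustration: battle-death estimates often come as (low, high) ranges,
# so whether a conflict crosses a COW-style 1,000-death threshold can depend
# on which end of the range one trusts.
events = {
    "conflict A": (600, 1_400),    # (low estimate, high estimate)
    "conflict B": (1_200, 5_000),
    "conflict C": (900, 980),
}
THRESHOLD = 1_000

# Classify using only the low estimates, then only the high estimates
wars_low = [name for name, (lo, hi) in events.items() if lo >= THRESHOLD]
wars_high = [name for name, (lo, hi) in events.items() if hi >= THRESHOLD]

print(wars_low)   # only conflict B qualifies
print(wars_high)  # conflicts A and B both qualify
```

Even with identical coding rules, the resulting list of “wars”—and hence any count of wars over time—differs depending on which estimates are used, which is why the sources behind each estimate matter for replication.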
Although a large number of studies have analyzed the incidence of war, many scholars have become concerned that looking only at the conflicts that eventually escalate to major wars may leave out many of the relevant conflictual situations and limit empirical variation that might help us better understand conflict and peace. Large wars are, fortunately, relatively uncommon events (see Richardson, 1948), and many argue that we can get a better understanding of under what conditions incompatibilities may escalate to violence if we take a more comprehensive look at crises and disputes between states from which war could arise. The fact that conflicts display a scale-invariant distribution where frequency is inversely proportional to magnitude could also be taken to suggest that small and large wars represent draws from the same underlying distribution where the events that give rise to large wars may not be inherently different from smaller conflicts (see, e.g., Cederman, 2003; Richardson, 1948).
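The scale-invariance claim can be sketched with a simple power-law model of severity, P(severity ≥ s) = (s/s_min)^(−α). The values of s_min, α, and the event count below are illustrative assumptions, not estimates from any conflict dataset:

```python
# Under a power law, raising the severity threshold by a constant factor
# always cuts the expected event count by the same constant factor,
# regardless of the starting scale -- small and large wars sit on one curve.
s_min, alpha, n_events = 25, 1.5, 10_000

def expected_count(threshold):
    """Expected number of events at or above a given severity threshold."""
    return n_events * (threshold / s_min) ** (-alpha)

for s in (25, 250, 2_500, 25_000):
    print(s, expected_count(s))

# Each tenfold increase in the threshold divides the count by 10 ** alpha,
# whether we move from 25 to 250 deaths or from 2,500 to 25,000.
print(expected_count(250) / expected_count(2_500))
```

This is the sense in which frequency is inversely proportional to magnitude: on a log-log plot, the counts fall on a straight line with slope −α, with no break separating “small” from “large” conflicts.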
This section highlights three alternative data collection efforts that have tried to identify such broader sets of conflicts. The Uppsala ACD project retains the focus on violence and identifies “armed conflicts,” defined as “a contested incompatibility that concerns government or territory or both where the use of armed force between two parties results in at least 25 battle-related deaths […] [of which one must be] the government of a state” (Gleditsch et al., 2002, pp. 618–619).
Other data collection efforts have dropped the explicit emphasis on violence, and instead have looked for criteria to help identify situations where violence may be likely. The COW project has produced a dataset on Militarized Interstate Disputes (MIDs), defined as “cases in which the threat, display or use of military force short of war by one member state is explicitly directed towards the government, official representatives, official forces, property, or territory of another state” (Jones et al., 1996, p. 168, emphasis added). This definition makes it explicitly clear that coders must identify whether an official of a state has made a threat to use force for something to be counted as a MID. However, Jones et al. (1996)—the main article presenting the 2.0 MID data—is less explicit on what it takes for something to constitute a threat to use force or what constitutes an official of a state, and there is no formal description of what criteria coders may have used to classify this in practice. The new 4.0 version of the MID data introduces a new approach to collecting information and some revisions and clarification of the coding rules (see Palmer et al., 2015), and there is some indication that stricter application of the criteria may result in fewer cases ultimately classified as disputes.
Brecher et al. (1988) and Brecher and Wilkenfeld (1997) have developed a dataset of international crises—the International Crisis Behavior (ICB) dataset—defined by a set of necessary criteria that together are sufficient: (1) “a threat to one or more basic values,” (2) “an awareness of finite time for response,” and (3) “a heightened probability of involvement in military hostilities” (Brecher & Wilkenfeld, 1997, p. 7). Note that the ICB definition of crisis requires a judgment on whether or not the actors perceived a threat, finite time for response, and heightened probability of military escalation. Hence, to determine what events constitute crises, coders must consider primary and secondary accounts of both the tangible actions that transpired and how actors interpreted those actions.
Although these data collection efforts certainly are very useful and commendable, the expanded scope of these efforts introduces additional potential problems to those that we have previously listed. If there is a problem in identifying major wars in information-poor environments as one goes back in time, then there is all the more reason to worry about whether we have sufficiently good historical data to identify such militarized interstate disputes and threats that may not entail any violence or casualties consistently in developing countries and in the 19th century. Gleditsch (2002, pp. 76–78) notes that a disproportionate number of MIDs is reported for European states in the MID 3.0 data. Although this conceivably could be a “genuine” characteristic of the universe of militarized disputes, it seems likely to arise in part as a result of more frequent reporting of events in this region and that many actual disputes that meet the definition simply go undetected in areas off the radar of international media. Moreover, the criteria for determining whether incidents are part of the “same” dispute or separate disputes in the MID data are complex (Jones et al., 1996, pp. 174–177). While wars such as the World Wars and the Mexican-American War are considered one event with a single dispute ID, other contentious issues, such as the Iranian threat to impose a blockade of the Strait of Hormuz, give rise to a large number of disputes with distinct ID numbers. However, although the MID 3.0 data suggest an increase in the frequency of militarized interstate disputes up to 2001, possibly due to greater coverage (see Harrison & Wolf, 2012; Gleditsch & Pickering, 2014), the new MID 4.0 data indicate a decrease in the number of MIDs from 2002 (see Palmer et al., 2015).
Individuals in the COW project have criticized the ICB data for relying on subjective evaluations of the perceptions of actors in identifying crises. However, it is highly questionable whether one avoids the problem of subjective judgments by relying on secondary news reports to identify disputes, as does the MID project, as these obviously also may reflect subjective perceptions or contentious views. Thompson (1995) similarly criticizes the content validity of rivalry measures based on the frequency of MIDs (see, e.g., Diehl & Goertz, 2000), as these often include states that did not perceive one another as rivals (such as the United States and Ecuador) and may omit rivals that did not record the required number of disputes (such as North and South Korea).
Our brief overview of the definitions of the different conflict measures indicates that the different conflict data have quite different criteria for what constitutes a conflictual event. Each of these will in turn give quite different indications of the prevalence of conflict in the international system. Figure 2 displays the number of ongoing conflicts in each year, from 1946 to 2007, according to each of the datasets.
It might be argued that it is inappropriate to compare the number of violent incidents between states over time, since the number of states has increased over time, and by implication, so has the number of opportunities for interaction. This is a valid concern, but adjusting the number of disputes by some measure of opportunities raises the issue of what should be put in the denominator. As we have already seen, the COW system membership list encompasses many more states in 2011 (195) than the GW list (174) following the inclusion of many microstates after the Cold War. As a result, the number of possible undirected dyads—i.e., [N × (N − 1)]/2—is over 25% greater for the COW data than the Gleditsch and Ward (1999) list. In earlier time periods, the relationship is reversed as the Gleditsch and Ward list contains more states than the COW list. Hence, the criteria for what should be included in the denominator can have a large impact on the inferences that we draw. This is an example of “denominator effects.”
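The dyad arithmetic behind this comparison is simple to verify directly; a minimal sketch using the 2011 state counts cited above:

```python
# Sketch of the dyad-count arithmetic behind the denominator comparison.
# State counts for 2011 are taken from the text: 195 (COW) vs. 174 (GW).

def undirected_dyads(n):
    """Number of possible undirected dyads, [N x (N - 1)] / 2."""
    return n * (n - 1) // 2

cow_2011 = undirected_dyads(195)  # COW system membership list, 2011
gw_2011 = undirected_dyads(174)   # Gleditsch & Ward list, 2011

print(cow_2011)                # 18915
print(gw_2011)                 # 15051
print(cow_2011 / gw_2011 - 1)  # roughly 0.257: over 25% more dyads
```

Because the dyad count grows quadratically in the number of states, even a modest difference between the two state lists (195 versus 174) translates into a gap of more than 25% in the denominator.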
As is evident, the MID data consistently suggest far more conflictual events between states in each year than the other two data sources. Moreover, using the MID data probably would not lead one to conclude that there was any decline in interstate war over time or a post–Cold War dip, at least until the advent of the new MID 4.0 data covering the post-2002 period. Although the number of MIDs falls sharply at the end of the Cold War, this fall is preceded by an all-time high, and the number of MIDs quickly reverts to a level quite similar to the historical average, until we see the decline in the new post-2002 data.
By contrast, the ICB data suggest a much more restricted number of conflictual interstate events than the MID data. The ICB data do suggest a decline in conflict, but this appears much earlier than the end of the Cold War and resembles a steady decline from the early 1980s onward, possibly as a result of improving relations between the Soviet Union and the West.
When looking at interstate conflict only, the ACD data do not suggest much of a trend in Figure 2. However, this really reflects the sparseness of violent interstate armed conflict at any time during this period; hence the line appears almost entirely flat. An analysis of the ACD data including intrastate actors reveals many more conflicts and variation over time, and this combined series is often invoked to support the notion of a post–Cold War dip or lull in violent conflict.
These are all quite different conceptualizations of conflict, and as we have mentioned previously, there is no reason why different data necessarily should yield similar answers or trends. However, it is incumbent upon researchers to ensure that the conflict measure used actually corresponds to the concept of interest. Any threshold criteria will always be arbitrary to some extent, and judgment is required in determining what counts as a particular type of conflict and what does not. For example, the “use of force” category in the MID data includes fishing disputes, and analysts should think carefully about whether a confrontation between the United States and Canada set off by a fishing vessel on one side and the Coast Guard on the other is as relevant in the study of crisis dynamics as rapidly escalating and potentially very damaging conflicts such as the one between India and Pakistan in 2001.
A final problem in identifying conflict between states arises from its relationship to conflict within states. Civil wars, or conflicts within states, are far more common than conflicts between states (see Gleditsch et al., 2002; Holsti, 1996). However, whereas many conflict data collection efforts impose typologies where there is a mutually exclusive distinction between intra- and interstate wars, many conflictual events do not fit easily into these categories. Imposing such binary distinctions has led to numerous ambiguities in many data sources. In the COW data, for example, the civil war in Vietnam formally “ends” on February 6, 1965, when the United States started the bombing of North Vietnam and the conflict was reclassified as an interstate war. And the Kashmir conflict has shifted back and forth between the civil and interstate war categories in different versions of the dataset, since a conflict by definition must be one of the two types. Similarly, a large share of the recorded MIDs between states, as well as ICB crises, appear to originate out of civil conflicts rather than having interstate incompatibilities as their initial cause (see Gleditsch et al., 2008). Note that the Uppsala ACD data are an exception here, as they code conflicts by actors, so that one named conflict may include both state versus state and state versus nonstate actor dyads. Although prior typologies and taxonomies can be helpful in data collection and theory building, these should not be allowed to become shackles imposed on our data when the observations refuse to behave according to our coding scheme. If a high share of disputes arise out of civil wars, then it seems questionable whether models focusing exclusively on state-to-state relations can realistically be expected to have high predictive capacity or include the relevant “issues” that may lead to conflict (see Diehl, 1992; Gleditsch et al., 2008).
Denominator Effects in International Relations Research
This section highlights how the answers that researchers reach may depend on “denominator effects” in the construction of seemingly descriptive measures from the data, starting with the issue of the relationship between the distribution of capabilities in the system and its sensitivity to the list of states in the system. Traditional international relations theory has emphasized the distribution of military capabilities as a key determinant of the prospects for conflict and peace. A central debate has concerned the relative merits of different forms of systemic polarity or balance of power, or the pacifying effects of power preponderance (e.g., Kaplan, 1957; Waltz, 1979). Hence it is not surprising that initial empirical efforts in conflict studies sought to evaluate such systemic capability concentration arguments empirically. An early example here is Singer et al. (1972), who examine the relationship between a measure of systemic concentration and the extent of war in the international system. Their proposed measure of systemic concentration is:

CON = √{[∑i si² − (1/n)] / [1 − (1/n)]}
where si indicates the proportion or share of capabilities held by a state in a system, while conflict is measured in terms of the number of nation-months of war during each year. The results in Singer et al. (1972) suggested that war was less common the more dispersed capabilities were in the 19th century, which they saw as consistent with the idea of balancing favoring peace. However, at the same time, greater concentration was associated with peace in the 20th century, supporting the notion that preponderance favors peace. Singer et al. (1972) argue, in a somewhat ad hoc fashion, that changes in the nature of diplomacy may account for the shift between the two time periods, due to the increasing uncertainty resulting from a larger role for domestic politics in the 20th century.
The Singer et al. (1972) findings have been subject to a great deal of debate, and it is certainly possible to question the correspondence between their measure of power concentration and the concept of balance of power as well as the adequacy of their measures of conflict (Vasquez, 1993). Here, attention is drawn to how the differences in the definition of states across time can create problems for studies of this type. The CON measure is normalized by the number of states n and will be sensitive to changes in the number of states. This is due to the presence of n itself in the formula for CON as well as the fact that the proportional size si of a given amount of capabilities that a state i holds must depend on the number of states in the system, since proportions are normalized so that ∑si ≡ 1. By implication, a fixed level of capabilities for states could appear as changing systemic proportions in the data simply because more states appear in the system and add to the denominator or normalizing sum. As was mentioned above, measures based on shares of states will be difficult to compare across shifts in the COW criteria identifying the population of states. Although Singer et al. (1972) look only at major powers, the observed relationship between warfare and systemic concentration will obviously be influenced by what is left in and out of the denominator or population. Hence, one should be very cautious in drawing strong conclusions about differences over time being due to changing features rather than possible artifacts of our measures and definitions. In this instance, it is difficult to ascertain the specific consequences of the restrictive definition of states, since there is no supplementary data on military capabilities for states not included in the COW list. However, in principle this could be investigated empirically.
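The sensitivity of the CON measure to the number of states in the denominator can be sketched numerically; the capability figures below are invented purely for illustration:

```python
from math import sqrt

def con(capabilities):
    """Singer, Bremer, and Stuckey's (1972) concentration index:
    CON = sqrt((sum of si^2 - 1/n) / (1 - 1/n)),
    where the si are capability shares normalized to sum to 1."""
    total = sum(capabilities)
    shares = [c / total for c in capabilities]
    n = len(shares)
    return sqrt((sum(s * s for s in shares) - 1 / n) / (1 - 1 / n))

# Three states with fixed absolute capabilities (invented figures).
base = [50.0, 30.0, 20.0]
print(round(con(base), 3))      # 0.265

# Add two microstates: the majors' capabilities are unchanged, but the
# normalizing sum and n both change, and measured concentration shifts.
expanded = base + [1.0, 1.0]
print(round(con(expanded), 3))  # 0.455
```

Here the absolute capabilities of the three original states never change, yet CON rises from about 0.26 to about 0.45 once the two small states enter the denominator, illustrating how a fixed level of capabilities can appear as changing systemic concentration.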
One possible approach to dealing with denominator effects and their implications in comparisons over time is to try to either keep the denominator itself constant or constrain the units that go into the denominator. For example, in studies of economic openness over time, Alesina et al. (2003) show that measures of trade that normalize by the changing number of states over time, or that average trade to GDP across all countries, suggest a declining degree of interconnectedness between countries. However, this is due to changes in the type of state in the population or denominator rather than a decline in trade between existing states. The developing countries that emerge tend to have lower trade, which in turn will influence the sum or average. The fact that new states have less trade, however, does not mean that the world at large sees less trade. Alesina et al. argue that we can get a better measure of changes in economic connectedness by comparing trade to GDP for a set of continuously existing states over time. This measure holds the set of countries to be compared constant, and suggests a dramatic increase in the volume of trade rather than the decrease apparent in the unadjusted data.
We have already mentioned how comparing conflict over time raises the question of whether the number of incidents should be weighted by the number of states. If there is a constant probability of conflict across dyads, then clearly the probability of seeing X number of conflicts must increase as the number of states n and the number of subsequent dyadic interaction opportunities increase (for general discussions, see Avenhaus et al., 1989; Raknerud & Hegre, 1997; Wright, 1965). However, there are strong reasons to question whether the probability of a dispute for each dyad is likely to be comparable across dyads or whether the population characteristics of dyads are constant over time. For example, the new states emerging in the system are generally poorer and smaller. It is well known that developing countries are less likely to engage in disputes with one another than industrialized countries. This is in part a consequence of the fact that developing states usually lack the military resources to engage in direct confrontations with other states (see, e.g., Lemke, 2002 on the so-called “African” peace). Moreover, small states are generally unable to project force over large distances, and only major powers tend to engage in much conflict with distant countries. As such, the distribution of dyadic profiles is likely to change over time as more dyads are introduced that are less likely to experience conflict. Many researchers have argued that only dyads of contiguous states or dyads involving at least one major power should be considered “relevant” candidates for conflicts (e.g., Lemke, 1995; Maoz & Russett, 1993). By this standard, the increase in n leads to a proliferation of “irrelevant” dyads, which would call into question the usefulness of metrics of conflict proneness that have the total number of dyads in the denominator.
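The mechanical effect of a growing number of dyads can be made concrete with a short sketch under the (admittedly strong) assumption of a constant, independent dispute probability per dyad; the probability value is purely illustrative:

```python
# Under the assumption of a constant, independent dispute probability p
# for every dyad, the chance of observing at least one dispute in a
# year is 1 - (1 - p)^d for d dyads.

def p_any_dispute(p, n_states):
    d = n_states * (n_states - 1) // 2  # number of undirected dyads
    return 1 - (1 - p) ** d

p = 0.0001  # purely illustrative per-dyad probability, not an estimate
print(round(p_any_dispute(p, 50), 3))   # about 0.115
print(round(p_any_dispute(p, 195), 3))  # about 0.849
```

Even with an unchanged per-dyad probability, the chance of observing at least one dispute rises sharply as the system grows, which is why raw counts of incidents are hard to compare across periods with different numbers of states.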
One alternative approach to examining conflict proneness over time is to select a set of states in continuous existence since 1946 and focus only on MIDs involving these states, to keep the denominator constant and ensure greater comparability. Figure 3 compares trend lines for the share of all MIDs in all dyads in the international system (dashed line, right axis) with a measure of the number of MIDs involving states in continuous existence since 1946 (solid line, left axis). The share of all dyadic MIDs does indeed suggest a decline over time. Even after the post–Cold War increase, the level is still notably below anything prior to the early 1980s. The number of MIDs involving states in continuous existence since 1946, however, does not differ notably from the trend we saw when looking at the number of incidents, and suggests a level comparable to the historical average after the Cold War. This suggests that the decline in the share of MIDs by dyads may be heavily influenced by changes in the composition of dyads. Although adjusted measures are often helpful and essential, they may yield misleading inferences if we do not consider changes in the denominators or the types of units that go into them.
Another area where it is easy to demonstrate clear denominator effects due to baseline changes is the question of the prevalence of democracy in the world. Huntington (1991) and others have suggested a wave-like pattern in the expansion and contraction of democracy in the world. Some international relations researchers have examined how such changes in the share of democracies either globally or regionally are associated with changes in conflict and peace (see, e.g., Crescenzi & Enterline, 1999; Kadera et al., 2003). Figure 4a shows a plot of the proportion of independent states with a score of 6 or above on the POLITY institutionalized democracy scale (see Jaggers & Gurr, 1995 for further details), a common threshold on this index for delineating democracies. This does indeed suggest a pattern of three waves of democracy, followed by two waves of reversals.
Figure 4a is based on using the number of states in the denominator. We already know from Figure 1 that the number of states in the international system is not constant but has increased rapidly with the decline of larger empires and the process of emancipation of former colonies. If the denominator n is not constant, then the share of democracies could decline, even if we have no actual reversals from democracy to autocracy. More problematically, the types of states that have been added to the system with decolonization are likely to be very different from long-established states, typically having much lower levels of economic development or human capital and hence less likelihood of being democracies. Indeed, few previous colonies start out as democracies and instead are likely to emerge with autocratic institutions. Doorenspleet (2000) holds that much of the apparent evidence for waves of democratization is due to changes in the system rather than changes between democracy and autocracy in state institutions.
In Figure 4a all states are assigned equal weight. However, states come in very different sizes, and some states clearly carry more weight in international politics than others. For example, one could well argue that there are many theoretical reasons to expect that the regime types of India and China would be much more significant both for conflict and global demonstration effects than whether democratic institutions are seen in very small states such as Tonga or Palau. One possibility to take such features into account when assessing the extent of democracy in the world is to weight countries by some measure of their relative share, for example population, as shown in Figure 4b. This suggests a much higher level for the second wave than the proportion of states in Figure 4a, reflecting the fact that India and other large states are accorded more weight than smaller states. However, Figure 4b does not indicate a dramatically different picture in terms of the waves of democracy.
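The difference between the two figures comes down to the weighting scheme; a minimal sketch with invented countries, populations, and regime codings (not real POLITY data):

```python
# Hypothetical illustration (invented countries and figures, not real
# POLITY data): unweighted versus population-weighted democracy shares.

# (country, population in millions, is_democracy)
states = [
    ("A", 1200, True),   # one very large democracy
    ("B", 1300, False),  # one very large autocracy
    ("C", 5, True),
    ("D", 3, False),
    ("E", 2, False),
]

unweighted = sum(dem for _, _, dem in states) / len(states)
weighted = (sum(pop for _, pop, dem in states if dem)
            / sum(pop for _, pop, _ in states))

print(unweighted)          # 0.4: two of five states are democracies
print(round(weighted, 3))  # 0.48: the large democracy pulls the share up
```

In this toy system only two of five states are democracies, but because one of them is very large, the population-weighted share of democracy is notably higher than the unweighted share of states.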
A more subtle issue that has received little attention so far is that if one is interested in the extent to which the world is democratic, then it is not obvious that the denominator should be limited to the part of the world’s population living in independent states (see Gleditsch & Ward, 2006). The principle of the right of national self-determination is very much a post-1945 phenomenon; previously, a substantial share of the world’s population lived in dependent areas or colonies, under the control of other states. As was shown above, when the denominator increases as former colonies become independent, there will be an increase in the number of nondemocratic independent states. However, since the populations in these areas were previously living under nondemocratic colonial rule, it is misleading to argue that autocracy is becoming widespread simply because these areas now have independent autocratic states as opposed to colonial nondemocratic administration.
Unlike many other forms of data that are only collected for state-like units, it is possible to get estimates for world population (or estimates by region) including people living in colonies and dependent areas. This allows a better measure of the extent of democracy in the world by looking at the share of population living in democratic states over the total global population. Figure 5, which displays this rate using expanded population data collected by Gleditsch (2004a), suggests that there has been one reverse wave, in the sense that there was a real setback for democracy in the period leading up to World War II. However, this is the only major period of democratic setback over the last two centuries. Even the decolonization period emerges primarily as a static period by this measure, in the sense that democracy does not expand, rather than a retrenchment, or an expansion of autocracy, as implied by waves of “autocracy.” According to these estimates, it is only around the turn of the millennium that a majority of the world’s population lives under democratic institutions. Moreover, if at most one example can be found of a period with major changes toward more autocracy in the world, then we should be hesitant to accept the conclusion drawn by many that previous waves of democracy have given rise to reversals and that future breakdowns of democratic institutions therefore must be considered likely (e.g., Diamond, 1996).
The substantial denominator effects shown here imply quite different conclusions about the frequency and changes in democratic institutions. Rather than arguing that one measure is inherently superior to another, the central point here is to emphasize how conclusions in analyses on war and peace can be influenced by denominator effects or nonrandom missing data. The main lesson is that scholars should carefully specify empirical measures that actually correspond to their concept of interest rather than uncritically rely on common convention, or use data and measures that are the most readily available.
Recommendations for Future Research
The discussion so far has generally had a pessimistic tone, pointing to potential problems arising in data collection and the interpretation of empirical data. This is not to be construed as an argument for abandoning empirical work and reverting to armchair theorizing. Data collection in international relations has indisputably facilitated a great deal of valuable systematic research and insights. In numerous instances, widely shared beliefs about alleged trends have actually been challenged and refuted by systematic data. Prominent examples here include the allegedly more dangerous nature of the post–Cold War world, as well as the extreme fears over the alleged global erosion of democracy expressed in the 1970s and 1980s (e.g., Revel, 1984; see also the interesting review in Mueller, 1994, p. 362). Aside from these successes, greater attention to problems in data collection and interpretation is important in its own right, and essential for advancing research on conflict and peace. This section attempts to provide a more positive contribution, with practical recommendations on how data collection efforts ought to proceed.
First, the distinction between subjective and objective measures is often overstated. Leaving aside the question of content validity, most seemingly objective criteria (e.g., numbers of people killed) upon closer scrutiny tend ultimately to rely on subjective and potentially controversial judgments. The more serious problem plaguing many international relations data collection efforts is that so little documentation is available detailing the specific judgments and sources going into the production of the data. The original COW war data, for example, provide little documentation on the specific sources used to determine whether conflicts meet their battle-death criteria and why codings change, beyond some discussion of how their original data were compiled from existing data sources (e.g., Singer & Small, 1972; Small & Singer, 1982). The volume by Sarkees and Wayman (2009) provides narratives and a great deal more detailed documentation on sources for wars, but this has a high list price and is difficult to access for many interested parties (for example, it is not available in the British Library as of 2015). Likewise, very little information has been available on coding of the early MID data, making it difficult for users to figure out what the events included actually may refer to, as well as what sources were used in coding the event. In some instances, the only source for the militarized events appears to be statements from the alleged target of the action, with the initiating state denying that the events had taken place (see Gleditsch et al., 2008, p. 26). This hardly seems consistent with the definition provided by Jones et al. (1996) and the emphasis on explicit action. The recent updates of the MID data are more explicit in reporting narratives of conflicts, incident data, and sources on their coding decisions. 
However, both the COW and MID data collection efforts thus demonstrate the potential issue of path dependency, as projects often tend to simply build on past efforts for continuity without revisiting or questioning earlier coding decisions and documentation. These data projects also exemplify the difficulty of coding distant historical events.
By contrast, although the ICB data may be criticized for relying on subjective judgments on perceptions, the ICB project has been quite explicit in documenting the rationale for their classifications (see, for example, the extensive case summaries in Brecher & Wilkenfeld, 1997). Such information on coding decisions and their sources allows users to inspect these for individual observations in order to determine whether they are relevant for their particular research questions. The ICB documentation, too, is more extensive for more recent cases, which again demonstrates the difficulty of capturing distant historical observations. Other examples of data with extensive source documentation include Lacina et al. (2006) on battle deaths, Vanhanen’s (2000) democracy data, and the Archigos data on political leaders (Goemans et al., 2009).
This false distinction between subjective and objective measures is especially relevant to the use of event data to study political phenomena. Although the specific data points in event data are observed facts—close to an objective ideal, but still dependent on the interpretations of news reporters and creators of the data dictionaries—the translation of those events into variables to be used in data analysis requires a number of judgments to be made by the analyst. That process might be characterized as transposition from multidimensional relationships to lower-dimension relationships. For example, scholars might create typologies of actors and/or actions to, say, study the correlates of international behavior (Gerner et al., 1994). Alternatively, they might place actions on a conflict-cooperation continuum (Goldstein, 1992). Or they might use computational techniques that identify latent dimensions or groupings (e.g., Metternich et al., 2013). Each of these processes involves important judgments on the part of the researcher. One of the great benefits of event data is that they empower the researcher to adopt variable measures that best match the research question at hand—that maximize content validity for each specific project. That is, by having access to the raw events, users of the data need not rely so heavily on the judgments of others.
Second, data collection efforts should be much more explicit about the uncertainty in classifications. Recent data collection efforts by Lacina et al. (2006) and Valentino (2004), for example, provide a range of estimates for casualty numbers, which are very helpful in providing a sense of the uncertainty surrounding even the best guesses. Morrow (2007) similarly codes not only the degree of compliance with the laws of war, but also the quality of the data and the clarity of noncompliance. By incorporating uncertainty into projects, the collector allows future users to understand when the indicators involve more subjective judgment and helps to flag decisions that other researchers may disagree with or wish to exclude from their own analyses. Moreover, when users can demonstrate that their findings are robust to alternative classifications of their key measures, the strength of their inferences improves.
Third, expert opinion surveys can help to limit the impact that a single person’s judgment has on the final resulting data. An early example of an expert-based survey is Goldstein’s (1992) development of a cooperation–conflict index based on the WEIS event data scores. To create the scale, Goldstein had to assign values related to the level of cooperation or conflict involved in such events as “deny negotiations” and “promise military support.” Instead of relying exclusively on his own best guesses, Goldstein surveyed international relations experts and had them assign values to each of the possible events. Goldstein then found the means and standard deviations around each of the assigned values and used these to construct the conflict–cooperation scale for the categories. In this way, Goldstein was able to reduce the influence that a single coder’s biases can have on the resulting classifications, and the reported measures of uncertainty allow the user of the data to incorporate the uncertainty around these values. More recently, Baum and Groeling (2008) polled international security experts to create an estimate for the degree to which the United States was succeeding or failing in Iraq during various time periods. Cederman et al. (2006) conducted an ethnic survey of the political status of different ethnic groups, using an Internet-based platform, which records the individual users’ rankings and the full revision history of the data. This approach provides an excellent way of making the coding transparent and allowing interested readers access to decisions that go into the data.
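A Goldstein-style aggregation can be sketched in a few lines; the event categories echo the WEIS scheme, but the expert scores below are invented for illustration:

```python
# Sketch of Goldstein-style (1992) aggregation: each expert assigns a
# conflict-cooperation value to an event type; the scale value is the
# mean across experts, with the standard deviation as a measure of
# disagreement. The scores below are invented for illustration.
from statistics import mean, stdev

expert_scores = {
    "promise military support": [7.0, 7.5, 8.0, 7.0, 7.5],
    "deny negotiations": [-4.0, -5.0, -3.5, -4.5, -4.0],
}

scale = {event: (round(mean(vals), 2), round(stdev(vals), 2))
         for event, vals in expert_scores.items()}

for event, (m, sd) in scale.items():
    print(f"{event}: mean={m}, sd={sd}")
```

Reporting the standard deviation alongside the mean is what allows later users to see which scale values rest on broad expert agreement and which are more contested.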
Finally, although most data collection efforts in international relations have emphasized collecting data for the entire population or system, in many instances a targeted, smaller data collection strategy may be a more cost-effective way to expand our knowledge (e.g., King & Zeng, 2000). Owing to the predominance of systemic arguments in the discipline at the outset of the behavioral revolution, it is not surprising that early data collection efforts sought to classify information for the international system at large. But as the discipline has increasingly turned to evaluating dyadic propositions, scholars now attempt explanations based on variation in dyadic rather than systemic characteristics. Elementary statistics courses teach that making generalizations about a population does not require data on the full population, and that much information can be gleaned from a random sample. If data are difficult to collect for a full population, then it is often much better to track down good information on a smaller sample while minimizing bias in the information collected.
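The sampling point can be illustrated with a toy population; the figures are invented and serve only to show that a modest random sample recovers a population rate closely:

```python
# Toy illustration of the sampling point: an estimate from a modest
# random sample tracks the true population rate closely.
import random

random.seed(42)  # fixed seed for reproducibility

# A hypothetical population of 10,000 cases, 5% of which experience conflict.
population = [1] * 500 + [0] * 9500
true_rate = sum(population) / len(population)

sample = random.sample(population, 400)  # information on only 400 cases
estimate = sum(sample) / len(sample)

print(true_rate)  # 0.05
print(estimate)   # close to 0.05
```

With only 400 of 10,000 cases, the sample estimate lands close to the true rate; the practical challenge, as the text notes, is drawing the sample so as to minimize bias rather than simply maximizing coverage.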
Links to Digital Materials
Archigos: A data base on leaders. Provides access to the Archigos data discussed in this essay, including information on the coding of specific cases.
Correlates of War. Provides access to the most recent version of the COW, including the war and MID data discussed in this essay.
Kristian Skrede Gleditsch, Data etc. Provides access to the Gleditsch and Ward list of independent states, including documentation of the case decisions and sources.
EUGene. EUGene is a data management tool for datasets for use in the quantitative analysis of international relations, with the country-year, directed-dyad-year, nondirected-dyad-year, and directed-dispute-dyad-year as the units of analysis.
International Crisis Behavior Project. Provides access to the International Crisis Behavior data, including a data viewer application with crisis summaries.
Issue Correlates of War Project. Provides access to the ICOW data, which covers a wide array of interstate contentions over territorial, river, and maritime claims for select regions.
Polity IV Project. A dataset on political institutions and the degree to which they have specific features of autocratic and democratic institutions. See also Gleditsch’s Polity Data Archive, which provides access to older versions of the original data and a slightly modified version of the current data.
Uppsala Conflict Data Program. Provides access to the armed conflict data discussed in this essay, as well as additional datasets on other conflict-related features, and an online Conflict Encyclopedia.
Armed Conflict Location & Event Data Project (ACLED). Provides rapidly updated event data on different types of political violence and associated events, focusing on Africa.
Global Terrorism Database (GTD). Provides data on terrorist events around the world from 1970 through 2013, including both domestic and transnational terrorist incidents.
Alesina, A., Spolaore, E., & Wacziarg, R. (2003). Trade, growth and the size of countries. In P. Aghion & S. N. Durlauf (Eds.), Handbook of economic growth (pp. 1499–1542). New York: North Holland.
Avenhaus, R., Brams, S. J., & Fichtner, J. (1989). The probability of nuclear war. Journal of Peace Research, 26, 91–99.
Baum, M. A., & Groeling, T. (2008). Crossing the water’s edge: Elite rhetoric, media coverage, and the rally-round-the-flag phenomenon. Journal of Politics, 70, 1065–1085.
Boulding, K. E. (1963). Conflict and defense: A general theory. New York: Harper and Row.
Brecher, M., & Wilkenfeld, J. (1997). A study of crisis. Ann Arbor: University of Michigan Press.
Brecher, M., Wilkenfeld, J., & Moser, S. A. (1988). Crises in the twentieth century: Handbook of international crises. New York: Pergamon.
Bremer, S. A., & Ghosn, F. (2003). Defining states: Reconsiderations and recommendations. Conflict Management and Peace Science, 20(1), 21–41.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Cederman, L.-E. (2003). Modeling the size of wars: From billiard balls to sandpiles. American Political Science Review, 97(1), 135–150.
Cederman, L.-E., Girardin, L., & Wimmer, A. (2006). Getting ethnicity right: An expert survey on power distributions among ethnic groups. Paper presented at the annual meeting of the American Political Science Association, Philadelphia (Aug. 31). Available online.
Clauset, A., Young, M., & Gleditsch, K. S. (2007). On the frequency of severe terrorist events. Journal of Conflict Resolution, 51(1), 1–31.
Crescenzi, M. J. C., & Enterline, A. J. (1999). Ripples from the waves? A systemic, time-series analysis of democracy, democratization, and war. Journal of Peace Research, 36(1), 75–94.
Diamond, L. (1996). Is the third wave over? Journal of Democracy, 7(3), 20–37.
Diehl, P. F. (1992). What are they fighting for? The importance of issues in international conflict research. Journal of Peace Research, 29, 333–344.
Diehl, P. F., & Goertz, G. (2000). War and peace in international rivalry. Ann Arbor: University of Michigan Press.
Dixon, W. J., & Boswell, T. (1996). Dependency, disarticulation, and denominator effects: Another look at foreign capital penetration. American Journal of Sociology, 102(2), 543–562.
Doorenspleet, R. (2000). Reassessing the three waves of democratization. World Politics, 52(3), 384–406.
Firebaugh, G. (1992). Growth effects of foreign and domestic investment. American Journal of Sociology, 98(1), 105–130.
Gerner, D. J., Schrodt, P. A., Francisco, R. A., & Weddle, J. L. (1994). Machine coding of event data using regional and international sources. International Studies Quarterly, 38(1), 91–119.
Gleditsch, K. S. (2002). All international politics is local: The diffusion of conflict, integration, and democratization. Ann Arbor: University of Michigan Press.
Gleditsch, K. S. (2004a). Expanded population data v1.0. Department of Political Science, University of California, San Diego.
Gleditsch, K. S. (2004b). A revised list of wars between and within independent states, 1816–2002. International Interactions, 30(4), 231–262.
Gleditsch, K. S., & Pickering, S. (2014). Wars are becoming less frequent: A reply to Harrison and Wolf. Economic History Review, 67(1), 214–230.
Gleditsch, K. S., Salehyan, I., & Schultz, K. (2008). Fighting at home, fighting abroad: How civil wars lead to interstate disputes. Journal of Conflict Resolution, 52(4), 479–506.
Gleditsch, K. S., & Ward, M. D. (1999). A revised list of independent states since 1816. International Interactions, 25(4), 393–413.
Gleditsch, K. S., & Ward, M. D. (2006). The diffusion of democracy and the international context of democratization. International Organization, 60(4), 911–933.
Gleditsch, N. P. (2008). The liberal moment fifteen years on. International Studies Quarterly, 52(4), 691–712.
Gleditsch, N. P., Wallensteen, P., Eriksson, M., Sollenberg, M., & Strand, H. (2002). Armed conflict 1946–2001: A new dataset. Journal of Peace Research, 39(5), 615–637.
Goemans, H., Gleditsch, K. S., & Chiozza, G. (2009). Introducing Archigos: A dataset of political leaders. Journal of Peace Research, 46(2), 269–283.
Goldstein, J. S. (1992). A conflict-cooperation scale for WEIS events data. Journal of Conflict Resolution, 36, 369–385.
Goldstein, J. S. (2002, May 14). The worldwide lull in war. Christian Science Monitor.
Goldstein, J. S. (2011). Winning the war on war. Hialeah, FL: Dutton/Penguin.
Gurr, T. R. (2000). Ethnic warfare on the wane. Foreign Affairs, 79(3), 52–64.
Harrison, M., & Wolf, N. (2012). The frequency of wars. Economic History Review, 65(3), 1055–1076.
Hewitt, J., Wilkenfeld, J., & Gurr, T. R. (2007). Peace and conflict 2008. Boulder, CO: Paradigm.
Høivik, T., & Galtung, J. V. (1971). Structural violence: A note on operationalization. Journal of Peace Research, 7(1), 73–76.
Holsti, K. J. (1996). The state, war, and the state of war. New York: Cambridge University Press.
Huntington, S. P. (1991). The third wave: Democratization in the late twentieth century. Norman: University of Oklahoma Press.
Jaggers, K., & Gurr, T. R. (1995). Tracking democracy’s third wave with the Polity III data. Journal of Peace Research, 32(4), 469–482.
Jones, D. M., Bremer, S. A., & Singer, J. D. (1996). Militarized interstate disputes, 1816–1992: Rationale, coding rules, and empirical applications. Conflict Management and Peace Science, 15(2), 163–213.
Kadera, K. M., Crescenzi, M. J. C., & Shannon, M. L. (2003). Democratic survival, peace, and war in the international system. American Journal of Political Science, 47(2), 234–247.
Kaldor, M. (2006). New wars and old wars: Organised violence in a global era (2d ed.). Cambridge, U.K.: Polity.
Kalyvas, S. N. (2001). “New” and “old” civil wars: A valid distinction? World Politics, 54(1), 99–118.
Kaplan, F. (2006, January 25). What “peace epidemic”? Don’t pop the champagne corks just yet, the evidence isn’t quite there. Slate. Available online at http://www.slate.com/id/2134846/?nav=tap3.
Kaplan, M. A. (1957). System and process in international politics. New York: John Wiley.
King, G., Murray, C. J. L., Salomon, J. A., & Tandon, A. (2003). Enhancing the validity and cross-cultural comparability of measurement in survey research. American Political Science Review, 97(4), 567–584.
King, G., & Zeng, L. (2000). Logistic regression in rare events data. Political Analysis, 9(2), 137–163.
Krasner, S. D. (1995–1996). Compromising Westphalia. International Security, 20(3), 115–151.
Lacina, B., & Gleditsch, N. P. (2005). Monitoring trends in global combat: A new dataset of battle deaths. European Journal of Population, 21(2/3), 145–166.
Lacina, B. A., Gleditsch, N. P., & Russett, B. M. (2006). The declining risk of death in battle. International Studies Quarterly, 50(3), 673–680.
Lemke, D. (1995). The tyranny of distance: Redefining relevant dyads. International Interactions, 21(1), 23–38.
Lemke, D. (2002). Regions of war and peace. Cambridge, U.K.: Cambridge University Press.
Mack, A. (2002). Civil war: Academic research and the policy community. Journal of Peace Research, 39(5), 515–525.
Maoz, Z., & Russett, B. M. (1993). Normative and structural causes of the democratic peace, 1945–1986. American Political Science Review, 87(3), 624–638.
Mearsheimer, J. J. (1990). Back to the future: Instability in Europe after the Cold War. International Security, 15(1), 5–56.
Metternich, N. W., Dorff, C., Gallop, M., Weschle, S., & Ward, M. D. (2013). Antigovernment networks in civil conflicts: How network structures affect conflictual behavior. American Journal of Political Science, 57(4), 892–911.
Morrow, J. D. (2007). When do states follow the laws of war? American Political Science Review, 101(3), 559–572.
Most, B. A., & Starr, H. (1989). Inquiry, logic, and international politics. Columbia: University of South Carolina Press.
Mueller, J. (1994). The catastrophe quota: Trouble after the Cold War. Journal of Conflict Resolution, 38(3), 355–375.
Mueller, J. (2004). The remnants of war. Ithaca, NY: Cornell University Press.
North, D. C. (1981). Structure and change in economic history. New York: Norton.
Palmer, G., D’Orazio, V., Kenwick, M., & Lane, M. (2015). The MID4 dataset, 2002–2010: Procedures, coding rules and description. Conflict Management and Peace Science, 32(2), 222–242.
Pinker, S. (2011). The better angels of our nature: Why violence has declined. New York: Viking.
Raknerud, A., & Hegre, H. (1997). The hazard of war: Reassessing the evidence of the democratic peace. Journal of Peace Research, 34(4), 385–404.
Reiter, D., Stam, A. C., & Horowitz, M. C. (2014). A revised look at interstate wars, 1816–2007. Journal of Conflict Resolution, 60(5), 956–976.
Revel, J.-F. (1984). How democracies perish. New York: Doubleday.
Richardson, L. F. (1948). Variation of the frequency of fatal quarrels with magnitude. Journal of the American Statistical Association, 43(244), 523–546.
Richardson, L. F. (1960). Statistics of deadly quarrels. Chicago: Quadrangle.
Roemer, J. E. (1982). A general theory of exploitation and class. Cambridge, MA: Harvard University Press.
Rubin, D. (1976). Inference and missing data. Biometrika, 63(3), 581–592.
Rummel, R. J. (1995). Death by government. New Brunswick, NJ: Transaction.
Russett, B. M., Singer, J. D., & Small, M. (1968). National political units in the twentieth century: A standardized list. American Political Science Review, 62(3), 932–951.
Sambanis, N. (2004). What is a civil war? Conceptual and empirical complexities of an operational definition. Journal of Conflict Resolution, 48(6), 814–858.
Sarkees, M. R., & Wayman, F. W. (2009). Resort to war: 1816–2007. Washington, DC: CQ Press.
Sarkees, M. R., Wayman, F. W., & Singer, J. D. (2003). Inter-state, intra-state, and extra-state wars: A comprehensive look at their distribution over time, 1816–1997. International Studies Quarterly, 47(1), 49–70.
Shirkey, Z. C., & Weisiger, A. (2012). An annotated bibliography for the correlates of war interstate wars database. Available online at http://academicworks.cuny.edu/hc_pubs/6/.
Singer, J. D., Bremer, S., & Stuckey, J. (1972). Capability distribution, uncertainty, and major power war. In B. M. Russett (Ed.), Peace, war, and numbers (pp. 19–48). Beverly Hills, CA: SAGE.
Singer, J. D., & Small, M. (1972). The wages of war, 1816–1965: A statistical handbook. New York: John Wiley.
Small, M., & Singer, J. D. (1982). Resort to arms: International and civil wars, 1816–1980. Beverly Hills, CA: SAGE.
Summers, R., & Heston, A. (1991). The Penn world table (Mark 5): An expanded set of international comparisons, 1950–1988. Quarterly Journal of Economics, 106(2), 327–368.
Thompson, W. R. (1995). Principal rivalries. Journal of Conflict Resolution, 39(2), 195–223.
Valentino, B. (2004). Final solutions: Mass killing and genocide in the twentieth century. Ithaca, NY: Cornell University Press.
Vanhanen, T. (2000). A new dataset for measuring democracy, 1810–1998. Journal of Peace Research, 37(2), 251–265.
Vasquez, J. A. (1993). The war puzzle. Cambridge, U.K.: Cambridge University Press.
Wallensteen, P., & Sollenberg, M. (1995). After the Cold War: Emerging patterns of armed conflict 1989–94. Journal of Peace Research, 32(3), 345–360.
Waltz, K. N. (1979). Theory of international politics. Reading, MA: Addison-Wesley.
Weber, M. (2004). The vocation lectures: “Science as a vocation,” “Politics as a vocation.” Indianapolis, IN: Hackett.
Wilson, E. J., III, & Gurr, T. R. (1999, August 22). Fewer nations are making war. Los Angeles Times.
Wright, Q. (1965). A study of war. Chicago: University of Chicago Press.