Research Associate (Postdoc) in Survey Design and Methodology
Blog: RSS-Feed soziopolis.de
Job posting from GESIS – Leibniz Institute for the Social Sciences in Mannheim. Deadline: September 24, 2023
Blog: Features – FiveThirtyEight
This is a special end-of-meteorological-summer installment of the FiveThirtyEight Politics podcast. Galen Druke speaks with pollsters Kristen Soltis Anderson and David Byler in an episode made entirely of "good or bad use of polling" examples. They consider why GOP primary candidate Vivek Ramaswamy polls differently depending on survey methodology, what we can learn from post-debate […]
Blog: Just the social facts, ma'am
I have an account on Truth Social, and I check it from time to time to see what Donald Trump is saying. He recently posted a story from Breitbart about a survey by Rasmussen Reports. According to the story, "more than 1-in-5 voters who submitted ballots by mail say they did so fraudulently." This isn't just some Twitter "poll"--Rasmussen has a decent record of accuracy in predicting elections (a B rating from 538)--so it deserves a closer look.

The survey found that 30% of the sample said they voted by mail in the 2020 election. Those who said they did were asked "did a friend or family member fill out your ballot, in part or in full, on your behalf?" (19% yes); "did you fill out a ballot, in part or in full, on behalf of a friend or family member, such as a spouse or child?" (21% yes); "did you cast a mail-in ballot in a state where you were no longer a permanent resident?" (17% yes); and "did you sign a ballot or ballot envelope on behalf of a friend or family member, with or without his or her permission?" (17% yes).

Everyone was asked three additional questions: whether "a friend, family member, or organization, such as a political party, offer to pay or reward you for voting?" (8% yes); whether "you know a friend, family member, co-worker, or other acquaintance who has admitted to you that he or she cast a mail-in ballot in 2020 in a state other than his or her state of permanent residence?" (10% yes); and whether "you know a friend, family member, co-worker, or other acquaintance who has admitted to you that he or she filled out a ballot on behalf of another person?" (11% yes).

Rasmussen didn't release the original data, but they provided a detailed breakdown of responses. In looking at that, I noticed something strange--people who said they voted for Trump were more likely to say that they had done these things. For example, among people who voted by mail, 26% of Trump voters and 14% of Biden voters said that a friend or family member had filled in their ballot.
A larger fraction of Biden voters voted by mail (36% vs. 23%), so overall, .23*.26=.060, or 6%, of Trump voters and .36*.14=.050, or 5%, of Biden voters said that someone else had filled in their ballot. The total percent of voters who answered yes on each question:*

                                      Trump   Biden
  Someone else filled in your ballot   6.0%    5.0%
  You filled out someone else's        6.9%    4.7%
  Signed someone else's                5.3%    4.0%

So if you accept the data, Trump voters were more likely to engage in "fraud" than Biden voters. For the questions asked of everyone:

                                      Trump   Biden
  Offer of reward                       6%      9%
  Know out-of-state voter              13%      8%
  Know someone who filled out other's  12%      9%

There was also another odd pattern in the data. For all of the questions, people in the youngest age group (18-39) were more likely to answer yes--a lot more likely. For example, 33% of people aged 18-39, 9% of people aged 40-64, and 1% of people aged 65 and older said that they had signed someone else's ballot. Of course, there is sampling error, but these aren't tiny groups--there are roughly 100 absentee voters in each age group. Since people in the youngest age group were more likely to have voted for Biden, the tendency for Trump voters to be more likely to report irregularities would be even stronger after controlling for age.

The age differences are also present in the questions asked of everyone--19% of 18-39 year olds, 7% of 40-64 year olds, and 3% of people over 65 said they knew someone who admitted casting a ballot in a state of which they weren't a resident. How can you explain the age differences? I doubt that there has been a dramatic increase in propensity to violate the rules for mail-in ballots (and to tell friends, family members, and acquaintances that you've violated the rules) across the generations. Rasmussen has a statement on their methodology that might provide an answer.
Their sample is mostly obtained by random-digit dialing of phone numbers, but "to reach those who have abandoned traditional landline telephones, Rasmussen Reports uses an online survey tool to interview randomly selected participants from a demographically diverse panel." Unlike most survey organizations, Rasmussen doesn't use live interviewers--there's a recorded voice, and people answer by "press 1 for yes, 2 for no...." I suspect that people are more likely to give a false answer with this format than when speaking to a person, and because Trump has been saying that there was fraud in mail voting, Trump voters may have wanted to help provide evidence of fraud.** This tendency is likely to be stronger in the panel--since its members are regularly asked to do surveys (and probably are generally more online), they are likely to have a better sense of how the results will be used. People without landlines tend to be young, so the panel probably makes up a much larger share of the 18-39 group.

So my hypothesis is that many of the "yes" answers are a result of Trump voters (especially in the panel) giving answers that they think will help to make Trump's case that there was a lot of fraud in the election. Another factor is that people in the online panel are presumably given some compensation for participating in the surveys, so they may rush through without paying much attention. Most organizations make some effort to identify people like this and remove them from the sample, but those efforts are usually pretty crude, and Rasmussen doesn't say anything about whether and how they do it. So some of the "yes" answers, especially in the youngest cohort, may come from people who are essentially answering at random.

*The "no longer a permanent resident" question was left out of the table.

**The question about who you voted for was asked before the questions on voting irregularities--that is, people answered it before they knew what the survey would be about.
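As an aside, the overall-rate arithmetic earlier in the post (combining the share of each candidate's voters who voted by mail with the yes rates among mail voters) can be verified with a short script; the figures are the ones quoted above:

```python
# Overall share of each candidate's voters who said someone else filled
# in their ballot = (share who voted by mail) * (yes rate among mail voters),
# using the percentages quoted in the post.
mail_share = {"Trump": 0.23, "Biden": 0.36}
yes_rate = {"Trump": 0.26, "Biden": 0.14}

overall = {cand: mail_share[cand] * yes_rate[cand] for cand in mail_share}
for cand, share in overall.items():
    print(f"{cand}: {share:.1%}")  # Trump: 6.0%, Biden: 5.0%
```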
Blog: Cato at Liberty
Jai Kedia
Recently, CMFA published an article and a working paper that detailed the Federal Reserve's departure from rules-based governance following the financial crisis of the late 2000s. According to academics and Fed officials, the era of rules-based governance facilitated the Great Moderation – a stable economic period characterized by less volatile macro indicators such as inflation, the output gap, and unemployment. In academic parlance, macroeconomists refer to this situation as determinacy. Despite conflicting evidence, the prevailing view is that the Fed facilitated the Great Moderation by establishing a determinate economic environment through rules-based governance focused on keeping inflation low. Previous CMFA papers raised the question of whether the Fed's departure from this "successful" era of monetary policy may instead have led to indeterminacy. This article provides evidence that indeterminacy did occur during this period.
Determinacy is a feature of an economic system whereby outcomes such as inflation, output, etc., can be precisely determined based on a given set of initial conditions and policy rules. Under determinacy, the economy (as represented by a mathematical model) has a unique equilibrium outcome. In simple terms, under determinacy, the economy has only one possible resting state and is also stable with no large spirals or variability. Conversely, indeterminacy occurs when there are multiple possible equilibria that could result from the same initial conditions and policy rules. This state can create uncertainty in predicting the future state of the economy, as different equilibria may lead to significantly divergent economic outcomes. Simply put, the economy could end up in multiple possible states, some of which may be highly volatile, depending on how individuals form their expectations and make decisions.
Academics generally believe that a strong Fed response to inflation (a more than one‐to‐one increase in the target federal funds rate to inflation changes) can ensure a determinate system. This is known as the Taylor Principle. A greater than one‐to‐one response to inflation is deeply entrenched in the economic literature; most empirical macroeconomic studies simply assume determinacy and fix the Fed's response to inflation at a number higher than one or use estimation techniques that entirely exclude the possibility of indeterminacy. This determinacy bias has serious implications for policy analysis because economic models (such as those used by the Fed) exhibit significantly different dynamics in an indeterminate system. Additionally, even approaches that account for indeterminacy, including seminal papers, fail to take consumers' inflation expectations seriously. As noted above, expectations matter drastically when determining equilibrium selection. They should be included in the datasets used by empirical methods.
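As a rough numerical illustration of the Taylor Principle, consider a textbook Taylor-type rule; this simple functional form and the coefficient values are illustrative assumptions, not the model estimated in the article:

```python
# Taylor-type rule: i = r_star + pi_star + phi_pi * (inflation - pi_star).
# The Taylor Principle requires phi_pi > 1, so that a rise in inflation
# raises the *real* rate (nominal response minus the inflation change).
def policy_rate(inflation, phi_pi, r_star=2.0, pi_star=2.0):
    return r_star + pi_star + phi_pi * (inflation - pi_star)

for phi_pi in (0.5, 1.5):  # weak vs. strong inflation response (illustrative)
    # Inflation rises by 1 point, from 3% to 4%:
    nominal_change = policy_rate(4.0, phi_pi) - policy_rate(3.0, phi_pi)
    real_change = nominal_change - 1.0
    print(f"phi_pi={phi_pi}: nominal {nominal_change:+.2f}, real {real_change:+.2f}")
```

With a response coefficient below one, the real rate falls as inflation rises, which stimulates demand further and can allow self-fulfilling inflation expectations to take hold – the mechanism behind indeterminacy.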
I utilize a simple macro model – connecting output gap, inflation, and the federal funds rate – to test the determinacy of the U.S. economy during the period when the Fed abandoned rules‐based governance (2009 through 2022). I use actual U.S. time series data for the three variables listed above as well as a measure of consumers' inflation expectations – one year ahead inflation expectations collected from the Michigan Survey of Consumers.[1] I fit the macro model to the data using a Bayesian estimation procedure under both determinacy and indeterminacy to see which fits the data better.
I find that the model under indeterminacy significantly outperforms its determinate counterpart in fitting the data set. That is, the model under indeterminacy has a much higher "goodness-of-fit" versus determinacy. Goodness-of-fit values from Bayesian analysis are unlike the usual R^2 value reported from regressions. Bayesian model comparison is conducted through marginal likelihoods, which are then converted to an odds ratio (similar to betting odds) called the Bayes factor. The estimated odds of determinacy to indeterminacy are 1 to 1.5 x 10^15 – making determinacy an extremely unlikely event. To understand exactly how unlikely, let us compare these odds to another extremely unlikely event – being struck by lightning. The odds of being struck by lightning are much higher in comparison: 1 to 1.5 x 10^4. In other words, the odds of being struck by lightning are significantly higher than the odds that the U.S. economy was determinate from 2009 through 2022.
Consequently, the probability that the U.S. economy was indeterminate following the financial crisis is nearly 100%. The (indeterminate) model with a 0.57 estimated inflation response coefficient fits the data better than the (determinate) model with a 1.13 coefficient estimate. The results confirm that the Fed did not target inflation in line with the Taylor Principle.
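The conversion from marginal likelihoods to a Bayes factor and a posterior model probability can be sketched as follows; the log marginal likelihood values here are illustrative placeholders, not the paper's estimates:

```python
import math

# Log marginal likelihoods from two Bayesian model fits
# (illustrative numbers only, chosen to produce odds on the order of 10^15).
log_ml_indeterminate = -512.3
log_ml_determinate = -547.2

# Bayes factor in favor of indeterminacy, computed in log space
# to avoid overflow when the likelihood gap is large.
log_bf = log_ml_indeterminate - log_ml_determinate
bayes_factor = math.exp(log_bf)

# Posterior probability of indeterminacy under equal prior model odds.
prob_indeterminate = bayes_factor / (1.0 + bayes_factor)

print(f"Bayes factor (indet. vs. det.): {bayes_factor:.3g}")
print(f"P(indeterminacy | data): {prob_indeterminate:.10f}")
```

Because the Bayes factor grows exponentially in the log-likelihood gap, even a moderate difference in fit translates into overwhelming odds, which is why the posterior probability of indeterminacy comes out near 100%.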
These findings raise an important question: how responsible is the Fed for keeping the economy determinate, with a unique and stable outcome? If it is, as several academics and Fed officials have claimed, then the Fed must answer why it did not conduct policy in a way that ensured the economy's determinacy. If it is not responsible for keeping the economy determinate (as several recent studies are now finding), then the Fed's reputation for stabilizing the economy is undeserved, and the public should question why an unelected governmental agency exerts such a high degree of influence over the political economy discourse if it is ineffective in maintaining prices or keeping the economy stable. A forthcoming paper will further examine the history of the Fed's effectiveness in achieving determinacy.
The author thanks Jerome Famularo for providing research assistance during the preparation of this essay. For more information on the model, empirical methodology, and posterior distribution please click here.
[1] Respondents are asked the question: 'By what percent do you expect prices to go up, on the average, during the next 12 months?' The average of all responses is used as the measure for inflation expectations.
Blog: Cato at Liberty
Marc Joffe
San Francisco is proving to be ground zero in the nationwide commercial real estate collapse. While the values of offices and malls are tumbling in many US cities, the losses in San Francisco are more dramatic and, unlike elsewhere, have extended to hotels. City and state government mismanagement has played a major role in destroying billions of dollars in assessable real estate value, but the role of these policies is easily overlooked.
San Francisco's plight was thrown into sharp relief on June 5, when the owner of two downtown hotels containing a combined 2,925 rooms announced that it would cease making payments on a $725 million mortgage backed by the properties. Commercial bond investors will now have to find a company willing to purchase the hotels at a small fraction of their estimated 2020 valuation of $1.561 billion.
In explaining the company's decision to walk away from the hotels, Thomas J. Baltimore, Jr., Chairman and Chief Executive Officer of Park Hotels and Resorts stated:
After much thought and consideration, we believe it is in the best interest for Park's stockholders to materially reduce our current exposure to the San Francisco market. Now more than ever, we believe San Francisco's path to recovery remains clouded and elongated by major challenges – both old and new: record high office vacancy; concerns over street conditions; lower return to office than peer cities; and a weaker than expected citywide convention calendar through 2027 that will negatively impact business and leisure demand and will likely significantly reduce compression in the city for the foreseeable future.
Another nearby hotel is also experiencing a dramatic valuation decline. The 1,195-room Westin St. Francis Hotel has asked the local tax assessor to slash the combined assessment of its two parcels from $1.037 billion to $101 million.
The hotels are within walking distance of the Westfield San Francisco Centre mall that is losing its anchor retailer, Nordstrom, this summer. Before Nordstrom announced the closure, S&P had already estimated that the mall's value had declined by over 70% since it was appraised in 2016.
An even larger value decline was suffered by a 22‐story office tower at 350 California Street. After being valued at around $300 million in 2019, the property recently changed hands for between $60 million and $67.5 million according to media reports.
When considering why San Francisco has suffered so much commercial real estate value destruction in the 2020s, it is tempting to conclude that the city's tech-heavy workforce was better equipped to work from home. This factor played a role but should not be overestimated. Indeed, one common software development methodology, known as agile, often involves daily in-person team meetings. So it is not strictly true that software engineering is a solitary job.
Rather than blame the pandemic or the local business mix, San Francisco and California political leaders should look inward at their policy errors that exacerbated the city's distress. Among these unforced errors were their harsh lockdown policies and the failure to provide adequate security in the downtown core.
The Lockdown
San Francisco and neighboring counties were the first to impose sweeping stay‐at‐home orders at the beginning of the COVID pandemic in the US. More importantly, San Francisco and its neighbors were slower than most other population centers to relax COVID-19 restrictions.
Over a three-year period, San Francisco's public health officer issued a blizzard of rules that were often lengthy and challenging to implement. As late as January 27, 2021 (over ten months into the pandemic), he issued an order that required "all residents in the County to reduce the risk of COVID-19 transmission by staying in their residences to the extent possible and minimizing trips and activities outside the home." At the time, California had more cases per capita than the less restrictive states of Texas and Florida, raising the question of how effective the lockdown measures were.
By continuing shelter-at-home restrictions for so long, San Francisco normalized remote work, thereby encouraging employers and employees to adapt to a new normal. Many employees moved beyond easy commuting distance from the city on the assumption that they could retain hybrid or fully remote work arrangements permanently.
Although San Francisco's political leaders trumpet the city's low per capita death rate from COVID-19, some of that is attributable to individuals temporarily or permanently leaving the area, thereby deflating the true denominator of any death rate calculation. Economist Stephen Hanke has concluded that lockdowns had "a negligible effect" on COVID deaths.
Lack of Security
As the accompanying map shows, San Francisco has a very high concentration of high value properties in a small geographic area. Many of these $100 million plus properties (based on assessed value) are within walking distance of the Tenderloin neighborhood which has struggled over several decades. But in recent years, the social problems of the Tenderloin have increasingly spilled over into the adjacent, high‐value areas, deterring tourists, shoppers, and office workers from visiting.
Measuring crime trends is challenging. According to Police Department statistics, reported crimes in the first five months of 2023 are below pre-pandemic levels. But some proportion of crime goes unreported, and it is possible that this proportion has increased given the low likelihood that San Francisco police will identify a suspect. In 2022, only 2.9% of larceny thefts were cleared within one year.
Also, residents clearly perceive an increase in crime. The most recent City Controller survey found that San Franciscans rated the city's safety a C+, the lowest grade since 1996. Safety ratings were especially low in the Tenderloin and two adjoining neighborhoods with high‐value commercial real estate: South of Market and Financial District/South Beach.
Critics have highlighted various public safety policy concerns, including the defund-the-police movement, lax prosecution, reclassification of shoplifting of goods worth less than $950 as a misdemeanor, disincarceration, and lack of enforcement against open-air drug markets. Since these issues have been covered elsewhere and libertarians have varying opinions about them, I'll address a couple of other aspects that have received less attention.
First, the city has encouraged many individuals who may be more prone to criminal activity to concentrate in and around the Tenderloin. It has done this by establishing a cluster of thousands of supportive housing units, mostly in converted hotels in the area. Although residents of supportive housing are no longer defined as "homeless", many if not most are still dealing with issues such as drug addiction that contributed to their loss of shelter.
During the pandemic, the city converted hundreds of additional hotel rooms in the area to temporary residences for homeless individuals in hopes of preventing them from getting and spreading COVID-19. But the unintended effect of this program, known as Project Roomkey, seems to have been to increase drug abuse and disorder at the periphery of the Tenderloin.
One Project Roomkey property, Hotel Whitcomb, housed about four hundred homeless individuals, many of whom were continuing to use drugs. Shortly thereafter, a new open air drug market became established in an alley just south of Market Street. Both the hotel and the drug market were near a new Whole Foods store which was forced to close due to high rates of theft and violent criminal activity.
Aside from concentrating potential offenders in the area, the city and activists appear to have neutered two quasi‐private mechanisms that allow business districts to enhance security levels beyond that which the city government would normally provide.
Since 1847, San Francisco has had a category of law enforcement officers known as Patrol Special Police. These trained officers can be hired directly by groups of merchants and/or homeowners to patrol and provide other security services within a designated area. In 1994, there were 72 patrol special police serving 65 areas. But their ranks have decreased in recent decades, and, as of 2022, only one officer remained.
Although clients expressed a high level of satisfaction with their services, city policies have decimated the program. San Francisco's charter requires the city's Police Commission to approve new patrol special officers, but in recent years it has rarely done so. At the same time, the San Francisco Police Department offered a competing program under which city-employed police officers could provide security services to local businesses when they would otherwise be off duty.
Since clients must cover officer pay at overtime rates, this alternative is more expensive. Further, given the shortage of police officers in San Francisco today, there may not be enough staff to regularly serve clients who might be interested in purchasing their services.
California has also given property owners the ability to form their own Business Improvement Districts (BIDs) since the 1990s. BIDs, also known locally as Community Benefit Districts (CBDs), are formed when owners representing a majority of the assessed valuation in a given area vote to tax themselves to finance district operations.
San Francisco's Union Square area, the hotel and retail center that borders the Tenderloin, has had a BID in place since 1999. By 2018, the district was employing a large staff of cleaning ambassadors and safety ambassadors to deal with trash and quality of life issues respectively. The BID also installed a network of security cameras.
But the district's efforts to force homeless individuals out of the area faced criticism from UC Berkeley's Public Policy Clinic and local activists. Since the pandemic, the BID, now known as the Union Square Alliance, may have become less effective at maintaining cleanliness and safety in its neighborhood. It is not clear whether this is due to the criticism it has received, the retirement of its long‐time executive director, or some other factor.
Conclusion
An overly energetic lockdown and actions that concentrated violent and unstable individuals in the downtown area have contributed to the collapse of real estate values in San Francisco's prime hotel, office, and retail districts. Quasi‐governmental institutions that might have stepped in to provide improved security and street conditions have been enfeebled in part by city policy.
At this point, it does not appear that any set of feasible policies can restore downtown San Francisco to the heights it reached in 2019. A more realistic possibility is that it will stabilize at much lower levels of occupancy, activity, and value, forming a new base from which to grow. New and remaining property owners should be given the tools and the space to restore a sense of security among those visiting, shopping, and staying in the neighborhood. Finally, city and state leaders should avoid overreacting to pandemics.
Blog: Theory Talks
Jordan Branch on Google Maps, State Formation, and the International Politics of Cartography
The territorial underpinnings of international politics are as familiar as they are contested within the discipline of International Relations. While the presumed 'territorial trap' of the discipline has been attacked from many sides (see, for instance, Theory Talk #4), Jordan Branch is more interested in turning the question around. His work has carefully addressed the historical constitutive effects of mapping practices and technologies on the subsequent transformation of practices of, and ideas about, rule and the international system. In this fascinating Talk, Branch discusses, among other things, the significance of cartography for international politics, explores the effects that contemporary digital mapping might have on political spaces, and illustrates how innovations in mapping affected rule with the historical example of France.
Print version of this Talk (pdf)
What is, in your view, the most important challenge facing global politics, and what is or should be the central debate in the discipline of International Relations (IR)?
While there are many different debates going on at the same time within the discipline, the one that has interested me most is the relationship between ideas and practical or material factors. There is a very simplistic version of this dichotomy that has been debated to death in the constructivist versus rationalist debates, particularly in the American field of IR—an over-drawn distinction, as many have pointed out. I am more interested in actual explanations for the process, outcome, or phenomenon we're looking at. Rather than separating them out, I am interested in how the ideational and material relate to one another, how they fit together.

This relationship poses questions for my specific interest in technological change. We are experiencing fast-paced technological changes—for example, the information technology revolution—which can yield a natural yet incorrect assumption, namely, that this change will inevitably have some kind of major effect on, or interaction with, politics and, specifically, with international relations. This may be true, but it is too often assumed. Indeed, this raises another problem. Even if there is such an effect, is it something we'll be able to observe, let alone predict or explain, as it is happening? From my historical work on the role of maps in state formation, for example, it is quite evident that for people at the time, there was no way to see the impact maps had on the political/spatial/ideational constitution of the state.

The information technology (IT) revolution is the most obvious current example of dramatic technological change. Although it has been playing out for the last 20 or 30 years, it only continues to accelerate. Over the past couple of years, a lot of discussion has focused on 'big data' and what it implies for business, financial analysis, and the like. Of course, it also presents possibilities as a new tool for social science. But there is a danger here. There is a tendency to see new technological phenomena only in their material contexts, specifically focusing on possibilities for measurement, for example, thereby neglecting to think about the ideational. How do ideas about collecting and using data actually play into the collection and analysis itself? So while they are in practice always entangled, analytically I find the distinction between the ideational and the material a very fruitful one—not so much as a debate between opposing fields, but as a way to think about technological change.
How did you arrive where you currently are in your thinking about these issues?
It is funny—people often ask this sort of question, and I did not necessarily see a natural trajectory for my thinking or work until I began to look and think back. This interest in connecting technological and political change goes as far back as my undergraduate time at Stanford University, where I initially majored in mechanical engineering and later switched to International Relations. While technology remained an important preoccupation, I became more interested in politics, history, and theory. So the interest formed into questions about the political implications of phenomena like technology. But this didn't happen instantly. Just before beginning my PhD at the University of California, Berkeley, I was planning to do comparative work on regime change and democratization. Then my older brother (Adam Branch), who is also a political scientist, gave me a copy of Hendrik Spruyt's The Sovereign State and its Competitors (1994), and he said: 'Hey, you might like this!' So, I literally read that on a beach the summer before starting grad school—which may sound funny, but I sat down, read it, and found it fascinating. Yet I didn't immediately start thinking about these questions then. It took a year or two, when I started really thinking about what I wanted to work on. I came back to this work and realised these were the kind of questions I was interested in: the origins of the territorial state and its characteristics.
The interest in the state as a concept had been with me for slightly longer. As an undergrad, my first introductory course to IR was taught by Stephen Krasner (Theory Talk #21). Krasner has strong views, and the class was very rigorous. A lot of his work focuses on the state, and I think his framings influenced me early on. I don't entirely agree with some of Krasner's arguments about sovereignty, but these disagreements are more about the specifics of salient time periods or cases. Other work which influenced me early on was that of John Ruggie on territoriality. Indeed, his approach became central as I was developing these questions myself. I also discovered a host of literature in political geography that turned out to be very interesting and useful.
So, one could say that my trajectory was really more focused on understanding the historical outcome of the territorial state than on what role technology, specifically maps, played in this process. The focus on technology, while existent from my engineering days, really began to materialize as a link missing from existing explanations of state formation. I was thinking about how we might be able to find some additional traction on these questions by including technology more prominently. It has certainly been part of some scholarship on state formation, as in Charles Tilly's work or William McNeill's on technological change and warfare. Surely, technology has always been in there, but the discussion has been centered on war-fighting technology and maybe on transport, and only to a lesser degree on communication technology in the broader sense.
Another piece of work which triggered my focus on the relationship between the ideational and the material was Ron Deibert's book Parchment, Printing and Hypermedia (1997; read the 1995 PhD thesis that became the book here, pdf). He talks about internet communication technology but also about the printing press and the impact it has on the global distribution of power. Yet only when I read this book for the second or third time, just as I was finishing up my dissertation, did I realize how much his framing had shaped how I formulated my thesis. He does touch on the role of mapping, but it is his elaboration on the way in which media informs how people think about the world which was spot on for me. For me, maps as a medium very importantly framed how people thought about and imagined the world in the past—but of course these questions about technology and its role in constituting the international political system, states, territorial boundaries, and so on are still relevant today.
What would a student need to become a specialist in global studies or understand the world in a global way?
I think it is important to be really excited and interested in your topic and what you want to do. The key thing is to enter a grad program that fits you in terms of your interests and to be willing to do whatever methodological training ends up being needed for your research project.
I
think there's a tendency to look for a 'one size fits all' graduate training
model, which does make sense at the initial level. Everyone should get a
certain amount of background in a variety of methods, whether they'll end up
using those or not. For example, I have not used quantitative methods in my own
research, but I'm glad that I had to take classes on those methods in grad school.
They give you the ability to understand work which may connect to your own but
comes at it from a different angle. And you should always be open to a variety
of methods. The key is to be able to understand a broad array of approaches, otherwise
you won't be able to engage in broad conversations.
I
also feel I gained a lot from exploring, and reading widely, from other
disciplines such as history and sociology. I already mentioned political
geography, which is really not too distant but, nonetheless, in the U.S. it
sits in a different department. You might think that some work is 'on the other
side of the fence' but it is important to be able to bring that work into your
thinking.
The
final thing is to be open and ready to change your mind, whether it is about
the answer you're expecting to get to your question, or even changing the
question itself. Obviously there is a certain point when you're almost done
with a project where that might not be a good idea… but if it is early on and
it works and you can do it logistically, I think it is important to be willing
to do that. Five years later you're going to be a lot better off.
So far, your work has been mainly
historical. Can you explain the importance of 'looking back' for understanding
contemporary international relations?
I
think it is extraordinarily important and useful. A lot of us in this and other
fields do see strong connections between today's politics and past events,
institutions, and ideas. There is an important notion that we cannot engage
meaningfully with the present if we do not understand its genealogy. That is
certainly a driver for me in thinking about the origins of the state and
territorial boundaries. It may help us to observe patterns we might see
replicated or appear in some kind of altered yet recognizable form today.
Indeed, it can help us think about where we might be headed.
Although
I also hesitate here slightly: always looking to the past for the answers can
be problematic. History can help us to observe patterns, dynamics, and maybe
relationships that might tell us something about other periods or about contemporary
international relations. But we should never do so thinking that the patterns
are definitely going to be the same or are deterministic. I think one can look
for patterns or relationships without automatically assuming that they have to
apply everywhere.
Historical
analysis can be problematic in its own right, because there is no way to
discover or absorb the past 'as it really was.' All history is some kind of
construction, whether it is based on contemporary or historical sources. Additionally,
in the social sciences we often have to rely on secondary sources. That is not
inherently a problem; this fact just introduces more variables to think about.
Pure narrative purporting to capture 'what really happened' can be very
problematic.
Given these disclaimers, it is useful to consider
the past. I think what should be emphasized is that, specifically at the grad
school level, students should be encouraged to dig a little deeper
historically. They shouldn't hesitate to do that excavation work.
IR, it has been argued, rests firmly on
a spatial or territorial understanding of politics. What constitutive role does
territorial space play in IR and is that role based on historical fact or is it
myth?
I
like that question. I think it is actually both—sort of a myth and sort of a
fact. In one sense, territory informs at
least the state ideal (i.e., states as we think of them): it informs what the
state is, the interests of states, and of course how we distinguish one state
from another. And yet, while this is all inherently territorial, we also know
that this is far from an accurate description of a lot of regions and places in
the world. There are many different spatial ideas, practices, and organizations
with political agency that are non-state or non-territorial.
But
regarding the myth of state territoriality: I think it is important to point
out that there is a lot of detail in the 'conventional narrative' of the state, such
as the timing of when territoriality came about, pinpointed at Westphalia, that
has been quite effectively debunked by a number of scholars in the last 10 or
20 years (scholars like Andreas Osiander or Benno Teschke, from different
theoretical perspectives). This is a strongly supported finding. But it really
hasn't penetrated the mainstream narrative very well. While we can gradually see
a little more nuanced discussion in IR textbooks in the U.S., they more often
than not will still start with 1648 and Westphalia.
We
can now confidently say that states—states as we think of them now—did not
appear in 1648, let alone earlier. This is especially true if we look at the
specifically territorial or spatial aspects of statehood, which again are so
central to how we think about the state internationally. The focus on defending
cleanly demarcated linear boundaries, and the idea of asserting absolute
sovereign authority within those lines, really did not consolidate until at
least the 19th century. So part of the myth concerns the timing as well as the
how and why we have states.
But
there still is a factual quality to territoriality in this story we tell
ourselves about the foundation of the international system and the supposed
creation of sovereign states. In a certain setting and for a certain period I
think this describes the ideas and practices of international politics quite
well. The most obvious example of this is 19th century Europe. While
there are still ways in which it diverges from the ideals of the typical state system,
in a lot of ways it actually did fit that. This happened at the same time as
the development of modern Western historiography, and it was the setting for some
of the traditional foundation of political science and IR. So we can see how
one shaped the other: history-making and state-making. The singular territorial
ideal of statehood from the 19th century has subsequently been
applied to other issues, actors, and areas. Even if it does not fit exactly, it
is still applied today, and it is made to fit, retrospectively, much earlier
periods where it applied less well.
Ultimately,
it is a powerful myth which has informed how we think about international
relations to such a degree that we shouldn't just throw it out. Instead, we
should think about exactly how it actually informs the way that international
relations is understood and practiced.
Practitioners and officials don't exactly read IR journals and base their
decision-making on our knowledge production, but the basic ideas of states, boundaries,
and territory which inform the practice of international relations—as well as
the study of it—should be our concern.
You have looked specifically at how
mapping has contributed to the imagining and formation of the modern state system. Could
you elaborate more on how something as technical as cartography matters for
international politics?
I've
argued in my recent work that early modern mapping technologies were really
essential to the consolidation of the territorial state, particularly the
specific territorial features of states today. Maps, which have been a popular
medium over the past few centuries, really do shape how people understand the
world and their place in it. This gets us back to the connection between the
material and the ideational.
In early
modern Europe a revolution took place, first in mapmaking technologies and,
slightly later, in the ideas and practices of political rule, especially as it
relates to territory. I argue this was really not a coincidence. How rulers and
subjects conceived of rule and how rulers conceived of their realms was really
altered as they increasingly used maps that depicted the world in this one
particular way. The key characteristics of modern statehood – at least of the
ideal of modern statehood – such as linear boundaries between homogenous territorial
claims, really appeared first in maps and only subsequently in political practices
on the ground. Of course, there were existing authority structures, but these were
not particularly spatial or were not spatial in this linear demarcated way. Subsequently,
however, these authority structures were ignored or sometimes even actively
renounced in favor of the kind of authority which could be literally shown and
drawn on a map.
It is
interesting because initially, maps were not predominantly produced by rulers, states,
or officials. Rulers were certainly involved in sponsoring some mapping projects,
buying maps, and using them, but mapmaking was more of a commercial private
scientific enterprise, if we can apply the label 'scientific' in the 16th
and 17th centuries. These map-makers certainly didn't have any articulated
goal of changing politics, at least not on this broad level. They were really
concerned with making money, maybe creating art, and advancing what they
thought was a growing science of cartography.
We
can however see that the map, as a technological artifact—maps as actual things—had
an impact on the practices of rule both between rulers and between rulers and
their subjects. I argue that this process occurred quite broadly across the
European development of the international system at that time. And you can see
this sequence really clearly in a case like France.
Let
me illustrate that. Here are three maps of 'France' ranging from the 1400s to
the 1700s—the quotation marks are necessary because the notion of there being
one entity called France across this whole period is more a matter of us labeling
it as such rather than it being one recognizable entity.
The
first map is from a 15th century manuscript about royal and noble
genealogy in France. The image is purported to represent 'all the realm of
France' and shows the country as a collection of what I would call places
rather than a single linearly demarcated space. You do have the notion of spatial
boundaries here, in terms of rivers as means for demarcation. Yet, very
clearly, the visual language of this map focuses on towns. And this is how rule
was practiced and operationalized as well: negotiations would be over places, or
maybe collections of people based on identity, jurisdiction, or where they were
allowed to reside, but not in terms of linear demarcations between claims.
Now
look at the second map, which is just from about 150 years later, from the
1590s. It is from an atlas by a follower of Mercator, and its label Gallia is
the Roman designation for France. From our modern perspective we can recognize
something that looks a lot like a modern map of France. Maybe even a state, although
the boundaries are not exactly like we would expect them to be. But this is the
visual language of mapping that we are familiar with: longitude, latitude,
spatial expanses colored in, homogenous territorial claims—there is something
about the space depicted that argues that it is all the same, that it is all
France.
And
despite this familiarity, it was actually far from an accurate depiction of
French rule. Not just in the actual placement of the boundaries, which are
contestable, but in the discrete nature of the boundaries themselves. Along
these frontiers, so clearly demarcated on this image, the claims of the French
king were often unclear and overlapped with those of other rulers. This was
even true for the interior of France during this period.
The
third map is another 150 years later, from the 1740s. This is from a map
showing the triangulation of the realm, undertaken by a group of geographers known
as the Cassini survey, since several generations of the Cassini family headed up
this effort. The realm is being mapped explicitly using geometric tools with
the important emphasis that the image is actually meant to represent reality.
It is understood that way: it's supposed to measure reality, in order to enable
the French king to better understand what he rules. Moreover, this mapping took
place at the same time that rule was being implemented in practice on the
ground in terms of spatial expanses as we think of them, in the form of
demarcating boundaries with neighbors which had previously been unclear,
overlapping jurisdictions.
Although
maps of the second generation (i.e., the map from the 1590s) were 'inaccurate,'
they were extremely influential. They were widely distributed and purchased by
the elite, both inside and outside of government. Using these maps provided
rulers with this particularly new territorial meaning to their centralizing and
bureaucratizing efforts. As a consequence, the use of these maps as material
tools of governing and negotiation really changed the language of rule. Rule becomes
cartographic, at least in part. When two opposing sides come to the negotiation
table, for example, they at the very least have already agreed, implicitly, that
the division should be a linear boundary—it is just a question of where.
By
the time the third map is produced, the government is much more directly
involved in map production using accurate geometrical measurement. Yet the very
desire for this mapping was shaped by the earlier use of those commercial maps
that built up the visual grammar of geometric space. The French case is useful
because it is very well documented, but we do see the same sort of process
repeated either simultaneously or later throughout Europe and also elsewhere. In
fact, there is a lot of interesting scholarship on the introduction of mapping
and modern geographic thinking into regions outside the West. Siam Mapped (1994), a book by Thongchai Winichakul,
is a fantastic study that I found really useful for my thinking about Europe,
even though it deals with Siam (Thailand) in the late 19th and early
20th centuries.
This
is my story about mapping and territory, but I think there is a broader frame
to your question: do we want to bring these sorts of technical factors into explanations
in International Relations? And while we don't want to be technologically
determinist, there is some useful thinking around technology and its effects we
should consider. The impact of maps has been so strong, and yet they are such
common artifacts that they are largely conceived of as 'unremarkable' outside
of geographic-oriented disciplines.
So can we juxtapose this insight that mapping practices precede the
practice of rule and state formation to the anthropological present, that is,
what do the contemporary, some say radical, shifts in mapping techniques entail
for international politics?
Absolutely.
When I initially present my work, there is often an assumption that I use GIS
in my study. Instead, my work focuses on analyzing mapping itself—maps as
historical artifacts, their effects and their interaction with political
identities, interests, and organizations. But I think the ways in which methodology
and the subject of study overlap on subjects like technology could potentially
contribute to stretching the boundaries of IR. The big data question is both a
question of studying what big data means for politics but also how we can use
big data to study politics. The way in which new technologies can simultaneously
play into our methods and into our answers or questions is a pressing and
fascinating issue.
For
instance, there has been a lot of back and forth on the question of whether
more open-access mapping techniques entail some sort of democratization. While
I think we have seen that more participatory forms of mapping are possible, we
shouldn't think that this type of mapping is completely open, as no
technological system is completely open to anyone and everyone at all times.
But, indeed, there is a democratization of mapping under way. Authorship in a
whole host of domains, including mapping, is opening up where there used to be
a single authoritative voice or at least a single type of authoritative voice. So
maps are an example of this opening up and collective authorship. At the same
time, accommodating more voices also means that a lot of information is being
shared without authority or attribution or what we think of as a legitimate
source… When you open a map from Rand McNally or National Geographic, you know
that specific cartographers thought this was accurate and you can blame or
praise them. But when you open up a layer of Google Earth that has been
crowd-sourced you don't know who put that pin there, and you don't know why.
It's
really interesting to explore a bit further how this is different from the
recent past. In the 19th and first half of the 20th
century, mapmaking was essentially state-led. The U.S. geological survey, the
Ordnance Survey in Britain, or large mapmaking geographical institutions such
as National Geographic represented the owners and producers. Mapping was so
technical, so obviously technical that the everyday person would not be able to
make a map to Rand McNally's standards. This has changed, and quite importantly
so. Not only do we have the technology to do this, people are aware that they
can use it as easily as opening a smartphone app, thereby incorporating more
points of view. This is not necessarily good or bad. Politically, it does open
up new possibilities. Maps have always been political, both implicitly and
explicitly. It certainly opens up the possibility of some kind of broader shift
in ideas about territory. Let me illustrate with an example. I haven't
necessarily come across specific maps that present some completely novel visual
grammar potentially reshaping the way we think about the world. But here is an
interesting example I like to bring to my students: there was a September 2011 blog post on Google's Lat-Long blog (the company's blog about Google Maps and Google Earth). Its headline
read: 'South Sudan is now official on Google Maps,' and it displayed a
screenshot of the new boundary.
They changed their base layer by adding a boundary between South Sudan and
Sudan. This of course followed the referendum and the UN's recognition, and all
the traditional precursors to official statehood. South
Sudan became a recognizable entity on that blog. Google Earth, a
non-governmental actor, indeed a huge corporate actor—and thus not necessarily
democratizing—becomes part of the discourse of declaring South Sudan's official
existence.
This
is an example of how things might be going. Interestingly, the whole enterprise
of mapping today actually resembles more closely that of the 16th
and early 17th centuries than that of the 19th or early
20th century, not technologically but organizationally. The
state-centric view of the world was reinforced by the state-authored mappings of
the 19th and 20th centuries. Now, by contrast, there is a
kind of shared or unclear authorship, there is crowdsourcing, there are multiple
sources of conflicting and quite openly unreliable or uncertain information.
This environment of rapidly increasing distribution and use also describes the
creation of the early atlases in the late 16th century and early 17th
centuries, which involved the collation of all kinds of information from multiple
sources.
And
of course it was in the 16th and 17th century when this
sort of non-state-controlled mapping presented innovative images of the world—those
images that ended up shaping and consolidating the state form of territory. And
so it was these new tools for understanding and acting on the world which gave
the state its territorial shape. As key information-producing activities are
being opened up, some forms of power are being redistributed. This certainly
means that we need to widen our scope in terms of whom we consider to be a
stakeholder or what sort of actors we want to study. We know that the dichotomy
of state versus non-state is not sufficient. We need to be subtler in our
inquiries. In IR, of course, the stereotypical over-emphasis on states is being
questioned, and this is really just one more sign that a piece of the power of
the state, in this case map-production and distribution, is shifting elsewhere.
I
recently had a conversation with students in my undergrad seminar on technology
and international politics. I went into it saying: 'Hey, all this mobile
mapping and GPS and Google Earth is totally revolutionary. This may change how
we think about the world.' And they were all completely unconvinced, since they
use these technologies all the time—to a bunch of twenty-year-olds these tools seem
unremarkable. And maybe that is actually a more accurate analysis. But it is
interesting how it is such a different analysis from that of my generation and anyone
older, all of us who have spent a lot of time, for example, driving around
without GPS. It is partly this perception and the 'unthinking usage' which make
the relationship between technologies and social and political outcomes so difficult
to observe. Our ideas may be changed, and especially the ideas of younger
generations may be changed, without anyone particularly noticing how dramatic
the changes might be. This also means that the connections, because they are
'unthinking,' can be quite foundational to people's ideas of social identities
or political practices. They are tacit and embodied. That makes it both hard to
observe but also an interesting puzzle. But it is worthwhile mentioning that
the images presented by Google Maps and other digital mapping tools,
particularly satellite imagery, might carry a greater legitimacy in terms of
depicting 'the truth'. It looks like a picture of the world and therefore
whatever is on it, even layered on data (like a new international boundary),
must be true. It represents another apex of the scientific trajectory of
mapping.
If it is just about adding a data layer on a base map that remains the
same, does that then mean that ontologically this kind of mapping technology
actually doesn't challenge territoriality?
That
gets to an interesting point, which entangles with a lot of the more careful
discussions of globalization and the state. One version of that is that the
state is not dying, is not being destroyed. It is just that other things are
being layered on top of it, and the state and its boundaries still remain and
still matter for certain things. In this case, maps are perfectly capable of
showing state boundaries—they look very fixed, very strong—but one can layer on
top other types of information, maybe transactional flows or particular places
that are connected in different ways.
I
think that could be an interesting argument: these new mapping tools can really
show so much, and it is a matter of selecting what you want to show and
deselecting what you don't. Thus they don't do anything to
undermine one particular view of the world. Now that is not necessarily a good
or bad thing. If we look at the history of mapping and the origins of state
territoriality, a key part of that was that it was really hard to depict
medieval jurisdictional and personal notions of rule on early modern maps. Printing
technology and mapping tools prescribed depiction in a certain way—drawing
lines and coloring in spaces. Maps made it harder to show and thus think about
the other forms of rule. If digital maps are still perfectly capable of showing
states and their boundaries, they may do very little to undermine that notion
of territory.
Finally, if we are interested in the politics of maps, to what extent do
we need to study not only the maps as political artifacts but the mapmakers as
political actors as well?
I
think it is extremely useful to do both, and obviously if we study mapping
today, we can do both. In terms of
historical work, we can only rely on very limited sources, such as what mapmakers
themselves wrote about what they were doing. We don't know a lot about their
goals or ideas about politics. I would have loved to have been able to read
exhaustive memoirs by mapmakers such as Ortelius and Mercator. Of course, they might
not have said anything about the questions we are interested in. On the other
hand, a lot of map-makers today are involved in mapping for explicit political
reasons: for example, Ushahidi-type collaborative mapping (www.ushahidi.com), or humanitarian and
relief mapping. Here we can dig into the question of how the maps produced relate
to specific objectives. That is a great way to get more analytical leverage on
a lot of these questions.
Jordan Branch joined the Political Science
department at Brown as an Assistant Professor in summer 2012. He received his
PhD in Political Science at UC-Berkeley in 2011, and spent 2011-2012 as the
Hayward R. Alker Postdoctoral Fellow at the Center for International Studies at
the University of Southern California. His interests include international
relations theory, the history of the sovereign state system, contemporary
challenges to statehood, and the intersection of technological and political
change. In 2014, Cambridge University Press published his book, The
Cartographic State: Maps, Territory, and the Origins of Sovereignty. His
research has also appeared in International Organization and
the European Journal of International Relations.
Related links
Faculty profile at Brown University
Read
Branch's Mapping the Sovereign State:
Technology, Authority, and Systemic Change (International Organization 2011) here (pdf)
Read
Branch's 'Colonial Reflection' and
Territoriality: The Peripheral Origins of Sovereign Statehood (European Journal of International Relations,
2012) here (pdf)
Print version of this Talk (pdf)
School of Global Studies, University of Gothenburg