When to disclose the number of contestants: Theory and experimental evidence
In: Journal of economic behavior & organization, Volume 193, p. 146-160
ISSN: 1879-1751, 0167-2681
In: 2015 International Conference on Advances in ICT for Emerging Regions (ICTer)
SSRN
Working paper
Blog: CEGA - Medium
Incorporating Remote Sensing Data Into Randomized Evaluations

This post was written by Kelsey Jack, CEGA affiliated professor and Associate Professor of Environmental and Development Economics at UC Santa Barbara, and Jack Ellington, Policy Associate at J-PAL Global. This post was originally posted on the J-PAL website.

Credit: Pranavan Shoots via Shutterstock.com

A growing number of economists are incorporating remotely sensed (RS) data — satellite data in particular — into their studies. For randomized evaluations, remote data collection offers alluring possibilities: lower data collection costs, a longer time series of data both before and after an intervention, the ability to measure geographic spillovers, and more. However, the initial allure may obscure some practical challenges.

In a new set of guidelines, CEGA affiliated professor Kelsey Jack (University of California, Santa Barbara) and Kendra Walker (University of California, Santa Barbara) — with contributions from CEGA affiliated faculty Tamma Carleton and Robert Heilmayr, along with Jenny Aker, Seema Jayachandran, Namrata Kala, Rohini Pande, Ben Moscona, Sebastien Costedoat, Carlos Muñoz Brenes, and Johanne Pelletier — outline some of the opportunities and challenges of using remote sensing data in randomized evaluations. The guidelines provide resources and recommendations to help social scientists, practitioners, and their collaborators effectively leverage RS data in their evaluations.

Remote sensing refers to collecting data from a distance. Sensors used to collect RS data include on-site monitors, manned or unmanned aircraft systems, and satellites. Predictive models and machine learning methods are often used to interpret raw RS data, such as classifying a satellite image of a piece of land as forested or not.
This interpretation stage is typically necessary before the researcher can use the data in their analysis.

The guidelines are organized around three main reasons that researchers conducting randomized evaluations might wish to include RS data: (1) to increase statistical power, (2) to measure different or more objective outcomes, and (3) to extend analysis to more time periods or locations. RS data may be especially useful when evaluating environmental or agricultural interventions, where outcomes such as forest cover, crop yields, land use, wildfire smoke, and pollution concentrations are often difficult to measure through surveys alone.

While the use of RS data in impact or program evaluation is not new, using RS data in randomized evaluations presents both new challenges and opportunities. Most notably, because randomized evaluations typically involve a substantial amount of researcher discretion over design decisions and primary data collection, researchers can tailor their sample, collect primary data, and interpret RS data to make the most of this new and exciting data source.

The guidelines are structured around the three primary motivations for incorporating RS data into a randomized evaluation and use case studies from Jack's own experience using RS data to evaluate the impact of rainwater harvesting techniques in Niger and of payments for ecosystem services on crop burning in India. In this blog, we briefly highlight selected challenges associated with each of these motivations and examples of how to avoid common pitfalls.

Using remote sensing data to increase statistical power

The larger a study's sample size, the more likely the researcher will be able to detect the effect of an intervention if it exists. However, researchers often face logistical or financial constraints that make it difficult to collect primary data for a large number of participants.
RS data can help predict outcomes for study participants not included in a survey or other primary data collection, making it possible to include more observations in the study and thereby increase statistical power.

While RS data may be used to increase sample size, they also introduce a new source of measurement error, since outcomes are typically predicted rather than directly observed. If the error is sufficiently large, statistical power may not improve much relative to an analysis using the smaller set of primary data. For example, if field observations of crop types are used to train a prediction model that achieves only 60 percent accuracy with the RS data, researchers may be better off simply running regressions with the field observations. Non-classical measurement error, particularly if it is correlated with treatment, may introduce new forms of bias. For example, if the crop type observations can only be obtained in the treatment group, and treatment affects crop choices, then the model may be systematically more accurate in predicting outcomes in the treatment group than in the control group.

Increasing the number or quality of outcomes measured

There are cases where RS data may be more objective, accurate, or inexpensive than primary data collected through surveys. However, some primary data will usually be necessary to calibrate or train the RS model: raw RS data can be difficult to make sense of without primary data to compare them to. Therefore, designing appropriate primary data collection is important, and it may differ from what would be collected in the RCT if no RS data were involved.

One key consideration is linking the relevant unit of intervention in the RCT to the RS data. For example, if agricultural outcomes are of interest, then the researcher needs to know the spatial location of both treatment and control fields. Measurement error will be considerably higher if only geographic points — rather than field perimeters — are collected.
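The power trade-off described above (a large sample of noisy RS predictions versus a small sample of clean primary data) can be illustrated with a minimal simulation. Everything in this sketch is hypothetical: the effect size, sample sizes, and prediction-error scale are invented for illustration and are not taken from the guidelines.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.5  # hypothetical true treatment effect

def se_diff_in_means(y, t):
    # Standard error of the simple difference-in-means estimator
    return np.sqrt(y[t == 1].var(ddof=1) / (t == 1).sum()
                   + y[t == 0].var(ddof=1) / (t == 0).sum())

# Small sample with clean, survey-measured outcomes
n_small = 200
t_small = rng.integers(0, 2, n_small)
y_small = tau * t_small + rng.normal(0, 1, n_small)

# Ten times larger sample, but outcomes are RS predictions with heavy error
n_big = 2000
t_big = rng.integers(0, 2, n_big)
y_latent = tau * t_big + rng.normal(0, 1, n_big)
y_rs = y_latent + rng.normal(0, 4, n_big)  # prediction error swamps the signal

se_clean = se_diff_in_means(y_small, t_small)
se_noisy = se_diff_in_means(y_rs, t_big)
print(f"SE with clean outcomes, n={n_small}: {se_clean:.3f}")
print(f"SE with noisy RS outcomes, n={n_big}: {se_noisy:.3f}")
```

With prediction error this large, the tenfold-bigger RS sample produces a larger standard error than the small clean sample, so the extra observations buy no power. When the error is smaller (or outcomes are validated against ground truth), the larger sample wins; that balance is exactly what the guidelines ask researchers to assess before relying on predicted outcomes.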
Notably, spatial locations must be collected for both treatment and control fields.

RS data may be particularly useful when outcomes are difficult to measure through standard survey-based techniques. For example, illegal activities, such as (in some settings) deforestation or crop residue burning, may be susceptible to substantial reporting error in surveys but could be measured more accurately with RS data, provided adequate primary data can be collected. Where primary data for training a model cannot be obtained through ground-based methods such as surveys or spot checks (for example, in a conflict zone), a small sample of very high-resolution satellite imagery may provide an alternative way to construct a dataset for training or calibrating the RS model.

Extending measurement to locations or time periods outside of the main study sample

Researchers may also want to use RS data to examine the impacts of an intervention outside of the original time period or sample of the evaluation, but as with any statistical analysis, care must be taken when conducting out-of-sample analysis.

First, out-of-sample extrapolation requires the assumption that the relationship between primary (training) data and RS data is the same in the original sample and the extended sample. For example, a land use model trained on data from a set of villages in the original evaluation may or may not perform well for a larger sample of villages that may have been affected by spillovers. Similarly, a land use model trained at a single point in time may be poorly suited to predicting the evolution of land use into the future as a result of the treatment. There will almost always be some differences in background characteristics — weather patterns, economic conditions, landscape, and so on — between the main study sample or time period and the extended sample, which may cause a model from the original sample to interpret new RS data inaccurately (a problem referred to as "model drift").

Collecting new primary data for the extended sample and recalibrating the RS model can help with both accuracy and interpretation. If researchers can identify potential opportunities to use RS data to measure spillovers or long-run effects early on, they can design the initial evaluation to make measuring these outcomes easier down the line.

Takeaways

Incorporating remote sensing data into randomized evaluations has tremendous potential to measure outcomes that would otherwise be difficult or expensive to study with traditional surveys, and it may be especially useful for evaluating environmental interventions that require physical measurements, such as land cover. However, RS data are not a panacea, and researchers need to take these considerations into account from the time they start designing their evaluations in order to determine whether and how to use RS data.

For more thorough guidance, additional practical considerations, and examples, check out the guidelines here.

Incorporating remote sensing data into randomized evaluations was originally published in CEGA on Medium.
In: Routledge studies in extremism and democracy
Populist Parties and Democratic Resilience focuses on populist parties as the main agents of populism and examines when these parties turn anti-democratic and when they remain loyal to the democratic system. Following the Brexit referendum, the election of Donald Trump, and the rise of populist parties around the globe, many observers suggested that democracy was in serious trouble. Nevertheless, while some democratic systems have been seized by populists, most of them have proven resilient. In this volume, the authors identify the conditions under which populist parties become inimical to political and societal pluralism. They offer in-depth analyses of the trajectory of populist parties in eleven European Union countries (Austria, Belgium, Czech Republic, France, Germany, Hungary, Italy, The Netherlands, Poland, Romania, and Spain). The book shows that, reflecting the diversity of national contexts, there are multiple pathways whereby populist parties' power can remain contained and subject to democratic checks and balances. Moreover, populist parties can - at times voluntarily, at other times by force of external conditions - come to adhere to the democratic rules of the game. On this basis, the volume outlines different ways in which European democracies can successfully accommodate populist parties through strategies that carefully navigate between the extremes of uncritical acceptance and outright ostracization. Drawing on the literature on democratic theory and comparative politics, this book directly contributes to the public debate on the state of democracy in Europe. It will be of interest to researchers of comparative politics, European politics, party politics, democracy, and populism.
General Statement: A Law School was first established in Richmond College in 1870. In 1890 the family of the late Mr. T. C. Williams, who had been a devoted and useful trustee, donated $25,000 as the nucleus of an endowment for the Law School. In recognition of this gift, the name of the School was changed to THE T. C. WILLIAMS SCHOOL OF LAW. At various times the School has received further generous gifts from members of the family of Mr. Williams. The largest of these gifts came through a bequest from Mr. T. C. Williams, Jr., who, like his father, was long a trustee of Richmond College, and for twenty years was the efficient chairman of the Executive Committee of the Board. As a result of these several benefactions the endowment of the School of Law now amounts to $281,700.00. The Law building and lot are valued at $125,000 and the equipment at $50,000. The T. C. Williams School of Law is an integral part of the University of Richmond. The degrees in law are conferred by the corporation of the University of Richmond.
Despite efforts to create, revise, reform, and establish an effective child welfare system in the United States, the system continues to fail to ensure the safety and well-being of maltreated children. Out of Harm's Way presents four specific changes that would lead to a more effective system; unlike many other studies of this system, the changes recommended here are affordable, scalable, and proven.
In: Političeskie issledovanija: Polis ; naučnyj i kul'turno-prosvetitel'skij žurnal = Political studies, Issue 4, p. 21-37
ISSN: 1684-0070
In: Politics and Public Policy in Scotland, p. 21-50
In: Canadian public policy: Analyse de politiques, Volume 3, Issue 3, p. 404
ISSN: 1911-9917
Blog: CEGA - Medium
Research shows that women are less likely to point out and penalize mistakes in science, and publish fewer comments and failed replications in scientific journals. What does this mean for the social sciences?

BITSS Program Manager Grace Han interviews David Klinowski (Visiting Assistant Professor at the University of Pittsburgh Katz Graduate School of Business) about his forthcoming publication, "Voicing Disagreement in Science: Missing Women."

Credit: Suad Kamardeen via Unsplash

What does it mean for the state of social science when half of all published studies fail to replicate? A recent Nature article reported that, according to some studies, 35–70 percent of published results do not hold up when replicated with new data. "Often, researchers cannot even reproduce results when using the same data and code as the original paper, because key information is missing," the authors wrote.

Despite glaring issues with the credibility of published research, replicating results remains disincentivized, and stark inequalities in their production and publication exacerbate this challenge. At the 2023 BITSS Annual Meeting, David Klinowski, visiting assistant professor at the University of Pittsburgh Katz Graduate School of Business, discussed determinants of gender gaps in education and labor markets. His forthcoming publication, Voicing Disagreement in Science: Missing Women, shows that women are less likely to point out and penalize mistakes in science and publish fewer comments and failed replications in scientific journals.

We talked to David about why women are missing from academic criticisms and what this means for the social sciences. Responses have been edited for length and clarity.

Grace Han: Your research indicates that women are missing from critiques and failed replications. Why do you think that is?

David Klinowski: Many factors may influence a researcher's decision to criticize a published paper, some of which may have a gender component.
For example, a researcher may be more or less willing to express criticism depending on her level of seniority or whether she has coauthors to pursue the criticism with. If women tend to be more junior than men, or have fewer coauthors, then these differences could contribute to the gender gap in critiques and failed replications.

Opportunity costs may also play a role. Our profession values critiques and replications much less than it values novelty. Researchers may naturally choose to focus on doing original work rather than attempting to replicate or criticize others' work.

Women may also suffer backlash to a larger degree than men for criticizing work. Research has found that women are often penalized for displaying male-stereotypical behaviors, such as assertiveness. If writing a critique or a failed replication is perceived as confrontational, then women may be reluctant to write these papers for fear of backlash. It is also possible that women criticize as much as men do but their criticisms are more often rejected during peer review.

What are the impacts of women's comparative absence from such criticisms?

DK: It is hard to say. Publishing a critique in a top journal could advance the critic's career. She could gain recognition from her peers, and citations to her work. And the process of replicating someone's work could be a great learning experience. But if negative consequences such as backlash are larger for women, then it may be entirely rational for women not to participate.

We should also worry about what impact the lack of female-authored critiques and failed replications has on science itself. Is the process of self-correction in science slower as a result?
Are there views and insights that never emerge as a result?

What does the relative absence of women from critical publications say about the scientific literature and publishing process at large?

DK: One way to interpret the results is that they illustrate how difficult and fraught it is to express post-publication criticism. It is not always easy to find out whether a replication exists, or to know what a critique may imply about the original paper. Original authors may be terrified of scrutiny, and may not be eager to share their code, data, or other details of their work.

What can be done to increase women's participation in academic criticisms? Is this wise given the potential risks and professional blowback?

DK: I think a promising approach is to make it easier and more rewarding for researchers to participate. To even start to scrutinize or replicate a paper, researchers must be able to access its data, code, and methods. BITSS does a great job at educating scholars on why and how to do transparent and reproducible research.

One way to prevent authors from feeling singled out when their work is criticized or replicated may be to conduct replications via a "replication audit," an organized effort by multiple researchers to replicate a large number of studies. The Replication Games, convened by I4R, is one such effort. Researchers may find it more attractive to participate in replication audits than to replicate a single study on their own.

Afterwards, replicators become coauthors of a meta-paper that combines the different replications conducted, but their names are not associated with the particular paper they replicated.
To increase the participation of women and other researchers in academic criticisms, we need more of this type of careful design of incentives and institutions.

In March, alongside the American Economic Association's Committee on the Status of Women in the Economics Profession and Economists for Equity at UC Berkeley, CEGA will co-host the Berkeley Replication Games, which will focus on the participation of underrepresented researchers. To find out more and register, check out our event page.

The Gender Gap in Academic Criticism was originally published in CEGA on Medium.
In: Journal of elections, public opinion and parties, Volume 29, Issue 4, p. 465-489
ISSN: 1745-7297
In: Advances in decision sciences, Volume 2017, p. 1-10
ISSN: 2090-3367
Total quality management (TQM) is a unified management approach that focuses on the continuous improvement of processes, products, and methods to exceed client expectations. This paper aims to highlight the advantages of implementing TQM in the airline industry by examining the key enablers of TQM. The study was based on a research model comprising six factors found to have a profound influence on implementing TQM. The data needed for the study were collected with the aid of questionnaires and secondary sources. The study found that procedures, policies, and a culture of knowledge transfer were the most significant enablers of successful TQM initiatives at Royal Jet Airways. To implement TQM effectively, the study suggests that management should offer appropriate recognition and rewards to employees; this will motivate them to adopt and practice TQM initiatives. Management should also arrange seminars or workshops to familiarize employees with the basic procedures and objectives of effective TQM.