The content of the linked blogs and blog posts is in many cases not subject to editorial control.
Availability warning
Permanent availability is not guaranteed and lies entirely in the hands of the blog operators. Please make your own copy if you wish to cite a blog post.
On October 14, IRI's Evidence and Learning Practice (ELP) organized a panel on conducting trauma-sensitive evaluations as part of the American Evaluation Association's (AEA) 2023 annual conference. The annual conference provides a forum for evaluators to discuss trends and priorities within the sector and share best practices for conducting effective and inclusive evaluations. IRI's portion […] The post Leveraging Trauma-sensitive Data Collection Approaches to 'Do No Harm' appeared first on International Republican Institute.
In Data Paradoxes: The Politics of Intensified Data Sourcing in Contemporary Healthcare, Klaus Hoeyer examines the paradoxes surrounding healthcare data, looking at Denmark as a case study, arguing that increased data collection does not always result in more efficient services. The book's extensive interdisciplinary research results in a rich guide for how we can think about our relationship to … Continued
Incorporating Remote Sensing Data Into Randomized Evaluations

This post was written by Kelsey Jack, CEGA affiliated professor and Associate Professor of Environmental and Development Economics at UC Santa Barbara, and Jack Ellington, Policy Associate at J-PAL Global. This post was originally posted on the J-PAL website. Credit: Pranavan Shoots via Shutterstock.com

A growing number of economists are incorporating remotely sensed (RS) data — satellite data in particular — into their studies. For randomized evaluations, remote data collection offers alluring possibilities: lower data collection costs, a longer time series of data both before and after an intervention, measurement of geographic spillovers, and more. However, the initial allure may obscure some practical challenges.

In a new set of guidelines, CEGA affiliated professor Kelsey Jack (University of California, Santa Barbara) and Kendra Walker (University of California, Santa Barbara) — with contributions from CEGA affiliated faculty Tamma Carleton and Robert Heilmayr, along with Jenny Aker, Seema Jayachandran, Namrata Kala, Rohini Pande, Ben Moscona, Sebastien Costedoat, Carlos Muñoz Brenes, and Johanne Pelletier — outline some of the opportunities and challenges associated with using remote sensing data in randomized evaluations. The guidelines provide resources and recommendations to help social scientists, practitioners, and their collaborators effectively leverage RS data in their evaluations.

Remote sensing refers to collecting data from a distance. Examples of sensors used to collect RS data include on-site monitors, manned or unmanned aircraft systems, and satellites. Predictive models and machine learning methods are often used to interpret raw RS data, such as classifying a satellite image of a piece of land as forested or not.
This interpretation stage is typically necessary before the researcher can use the data for analysis.

The guidelines are organized around three main reasons that researchers conducting randomized evaluations might wish to include RS data: (1) to increase statistical power, (2) to measure different or more objective outcomes, and (3) to extend analysis to more time periods or locations. For example, RS data may be especially useful when evaluating environmental or agricultural interventions — measuring outcomes such as forest cover, crop yields, land use, wildfire smoke, and pollution concentrations — since environmental outcomes are often difficult to measure through surveys alone.

While the use of RS data in impact or program evaluation is not new, using RS data in randomized evaluations presents both new challenges and opportunities. Most notably, because randomized evaluations typically involve a substantial amount of researcher discretion over design decisions and primary data collection, researchers can tailor their sample, collect primary data, and interpret RS data to make the most of this new and exciting data source.

The guidelines are structured around the three primary motivations for incorporating RS data into a randomized evaluation and use case studies from Jack's own experiences using RS data to evaluate the impact of rainwater harvesting techniques in Niger and of payments for ecosystem services on crop burning in India. In this blog, we briefly highlight selected challenges associated with each of these motivations and examples of how to avoid common pitfalls.

Using remote sensing data to increase statistical power

The larger a study's sample size, the more likely the researcher will be able to detect the effect of an intervention if one exists. However, researchers often face logistical or financial constraints that make it difficult to collect primary data for a large number of participants.
RS data can help predict outcomes for study participants not included in a survey or other primary data collection, making it possible to include more observations in the study and thereby increasing statistical power.

While RS data may be used to increase sample size, they also introduce a new source of measurement error, since outcomes are typically predicted rather than observed. If the error is sufficiently large, statistical power may not improve much relative to analysis using the smaller set of primary data. For example, if field observations of crop types are used to train a prediction model that achieves only 60 percent accuracy with the RS data, researchers may be better off simply running regressions with the field observations. Non-classical measurement error, particularly if it is correlated with treatment, may introduce new forms of bias. For example, if crop type observations can only be obtained in the treatment group, and treatment affects crop choices, then the model may be systematically more accurate in predicting outcomes in the treatment group than in the control group.

Increasing the number or quality of outcomes measured

There are cases where RS data may be more objective, accurate, or inexpensive than primary data collected through surveys. However, some primary data will usually be necessary to calibrate or train the RS model; raw RS data can be difficult to make sense of without primary data to compare against. Therefore, designing appropriate primary data collection is important, and it may differ from what would be collected in the RCT if no RS data were involved.

One key consideration is linking the relevant unit of intervention in the RCT to the RS data. For example, if agricultural outcomes are of interest, then the researcher needs to know the spatial location of both treatment and control fields. Measurement error will be considerably higher if only geographic points — rather than field perimeters — are collected.
Notably, spatial locations must be collected for both treatment and control fields.

RS data may be particularly useful when outcomes are difficult to measure through standard survey-based techniques. For example, illegal activities, such as (in some settings) deforestation or crop residue burning, may be susceptible to substantial reporting error in surveys but could be measured more accurately with RS data — provided adequate primary data can be collected, of course. Where primary data for training a model cannot be obtained from ground-based methods such as surveys or spot checks (for example, in a conflict zone), a small sample of very high-resolution satellite imagery may provide an alternative approach to constructing a dataset for training or calibrating the RS model.

Extending measurement to locations or time periods outside of the main study sample

Researchers may also want to use RS data to examine the impacts of an intervention outside of the original time period or sample of the evaluation, but as with any statistical analysis, care must be taken when conducting out-of-sample analysis.

First, out-of-sample extrapolation requires the assumption that the relationship between primary (training) data and RS data is the same in the original sample and the extended sample. For example, a land use model trained on data from a set of villages in the original evaluation may or may not perform well for a larger sample of villages that may have been affected by spillovers.

Similarly, the same land use model trained at a single point in time may be poorly suited to predicting the evolution of land use into the future as a result of the treatment. There will almost always be some differences in background characteristics — weather patterns, economic conditions, landscape, etc.
— between the main study sample or time period and the extended sample, which may cause a model from the original sample to interpret new RS data inaccurately (referred to as "model drift"). Collecting new primary data for the extended sample and recalibrating the RS model can help with both accuracy and interpretation. If researchers can identify potential opportunities to use RS data to measure spillovers or long-run effects early on, they can design the initial evaluation to make measuring these outcomes easier down the line.

Takeaways

Incorporating remote sensing data into randomized evaluations has tremendous potential to measure outcomes that would otherwise be difficult or expensive to study with traditional surveys, and it may be especially useful for evaluating environmental interventions that require physical measurements, like land cover. However, RS data are not a panacea, and researchers need to take these considerations into account from the time they start designing their evaluations to determine whether and how to use RS data.

For more thorough guidance, additional practical considerations, and examples, check out the guidelines here.

Incorporating remote sensing data into randomized evaluations was originally published in CEGA on Medium, where people are continuing the conversation by highlighting and responding to this story.
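The statistical power tradeoff discussed above can be made concrete with a stylized simulation (our own illustration, not taken from the guidelines): a modest sample with directly observed outcomes is compared against a five-times-larger sample whose outcomes are predicted from RS data with substantial error, using a simple difference-in-means test. All sample sizes, effect sizes, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def power(n, effect, pred_noise_sd, sims=2000):
    """Approximate the power of a difference-in-means test by simulation.

    pred_noise_sd is the extra noise added when outcomes are predicted
    from remotely sensed data rather than observed directly in the field.
    """
    hits = 0
    for _ in range(sims):
        treat = rng.integers(0, 2, n)                  # random assignment
        y = effect * treat + rng.normal(0, 1, n)       # true outcome
        y_obs = y + rng.normal(0, pred_noise_sd, n)    # outcome as measured
        t, c = y_obs[treat == 1], y_obs[treat == 0]
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        hits += abs(t.mean() - c.mean()) / se > 1.96   # detected at ~5% level
    return hits / sims

# 400 units with field-observed outcomes vs. 2,000 units with noisy RS predictions:
print(power(n=400, effect=0.3, pred_noise_sd=0.0))
print(power(n=2000, effect=0.3, pred_noise_sd=3.0))
```

With these invented numbers, the smaller, accurately measured sample detects the effect more often than the larger, noisily predicted one; shrink the prediction error and the larger RS sample wins, which is the tradeoff the guidelines ask researchers to weigh.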
If you have ever used a smartwatch or other wearable tech to track your steps, heart rate, or sleep, you are part of the "quantified self" movement. You are voluntarily submitting millions of intimate data points for collection and analysis. The Economist highlighted the benefits of good quality personal health and wellness data—increased physical activity,…
After rapid growth in 2021 and early 2022, federal and state income tax revenue collections have stalled, and, in some cases, declined. This may be a temporary soft patch arising from Federal Reserve tightening or a longer‐term phenomenon. For governments with high‐income tax dependency, the durability of this downturn will be an issue hovering over their next budget cycle. The latest Monthly Treasury Statement shows personal income tax revenue running well below prior year levels and budgetary forecasts. Together with higher‐than‐expected debt service costs, these weak income tax collections are driving the federal deficit well above expected levels. The effect on states varies. While several states collect minimal or even no personal income tax, Census Bureau figures show that nine states obtained most of their tax revenue from this source in 2021. Among these are the big blue states of California and New York as well as other states with varying political orientations including Georgia and Utah. The state most dependent on income taxation is Oregon, which derived 63 percent of its tax revenue from that single source in 2021. Thus far, New York State's challenges have been most apparent. For the first three months of its current fiscal year—April, May, and June—income tax collections were down 32.8 percent from the same three‐month period in 2022. Because New York has a highly progressive income tax system, collections depend on a relatively small number of taxpayers. The top state tax rate is 10.9 percent, but for New York City residents marginal tax rates peak at 14.776 percent. Preliminary revenue data for 2021 indicates that 1.5 percent of income tax filers accounted for 43.5 percent of total state personal income tax revenues. Those in the highest tax brackets tend to have volatile incomes from capital gains (or losses), cash bonuses, and equity‐based compensation. Stock market weakness in 2022 crimped these income sources. 
That could also be a cause for optimism because the stock market has been rebounding in recent months. But New York is vulnerable to another issue with high‐bracket taxpayers: their propensity to relocate. The State Comptroller reports that 1.9 percent of taxpayers reporting income of over $1 million left New York State in 2021 and that outmigration among this category of taxpayers has continued since, albeit at a slower rate. The main beneficiary of outmigration from New York is Florida, which has seen rapid in‐migration often attributed to its business‐friendly regulatory environment and lack of a state income tax. But recent Census data show that Miami, a city that had been attracting New Yorkers, lost population between 2019 and 2022. This finding led Paul Krugman to conclude that "the buzz around finance moving to Miami seems to have died down." Krugman goes on to note, however, that it is Miami's rising home prices that appear to be driving residents out (mostly to other parts of Florida). A higher median home price is unlikely to be much of a deterrent to high‐income individuals looking to relocate from New York. It is also worth noting that West Palm Beach, which has been attracting New York‐based hedge fund managers, continues to grow. Another state that has seen declining income tax receipts is California, where marginal rates top out at 13.3 percent. June 2023 income tax collections of $9.6 billion fell far shy of the $13.5 billion the state received in June 2022. While May collections were relatively flat, April saw a 71 percent drop in collections, although that was mostly due to the postponement of the 2022 tax year filing deadline to October due to floods. Like New York, California has been suffering outmigration, but a recent Bloomberg article cited by Krugman questions whether it is the rich who are leaving. 
Bloomberg cites state data that shows a 116,000 increase in the number of taxpayers reporting over $1 million of annual income between 2019 and 2021. But it is likely that most of this increase is due to existing residents reporting higher income during the stock market boom of 2021. We will need to see data for subsequent years to determine whether California is retaining, let alone attracting, high-income taxpayers using this metric. By contrast, IRS migration data show that net outmigration cost California $29 billion of taxable income in 2021, although it is not clear how much of that amount is related to high-income taxpayers. Earlier, I mentioned that Georgia and Utah, two more fiscally conservative states, are also highly dependent on income tax revenues. But, unlike New York and California, these states do not have highly progressive income tax rates. Utah has a flat 4.85 percent rate, while Georgia has graduated rates of up to 5.75 percent but is now migrating to a flat rate of 5.49 percent. Unlike New York and California, these states had net inflows of taxable income in 2021. But they may not be doing so well in 2023. An analysis of Georgia's monthly revenue reports suggests a 25.4 percent decline in income tax revenue for 2023's second calendar quarter versus the prior year, which is only marginally better than New York State's results. Utah does not provide monthly figures, but for the full fiscal year ended June 30, personal income tax collections declined 5.3 percent. So it appears that relatively low and flat income tax rates do not fully shield against the revenue volatility that comes with levying a personal income tax. While the best policy is to eliminate state income taxes, those states that retain them should monitor revenues carefully and use reserves to cushion the impact of fluctuations.
At a hearing of the Senate Select Committee on Career Technology and the New Economy, PPIC researcher Sarah Bohn discussed how improvements in the state's collection of wage data could inform strategies for promoting economic growth and opportunity.
A new open data platform will accelerate robust and comprehensive research in the agricultural sector

This post was written by Jenna Fahle (CEGA), Radhika Goyal (UCSD), Vinny Armentano (UCSD), and Craig McIntosh (UCSD).

Introduction to the ATAI Data Portal

Since 2009, the Agricultural Technology Adoption Initiative (ATAI), co-managed by the Center for Effective Global Action (CEGA) and the Abdul Latif Jameel Poverty Action Lab (J-PAL), has generated robust evidence of the impacts of agricultural technologies, such as stress-tolerant rice or mobile-phone based agricultural extension, on small-scale farmer welfare. Today, ATAI launched a new open data platform to bring together the best evidence from ATAI-funded research in a single portal, making it easily accessible to researchers and policymakers alike. The initiative aims to foster collaboration and evidence-informed decision-making in the agricultural sector, ultimately contributing to the advancement of the most effective agricultural practices and improving farmer welfare.

Why make data open?

Access to high-quality data has long been recognized as a significant obstacle in social science research. To address this issue, data repositories like the J-PAL Dataverse have emerged, making it easier for researchers, policymakers, and others to access and utilize data from completed research studies. In recent times, the effectiveness of these data repositories has been bolstered by data sharing policies put into place by funders, journals, and research organizations. UC Berkeley's Initiative for Transparency in the Social Sciences (BITSS) — incubated at CEGA — champions these and other open data approaches as a standard practice that promotes transparency and reproducibility of evidence, strengthening the scientific ecosystem and bolstering the credibility of research findings.

The ATAI Data Portal goes beyond the principles of open data by incorporating data harmonization.
Data harmonization involves the collection of data from various sources or, in the case of ATAI, a research portfolio, in a manner that ensures users have a comprehensive and comparable view of the information.

Harmonized data holds tremendous value for researchers aiming to extract insights from multiple studies. In the past, researchers had to collect datasets from various sources, investing valuable time in cleaning and integrating the data. Often, the unavailability of raw data hindered such comparisons, and the resulting publicly available data lacked sufficient information for meaningful analyses. However, researchers now have a powerful tool at their disposal. With the ATAI Data Portal, they can access harmonized data, enabling them to conduct meta-analyses and explore the external validity and generalizability of research results more efficiently and effectively. This transformative platform opens up new avenues for robust and comprehensive research in the agricultural sector.

The ATAI Data Portal also improves the richness and quality of datasets from ATAI-funded projects in several ways. For instance, a number of ATAI-funded studies contain georeferencing, or latitude and longitude coordinates for agricultural fields, households, or study administrative boundaries. When geographic coordinates are available, the ATAI Data Portal overlays the project dataset with environmental variables — such as temperature, precipitation, night lights, and forest cover — to expand the richness and utility of the data. (Many predictive models rely on this kind of information as ground truth data.)

To maintain the anonymity of the surveyed population, the data linkage employs industry-standard geo-masking techniques.
By implementing these measures, the ATAI Data Portal ensures that the privacy and confidentiality of the participants are preserved while providing valuable insights into the relationships between agricultural practices and environmental factors.

During the data harmonization process, meticulous data cleaning is carried out to ensure data integrity. This includes harmonizing units, eliminating negative values, and removing duplicate records. These measures contribute to the overall reliability and consistency of the data made available through the ATAI Data Portal, fostering more robust and trustworthy research outcomes.

Thus, the ATAI Data Portal offers a novel approach in that it features high-quality, harmonized data integrated with environmental variables in an open and accessible format.

"This portal is a first step in an effort to allow datasets from randomized controlled trials to be put to a broader set of uses. By harmonizing core agricultural variables to the fullest extent possible as well as providing broad access to raw data, the portal will allow the research community to aggregate across studies and geographies in a way not possible in any single study." — Craig McIntosh, ATAI Co-Chair and Professor of Economics at UCSD

ATAI-data.org launched with seventeen datasets based in Bangladesh, Ghana, Ethiopia, India, Kenya, Mozambique, Uganda, and Zambia. The portal will continue to grow as more research teams complete and submit their datasets to ATAI.

What comes next?

The ATAI Data Portal is a public good that will increase in volume and value over time as more open datasets from ATAI become available and more researchers make use of it. The ATAI Data Portal is open-source and freely available.

ATAI has seized an opportunity to institutionalize harmonized, open data and further standardize data collection for agricultural randomized evaluations — making every research step count.
We hope that this model is an encouraging approach and tool for researchers working to evaluate the effectiveness of agricultural development programs.

For more information and for portal documentation, please visit atai-data.org.

Making every research step count: Introducing the ATAI Data Portal was originally published in CEGA on Medium, where people are continuing the conversation by highlighting and responding to this story.
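The cleaning and masking steps described above can be sketched in a few lines. The pipeline below is hypothetical: the column names, unit conversion, and the 1-5 km "donut" displacement parameters are invented for illustration and are not ATAI's actual code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical raw survey extract; all names and values are invented.
raw = pd.DataFrame({
    "hh_id":      [1, 1, 2, 3, 4],
    "yield_unit": ["kg", "kg", "t", "kg", "kg"],
    "yield_amt":  [850.0, 850.0, 1.2, -5.0, 930.0],
    "lat":        [23.81, 23.81, 23.79, 23.75, 23.90],
    "lon":        [90.41, 90.41, 90.38, 90.44, 90.36],
})

# 1. Harmonize units: convert tonnes to kilograms.
is_t = raw["yield_unit"] == "t"
raw.loc[is_t, "yield_amt"] *= 1000
raw.loc[is_t, "yield_unit"] = "kg"

# 2. Eliminate impossible negative values and drop duplicate records.
clean = raw[raw["yield_amt"] >= 0].drop_duplicates(subset="hh_id")

# 3. Geo-mask: displace each point 1-5 km in a random direction
#    ("donut" masking) so households cannot be re-identified.
#    Small displacements, so a flat-earth approximation is adequate.
dist = rng.uniform(1.0, 5.0, len(clean))
theta = rng.uniform(0, 2 * np.pi, len(clean))
clean = clean.assign(
    lat=clean["lat"] + dist * np.sin(theta) / 111.32,  # km per degree latitude
    lon=clean["lon"] + dist * np.cos(theta) / (111.32 * np.cos(np.radians(clean["lat"]))),
)
```

The same idea scales to a real portfolio: units are mapped to a common standard, invalid and duplicate rows are dropped before release, and only the displaced coordinates (never the originals) are published alongside the harmonized outcome variables.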
Data collection on Americans has become a giant business over the last 30 years, one that law enforcement agencies benefit from enormously as it allows them to effectively circumvent the Fourth Amendment, outside of circumstances requiring the most intrusive surveillance techniques.
Theme: The potential impact of TikTok on Western security has raised concerns regarding data collection, algorithmic control and potential interference by the Chinese government. Although ByteDance, the parent company of TikTok, has proposed the establishment of local data centres, these measures do not effectively address the fundamental risks that leave European society susceptible to potential […] The post The TikTok controversy: its impact on international security appeared first on Elcano Royal Institute.
Prior to the 2022-2023 legislative session, five states (California, Virginia, Utah, Colorado, and Connecticut) had passed consumer data privacy laws, but now the patchwork of state laws has more than doubled. Congress has continued to debate a potential federal standard, with the American Data Privacy and Protection Act in the 117th Congress being the first such proposal to be voted out of a committee; however, without momentum around a federal standard and with continuing and new concerns about data privacy from consumers, many states are undertaking their own policy actions around data privacy. The patchwork nature of these individual state laws can potentially amplify compliance costs for businesses operating across different states and create confusion among American consumers, whose digital footprint often crosses state borders. The potential financial impact of complying with 50 distinct state laws could surpass $1 trillion over a decade, with a minimum of $200 billion being borne by small businesses. As this patchwork grows, what does data privacy look like as the 2022-2023 legislative session comes to a close?

What happened with data privacy in 2022-2023?

As of 2023, the majority of states have considered data privacy legislation, likely in response to consumer concerns on this issue — 32 state legislatures have kicked off the debate and presented bills. Ten states have already signed comprehensive privacy bills into law. Six states—Florida, Indiana, Iowa, Montana, Tennessee, and Texas—enacted data privacy legislation this year. Oregon is the latest state to pass a comprehensive law, which is now awaiting the governor's signature. Additionally, there are five more bills under consideration as of July 2023. Most of these bills share similarities with the existing data privacy laws in California, Virginia, and Utah.
States with data privacy acts enacted in 2023 that have followed the California model

Of the five additional states that enacted data privacy laws this year, Indiana and Montana appear to most closely resemble California's model, which relies heavily on administrative rules. Montana, for example, even goes beyond California by creating a right for consumers to revoke their consent to data processing. None of the states that have enacted laws this year have created a private right of action, as seen in a limited capacity in the current California law.

States that have followed the Virginia or Utah model

Notably, a growing number of states have passed or considered a data privacy framework that more closely resembles the laws initially passed in Utah and Virginia. This includes Iowa, Tennessee, and Texas, as well as a bill still under consideration in North Carolina. Such models provide baseline protections but typically have fewer obligations or areas of covered data, limit enforcement to the attorney general, and are more likely to provide safe harbors. Still, each proposal remains unique. For example, Tennessee became the first state to create a compliance safe harbor for companies complying with National Institute of Standards and Technology (NIST) standards. Other states have considered similar carve-outs for existing standards. Such an approach may lessen some problems with the patchwork by providing a way for a single set of best practices that could be compliant from state to state.

Notable privacy bill trends to watch

In addition to the growing patchwork of state privacy laws, this latest legislative term has also provided additional information about the debates around data privacy legislation. Notably, private rights of action continue to raise concerns and may make proposals less likely to succeed. Additionally, a new trend of health privacy-focused bills is emerging at the state level.
Currently, four states that still have active bills—Maine, Massachusetts, New Jersey, and Rhode Island—contemplate creating a private right of action. However, to date, all bills from Hawaii to Mississippi to New York that included provisions on the private right of action have failed. New York's failed "It's Your Data Act" provided that consumers "need not suffer monetary or property loss as a result of such violation in order to bring an action for a violation." The Washington Privacy Act was passed only after eliminating the private right of action, which was later reinstated in a very limited form by allowing a private right of action only for injunctive relief without monetary damages. The inclusion of a private right of action for statutory violations, allowing individuals to sue companies without the need to prove that actual harm was inflicted upon them, has grave consequences. Such a private right of action for statutory damages raises significant concerns about how litigation could be used to prevent innovation. While a private right of action would not pose any significant issues if the burden of proof were solely tied to demonstrating the harm, the problem arises when there is no requirement to prove harm. Such a provision could prompt a surge in class action lawsuits, thereby impeding innovation, especially among small companies that may become more risk-averse for fear of being sued. The United States, with its distinct litigation system and features such as the absence of a "loser pays" rule, is more susceptible to the abuse of the private right of action for statutory violations. Illinois's Biometric Information Privacy Act provides such a right in the context of certain collection of data and has seen everything from photo tagging to trucking companies be sued. Most of the resulting funds have gone to attorneys, with limited amounts to the class members alleged to have been "violated" by the action.
In the photo tagging case, Facebook was directed to pay $650 million without any demonstration of harm. In the trucking case, truck drivers secured a $228 million judgment because, as employees, they were required to scan fingerprints to confirm their identity, again without any showing of actual harm.

A new trend to watch is the ongoing debate over bills aimed at regulating consumer health data, primarily reproductive health data. Washington is the first state to pass such a law, which is set to take effect in 2024. In a post-Roe context, similar legislation is likely to emerge, particularly in blue states, regulating actors that are not governed by HIPAA. Given the broad scope of what can be classified as health data, debates over its definition, collection, and use are likely to be heated. Such laws also raise unique compliance questions for a variety of popular apps that are not regulated as medical devices but give consumers empowering ways to track information from blood sugar to mental health.

What do state data privacy laws mean for consumers, innovators, and the federal privacy policy debate?

States are acting on data privacy in part because of continued interest in the issue from constituents. In 2022, more than 80% of voters polled supported the idea of a federal data privacy law. Given that data privacy remains a concern, and given the lack of progress on a federal bill, it is unsurprising that much of the debate has shifted to the state level, where legislatures can move more quickly. But is this good for consumers and innovators?

Is there a case for data privacy legislation anyway?

While many polled consumers favor data privacy legislation, their actual privacy preferences vary widely.
In fact, the overwhelming support for data privacy becomes far more complicated when you consider questions like how much an individual would be willing to pay for social media or other products rather than use an ad-supported version. Similarly, research has documented a "privacy paradox": revealed preferences for privacy tend to be weaker than stated preferences. If policymakers are to consider data privacy legislation, they should focus on real and widely agreed-upon harms, not merely expressed preferences. This approach avoids a shift toward a more European "privacy fundamentalism," which is more likely to conflict with other rights, such as speech, and to create a static regime that deters innovation, including innovation that may improve privacy.

Understanding the problems of a patchwork approach

The continuing, emerging patchwork of state data privacy laws is likely to lead to both increased costs and confusion, not only for the businesses that handle data but also for consumers. A state-by-state approach leaves both innovators and consumers uncertain about what may or may not be done with their data. For consumers, this can create confusion about why certain products or features are unavailable in their state, or about what rights they have to obtain or correct their data online. Particularly for small businesses, a state-by-state approach is likely to raise costs significantly as new compliance concerns arise in each state. In some cases, this may mean applying the most restrictive state's standard everywhere; in others, it may require developing state-specific features to comply. In either case, both consumers and innovators lose out: consumers may lose features because of standards imposed by legislatures in other states, and innovators may find themselves focusing on compliance rather than the improvements that best serve their customers.
Nor is a state-by-state approach merely a second-best solution: it is almost inevitable that proposals will eventually conflict with one another, making it impossible to comply with all such state laws. The most obvious example would be one state choosing an opt-out model while another chooses an opt-in model, but many other conflicts could arise around issues such as data minimization or retention. Given the likelihood of conflicts and the burden on out-of-state businesses, a state-by-state approach should also give rise to dormant Commerce Clause concerns. The interstate (and international) nature of data means that a federal standard should be considered constitutionally necessary in this case.
Conclusion

The 2022-2023 session saw a doubling of the number of states with consumer data privacy laws. While policymakers may feel they are responding to constituent concerns, the patchwork approach remains problematic for both innovators and consumers.
Housed in the Center for Political Studies at the Institute for Social Research, the Constituency-Level Elections Archive (CLEA) is a repository of detailed results from lower and upper house elections around the world. The project involves students at all stages of the data collection process, offering a valuable training experience. The post CLEA Provides Student Opportunities for Impact and Growth first appeared on Center for Political Studies (CPS) Blog.
Post developed by Anne Pitcher, Rod Alence, Melanie Roberts, and Katherine Pearson Secure elections are essential to democracy. ObSERV, a new study by researchers at the Electoral Institute for Sustainable Democracy in Africa (EISA) and the University of Michigan, with support from the University of Witwatersrand (Wits), presents a data collection methodology that improves the […] The post ObSERV Study improves methods for observing elections and election-related violence first appeared on Center for Political Studies (CPS) Blog.
Last month, CEGA held its ninth annual Measuring Development (MeasureDev) conference on "Mitigating the Risks and Impacts of Climate Change," in partnership with the World Bank's Development Impact Evaluation (DIME) Department, Data Analytics and Tools Unit (DECAT), and the University of Chicago's Development Innovation Lab (DIL). Speakers showcased innovative approaches for measuring and tracking climate-related risk, developing effective responses, and evaluating outcomes in data-sparse environments. Sean Luna McAdams, CEGA's Data Science for Development Program Manager, shares key insights from the event here.

Climate change is disrupting weather patterns around the world. Look no further than the unhealthy levels of smoke in the Northeast's skies last week. The impacts on human activity require urgent investments in mitigation and resilience for those most vulnerable. Last month, CEGA, DIL, and the World Bank brought together some of the most innovative social and natural scientists working on this existential challenge to share how they are pushing the frontiers of data collection, for example by using remote sensing technologies, engaging in participatory data collection, and effectively (and meaningfully) integrating different data streams.

[Image: University of Chicago's Rachel Glennerster emphasized the importance of measurement to help diagnose, mitigate, and adapt to climate change, particularly to incentivize green investments in LMICs. Credit: World Bank.]

A Call for Better Measurement

"Mitigation is one of the true global public goods," noted the University of Chicago's Rachel Glennerster in her keynote address. Indeed, efforts by one country or group of countries to reduce carbon emissions will have benefits that are felt worldwide.
Recognizing that low- and middle-income countries (LMICs), which have historically contributed little to climate change, nevertheless present growing opportunities to mitigate emissions for the whole planet, Rachel suggested high-income countries could fund highly cost-effective mitigation efforts in LMICs. These payments should not be considered aid, as they benefit the world and offset high-income countries' damage to the atmosphere. To do this effectively, we need scalable approaches to measuring emissions, among many other critical indicators.

Cost-Effective Measurement with Remote Sensing

Many speakers addressed the challenge of cost-effective measurement through the use of remote sensing. CEGA Affiliate Tamma Carleton highlighted the promise of satellite imagery and machine learning (SIML) to improve climate management. Her own work on MOSAIKS demonstrates the potential for these data and predictive models to increase the spatial coverage and resolution of georeferenced survey and administrative data, while lowering barriers to access for decision-makers in low-resource settings. Similarly, Dieter Wang showcased how higher-resolution, higher-frequency satellite imagery alongside cloud-penetrating sensors can improve estimates of how well conservation policies in the Brazilian Amazon are preventing deforestation; better measurement in this case makes it possible to reward governments through bonds whose rates are tied to mitigation performance. Kangogo Sogomo discussed a novel approach that leverages satellite imagery to predict maize yields at a finer scale with fewer computational resources.

[Image: Since 2010, new satellites have come online that increase both sensor resolution and cloud-free revisit rate. These advances provide researchers with more granular and frequent imagery data to incorporate into their analyses. Credit: Burke et al., Science 2021.]

Of course, remote sensing is not limited to satellites and can inform adaptation and resilience alongside mitigation.
Samuel Seo, for example, compared measurement strategies for methane emissions from a large, unmanaged landfill in Dakar, Senegal, by collecting data with human enumerators, drones, and satellites. Across the board, these measurements suggest that current IPCC approaches underestimate total emissions from such sites by more than half. Bridget Hoffman, in turn, used low-cost air pollution sensors along bus routes and within buses in Dakar to understand the effects of an infrastructure project on air quality. Drones, stationary sensors, and other instruments can all provide rich data at scale to improve the evaluation and monitoring of climate mitigation and adaptation strategies.

The Role of Participatory Data Collection

Researchers and climate practitioners not only think creatively about the sensors they use to collect data; they also innovate data collection and its infrastructure to make it more participatory. Kangogo Sogomo noted increasing mobile phone use and internet penetration across the global South, suggesting, "climate action is urgent… there is still an opportunity for having participatory methods [for data collection]." Tom Bewick, for example, has trained Indigenous communities in Africa and Latin America to collect georeferenced data on planted trees to improve the monitoring of their growth and local collective governance. Similarly, Kenneth Mubea, who works to conserve mangrove forests, discussed how his research assembled teams of students to work with local communities to collect georeferenced data. Participatory approaches can extend to model validation, as in the work of Alejandra Mortarini, who worked with organizations that have long-standing relationships with communities living in informal settlements in Honduras to help validate the outputs of a predictive model and calibrate it to improve its performance.
By incorporating local actors into data collection efforts, we can increase its frequency, provide greater access, and contribute to a local culture of evidence use.

New Approaches to Data Integration

A third strategy for making data collection cheaper and more effective relies on exploiting efficiencies generated by integrating different data streams. The World Bank's Stéphane Hallegatte stressed the opportunity of integrating different data sources in his remarks.

"We have all this fantastic progress in measurement with remote sensing and big data, we have these household surveys that are playing an absolutely critical role to measure what we are doing and to prioritize," said Hallegatte. "One of the big challenges is to make them completely interlinked and to flow smoothly from the spatial to household surveys, and have household surveys that can be more flexible when there is a shock that can use data coming from satellites to maybe focus and do dedicated surveys in places that have been affected by a shock."

In particular, Hallegatte stressed that traditional measures of vulnerability may lead us to miss some individuals who are critically underprepared to face the "long tails" of climate shocks. Adaptive research designs can help us understand which interventions work best in particular contexts and communities, improving our understanding of how climate systems affect those who are socioeconomically and environmentally most vulnerable and how we may build resilience together.

[Image: Hallegatte stressed how different metrics of climate vulnerability can lead policymakers to prioritize different areas. Here we see how four risk indicators (annual asset risk, annual consumption poverty increase, socioeconomic resilience, and annual well-being risk) map onto the Philippines. Source: Hallegatte 2023.]

Paola Agostini, Mohammed Basheer, and Erwin Knippenberg simulated physical and social systems in their research designs.
These simulations enabled each of them to estimate new quantities of interest, like the decision space of negotiations over potential dam designs in the Nile River Basin, the cost-per-benefit of different land restoration interventions in Tajikistan, or the percentage of the population at risk of falling into poverty due to weather shocks in Afghanistan. Ben Brunckhorst showed how incorporating weather predictions unlocks the possibility of anticipatory cash transfers, with demonstrable effects on household resilience to flooding in Bangladesh.

Through better measurement we can improve our collective efforts to meet the challenge of climate change. As Hallegatte reminded us in his keynote remarks, how we construct these measures of impact fundamentally affects which regions, communities, and interventions we prioritize. A critical part of this effort will be to leverage the measurement strategies highlighted during MeasureDev 2023 to channel resources to the places and communities where interventions to mitigate and adapt to climate change will have the greatest impact. In so doing, measurement can contribute to a more equitable future by incentivizing green investments in LMICs.

How does Measurement Contribute to a Habitable Planet for All? was originally published in CEGA on Medium.
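The satellite-to-survey linkage Hallegatte describes (using spatial shock data to target dedicated follow-up surveys) can be sketched in a few lines. This is a minimal illustration with entirely hypothetical data and grid coordinates, not any group's actual pipeline; a real workflow would read a flood raster and household GPS coordinates instead.

```python
# Hedged sketch: linking a satellite-derived shock layer to a household survey.
# All data below are hypothetical; a real workflow would load a flood raster
# (e.g. with rasterio) and georeferenced survey locations.

# A tiny "raster": 1 = flooded cell, 0 = not flooded, indexed by (row, col).
flood_grid = [
    [0, 0, 1],
    [0, 1, 1],
    [0, 0, 0],
]

# Surveyed households: (household_id, row, col) in grid coordinates.
households = [
    ("hh_01", 0, 0),
    ("hh_02", 0, 2),
    ("hh_03", 1, 1),
    ("hh_04", 2, 2),
]

def flag_for_followup(grid, hh_points):
    """Return IDs of households whose location falls in a shock-affected cell."""
    return [hh for hh, r, c in hh_points if grid[r][c] == 1]

# Households to prioritize for a dedicated follow-up survey after the shock.
affected = flag_for_followup(flood_grid, households)
print(affected)
```

The design choice here mirrors the quote: the spatial layer decides *where* to survey, while the household survey itself supplies the socioeconomic detail the satellite cannot see.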
This morning, Chris Sabatini at Chatham House moderated a Zoom panel entitled "How Prepared is Venezuela's Healthcare System for Covid-19?" The participants were:

- José Miguel Vivanco, Executive Director, Americas Division, Human Rights Watch
- Tamara Taraciuk Broner, Acting Deputy Director, Americas Division, Human Rights Watch
- Dr. Kathleen Page, Associate Professor of Medicine, Johns Hopkins University

The answer to the panel's title question is, as anyone paying even passing attention would accurately guess, emphatically no. There is no good news. There is no silver lining. A massively corrupt and uncaring dictatorship is letting people die and lying about everything. Doctors are washing their hands with water dripping from air-conditioning units before doing surgery. Many hospitals don't have potable water. Aid is tricky, and the gasoline shortage makes it hard to reach the interior of the country. We have no idea how many people have the virus or how many people have died from it. Repression makes it hard to find out anything. BTW, I had never heard the phrase "verbal autopsy" before. That's where we are in terms of data collection: down to trying to get information on demand at funeral homes, and even then people are afraid to talk openly. It's an onslaught of bad news, but Covid-19 has distracted the world from the disaster.

What can the international community do? We need a truly multilateral effort with a common position. José Miguel Vivanco lamented the Trump administration's embrace of militaristic rhetoric, which makes things worse. The UN is barely paying attention. Johns Hopkins worked with Human Rights Watch before Covid-19, and already last year warned that Venezuela was in a dire healthcare crisis.

Anyway, it was a really interesting discussion, but one that left me sad and frustrated.
After the 2022 election, it looked like Donald Trump's support in the Republican Party was finally weakening. As Trump made a comeback in the middle of 2023, some people said that one reason for his resurgence was that Republicans were rallying around him because he had been indicted on various charges. Now this seems to have become conventional wisdom: a news story in the NY Times says, "But far from diminishing the former president's standing with Republicans, the charges actually rallied the party around him."

A few months ago, I looked for relevant data. Lots of surveys asked if people had a favorable or unfavorable view of Trump, but I wanted ones that asked for degree of favorability--my idea was that the indictments wouldn't convert people from unfavorable to favorable, but they might make people who were already favorable more strongly committed. Surveys that ask for degree of favorability or unfavorability are less common, and I didn't find enough for an analysis. After the New Hampshire primary, I tried again and found a source I hadn't known about before: a company called Echelon Insights has monthly polls that include a question about views of Trump (very favorable, somewhat favorable, somewhat unfavorable, and very unfavorable) and breaks them down by party identification.

[Chart: Very and somewhat favorable ratings of Trump among Republicans.]

The first indictment came on March 30, after data collection for the March survey was complete. There was a lasting increase in very favorable ratings and a decline in somewhat favorable ratings starting in April. Of course, in principle the pattern could be the result of something else that happened at around the same time, but I can't think of any other obvious candidate. There was no lasting change after the second and third indictments, but it seems reasonable that the first one would have more impact.
There has also been some polarization of ratings among independents, with both very favorable and very unfavorable ratings becoming more common, but this was a gradual change--there's no sign that the indictments had any special impact. The latest figures among independents are 17% very favorable and 46% very unfavorable. For Democrats, very unfavorable ratings have been steady at about 85 percent over the whole time period.
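The before/after comparison described above amounts to comparing mean "very favorable" shares before and after the first indictment. A minimal sketch follows; the monthly shares here are hypothetical placeholders, not the actual Echelon Insights figures.

```python
# Hedged sketch of a pre/post comparison around the first indictment (March 30).
# The monthly "very favorable" shares among Republicans below are hypothetical,
# illustrative numbers only -- not the real Echelon Insights data.
monthly_very_favorable = {
    "2023-01": 50, "2023-02": 51, "2023-03": 50,   # surveys before the indictment
    "2023-04": 57, "2023-05": 58, "2023-06": 57,   # surveys after the indictment
}

def mean(xs):
    return sum(xs) / len(xs)

# "YYYY-MM" strings sort lexicographically, so string comparison splits the months.
before = [v for m, v in monthly_very_favorable.items() if m < "2023-04"]
after = [v for m, v in monthly_very_favorable.items() if m >= "2023-04"]

shift = mean(after) - mean(before)
print(f"pre mean: {mean(before):.1f}, post mean: {mean(after):.1f}, shift: {shift:+.1f} pts")
```

A level shift like this is only suggestive, of course; as noted above, anything else happening around the same date would produce the same pattern.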