Search Results
43 results
Why SNAP Works: A Political History—and Defense—of the Food Stamp Program by Christopher Bosso. Oakland, CA: University of California Press, 2023, 257 pp., $24.95 (US) (Hardcover). ISBN 978-0520392816
In: Journal of Policy Analysis and Management: The Journal of the Association for Public Policy Analysis and Management, Volume 43, Issue 2, pp. 644-648
ISSN: 1520-6688
Lessons from SSA Demonstrations for Disability Policy and Future Research, edited by Austin Nichols, Jeffrey Hemmeter, and Debra Goetz Engler, Rockville, MD: Abt Press, 2021, 510 pp., $23 (US), paperback
In: Journal of Policy Analysis and Management: The Journal of the Association for Public Policy Analysis and Management, Volume 41, Issue 2, pp. 653-658
ISSN: 1520-6688
Leveraging Experimental Evaluations for Understanding Causal Mechanisms
In: New Directions for Evaluation: A Publication of the American Evaluation Association, Volume 2020, Issue 167, pp. 145-160
ISSN: 1534-875X
Abstract: Experimental evaluations, especially when grounded in theory-based impact evaluation, can provide insights into the mechanisms that generate program impacts. This chapter details variants of experimental evaluation designs, as well as analytic strategies that leverage experimental evaluation data to learn about causal mechanisms. The design variants are poised to illuminate causal mechanisms related to program implementation and the contribution of selected components of multifaceted programs. The analysis strategies lend themselves to illuminating causal mechanisms related to participants' responses to program components, as well as to the contributions of selected program components themselves. The chapter offers an example from one theory-based impact evaluation that embedded both design and analytic strategies to examine the extent to which specific program components and participant experiences might be identified as causal mechanisms. The particular value of this theory-based experimental strategy is that the results are rigorous and potentially highly relevant to policy and practice.
The Big Evaluation Enterprises in the United States
In: New Directions for Evaluation: A Publication of the American Evaluation Association, Volume 2018, Issue 160, pp. 97-124
ISSN: 1534-875X
Abstract: This issue of New Directions for Evaluation addresses the market for evaluation, where demand and supply intersect to generate knowledge relevant to decisions for policy-making and program practice. Lemire et al. (this issue) elaborated on the U.S. federal evaluation market. This chapter examines the "big evaluation enterprises" within that market.
On the "How" of Social Experiments: Analytic Strategies for Getting Inside the Black Box
In: New Directions for Evaluation: A Publication of the American Evaluation Association, Volume 2016, Issue 152, pp. 85-96
ISSN: 1534-875X
Abstract: Analysis of post-random-assignment (endogenous) events or experiences in experimental evaluation data is becoming increasingly widespread. This chapter highlights some analytic strategies for revealing mediators of program impacts. In particular, it considers the kinds of research questions that instrumental variables (IV) estimation is suited to answer and how using IV in conjunction with a randomized experiment can advance what we know about the mediational effects of policies or programs. It also explains how the Analysis of Symmetrically Predicted Endogenous Subgroups (ASPES) can assist in answering other, related kinds of evaluation questions. It illustrates how these approaches have been used in practice with an example from the Moving to Opportunity (MTO) Demonstration.
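The IV logic this abstract describes can be sketched in a few lines: in a randomized experiment, random assignment serves as an instrument for endogenous treatment receipt, and the Wald estimator scales the intent-to-treat effect by the first-stage take-up difference. A minimal simulation; all data and parameter values below are illustrative, not drawn from the MTO Demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: random assignment z, endogenous take-up d, outcome y.
n = 10_000
z = rng.integers(0, 2, n)            # random offer (the instrument)
compliance = rng.random(n) < 0.6     # 60% of those offered actually participate
d = z * compliance                   # treatment received (endogenous)
y = 2.0 * d + rng.normal(0, 1, n)    # true effect of participation = 2.0

# Intent-to-treat effect: difference in mean outcomes by random assignment.
itt = y[z == 1].mean() - y[z == 0].mean()

# Wald/IV estimator: ITT scaled by the first-stage difference in take-up.
first_stage = d[z == 1].mean() - d[z == 0].mean()
late = itt / first_stage             # effect on compliers

print(round(first_stage, 2), round(late, 2))
```

Because assignment is random, the ITT is unbiased; dividing by take-up recovers the effect for compliers without comparing self-selected participants to non-participants.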
Laura R. Peck on "Failure of Intervention or Failure of Evaluation: A Meta-Evaluation of the National Youth Anti-Drug Media Campaign Evaluation"
In: Substance Use & Misuse: An International Interdisciplinary Forum, Volume 47, Issue 13-14, pp. 1425-1426
ISSN: 1532-2491
Out of Reach: Place, Poverty, and the New American Welfare State. By Scott W. Allard. New Haven, Conn.: Yale University Press, 2009. Pp. ix+266. $35.00 (paper)
In: The American Journal of Sociology, Volume 115, Issue 4, pp. 1339-1340
ISSN: 1537-5390
Do Antipoverty Nonprofits Locate Where People Need Them? Evidence From a Spatial Analysis of Phoenix
In: Nonprofit and Voluntary Sector Quarterly: Journal of the Association for Research on Nonprofit Organizations and Voluntary Action, Volume 37, Issue 1, pp. 138-151
ISSN: 1552-7395
This work explores the spatial connections between nonprofit organizations with an antipoverty focus and poor residents in the greater Phoenix, Arizona, metropolitan area. Substantial population and service growth occurred in Phoenix between 1990 and 2000, with almost twice the number of organizations and almost three times the expenditures in 2000 as in 1990. Empirical evidence supports the claim that these nonprofits locate in areas of greater need, but evidence that those organizations affect neighborhood poverty is weak, suggesting that the government should not retract services and that further nonprofit organizational growth may be necessary. A comprehensive measure of accessibility and a two-way causal analysis are proposed for future replication.
Stereotypes and Statistics: An Essay on Public Opinion and Poverty Measurement
In: Journal of Poverty: Innovations on Social, Political & Economic Inequalities, Volume 11, Issue 3, pp. 15-28
ISSN: 1540-7608
Using Cluster Analysis in Program Evaluation
In: Evaluation Review: A Journal of Applied Social Research, Volume 29, Issue 2, pp. 178-196
ISSN: 1552-3926
The conventional way to measure program impacts is to compute the average treatment effect; that is, the difference between a treatment group that received some intervention and a control group that did not. Recently, scholars have recognized that looking only at the average treatment effect may obscure impacts that accrue to subgroups. In an effort to inform subgroup analysis research, this article explains the challenge of treatment group heterogeneity. It then proposes using cluster analysis to identify otherwise difficult-to-identify subgroups within evaluation data. The approach maintains the integrity of the experimental evaluation design, thereby producing unbiased estimates of program impacts by subgroup. This method is applied to data from the evaluation of New York State's Child Assistance Program, a reform intended to increase work and earnings among welfare recipients. The article interprets the substantive findings and then addresses the advantages and disadvantages of the proposed method.
Editor's Notes: Social Experiments in Practice: Introduction, Framing, and Context
In: New Directions for Evaluation: A Publication of the American Evaluation Association, Volume 2016, Issue 152, pp. 9-17
ISSN: 1534-875X
A Design-Based Approach to Improve External Validity in Welfare Policy Evaluations
In: Evaluation Review: A Journal of Applied Social Research, Volume 41, Issue 4, pp. 326-356
ISSN: 1552-3926
Background: Large-scale randomized experiments are important for determining how policy interventions change average outcomes. Researchers have begun developing methods to improve the external validity of these experiments. One new approach is a balanced sampling method for site selection, which does not require random sampling and takes into account the practicalities of site recruitment, including high nonresponse. Method: The goal of balanced sampling is to develop a strategic sample selection plan that yields a sample compositionally similar to a well-defined inference population. To do so, a population frame is created and then divided into strata, which "focuses" recruiters on specific subpopulations. Units within these strata are then ranked, thus identifying "replacements": sites similar to the ideal site that can be recruited when that site refuses to participate in the experiment. Result: In this article, we consider how a balanced-sampling strategic site selection method might be implemented in a welfare policy evaluation. Conclusion: We find that simply developing a population frame can be challenging, with three possible and reasonable options arising in the welfare policy arena. Using relevant study-specific contextual variables, we craft a recruitment plan that accounts for nonresponse.
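The stratify-then-rank step described under Method can be sketched as follows. The frame variables (caseload and local unemployment rate) are illustrative stand-ins, not the article's actual contextual variables, and the stratification rule is an assumption for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population frame of welfare offices with two contextual
# variables (purely illustrative values).
n_sites = 200
frame = np.column_stack([
    rng.lognormal(7, 1, n_sites),      # caseload size
    rng.uniform(0.03, 0.12, n_sites),  # local unemployment rate
])

# Stratify the frame, e.g., by caseload tercile.
terciles = np.quantile(frame[:, 0], [1 / 3, 2 / 3])
strata = np.digitize(frame[:, 0], terciles)  # 0, 1, or 2

# Within each stratum, rank sites by standardized distance to the stratum
# mean: the top-ranked site is most "typical" of its stratum, and the next
# ranks serve as replacements if recruitment of the ideal site fails.
z = (frame - frame.mean(0)) / frame.std(0)
recruit_lists = {}
for s in range(3):
    idx = np.flatnonzero(strata == s)
    dist = np.linalg.norm(z[idx] - z[idx].mean(0), axis=1)
    recruit_lists[s] = idx[np.argsort(dist)]  # best candidate first

picked = [recruit_lists[s][0] for s in range(3)]
```

Recruiting down each ranked list keeps the achieved sample compositionally close to the inference population even under high site nonresponse, which is the point of the balanced-sampling design.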