Search results
45 results
Occupational labour shortages
In: Matching Economic Migration with Labour Market Needs, pp. 335-348
Setting up social experiments: the good, the bad, and the ugly
It is widely agreed that randomized controlled trials - social experiments - are the gold standard for evaluating social programs. There are, however, many important issues that cannot be tested using social experiments, and things often go wrong when experiments are conducted. This paper explores these issues and offers suggestions on ways to deal with commonly encountered problems. Social experiments are preferred because random assignment assures that any differences between the treatment and control groups are due to the intervention and not to some other factor; in addition, the results of social experiments are more easily explained to, and accepted by, policy officials. Experimental evaluations often lack external validity and cannot control for entry effects, scale and general equilibrium effects, or aspects of the intervention that were not randomly assigned. Experiments can also produce biased impact estimates if the control group changes its behavior or if changing the number of people selected changes the impact. Other problems with conducting social experiments include increased time and cost, and legal and ethical issues related to excluding people from the treatment. Things that sometimes go wrong include programs cheating on random assignment and participants and/or staff not understanding the intervention rules. The random assignment evaluation of the Job Training Partnership Act in the United States is used as a case study to illustrate these issues.
BASE
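To make the identification logic described in the abstract above concrete (under random assignment, the difference in mean outcomes between the treatment and control groups estimates the impact of the intervention), here is a minimal sketch using simulated data; all numbers are hypothetical and are not taken from the JTPA evaluation or any other study:

    # Minimal sketch: difference-in-means impact estimate under random assignment.
    # Simulated data only; the earnings figures and the impact are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    treated = rng.integers(0, 2, size=n)               # random assignment: 0 = control, 1 = treatment
    true_impact = 500.0                                 # assumed earnings impact, in dollars
    earnings = 15000 + 4000 * rng.standard_normal(n) + true_impact * treated

    # Because assignment is random, this simple contrast is an unbiased estimate of the impact.
    impact_hat = earnings[treated == 1].mean() - earnings[treated == 0].mean()
    se = np.sqrt(earnings[treated == 1].var(ddof=1) / (treated == 1).sum()
                 + earnings[treated == 0].var(ddof=1) / (treated == 0).sum())
    print(f"estimated impact: {impact_hat:.0f} dollars (standard error {se:.0f})")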
Thirty Years of Changing Federal, State, and Local Relationships in Employment and Training Programs
In: Publius: the journal of federalism, Vol. 23, No. 3, pp. 75-94
ISSN: 0048-5950
Vouchers in U.S. vocational training programs: an overview of what we have learned
An important decision that must be made in operating training programs targeted toward disadvantaged workers is whether the program dictates the specific training that participants will take or issues vouchers that permit participants to select their own training. Over the past 40 years, the United States has operated a number of targeted training programs, some of which have used vouchers and voucher-like instruments to let participants choose their training. This paper reviews the evidence from the U.S. experience. Although vouchers permit maximum consumer choice and reduce the need for government oversight, they may not lead to optimal results because of imperfect information and a divergence between government and participant goals. Although vouchers are generally popular with participants, evaluations of U.S. training programs for poor workers and dislocated (displaced) workers show mixed results: many studies indicate that the impact of programs with vouchers is often lower than that of programs without vouchers for poor participants, and the evidence is mixed for dislocated workers. When vouchers are used, appropriate counseling and assessment, as well as the provision of provider performance information, can improve the results.
BASE
The ethics of federal social program evaluation: A response to Jan Blustein
In: Journal of policy analysis and management: the journal of the Association for Public Policy Analysis and Management, Vol. 24, No. 4, pp. 846-847
ISSN: 0276-8739
Exploring the Relationship between Performance Management and Program Impact: A Case Study of the Job Training Partnership Act
In: Journal of policy analysis and management: the journal of the Association for Public Policy Analysis and Management, Vol. 19, No. 1, pp. 118-141
ISSN: 0276-8739
Thirty Years of Changing Federal, State, and Local Relationships in Employment and Training Programs
In: Publius: the journal of federalism, Vol. 23, No. 3, pp. 75-94
ISSN: 0048-5950
There have been three major training programs in the United States in the past thirty years: the Manpower Development and Training Act (MDTA) from 1962 to 1973, the Comprehensive Employment and Training Act (CETA) from 1973 to 1982, and the Job Training Partnership Act (JTPA) from 1982 to the present. MDTA was a categorical program, with service providers funded directly by the federal government. CETA was a hybrid block grant program that gave local units of government substantial autonomy in administering the basic training component, but CETA also included categorical programs for specific target groups and for public service employment. Over time, CETA was increasingly regulated. JTPA is regulated more by the states and the private sector, and in 1992 amendments targeted the program more sharply and restricted the activities that could be undertaken. Federalism in employment and training programs has followed a course similar to other areas, with cooperative federalism ending in 1978 and being replaced by coercive federalism. In recent years, states have started a number of innovative programs.
The Impact of CETA Programs on Earnings: A Review of the Literature
In: The journal of human resources, Vol. 22, No. 2, p. 157
ISSN: 1548-8004
Evaluating employment and training programs
In: Evaluation and Program Planning, Vol. 9, No. 1, pp. 63-72
Evaluating Employment and Training Programs
In: Evaluation and program planning: an international journal, Vol. 9, No. 1, pp. 63-72
ISSN: 0149-7189
The Value of Efficiency Measures: Lessons from Workforce Development Programs
In: Public performance & management review, Vol. 38, No. 3, pp. 487-513
ISSN: 1557-9271
The Value of Efficiency Measures: Lessons from Workforce Development Programs
In: Public performance & management review, Vol. 38, No. 3
ISSN: 1530-9576
Do Estimated Impacts on Earnings Depend on the Source of the Data Used to Measure Them? Evidence From Previous Social Experiments
In: Evaluation review: a journal of applied social research, Vol. 39, No. 2, pp. 179-228
ISSN: 1552-3926
Background: Impact evaluations draw their data from two sources: surveys conducted for the evaluation or administrative data collected for other purposes. Both types of data have been used in impact evaluations of social programs. Objective: This study analyzes the causes of differences in impact estimates when survey data and administrative data are used to evaluate earnings impacts in social experiments, and discusses the differences observed in eight evaluations of social experiments that used both survey and administrative data. Results: There are important trade-offs between the two data sources. Administrative data are less expensive but may not cover all income or the desired time period, while surveys can be designed to avoid these problems. We note that errors can be due to nonresponse or to reporting, and that errors can be balanced between the treatment and control groups or unbalanced. We find that earnings are usually higher in survey data than in administrative data because of differences in coverage and likely overreporting of overtime hours and pay in survey data. Evaluations using survey data usually find greater impacts, sometimes much greater. Conclusions: The much lower cost of administrative data makes their use attractive, but such data are still subject to underreporting and other problems. We recommend further evaluations using both types of data, with investigative audits, to better understand the sources and magnitudes of errors in both survey and administrative data so that appropriate corrections can be made.
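As a rough illustration of how the survey-versus-administrative-data comparison described above can play out, the sketch below simulates one experimental sample whose earnings are measured two ways; the overreporting term, and its larger size in the treatment group, are assumptions made for the example rather than findings from the eight evaluations:

    # Illustrative simulation only: one experimental sample, earnings measured two ways.
    # The overreporting pattern is an assumption, not a result from the evaluations discussed above.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    treated = rng.integers(0, 2, size=n)
    admin_earnings = 18000 + 5000 * rng.standard_normal(n) + 400 * treated   # taken as the benchmark records
    overreport = rng.normal(800, 300, size=n) * (1 + 0.3 * treated)          # assumed survey overreporting, larger for the treatment group
    survey_earnings = admin_earnings + overreport

    def impact(y, t):
        # Difference in mean earnings between treatment and control groups.
        return y[t == 1].mean() - y[t == 0].mean()

    print("impact estimated from administrative data:", round(impact(admin_earnings, treated)))
    print("impact estimated from survey data:        ", round(impact(survey_earnings, treated)))

When the reporting error is unbalanced between the two groups, as simulated here, the survey-based impact estimate drifts away from the administrative-data estimate even though the underlying sample is identical.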
Flaws in Evaluations of Social Programs: Illustrations From Randomized Controlled Trials
In: Evaluation review: a journal of applied social research, Vol. 38, No. 5, pp. 359-387
ISSN: 1552-3926
Background: This article describes eight flaws that occur in impact evaluations. Method: The eight flaws are grouped into four categories according to how they affect impact estimates: statistical imprecision; biases; failure of impact estimates to measure the effects of the planned treatment; and flaws that result from weakening an evaluation design. Each flaw is illustrated with examples from social experiments. Although these illustrations are from randomized controlled trials (RCTs), the flaws can occur in any type of evaluation; we use RCTs to illustrate them because people sometimes assume that RCTs might be immune to such problems. A summary table lists the flaws, indicates the circumstances under which they occur, notes their potential seriousness, and suggests approaches for minimizing them. Results: Some of the flaws result in minor hurdles, while others cause evaluations to fail - that is, the evaluation is unable to provide a valid test of the hypothesis of interest. The flaws that appear to occur most frequently are response bias resulting from attrition, failure to adequately implement the treatment as designed, and too small a sample to detect impacts. The third of these can result from insufficient marketing, too small an initial target group, disinterest on the part of the target group in participating (if the treatment is voluntary), or attrition. Conclusion: To a considerable degree, the flaws we discuss can be minimized. For instance, implementation failures and too small a sample can usually be avoided with sufficient planning, and response bias can often be mitigated, for example through increased follow-up efforts in conducting surveys.
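The "too small a sample to detect impacts" flaw noted above can be checked before an evaluation begins with a standard power calculation for a two-arm experiment; the sketch below uses the usual normal-approximation formula, and the standard deviation and minimum detectable effect are assumed values chosen only for illustration:

    # Back-of-the-envelope sample-size calculation for a two-arm randomized experiment.
    # sigma and mde are assumptions for illustration; replace them with values for the outcome at hand.
    sigma = 5000.0      # assumed standard deviation of the outcome (e.g., annual earnings, in dollars)
    mde = 500.0         # smallest impact the evaluation should be able to detect, in dollars

    z = 1.96 + 0.84     # z for a two-sided 5% test plus z for 80% power
    n_per_arm = 2 * (z * sigma / mde) ** 2
    print(f"required sample size per arm: {n_per_arm:.0f}")

With these assumed values the calculation calls for roughly 1,600 participants per arm; an evaluation planned with far fewer would be unlikely to detect an impact of this size.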