In: Administrative science quarterly: ASQ ; dedicated to advancing the understanding of administration through empirical investigation and theoretical analysis, Volume 23, Issue 2, p. 318-330
In: Policy sciences: integrating knowledge and practice to advance human dignity ; the journal of the Society of Policy Scientists, Volume 6, Issue 3, p. 249-265
The application of experimental & quasi-experimental methods to the study of environmental problems is recommended. Issues which must be taken into account in quasi-experimental studies are reviewed. Research possibilities in collective goods situations include investigation of demand for underpriced goods & of programs for changing usage of collective goods. Problems of causal analysis in time-series data are reviewed, including determination of whether trends before & after a treatment fit the same or different patterns, & avoidance of spurious causal inferences. A new causal interpretative procedure is outlined, the event-causing analysis, typical of historical analysis, which infers events from prior variables. Site selection procedures are problematic for both types of quasi-experimental research. Possible new research techniques are discussed. 3 Figures. Modified HA.
In: Administrative science quarterly: ASQ ; dedicated to advancing the understanding of administration through empirical investigation and theoretical analysis, Volume 20, Issue 1, p. 71-86
In: Administrative science quarterly: ASQ ; dedicated to advancing the understanding of administration through empirical investigation and theoretical analysis, Volume 17, Issue 4, p. 529-543
This article suggests criteria for research designs suitable for use with large databases. The advantages & disadvantages of several types of quasi-experimental designs are compared. Among the more interesting designs are those that try to minimize the possibility of selection differences between treatment & control groups by considering assignment to treatment in creative ways. When the probability of receiving a treatment is largely outside an individual's control, potential selection bias should be reduced. These designs often use data on populations rather than on the particular recipients of an intervention. Such analysis of impact on a population, rather than of effect on recipients, makes it comparatively difficult to find that an intervention makes a difference. Examples are taken from our research with the Manitoba Health Services Commission data.
In: Administrative science quarterly: ASQ ; dedicated to advancing the understanding of administration through empirical investigation and theoretical analysis, Volume 25, Issue 1, p. 57-71
An outline is presented of various problem areas which may arise between evaluators & administrators during the policy research process. Areas of interest include: (1) evaluation's focus on systematic assessment of treatment, which will interact with administrative data collection, (2) the choice of a suitable evaluation instrument, (3) administrator/evaluator compromise regarding the time of reassessment, (4) standardization of treatment programs & selection criteria, (5) development of dialogue & trust between evaluator & administrator, (6) the need for the evaluator to develop substantive knowledge of the area to be evaluated, (7) the need to avoid evaluations designed to answer yes/no questions (these are the most threatening & the least useful to the program administrator), & (8) the problem of potentially negative results. For the program evaluation to be successful, the evaluator must provide (A) comparison group data & (B) knowledge of the rationale behind the evaluation methodology. M. Cain.