Methods for evaluating health services
In: Sage research progress series in evaluation 8
19 results
In: Behavioral science, Volume 15, Issue 4, p. 329-336
In: Human factors: the journal of the Human Factors Society, Volume 8, Issue 1, p. 48-53
ISSN: 1547-8181
Four experiments investigating diagnostic problem solving by clinical neurologists were performed. From protocols, or verbal reports, of physicians diagnosing neurological diseases we identified several elementary structures used in clinical decision-making and compared them to precisely defined information-processing primitives. A separate experiment established the reliability of the protocol information. The use of a tree structure as a representational model of the diagnostic process was tested but not confirmed. A further analysis examined the implications of the elementary structures for the diagnostician's search strategy and noted two additional strategy characteristics. Hypotheses for an alternative representation and strategy were offered, and it was suggested that a computer program could serve as a final test of this model.
In: Evaluation Quarterly, Volume 3, Issue 4, p. 537-553
One of the major social programs of the 1960s was the development of community mental health centers. As with most early attempts at evaluation, the results were pessimistic. This article reanalyzes one of the earliest, and best-known, evaluations of a community-based treatment facility. Following the conceptual framework of Campbell and his associates, it was found that the various threats to the validity of the findings indicate a consistent and systematic bias against detecting a positive effect for the new mental health center. In light of recent federal legislation mandating formal evaluations, appropriate procedures are discussed.
In: Evaluation review: a journal of applied social research, Volume 17, Issue 3, p. 292-309
ISSN: 1552-3926
The article addresses the issue of intercoder reliability in meta-analyses. The current practice of reporting a single, mean intercoder agreement score in meta-analytic research leads to systematic bias and overestimates the true reliability. An alternative approach is recommended in which average intercoder agreement scores or other reliability statistics are calculated within clusters of coded variables. These clusters form a hierarchy in which the correctness of coding decisions at a given level of the hierarchy is contingent on decisions made at higher levels. Two separate studies of intercoder agreement in meta-analysis are presented to assess the validity of the model.
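The clustered-agreement idea described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical codings, variable names, and cluster groupings (none of which come from the article): percent agreement is computed per cluster of coded variables rather than pooled into a single mean.

```python
# Illustrative sketch only: hypothetical coders, variables, and clusters,
# not the authors' data or their actual reliability statistics.

def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings keyed by variable, grouped into clusters that
# mirror the hierarchy of coding decisions (higher-level decisions
# condition the lower-level ones).
clusters = {
    "study_descriptors": {
        "publication_year": ([1, 1, 0, 1], [1, 1, 0, 1]),
        "sample_size":      ([0, 1, 1, 1], [0, 1, 0, 1]),
    },
    "effect_size_coding": {
        "outcome_measure":  ([1, 0, 1, 1], [1, 0, 0, 1]),
        "effect_direction": ([1, 1, 1, 0], [0, 1, 1, 0]),
    },
}

# Report agreement within each cluster instead of one pooled mean,
# which the abstract argues overestimates true reliability.
for cluster, variables in clusters.items():
    scores = [percent_agreement(a, b) for a, b in variables.values()]
    print(cluster, round(sum(scores) / len(scores), 2))
```

The point of the per-cluster report is that a single pooled score lets high agreement on easy, high-level variables mask low agreement on the contingent, lower-level ones.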
In: Evaluation review: a journal of applied social research, Volume 11, Issue 2, p. 197-215
ISSN: 1552-3926
The concept of "preevaluative research" is examined in the context of a museum exhibition evaluation. Preevaluation research was recommended by some of the earliest writers in the evaluation field as a way of facilitating a formal, "ultimate," or summative evaluation. It is viewed as distinct from, and complementary to, an evaluability assessment. The exhibit preevaluative study indicates that instrumentation and implementation issues are likely to benefit from such activities, but that design and analysis may still suffer.
In: New directions for program evaluation: a quarterly sourcebook, Volume 1986, Issue 29, p. 39-50
ISSN: 1534-875X
This chapter views courses in evaluation research as integral parts of advanced degree training in public health. Three such courses are discussed.
In: New directions for program evaluation: a quarterly sourcebook, Volume 1984, Issue 24, p. 43-56
ISSN: 1534-875X
The problems confronted in the synthesis of medical research are relevant to systematic reviews in other substantive areas. This chapter shows how some of these methodological problems can be solved.
In: New directions for program evaluation: a quarterly sourcebook, Volume 1984, Issue 24, p. 1-4
ISSN: 1534-875X
In: New directions for program evaluation: a quarterly sourcebook, Volume 1984, Issue 24, p. 5-24
ISSN: 1534-875X
Special problems arise when meta-analysis is applied to quasi-experiments. Careful consideration of threats to each study's validity may help the meta-analyst to avoid these pitfalls.
In: New directions for program evaluation: a quarterly sourcebook, Volume 1978, Issue 4, p. 89-110
ISSN: 1534-875X
Through quantitative and qualitative analyses of program evaluations, the Project on Secondary Analysis, Northwestern University, seeks to advance understanding of program effects and improve evaluation methodology.
In: Education and urban society, Volume 9, Issue 4, p. 471-492
ISSN: 1552-3535
In: Evaluation review: a journal of applied social research, Volume 7, Issue 6, p. 831-840
ISSN: 1552-3926
The differential attrition of persons from comparison groups severely restricts the inferences that can be made from results of evaluative research. This problem is particularly troublesome in the evaluation of medical technologies, such as coronary artery bypass graft surgery, since a substantial percentage of medical or control patients cross over to the surgical group. A procedure using worst-case assumptions is developed that allows researchers to estimate the maximum effect of differential attrition, and therefore enhance the quality of their inferences. The article first illustrates the procedure, then concludes with a discussion of the generality of the estimation procedure to other instances in which differential attrition is a problem, pointing out the limitations of the approach.
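The general logic of worst-case bounding under crossover can be sketched as follows. This is a simplified, hypothetical illustration (made-up numbers and a generic success-rate bound), not the article's actual estimation procedure: the unknown counterfactual outcomes of crossover patients are set to the values least and most favorable to surgery, yielding an interval for the effect.

```python
# Illustrative sketch only: a generic worst-case/best-case bound for a
# success-rate difference when some control patients crossed over to the
# treatment arm. Not the article's procedure; numbers are hypothetical.

def effect_bounds(surgical_successes, surgical_n,
                  medical_successes, medical_n, crossovers):
    """Bounds on the surgical-minus-medical success-rate difference.

    `crossovers` control patients actually received surgery, so their
    outcomes under medical management alone are unobserved.
    """
    surgical_rate = surgical_successes / surgical_n
    # Worst case for surgery: every crossover would have succeeded
    # under medical management alone.
    lower = surgical_rate - (medical_successes + crossovers) / (medical_n + crossovers)
    # Best case for surgery: no crossover would have succeeded
    # without surgery.
    upper = surgical_rate - medical_successes / (medical_n + crossovers)
    return lower, upper

# Hypothetical cohort: 100 surgical patients (80 successes),
# 100 medical patients (60 successes), 20 crossovers to surgery.
low, high = effect_bounds(80, 100, 60, 100, 20)
print(f"surgical benefit between {low:.3f} and {high:.3f}")
```

If the lower bound remains above zero, a surgical benefit survives even the assumptions most hostile to it, which is the kind of strengthened inference the abstract describes.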