Evolutionary Evaluation: Implications for evaluators, researchers, practitioners, funders and the evidence-based program mandate
In: Evaluation and Program Planning, Volume 45, pp. 127-139
In: Evaluation and program planning: an international journal, Volume 45
ISSN: 0149-7189
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2008, Issue 118, pp. 61-72
ISSN: 1534-875X
Abstract: The authors synthesize relevant literature and findings of evaluations of four large-scale, federally funded scientific research programs in the United States to identify desired outcomes of these types of programs, major evaluation challenges, and methodological principles and approaches. Evaluators face numerous contextual, political, and methodological challenges in evaluating big science. The authors propose that these may be addressed through participatory planning, such as concept mapping, triangulation of evidence, use of promising methodologies, and a systems approach. © Wiley Periodicals, Inc.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2009, Issue 123, pp. 3-11
ISSN: 1534-875X
Abstract: Evaluation policy is of considerable importance, especially in relation to the limited amount of attention it receives as a general topic in the mainstream evaluation literature. Evaluation policies matter for several reasons, among them that they can profoundly affect evaluation practice, they underlie many recent and current controversies about evaluation, and they may be a lever for change that can have far-reaching effects for practice. This chapter gives an overview of several issues regarding evaluation policy, including defining it, identifying possible facets of evaluation policy, describing how it is established, and outlining the potentially greater role for evaluators in shaping the evaluation policies that influence evaluation practice. © Wiley Periodicals, Inc.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2009, Issue 123, pp. 1-2
ISSN: 1534-875X
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2009, Issue 123, pp. 103-109
ISSN: 1534-875X
Abstract: Three issues for evaluation policy and practice are described: evaluation policy dimensions, evaluation policy instruments, and the political and economic environment for evaluation policy. Selected future directions are outlined, including the need to describe the evaluation policy landscape, further articulate an evaluation policy taxonomy, and develop and implement tactics for influencing evaluation policy, with particular attention to the role of professional associations. © Wiley Periodicals, Inc.
In: Evaluation review: a journal of applied social research, Volume 19, Issue 1, pp. 39-63
ISSN: 0193-841X, 0164-0259
In: Evaluation review: a journal of applied social research, Volume 18, Issue 2, pp. 141-152
ISSN: 0193-841X, 0164-0259
In: Small group research: an international journal of theory, investigation, and application, Volume 42, Issue 2, pp. 127-176
ISSN: 1552-8278
Through three studies of interacting small groups, we aimed to better understand the meaning and consequences of process conflict. Study 1 was an exploratory analysis of qualitative data that helped us to identify the unique dimensions of process conflict to more clearly distinguish it from task and relationship conflict. Study 2 used a broader sampling of participants to (a) demonstrate why process conflict has been difficult to discriminate from task conflict in many conflict scales, and (b) develop a two-factor Process Conflict Scale that effectively distinguishes process from task conflict. Study 3 used this new scale to examine the relationship between process conflict and group viability (group performance, satisfaction, and effective group process). The results showed that process conflict negatively affects group performance, member satisfaction, and group coordination.
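The scale-development logic in this abstract — score two correlated subscales separately, then relate the process-conflict subscale to a group outcome — can be sketched with simulated data. Everything below is an illustrative assumption (invented item ratings, an invented performance measure, arbitrary sample sizes); it is not the authors' Process Conflict Scale or their data, only a minimal demonstration of subscale scoring and a correlation check.

```python
# Hypothetical sketch: score a two-factor conflict scale on simulated
# group-level data and check how the process-conflict subscale relates
# to a performance outcome. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_groups = 40

# Simulated Likert-type item ratings (1-5), four items per subscale.
process_items = rng.uniform(1, 5, size=(n_groups, 4))  # e.g., logistics, delegation
task_items = rng.uniform(1, 5, size=(n_groups, 4))     # e.g., disagreement over ideas

# Subscale score = mean of that subscale's items.
process_conflict = process_items.mean(axis=1)
task_conflict = task_items.mean(axis=1)

# A synthetic performance score constructed to decline with process
# conflict, mirroring the negative relationship the abstract reports.
performance = 5.0 - 0.6 * process_conflict + rng.normal(0, 0.3, n_groups)

# Pearson correlation between the subscale and the outcome.
r = np.corrcoef(process_conflict, performance)[0, 1]
print(f"process conflict vs. performance: r = {r:.2f}")
```

Because the outcome is constructed with a negative slope, the correlation comes out negative; with real scale data, this step is where the viability findings would be tested.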
In: http://www.biomedcentral.com/1472-6882/4/2
Abstract: Background: Despite the substantive literature from survey research that has accumulated on complementary and alternative medicine (CAM) in the United States and elsewhere, very little research has been done to assess conceptual domains that CAM and conventional providers would emphasize in CAM survey studies. The objective of this study is to describe and interpret the results of concept mapping with conventional and CAM practitioners from a variety of backgrounds on the topic of CAM. Methods: Concept mapping, including free sorts, ratings, and multidimensional scaling, was used to organize conceptual domains relevant to CAM into a visual "cluster map." The panel consisted of CAM providers, conventional providers, and university faculty, and was convened to help formulate conceptual domains to guide the development of a CAM survey for use with United States military veterans. Results: Eight conceptual clusters were identified: 1) Self-assessment, Self-care, and Quality of Life; 2) Health Status, Health Behaviors; 3) Self-assessment of Health; 4) Practical/Economic/Environmental Concerns; 5) Needs Assessment; 6) CAM vs. Conventional Medicine; 7) Knowledge of CAM; and 8) Experience with CAM. The clusters suggest panelists saw interactions between CAM and conventional medicine as a critical component of the current medical landscape. Conclusions: Concept mapping provided insight into how CAM and conventional providers view the domain of health care, and was shown to be a useful tool in the formulation of CAM-related conceptual domains.
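The computational core of the concept-mapping procedure this abstract describes — aggregate free-sort data into a similarity matrix, project it with multidimensional scaling, and partition the projected points into clusters — can be sketched as follows. The sort data, number of statements, cluster count, and library choices are all illustrative assumptions, not the study's actual analysis.

```python
# Hypothetical sketch of concept mapping's analysis pipeline:
# free sorts -> similarity matrix -> MDS layout -> cluster map.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n_statements, n_sorters = 12, 8

# Each sorter assigns every statement to one of a few piles (free sort).
sorts = rng.integers(0, 4, size=(n_sorters, n_statements))

# Similarity = fraction of sorters who placed two statements in the same pile.
same_pile = np.array(
    [(sorts[:, i] == sorts[:, j]).mean()
     for i in range(n_statements)
     for j in range(n_statements)]
).reshape(n_statements, n_statements)
dissimilarity = 1.0 - same_pile
np.fill_diagonal(dissimilarity, 0.0)

# Two-dimensional MDS layout of the statements (the "point map").
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering of the MDS coordinates yields the "cluster map".
labels = AgglomerativeClustering(n_clusters=3).fit_predict(coords)
print(coords.shape, sorted(set(labels)))
```

In practice the number of clusters is chosen with the panel, and cluster-level ratings (e.g., importance) are overlaid on the map; this sketch covers only the sort-to-map steps.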
In: Evaluation review: a journal of applied social research, Volume 15, Issue 4, pp. 395-419
ISSN: 0193-841X, 0164-0259
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2021, Issue 169, pp. 31-45
ISSN: 1534-875X
Abstract: In this paper, we present a comprehensive integrative approach to evaluation planning based on Relational Systems Evaluation (RSE). We describe the major implementation approaches and the tools and resources that support evaluation planning in RSE embodied in the Systems Evaluation Protocol (SEP), a step-by-step guide to the tasks that every evaluation should address. The SEP provides both a conceptual framework for thinking about evaluation and a set of specific methods and tools that expand awareness of the influences of multiple parts of complex systems. The SEP is implemented through a partnership approach, joining evaluators and program professionals in an iterative, phased approach to evaluation planning, which also often emphasizes evaluation capacity building (ECB) as an intentional goal. This paper provides a practice-oriented overview of the specific steps comprising the three stages outlined in the SEP: preparation, model development, and evaluation plan development.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2021, Issue 169, pp. 7-9
ISSN: 1534-875X
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2021, Issue 169, pp. 11-18
ISSN: 1534-875X
Abstract: In this paper, we introduce Relational Systems Evaluation (RSE), the focal topic of this volume. RSE is a framework for program planning and evaluation that is theoretically grounded, empirically tested, and focused on building evaluation capacity. Theoretically, RSE is rooted in an evolutionary approach to program development and evaluation informed by systems thinking. The application of RSE involves a collaborative partnership approach pairing evaluators or researchers with program practitioners, the use of collaborative modeling and planning tools, and an emphasis on nurturing an Evaluative Thinking mindset. The goal of RSE is to produce more thoughtful and useful evaluations and, ultimately, better programs.