A Surplus of Men, A Deficit of Peace: Security and Sex Ratios in Asia's Largest States
In: International security, Vol. 26, No. 4, pp. 5-38
ISSN: 1531-4804
In: International security, Vol. 26, No. 4, pp. 93-111
ISSN: 1531-4804
In: International security, Vol. 26, No. 4, pp. 70-92
ISSN: 1531-4804
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 249-258
ISSN: 1461-7153
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 227-248
ISSN: 1461-7153
Additionality is a recurring issue when evaluating the impact of public support for research and development (R&D). To what degree does public support spur additional R&D efforts and output? This article discusses the additionality concept and its measurement. Verbal reports from recipients of public support are a widely used method for measuring the additionality of ongoing R&D programmes. However, it is often argued that these reports cannot be trusted, since recipients may have an incentive to answer strategically in order to maintain funding. Verbal reports of project additionality from Norwegian evaluations over the last two decades, comprising 2624 observations, show that higher additionality is reported in projects further from the market, involving higher risks and uncertainty, than in projects closer to market launch. This is in line with a priori expectations. It is argued that the problem of strategic answering may be over-rated, and that verbal reports can provide important evidence. This is exemplified by results from verbal reports of additionality obtained by interviewing the customers of 21 different Norwegian research institutes. They show that public support is more important for the realization of projects within small- and medium-sized enterprises than within larger firms.
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 155-156
ISSN: 1461-7153
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 259-266
ISSN: 1461-7153
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 272-274
ISSN: 1461-7153
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 267-271
ISSN: 1461-7153
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 205-226
ISSN: 1461-7153
In the UK a great deal of attention is currently focused on the potential of the 'theories of change' approach to evaluating complex public policy interventions. However, there is still relatively little empirical material describing its application. This article discusses the use of 'theories of change' in the national evaluation of English Health Action Zones (HAZs). It locates 'theories of change' within the wider context of evaluation approaches and assesses its strengths and weaknesses as an evaluation framework. The article then focuses on a key aspect of complex public policy interventions, cross-sector collaboration. Drawing on data about cross-sector partnerships and community involvement from the English HAZ evaluation, the article explores the contribution of 'theories of change' towards examining the building of collaborative capacity in HAZs. The article also describes the 'co-research' approach being employed within the national HAZ evaluation. It discusses how this approach can complement the use of 'theories of change', contribute to managing change within organizations and communities, and facilitate more effective use of evaluation within a local health context.
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 157-181
ISSN: 1461-7153
Evaluation research is tortured by time constraints. The policy cycle revolves quicker than the research cycle, with the result that `real time' evaluations often have little influence on policy making. As a result, the quest for evidence-based policy has turned increasingly to systematic reviews of the results of previous inquiries in the relevant policy domain. However, this shifting of the temporal frame for evaluation is in itself no guarantee of success. Evidence, whether new or old, never speaks for itself. Accordingly, there is debate about the best strategy of marshalling bygone research results into the policy process. This article joins the imbroglio by examining the logic of the two main strategies of systematic review: `meta-analysis' and `narrative review'. Whilst they are often presented as diametrically opposed perspectives, this article argues that they share common limitations in their understanding of how to provide a template for impending policy decisions. This review provides the background for Part II of the article (to be published in the next issue, Evaluation 8[3]), which considers the merits of a new model for evidence-based policy, namely `realist synthesis'.
In: Evaluation: the international journal of theory, research and practice, Vol. 8, No. 2, pp. 182-204
ISSN: 1461-7153
What we expect from public programs determines how the programs will be judged. This article presents a systematic values inquiry used to examine how much importance citizens and stakeholders attached to specific indicators in determining the success of a public preschool program. Four groups were surveyed: teachers, administrators, parents and the public. The four groups differed significantly in the importance they attached to the 29 possible indicators. They agreed on the importance of high-quality services and on the lack of importance of economic benefits for families and certain educational outcomes for the preschoolers. Using confirmatory factor analysis, the opinions of the groups were shown to have similar underlying structures; however, teachers made greater distinctions between the groups of potential outcomes, and the public distinguished between outcomes the least. Systematic values inquiry can be an important tool in designing evaluations that will produce information capable of influencing the judgments of citizens and stakeholders about a program's value.
In: Relations internationales: revue trimestrielle d'histoire, Vol. 110, No. 2, pp. 163-180
ISSN: 2105-2654
In: Relations internationales: revue trimestrielle d'histoire, Vol. 110, No. 2, pp. 181-196
ISSN: 2105-2654