Citation analysis in research evaluation
In: Information science and knowledge management 9
In: Information science and knowledge management 9
Policy highlights:
• If evaluation is defined as "a systematic determination of a subject's merit, worth and significance, using criteria governed by a set of standards", there is no evaluation without an evaluative framework specifying these criteria and standards.
• On the other hand, evaluative informetrics itself, defined as the study of evaluative aspects of science and scholarship using citation analysis, altmetrics and other indicators, does not evaluate.
• However, informetric indicators are often used in research assessment processes. To obtain a better understanding of their role, the links between evaluative informetrics and 'values' are investigated, and a series of practical guidelines is proposed.
• Informetricians should maintain a neutral position in their evaluative informetric studies toward the policy issues addressed and the criteria specified in an evaluative framework.
• As professional experts, informetricians' competence lies primarily in the development and application of analytical models within the context of a given evaluative framework.
• Informetric researchers could propose that evaluators and policy makers incorporate fundamental scientific values, such as openness and a critical attitude, into assessment processes.
• Informetricians could also promote and participate in a broad discussion within the academic community and the research policy domain about the objectives and criteria of research assessment processes and the role of informetric tools therein.
Evaluative informetrics is defined as the study of evaluative aspects of science and scholarship using informetric data and methodologies, such as citation analysis and altmetrics. Following the main lines of an article by the Dutch philosopher O.D. Duintjer, nine interfaces are distinguished between quantitative science studies, especially evaluative informetrics, and the domain of values, including scientific, socio-historical, political, ethical and personal norms and objectives. Special attention is given to the "principle of value neutrality" at the meta-level of methodological rules guiding scientific inquiry, and to the crucial, independent role of evaluative frameworks in research evaluation. The implications of the various relationships between science and values for research practices in evaluative informetrics, and for its application in research assessment, are considered.
BASE
Policy highlights:
• This paper criticizes a "quick-and-dirty" desktop model for the use of metrics in the assessment of academic research performance, and proposes a series of alternatives.
• It considers frequently used indicators: publication and citation counts, university rankings, journal impact factors, and social media-based metrics.
• It is argued that research output and impact are multi-dimensional concepts; when used to assess individuals and groups, these indicators suffer from severe limitations.
• Metrics for individual researchers suggest a "false precision"; university rankings are semi-objective and semi-multidimensional; informetric evidence for the validity of journal impact measures is thin; and social media-based indicators should at best be used as complementary measures.
• The paper proposes alternatives to the desktop application model: combine metrics and expert knowledge; assess research groups rather than individuals; use indicators to define minimum standards; and use funding formulas that reward promising, emerging research groups.
• It proposes a two-level model in which institutions develop their own assessment and funding policies, combining metrics with expert and background knowledge, while at the national level a meta-institutional agency marginally tests the institutions' internal assessment processes.
• According to this model, metrics are used inappropriately when a meta-institutional agency concerns itself directly with the assessment of individuals or groups within an institution.
• The proposed model is not politically neutral. A normative assumption is the autonomy of academic institutions: the meta-institutional entity acknowledges that quality control is the primary responsibility of the institutions themselves.
• Rather than having one meta-national agency define what research quality is and how it should be measured, the proposed model enables each institution to define its own quality criteria and internal policy objectives, and to make these public.
• This freedom of institutions is accompanied by a series of obligations. As a necessary condition, institutions should conceptualize and implement their internal quality control and funding procedures.
• Although a meta-institutional agency may help to improve an institution's internal processes, a repeatedly negative outcome of a marginal test may have negative consequences for the institution's research funding.
This paper discusses a subject as complex as the assessment of scientific-scholarly research for evaluative purposes. It focuses on the use of informetric or bibliometric indicators in academic research assessment and proposes a series of analytical distinctions. Moreover, it draws conclusions regarding the validity and usefulness of indicators frequently used in the assessment of individual scholars, scholarly institutions and journals. The paper criticizes a so-called desktop application model built on simplistic, poorly founded assumptions about the potential of indicators and the essence of research evaluation, and proposes a more reflexive, theoretically founded, two-level model for the use of metrics in academic research assessment.
BASE
In: Science and public policy: journal of the Science Policy Foundation, Volume 34, Issue 8, pp. 575-583
ISSN: 1471-5430
In: Science & public policy: SPP ; journal of the Science Policy Foundation, Band 34, Heft 8, S. 575-583
ISSN: 0302-3427, 0036-8245
In: Research Policy, Volume 40, Issue 10, pp. 1380-1392
In: Proceedings of ISSI 2013 Vienna. Volume 2, pp. 1493-1505
The purpose of this paper is to describe the evaluation of the effectiveness of the bibliometric technique Bradfordizing in an information retrieval (IR) scenario. Bradfordizing is used to re-rank topical document sets from conventional abstracting & indexing (A&I) databases into core and more peripheral document zones. Bradfordized lists of journal articles and monographs are tested in a controlled scenario consisting of different A&I databases from the social and political sciences, economics, psychology and medical science, 164 standardized IR topics, and intellectual assessments of the listed documents. Does Bradfordizing improve the ratio of relevant documents in the first third (core) compared with the second and last thirds (zone 2 and zone 3, respectively)? The IR tests show that relevance distributions after re-ranking improve at a significant level when documents in the core are compared with documents in the succeeding zones. After Bradfordizing of document pools, the core has significantly better average precision than zone 2, zone 3 and the baseline. This paper should be seen as an argument in favour of alternative, non-textual (bibliometric) re-ranking methods that can be applied straightforwardly in text-based retrieval systems, and in particular in A&I databases. (author's abstract)
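The Bradfordizing step itself is simple enough to sketch. The snippet below is a minimal illustration under assumed inputs, not the implementation evaluated in the paper: documents are plain dicts with a hypothetical 'journal' field, journals are ranked by the number of articles they contribute to the retrieved set, and the re-ranked list is cut into thirds (core, zone 2, zone 3).

```python
from collections import Counter

def bradfordize(documents):
    """Re-rank a topical document set by journal productivity (Bradfordizing).

    Each document is a dict with at least a 'journal' key. Journals are
    ranked by how many articles they contribute to the result set, and
    documents from the most productive journals move to the top.
    """
    # Count how many documents each journal contributes to the result set.
    journal_counts = Counter(doc["journal"] for doc in documents)
    # Stable sort: most productive journals first, original order preserved otherwise.
    return sorted(documents, key=lambda doc: journal_counts[doc["journal"]], reverse=True)

def zones(ranked_docs, n_zones=3):
    """Split a Bradfordized list into roughly equal zones: core, zone 2, zone 3."""
    size = -(-len(ranked_docs) // n_zones)  # ceiling division
    return [ranked_docs[i:i + size] for i in range(0, len(ranked_docs), size)]

if __name__ == "__main__":
    docs = [
        {"title": "A", "journal": "J1"}, {"title": "B", "journal": "J1"},
        {"title": "C", "journal": "J1"}, {"title": "D", "journal": "J2"},
        {"title": "E", "journal": "J2"}, {"title": "F", "journal": "J3"},
    ]
    core, zone2, zone3 = zones(bradfordize(docs))
    print([d["title"] for d in core])  # articles from the most productive journal
```

The design choice worth noting is that the re-ranking uses no textual features at all, which is what lets it sit on top of any text-based retrieval system as a post-processing step.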
In: https://dspace.library.uu.nl/handle/1874/394065
The emergence of Open Access (OA) publishing has created new economic niches and debates in academic publishing. OA journals offer numerous publication outlets with differing editorial philosophies and business models. Scholars and academic stakeholders must decide which journals offer acceptable value for the direct or indirect costs of academic publishing. Our research uses the Directory of Open Access Journals (N=12,100) to identify various characteristics of OA academic journals that influence journal prices. The Journal Impact Factor (JIF), language, publisher mission, DOAJ Seal, World Bank Economic and Geographic regions of publishers, peer review duration and journal discipline are identified as factors with significant influence over journal price levels. Journals with status endowments (JIF, DOAJ Seal), published in wealthier regions, in medical or science-based disciplines, and with English-based articles are relatively more expensive. Scholarly and political economic inequalities manifest in the prices and benefits offered by different journals and publishers throughout the world.
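The abstract does not specify the estimation method, so the following is only a hedged sketch of one way such price factors could be modeled, here as an ordinary least squares regression on DOAJ-derived journal metadata; the file name and all column names (apc_usd, jif, has_doaj_seal, and so on) are hypothetical stand-ins, not fields from the study's actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical export of DOAJ journal metadata with one row per journal.
doaj = pd.read_csv("doaj_journals.csv")

# Regress journal price on the characteristics named in the abstract:
# impact factor, language, publisher mission, DOAJ Seal, World Bank
# region, peer review duration, and discipline.
model = smf.ols(
    "apc_usd ~ jif + C(language) + C(publisher_mission) + has_doaj_seal"
    " + C(world_bank_region) + review_weeks + C(discipline)",
    data=doaj,
).fit()

# Coefficient signs and p-values indicate which factors are associated
# with higher or lower price levels.
print(model.summary())
```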
BASE