Drawing on their experience in producing a Cochrane review of neuraminidase inhibitors for influenza, the authors discuss how to improve the reliability of systematic reviews: http://bmj.com/cgi/content/full/bmj.c7258?ijkey=VzuzLond9mt0ghC&keytype=ref
Providing information to implement a new core healthcare requirement: patient involvement. Including real case scenarios to illustrate the principles of effective PPI. Following the unique Toolkit series format of flowcharts and layouts that guide the reader through each section.
Abstract

Background: Currently there is no framework for those involved in the identification, evaluation and prioritisation of new diagnostic technologies. We therefore aimed to develop prioritisation criteria for the assessment of new diagnostic technologies by gaining international consensus not only on which criteria should be used, but also on their relative importance.

Methods: A two-round Delphi process was used to generate consensus amongst an international panel of twenty-six experts on priority criteria for diagnostic health technology assessment. Participants represented a range of healthcare and related professions, including government, industry, health services and academia.

Results: Based on the responses to the first questionnaire, 18 criteria were placed into three categories: high, intermediate and moderate priority. For 16 of the 18 criteria, agreement with the categorisation into the high, intermediate and moderate categories was high at ≥ 70% (10 had agreement ≥ 80%). A further questionnaire and panel discussion reduced the criteria to 16 and the categories to two; seven criteria were classified as high priority and nine as intermediate.

Conclusions: This study proposes an objective structure of prioritisation criteria to use when assessing new diagnostic technologies, based on an expert consensus process. The value of these criteria is that no single component should be used as the decisive driver for prioritisation of new diagnostic technologies for adoption in healthcare settings. Future studies should be directed at establishing the value of these prioritisation criteria across a range of healthcare settings.
BACKGROUND: Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

METHODS: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and the delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

RESULTS: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25-96%), secondary outcomes (mean 55%, range 31-72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9-8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67-100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0-100%). Where letters were published, there were delays (median 99 days, range 0-257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0-86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

CONCLUSIONS: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and proactively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals' willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT's mechanisms for enforcement, and novel strategies for research on methods and reporting.
In: Bulletin of the World Health Organization: the international journal of public health = Bulletin de l'Organisation Mondiale de la Santé, Vol. 97, Issue 6, pp. 394-404C
An alliance of researchers lays out a framework for taking decisions based on thinking critically about claims and comparisons. Everyone makes claims about what works. Politicians claim that stop and search will reduce violent crime; friends claim that vaccines cause autism; advertisers claim that natural food is healthy. One group of scientists claims that "deworming" programmes (giving deworming pills to all school children in affected areas) improve school performance and health, calling deworming one of the most potent anti-poverty interventions of our time. Another claims that deworming improves neither school performance nor health. Unfortunately, people often fail to think critically about the trustworthiness of claims, including policy makers weighing claims made by scientists. Schools do not do enough to prepare young people to think critically [1], so many people struggle to assess the trustworthiness of evidence. As a consequence, they may not make informed choices. To address this deficit, we present here a general tool: Key Concepts for Making Informed Choices (Table 1, with examples in Box 2). We hope scientists and professionals in all fields will use, evolve and evaluate it. The tool was adapted, drawing on the expertise of two dozen researchers, from a framework developed for healthcare [2] (Box 1). Ideally, the Key Concepts for Making Informed Choices should be embedded in education for citizens of all ages. This should be done using learning resources and teaching strategies that have been evaluated and shown to be effective.

Trustworthy evidence

People are flooded with information. Simply giving them more is unlikely to be helpful unless its value is understood. A recent survey in the UK showed that only about a third of the public trust evidence from medical research; about two-thirds trust the experiences of friends and family [3]. Not all evidence is created equal.
Yet people often don't appreciate which claims are more trustworthy than others, or what sort of comparisons are needed to evaluate ...