Indian Journal of Anaesthesia  
EDITORIAL
Year : 2017  |  Volume : 61  |  Issue : 6  |  Page : 453-455  

Concealing research outcomes: Missing data, negative results and missed publications


Department of Anaesthesiology, VIMS, Bellary, Karnataka, India

Date of Web Publication: 12-Jun-2017

Correspondence Address:
S Bala Bhaskar
Department of Anaesthesiology, VIMS, Bellary, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/ija.IJA_361_17


How to cite this article:
Bhaskar S B. Concealing research outcomes: Missing data, negative results and missed publications. Indian J Anaesth 2017;61:453-5

How to cite this URL:
Bhaskar S B. Concealing research outcomes: Missing data, negative results and missed publications. Indian J Anaesth [serial online] 2017 [cited 2017 Dec 11];61:453-5. Available from: http://www.ijaweb.org/text.asp?2017/61/6/453/207767



Clinical research, when performed as per the existing guidelines, is an exhaustive, rigorous, focussed and time-consuming task. The preparatory part of research involves raising the research question, submitting the research proposal to the authorities, obtaining the relevant permissions from various regulatory bodies and conducting the trial. This is logically followed by submission of all the research findings for publication in peer-reviewed journals.

The published results of research are expected to be replicable by others. This depends on the precision of the methodology, including the sample size; the results extrapolated from the study should also be applicable to the larger population, so that the observed effects are nearly identical to the true effects. Strong statistical correlations are expected to be discussed while minimising bias and confounding.

During the process, not all the research questions raised may be answered as planned, owing to inadequate methodology or its poor execution. Sometimes, completed research is not submitted for publication at all, or is presented only as an abstract at scientific meetings or later as a conference paper. Such research is thus 'wasted', as its outcomes never reach the public domain. More than 60% of studies are not submitted for publication, and this figure has remained at around this level for the last few years.[1]

This failure to publish research (the 'file drawer effect') occurs because the researchers feel that the outcomes were not positive and/or because they were statistically insignificant.[2] It constitutes a major publication bias and is one of the chief factors contributing to poor replicability of research findings. The most common reason authors report for failing to publish their findings as full-length articles is lack of time, unrelated to journal policy or the review process.[1],[3]

Missing data in published clinical trials, whether unintentional or deliberate ('dissemination bias'), are another source of improper conclusions and may contribute to improper and insufficient outcome assessments in related meta-analyses. Fallacies in the choice of randomised trials by the authors of meta-analyses themselves can also weaken the analysis of evidence, depriving clinicians, patients, policy-makers and funding agencies of the best current evidence on the benefits and harms of an intervention. A review designed to check the extent to which meta-analyses of drugs and biologics focus on specific named agents, or even a single agent, and to identify characteristics associated with such focus, found that 81% did not include all treatments and 43% covered only specific treatments.[4] Selective removal of data and selective removal of patients ('data massaging') can be worse than non-reporting of all outcomes.[5]

Efforts have been made, and suggestions provided, by an expert group in the form of evidence-informed general and targeted recommendations addressed to all stakeholders, so that bias related to dissemination is reduced.[6] The Overcome failure to Publish nEgative fiNdings (OPEN) project is a European Commission initiative with the agenda of identifying dissemination bias and developing recommendations to overcome incomplete or selective access to trial results, starting with the definition of a clinical trial. One of its strong recommendations is to promote trial registration and posting of results in the registry, and to support initiatives that facilitate searches across multiple trial registries.

Deadlines for posting results from aggregate data after completion of research will go a long way towards reducing dissemination bias and avoiding the likelihood of missing data. In a recent joint statement, the Indian Council of Medical Research and other bodies across the world (the UK Medical Research Council, the Norwegian Research Council, the Bill and Melinda Gates Foundation, etc.) agreed to enforce, within the next 12 months, a policy of compulsory registration of trials funded or sponsored by them, with the results to be disclosed within a time frame (1 year) on the registry and/or as a publication in journals.[7] Unfortunately, the compliance rate among researchers is only about one in five.[8] A new online tool, the TrialsTracker, identifies completed trials from clinicaltrials.gov, searches for results in the registry and on PubMed, and presents summary statistics for each sponsor online. Sponsors who fail to make their results available online are thus tracked. This service aims to reduce the withholding of trial results and to promote transparency in the research process.[9]
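The core summary statistic a tracker of this kind reports — the share of each sponsor's completed trials with results actually posted — can be sketched as follows. The sponsor names and trial records below are entirely hypothetical, for illustration only; they are not drawn from any real registry.

```python
from collections import defaultdict

# Hypothetical completed-trial records: (sponsor, results_posted).
# These sponsors and counts are illustrative, not real registry data.
trials = [
    ("Sponsor A", True),
    ("Sponsor A", False),
    ("Sponsor A", False),
    ("Sponsor B", True),
    ("Sponsor B", True),
    ("Sponsor C", False),
]

def compliance_by_sponsor(records):
    """Fraction of each sponsor's completed trials with results posted."""
    posted, total = defaultdict(int), defaultdict(int)
    for sponsor, has_results in records:
        total[sponsor] += 1
        posted[sponsor] += int(has_results)
    return {s: posted[s] / total[s] for s in total}

rates = compliance_by_sponsor(trials)
print(rates)
```

With the sample data above, Sponsor A has posted results for one of three trials, so its compliance rate is one in three — the same order of magnitude as the overall compliance reported at ClinicalTrials.gov.[8]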

The recommendation on prospective trial registration is particularly necessary in countries such as India. Trial registration in the Clinical Trials Registry-India [10] has been mandated by the Drugs Controller General (India) (www.cdsco.nic.in) since 15th June 2009.

Many journals have also insisted on trial registration as a pre-requisite for publication. The Indian Journal of Anaesthesia (IJA) has encouraged trial registration for the past few years, and registration will, rightly, likely be made mandatory in the near future. By this, the IJA hopes to reduce dissemination bias in the Indian scenario and to strengthen its emphasis on rigorous methodology.

Many meta-analyses and systematic reviews deliberately restrict their inclusion criteria so as to focus on specific cause-effect relationships, filtering out many inconvenient but significant criteria. Individual participant data are very valuable during analysis but are not available for review in many trials, adding to the bias, as such data may not be reflected in the evidence base.[11]

Outcomes and participant data published only in the grey literature escape analysis by some researchers, leading to misinterpretation of the final results and outcomes. In one survey of 31 meta-analyses designed to identify publication bias,[11] about 70% of the articles did not extract individual participant data from the grey literature.

Transparency of analysis by the authors of meta-analyses provides a more objective and safer end-point for readers and clinicians. An interesting fact emerges from a study assessing the differences in methodological quality and conclusions between industry-supported meta-analyses and 'non-profit' or 'no support' meta-analyses. Of the 39 meta-analyses, ten had industry support, 18 had non-profit or no support and 11 had undeclared support. In the non-profit or no-support group, there was less bias in the selection of studies, search methods were more clearly defined, and blinding, allocation concealment and the exclusion of patients and studies were also more clearly described than in the industry-supported meta-analyses. The industry-supported meta-analyses were twice as likely to recommend the drug as the meta-analyses with non-profit or no support (40% vs. 22%).[12] Transparency may thus be better maintained in the non-supported group than in the industry-supported group, and its results and inferences better accepted by clinicians.

Many systematic reviews fail to reveal all the available evidence and research outcomes. They are built with a narrow scope, withholding information about alternative evidence.[1] Considering the larger picture, this bypassing of significant outcomes and evidence benefits neither the patient nor the clinician.[13]

Many reviews appear outdated by the time they are published, owing to rapid developments in the field, and it is difficult for authors to update and republish them because of time and financial constraints. One way to overcome this limitation is to automate the trial search and selection process as much as possible.[13] A systematic review with a live cumulative network meta-analysis can allow examination of the totality of the randomised evidence using trial networks, whereas conventional meta-analyses have a narrow focus.[13]

How can dissemination bias and the file drawer effect be minimised? Journals should modify their policies and instructions so that greater value is given to soundness of design and analysis, completeness of data, inclusion-exclusion criteria, etc., rather than to 'positive' results. Encouragingly, some journals have appeared purely to promote negative results, with open/transparent review (removing the anonymity of reviewers) and without publication bias, publishing submissions on scientific validity alone without looking for positive outcomes. They are indexed in the PubMed database, remove the bias and reduce the file drawer effect. Examples include the Journal of Negative Results in Biomedicine (https://jnrbm.biomedcentral.com), F1000Research (https://f1000research.com), The Missing Pieces: A Collection of Negative, Null and Inconclusive Results (http://collections.plos.org/missing-pieces) from PLOS ONE, and the Journal of Articles in Support of the Null Hypothesis (http://www.jasnh.com) for the psychological literature.

Another way in which trials with negative or statistically insignificant results can be published is for a journal to state explicitly that it cares about the genuineness of the research and allows presentation of all data, including negative data.[14] It has also been suggested that publication bias from the editors and referees of a journal, which is an inherent possibility, can be overcome by monitoring by an associate editor or the publisher.

Defences of the selective-publication approach do exist: the data thereof are scientifically rich, and fewer such trials are required to create a near-perfect meta-analytic estimate of the true effect.[2] Thus, the file drawer effect, with its 'withheld' publications, is claimed to be beneficial in conveying the best information and outcomes. It has also been suggested that repeat submissions of scientific articles can be statistically stronger and better at avoiding bias.[2]

The larger emerging picture nudges us to celebrate negativity for its great worth in research. Let us strive to unhide the hidden files. Confirmed 'nullness' of the null hypothesis has significant implications for research analysis.

I never quit until I get what I'm after. Negative results are just what I'm after. They are just as valuable to me as positive results. – Thomas A. Edison



 
References

1. Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias – An updated review. PLoS One 2013;8:e66844.
2. de Winter J, Happee R. Why selective publication of statistically significant results can be effective. PLoS One 2013;8:e66463.
3. Scherer RW, Ugarte-Gil C, Schmucker C, Meerpohl JJ. Authors report lack of time as main reason for unpublished research presented at biomedical conferences: A systematic review. J Clin Epidemiol 2015;68:803-10.
4. Haidich AB, Pilalas D, Contopoulos-Ioannidis DG, Ioannidis JP. Most meta-analyses of drug interventions have narrow scopes and many focus on specific agents. J Clin Epidemiol 2013;66:371-8.
5. Müller KF, Briel M, D'Amario A, Kleijnen J, Marusic A, Wager E, et al. Defining publication bias: Protocol for a systematic review of highly cited articles and proposal for a new framework. Syst Rev 2013;2:34.
6. Meerpohl JJ, Schell LK, Bassler D, Gallus S, Kleijnen J, Kulig M, et al. Evidence-informed recommendations to reduce dissemination bias in clinical research: Conclusions from the OPEN (Overcome failure to Publish nEgative fiNdings) project based on an international consensus meeting. BMJ Open 2015;5:e006666.
7. Major Research Funders and International NGOs to Implement WHO Standards on Reporting Clinical Trial Results. World Health Organization (WHO). Available from: http://www.who.int/mediacentre/news/releases/2017/clinical-trial-results/en/. [Last accessed on 2017 May 18].
8. Anderson ML, Chiswell K, Peterson ED, Tasneem A, Topping J, Califf RM. Compliance with results reporting at ClinicalTrials.gov. N Engl J Med 2015;372:1031-9.
9. Powell-Smith A, Goldacre B. The TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions. F1000Res 2016;5:2629.
10. Clinical Trials Registry-India. Available from: http://www.ctri.nic.in/Clinicaltrials/login.php. [Last accessed on 2017 May 10].
11. Ahmed I, Sutton AJ, Riley RD. Assessment of publication bias, selection bias, and unavailable data in meta-analyses using individual participant data: A database survey. BMJ 2012;344:d7762.
12. Jørgensen AW, Maric KL, Tendal B, Faurschou A, Gøtzsche PC. Industry-supported meta-analyses compared with meta-analyses with non-profit or no support: Differences in methodological quality and conclusions. BMC Med Res Methodol 2008;8:60.
13. Créquit P, Trinquart L, Yavchitz A, Ravaud P. Wasted research when systematic reviews fail to provide a complete and up-to-date evidence synthesis: The example of lung cancer. BMC Med 2016;14:8.
14. Wager E, Williams P; Project Overcome failure to Publish nEgative fiNdings Consortium. “Hardly worth the effort”? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: Quantitative and qualitative study. BMJ 2013;347:f5248.




 
