The published evidence base for antidepressant drugs is biased and incomplete, a study has concluded: nearly a third of the clinical trials submitted to the FDA were never published.
The study, just published in the New England Journal of Medicine, based its conclusion on the findings that these trials had not been published and that published studies were more likely to report positive outcomes.
The study authors note that evidence-based medicine relies on assessment of published clinical trials, so selective trial reporting distorts the evidence base and may lead to bias.
All clinical trials submitted to the FDA in support of a licence application must be registered in advance, with details of the analysis planned, and the trial data must be submitted to the FDA in full for in-house analysis.
This prevents selective post-hoc analyses and reporting.
FDA in-house trial data reviews are available after licensing through US freedom of information legislation, and more recent ones are available on the agency’s website.
The study authors therefore used FDA reviews to compare the efficacy of antidepressant drugs as derived from published trial data with that derived from the FDA's own analyses.
They obtained copies of FDA reviews for 12 antidepressant drugs authorised by the FDA between 1987 and 2004.
From these, they extracted efficacy data for short-term treatment of depression from all randomised placebo-controlled trials, using licensed doses only.
They also extracted the FDA assessment of each trial as positive or negative according to the pre-specified primary outcome; where the FDA had assessed a study as neither clearly positive nor negative (not significant on the primary outcome but significant on several secondary outcomes), the study was classified as questionable, as were failed studies.
The researchers then used a comprehensive literature search strategy, including contact with the company involved, to determine whether each study had been published, and extracted similar information from those that had.
Finally, they compared the effect sizes in the published versions with those in the FDA reviews.
The authors conclude that in this analysis there was a bias towards the publication of positive results.
Additionally, studies that were not positive were published in such a way as to suggest a positive outcome.
Overall, the evidence shows that the efficacy of drugs in this class is less than would be expected from evaluation of published data alone: selective publication bias results in their effect size appearing to be about a third larger than the effect derived from the total data.
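As a rough illustration of that arithmetic (the effect sizes below are hypothetical, chosen only to show how a "third larger" apparent effect arises, and are not figures taken from the study):

```python
# Hypothetical numbers for illustration only, not the study's actual results.
published_effect = 0.40  # pooled effect size estimated from published trials alone
complete_effect = 0.30   # pooled effect size estimated from the full FDA data set

# Relative inflation of the apparent effect due to selective publication
inflation = (published_effect - complete_effect) / complete_effect
print(f"Apparent inflation: {inflation:.0%}")  # → Apparent inflation: 33%
```

With these illustrative values, relying on published trials alone would make the drugs appear about a third more effective than the complete data support.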