Abstract
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT
- Industry-funded studies tend to emphasize the beneficial effects of the sponsor’s product, but we do not know whether reports of adverse effects are downplayed.
- Pharmaceutical companies are required to collate and accurately report adverse effects data in order to fulfil regulatory requirements.
WHAT THIS STUDY ADDS
- The bias found in studies of the association between industry funding and the reporting of beneficial effects may not be as prominent for adverse effects data.
- Industry-funded studies do not appear to differ from non-industry-funded studies in reporting the raw adverse effects data, but the interpretation and conclusions may be slanted to favour the sponsor’s product.
- Readers of industry-funded studies should critically examine the raw safety data themselves rather than be swayed by the authors’ interpretation.
AIM
To investigate whether adverse effects data for the sponsor’s product are presented more favourably in pharmaceutical industry-funded studies than in non-industry-funded studies.
METHODS
We conducted a systematic review of methodological evaluations that had assessed the relationship between industry funding and the reported risk of adverse effects. Searches were undertaken in 10 databases and supplemented with other sources of information such as handsearching, citation searching, checking conference proceedings and discussion with experts. Two reviewers independently screened the records and carried out data extraction for potentially relevant papers. We included studies that compared the results and interpretation of the adverse effects data according to funding source (e.g. adverse effects data in pharmaceutical industry research vs. data from nonprofit organizations, or from one manufacturer vs. another). Methodological evaluations were excluded if categories of funding source were not explicitly specified by the researchers, and if we were uncertain that industry-funded studies were present in the evaluation.
RESULTS
The search strategy yielded 4069 hits, of which six methodological evaluations met our inclusion criteria. One survey of 370 trials covering a wide range of topics found that trials with industry sponsors had more complete reporting of adverse effects compared with non-industry-funded trials, whereas another survey of 504 inhaled corticosteroid studies showed no apparent difference after confounding factors were adjusted for. In contrast, we found evidence from post hoc subgroup analyses involving two products in which the estimated risk of harm was lower in manufacturer-funded studies than in nonmanufacturer-funded studies. There is also evidence from four methodological evaluations that authors with industry funding were more likely than authors without pharmaceutical funding to interpret and conclude that a drug was safe, even among studies that did find a statistically significant increase in adverse effects for the sponsored product.
CONCLUSIONS
Our review indicates that industry funding may not be a major threat to bias in the reporting of the raw adverse effects data. However, we are concerned about potential bias in the interpretation and conclusions of industry-funded authors and studies.
Introduction
Methodological evaluations have identified a potential association between source of funding and the publication of more favourable results for the sponsor’s product [1, 2]. Existing evaluations have focused mainly on effectiveness outcomes, with the aim of determining the relationship between funding and more positive reporting of beneficial effects. However, favourable outcomes can consist of greater benefit, a reduction in harm, or a combination of both. It is not clear whether adverse effects profiles are affected by bias in industry-funded research, whereby potential harm is downplayed and positive aspects of safety are emphasized. Manufacturers are governed to some degree by the safety requirements of the regulatory authorities, and may therefore strive to provide unbiased data on adverse effects.
We aimed to review systematically any methodological evaluations that assessed the reporting of adverse effects and any potential association with source of funding. Information on the extent (if any) of this type of bias will help clinical pharmacologists and pharmacovigilance teams who are involved in the critical appraisal of drug safety data from different sources.
Methods
Our systematic review was conducted by two independent reviewers who retrieved potentially relevant articles and extracted data. The two reviewers then met, resolved discrepancies and reached a consensus on the final results.
Search strategy
Searches were undertaken in 10 electronic databases to retrieve methodology papers related to all aspects of the incorporation of adverse effects into systematic reviews. Because of the limitations of searching for methodological papers, it was envisaged that relevant papers might be missed by searching databases alone. We therefore undertook citation searches of all included papers using Web of Science, handsearching of selected key journals, conference proceedings and web sources, and contact with other researchers in the field (Appendix 1).
Selection criteria
A methodological evaluation was considered eligible for inclusion in this review if it looked at the results or interpretation of the reported adverse effects data according to funding source (e.g. adverse effects data in pharmaceutical industry research vs. data from nonprofit organizations, or from one manufacturer vs. another). We accepted methodological evaluations of any design, including primary studies and systematic reviews. Methodological evaluations were excluded if categories of funding source were not explicitly specified by the researchers and if we were uncertain that industry-funded studies were present in the evaluation.
Data extraction
Information was collected on the selection criteria, interventions and adverse effects, the number, design and funding sources of studies included in the methodological evaluation, and the outcomes used in assessing differences between studies.
Data analysis
We aimed to provide a narrative assessment of the available methodological evaluations, and did not plan on conducting a meta-analysis as the outcome measures were unlikely to be homogeneous. Where available, we recorded both the crude summary statistics and the adjusted estimates based on correction for confounding factors.
Assessment of methodological quality
The following criteria were used to assess the quality of the existing methodological evaluations:
- Role of confounding factors: Did the researchers select comparison groups (i.e. data from different funding sources) that were equally matched? For example, did the industry-funded studies have similar aims, designs and sample sizes to the non-industry-funded ones? If not, were there adjustments for potentially confounding factors that could affect the association between funding and the nature of the adverse effects data? We looked to see whether any of the following confounding factors had been considered: study design, methodological quality, type of intervention and control intervention, sample size, disease area, type of adverse effects.
- Missing data or misclassification: How often were the researchers able to establish reliably the source of funding for the reported data?
- Blinding: Were the researchers aware of the funding source when they were judging the nature of the adverse effects data?
- Validity and representativeness: Did the researchers select an adequate sample of studies (in terms of size, diversity of topics and range of adverse effects) that was reasonably reflective of the current literature?
Results
Included studies
The searches retrieved 4609 records, of which six methodological evaluations met the inclusion criteria (Table 1). The flow chart of study selection is given in Figure 1. All six reports were concerned with drug interventions, but five of six evaluations were limited to the adverse effects of a single agent or single class of drugs. We found only one report that assessed funding source and reporting of safety data across a wide range of diseases and drugs. The number of studies included in the methodological evaluations ranged between 10 and 504, with only two reports including more than 100 studies.
Half of the methodological evaluations focused on adverse effects data within randomized controlled trials, two included observational data, and one had a mixture of reports of original research, reviews and letters. Most methodological evaluations compared manufacturer funding with nonmanufacturer funding; however, one report looked for differences in adverse effects data in research funded by competing manufacturers.
Excluded studies
Two methodological evaluations potentially met the inclusion criteria but were subsequently excluded from this review [3–5]. One evaluation [3] contained duplicate data from an included article [6], whereas the other evaluation (reported in two publications) was excluded after we contacted the author, as the categories of funding source were unclear but were unlikely to include industry-funded studies [4, 5].
Summary of methodological quality (Table 1)
Four of the methodological evaluations used some form of adjustment for potentially confounding factors, although the comprehensiveness of the factors adjusted for varied [6–10]. A major constraint in assessing an association between source of funding and the reporting of adverse effects is the lack of information on funding source. Only two methodological evaluations described or used appropriate methodology to assess the number of studies not reporting any funding source; both of these evaluations included trial data only and reported that 28.6% and 17.3% of studies did not disclose any funding source [7, 8, 10]. Blinding was reported by only two evaluations, one of which tested the effect of blinding on a subsample of included studies and found that blinding did not affect the results [7, 8]. Overall, our assessment of quality and validity showed that Als-Nielsen’s evaluation was probably the most robust [7, 8].
Definitions of manufacturer-associated funding varied, as did the methods and outcome measures used to assess the association between funding and adverse effects reporting, making it difficult to pool the results of the studies in a meta-analysis.
Impact of funding source and selective reporting of specific types of adverse effects
Als-Nielsen et al. looked at a diverse range of randomized trials funded by for-profit organizations and noted that these trials had more complete reporting of adverse events (128/146, 88%), particularly with a higher frequency of adverse events being found in the experimental arm [7, 8]. In contrast, trials funded by nonprofit organizations often failed to mention adverse effects (35/60, 58.3%) and were less likely to describe higher frequencies of adverse events for the experimental arm [7, 8].
Confounding factors
Nieto et al.’s evaluation of inhaled corticosteroids reported that statistically significant results for adverse effects were found less often in pharmaceutical industry-funded studies [crude prevalence ratio 0.53, 95% confidence interval (CI) 0.44, 0.64], whereas the non-industry-funded studies were more likely to report significant harm [10]. However, in many ways Nieto et al. may have been comparing apples with oranges. Studies funded by industry were more likely to be multicentre, parallel group, randomized controlled trials, with the stated primary objective of evaluating efficacy over a relatively short follow-up period. Conversely, non-industry-funded studies were more commonly observational in nature, with the primary objectives of evaluating adverse effects such as long-term problems with growth or bone metabolism. After adjustment for these confounders, Nieto et al. found a nonsignificant prevalence ratio of 0.94 (95% CI 0.77, 1.15), thus indicating that the difference associated with funding may be mediated by other variables in the analysis [10].
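For readers unfamiliar with this type of adjustment, the sketch below illustrates, with wholly invented study-level data rather than the Nieto et al. dataset, how a crude prevalence ratio can move towards the null once a confounder such as study design is entered into a modified Poisson regression model; the variable names, counts and probabilities are illustrative assumptions only.

```python
# Illustrative only: simulated studies in which industry funding is confounded with
# study design, loosely mirroring the pattern described by Nieto et al.
# None of these numbers come from the review; they are assumptions for demonstration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
design = rng.binomial(1, 0.5, n)                            # 1 = observational study
funding = rng.binomial(1, np.where(design == 1, 0.2, 0.8))  # industry studies are mostly trials
harm = rng.binomial(1, np.where(design == 1, 0.45, 0.20))   # 'significant harm' driven by design
df = pd.DataFrame({"funding": funding, "design": design, "harm": harm})

def prevalence_ratio(formula, data):
    """Modified Poisson regression with robust errors; exponentiated coefficients are prevalence ratios."""
    fit = sm.GLM.from_formula(formula, data=data, family=sm.families.Poisson()).fit(cov_type="HC0")
    pr = np.exp(fit.params["funding"])
    lo, hi = np.exp(fit.conf_int().loc["funding"])
    return pr, lo, hi

print("Crude PR (industry vs non-industry): %.2f (95%% CI %.2f, %.2f)" % prevalence_ratio("harm ~ funding", df))
print("Design-adjusted PR:                  %.2f (95%% CI %.2f, %.2f)" % prevalence_ratio("harm ~ funding + design", df))
```

In this simulated scenario the crude prevalence ratio sits well below 1 simply because industry-funded studies are concentrated among short efficacy trials, and it moves towards 1 once study design is included in the model.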
Impact of funding source and magnitude of risk of harm
We looked for evidence that the risk of harm from the sponsor’s product may have been downplayed in industry-funded studies. A subgroup evaluation from Kemmeren et al.’s meta-analysis showed that the pooled data from industry-funded studies yielded a weaker association between third-generation oral contraceptives and venous thrombosis [6]. Similarly, Juni et al.’s meta-analysis showed that studies funded by Merck were associated with greater cardioprotective effects of naproxen, thus implying a lesser risk of harm from Merck’s product (rofecoxib) [9]. However, this evidence is weak because it comes from post hoc subgroup analyses that involved only a small number of studies and were subject to confounding.
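As a purely illustrative aside, subgroup analyses of this kind pool study-level effect estimates separately within each funding category. The minimal sketch below uses invented log odds ratios and standard errors, not the Kemmeren or Juni data, to show how an inverse-variance fixed-effect pooled estimate is obtained for each funding subgroup.

```python
# Minimal inverse-variance (fixed-effect) pooling of log odds ratios by funding subgroup.
# The effect sizes and standard errors are invented purely for illustration.
import math

studies = [
    # (log odds ratio, standard error, manufacturer_funded)
    (math.log(1.4), 0.30, True),
    (math.log(1.6), 0.25, True),
    (math.log(2.2), 0.35, False),
    (math.log(2.6), 0.40, False),
]

def pool(subgroup):
    """Inverse-variance weighted mean of log ORs, back-transformed with a 95% CI."""
    weights = [1.0 / se ** 2 for _, se, _ in subgroup]
    pooled_log = sum(w * est for (est, _, _), w in zip(subgroup, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

for label, flag in (("Manufacturer-funded", True), ("Nonmanufacturer-funded", False)):
    or_, lo, hi = pool([s for s in studies if s[2] == flag])
    print(f"{label}: pooled OR {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

With only two hypothetical studies per subgroup, the pooled estimates are imprecise, which echoes the caution above about drawing conclusions from small post hoc subgroups.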
Funding source and interpretation of adverse effects data
The included studies revealed some interesting potential associations between funding source and the subjective interpretation or conclusions regarding the adverse effects data. For example, Nieto et al. found that authors of pharmaceutical studies were more likely than authors of nonpharmaceutical studies to conclude that a drug was safe, even among studies that did find a statistically significant increase in adverse effects [10]. Similarly, Rochon et al. found that a manufacturer-associated drug was often judged to be less toxic, even though this interpretation was not always supported by tests of statistical significance [11]. Finally, Als-Nielsen et al. noted an association between a favourable recommendation for a product and the manufacturer’s sponsorship, irrespective of the actual magnitude of treatment benefit or safety results seen in the trial [7, 8].
The study by Stelfox et al. focused on the association between financial relationships with manufacturers and the conclusions of studies, and identified that authors of articles supporting the safety of calcium-channel antagonists were more likely to have a financial relationship [12]. However, this study did not check the appropriateness of the authors’ conclusions against the actual adverse effects data of the studies. Moreover, authors of supportive articles were just as likely to have received funding from manufacturers of drugs that compete with calcium-channel blockers. This suggests that authors who are less critical of safety issues may be more likely to receive funding from any manufacturer.
We should also bear in mind the considerable potential for error and bias when trying to judge whether the data interpretation and conclusions of a study are excessively favourable or not. Stelfox et al. [12] and Als-Nielsen et al. [7, 8] attempted to have some degree of blinding of the reviewers, but none of the remaining four studies used blinding.
Discussion
Our systematic review has identified somewhat mixed evidence surrounding the postulated link between industry funding and more favourable reporting of adverse effects data. Before drawing any conclusions, we need to look into the methodological problems that surround such research. First, we should bear in mind that all the existing methodological evaluations are ‘observational’ in nature. Whereas some of them had predefined objectives [7, 8, 12], others were post hoc or subgroup analyses. We consider confounding to be a major problem in most of the methodological evaluations, where the baseline features (e.g. study design, patient population, primary objectives) of the industry-funded studies may have differed from those of the non-industry-funded studies. This is particularly apparent in Nieto et al.’s evaluation, where the observed differences became nonsignificant after adjustment for confounding factors [10].
We are also concerned about the possibility of reporting or publication bias with respect to methodological evaluations [13]. Journal editors may look more favourably upon articles that show biased reporting of adverse effects in industry-funded studies. Researchers who did not find any industry-related bias may have decided to omit such results from their manuscripts or chosen not to submit their articles for publication. Equally, researchers who found evidence of industry-funded bias may have avoided publicizing the results so as not to jeopardize any industry ties that they might have.
The generalizability of the data is also contentious. It would be unfair for us to draw broad conclusions about bias in all industry-funded studies when the data are limited to a few studies or to only a specific class of drugs. Moreover, reporting recommendations have changed over time, with tightening of regulatory requirements, and the publication of the CONSORT statement on harms [14]. Existing methodological evaluations have not taken into account temporal changes, or the availability of complete adverse effects data from unpublished company trial reports such as that of the excellent GlaxoSmithKline Clinical Trials Registry (http://ctr.gsk.co.uk/welcome.asp). Unpublished trial data from many manufacturers are now unified under a single website (http://www.clinicalstudyresults.org), although it remains unclear whether the format of the data is of sufficient simplicity to allow easy interpretation by lay persons.
Failure to classify funding source accurately is the most prominent weakness in the methodological evaluations. In Nieto et al.’s work, 87 (17.3%) studies were lumped into the non-industry-funded group, despite there being no information on funding source [10]. Misclassification of such a large number of studies could have a major influence on the direction and magnitude of any link between funding and adverse effects data. A further methodological problem lies in the difficulty of verifying authorship and the reliability of financial declarations in published papers. Two recent papers have highlighted problems with ghost authorship and inaccurate financial disclosures (e.g. not disclosing a financial interest in one article, yet being found to have declared industry funding in another publication) [15, 16]. If studies categorized under non-industry funding have been misclassified (and were actually industry funded), this would dilute the strength of any argument that non-industry-funded studies provide less biased reports of adverse effects.
There is also a considerable amount of subjectivity involved in trying to determine whether the interpretation and conclusions of a study were biased towards the sponsor’s product. Reviewers who are critical of the pharmaceutical industry may have taken a harsher view in finding fault with industry-funded studies, whereas those supportive of the industry may have been less likely to judge that bias was present. Unfortunately, blinding and inter-rater reliability are key parameters that were seldom specified by the methodological researchers.
As this is a systematic review, the main limitations stem from the weaknesses in the original data. We have attempted to address publication bias by using a comprehensive search strategy that included handsearching, checking conference proceedings and discussion with experts in the field. Where possible, we have focused our review on results related to clearly defined funding sources, and looked for data where the researchers had made adjustments for confounding factors. We appreciate, however, that future methodological evaluations could be substantially improved if a wider range of drugs were studied, with more rigorous ascertainment of funding source, and closer matching of trial designs and quality in the comparator groups.
Bearing in mind these limitations, what conclusions might be drawn? First, there is no definite evidence that funding source leads to selective reporting of adverse effects outcomes that favour the sponsor’s product. Indeed, Als-Nielsen et al., whose evaluation was probably the most methodologically robust, found that the opposite was true, with industry-funded studies providing more complete reporting and a higher rate of adverse effects for the experimental arm [7, 8]. Unlike nonprofit organizations, pharmaceutical companies hoping to submit a licensing application may be more focused on providing an accurate depiction of adverse events, as the data might be subjected to rigorous regulatory scrutiny. In fact, the information submitted to the regulatory authorities may be less positive than that seen in the published articles [17].
We also do not have definite evidence that industry-funded studies present a lower magnitude of risk of harm from the sponsor’s product. However, pharmaceutical trials have been accused elsewhere in the literature of using design modifications that yield lower rates of adverse effects. Such methods may include using lower doses of the intervention and higher doses for the controls, monitoring for adverse effects using open-ended or nonspecific questions, and the choice of inappropriate comparators [10, 18–21].
Our systematic review does indicate, though, that funding source may impact on the nature of the authors’ interpretation and conclusions regarding the safety profile. However, the interpretation of adverse effects data relies not only on statistical significance, but also on subjective judgements of clinical relevance, preventability and absolute risk. It would be prudent for readers to check the adverse effects data themselves using the approach recommended in the Cochrane Handbook [22], rather than rely on the authors’ subjective interpretations.
Conclusion
Our systematic review indicates that industry funding may not be a major threat to bias in the reporting of the raw adverse effects data. However, we are concerned that industry funding may sway the interpretation and conclusions of the study.
Competing interests
S.G. is funded via a Health Sciences Research Fellowship from the Medical Research Council (MRC). Y.K.L. is European Editor of the British Journal of Clinical Pharmacology, but has no involvement in the review process for this manuscript.
Acknowledgments
We thank Jane Burch of CRD, York for her kind assistance in screening the titles and abstracts in the Endnote library.
Supporting information
The following supporting information is available for this article online:
This material is available as part of the online article