Is there evidence for biased reporting of published adverse effects data in pharmaceutical industry-funded studies?

Br J Clin Pharmacol. 2008 Dec; 66(6): 767–773.
Published online 2008 Aug 27. doi: 10.1111/j.1365-2125.2008.03272.x
PMCID: PMC2675760




What is already known about this subject

  • Industry-funded studies tend to emphasize the beneficial effects of the sponsor’s product, but we do not know if reports of adverse effects are downplayed.
  • Pharmaceutical companies are required to collate and accurately report adverse effects data in order to fulfil regulatory requirements.


What this study adds

  • The bias found in the studies looking at the association between industry funding and reporting of beneficial effects may not be as prominent when considering adverse effects data.
  • Industry-funded studies do not appear to differ from non-industry-funded studies in reporting the raw adverse effects data, but the interpretation and conclusions may be slanted to favour the sponsor’s product.
  • Readers of industry-funded studies should critically examine the raw safety data themselves rather than be swayed by the authors’ interpretation.


Aims

To investigate whether adverse effects data for the sponsor’s product are presented more favourably in pharmaceutical industry-funded studies than in non-industry-funded studies.


Methods

We conducted a systematic review of methodological evaluations that had assessed the relationship between industry funding and the reported risk of adverse effects. Searches were undertaken in 10 databases and supplemented with other sources of information such as handsearching, citation searching, checking conference proceedings and discussion with experts. Two reviewers independently screened the records and carried out data extraction for potentially relevant papers. We included studies that compared the results and interpretation of the adverse effects data according to funding source (e.g. adverse effects data in pharmaceutical industry research vs. data from nonprofit organizations, or from one manufacturer vs. another). Methodological evaluations were excluded if categories of funding source were not explicitly specified by the researchers, and if we were uncertain that industry-funded studies were present in the evaluation.


Results

The search strategy yielded 4069 hits, of which six methodological evaluations met our inclusion criteria. One survey of 370 trials covering a wide range of topics found that trials with industry sponsors had more complete reporting of adverse effects compared with non-industry-funded trials, whereas another survey of 504 inhaled corticosteroid studies showed no apparent difference after confounding factors were adjusted for. In contrast, we found evidence from post hoc subgroup analyses involving two products where the likelihood of harm was of a lower magnitude in manufacturer-funded studies compared with nonmanufacturer-funded studies. There is also evidence from four methodological evaluations that authors with industry funding were more likely than authors without pharmaceutical funding to interpret and conclude that a drug was safe, even among studies that did find a statistically significant increase in adverse effects for the sponsored product.


Conclusions

Our review indicates that industry funding may not be a major threat to bias in the reporting of the raw adverse effects data. However, we are concerned about potential bias in the interpretation and conclusions of industry-funded authors and studies.

Keywords: adverse effects, bias, industry funding, systematic review


Introduction

Methodological evaluations have identified a potential association between source of funding and the publication of more favourable results for the sponsor’s product [1, 2]. Existing evaluations have focused mainly on effectiveness outcomes, with the aim of determining the relationship between funding and more positive reporting of beneficial effects. However, favourable outcomes can consist of greater benefit, a reduction in harm, or a combination of both. It is not clear whether adverse effects profiles are affected by bias in industry-funded research, whereby potential harm is downplayed and positive aspects of safety are emphasized. Manufacturers are governed to some degree by the safety requirements of the regulatory authorities, and may therefore strive to provide unbiased data on adverse effects.

We aimed to review systematically any methodological evaluations that assessed the reporting of adverse effects and any potential association with source of funding. Information on the extent (if any) of this type of bias will help clinical pharmacologists and pharmacovigilance teams who are involved in the critical appraisal of drug safety data from different sources.


Methods

Our systematic review was conducted by two independent reviewers who retrieved potentially relevant articles and extracted data. The two reviewers then met, resolved discrepancies and reached a consensus on the final results.

Search strategy

Searches were undertaken in 10 electronic databases to retrieve methodology papers related to all aspects of the incorporation of adverse effects into systematic reviews. Because of the limitations of searching for methodological papers, it was envisaged that relevant papers might be missed by searching databases alone. We therefore undertook citation searches of all included papers using Web of Science, handsearching of selected key journals, conference proceedings and web sources, and contact with other researchers in the field (Appendix 1).

Selection criteria

A methodological evaluation was considered eligible for inclusion in this review if it looked at the results or interpretation of the reported adverse effects data according to funding source (e.g. adverse effects data in pharmaceutical industry research vs. data from nonprofit organizations, or from one manufacturer vs. another). We accepted methodological evaluations of any design, including primary studies and systematic reviews. Methodological evaluations were excluded if categories of funding source were not explicitly specified by the researchers and if we were uncertain that industry-funded studies were present in the evaluation.

Data extraction

Information was collected on the selection criteria, interventions and adverse effects, the number, design and funding sources of studies included in the methodological evaluation, and the outcomes used in assessing differences between studies.

Data analysis

We aimed to provide a narrative assessment of the available methodological evaluations, and did not plan on conducting a meta-analysis as the outcome measures were unlikely to be homogeneous. Where available, we recorded both the crude summary statistics and the adjusted estimates based on correction for confounding factors.

Assessment of methodological quality

The following criteria were used to assess the quality of the existing methodological evaluations:

  1. Role of confounding factors: Did the researchers select comparison groups (i.e. data from different funding sources) that were equally matched? For example, did the industry-funded studies have aims, designs and sample sizes similar to those of the non-industry-funded ones? If not, were there adjustments for potentially confounding factors that could affect the association between funding and the nature of the adverse effects data? We looked to see if any of the following confounding factors had been considered: study design, methodological quality, type of intervention and control intervention, sample size, disease area, and type of adverse effects.
  2. Missing data or misclassification: How often were the researchers able to establish reliably the source of funding for the reported data?
  3. Blinding: Were the researchers aware of the funding source when they were judging the nature of the adverse effects data?
  4. Validity and representativeness: Did the researchers select an adequate sample of studies (in terms of size, diversity of topics and range of adverse effects) that were reasonably reflective of current literature?


Results

Included studies

The searches retrieved 4609 records, of which six methodological evaluations met the inclusion criteria (Table 1). The flow chart of study selection is given in Figure 1. All six reports were concerned with drug interventions, but five of six evaluations were limited to the adverse effects of a single agent or single class of drugs. We found only one report that assessed funding source and reporting of safety data across a wide range of diseases and drugs. The number of studies included in the methodological evaluations ranged between 10 and 504, with only two reports including more than 100 studies.

Table 1

Characteristics of included studies and outcome measures
Figure 1

Flowchart of study selection

Half of the methodological evaluations focused on adverse effects data within randomized controlled trials, two included observational data, and one had a mixture of reports of original research, reviews and letters. Most methodological evaluations compared manufacturer funding with nonmanufacturer funding; however, one report looked for differences in adverse effects data in research funded by competing manufacturers.

Excluded studies

There were two methodological evaluations that potentially met the inclusion criteria but were subsequently excluded from this review [3–5]. One evaluation [3] contained duplicate data from an included article [6], whereas another evaluation (in two publications) was excluded after contacting the author, as the categories of funding source were unclear, but were unlikely to include industry-funded studies [4, 5].

Summary of methodological quality (Table 1)

Four of the methodological evaluations used some form of adjustment for potentially confounding factors, although the comprehensiveness of the factors adjusted for varied [6–10]. A major constraint in assessing an association between source of funding and the reporting of adverse effects is the lack of information on funding source. Only two methodological evaluations described or used appropriate methodology to assess the number of studies not reporting any funding source; these evaluations both included trial data only and reported that 28.6% and 17.3% of studies did not disclose any funding source [7, 8, 10]. Blinding was reported by only two evaluations, one of which tested the effect of blinding on a subsample of included studies and found that blinding did not affect the results [7, 8]. Overall, our assessment of quality and validity showed that Als-Nielsen’s evaluation was probably the most robust [7, 8].

Definitions of manufacturer-associated funding varied, as did the methods and outcome measures used to assess the association between funding and adverse effects reporting, making it difficult to pool the results of the studies in a meta-analysis.

Impact of funding source and selective reporting of specific types of adverse effects

Als-Nielsen et al. looked at a diverse range of randomized trials funded by for-profit organizations and noted that these trials had more complete reporting of adverse events (128/146, 88%), particularly with a higher frequency of adverse events being found in the experimental arm [7, 8]. In contrast, trials funded by nonprofit organizations often failed to mention adverse effects (35/60, 52.3%) and were less likely to describe higher frequencies of adverse events for the experimental arm [7, 8].

Confounding factors

Nieto et al.’s evaluation of inhaled corticosteroids reported that statistically significant results for adverse effects were found less often in pharmaceutical industry-funded studies [crude prevalence ratio 0.53, 95% confidence interval (CI) 0.44, 0.64], whereas the non-industry-funded studies were more likely to report significant harm [10]. However, in many ways Nieto et al. may have been comparing apples with oranges. Studies funded by industry were more likely to be multicentre, parallel-group, randomized controlled trials, with the stated primary objective of evaluating efficacy over a relatively short follow-up period. Conversely, non-industry-funded studies were more commonly observational in nature, with the primary objective of evaluating adverse effects such as long-term problems with growth or bone metabolism. After adjustment for these confounders, Nieto et al. found a nonsignificant prevalence ratio of 0.94 (95% CI 0.77, 1.15), indicating that the difference associated with funding may be mediated by other variables in the analysis [10].
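A crude prevalence ratio and its 95% CI of the kind quoted above can be computed from four counts, with the interval taken on the log scale. The sketch below is illustrative only: the counts are hypothetical (chosen so the crude ratio comes out at 0.53) and are not Nieto et al.'s actual data.

```python
import math

def prevalence_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Crude prevalence ratio with an approximate 95% CI on the log scale."""
    p1 = events_exposed / n_exposed
    p0 = events_unexposed / n_unexposed
    pr = p1 / p0
    # Standard error of log(PR) for two independent proportions
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lower = math.exp(math.log(pr) - 1.96 * se)
    upper = math.exp(math.log(pr) + 1.96 * se)
    return pr, (lower, upper)

# Hypothetical counts (not Nieto et al.'s data): 53 of 200 industry-funded
# studies vs 50 of 100 non-industry-funded studies reporting a statistically
# significant adverse effect.
pr, (lo, hi) = prevalence_ratio(53, 200, 50, 100)
print(f"PR = {pr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# → PR = 0.53, 95% CI 0.39 to 0.72
```

The CI is built on the log scale because the sampling distribution of a ratio is skewed; exponentiating the symmetric log-scale interval gives the familiar asymmetric interval around the ratio.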

Impact of funding source and magnitude of risk of harm

We looked for evidence that the risk of harm from the sponsor’s product may have been downplayed in industry-funded studies. A subgroup evaluation from Kemmeren et al.’s meta-analysis showed that the pooled data from industry-funded studies yielded a weaker association between third-generation oral contraceptives and venous thrombosis [6]. Similarly, Juni et al.’s meta-analysis showed that studies funded by Merck were associated with greater cardioprotective effects of naproxen, thus implying a lesser risk of harm from Merck’s product (rofecoxib) [9]. However, the weakness of this evidence is that these were post hoc subgroup analyses, involving only a small number of studies and subject to confounding.
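Pooled subgroup estimates of this kind are conventionally produced by inverse-variance weighting of the individual study results on the log scale. The sketch below shows a fixed-effect version with made-up numbers; it is not the model or data used by Kemmeren et al. or Juni et al., merely an illustration of the mechanics.

```python
import math

def pool_fixed_effect(odds_ratios, conf_ints):
    """Fixed-effect inverse-variance pooling of odds ratios.

    Each element of conf_ints is a (lower, upper) 95% interval; the
    standard error of log(OR) is back-calculated from the CI width.
    """
    log_ors, weights = [], []
    for or_value, (lo, hi) in zip(odds_ratios, conf_ints):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        log_ors.append(math.log(or_value))
        weights.append(1.0 / se ** 2)  # inverse-variance weight
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            (math.exp(pooled_log - 1.96 * pooled_se),
             math.exp(pooled_log + 1.96 * pooled_se)))

# Hypothetical subgroup of three studies (illustrative numbers only)
pooled, (lo, hi) = pool_fixed_effect([1.5, 2.0, 1.8],
                                     [(0.9, 2.5), (1.2, 3.3), (1.0, 3.2)])
print(f"pooled OR = {pooled:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# → pooled OR = 1.75, 95% CI 1.29 to 2.38
```

This also makes the fragility of post hoc subgroup comparisons concrete: with so few studies per funding subgroup, each pooled estimate carries a wide interval, so apparent differences between subgroups can easily arise by chance.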

Funding source and interpretation of adverse effects data

The included studies revealed some interesting potential associations between funding source and the subjective interpretation or conclusions regarding the adverse effects data. For example, Nieto et al. found that authors of pharmaceutical studies were more likely than authors of nonpharmaceutical studies to conclude that a drug was safe, even among studies that did find a statistically significant increase in adverse effects [10]. Similarly, Rochon et al. found that a manufacturer-associated drug was often judged to be less toxic, even though this interpretation was not always supported by tests of statistical significance [11]. Finally, Als-Nielsen et al. noted an association between a favourable recommendation for a product and the manufacturer’s sponsorship, irrespective of the actual magnitude of treatment benefit or safety results seen in the trial [7, 8].

The study by Stelfox et al. focused on the association between financial relationships with manufacturers and the conclusions of studies, and identified that authors of articles supporting the safety of calcium-channel antagonists were more likely to have such a relationship [12]. However, this study did not check the appropriateness of the authors’ conclusions against the actual adverse effects data of the studies. Moreover, authors of supportive articles were just as likely to have received funding from manufacturers of drugs that compete with calcium-channel blockers. This suggests that authors who are less critical of safety issues may be more likely to receive funding from any manufacturer.

We should also bear in mind the considerable potential for error and bias when trying to judge whether the data interpretation and conclusions of a study are excessively favourable. Stelfox et al. [12] and Als-Nielsen et al. [7, 8] attempted some degree of blinding of the reviewers, but none of the remaining four evaluations used blinding.


Discussion

Our systematic review has identified somewhat mixed evidence surrounding the postulated link between industry funding and more favourable reporting of adverse effects data. Before drawing any conclusions, we need to look into the methodological problems that surround such research. First, we should bear in mind that all the existing methodological evaluations are ‘observational’ in nature. Whereas some of them had predefined objectives [7, 8, 12], others were post hoc or subgroup analyses. We consider confounding to be a major problem in most of the methodological evaluations, where the baseline features (e.g. study design, patient population, primary objectives) of the industry-funded studies may have differed from those of the non-industry-funded studies. This is particularly apparent in Nieto et al.’s evaluation, where the observed differences became nonsignificant after adjustment for confounding factors [10].

We are also concerned about the possibility of reporting or publication bias with respect to methodological evaluations [13]. Journal editors may look more favourably upon articles that show biased reporting of adverse effects in industry-funded studies. Researchers who did not find any industry-related bias may have decided to omit such results from their manuscripts or chosen not to submit their articles for publication. Equally, researchers who found evidence of industry-funded bias may have avoided publicizing the results so as not to jeopardize any industry ties that they might have.

The generalizability of the data is also contentious. It would be unfair for us to draw broad conclusions about bias in all industry-funded studies when the data are limited to a few studies or to only a specific class of drugs. Moreover, reporting recommendations have changed over time, with tightening of regulatory requirements and the publication of the CONSORT statement on harms [14]. Existing methodological evaluations have not taken into account temporal changes, or the availability of complete adverse effects data from unpublished company trial reports, such as those in the GlaxoSmithKline Clinical Trials Registry. Unpublished trial data from many manufacturers are now unified under a single website, although it remains unclear whether the format of the data is of sufficient simplicity to allow easy interpretation by lay persons.

Failure to classify funding source accurately is the most prominent weakness in the methodological evaluations. In Nieto et al.’s work, 87 (17.3%) studies were lumped into the non-industry-funded group, despite there being no information on funding source [10]. Misclassification of such a large number of studies could have a major influence on the direction and magnitude of any link between funding and adverse effects data. A further methodological problem lies with the difficulty of verifying authorship and the reliability of financial declarations in published papers. Two recent papers have highlighted problems with ghost authorship and inaccurate financial disclosures (e.g. not disclosing a financial interest in one article, but declaring industry funding in another publication) [15, 16]. If studies categorized under non-industry funding have been misclassified (and were actually industry funded), this would dilute the strength of any argument that non-industry-funded studies provide less biased reports of adverse effects.

There is also a considerable amount of subjectivity involved in trying to determine whether the interpretation and conclusions of a study were biased towards the sponsor’s product. Reviewers who are critical of the pharmaceutical industry may have taken a harsher view in finding fault with industry-funded studies, whereas those supportive of the industry may have been less likely to judge that bias was present. Unfortunately, blinding and inter-rater reliability are key parameters that were seldom specified by the methodological researchers.

As this is a systematic review, the main limitations stem from the weaknesses in the original data. We have attempted to address publication bias by using a comprehensive search strategy that included handsearching, checking conference proceedings and discussion with experts in the field. Where possible, we have focused our review on results related to clearly defined funding sources, and looked for data where the researchers had made adjustments for confounding factors. We appreciate, however, that future methodological evaluations could be substantially improved if a wider range of drugs were studied, with more rigorous ascertainment of funding source, and closer matching of trial designs and quality in the comparator groups.

Bearing in mind these limitations, what conclusions might be drawn? First, there is no definite evidence that funding source leads to selective reporting of adverse effects outcomes that favour the sponsor’s product. Indeed, Als-Nielsen et al., whose study was probably the most methodologically robust, found that the opposite was true, with industry-funded studies providing more complete reporting and a higher rate of adverse effects for the experimental arm [7, 8]. Unlike nonprofit organization-funded studies, pharmaceutical companies hoping to submit a licensing application may be more focused on providing an accurate depiction of adverse events, as the data might be subjected to rigorous regulatory scrutiny. Indeed, the information submitted to the regulatory authorities may be less positive than that seen in the published articles [17].

We also do not have definite evidence that industry-funded studies present a lower magnitude of risk of harm from the sponsor’s product. However, pharmaceutical trials have been accused elsewhere in the literature of using design modifications that yield lower rates of adverse effects. Such methods may include using lower doses of the intervention and higher doses for the controls, monitoring for adverse effects using open-ended or nonspecific questions, and the choice of inappropriate comparators [10, 18–21].

Our systematic review does indicate, though, that funding source may impact on the nature of the authors’ interpretation and conclusions regarding the safety profile. However, the interpretation of adverse effects data relies not only on statistical significance, but also on subjective judgements of clinical relevance, preventability and absolute risk. It would be prudent for readers to check the adverse effects data themselves using the approach recommended in the Cochrane Handbook [22], rather than rely on the authors’ subjective interpretations.


Conclusions

Our systematic review indicates that industry funding may not be a major threat to bias in the reporting of the raw adverse effects data. However, we are concerned that industry funding may sway the interpretation and conclusions of the study.

Competing interests

S.G. is funded via a Health Sciences Research Fellowship from the Medical Research Council (MRC). Y.K.L. is European Editor of the British Journal of Clinical Pharmacology, but has no involvement in the review process for this manuscript.


Acknowledgements

We thank Jane Burch of CRD, York for her kind assistance in screening the titles and abstracts in the Endnote library.

Supporting information

The following supporting information is available for this article online:

This material is available as part of the online article


Appendix 1

Sources searched


References

1. Bero L, Oostvogel F, Bacchetti P, Lee K. Factors associated with findings of published trials of drug–drug comparisons: why some statins appear more efficacious than others. PLoS Med. 2007;4:e184.
2. Sismondo S. Pharmaceutical company funding and its consequences: a qualitative systematic review. Contemp Clin Trials. 2008;29:109–13.
3. Vandenbroucke JP, Helmerhorst FM, Rosendaal FR. Competing interests and controversy about third generation oral contraceptives. BMJ. 2000;320:381.
4. Chou R, Fu R, Carson S, Saha S, Helfand M. Empirical Evaluation of the Association between Methodological Shortcomings and Estimates of Adverse Events. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ); 2006. Report No.: Technical Review 13.
5. Chou R, Fu R, Carson S, Saha S, Helfand M. Methodological shortcomings predicted lower harm estimates in one of two sets of studies of clinical interventions. J Clin Epidemiol. 2007;60:18–28.
6. Kemmeren JM, Algra A, Grobbee DE. Third generation oral contraceptives and risk of venous thrombosis: meta-analysis. BMJ. 2001;323:119–20.
7. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and authors’ conclusions in randomized drug trials: a reflection of treatment benefit or adverse events? [abstract]. In: XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 October 26–31; Barcelona, Spain. p. 36.
8. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA. 2003;290:921–8.
9. Juni P, Nartey L, Reichenbach S, Sterchi R, Dieppe PA, Egger M. Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. Lancet. 2004;364:2021–9.
10. Nieto A, Mazon A, Pamies R, Linana JJ, Amparo Lanuza A, Jiménez FO, Medina-Hernandez A, Nieto FJ. Adverse effects of inhaled corticosteroids in funded and nonfunded studies. Arch Intern Med. 2007;167:2047–53.
11. Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, Minaker KL, Chalmers TC. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med. 1994;154:157–63.
12. Stelfox HT, Chua G, O’Rourke K, Detsky AS. Conflict of interest in the debate over calcium-channel antagonists. N Engl J Med. 1998;338:101–6.
13. Dubben HH, Beck-Bornholdt HP. Systematic review of publication bias in studies on publication bias. BMJ. 2005;331:433–4.
14. Ioannidis JP, Evans SJ, Gotzsche PC, O’Neill RT, Altman DG, Schulz K, Moher D, CONSORT Group. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141:781–8.
15. Ross JS, Hill KP, Egilman DS, Krumholz HM. Guest authorship and ghostwriting in publications related to rofecoxib: a case study of industry documents from rofecoxib litigation. JAMA. 2008;299:1800–12.
16. Weinfurt K, Seils D, Tzeng J, Lin L, Schulman K, Califf R. Consistency of financial interest disclosures in the biomedical literature: the case of coronary stents. PLoS ONE. 2008;3:e2128.
17. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358:252–60.
18. Djulbegovic B, Lacevic M, Cantor A, Fields KK, Bennett CL, Adams JR, Kuderer NM, Lyman GH. The uncertainty principle and industry sponsored research. Lancet. 2000;356:635–8.
19. Safer DJ. Design and reporting modifications in industry-sponsored comparative psychopharmacology trials. J Nerv Ment Dis. 2002;190:583–92.
20. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326:1167–70.
21. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–65.
22. Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions, Version 5.0.0 (updated February 2008). The Cochrane Collaboration; 2008.
