Concerns have once again been raised about the manipulation of clinical trials and reviews and the misrepresentation of data by pharmaceutical companies and researchers. A recent JAMA editorial suggests that either these practices are becoming more common or there has been more exposure of them. A separate report in the Lancet raises concerns that, on the basis of unpublished data, the benefit of trastuzumab given sequentially may be substantially less than is currently thought.
An initially cautious approach to adopting new technologies on the basis of evidence from a single source remains appropriate, especially in the light of these descriptions of the manipulation of basic scientific practices and data. Such practices must cease. Using trusted, public-sector summaries that collate evidence from several sources, together with an understanding of absolute as well as relative benefits and the ability to translate those numbers into terms patients can understand, offers the best protection against being misled and ensures that informed decisions are based on the best available evidence.
What has come to light recently?
Two recent reports have suggested that Merck persuaded academics to put their names to papers that had been ghost-written by people paid by the company, and may have misrepresented the risk-benefit profile of rofecoxib by selectively reporting data, including to the FDA.
Clinical trial manuscripts related to rofecoxib were frequently written by Merck employees, and review articles on rofecoxib were often prepared by authors at medical writing companies. The academic “authors” had little to do with the studies or reviews and did not always disclose financial support from Merck, despite many of them accepting honoraria of up to several thousand dollars for the use of their names on the papers eventually published. Although Merck employees were credited as co-authors on most papers, the employees of the medical writing companies were not. These claims are based on documents from the lawsuit concerning rofecoxib and are discussed in a case study review published in JAMA.
A second case study review published in the same issue of JAMA found that Merck may have misrepresented the risk-benefit profile of rofecoxib. It examined data, obtained during the lawsuit, from clinical trials of rofecoxib in Alzheimer’s disease and cognitive impairment. In April 2001, Merck performed an intention-to-treat analysis of pooled data from two trials in patients with dementia. This found a significant three-fold increase in total mortality with rofecoxib compared with placebo (hazard ratio 2.99, 95% CI 1.55 to 5.77). However, when Merck submitted a Safety Update Report to the US Food and Drug Administration (FDA) in July 2001, it appears that only an on-treatment analysis of total mortality was included, which underestimated the risk. Even though the FDA then raised questions about the safety of rofecoxib, Merck assured it that the difference in mortality between rofecoxib and placebo was probably due to chance. The data from the intention-to-treat analysis were not submitted to the FDA until 2003. In addition, when the FDA questioned Merck about safety monitoring, Merck revealed that there was no data and safety monitoring board for one of the studies. In spite of these concerns, the study continued for a further two years.
Publication bias is also being increasingly reported. This occurs when positive studies are published but those showing non-significant or unfavourable results are not, or when only selected results from clinical trials are published. A Lancet comment recently highlighted concerns over adjuvant trastuzumab (Herceptin®) for early breast cancer.
A randomised controlled trial by the North Central Cancer Treatment Group compared three groups: trastuzumab given sequentially after chemotherapy (as licensed in the UK), trastuzumab given concurrently with chemotherapy, and a control group. However, the results have not been published in full. Although the results from the 12-month concurrent and control groups were published almost three years ago, and interim data for all three groups were presented in brief at a conference, the full data for the women given sequential trastuzumab therapy have not yet been released. If the interim data for sequential therapy are incorporated with data from other sequential trastuzumab studies, it appears that trastuzumab given sequentially is substantially less effective than is currently thought. Full, updated outcome data are necessary before a proper meta-analysis can be conducted.
Being sceptical about new clinical trial data from a single source would seem to be sensible, especially if those data are either remarkably positive or inconsistent with the current body of evidence. However, concern about scientific processes being manipulated introduces a risk of disillusionment and, as the JAMA editorial says, loss of public trust.
We should avoid being cynical and dismissing evidence-based medicine as being fraught with insurmountable problems and too complicated. The alternative to evidence is anecdote, “received expert opinion” and reliance on personal experience. These are shifting sands on which to build clinical policies – one might reflect on whether, for example, beta-blocker treatment for heart failure would have been accepted without good quality clinical trial data, much of it funded by the pharmaceutical industry.
Concerns over the validity of individual study data reinforce the view that practice should rarely be changed on the basis of one study. Results need to be reinforced by further trials conducted by different researchers in differing populations, supported by high-quality post-marketing surveillance, before we can be confident that we have a reasonably accurate picture of the risks and benefits of a new technology. Using trusted, public-sector summaries that collate evidence from several sources, together with an understanding of absolute as well as relative benefits and the ability to translate those numbers into terms patients can understand, offers the best protection against being misled and ensures that informed decisions are taken on the best available evidence.
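The distinction between relative and absolute benefit can be made concrete with a simple worked example. The figures below are invented purely for illustration and are not drawn from any trial discussed here:

```latex
% Hypothetical figures: control event rate (CER) 4%, event rate on treatment (EER) 3%
\[
\text{Relative risk reduction} = \frac{\text{CER} - \text{EER}}{\text{CER}}
  = \frac{0.04 - 0.03}{0.04} = 25\%
\]
\[
\text{Absolute risk reduction} = \text{CER} - \text{EER} = 0.04 - 0.03 = 1\%
\qquad
\text{Number needed to treat} = \frac{1}{\text{ARR}} = \frac{1}{0.01} = 100
\]
```

A “25% relative risk reduction” sounds impressive, but the same result expressed absolutely, that around 100 patients would need to be treated for one to benefit, is the form that patients can weigh meaningfully.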
We hope that raised awareness of the ways in which studies can be misrepresented and manipulated will lead to improvements in future publications.