In trying to create a pure scientific experiment and thereby maximize the likelihood of drug approval, sponsors may narrow enrollment through restrictive eligibility criteria that exclude, for example, patients on other medications or with comorbidities. This practice may be reasonable in the early phases of a study to distill the effect of the drug, free of confounding influences; however, when these restrictive criteria are carried over to later-phase trials, they make it even more difficult to find a sufficient number of participants and consequently protract the recruiting process (Kramer, Smith, & Califf, 2012). To illustrate the enrollment implications of this increased stringency, a 2010 Tufts CSDD study reported that 48 percent of patients screened for clinical trials between 1990 and 1999 actually completed the trials, while only 23 percent of patients screened in the 2000–2009 period did so (Kramer, Smith, & Califf, 2012).
Aside from hampering recruitment, the restrictions on participant eligibility also raise scientific concerns, as the new drug might not be adequately studied in relevant patient populations, such as people with common comorbidities. For example, the cardiovascular risks associated with the arthritis drug rofecoxib were established as the sponsor pursued a possible new indication for the drug, not in the course of a systematic study of arthritis patients with concomitant cardiovascular disease (Kramer, Smith, & Califf, 2012). This issue is discussed further in Section 4.7.
Complex Clinical Trial Protocols
Clinical trial protocols, which outline the trial methodology, are becoming increasingly complex, involving more assessments, exploratory endpoints, biomarkers, biopsies, etc., and increasing the administrative burden of trials. A study of over 10,000 industry-sponsored clinical trials found that the quantity and frequency of trial-related procedures (e.g., laboratory tests, patient questionnaires) per protocol increased by 6.5 percent and 8.7 percent per year, respectively, between 1999 and 2005 (Getz, Wenger, Campo, Seguine, & Kaitin, 2008). A separate study of 57 Phase 1–Phase 3, industry-created research protocols found that the average total number of protocol-required procedures increased from 90 for the period between 1999 and 2002 to 150 for the period between 2003 and 2005; the average number of inclusion criteria increased from 10 in 1999 to 26 in 2005, and the average case report form expanded from 55 pages in 1999–2002 to 180 in 2003–2006 (Kramer, Smith, & Califf, 2012).
Case Report Forms (CRFs)
A case report form (CRF) is a tool used by investigators to collect data for each participant throughout the trial. More complex CRFs that include many data points can significantly increase trial monitoring and other costs (e.g., storage of samples) (English, Lebovitz, & Giffin, 2010), perhaps unnecessarily if the data being collected are not relevant to the specific study. According to the experts and industry representatives interviewed, sponsors almost always capture more data than they eventually use in their FDA submissions, and sometimes these extra data even confound study results. Though the percentage of data collected that ultimately goes unused varies by trial, interviewees estimated that it ranges anywhere from 10 to 30 percent, and a recent study by Kenneth Getz and others at Tufts CSDD found that 22.3 percent of all clinical trial procedures are considered to be non-core (17.7 percent of Phase 2 procedures and 24.7 percent of Phase 3 procedures). According to that study, which used clinical data from Medidata, 18 percent—or approximately $1.1 million—of a typical study budget is spent on procedures for supplementary secondary, tertiary, and exploratory endpoints, while another $1.3 million (22 percent) is spent on procedures supporting regulatory compliance (Tufts CSDD, 2012). These findings confirm anecdotal evidence cited in an earlier article by Kenneth Getz, which reported that sponsors estimate that between 15 and 30 percent of all clinical data collected are not used in NDA submissions, costing an additional $20 to $35 million in direct drug development costs for the average drug (Getz K. A., 2010b).
The reasons given by interviewees for collecting this extra data were many and varied. Researchers tend to be overly inclusive, as they are scientifically minded individuals who want to be able to answer the main question and to test other theories as well. Some of the extra data are needed when the clinical value of certain endpoints is uncertain. Moreover, companies tend to collect what they have always collected in the past and simply add new items as needed, without reconsidering whether the old measurements are still necessary (Getz & Campo, 2013). FDA reviewers, for their part, might have grown accustomed to seeing the “usual” data points, such as hematology and other general health measures, even if they are nonessential to the study. Some data are collected in part to satisfy payers and providers (e.g., quality of life measurements and other patient-centric measurements). Finally, companies may solicit input from “key opinion leaders” (KOLs) on protocol design, and, while KOLs are practitioners and experts in their disease areas, they may be less well versed in study design and the specific data points that are needed for FDA approval.
Some of the individuals interviewed expressed the opinion that collection of extra data is unavoidable due to the nature of the process; clinical trials represent research under uncertain conditions, and at the time when they are making data collection decisions, study designers do not know for sure what they will need. Some also argued that the data being collected are not actually superfluous because there is always a need to have the data on file—not because FDA mandates it, but because the data are supportive and reasonable to collect.
Other respondents felt data collection—or at least data collection costs—could be reined in through various means. For example, some of the data can be collected at lower-cost facilities, such as local clinics and pharmacies, reducing the need for infrastructure and overhead. Companies can also be more practical in their planning and streamline their studies by minimizing the number of research questions they seek to answer in a single trial. One respondent said the ideal scenario would be for sponsors to conduct large, simple trials that make use of information that already exists in patients’ electronic health records (EHR) rather than collecting lots of redundant data themselves. In fact, FDA recently published guidance on best practices for conducting and reporting on pharmacoepidemiologic safety studies that use electronic healthcare data sets (including administrative claims data and electronic medical record (EMR) data), acknowledging the potential for new technologies and statistical methods to allow for easier study of safety issues, particularly in situations where observational studies/clinical trials are infeasible (U.S. Food and Drug Administration, 2011a). Some respondents also called for more flexibility on the part of FDA; for example, drugs can be approved without mortality data with the requirement that post-marketing data be collected to demonstrate safety. The drug can later be withdrawn from the market if there are concerns.
Still, there are hurdles to implementing some of these ideas. While the use of administrative databases sounds promising, in reality, researchers always fear the “what-ifs” and collect more data “just in case.” Data sufficiency concerns can be crippling to the development timeline, especially if another clinical trial is required, and researchers are over-cautious as a result. Furthermore, with regard to post-market data collection, several respondents noted that FDA is justifiably worried about the problematic history of pharmaceutical company promises about post-marketing clinical trials, as some companies have drawn out the process of designing post-market clinical trials for many years. Lastly, efforts to simplify data collection are presently hindered by the lack of standardized electronic CRFs that can be used by all researchers across the industry (still, progress is being made; efforts to develop a library of standardized oncology CRFs are already underway) (English, Lebovitz, & Giffin, 2010).
Clinical trial protocols often need to be amended after they have been finalized and approved, a process which can be costly and time-consuming, yet often preventable. Using data provided by 17 midsized and large pharmaceutical and biotechnology companies, a recent study conducted by Tufts CSDD analyzed the types, frequency, causes, and costs of nearly 3,600 protocol amendments from 3,410 protocols. The study found that nearly 60 percent of all trial protocols require amendments, a third of which are avoidable through better initial planning and participant recruitment.13 Completed protocols across all clinical trials were found to incur 2.3 amendments on average, with each amendment requiring an average of 6.9 changes to the protocol and causing substantial unanticipated costs and delays. One-third of all amendments are related to protocol description and patient eligibility criteria; other change categories include dosage/administration, statistical methods, and trial objectives. Across all phases, 43 percent of amendments occur before any patients are enrolled, with amendments more likely to occur in Phase 1. The median time to resolve a protocol problem is 65 days (65 days multiplied by 2.3 amendments equals four to five months of lost time) (Getz, et al., 2011; Tufts CSDD, 2011).
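The lost-time figure above can be reproduced with a quick back-of-the-envelope calculation from the study’s reported averages; the 30.4 days-per-month conversion factor is an assumption introduced here, not a figure from the study:

```python
# Back-of-the-envelope check on cumulative amendment delay,
# using the Tufts CSDD averages cited above.
amendments_per_protocol = 2.3   # average amendments per completed protocol
days_per_resolution = 65        # median days to resolve a protocol problem

total_days = amendments_per_protocol * days_per_resolution
total_months = total_days / 30.4  # assumed average days per month

print(round(total_days, 1))    # 149.5
print(round(total_months, 1))  # 4.9 -- consistent with "four to five months"
```

The calculation confirms the parenthetical estimate in the text: roughly 150 days, or four to five months, of cumulative lost time per completed protocol.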
According to the CSDD study, it cost an average of $453,932 to implement each individual protocol amendment. This total comprises the following direct costs associated with implementation of an amendment: increased study grants/site fees ($265,281); change orders to existing contracts ($109,523); new contracts with providers ($69,444); additional drug supply ($5,300); and IRB fees ($4,384). It does not include the cost of internal time dedicated to implementing each amendment, costs or fees associated with protocol language translation, or costs associated with resubmission to the local authority, nor were any indirect costs (e.g., of development or commercialization delays) estimated. It is also important to note that cost data were only available for 20 of the amendments in the sample; therefore, these cost estimates are highly prone to bias and “should be viewed with caution” (Getz, et al., 2011).
The most common causes of amendments were found to be availability of new safety information (19.5 percent), requests from regulatory agencies to amend the study (18.6 percent), changes in the study strategy (18.4 percent), protocol design flaws (11.3 percent), and difficulties recruiting study volunteers (9 percent). Less common causes include errors/inconsistencies in the protocol (8.7 percent), availability of new data (7.1 percent), investigator/site feedback (4.5 percent), changes in the standard of care (1.9 percent), and manufacturing changes (1 percent). In general, protocols with longer treatment durations had a higher incidence of amendments. Among therapeutic areas, cardiovascular and gastrointestinal protocols had the highest incidence of amendments and changes per amendment (Getz, et al., 2011). One of the study’s authors, Kenneth Getz, believes protocol amendments will continue to be prevalent, as the mean number of amendments was found to be positively and significantly correlated with the increasing number of procedures per protocol, study length, and number of investigative sites involved in each clinical trial (Tufts CSDD, 2011).
When asked about protocol amendments, many representatives from smaller drug companies indicated that they regarded them as “just a cost of doing business” or a “necessary evil” that “comes with the territory.” Large companies, by contrast, seemed to have done more internal analysis of their own protocol amendment costs and set goals to lessen their frequency. One large company representative confirmed that the Tufts study estimate of $453,932 per amendment (on average) is accurate or possibly even conservative because it does not include all associated costs. Analysis of that company’s own protocol amendments found that roughly half could be categorized as “avoidable” and the other half as “unavoidable.” Another large company representative estimated the cost per amendment to be $500,000 to $1 million (including implementation costs), depending on what is involved, as some changes require costly new training or equipment, or add a whole new arm to the study and are therefore more expensive.
Failure to Integrate Study Design with Clinical Practice Flow
Industry sponsors generally do not involve site investigators in the protocol design process. As a result, the required procedures outlined in the protocol might not be easy to integrate smoothly into clinical practice at the sites (Kramer, Smith, & Califf, 2012). A CRO representative interviewed provided examples: for instance, a protocol could require that magnetic resonance imaging (MRI) and a series of neurocognitive tests be performed within three days of each other at a site that does not have sufficient access to an MRI machine; or, a protocol might require a series of highly specialized labs that the site cannot perform in house. Better planning and conferring with site investigators during the protocol design phase can help trials avoid foreseeable logistical snags such as these.
13 This study was based on data collected from seventeen midsized and large pharmaceutical and biotechnology companies: Amgen, Astellas, AstraZeneca, Biogen Idec, Cephalon, Forest, Genentech, Genzyme, Lilly, Merck, Millennium, Otsuka, Pfizer, Roche, Schering-Plough, Sepracor, and Takeda. Data from 3,410 protocols were collected across various therapeutic areas, yielding information on 3,596 amendments containing 19,345 total protocol modifications. The study defines amendments as “any change to a protocol requiring internal approval followed by approval from the IRB, ERB, or regulatory authority. Only implemented amendments—that is, amendments approved both internally and by the ethics committee—were counted and analyzed in this study.”