Ms. Nancy L. Buc, Ms. Deborah Livornese
Buc & Beardsley
919 Eighteenth Street, N.W.
Washington, D.C. 20006-5503
Dear Ms. Buc and Ms. Livornese:
This letter responds to your complaint and request for correction under the Information Quality Act (IQA),1 the first dated May 16, 2007 (Request for Correction) and the second dated June 26, 2007 (Request for Correction Amendment). Your Request for Correction, submitted on behalf of your client, Genta Incorporated (Genta or the sponsor), concerns the Food and Drug Administration’s (FDA’s) presentations and statements made regarding clinical data submitted in support of new drug application (NDA) 21-649, Genasense (oblimersen), for advanced melanoma.
Your Request for Correction alleges that presentations and statements made by the Office of Oncology Drug Products (the Office) and the Division of Drug Oncology Products (the Division) as part of the Oncologic Drugs Advisory Committee (ODAC) Meeting on May 3, 2004, “applied a flawed statistical model to Genta’s data and based on that model, stated the erroneous conclusion that Genta’s data did not demonstrate that Genasense significantly improved progression-free survival.” (Request for Correction at 1). Your Request for Correction claims that these presentations and statements are “not accurate, reliable, or unbiased, do not meet commonly accepted scientific and statistical standards, and did not apply sound analytical techniques.” (Request for Correction at 4-5). You further allege that both the FDA statistical model, also referred to in this response as a simulation, and statements based on the simulation regarding Genta’s data on Genasense “[l]ack the objectivity, accuracy, reliability, and lack of bias that the [Information] Quality Act requires.” (Request for Correction at 5). The information to which you refer (the Disseminated Information) includes:
- selected pages from the Division’s briefing material for the ODAC Meeting (Briefing Material);2
- questions to the ODAC prepared for the ODAC Meeting (Questions);3
- selected portions of the ODAC Meeting transcript (Transcript);4
- specific slides that were part of the Division’s slide presentation at the ODAC Meeting (Slide Presentation);5
- an FDA errata sheet dated April 26, 2004 (Errata Sheet); 6
- possibly other documents disseminated by FDA outside the Agency, such as those to foreign regulatory authorities; and
- oral statements addressing the substance of the above information disseminated by members of the Office and the Division to, among others, members of the European Medicines Agency (EMEA) and the Australian Therapeutic Goods Administration (TGA).
In your Request for Correction, you request that FDA take the following corrective action:
- stop disseminating the materials, including removing them from FDA’s Web site;
- post on FDA’s Web site a notice stating that the previous statistical analysis as reported in the Disseminated Information is flawed and inaccurate, should not have been applied to Genta’s data, and reached the erroneous conclusion that Genta’s data do not demonstrate progression-free survival; and
- issue corrective communications of any other information disseminated by FDA outside the Agency, including to the EMEA, the TGA, and other foreign regulatory bodies. (Request for Correction at 3-4).
For the reasons described below, FDA does not agree that the referenced Disseminated Information violates the IQA, the Office of Management and Budget’s (OMB’s) Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies (OMB Guidelines),7 or FDA’s own implementing guidelines (FDA Guidelines),8 which are part of the Department of Health and Human Services Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated to the Public (HHS Guidelines).9 The Disseminated Information is accurate, reliable, and unbiased, and complies with HHS IQA quality standards. Furthermore, the Web postings related to advisory committee meetings are a historical record; they are designed to reflect the actual material and events related to the advisory committee. They are not deleted once they are posted, although there is a process for identifying errors after information is posted.10
Some background on issues related to data quality in cancer clinical trials, the Genta study, and the FDA simulation precedes our conclusion that the FDA simulation meets IQA requirements.
A. Data Issues in Clinical Trials for Cancer Drugs
Before FDA approves a drug, the law requires that sponsors demonstrate that the drug is both safe and effective, as shown by adequate and well-controlled clinical investigations. 11
FDA determines a drug’s safety and effectiveness primarily through its analysis of submitted data; therefore, the integrity of the submitted data is essential. FDA’s analyses rely on “commonly accepted scientific... and statistical standards” and “sound analytical techniques,” as stated in the FDA Guidelines.12 To assist sponsors in meeting these standards, and to ensure the integrity of submitted data, FDA’s Center for Drug Evaluation and Research (CDER) has issued numerous guidances for industry, including Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics (Endpoints Guidance), which describes accepted principles applicable to the appropriate design and analysis of clinical studies.13 As discussed in greater detail below, this guidance highlights several data quality concerns relevant to the Genta study.
Generally, endpoints for later phase efficacy studies commonly evaluate whether a drug provides a clinical benefit; for example, a study with an endpoint of improved survival would mean that the study is designed to show that the drug improves survival.14 A primary endpoint is one that reflects clinically relevant effects and is typically selected based on the principal objective of the study.15 Secondary endpoints assess other drug effects that may or may not be related to the primary endpoint.16 They are so named precisely because they are regarded as being of secondary importance. In designating progression-free survival as a secondary endpoint, Genta itself determined the subordinate level of importance to ascribe to this measurement.
The Endpoints Guidance recognizes that progression-free survival may be considered an acceptable primary or secondary endpoint (i.e., an acceptable clinical benefit) in cancer trials, provided that the drug does not cause negative side effects that outweigh the clinical benefit.17 The Endpoints Guidance identifies the following data quality concerns in trials for cancer drugs and recommends that sponsors demonstrate that these methodological concerns do not compromise the quality of their data: (1) lack of a confirming study (i.e., having only one trial),18 (2) the study being open-label (i.e., an unblinded trial),19 (3) missing data,20 (4) analysis of secondary endpoints for statistical significance when the trial has failed on the primary endpoint,21 and (5) asymmetry in progression assessment times between the control and experimental groups, the last two of which are especially relevant in the Genta study.22
B. The Genta Study
Genta sought approval of Genasense in combination with dacarbazine (DTIC) for treatment of patients with advanced melanoma who have not received prior chemotherapy.23 In support of its new drug application, Genta’s study was designed to show that the experimental arm would improve median survival compared to the control group; i.e., survival was the primary endpoint.24 Genta does not disagree with FDA’s conclusion that the study did not show a statistically significant increase in survival in the experimental arm over the control arm. (Request for Correction at 5).
In addition to studying improved survival, the study examined whether the experimental arm showed a statistically significant increase over the control arm in progression-free survival (PFS) (a secondary endpoint). Progression-free survival is generally defined as the time from randomization until objective tumor progression or death. 25
The schedule for assessing progression in the experimental group was significantly different from that for patients in the control group (asymmetry in assessment); in other words, the median number of days from the randomization date to the assessment (e.g., CAT scan) date for patients in the experimental group differed from the median number of days between the randomization and assessment dates in the control group.26 Genta’s initial analysis showed an increase in progression-free survival from a median of 49 days in the control arm to 74 days in the experimental arm, a difference of 25 days, which Genta termed “highly significant” and “statistical[ly] significant.” (Request for Correction, Exhibit A at 1; Request for Correction Amendment, Exhibit A at 9). FDA asked the sponsor to perform a different statistical analysis of PFS to account more accurately for missing data (e.g., CAT scans on comparable dates for both the control and experimental groups). Genta performed this analysis, which showed a PFS median of 48 days in the control group versus 61 days for the experimental group, a difference of 13 days, which Genta also reported as statistically significant.27 The study showed that patients in the experimental group experienced greater toxic effects than did those in the control group.28
The Genta study raises each of the previously mentioned data quality concerns identified in the Endpoints Guidance: it was a single, unblinded trial with missing data that failed on its primary endpoint of survival and had different assessment times for the control and experimental groups. (Request for Correction, Exhibit A at 1; Request for Correction Amendment, Exhibit A at 9-10). 29
The problems that result when a study that fails on its primary endpoint is analyzed for possible effect on a secondary endpoint are well recognized.30 Except in rare situations, secondary endpoint analysis cannot be validly interpreted if the primary endpoint does not demonstrate statistical significance, because these multiple endpoints increase the overall study type I error rate (i.e., the chance of incorrectly concluding that the treatment was effective).31 Analyses of secondary endpoints are therefore considered exploratory, and not capable of supporting a confirmatory conclusion.32 When the primary endpoint fails, a secondary endpoint that was not prospectively allocated any type I error rate by definition cannot truly represent a statistically significant finding. In addition, as noted, the Genta study was further weakened by missing data, asymmetric observations, and lack of blinding.
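The multiplicity point above can be illustrated with a short calculation. This is a hedged sketch of our own (the independence assumption is ours; correlated endpoints inflate the rate less): when k independent endpoints are each tested at level alpha with no type I error rate prospectively allocated across them, the chance of at least one false positive grows as 1 − (1 − alpha)^k.

```python
# Illustrative sketch (not from the letter): family-wise type I error
# rate when k independent endpoints are each tested at level alpha
# without any prospective allocation of the error rate among them.
alpha = 0.05

def familywise_error(k: int, alpha: float = 0.05) -> float:
    # Probability of at least one false positive among k independent tests.
    return 1 - (1 - alpha) ** k

for k in (1, 2, 3, 5):
    print(k, round(familywise_error(k, alpha), 4))
```

Even a second endpoint tested at the conventional 5% level pushes the overall false positive chance from 5% to about 9.75% under these assumptions, which is why unallocated secondary analyses are treated as exploratory.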
C. FDA’s Simulation
Despite the well-known problems associated with reliance on a secondary endpoint after failure of the primary endpoint, FDA carefully examined the data Genta submitted on Genasense for the secondary endpoint PFS. In doing so, FDA recognized that certain aspects of the ascertainment of PFS, the secondary endpoint, were critical to a valid statistical comparison of PFS between the two treatment groups. In its review, FDA identified an asymmetry in the assessment times between the two arms in the study and attempted to evaluate the impact such an asymmetry could have in accounting for the apparent observed PFS difference. Simulations are standard statistical strategies to explore the impact of this type of difference on clinical trial outcomes. FDA performed a simulation with different scenarios to explore whether the apparent difference in PFS between the experimental and control group could be explained by the asymmetry in assessments and not by a true effect of Genasense on PFS.
To investigate how the different assessment schedules for the control and experimental groups could have influenced the comparison of progression-free survival between the treatment groups, FDA’s simulation was performed under the assumption that the distribution of progression-free survival was equal between the two treatment groups. With this assumption, several scenarios were considered that simulated how different assessment schedules between the treatment groups could result in differences in PFS even when there were no true differences. In the simulation for Scenario 1, patients in the control group were assessed every 6 weeks for up to 6 assessments, while in the simulation for Scenario 2, patients in the control group were assessed every 3 weeks for up to 12 assessments. In each of these two scenarios, two different schedules for patients in the experimental group were applied: (a) patients in the experimental group were assessed 2 days later than those in the control group for each assessment; and (b) the assessment interval for patients in the experimental group was 2 days longer than that for the control group.33 The scenarios in the simulation were intended neither to cover all situations nor to reproduce the actual asymmetry observed in the study, but rather to show that plausible differences in ascertainment schedules could explain apparent observed PFS differences.
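A setup in the spirit of Scenario 1(a) can be sketched in a short simulation. This is an illustrative reconstruction, not FDA’s actual code: the exponential distribution of true progression times, the 50-day median, the sample size, the omission of censoring and of the 6-assessment cap, and the use of a one-sided rank-sum test in place of the log-rank test (reasonable only because this toy setup has no censoring) are all assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def detected_pfs(true_days, interval, offset):
    # Progression is only observed at scheduled visits:
    # offset + interval, offset + 2*interval, ...
    k = np.maximum(np.ceil((true_days - offset) / interval), 1)
    return offset + k * interval

def rank_z_greater(x, y):
    # Normal-approximation rank-sum z: is x stochastically larger than y?
    # Tiny random jitter breaks ties, keeping the test honest under the null.
    x = x + rng.uniform(0, 1e-6, size=len(x))
    y = y + rng.uniform(0, 1e-6, size=len(y))
    nx, ny = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0
    u = ranks[:nx].sum() - nx * (nx + 1) / 2
    mean, var = nx * ny / 2, nx * ny * (nx + ny + 1) / 12
    return (u - mean) / np.sqrt(var)

def false_positive_rate(offset_exp, n=200, reps=200, median_days=50):
    scale = median_days / np.log(2)  # exponential scale giving that median
    rejections = 0
    for _ in range(reps):
        # Same true PFS distribution in BOTH arms, so any "significant"
        # difference in observed PFS is by construction a false positive.
        ctrl = detected_pfs(rng.exponential(scale, n), 42, 0.0)
        exp_ = detected_pfs(rng.exponential(scale, n), 42, offset_exp)
        if rank_z_greater(exp_, ctrl) > 1.96:  # one-sided 2.5% level
            rejections += 1
    return rejections / reps

fp_symmetric = false_positive_rate(offset_exp=0.0)
fp_shifted = false_positive_rate(offset_exp=2.0)  # experimental arm seen 2 days later
print(f"symmetric schedule: {fp_symmetric:.3f}, 2-day offset: {fp_shifted:.3f}")
```

Under this sketch, assessing the experimental arm a mere 2 days later, with identical true progression in both arms, drives the one-sided rejection rate far above the nominal 2.5%, while the symmetric schedule stays near it.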
II. The Disseminated Information Meets IQA Standards
Your Request for Correction alleges that the FDA simulation violates the IQA because it is based on assumptions that do not reflect the actual study and it “behaves bizarrely.” (Request for Correction at 5). We disagree, and conclude that because the Disseminated Information meets the IQA requirements, no correction is needed.
As noted above, the times for assessment of disease progression in the experimental arm were different from those for the control arm. The FDA reviewer articulated the valid concern raised in the scientific literature that a “difference in assessment schedule between two treatment groups could very likely lead to a false positive conclusion when in fact there is no difference in progression-free survival between the two treatment groups” (emphasis added).34 The FDA reviewer illustrated this concern with a simulation to evaluate the impact of different assessment schedules on the false positive rate, based on certain assumptions that were clearly set forth as summarized above. The simulation was provided simply to illustrate the possibility of false inference due to a defect in study design or execution.35
Your Request for Correction recognizes “that it is appropriate to investigate whether assessment bias exists in the determination of PFS in clinical trials” (Request for Correction at 5), but “objects strenuously to the lack of objectivity, accuracy, and reliability inherent in the statistical simulation FDA used to argue that ascertainment bias rather than actual differences in the rates of progression were responsible for Genta’s results.” (Request for Correction at 5). Contrary to these allegations, FDA did not “argue that ascertainment bias rather than actual differences in the rates of progression were responsible for Genta’s results” (emphasis added), but instead stated only that the simulation demonstrated that the increase could be due to the assessment time asymmetry.36
The HHS Guidelines describe several factors that support data objectivity. The HHS Guidelines recognize that data that are transparent in their analytic assumptions37 and reproducible by others38 improve objectivity and accuracy, since they allow others to evaluate the analysis. The FDA simulation clearly and transparently set forth its assumptions, and the findings were reproduced by Genta and another researcher.39
Genta has not identified any published peer-reviewed documents that conclude that the Disseminated Information was flawed. Indeed, some peer-reviewed articles and public presentations support FDA’s simulation.40 The Carroll article attached as Exhibit B to the Request for Correction Amendment, rather than showing “that the dissemination of these flawed materials continues as a result of their continuing presence on FDA’s Web site without correction” (Request for Correction Amendment at 20), shows that the Disseminated Information withstands objective quality analysis.
III. Genta’s Criticisms of the Simulation Do Not Require Corrective Action under the IQA
You assert that the Disseminated Information is flawed because “it assumed all assessments in one group occurred on a given day in the treatment arm and all assessments in the control arm took place 2 days later than that,” and that “[t]his assumption… does not reflect what actually happened in the study, or would be likely to happen in real life.” (Request for Correction at 5). FDA does not dispute that the simulation assumptions do not reflect the actual study. However, differences between assumptions used in the simulation and facts of the study do not constitute a violation of the standards in the IQA or the FDA Guidelines. The purpose of the simulation, as described above, was to better understand the impact of the asymmetry in the assessment times between the control and experimental groups as observed in the data Genta submitted. The FDA simulation was designed to identify whether the study design could lead to false inferences, based on certain simple assumptions with hypothetical data different from that which existed in the actual study; these assumptions were clearly set forth to illustrate the possibility of false inference due to a study design defect.
The fact that Genta has created simulations that do not show the same level of false inferences does not rebut the showing by the FDA simulations that the study design could have led to a false inference. Genta has not shown that asymmetric assessment times will not cause inflation of the false inference rate. It is also important to note that the scenario Genta offered also indicated inflation of the false positive rate. (Request for Correction Amendment, Exhibit A at 5). In addition, an independent simulation (with different assumptions from those in the FDA simulation) demonstrated that if there is assessment asymmetry between the two research groups, the false positive probability can become inflated.41 Thus, the conclusion that the PFS increase could be due to asymmetry in assessments (rather than the drug) is not dependent on unrealistic assumptions, but is shown in other simulations with other assumptions.
Second, Genta asserts that the FDA simulation must be flawed because, Genta claims, it “behaves bizarrely.” Genta claims that the fact that greater differences in assessment times between the two trial arms lead to a smaller percentage of false positives than do smaller differences in assessment times constitutes “bizarre behavior.” Though the differences in assessment times in FDA’s simulation do not correlate directly with the percentage of false positives, this does not diminish the usefulness of the simulation in demonstrating that the study design (e.g., systematic differences in assessment times) can result in false positive results.
The FDA simulation uses the log-rank42 test to analyze PFS. Genta does not dispute that the log-rank test is appropriate in analyzing PFS and indeed uses the test itself. (Request for Correction Amendment, Exhibit A at 5).43 Furthermore, in general, the log-rank test assumes that there is random variation in the event times among patients. Because the FDA simulation imposed fixed event times occurring in only one or the other treatment arm, the log-rank statistic produced results that did not track the assessment times (e.g., increased differences between assessment times did not always lead to increased differences in results). The crucial result of the simulation was that it produced more false positives than is normally accepted. Other simulations using different methods, specifically simulations that do not impose fixed times (unlike the FDA fixed-time simulation) but use varying times, also showed unacceptably high false positive rates.44
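For readers unfamiliar with the test discussed above, a minimal two-group log-rank statistic can be written as follows. This is a simplified sketch of our own (it assumes every event time is fully observed, with no censoring, which the general test handles): at each event time it accumulates observed minus expected events in one group, given the numbers still at risk.

```python
import numpy as np

def logrank_chi2(times_a, times_b):
    # Minimal two-group log-rank chi-square statistic for fully observed
    # event times (no censoring; a simplification of the general test).
    times_a = np.asarray(times_a, dtype=float)
    times_b = np.asarray(times_b, dtype=float)
    o_minus_e = 0.0  # observed minus expected events in group A
    var = 0.0
    for t in np.unique(np.concatenate([times_a, times_b])):
        na = (times_a >= t).sum()   # group A subjects still at risk at t
        nb = (times_b >= t).sum()
        da = (times_a == t).sum()   # group A events at time t
        db = (times_b == t).sum()
        n, d = na + nb, da + db
        o_minus_e += da - d * na / n
        if n > 1:
            var += d * na * nb * (n - d) / (n ** 2 * (n - 1))
    return o_minus_e ** 2 / var

# Identical event-time patterns in both arms: statistic is 0.
same = logrank_chi2([30, 60, 90], [30, 60, 90])
# Clearly separated arms: statistic exceeds the 5% chi-square cutoff (3.84).
apart = logrank_chi2([10, 20, 30], [100, 110, 120])
```

With identical event-time patterns, observed and expected events cancel at every time, so the statistic is exactly 0; with clearly separated arms, it comfortably exceeds the one-degree-of-freedom 5% cutoff of 3.84.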
Thus, even though the simulation did not produce intuitive results, Genta has not shown that the unacceptable level of false positives produced, which led to FDA’s concerns about potential false positives due to asymmetric assessment times, is unfounded. Nor has Genta shown that the design or results of the simulation constitute a violation of the IQA.
We do not believe that the Disseminated Information, taken as a whole, or in any of its parts, violates the Information Quality Act standards. In addition, as this information is part of the public record associated with past Advisory Committee meetings, continuing to post this information is part of HHS compliance with the Federal Advisory Committee Act. Accordingly, we decline to take the corrective action you request. In compliance with the HHS and FDA implementing guidelines, if you do not agree with this decision on your request, you may send a request for reconsideration within 30 days of receipt of this decision. Your request for reconsideration should be designated as “Information Quality Appeal” and should include a copy of your original request as well as this decision. Your appeal should state the reasons why you believe this response to your Request for Correction is inadequate.
Sincerely,

John Jenkins, M.D., F.C.C.P.
Director, Office of New Drugs
Center for Drug Evaluation and Research
Food and Drug Administration

cc: Jane Axelrad, Laurie Lenkel, Gerald Masoudi
1 Section 515(a) of the Treasury and General Government Appropriations Act for Fiscal Year 2001, Pub. L. No. 106-554 (Appendix C), 114 Stat. 2763A-153. Previous documents in connection with this Request for Correction also referred to the law as the Federal Data Quality Act or FDQA.
2 Available on the Internet at http://www.fda.gov/ohrms/dockets/ac/04/briefing/4037B1_02_FDA-Genasense.pdf.
3 Available on the Internet at http://www.fda.gov/ohrms/dockets/ac/04/questions/403701_01_Genasense.pdf.
4 Available on the Internet at http://www.fda.gov/ohrms/dockets/ac/04/transcripts/4037T1.htm.
5 Available on the Internet at http://www.fda.gov/ohrms/dockets/ac/04/slides/4037S1_02_FDA-Kane-Yang%20_files/frame.htm#slide0105.htm.
6 Available on the Internet at http://www.fda.gov/ohrms/dockets/ac/04/briefing/403B1_02_FDA-Genasense-Errata.pdf.
7 OMB Guidelines published in the Federal Register of February 22, 2002 (67 FR 8452).
8 Guidelines for Ensuring the Quality of Information Disseminated to the Public, available on the Internet at http://aspe.hhs.gov/infoQuality/Guidelines/fda.shtml (last revised December 13, 2006).
9 Available on the Internet at http://aspe.hhs.gov/infoQuality/Guidelines/part1.shtml (last revised December 13, 2006).
10 FDA first provided Genta with the Briefing Material, including the simulation at issue here, several weeks before the ODAC Meeting on May 3, 2004. Genta wrote a letter dated April 26, 2004, to FDA requesting that the Agency correct certain errors in the Briefing Material, but the letter did not allege that the simulation included in these materials was flawed or incorrect. Genta did not question or challenge any aspect of FDA’s simulation, or its application to the data Genta submitted on Genasense, at any time prior to, during, or soon after the ODAC Meeting. FDA demonstrated its willingness to make timely corrections with regard to other information disseminated about Genasense by promptly issuing an Errata Sheet and by reformulating one of the questions presented to the ODAC members to eliminate potential bias. (Transcript at 154-55; Questions at 3). Consistent with the routine practice of posting on FDA’s Web site materials for advisory committee meetings on or before the date of the committee meetings, the FDA material for the May 3, 2004, meeting was first posted on FDA’s Web site on April 30, 2004. In a letter to FDA dated May 13, 2004, Genta withdrew its application and requested specific guidance about additional data FDA would need to support approval of the NDA. FDA responded to Genta in a letter dated July 2, 2004. Since FDA’s July 2004 response, until now, Genta has not raised concerns about the quality of the FDA simulation or any of the related Disseminated Information.
11 Section 505(d) of the Federal Food, Drug, and Cosmetic Act, codified at 21 U.S.C. 355(d); 21 CFR 314.126(b); Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics at 2, 8 (May 2007), available on the Internet at http://www.fda.gov/cder/guidance/index.htm (Endpoints Guidance) (citations omitted); Transcript at 25-26, 70.
12 HHS Guidelines at I.D.1.c.2; FDA Guidelines at II.F.VII.B.
13 Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics (May 2007), available on the Internet at http://www.fda.gov/cder/guidance/index.htm (Endpoints Guidance) (citations omitted). In April 2005 (70 FR 17095), the Agency made available a draft version of the Endpoints Guidance for public comment, more than 2 years before Genta filed its IQA Request for Correction. The Endpoints Guidance includes many long-standing industry standards. The Endpoints Guidance was finalized in 2007. See the notice of availability for the draft guidance for industry on Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics. 72 FR 27575 (May 16, 2007).
14 Endpoints Guidance at 2; see also International Conference on Harmonisation, Guidance on General Considerations for Clinical Trials, 62 FR 66113, 66118 (December 17, 1997) (ICH notice) (study endpoints are the response variables that are chosen to assess drug effects that are related to pharmacokinetic parameters, pharmacodynamic measures, efficacy and safety).
15 ICH notice (62 FR at 66118); see also Robert T. O’Neill, Secondary Endpoints Cannot Be Validly Analyzed If the Primary Endpoint Does Not Demonstrate Clear Statistical Significance, 18 Controlled Clinical Trials 550, 551 (1997) (O’Neill) (primary endpoint is a “clinical endpoint that provides evidence sufficient to fully characterize clinically the effect of a treatment in a manner that would support a regulatory claim for the treatment”).
16 ICH notice (62 FR at 66118); O’Neill at 551 (a secondary endpoint is a “clinical endpoint that provides an additional clinical characterization of treatment effect but that is not sufficient to characterize fully the benefit or to support a claim for a treatment effect”).
17 Endpoints Guidance at 8; Transcript at 193-201; John R. Johnson, Grant Williams, and Richard Pazdur, End Points and United States Food and Drug Administration Approval of Oncology Drugs, 21 J. Clin. Oncol. 1404 (April 1, 2003) (approval for drugs with PFS as endpoint occurs but is rare).
18 Endpoints Guidance at 2-3 (in certain cases, evidence from a single trial can be sufficient, for example, in cases in which a single multicenter study provides “highly reliable and statistically strong evidence of an important clinical benefit”) (emphasis added).
19 Endpoints Guidance at 5, 9.
20 Endpoints Guidance at 9, 14-16.
21 Endpoints Guidance at 6, 8.
22 Endpoints Guidance at 5, 6, 8-9 (symmetry in assessment times for PFS is preferable to reduce the potential for research design errors).
23 Briefing Material at 1.
24 Transcript at 39.
25 Endpoints Guidance at 8; Briefing Material at 13.
26 Briefing Material at 30, Table 15; Request for Correction, Exhibit A at 1.
27 Briefing Material at 26, 27; Transcript at 76-86.
28 Transcript at 56-60; Slide Presentation at 47-50.
29 See Briefing Material at 1-2; Transcript at 22-25, 162-66.
30 O’Neill; Transcript at 91-92, 98-99, 179-81; Briefing Material at 24, 25 (“Efficacy evaluation should be solely based on the pre-specified primary analysis. Post-hoc analyses do not demonstrate efficacy and can only be considered exploratory”).
31 Endpoints Guidance at 8 (although the Guidance only refers to PFS as an appropriate endpoint when it is a primary endpoint, the principles in the Guidance can be applied to secondary endpoints as well).
32 O’Neill at 551 (controversy arises “when there is a multiplicity of endpoints whose collective use has not been considered in advance and when none of these endpoints may fully characterize a treatment effect…. If we permit a secondary endpoint to become a primary endpoint solely on the basis of its observed statistical significance, then it is very important to formulate, in advance, the statistical structure of the decision rule for judging clear statistical evidence…. A secondary endpoint could not become a primary endpoint after the fact”); ICH notice (62 FR at 66118) (endpoints and the plan for their analysis should be prospectively specified in the protocol); Briefing Material at 51-52 (sponsor claims efficacy based on apparent statistically significant differences in progression-free survival and antitumor response rate; from a statistical perspective, all the allocated type I error rate was spent in conducting the primary analysis of overall survival, so the secondary endpoint analysis is merely exploratory); see also Transcript at 22-25.
33 Briefing Material at 53.
34 Briefing Material at 28.
35 The assumptions in the simulation are set forth numerous times. See, e.g., Briefing Material at 34, 53; see also Request for Correction at 2; Request for Correction Amendment, Exhibit A at 1.
36 See, e.g., Briefing Material at 1, 27, 30, and 51-52 (“Although the sponsor’s analysis results suggested a statistically significant difference in progression-free survival between the two treatment groups, it is not clear whether this is a true finding… Even a slight difference in assessment schedule between treatment groups could potentially bias the estimation of treatment effect and likely lead to a false positive inference in a large study…. These exploratory analyses suggested that the high statistical significance presented by the sponsor diminished after taking into account the uncertainty of missing values and different assessment schedules…. [S]imulations conducted by FDA reviewers suggest that in a large study such as the one under review, with a very small systematic study arm bias such as in assessment intervals between the study arms, statistically significant differences may be observed which are in fact false positive”) (emphasis added).
37 HHS Guidelines at I.D.2.c.
38 HHS Guidelines at I.D.2.c.2, I.D.2.j.
39 Request for Correction, Exhibit A at 2; see also Brent Blumenstein, “Simulations Assessing Deviations From Trial Design Assumptions,” Bringing Therapeutic Cancer Vaccines and Immunotherapies Through Development to Licensure: An FDA-NCI Sponsored Workshop (February 8-9, 2007) (Blumenstein).
40 Blumenstein; see also Kevin J. Carroll, Analysis of Progression-Free Survival in Oncology Trials: Some Common Statistical Issues, 6 Pharmaceut. Statist. 99, 100 (January 22, 2007).
41 Blumenstein. Even though the magnitude of the inflation was different in the Blumenstein simulation from that in the FDA simulation, the magnitude of the inflation is not the key element; any magnitude of inflation beyond one-sided 2.5% is unacceptable, and both simulations exceeded that magnitude.
42 The log-rank test is a non-parametric test used to compare treatment groups with respect to time-to-event endpoints. The assumption under the null hypothesis is that the risk of event (death) is the same in all the treatment groups and thus the expected number of events at any time is assumed to be distributed between the treatment groups in proportion to the numbers at risk. The log-rank test is the combination of the differences between observed events and expected number of events over all times at which events occurred.
43 Transcript at 75.