
A paper published online on February 11, 2020, entitled “Internal Deadlines, Drug Approvals and Safety Problems” made the point that drug approvals “surge” in December, and that drugs approved in December, at month-end, and pre-holiday are “associated with significantly more adverse effects (AEs), including more life-threatening incidents and deaths”. The authors are from the Harvard Business School, the National Bureau of Economic Research, MIT, and the University of Texas. See: Cohen, Lauren, Gurun, Umit, and Li, Danielle, “Internal Deadlines, Drug Approvals, and Safety Problems” (available at SSRN: https://ssrn.com/abstract=3427338 or https://dx.doi.org/10.2139/ssrn.34273).

This is an important claim, and the paper is worth examining in detail. The authors make several points.

The next question the authors ask is whether there is evidence that the “desk-clearing” is driven by alternative explanations.

  1. Do regulators examine different types of drugs (e.g., more complex ones) in December or at month-end? To answer this, they looked at several factors about each drug, including the disease to be treated, market size, priority review status, and others. Controlling for these possibilities does not alter the conclusions, so the drugs seem to be similar throughout the year.
  2. Drugs approved in December do not appear to be “more complicated or difficult to review as explicitly measured by their chemical or functional novelty”. They found no spikes in “days under consideration” and concluded that the hardest evaluations were not saved for December, month-end, or pre-holiday periods.
  3. Might pharma companies time their application submissions in the “hopes of receiving a lax December review”? The authors found no evidence of “strategic timing” or of companies otherwise gaming the system. Author comment: I agree. Companies want the NDA submitted as soon as possible; I have never seen strategic timing used.
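The paper's core descriptive claim, that approvals cluster in December and at month-ends, reduces to a simple counting exercise over approval dates. The sketch below uses made-up dates purely for illustration (a real analysis would use FDA approval records):

```python
# Illustrative only: do approval dates cluster in December and in the
# last week of each month? Dates are hypothetical, not FDA data.
from datetime import date
import calendar

approvals = [
    date(2019, 12, 28), date(2019, 12, 30), date(2019, 6, 14),
    date(2019, 3, 29), date(2019, 12, 19), date(2019, 8, 2),
]

# Share of approvals falling in December
december_share = sum(d.month == 12 for d in approvals) / len(approvals)

def in_last_week(d):
    # True if the date falls in the final 7 days of its month
    return d.day > calendar.monthrange(d.year, d.month)[1] - 7

# Share of approvals falling in the last week of any month
last_week_share = sum(in_last_week(d) for d in approvals) / len(approvals)
print(december_share, last_week_share)
```

Under a uniform-timing null, roughly 1/12 of approvals would land in December and about a quarter in the last week of a month; the paper's argument is that the observed shares are significantly higher.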

The authors then ask why the FDA, if rushing to meet internal deadlines, seems to “err on the side of approval rather than rejection.” They believe this results from internal performance benchmarks that track the quantity of drugs approved rather than the “quality of those decisions”. The approval rate and count are easily and immediately tracked, while the safety profiles showing toxicity may take years to emerge. In addition, the authors note that industry and patient groups “typically advocate for approval of new drugs rather than rejection.”

The authors note two other related publications: one from 2008 (Carpenter, Daniel, Evan James Zucker, and Jerry Avorn. 2008. “Drug Review Deadlines and Safety Problems.” New England Journal of Medicine, 358(13): 1354–1361) and one from 2012 (Carpenter, Daniel, Jacqueline Chattopadhyay, Susan Moffitt, and Clayton Nall. 2012. “The Complications of Controlling Agency Time Discretion: FDA Review Deadlines and Postmarket Drug Safety.” American Journal of Political Science, 56(1): 98–114).

These publications note that the FDA is evaluated on the percentage of applications processed in a timely manner (180 or 300 days, depending on priority status), and they likewise observed an increase in drugs approved just before the 6- or 10-month deadline. The authors consider this a different phenomenon from the “end of the month” effect they observe, but one whose findings complement Carpenter's.


This is an important paper that deserves to be read and evaluated by FDA, the medical community and others.

First, the authors do not appear to be medically trained and come primarily from business and economics. There does not appear to be a pharmacovigilance evaluation of the claim that statistically significant increases in adverse events are seen with these drugs.

The authors state that if a review is rushed, then “we might expect that drugs approved during these periods to have more safety issues”. They test this hypothesis with controls that are not transparent to a clinical reviewer (e.g., ICD-9 and drug-cohort fixed effects, a drug's market-size decile, and others). They use prescription data as a marker of drug use, but clinicians know that not all prescribed drugs are taken, taken as directed, or continued at the prescribed dose; some are stopped early. We never truly know the denominator of drugs actually taken when calculating rates, and in fact we never truly know the numerator either. The authors themselves note that a key concern is that the analysis does not account for the “proportion of use cases that do not generate AEs”: they have neither a reliable denominator for drug use nor assurance that they received all AEs that occurred.

The authors do not comment on the possibility that even if the review as a whole is “rushed”, the safety review itself may be complete and identical to what would be done in any other month.

The paper itself acknowledges the core confounder: “A safe and popular drug may generate more adverse effects than a dangerous drug simply because it is used by more people.” A review of the safety data by a medically trained pharmacovigilance specialist would have been useful.

This is a highly technical and complex “business-type” publication, not a medical one.

Bottom line: This is a very important paper. It is a highly technical paper with some findings that may not be fully supported without further medical input, particularly regarding the safety issues. However, the findings are believable and consistent with both human nature and similar findings reported in other countries, fields, and publications. It deserves thorough evaluation by the regulatory, academic, and medical worlds.
It is hoped that this publication will be picked up in the medical literature and by FDA and other agencies. It would be interesting to hear FDA’s comments as well as comments from other stakeholders in the field.
