
Risk Evaluation and Mitigation Strategies (REMS) have been in place in the US for nearly five years, and questions are now being asked about whether they are achieving what they were designed for, namely, making drug use safer.  The question is particularly striking in light of the changes the European Union has put in place for its Risk Management Plans (RMPs) under the new Good Pharmacovigilance Practices (GVP) guidelines.  These changes have turned an already long, complex and detailed document into one that is even longer, more complex and hyper-detailed.  Questions have been raised in the EU about whether this is overkill, but that is not the topic for discussion here.

US REMS are actually rather simple.  First, not all drugs require them, whereas in the EU all new drugs, and eventually all old drugs, require some degree of an RMP.  REMS basically come in three flavors (I simplify slightly):

- Medication Guides (MedGuides) and patient inserts
- Communication Plans
- Elements to Assure Safe Use (ETASUs)

Most REMS consist only of MedGuides and/or Communication Plans, with ETASUs being reserved for drugs that are perceived to be more toxic or for patient groups felt to be at greater risk.  Such ETASUs include:

- special training or certification of prescribers
- certification of pharmacies or dispensers
- dispensing only in certain settings (e.g., hospitals)
- dispensing only with evidence of safe-use conditions (e.g., laboratory results)
- patient monitoring
- patient registries

The FDA may require a REMS.  The sponsor may also offer to do one.  In either case the FDA must approve the final version.  REMS usually have timetables built in for assessment.  Officially these are set at 18 months, 3 years and 7 years after approval, though this can be changed or varied by FDA.  The assessment should include whether the specific goals in the REMS are being met and whether the REMS should be modified.

The FDA’s Division of Risk Management then has a team review the sponsor’s assessment and determine whether the assessment is complete, whether the REMS has met its goals, and whether there are any deficiencies that need to be corrected.  The Office of Compliance also has a say.  FDA’s (unofficial) goal is to complete this review in 60 days.

There are about 70 currently approved REMS for individual drugs and 6 shared or class REMS.  About 125 others have been “released” or closed.

So how well are they working?

Well, the scuttlebutt around the country is “not so well”.  A general feeling is that patient inserts and MedGuides (which didn’t work 25 years ago when they were first tried) still don’t work.  The communications sometimes work, but the medical community is so overwhelmed by the amount of information floating about that it is hard to distinguish a REMS communication from the sea of other messages.  Others feel that there should be REMS for certain problems but, alas, there are none (e.g., acetaminophen use).  Another apparent and dangerous problem is that some physicians seem to be avoiding prescribing REMS drugs where they have to do “extra” work or where there seems to be a liability risk to the physicians themselves.

Another point is that the class REMS, particularly for the long-acting opioids, are too minimal.  The opioid REMS is basically a complex set of communications and non-mandatory training; see https://www.fda.gov/drugs/drugsafety/informationbydrugclass/ucm163647.htm.  There are no limits or controls on distribution, amount prescribed, registries, etc.  This disappointed many in the medical community.  The REMS was finally published in August 2012 and it is too early yet to see whether it is having any positive effect.

More provocative though is a report released by the Office of the Inspector General of the Department of Health & Human Services in February of 2013.  The title of the report gives away the results: “FDA Lacks Comprehensive Data to Determine Whether Risk Evaluation and Mitigation Strategies Improve Drug Safety.”

The goals were:

The findings were not happy:

The Inspector General concluded that the findings “raise concerns about the overall effectiveness of the REMS program.” The recommendations were:

The FDA, in response, essentially agreed with all of these points, though it was less clear about getting further Congressional authority for enforcement.  That is probably a political battle the agency wants to stay clear of for now.

So what can the public conclude from this?

The conclusion is that the mechanics of preparing REMS, getting buy-in from all stakeholders and obtaining approval from FDA are not working well at all.  The basics of REMS, namely, creating the appropriate components to minimize risk, have not been worked out.  In fact, it is not clear that anyone yet knows how to validate the components or whether they are working.  Doing meaningful, scientific comparisons of REMS versus no REMS seems to be a very difficult challenge that has not received much discussion.  The implication is that the legislation putting REMS into place was not well thought out or tested, and perhaps had no valid basis other than the hope that these are sound and reasonable requirements and should work.

Note that the report focuses on the mechanics of REMS preparation, negotiation and implementation.  It does not address the contents and whether any of the REMS have improved public health.  None of the recommendations address the public health impact of REMS; they only address the mechanics of obeying the law.

In a sense, REMS metrics may be thought of as surrogate markers, not actual indicators of improved public health.  We can and will measure whether x% of physicians and pharmacists were trained in a REMS.  We can look at the number of patients who actually received and who (claimed to have) read the MedGuides.  We can determine which pharmacies are dispensing drugs they should not dispense.  We can see how many patients enrolled in a registry, and so on.  This is not unreasonable to do, if somewhat burdensome, but it really is not relevant to public health.

What we have not shown, nor does it seem that we are even attempting to measure, is whether the public health has improved.  Are there fewer opiate overdoses or deaths?  Are there fewer drug-related hospitalizations?  Are there fewer women getting pregnant while taking teratogenic drugs?  This is the stuff we do care about, and we are not measuring it well, if we measure it at all.

The criticism in this report is muted and addresses mechanics, not public health.

The first attempts by FDA at formal risk management date only to the 2002-2005 period (RiskMAPs). One might argue that only now, about 10 years after starting to think about how to really do benefit/risk evaluations, are we starting to get serious.  Let’s hope so.
