
How to right the wrongs of clinical decision support alerts


Charna Albert

December 2020—Always think about timing, maintain a log of malfunctions, and make the right decision the easy decision.

These are a few of the clinical decision support tips that Ronald Jackups Jr., MD, PhD, and Amanda Blouin, MD, PhD, presented last month in a CAP20 session.

“When you say clinical decision support, the image that occurs in most people’s minds is the ugly pop-up alert,” said Dr. Jackups, associate professor of pathology and immunology, Washington University School of Medicine in St. Louis. “In reality, the pop-up alert is the last resort.”

Laboratories can affect how providers order tests at the point of data entry in several ways, he said. “How we build the test, what questions or prompts we put in the order, what options we provide to the provider when they search for what they think is the appropriate test.” Then, too, there are lab test reflexes and algorithms. “Ultimately, if deemed necessary, an order alert may be a useful function.”

Dr. Jackups and Dr. Blouin, medical and scientific director of the histocompatibility laboratory, Memorial Sloan Kettering Cancer Center, opened the session with this scenario:

The director of infection prevention at a hospital where you serve as laboratory medical director contacts you about the hospital’s clinical decision support system alert designed to reduce unnecessary testing for Clostridium difficile. The alert fires when a provider places an electronic order for C. diff testing on a patient currently prescribed a laxative and warns the provider that testing in this setting is associated with a high false-positive rate. The infection prevention director complains that too many providers continue to order C. diff testing unnecessarily despite the alert, leading to antibiotic overuse and federal financial penalties for pseudo hospital-acquired infections. What can you do?
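In rule-engine terms, an alert like this reduces to a simple predicate over the active order and the patient’s medication list. Here is a minimal sketch of that logic in Python; the order codes and drug names are hypothetical, not taken from the session or any real system.

```python
# Hypothetical sketch of the C. diff vs. laxative alert logic.
# Order codes and drug names are illustrative only.

LAXATIVES = {"polyethylene glycol", "senna", "bisacodyl", "lactulose"}
C_DIFF_ORDER_CODES = {"CDIFF_PCR", "CDIFF_TOXIN"}

def should_fire_cdiff_alert(order_code: str, active_meds: set) -> bool:
    """Fire when a C. diff test is ordered for a patient on any laxative."""
    return order_code in C_DIFF_ORDER_CODES and bool(LAXATIVES & active_meds)

# A C. diff PCR ordered on a patient taking senna triggers the alert.
print(should_fire_cdiff_alert("CDIFF_PCR", {"senna", "metoprolol"}))  # True
```

Every hard-coded element of a rule like this, as the rest of the session makes clear, is a place where it can later break.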

Clinical decision support guidelines, Dr. Jackups said, tend to fall under the five “rights” of clinical decision support:

  • The right information: “What are you telling the provider at the time they see the CDS tool? Is it evidence-based, useful, current information?”
  • The right person: “Are you making sure you’re targeting the right person? Should this go to a doctor, a nurse, a pharmacist?”
  • The right format: “Are you using the optimal format to get your information across? An alert may be what you have to use,” but another type of CDS such as an order set or reference information may be able to obtain the desired result.
  • The right channel: Should it be in the electronic medical record, the ordering system, the patient portal, or somewhere else?
  • The right time: “Primarily it is when the clinician is making a decision or needs more information.”

So what might have gone wrong with the C. diff versus laxatives alert? Dr. Jackups cited a few of the possible malfunctions. The alert may have been built to fire when C. diff toxin was ordered, but perhaps the lab recently updated testing to C. diff PCR and the alert was not updated to account for the new method. Or the rule may have been designed as intended but missed certain triggering events—a commonly used laxative may have been left off the list of patient medications that trigger the alert, or the pharmacy formulary might have changed. It’s also possible the alert was vague, confusing, or wordy, leading providers to ignore it.
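Each of these malfunctions is a mismatch between the rule’s hard-coded trigger lists and the live test catalog or pharmacy formulary. Below is a hedged sketch of a periodic consistency check that would surface both failure modes; the function and all codes are hypothetical, not something the speakers described.

```python
# Hypothetical maintenance check: compare an alert's hard-coded triggers
# against the current test catalog and pharmacy formulary.

def find_stale_triggers(rule_order_codes, catalog_codes,
                        rule_drug_list, formulary_drugs):
    """Return rule triggers that no longer exist in the catalog, plus
    formulary drugs in the target class that the rule fails to cover."""
    retired_codes = rule_order_codes - catalog_codes     # e.g., old toxin code
    uncovered_drugs = formulary_drugs - rule_drug_list   # e.g., a new laxative
    return retired_codes, uncovered_drugs

retired, uncovered = find_stale_triggers(
    {"CDIFF_TOXIN"}, {"CDIFF_PCR"},            # lab switched toxin to PCR
    {"senna", "bisacodyl"}, {"senna", "bisacodyl", "polyethylene glycol"},
)
print(retired)    # {'CDIFF_TOXIN'}: the rule keys on an order no one places
print(uncovered)  # {'polyethylene glycol'}: a laxative that never triggers it
```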


“Or perhaps there are conflicting alerts,” he said. “You may not have been aware there was another alert in your hospital that recommends C. diff testing in any patient diagnosed with diarrhea.”

In a survey of hospitals and academic medical centers, Dr. Blouin said, 93 percent of chief medical information officers reported CDS system malfunctions at their institutions, and 65 percent reported at least one malfunction per year (Wright A, et al. J Am Med Inform Assoc. 2018;25[5]:496–506).

In this study, the most common malfunctions originated in the initial CDS build and implementation. “This includes not just coding errors but also design errors, often the rule criteria being incomplete,” she said. For example, a rule meant to fire for any patient on heparin included only unfractionated heparin in its specifications, so a patient on low-molecular-weight heparin never triggered it. Another rule, built to prompt starting a beta blocker in cardiac patients, omitted the category of drugs with combined alpha- and beta-blocking activity and therefore prompted physicians to order beta blockers for patients already on drugs with beta-blocking activity. Also common are errors introduced when moving a rule from the test and build environment into the production environment. And “any sort of system upgrade may affect how CDS rules fire,” Dr. Blouin said.
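Matching individual drug names rather than drug classes is the common thread in both examples. Here is a minimal sketch of the heparin case; the drug names and class mapping are illustrative assumptions, not taken from the cited study.

```python
# Illustration of incomplete rule criteria. Drug names and the class
# mapping are hypothetical.

# Buggy specification: only unfractionated heparin was listed by name.
BUGGY_TRIGGER_DRUGS = {"heparin"}

# Corrected specification: match on drug class so low-molecular-weight
# heparins are covered as well.
DRUG_CLASS = {
    "heparin": "heparin",
    "enoxaparin": "heparin",   # low-molecular-weight heparin
    "dalteparin": "heparin",   # low-molecular-weight heparin
}

def fires_buggy(med: str) -> bool:
    return med in BUGGY_TRIGGER_DRUGS

def fires_fixed(med: str) -> bool:
    return DRUG_CLASS.get(med) == "heparin"

print(fires_buggy("enoxaparin"))  # False: the LMWH patient is silently missed
print(fires_fixed("enoxaparin"))  # True
```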

Changes to the underlying data dictionaries a rule depends on can also cause malfunctions, such as when a new medication is added to the pharmacy formulary or replaces another drug.

“This is a type of error that is really pertinent to the laboratory,” she said. “Not only is CDS built to drive laboratory test ordering, it is driven by lab test results. We are generating the elements of these data dictionaries, and when we modify something in the lab, the CDS could be affected.”

An example: a change in reportable ranges. In one case, a lab that reported undetectable troponin values as “0.1” changed its system to report such values as “less than 0.01.” “There was a CDS alert to assess whether a patient with an acute MI was given aspirin in the ED. This was triggered by a diagnosis code of MI, or a troponin greater than 0.5,” Dr. Blouin said. The lab began to receive complaints from ED physicians that this alert was firing too frequently and on patients with a troponin of less than 0.01 (Stone EG. J Am Med Inform Assoc. 2018;25[5]:564–567).

“With investigation, they realized this rule was interpreting the ‘less than’ sign as a very, very large number, and it was triggering this alert. As more hospitals are adopting the high-sensitivity troponin method, this is something to think about, because this assay has different reportable ranges, different reference ranges, different units, and different clinical decision points,” she said, all of which are constantly evolving.
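The failure mode here is a rule comparing a free-text result to a numeric threshold. The sketch below shows one common way such a bug can arise and a defensive fix; the coercion behavior is an assumption for illustration, since the case report does not publish the vendor’s actual code.

```python
# Illustrative only: how a "<0.01" troponin result can defeat a numeric
# comparison. The coercion shown is a common failure pattern, not the
# vendor logic from the cited case report.

TROPONIN_THRESHOLD = 0.5

def buggy_exceeds(result: str) -> bool:
    # Some engines coerce unparseable text to a sentinel "huge" value,
    # so "<0.01" compares as greater than any threshold.
    try:
        value = float(result)
    except ValueError:
        value = float("inf")  # the silent bug
    return value > TROPONIN_THRESHOLD

def safe_exceeds(result: str) -> bool:
    # Treat "less than" results as below their stated bound.
    text = result.strip()
    if text.startswith("<"):
        return float(text[1:]) > TROPONIN_THRESHOLD  # "<0.01" gives False
    return float(text) > TROPONIN_THRESHOLD

print(buggy_exceeds("<0.01"))  # True: alert fires on undetectable troponin
print(safe_exceeds("<0.01"))   # False
```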

Sometimes a rule is built exactly as intended and firing as intended, she said, “but it’s still not having the intended effect,” and the reason can be provider nonacceptance. “Excessive alerts. Increased time on the computer. Decreased time with patients. And clicks—a lot of clicks,” she said, citing just some of the reasons for nonacceptance. One study found that in a single 10-hour emergency department shift, a physician can click a mouse up to 4,000 times (Hill RG Jr., et al. Am J Emerg Med. 2013;31[11]:1591–1594).

In an example from an academic medical center, providers were required to acknowledge a pop-up alert before finalizing an order for red blood cell transfusion. The intention was to ensure the patient had a type and screen within the past 72 hours. Yet even when the patient’s type and screen was active, the provider still had to acknowledge the alert before finalizing the order, needlessly interrupting the workflow. “These types of ineffective CDS contribute to provider nonacceptance,” Dr. Blouin said, as do distrust, alert fatigue, and a perceived limitation on professional autonomy.
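One fix is to make the alert conditional on the absence of a valid specimen rather than unconditional. A minimal sketch, assuming a 72-hour validity window and hypothetical field names:

```python
# Hypothetical sketch: interrupt the transfusion order only when no
# type and screen is active within the 72-hour window.

from datetime import datetime, timedelta
from typing import Optional

TYPE_AND_SCREEN_VALIDITY = timedelta(hours=72)

def needs_type_and_screen_alert(last_screen: Optional[datetime],
                                now: datetime) -> bool:
    """Fire only if there is no screen result still within its window."""
    if last_screen is None:
        return True  # patient was never screened
    return now - last_screen > TYPE_AND_SCREEN_VALIDITY

now = datetime(2020, 12, 1, 12, 0)
print(needs_type_and_screen_alert(now - timedelta(hours=10), now))  # False
print(needs_type_and_screen_alert(now - timedelta(hours=80), now))  # True
```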
