Differential strokes—Mayo’s take on middleware

CAP Today

November 2008
Feature Story

How middleware can be used in a hematology laboratory—that's what Curtis A. Hanson, MD, spoke about last spring at Lab InfoTech Summit 2008 when he shared Mayo Clinic's decade-long experience with it. Dr. Hanson is a consultant in Mayo Clinic's Division of Hematopathology and professor of laboratory medicine and pathology in the College of Medicine. An edited version of his talk follows.

The laboratory director is in a precarious position these days. On one side he or she has the clinicians demanding appropriately that the most accurate results possible be provided in a turnaround time that’s always less than what the laboratory is capable of achieving. On the other side, the director continues to be pushed on costs in an era of decreasing resources. But in the end it doesn’t have to be a contest between efficiency and patient care.

Why should we focus on the CBC and diff? The CBC is the most frequently ordered laboratory test in the clinical laboratory. Data that I saw recently from the CMS indicate that the CBC is also the highest reimbursed laboratory assay, so it should get everybody’s attention. It’s a widely used test in outpatient and inpatient settings. You need it 24/7. You can use it as a screening tool, as a door to health in a given patient. It gives you an idea about a variety of potential underlying disease processes that will prompt further investigation. And you can use it not only as a screening tool but also to look at particular acute and chronic hematologic diseases.

The manual differential is labor-intensive and expensive. It requires highly skilled personnel. Something as subjective as a manual differential lends itself to subjective error, and because it’s manual, you run into data-entry errors as well.

The automated differential is geared toward not missing any significant finding. It must recognize a spectrum of white cell abnormalities: lymphoid and myeloid, acute and chronic processes, benign and malignant, pre- and post-therapeutic. No single cell type perfectly represents each disease process, and you cannot consistently distinguish between reactive situations and disease based on single cell type abnormalities. And every instrument technology that I’ve ever had an opportunity to evaluate has, in my experience, had its own inherent idiosyncrasies and will either overcapture or undercapture particular morphologic abnormalities. But, from a financial side, it costs dimes to do on a particular case.

So there are clearly financial and resource advantages to pushing the automated diff as much as you can. The challenge is, in doing differentials, how do you identify the most important abnormal findings while minimizing the time required for normals or those that are just minimally abnormal? In other words, how do you maximize the use of automated differentials while minimizing the number of manual diffs? It comes down to the technology versus the art.

The differential can be verified at various steps in the process. Autoverification may occur directly at the instrument based on autoverification rules that are inherent within the software of the analyzer. It may occur in the lab, whether it be a technologist manually verifying results or through the use of middleware. It can also occur at the LIS where you can set up autoverification-type rules.

As I previously mentioned, autoverification at the instrument level is geared toward not missing anything. It’s based on using quantitative flags that the laboratory sets and qualitative flags established by the manufacturer. And it’s important to understand that success of autoverification at the instrument is very dependent on your local patient population.

If we look at small hospitals and clinics, they often have about a 60 percent auto-release rate directly off the hematology analyzers. At Mayo in Rochester and at the University of Michigan in the hematology laboratory (where I started many years ago), having a tertiary acute and chronic patient population, at the instrument level there is about a 40 percent auto-release rate. So there will be inherent differences from practice to practice.

The ability to verify differentials within an LIS depends on the vendor or the system being used. Many, if not most, have some degree of rule-writing capability. If yours does, an LIS approach will make it much easier for the laboratory to meet regulatory requirements. You have the IT and backup support you need, but you are dependent on non-laboratory people to write the rules for you, so your priorities within the laboratory may differ from the priorities set for the whole laboratory information system.

Middleware is a great tool to use for autoverification of the differential. You can purchase it from instrument manufacturers or other non-instrument vendors. You need the ability to get information from both the instrument and the LIS, and it’s important that you bring the middleware vendor, the manufacturer of the instrument, and your local LIS resources together to understand the interface and the limitations it may or may not impose. It’s also important to understand what backup and IT support you must have. That’s critical. Does the laboratory have to develop its own expertise or is centralized expertise available?

We’ve had a lot of experience with middleware in our hematology laboratories. We started off 11½ years ago and began trialing it about a year or so before that. It’s a system that morphed into a product that was supported by Orchard Software, called Aqueduct. We’ve used that software system now for almost nine years.

I want to walk through the differential process in detail because it’s important to understand this if you are going to apply middleware to a hematology laboratory.

A differential count is requested. It runs through the instrument and the first question is: Will the instrument autoverify? If it autoverifies, it dumps the results into the LIS and then into the electronic medical record.

If not, it’s important that a laboratory then go through what we call a scan-and-release process. You make a slide, you take a gander at it, and you decide, yes, that automated differential is correct and release it at that point. You avoid doing a manual differential and you try, at least from a scanning perspective, to verify that the information from the instrument is correct.

If it is correct, it releases into the LIS and, if not, you move on and do a manual differential, which would then ultimately come through back to the LIS and the electronic medical record.

There are actually two buckets within that manual differential: the normal and minimally abnormal differentials versus those that are significantly abnormal. As laboratorians we have a desire not to miss anything, and if there's anything even slightly abnormal, we feel compelled to document and report it. Instead, we need to put on our clinical hats and ask: What is the real clinical value we're providing, and what information do our clinicians need to have? Therefore, you need to think about the manual differential as providing two different possible results: normal/minimally abnormal or significantly abnormal.

Four actions, then, may occur within your hematology laboratory. From a middleware perspective, the scan-and-release cases and your normal or minimally abnormal manual differentials are where you will gain: use middleware to get those cases out of the laboratory in an automated way and save your manual effort for the significantly abnormal cases. Before you adopt a middleware process, you need to model your laboratory and understand your practice so you know what your gain will or won't be with middleware.
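As a sketch, those four actions can be expressed as a single triage decision. The inputs and flag names here are illustrative, not any instrument's actual outputs:

```python
# Hypothetical sketch of the four-way triage of a differential request.

def triage_differential(instrument_ok, scan_confirms, manual_significant):
    """Return which of the four actions handles a sample.

    instrument_ok: no flags fired and the lab's autoverification rules passed
    scan_confirms: a slide scan confirms the automated differential
    manual_significant: a manual diff found significant abnormalities
    """
    if instrument_ok:
        return "autoverify"               # released straight to the LIS/EMR
    if scan_confirms:
        return "scan-and-release"         # automated diff released after a scan
    if not manual_significant:
        return "manual: normal/minimal"   # manual diff, little clinical impact
    return "manual: significant"          # full manual diff and review
```

Middleware aims to drain the middle two outcomes, leaving human effort for the last.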

There’s always the tendency from above to say, ‘You’re going to put in middleware—that means you’re going to save X number of FTEs.’ Well, that’s all dependent on what your practice model shows. Modeling will help determine the value you should get from the system.

If your instrument autoverification rate to start with is at 80 percent of all cases, you’re not going to gain much from a middleware system. If it’s 40 percent, I can almost guarantee that you’ll find value by implementing a middleware process. Let’s assume you employ a scan-and-release process. If that process is used for five percent of total cases, that either means you’re very efficient or you haven’t bought into that concept. If it’s at 30 percent, again there is a lot of potential within your model to see gains from a middleware system.
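That kind of modeling is simple arithmetic. A minimal sketch, with entirely hypothetical numbers rather than anyone's actual practice data:

```python
# Back-of-the-envelope model: how many cases a day could middleware release
# automatically? That is, the scan-and-release bucket plus the normal or
# minimally abnormal manual diffs.

def middleware_gain(total_cases, scan_release_rate, normal_manual_rate):
    return round(total_cases * (scan_release_rate + normal_manual_rate))

# A lab at 1,700 CBCs a day, with 30 percent scan-and-release and
# 15 percent normal/minimal manual diffs:
cases_saved = middleware_gain(1700, 0.30, 0.15)
print(cases_saved)  # 765 cases a day that could leave the lab automatically
```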

And with manual differentials, how many cases have true clinical value added by a morphologic review? It’s not just whether you can identify any morphologic abnormalities. It’s whether or not you are providing important information that the clinician will actually use.

I had saved information about our lab from 1995, which was pre-middleware implementation. Back then when we had done the modeling before our first middleware implementation, we had about a 40 percent autoverification rate and quite a few cases that had either normal or minimally abnormal manual differentials, and we were using a scan-and-release process.

Today, more than 80 percent of our differentials are released through either the instrument or middleware autoverification, with the gain being fully from that scan-and-release bucket and the normal or minimally abnormal manual differentials. In only about 10 percent of the cases that come into the laboratory will we do a complete manual differential and review.

So understanding your practice ahead of time is critical in setting expectations for outcomes. It's also important to understand the 'value equation' during this modeling process. The value equation is quality divided by cost over a particular episode of care, with quality defined as patient or physician satisfaction, direct patient outcomes, patient safety and error issues, and service to the physician and the patient. This value equation can help drive the process of how middleware should be used.

Reducing cost is the first thing that comes to everybody’s mind when they think middleware. If you do reduce cost, you will increase value. But I also want to share with you how you can use middleware to improve quality, which will also directly improve the value you provide to your clinical practice.

In our laboratory, we’ve identified five phases of middleware utilization: automating rules for manual differentials, focusing on patient safety, enhancing lab effectiveness, supporting management goals, and unifying multi-site practices.

For phase one—the rules, you first must know and model your practice. It’s simple: You write rules for what the technologist does. Does the technologist look up results in the computer, compare results to previous studies, base a decision whether or not to do a diff on where the patient is or what kind of doctor the patient is seeing? Do you minimize or ignore certain results or certain instrument flags? In other words, what are the algorithms that a technologist goes through when deciding whether that sample at that point in time gets autoverified or gets a slide and a manual differential?

What sources of information do you need as you write these rules? Simple things account for the vast majority of your rules: the quantitative numbers coming off the instrument, the gender and age of the patient, and the instrument flags. Delta checks are great but also difficult to write. Delta checks can be based on absolute numbers or on percentage differences, and they can be used to hold a sample within the laboratory or to release samples from the laboratory. You can also create delta checks using instrument flags: have the flags changed or not over a given period? You can also write rules by where a laboratory is located or where the patient is located. You can write rules for an individual patient, which is a powerful tool for middleware. And there is obviously time: when was the last time the patient had this kind of a result?
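A minimal delta check might look like the following sketch. The limits and parameter names are assumptions for illustration, not production rules; a real rule would also consider how old the prior result is:

```python
# A delta check flags a result whose change from the patient's prior result
# exceeds an absolute or a percentage limit.

def delta_check_fails(current, previous, abs_limit=None, pct_limit=None):
    """True when the change from the prior result exceeds either limit."""
    if previous is None:
        return False                      # no prior result to compare against
    diff = abs(current - previous)
    if abs_limit is not None and diff > abs_limit:
        return True
    if pct_limit is not None and previous != 0 and diff / abs(previous) > pct_limit:
        return True
    return False
```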

A lot of people talk about using middleware rules to write specific rules for a certain physician. I don’t like that idea because it ingrains into your system an individual physician’s personal idiosyncrasies that may or may not be medically valid. I would rather write rules for groups of physicians, for a particular specialty, or for a particular disease process as opposed to an individual physician. I’m lucky at Mayo: We have enough physicians that I don’t have to worry about dealing with an individual. I can always revert back to the group. In smaller institutions where you may have only one or two oncologists, it may be difficult to ignore this physician or to get around that physician’s idiosyncratic practice behaviors.

Remember that the CBC is not really a single test. It’s used for screening patients, to follow a specific hematologic disease, or to monitor a therapeutic response. As such, what we have done at Mayo is to create more than one type of CBC to meet those needs. We have different kinds of CBCs that clinicians can order that will generate different kinds of results depending on what the clinical purpose is for that CBC.

You can access many sources of information to help you write the rules you need within your laboratory. There are a variety of academic papers and other sources that have established or proposed guidelines for rulemaking. The International Society of Laboratory Hematology (ISLH) has published such a set, and it’s on its Web site—a set of consensus guidelines for establishing when a microscopic slide review should be done. You can use those rules as the basis for beginning your own set of rules. Those rules may be based on the component of the CBC or differential. Sometimes it’s just a one-step review process. Other times it’s a two- or a three-step rule process. There’s also a large list from the ISLH relative to particular instrument flags that come up and how your lab can respond. Again, it’s layered into primary, secondary, tertiary type information.
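The layered structure of such rules can be sketched as a short function. This particular two-step rule is invented for illustration; the thresholds are not the ISLH consensus values:

```python
# A layered, two-step slide-review rule in the primary/secondary style.

def needs_slide_review(wbc, suspect_flags, flag_is_new):
    # Primary step: an extreme quantitative result alone triggers a review.
    if wbc > 30.0 or wbc < 1.5:
        return True
    # Secondary step: a suspect instrument flag triggers a review only when
    # it has not already been reviewed on a recent sample from this patient.
    if suspect_flags and flag_is_new:
        return True
    return False
```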

An example of an issue we had that we solved through middleware was when we had clotted samples that would make it all the way through the system and get autoreleased, and then the laboratory would get a call from that clinician who would say the patient is totally healthy and can’t possibly have that kind of result.

What we found was that, as crazy as it seems, if you actually put in a delta check that compared back to a previous platelet count (if you had a previous one) and it was within plus/minus 80 percent of that previous sample, then you could reliably know that the specimen wasn’t clotted. But if your platelet count result was beyond that plus/minus 80 percent, then there was a high likelihood you were dealing with a potentially clotted specimen. Although it sounds crazy, it’s been a fabulous tool for us to be able to use to identify potentially clotted specimens. So again, try to find different ways to think about how to use rules to help you solve an issue within the laboratory. You may be surprised at what you find.
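As a sketch, that rule reduces to a one-line comparison. The function and variable names here are mine, not the actual rule text:

```python
# The plus/minus 80 percent platelet delta check: a new platelet count
# within 80 percent of the previous one suggests the specimen is not clotted.

def possibly_clotted(current_plt, previous_plt):
    """Hold the sample when the platelet count moves more than 80 percent
    from the prior result; with no prior result the rule cannot fire."""
    if previous_plt is None:
        return False
    return abs(current_plt - previous_plt) > 0.80 * previous_plt

print(possibly_clotted(210, 230))  # False: within 80 percent, safe to release
print(possibly_clotted(30, 230))   # True: hold as a possibly clotted specimen
```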

As I provide some of the outcomes for the rules we’ve written, I need to lay out the structure we have at Mayo. We operate in a 12-story laboratory building. On the lower level we have our phlebotomy area, and over the wall is our highly automated central clinical laboratory, where our hematology analyzers are. There was no way to put the rest of hematology next to that area; our Hematology Lab is eight floors up. A transport system takes, on average, a minute and a half to go from the basement to the eighth floor. It has worked out well because the eighth floor is where our bone marrow area and our other complex hematology laboratories are.

Usually we see 1,800 to 2,500 specimens a day. But for illustrative purposes I gathered one day’s data, and it happened to be during spring break when fewer doctors were in, so our volume was down a bit. We had 1,700 CBCs come into the central laboratory on that day; 1,400 of them were autoverified either at the instrument or through the middleware system.

The remaining 300 specimens came up to the Hematology Laboratory because they were ‘trapped’ by the middleware for a variety of reasons. On those 300 cases, 671 flags were generated, with about half being instrument flags and the other half being quantitative flags on the CBC or differential. Of the 300 cases, 105 were scanned and released and, for the remainder, a manual differential was done.

Overall, then, 82 percent of the specimens were autoverified using the middleware system, about 10 percent had a manual differential done, and the rest were scanned and released by the technologist. So for the manual hematology area, whether it’s eight floors removed like us, or eight miles or 80 feet away, whatever the distance might be, you can see that you can substantially reduce the manual review effort your laboratory has to provide. This should give you an idea of the workflow and the outcomes you can expect if you use middleware.

What’s the impact of a system like this? Our middleware implementation in 1996 led to an immediate decrease of three FTEs within the laboratory. The turnaround time of our priority No. 1, or stat, CBCs improved: the proportion released within 30 minutes went from 50 percent to 85 percent, because we were doing far fewer manual interventions. Our overall turnaround time dropped dramatically as a result. Middleware has clearly driven down our costs and has led to service improvements for our practice.

I want to give you another idea as to how to think differently about the CBC and how you can use middleware to solve a problem. At Mayo, we have applied different rules for patients receiving chemotherapy for myeloma, lymphoma, and certain kinds of leukemias. Why? Because the entire clinical question is different for those patients as opposed to, for example, the patient who’s walking in off the street, going to see his or her family practice doctor or going into a clinic pre-surgery. The question for the chemotherapy patient is simple: Are there enough platelets and neutrophils so that the hematologist can give chemotherapy that day?

Too often in the laboratory we think about the CBC as a single test when in reality it’s a multitude of tests, all with different purposes. Our job is to define a specific CBC for those unique patient populations. So we created a different type of CBC, available only to the hematologists, called the CBC-C (or CBC-chemotherapy), and they know it’s no longer a screening CBC assay. It’s a therapeutic monitoring assay. For that patient, a perfectly accurate differential is not the hematologist’s concern. Just: What are the neutrophil and platelet counts? Are there fewer than 20,000 platelets? Should I hold the dose of chemotherapy today? Are there fewer than 1,000 neutrophils, or more than 2,000? The subtle abnormalities that the laboratory usually homes in on and gets aggressive about are of no concern to the hematologist in this situation. The main thing is that they need rapid turnaround time to get their patients through the chemotherapy unit. So we have defined very broad autoverification rules for the CBC-C.
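A sketch of what a CBC-C rule might compute, using only the decision points just mentioned. The function itself is illustrative, not our actual rule set:

```python
# Summarize the two numbers the hematologist acts on for a dosing decision;
# morphology flags are deliberately ignored for this CBC type.

def cbcc_summary(platelets_per_ul, anc_per_ul):
    return {
        "platelets_ok": platelets_per_ul >= 20_000,   # enough to dose today?
        "anc_low": anc_per_ul < 1_000,                # consider holding the dose
        "anc_high": anc_per_ul > 2_000,               # clearly recovered
    }
```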

It’s important that the clinicians be reminded about what the role is of this CBC type. Every year you need to send out a note to them saying, ‘Okay, here’s a reminder. This is the only way you should use this assay.’ It really does require a close working relationship and understanding between the lab and the clinician.

In the pre-CBC-C days, we were seeing about 150 CBCs from the chemotherapy unit. We were autoverifying through our Aqueduct middleware system, but only about 35 percent of those were autoverifying, while the rest of the screening CBCs were autoverifying at about 75 percent. So there was a big difference in turnaround time for those that autoverified versus those that didn’t. This is the issue on which I got the vast majority of phone calls from unhappy clinicians.

So we collected the data, put it through the system, and implemented the CBC-C in ’01. We’re now at almost a 90 percent autoverification rate for those chemo patients with about a 20-minute turnaround time.

We do audit this routinely to see what’s going on and indeed we find cytopenias and other abnormalities that are clearly there that would be important for a screening CBC but are irrelevant from a therapeutic monitoring point of view.

Next phase: patient safety issues. In the Hematology Lab we rarely have true primary sentinel events. They do occur, but they’re rare. However, we do generate revised reports and they in turn may lead to unnecessary patient events.

Most laboratory directors are only remotely aware of all the reports that are revised in their laboratories. Every lab director who starts to collect that information will be surprised at how big that iceberg really is. Instead of pointing the finger at someone in the lab, however, you should ask: Why were the reports revised? What went wrong with the process? Until you collect that information, I guarantee you will always underestimate the severity of the issue.

Why do we have revised reports? The most common cause we have found is a revised diff count after we’ve reviewed a bone marrow or a subsequent blood smear, or after a call from a clinician who says, ‘I don’t believe that result—something is wrong here.’ Another is a revised platelet count due to those clotted specimens I previously spoke about.

They are also the result of memory-type issues. You have technologists who don’t follow standard operating procedures: they forgot what they were supposed to do. They did a scan and release instead of doing a diff. They forgot to have a second review by a senior technologist or by a physician.

This led to our asking how we could use middleware to help reduce the number of these revised reports. We quickly came to realize that individual patients act like individuals. They don’t always follow the rules of the group.

For our purposes here, we’re going to call our sample patient ‘Mrs. Johnson.’ It may be that for Mrs. Johnson’s acute leukemia, the nature of her blasts allows them to be missed by the instrument. She might have a lymphoma with circulating lymphoma cells that are missed by the instrument. Or the morphologic features of her disease may be difficult to interpret, for example, if it’s a monocytic-type process or an unusual lymphoma.

Other examples: Once you’ve identified her as a platelet clumper, you don’t want to miss her again. Likewise, you don’t want to miss her red cell agglutination the second time around. Sometimes there’s something unique about Mrs. Johnson that you want to follow closely. Sometimes particular protocols and clinical trial requirements demand that you do certain things. These are all things you want to remember about Mrs. Johnson, and usually that means you put up a sticky note and hope the next person will read and remember it if they happen upon her the next time.

One of the best quality improvements we’ve done within the laboratory, when we encounter one of those situations, is to immediately create a rule within the middleware system to trap the next sample that comes through on Mrs. Johnson. We apply a timeline, whether it be one month, three months, or longer or shorter. And it doesn’t matter where in the system Mrs. Johnson enters—whether it’s the clinic, the ER, one of the outlying clinics. Mrs. Johnson’s specimen will be trapped and we will look at it, and we won’t have to depend on seeing the right sticky note on the board at that particular day and time.
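A minimal sketch of such a patient-specific trap rule, assuming an in-memory table keyed by patient ID with an expiry window. All names are illustrative; real middleware would match on the LIS patient identifier:

```python
# Patient-specific trap rules: hold any sample from a flagged patient for
# review until the rule lapses, wherever the sample was drawn.
from datetime import date, timedelta

patient_rules = {}  # patient_id -> date the rule lapses

def add_patient_rule(patient_id, months=3, today=None):
    today = today or date.today()
    patient_rules[patient_id] = today + timedelta(days=30 * months)

def should_trap(patient_id, today=None):
    """True while an active rule exists for this patient."""
    today = today or date.today()
    expiry = patient_rules.get(patient_id)
    return expiry is not None and today <= expiry
```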

We may have five to 10 Mrs. Johnson rules in place at any given time. It takes less than two hours from when a request comes from one of our bench technologists to when a rule goes live. So we may identify it in the seven o’clock run and if, by whatever chance, another sample gets drawn later that day, it will get trapped within the system. We found this to be a powerful use for middleware within the clinical hematology laboratory.

You can also use middleware to help support management goals. We have been using middleware to look at technologist productivity, the types of CBCs that are released by the technologists, etc. You can use it to look at how your system has operated within the last week. You can monitor utilization rules. When we started out, we were using this aspect of middleware all the time—looking at particular scenarios for which we could then write new rules.

For the final phase of middleware, you may have multiple hospitals and clinics within an organization over various areas: locally, regionally, nationally. You may have different instrumentation, different clinical needs, and different patient characteristics. Not all of the sites may do manual differentials. You may or may not have a common EMR. It varies from organization to organization. Middleware offers you that common language through which laboratories can communicate, and it may therefore be a starting point for standardizing at least how you do your hematology testing in your organization.

In conclusion, before you purchase or implement, it’s critical that you model your practice to understand where you might benefit from middleware and to what extent. Keep in mind the value equation of quality divided by cost. Envision how you might coordinate your instrument, LIS, and potential middleware system to work in your practice.

In some labs it may be that your instrument and LIS might be sufficient and that middleware will only add a layer of complexity that won’t help you much. And you must work with your IT people to understand their needs and what kind of support your laboratory will require.

You can reduce costs and improve turnaround time, but you can also clearly improve employee satisfaction. Employees don’t enjoy flipping through the computers and flipping papers. They want to do what they were hired to do and to use their skills. Middleware is one way to get there, but the value to be gained must determine whether and how it should be used.
