
IT drives clinical, financial gains in hospital labs


Dr. Harrison

It’s well known that tracking performance is essential for improvement. But how you track performance can be critical, as panelist James H. Harrison Jr., MD, PhD, associate professor of public health sciences and pathology at the University of Virginia, demonstrated in his presentation on “Designing Quality Metrics to Meet Quality Needs.” By using new software tools, laboratories can overcome limitations of laboratory information systems and perform more sophisticated exploratory data analytics for continuous quality improvement, Dr. Harrison said. He discussed how tools like the Python programming language, the IPython (interactive Python) Notebook, and the pandas data analysis library (a Python code library) can be used to produce deeper insights into turnaround time (TAT) patterns.

The project he described was focused on a cancer center satellite lab that performs about 50,000 tests a year, supporting outpatient medical, surgical, and radiation oncology. Measured monthly by 90 percent completion time, a standard metric, TAT had increased about 10 percent from 2011 to 2012. “The goal of the analysis was to try to minimize routine turnaround time to help overall patient flow and optimize patient and physician satisfaction,” Dr. Harrison said.

Python, IPython, and pandas are all open source (and free) software tools developed for general-purpose data exploration and analysis. Dr. Harrison showed how he created a data set of 36,000 test receipt and result time stamps, and used IPython with pandas to clean out bad data, aggregate the data into months, weeks, and days, and calculate TAT percentiles. “To do these calculations you write and execute Python statements line by line, inspecting and plotting the results as you go. What you’re doing is incrementally building a computer program that does data analytics without the complexity of writing the entire program at once and debugging separately.”
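
The workflow he describes can be sketched in a few lines of pandas. This is a minimal, hypothetical example: the column names, synthetic time stamps, and values below are assumptions for illustration, not the lab's actual data.

```python
import pandas as pd

# Hypothetical per-specimen data: receipt and result time stamps
# (in practice, exported from the LIS).
df = pd.DataFrame({
    "received": pd.to_datetime([
        "2012-01-03 08:00", "2012-01-03 09:00",
        "2012-02-07 08:30", "2012-02-07 10:00",
    ]),
    "resulted": pd.to_datetime([
        "2012-01-03 08:40", "2012-01-03 10:30",
        "2012-02-07 09:05", "2012-02-07 10:45",
    ]),
})

# Clean out bad data: drop missing stamps and nonpositive intervals,
# then compute TAT in minutes.
df = df.dropna(subset=["received", "resulted"])
df["tat_min"] = (df["resulted"] - df["received"]).dt.total_seconds() / 60
df = df[df["tat_min"] > 0]

# Aggregate into months and compute the 90th-percentile completion time.
monthly_p90 = df.groupby(df["resulted"].dt.to_period("M"))["tat_min"].quantile(0.90)
print(monthly_p90)
```

Each statement can be run and inspected in a notebook cell before the next is written, which is the incremental style Dr. Harrison describes.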

What’s the advantage of the IPython Notebook and pandas over using a spreadsheet? “You write executable code and explanations of what you’re doing, and capture results graphics as you go,” said Dr. Harrison. “It’s essentially a live lab notebook.”

That feature turns out to be important in data analysis. “What is this 10 percent rise in monthly TAT from one year to the next year telling us? Is it a small increase in most turnaround times, a modest increase in some, or an increase in a limited number of outliers? We really can’t tell that when the time granularity is at the monthly level.” By re-focusing to the daily level, the analysis showed there was an increase in the frequency of problematic outliers rather than an overall change in TAT.
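
Re-aggregating the same per-specimen data at the daily level is a small change in pandas. In this sketch, the 60-minute outlier threshold and the synthetic TAT values are assumptions for illustration.

```python
import pandas as pd

# Hypothetical per-specimen TAT values (minutes) indexed by result time.
tat = pd.Series(
    [20, 25, 95, 30, 110, 120, 28],
    index=pd.to_datetime([
        "2012-03-01 09:00", "2012-03-01 11:00", "2012-03-01 15:00",
        "2012-03-02 09:30", "2012-03-02 10:15", "2012-03-02 14:00",
        "2012-03-02 16:00",
    ]),
)

OUTLIER_MIN = 60  # assumed threshold; a real one would fit the local test mix

# Daily volume, central tendency, and outlier count in one grouped summary.
daily = tat.groupby(tat.index.date).agg(
    n="size",
    median_tat="median",
    outliers=lambda s: (s > OUTLIER_MIN).sum(),
)
print(daily)
```

A table like this makes it easy to see whether a rising monthly percentile reflects a general shift or a growing number of outlier days.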

Plots of the distribution of turnaround times yielded useful follow-up questions. Was outlier frequency directly related to test volume or laboratory staffing? Could other events occurring on high-volume days lead to increased outliers on those days? Or was the increasing number of time-consuming bone marrows the lab was assisting with during the day having an impact on TAT?

It turned out that the days that had bone marrow assists had more outliers and a higher median TAT by about 10 minutes than the other days. And days with high volume plus bone marrow assists might have a synergistic increase in TAT. The data can also be used to identify problem workflow patterns within a single day, specifically identifying the effect of irregular events like bone marrow assists, instrument problems, etc. “Being able to drill down within the day to see the individual values is one of the additional benefits of this sort of analysis,” he said.

The goal of these sophisticated analytics is “to find out what is going on, and see if we can identify a problem that we can predict or mitigate in some way by making more resources available at certain times,” Dr. Harrison explained. “It’s important to realize that continuous process improvement involves measurement that leads to asking the right questions,” he added. “Ask the questions, try to mitigate the problems, and then iterate.” Multiple problems will be uncovered. “We probably have a multifactorial context with interaction between various elements,” he said. “And if we can identify those, we’ll be in a good position to improve performance.”

Middleware continues to be one of the key tools for improving clinical and operational performance of the laboratory, said Tim Skelton, MD, PhD, medical director of the core laboratory and laboratory informatics at Lahey Hospital and Medical Center, Burlington, Mass. “The advantage of middleware is that it ensures all aspects of test performance—preanalytic, analytic, and postanalytic—are actually meeting the quality that’s required for patient care,” Dr. Skelton said.

Dr. Skelton

One example is using middleware to increase precision. “What do you do if you buy a testing method like vitamin B12 and there’s too much random variation?” he said. “If the cutoff for normal is 300 and the clinicians just want to know if it’s normal or abnormal, operationally if it’s below 300 those patients are going to get a whole workup. Why are they vitamin B12 deficient? Is it their diet? Do they have intrinsic factor antibodies and pernicious anemia? Do they have some kind of small bowel syndrome?”

“There’s going to be a lot of downstream costs and labor to figure out what’s going on, and they’re going to get supplements and get retested to see if it improves, because a deficiency can lead to irreversible neurologic damage as well as reversible hematologic problems. So it drives the clinicians absolutely crazy when they send a patient in and get a 250, and they repeat the result and get a 350. They say, ‘What’s going on here?’”

At his hospital, Dr. Skelton was able to use middleware to address this problem. “We just want to know if the actual value is above or below 300 because that’s going to have all the downstream impact, and we want to be correct about it.

“If you just do a single value as most institutions do, and you look at the random analytic variation of the assay, you could be anywhere between 225 and 375 by taking plus or minus three standard deviations. That means 16 percent of your patients are in that zone where the result could be wrong based purely on random variation. What the middleware can do is look at those values and do real-time actions to replicate if needed, and not replicate if it’s not needed,” Dr. Skelton said.

He explained how the laboratory activates a set of rules in the middleware if the value is in the range close to 300. “The instrument will auto-repeat anything that’s between 241 and 359, sort of in that borderline zone.” If the second test is outside that range, the result will auto-release. “But if both values are within that range, you’ll have to make a decision. If both are less than 300 or greater than 300, then one of the results is manually released by the technologist.” Otherwise the replicates are done until there are six values, and the technologist picks the median and verifies it as the result.
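
The repeat logic he describes translates naturally into rule code. The sketch below mirrors the stated thresholds (241–359 repeat zone, 300 cutoff), but the function name and the string "actions" are illustrative assumptions; real middleware would express this in its own rule language and drive the instrument directly.

```python
import statistics

LOW, HIGH, CUTOFF = 241, 359, 300  # repeat zone and clinical cutoff from the talk

def b12_action(values):
    """Given replicate B12 results in order, return (action, value)."""
    first = values[0]
    if not (LOW <= first <= HIGH):
        return ("auto_release", first)      # clearly outside the borderline zone
    if len(values) < 2:
        return ("auto_repeat", None)        # borderline: instrument auto-repeats
    second = values[1]
    if not (LOW <= second <= HIGH):
        return ("auto_release", second)     # second result left the zone
    # Both replicates borderline on the same side of the cutoff:
    # one result is manually released by the technologist.
    if (first < CUTOFF) == (second < CUTOFF):
        return ("manual_release", second)
    # Straddling the cutoff: replicate until six values exist,
    # then the technologist verifies the median.
    if len(values) < 6:
        return ("auto_repeat", None)
    return ("manual_release_median", statistics.median(values[:6]))
```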

Another situation where middleware proves its value is with patients who have pulmonary hypertension and are on nitric oxide. This treatment can save lives but has a downside as far as the laboratory is concerned: nitric oxide causes oxidative stress to red cells. “So you end up getting hemolysis of all your older red cells, and we end up reporting from the laboratory that the potassium specimen is ‘hemolyzed, unable to assay or interfering substance.’”

Unfortunately, this inability to run tests can persist for days. “And the clinicians don’t know how to take care of these patients; they don’t know their potassium because we keep saying ‘hemolyzed results.’ So something needed to be done. We used the middleware to help us do a better job of interpreting results.”

With in vitro hemolysis, a redraw is the usual answer. But with in vivo hemolysis, which is what is happening with patients on nitric oxide, redraws are futile. He realized that hemoglobin in vivo is metabolized to bilirubin. “So we figured we could use the hemolysis index and the icteric index to tell whether the hemolysis was in vivo or in vitro, and we set up the middleware to do that.”

With this approach, if the problem appeared to be in vivo hemolysis, the potassium value could still be reported out. “We use this comment: ‘hemolysis and hyperbilirubinemia present; evaluate clinically for in vitro versus in vivo hemolysis.’ We then, in the case of potassium, include an appropriate comment such as ‘hemolyzed potassium may be elevated 0.6 to 0.8 mmol/L by in vitro hemolysis; redraw if clinically indicated.’”
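
In rule form, the logic might look like the sketch below. The index thresholds are placeholders: real hemolysis- and icteric-index cutoffs are method-specific and would be validated locally before use.

```python
# Assumed thresholds for illustration only -- not validated cutoffs.
H_INDEX_LIMIT = 50   # hemolysis index above which potassium is affected
I_INDEX_LIMIT = 3    # icteric index suggesting in vivo red cell breakdown

def potassium_comment(h_index, i_index):
    """Return the result comment to attach, or None if no interference."""
    if h_index <= H_INDEX_LIMIT:
        return None  # no significant hemolysis; report normally
    if i_index >= I_INDEX_LIMIT:
        # Bilirubin elevated too: hemolysis may be in vivo, so the value
        # is reported with a qualifying comment instead of being suppressed.
        return ("hemolysis and hyperbilirubinemia present; evaluate "
                "clinically for in vitro versus in vivo hemolysis")
    return "hemolyzed, unable to assay"
```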

“So we’re using the middleware exactly the way it should be: to give a qualified or smart comment in a laboratory result,” Dr. Skelton said. And the impact he reported was fairly remarkable. “Before we went live with this, we were averaging 110 chemistry specimens per month where we had to say ‘hemolyzed, unable to assay.’ Afterward, the 15-month mean was only 45 chemistry specimens like that. So we got a 59 percent reduction in specimens reported as ‘hemolyzed, unable to assay’ by using these middleware rules.” (See box, above.)

Obviously there are a lot of clinical benefits, Dr. Skelton noted. “We’re avoiding all those unnecessary redraws, we’re getting patient results where the clinicians really need it, and we’re getting rid of a lot of waste.”

Middleware also has proved helpful in doing specimen type checks. “Things like creatinine, potassium, sodium, and BUN are run in urine and plasma, and the wrong specimen may get put in there. So the middleware will send up these messages to the technologists. For example, where a plasma was run as a urine, or vice versa, it might say, ‘questionable result, check specimen integrity before manual resulting.’”

“We do the same thing with our blood gas analyzers where the percent oxygen saturation is high. If it’s a venous blood and it’s greater than 94 percent, the middleware will stop it and say ‘check to make sure this is really venous and not arterial.’ This kind of check turns out to be really useful.”
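
As a sketch, the venous oxygen saturation check reduces to a single rule; the function and return strings below are illustrative, not the actual middleware configuration.

```python
SAT_LIMIT = 94.0  # percent O2 saturation threshold cited in the talk

def blood_gas_check(specimen_type, o2_sat_pct):
    """Hold implausibly oxygenated 'venous' specimens for review."""
    if specimen_type == "venous" and o2_sat_pct > SAT_LIMIT:
        return "hold: check to make sure this is really venous and not arterial"
    return "release"
```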

Along with many other examples, Dr. Skelton cited middleware’s usefulness in providing real-time work cell instrument updates. “Our blood gas and chemistry analyzers are updated in real time with information on specimen result status. This allows us to perform critical care testing in our central laboratories rather than point-of-care.” He also described the middleware’s ability to give the medical technologist a real-time action list. “When we first started this project, we went to the LIS and said we want to get this real-time data to the technologist, and our LIS really couldn’t do it. So we said, well, we’ve got to do it in middleware. And we’ve done that—and a lot more.”

Finally, Dr. Skelton noted the benefit of not having to have special downtime procedures. With the middleware, “your LIS can go down and all of the rules are still there in the middleware. And we have redundancy in the middleware so that if you need to update it, there’s another server that can run the instruments. So basically we are never, ever down. Even if the network goes down, even if the LIS goes down, we’re up and running, and we use the exact same process.”

Much of laboratories’ need for IT stems from the deficiencies of most current LISs, said Jorge Sepulveda, MD, associate director of laboratory medicine and medical director for lab informatics at New York-Presbyterian Hospital, Columbia University Medical Center. “The LIS should do more than just record orders, provide worksheets for laboratory testing, and then provide result reports. Advanced reporting features are often lacking, as are physician notification tools, comprehensive quality assurance, and accreditation tools. You can’t flag results other than high or low or critical. Also lacking are what I call para-analytical databases, which means keeping track of all the stuff other than test results that is needed for optimal running of a laboratory, such as document control, inventory, personnel competency, management, and so on.”

But common desktop software from Microsoft Office, including Excel, Access, SharePoint, and InfoPath, can fill the gap and improve laboratory efficiency and test interpretation, Dr. Sepulveda said. “You can locally adapt it in your laboratory, it’s relatively inexpensive, and it’s quickly customizable to your needs. It does require work maintaining and validating, but that should be within the ability of most people.” Some examples of how desktop software can help:

Creating a clinical calculator for a Web site. By using Excel to create a calculator, the laboratory can either allow users to launch it via a Web link, or embed it on the Web site by uploading it to a cloud storage system like SkyDrive (now renamed OneDrive), right-clicking on the file name, and choosing “Embed” to get the HTML code (the Web page markup language). Dr. Sepulveda described how to combine user input cells—either laboratory values or yes-or-no questions—with an algorithm that estimates the bleeding risk for patients on warfarin.

Many useful formulas are available online at sites like www.mdcalc.com and can be validated and adapted to the local laboratory, he said. A more complex calculator might be developed to calculate, for instance, significant change values to determine whether the difference between two laboratory values is due to analytical and biological variability or there is something else going on.
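
The significant change value he mentions is commonly computed as the reference change value (RCV), which combines analytical and within-subject biological variation; the CV figures in this sketch are hypothetical, and a real calculator would use locally verified values.

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Two-sided reference change value (%) at roughly 95 percent confidence.

    RCV = sqrt(2) * Z * sqrt(CV_A^2 + CV_I^2), where CV_A is the analytical
    coefficient of variation and CV_I the within-subject biological CV.
    """
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Example with assumed CVs of 3% (analytical) and 5% (within-subject):
# consecutive results must differ by more than this percentage before the
# change is likely to reflect more than combined variability.
rcv = reference_change_value(3.0, 5.0)
print(f"RCV: {rcv:.1f}%")
```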

Result interpretation. HTML can also be employed to streamline complex commenting of laboratory tests. “I think every clinician hates to see a long list of comments for every test—they have to scroll through a lot of text, sometimes full of disclaimers and often there to teach people who are not familiar with the test how to interpret it. We use the approach of just putting one line of text along with a link for detailed interpretive comments,” Dr. Sepulveda said.

Tracking proficiency testing. After obtaining a list of all of your laboratory’s PT shipments, “you can upload these to a SharePoint site, and columns in the Excel worksheet get automatically incorporated into fields in SharePoint,” Dr. Sepulveda said. “Then you can use SharePoint Designer to create workflows based on the values of those fields.” For example, if the current date is seven days past the shipping date, you could trigger an e-mail to the supervisor saying, “We haven’t received this shipment; something is wrong. Let’s investigate.” If results are logged as “OK,” that might complete the workflow. “Or if results are not OK, it initiates a ‘pending investigation,’ which basically automatically creates an InfoPath form to investigate the proficiency testing failure.”

Tracking quality indicators. If you are interested in tracking test or blood use patterns or turnaround times, or compliance with CAP checklists, “you can easily enter these into an Excel worksheet and then track with a dashboard using conditional formatting to highlight which indicators are off and which are almost off.” Personnel management and financial management can also be handled with Excel or Access, Dr. Sepulveda noted. These applications can all be linked with one another by means of Microsoft’s relational database capabilities. “And SharePoint can be used for further functions such as task assignment, meeting management, surveys, and discussion forums.”

“With some dedication and not much learning curve, you can use this common software to provide customized solutions in your laboratory for efficient laboratory operation and enhance customer interfacing,” Dr. Sepulveda said.

Anne Paxton is a writer in Seattle. Drs. Kratz, Harrison, Skelton, and Sepulveda will present an updated version of their 2013 AACC session on July 28 at this year’s AACC Annual Meeting and Clinical Lab Expo.

CAP TODAY