Harnessing the LIS to bird-dog critical test results

CAP Today

August 2009
Feature Story

Anne Paxton

Laboratories should perform critical tests rapidly and report critical results rapidly. That’s a given. But as the laboratory accreditation process has repeatedly proved, there is a big difference between striving for excellence in general and actually measuring how often correct actions occur, or how long they take to happen.

In the past year, two pathologists at Penn State Hershey Medical Center have used their information technology skills to make that all-important measurement process error-free and automatic.

Using readily available programming tools, William J. Castellani, MD, director of clinical chemistry at the center, and Michael B. Bongiovanni, MD, chief of clinical pathology and director of the clinical laboratory, have been able to plug into the data stream of their laboratory information system and fully automate an audit of how often their laboratory’s critical value reporting is documented appropriately, and how often critical tests are done within the target timelines.

“Most labs, which conduct manual audits with periodic reviews, do not have a precise sense of how they’re succeeding at critical reporting,” Dr. Bongiovanni says. “In general, we’re not extracting information from the LIS as well as we should be.”

What galvanized attention to critical tests and critical values was the 2006 decision by the Joint Commission on Accreditation of Healthcare Organizations to highlight them as a National Patient Safety Goal, he says. “Everybody knew that when JCAHO came in for inspections, they were going to be looking at the numbers.”

Drs. Castellani and Bongiovanni have more than the average background in information technology. “The two of us here at Penn State have spent most of our careers playing around with computers,” says Dr. Castellani, noting that Dr. Bongiovanni, a math major as an undergraduate, was working at Penn State when it was the “alpha” test site for the Sunquest LIS.

But the keys to designing an audit system are two skills that don’t require a statistical background: being able to define a question and being able to design a strategy to get that question answered, Dr. Castellani says. “These are the two things we’ve done in looking at both critical value reporting and critical test turnaround time. We’ve done them using tools that are fairly sophisticated, but that actually are readily available.”

What both pathologists have done is use third-party packages that interact with the LIS database to extract information. Two packages are designed to interface through mSQL (a widely used database engine that provides fast access to stored data) into Sunquest LISs, Dr. Castellani says: Crystal Reports and the programming language Perl.

To develop the audits of critical tests and critical results reporting, Dr. Bongiovanni uses the first and Dr. Castellani uses the second.

“Both of these packages are programming but at different levels,” Dr. Castellani says. “Crystal Reports is more of a pre-established tool, while Perl is a language that can execute a script. I write my queries using Perl.” Says Dr. Bongiovanni: “Bill and I complement each other because we use different tools, so we were able to pull different pieces of information more easily.”

A laboratory can always do an audit manually, Dr. Castellani points out. “You can pick a certain number of critical values, look at the comment the technologist made about when the phone call took place, then try to find out what the earliest point was at which that result was known.”

“But I thought if we did it manually we’d have to do a fair amount of searching the Sunquest LIS records anyway to determine when the data transmission was made.”

Instead, Dr. Castellani came up with a routine to pull out the data in the transaction record for all online instruments and store it. “So now I had the time that every instrument reported every result for the entire month.” With a separate query, he extracted all of the text comments on critical results and searched for a structured date and time in the comments.

This gave him the date and time of transmission, which could be matched against the critical value in the comments and the time the call was made, allowing him to conduct a 100 percent audit of all critical values. A full audit rather than a random sample is desirable, in his view, because lapses are relatively rare events. Though the Joint Commission allows laboratories to define the size of sample to audit, “we’ve found we don’t have a lot of failures, so if we audit a limited number of samples, such as 20, it may not be truly representative of how well the lab is doing,” Dr. Castellani says. “If we do a 100 percent audit, we’re able to definitely say whether we have true outliers.”
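The matching step described above can be sketched in a few lines. Dr. Castellani’s actual routine was written in Perl against Sunquest transaction records; the sketch below uses Python, and every field name, timestamp format, and sample value is invented for illustration only.

```python
import re
from datetime import datetime

# Hypothetical month of instrument transmissions:
# (accession, test) -> time the instrument reported the result
transmissions = {
    ("A100", "K"): datetime(2009, 6, 1, 14, 2),
    ("A101", "GLU"): datetime(2009, 6, 1, 15, 30),
}

# Free-text technologist comments on critical results, each containing
# a structured date/time documenting the callback
comments = {
    ("A100", "K"): "K 6.8 called to Dr. Smith 06/01/2009 14:25",
    ("A101", "GLU"): "critical glucose called 06/01/2009 16:50",
}

# Pattern for the structured date/time embedded in the comment text
STAMP = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2})")

def audit(transmissions, comments):
    """Match every critical-result comment to its transmission time (a 100% audit)."""
    rows = []
    for key, text in comments.items():
        m = STAMP.search(text)
        called = datetime.strptime(m.group(1), "%m/%d/%Y %H:%M") if m else None
        sent = transmissions.get(key)
        lag = (called - sent).total_seconds() / 60 if called and sent else None
        rows.append((key, sent, called, lag))  # lag in minutes; None = undocumented
    return rows

for row in audit(transmissions, comments):
    print(row)
```

Because every critical result flows through the same comparison, nothing is sampled away; an undocumented call simply shows up as a missing timestamp.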

The usual obstacle to a 100 percent audit—how time-consuming it is—is not an issue because the automated audit is a self-operating routine now. “I don’t even know it’s running,” he says. “It stores its output as a file onto a network folder available to anyone who has access in the laboratory, and we can all look at it for quality improvement.”

To make the program as flexible as possible, Dr. Castellani says, he also has it produce a file of every call that takes longer than a certain threshold (set now at 60 minutes). “It outputs all the information as to who the technologist was, the date and time it occurred, and what the result was, so if for some reason we find a problem, we can go back and see whether it was a personnel issue or insufficient staffing.”
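The threshold report he describes amounts to a simple filter over the audit rows. A minimal Python sketch follows, assuming hypothetical row fields (the article does not describe the file layout); only the 60-minute threshold comes from the source.

```python
from datetime import datetime

THRESHOLD_MINUTES = 60  # the threshold currently in use, per the article

# Hypothetical audit rows: (technologist, result, time reported, time called)
calls = [
    ("tech01", "K 6.8", datetime(2009, 6, 1, 14, 2), datetime(2009, 6, 1, 14, 25)),
    ("tech02", "GLU 22", datetime(2009, 6, 1, 15, 30), datetime(2009, 6, 1, 16, 50)),
]

def flag_slow_calls(calls, threshold=THRESHOLD_MINUTES):
    """Return calls whose documentation lag exceeds the threshold, with enough
    detail to tell a personnel issue from a staffing issue later."""
    slow = []
    for tech, result, reported, called in calls:
        lag = (called - reported).total_seconds() / 60
        if lag > threshold:
            slow.append({"tech": tech, "result": result,
                         "reported": reported, "called": called,
                         "lag_min": lag})
    return slow

for row in flag_slow_calls(calls):
    print(row)
```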

What insights has the audit been able to provide? “My first impression from the data is that we were doing a reasonably good job,” Dr. Bongiovanni says. “We were documenting that we handled critical results appropriately in roughly 98 to 99 percent of cases. The harder part was getting timeliness, but once that audit was up and running, we found we were also at 98 percent.” Occasionally outliers would be identified. “And we would follow up on those by feeding back the information to the section supervisors in the laboratory and asking them to investigate,” he says. “That allowed us to look at the system and make modifications.”

Those modifications tended to involve issues of personnel training, usually where newer people did not understand how to document correctly. “But sometimes we hadn’t defined in the LIS what a critical result was, and sometimes there were issues outside the LIS, such as the difficulty of reaching people at times—particularly outside practices in off hours—and how you would record that as a piece of data,” Dr. Bongiovanni says. The biggest problems occur around 5 PM or 6 PM when the clinics are closing.

There’s no way to know how a 98 percent rate of compliance compares with the laboratory’s earlier performance because there was no auditing then, Dr. Castellani says. “We did nothing before. We didn’t start extracting the information until we started the programming.”

But among other things, the audit has helped the laboratory determine that its only real problem with critical values was with platelets. “When the analyzer puts out platelet results, if they are critical they can’t be called immediately,” Dr. Castellani says. “A slide has to be made and checked, to see if the platelet count is decreased because of clumping, and is therefore, in fact, not a critically low platelet count but an artifact. So you can never hit 90 percent because of that additional step that’s required. And the audit helped us determine this is not something that can be changed unless we decide to call critical platelets before we run the slide. So this may lead to a change in process, which we’re discussing right now.”

While the laboratory can audit only the instruments that are online, that hasn’t been much of a problem. “We only have one test that we consider to have a critical value that is not online: osmolality. And we don’t do that many tests of osmolality or see that many critical values, so we’ve chosen not to aggressively pursue it,” Dr. Castellani says.

Medical staff at some levels need a better understanding of pathologists’ role in critical results, Dr. Bongiovanni believes. “The higher you go in the medical staff, the more they understand the principles and fully support pathology. Then as you go lower down to the people who actually wrestle with the results and figure out what to do, there’s more confusion,” he says.

Laboratories themselves, however, have made critical results reporting harder than it needs to be, in his view. “If you go across institutions and look at what they’ve defined as critical values, there’s a wide variance. At some institutions they have gone way overboard, and the less stringent the criteria are, the more ‘critical’ results you have. You may end up having lots of results that are not critical for patients, but you still have to deal with them.”

All of the auditing of critical tests and critical value reporting is preliminary to meeting a much tougher Joint Commission goal: auditing when the physician or other clinical provider learned of the result and when appropriate action was taken. Achieving that goal requires the laboratory to link its information with what happens on the clinical floor. Right now, “we feed the list of criticals to the nurse managers and they’re charged with looking at documentation that the nurses are taking appropriate action,” Dr. Bongiovanni says.

Dr. Castellani explains why such an audit would end up being a manual process rather than an automated one. Some critical values might be handled based on a preexisting order set, such as a sliding scale response to a glucose. Others might require the nurse to get an order from a physician, so there would be a significant difference in time. “There’s no very simple way to go through the patient record and look at the known parameters and determine what was done,” he says.

Within the laboratory, “I could automate the audit because I knew how the data I needed would be structured. But when you don’t have something structured to key off of, all the interpretation has to be done manually.”

The project has hit a few bumps along the way. “Right now I am trying to do an extract that is not matching with another extract, and I have no idea why,” Dr. Castellani says. Auditing the output is essential to ensure that the data are reliable. When the data can’t be verified, he says, “there is something you’ve missed in the design.” But “fat fingers” are his biggest problem with the program: “I thought I had put in all the ways people could mistype data, but when someone slipped and put in 209 instead of 2009, the subroutine that was trying to interpret turnaround time blew up when it hit that date. In fact, the entire program crashed.”
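The “209 instead of 2009” crash is a classic argument for parsing timestamps defensively. A sketch of that idea in Python, with an assumed timestamp format (the article does not give the real one): a mistyped entry yields nothing rather than an exception that takes down the whole run.

```python
from datetime import datetime

def parse_stamp(text):
    """Parse a technologist-entered call time defensively: a mistyped year
    such as '209' should yield None, not crash the turnaround subroutine."""
    try:
        dt = datetime.strptime(text, "%m/%d/%Y %H:%M")
    except ValueError:
        return None  # malformed entry; flag for review instead of crashing
    # Sanity-check the year so slips that still parse are rejected too
    if not (2000 <= dt.year <= 2100):
        return None
    return dt
```

A caller can then treat `None` the same way as an undocumented callback and surface it in the exception report.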

Similarly, when the program started, it reported a lot of undocumented callbacks. But it turned out they weren’t undocumented. “They were just being manually entered in a nonstandard way,” Dr. Castellani explains. To fix that problem, the technologists were reminded to use a preexisting routine on their terminal that, with one keystroke, enters the current system time and date in standardized format into their canned comments.
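The one-keystroke fix works because an automated audit can only key off a predictable format. As a hypothetical illustration (the actual Sunquest comment format is not given in the article), the routine amounts to something like:

```python
from datetime import datetime

def canned_stamp(now=None):
    """Emit the current system date/time in one standardized format,
    mimicking the one-keystroke routine inserted into canned comments.
    The format string here is an assumption, not the Sunquest one."""
    now = now or datetime.now()
    return now.strftime("%m/%d/%Y %H:%M")
```

Once every comment carries a stamp in this shape, a single pattern match recovers it reliably.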

Do others need more training in the kind of programming skills he was able to use with these audits? “As pathologists we should be good communicators,” Dr. Castellani says. “There is a skill set you can develop and learn when it comes to IT issues, which requires a basic exposure to certain facts about such systems. But often what’s more useful is knowing how to define an issue and describe it to someone, such as an IT specialist who can use the description to develop code.”

Too often, pathologists and IT specialists end up talking at each other rather than to each other, he adds. “People really have to understand that when you’re talking to IT people, they’re very much removed from the underlying issue of how the application they’re doing is affecting the real world.” The best IT person he has ever worked with would come down and work the bench with the technologist to see what the IT project was supposed to implement. “Then he would efficiently produce code.”

Other laboratories may be able to initiate a similar automatic audit of critical testing and critical values reporting, Dr. Bongiovanni says. “I suspect there are two categories of labs. One is a small number of institutions that have very ample IT resources. They’re the kind of places that could set this up in a very effective manner in a short time. Then there are places more akin to us, where the laboratory directors have some knowledge of information systems and in their spare time have dug into them.”

“The majority of labs are probably struggling now. For them, maybe the best thing to do is extract more manual lists of critical results and do audits of samples.” It is worth checking with LIS vendors to see if they can help, however: “I know the major LISs all have user groups, and within them word goes out on issues like this,” Dr. Bongiovanni says.

The CAP is actively involved in highlighting pathologists’ role as information experts. In meetings of the CAP’s Diagnostic Intelligence and Health Information Technology Committee, says Dr. Castellani, who is a member of the DIHIT’s education and accreditation workgroup, there has been considerable discussion of the future skill set pathologists will need, and there is a strong feeling “that we have to make that aspect of our contribution clear to clinicians and to patients.”

To Dr. Bongiovanni, commitment is the most important factor in setting an automatic audit in motion. “No. 1, the lab and, more importantly, the institution have to want to do it. The second thing that’s going to make it happen is having a dedicated core of people who are willing to do some work.” At Penn State, the results have been worth the trouble: “It’s really helped the laboratory be seen as an integral piece of a quality program.”


Anne Paxton is a writer in Seattle.