Summary
Laboratories considering AI-driven workflows should first identify the specific problem they aim to solve and assess whether simpler coding solutions are sufficient. While out-of-the-box AI solutions from vendors offer a lower barrier to entry, laboratories must ensure these solutions align with their workflows and validate them in their own environments. For more complex or unique problems, particularly in clinical pathology, laboratories may need to develop their own AI models, necessitating investments in IT infrastructure and data management.
Charna Albert
November 2025—Nicholas Spies, MD, is no stranger to taking the long view. As medical director of applied artificial intelligence at ARUP Laboratories’ Institute for Research and Innovation, it isn’t unusual for him to work on a project that won’t hit the laboratory for a decade.
That may be why he takes a measured approach to implementing AI-driven tools in clinical practice, though he’s sympathetic to anyone feeling the impulse to keep up with the Joneses.
“The field is moving so quickly in popular culture and in our personal lives, it makes sense to feel like you’re falling behind,” says Dr. Spies, medical director of clinical chemistry at ARUP and assistant professor at the University of Utah School of Medicine. “I would say, being in the roles we are, in the field we’re in, that we are behind in a good way.” Better to assess the potential dangers in lower-risk environments, he says, before implementing anything new in clinical workflows. And though patient safety is the first consideration, it’s far from the only one.
“Substantial up-front investments need to be made to roll out AI clinically, no matter how big or small your lab is,” says Dr. Spies, a member of the CAP Artificial Intelligence Committee.

For many of the laboratory’s problems, an AI fix isn’t strictly necessary; much of the time the issue can be solved with simpler forms of coding. “As AI tools become more and more available, it’s tempting to have this big hammer and go around searching for nails,” Dr. Spies says. “Identifying the problem you’re trying to solve and why your simplest approach isn’t going to be sufficient is step one.”
For Dr. Spies and his colleagues, workflow inefficiencies are at the fore, “especially as the regulatory environment around the diagnostic side of this continues to mature.” One example, he says, is an AI algorithm used to automate part of the flow cytometry workflow at ARUP. (The solution’s architect, he notes, was his co-medical director of applied AI, David Ng, MD.)
“Flow is a high-volume test for us at ARUP, and a lot of the cases have this cyclical pattern,” Dr. Spies says, “or at least they did before our AI model. We would run the whole test, run all our panels, a technologist or analyst would do the pre-gating and send the case to the pathologist,” only to have the pathologist request additional tubes, adding a day to turnaround time. The model they developed, he says, predicts which tubes will need to be added before the case gets to the pathologist.
“The model is at most automating a painful step in the workflow,” he says. “It’s a great example of how knowing the clinical use case well allows you to make good decisions with AI.” Flow has a highly complex set of inputs, he adds. “We couldn’t just automate a rule for this.”
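The details of ARUP's model aren't public, but the reflex-prediction step it automates can be sketched in miniature. Everything below is illustrative: the feature names, weights, and threshold are invented for the example, and a real model would be trained on historical flow cytometry data rather than hand-set.

```python
import math

# Hypothetical feature weights for predicting whether a flow case will
# need an add-on tube; a real model would learn these from past cases.
WEIGHTS = {"blast_pct": 0.08, "cd34_pos": 1.5, "abnormal_scatter": 1.2}
BIAS = -2.0
THRESHOLD = 0.5

def predict_addon(features):
    """Return True if the illustrative model predicts an add-on tube.

    Scores the initial panel's features with a logistic function so the
    extra tube can be queued before the case reaches the pathologist.
    """
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability >= THRESHOLD

# A case with a high blast percentage and CD34 positivity triggers the
# reflex up front, instead of after pathologist review a day later.
needs_tube = predict_addon({"blast_pct": 30.0, "cd34_pos": 1.0})
```

The payoff in the workflow is where the prediction happens: before sign-out rather than after, which is what removes the extra day of turnaround the cyclical pattern used to cost.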
For the laboratory that identifies the right sort of problem, the next debate is a variation on literature’s most famous soliloquy: To build or not to build? Out-of-the-box solutions from AI vendors will have a lower barrier to entry for “pretty much every lab,” he says. “And if medical directors haven’t already been approached by these AI vendors, they will be soon.”
But before buying, it’s important to parse how much access the solution allows for, he says. “How can we monitor these solutions in real time? What change management systems exist, and how are the vendors going to partner with us to validate that solution in our own laboratory and do all the things we want to do to be confident in our rollout or all the things we would consider to be the bare minimum if we were building it ourselves?” FDA approval removes some but not all of the validation burden, he notes. “It will require hands-on testing on your patients in your workflows, and asking laboratories to shoulder the costs for that will put most labs on the build part of the spectrum, if they can build it. My bigger concern is a lot of labs won’t have the resources to build for some time, so it might shut them out of the game entirely.”
Then, too, finding a vendor solution to match the problem is no guarantee, particularly in clinical pathology. “Most of the AP AI projects that go through the FDA and do the entire regulatory process are focused on solving relatively universal problems, like quantifying immunohistochemistry stains or suggesting diagnoses,” he says. “In CP, the diversity in workflows across laboratories makes it harder for the big market players to find universal solutions. So our read on the CP AI world is that laboratories are going to have to start developing these models or workflows themselves.”
From his perspective, this puts the onus on laboratories to make systemwide investments in IT and infrastructure. “If you want the agility to build your own solutions in the decade or so that it will take for vendor options to be available for most of your problems, what is it going to take to get the infrastructure and people in place to have that build option available for you?”
He suggests as a starting point a deep dive into the laboratory’s data infrastructure. “All this gets down to access to data and having the skill sets you need to make sense of that data,” he says. “If you don’t know where your data lives or who has access to it, that’s an easy first step you can take.” And while there may be more layers of governance in a larger organization, “there’s a lot of variation in what laboratorians have access to and where,” even among the larger systems.
The next step: Establishing an analytics database or enterprise data warehouse. “That’s a stepping stone on the AI journey,” but it’s also a safe bet, he says. “You’ll get benefits from a robust data infrastructure regardless of whether you ever implement an AI model in your clinical workflows.”
For some with AI aspirations, satisfaction with the laboratory’s whole slide image management system is a prerequisite.
Spectrum Healthcare Partners is a private pathology practice serving Maine and New Hampshire. The group has used Proscia’s Concentriq AP image management system for the past two years, as it inches closer to becoming a fully digital operation.

Pathologist Bilal R. Ahmad, MD, MBA, who is leading the group’s digital deployment, has considered bringing on an AI-driven prostate cancer solution. “I can tell you that Ibex’s algorithm is very good,” he says. “Paige also has an excellent one.” Most of the products on the market have equivalent sensitivity and specificity in disease detection, he notes. Still, his practice is putting off the investment—at least for now. “We had four pathologists who vetted algorithms independently, and they felt that although the tools were interesting and nice to have, they wouldn’t meaningfully contribute to our efficiency.”
Why? In his view, as more vendors have begun to offer AI-driven solutions for improving diagnostic efficiency, new bottlenecks have emerged. (This applies to the breast cancer algorithms as well, he says.) Manually transferring case data from the AI solution to the native laboratory information system for synoptic reporting, for example, adds considerable time to each case. He also sees pain points in integration and bidirectionality between the AI solution and IMS. “Too many point solutions, too many different windows to manage.”
“Ultimately, it’s not the [AI] vendor’s problem,” Dr. Ahmad says. He hopes, rather, that IMS vendors like Proscia will step up. “One thing I’ve been advising not just the Proscia team on but other vendors is to develop more integrated solutions—what I call embedded intelligence. That means not only generating a heat map as a single point solution but also enabling data from the algorithm to prefill user-defined fields.”
“This is how we need to start thinking about our image management systems,” he adds. “Not just as viewers, but as platforms that manage all pathology input.”
It wasn’t the bells and whistles, in Dr. Ahmad’s words, that sold him on Proscia. It was that the company was willing to entertain his feedback on this issue and others. Technology’s shelf life is brief, he reasoned. “Ultimately, you rely on the strength of the brand behind the technology, because that determines whether they’ll keep up with your evolving needs.”
At Chicago-based Northwestern Medicine, AI capability did not factor into the decision when Jeffery A. Goldstein, MD, PhD, and his colleagues chose PathAI’s AISight for the laboratory’s new IMS and inked a collaboration with the company on joint research initiatives.
“I know it’s funny for a company that has the word AI in the name and a product that has the word AI in the name, but we specifically did not look at AI when we picked a product,” says Dr. Goldstein, director of perinatal pathology and associate professor of pathology. (Dr. Goldstein is leading the PathAI implementation, as well as the laboratory’s overall digital pathology deployment.)
Every product they considered could run algorithms and bring in third-party software or AI developed onsite, “which is clearly of interest to us. But we felt the user experience looking at the slides was the most important thing,” he says.
“I do think AI is going to be extremely valuable,” he clarifies. “I think it’s going to be a huge part of our workflow.”
AI has the potential to speed up tests now underbilled and allow for new laboratory services: AI-driven tests that predict cancer survival, for example, now offered as send-outs, could in time be done in-house. But in choosing a new IMS, Dr. Goldstein and his colleagues were largely concerned with speed and visualization, strengths of PathAI. They also plan to use the product’s conferencing capabilities to conduct quality assurance. “Right now we do QA over Teams or Zoom, and we know the image quality there is not as high as looking at the slide.” AISight wasn’t IT’s top choice, he says. “But the discussion we had was, this is something people are going to be using for six, eight, 10 hours a day, every day. And if you have something that is 100th of a second faster on the AI product side, that’s going to pay off almost immediately.”
The new system will be deployed gradually, Dr. Goldstein says. Not every Northwestern laboratory has digital scanning, and in those that do, not every pathologist signs out cases digitally, though they introduced digital primary diagnosis in late 2024. It will be available to everyone from day one as a link to launch within the LIS, he notes.
Dr. Goldstein sees advantages to the build side of the build-versus-buy debate. “If we’re building AI with our own data…things look the same way on the slide as they do in the training data. We have a direct line to the developer, so we’re able to build the features we want.” And in-house development would involve end users—that is, his colleagues—early enough to ensure the problem being solved is one they really have. But any plans to develop AI in-house are still in flux, he says. “This is something that’s still being solidified.”
Looking ahead, Dr. Goldstein and others see in AI deeper implications for pathology practice.
“There’s this concept called upskilling, where appropriate use of AI can let someone do things they wouldn’t otherwise be able to do, but still supervise [what’s done] and do it in a responsible way,” he says. “And I think building AI that does that is going to be important for the field.”
It’s a subject close to home for Dr. Goldstein, who uses AI in his research to improve fundamental understanding of the placenta. “You don’t hear as much worry today as you did a few years ago among pathologists that ‘the robots are coming for our jobs, that AI is going to put us out of a job,’” he says. “But as a placental pathologist, I hope AI puts me out of a job.”

In the U.S., he explains, placental specialists are in such short supply that only about 20 percent of placentas are assessed by pathologic exam, despite the importance of timely diagnosis for conditions like neonatal sepsis and chorioamnionitis. Training and continuing medical education can help bring nonexperts up to speed, he allows. “But that’s not going to get you all the way there. I think machine learning is the best way to do it.”
Time is another barrier. “Tissue processing, sectioning, all those things take time, and by the time we can observe it and say there is chorioamnionitis or not, often the infant is either already ill and known to be septic, or they’re well and our information doesn’t impact management. Whereas if you could provide the same information one hour after delivery, you have a real chance to make a difference.”
Dr. Goldstein and his research collaborators aim to automate placental examination through machine learning. They’ve developed an algorithm—trained on a data set of about 1,300 placenta images taken at Northwestern Memorial Hospital—that predicts placental morphological characteristics (Chen Y, et al. Comput Med Imaging Graph. 2020;84:101744). A more recent version of the algorithm, Dr. Goldstein says, uses a vision-language contrastive learning approach, which incorporates pathology reports into the training data (Pan Y, et al. Paper presented at: 25th International Conference on Medical Image Computing and Computer Assisted Intervention; Sept. 18–22, 2022; Singapore. doi:10.1007/978-3-031-16437-8_68). “Instead of building one model to look for meconium and one model to look for abnormalities in the umbilical cord, by associating the images with everything in the pathology report, we are potentially able to pull out any abnormality that might be found,” he says.
Could automating placental assessment help contribute to the body of medical knowledge on the placenta? “I think it’s hard to make guarantees about that,” Dr. Goldstein says. The hope is that computers can learn more, and more quickly, than people. Take placental villi, for example. When a pathologist examines a placenta and decides the villi look abnormal, that’s a subjective assessment. But a machine can look at thousands of villi and build up a strong understanding of what’s normal and abnormal, and then associate that with other abnormalities. “Computers are very good at counting and measuring things, and that’s a lot of information we’re leaving on the table,” he says.
Moreover, what pathologists capture now in diagnosis is a small portion of placental variation. “The reasons for that are simple,” he says. “We need things that are reproducibly observable, that are large deviations from the norm, and that we can readily link to something happening clinically.” That’s why conditions like chorioamnionitis and preeclampsia are strongly associated with negative outcomes, but it’s much harder to draw associations between, say, placental health and meeting developmental milestones at three years of age. “It’s just very hard to make that connection.”
He hopes to pilot the AI tool soon. But given his role leading digital pathology at his organization, “I have to look at our algorithm the same way I would look at any other thing we would bring in. How do we see this fitting into our workflow? How disruptive is this going to be? How useful is it?” Another question: Because it’s providing information at a different time than physicians would normally have it, should that information go to the pathologist, or directly to obstetricians? “We want to try all those things out, and I look forward to doing that because that’s going to tell us a lot about the best way to use this technology.”
Dr. Spies, of ARUP, is investigating the role for AI in quality control and detection of preanalytical error.
Internal quality control is the standard of care. “But we also have patient-based QC and moving medians that people are starting to dip their toes into more,” he says. Integrating those two data streams, however, has been a barrier. “There’s a role for AI there,” he says. “AI can integrate those two approaches to give you a better sense of, do I need to take action now, or can I wait for my next standardized QC run, using all the information available to us rather than snapshots or pieces of it here and there.” Dr. Spies and his colleagues are implementing patient-based moving medians in routine clinical workflows, “with a goal to get AI more plugged in throughout the entire process.”
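The patient-based approach Dr. Spies describes can be sketched simply: a moving median over recent patient results holds steady when the assay is in control and drifts when a systematic bias creeps in between standardized QC runs. The analyte, window size, and control limits below are hypothetical, chosen only to make the behavior visible.

```python
from collections import deque
from statistics import median

def moving_median_monitor(results, window=50, lower=135.0, upper=145.0):
    """Flag every patient-based moving median outside the control limits.

    Returns a list of (index, median) pairs; the window, lower, and upper
    values here are illustrative, not recommended settings.
    """
    buf = deque(maxlen=window)
    flags = []
    for i, value in enumerate(results):
        buf.append(value)
        if len(buf) == window and not (lower <= median(buf) <= upper):
            flags.append((i, median(buf)))
    return flags

# Simulated sodium results (mmol/L): a stable run, then a shift that
# suggests an emerging calibration bias between scheduled QC runs.
stable = [140.0] * 60
shifted = [147.0] * 60
flags = moving_median_monitor(stable + shifted, window=50)
```

Because the median lags until shifted results dominate the window, the first flag appears partway into the shifted run; integrating this signal with conventional internal QC, as Dr. Spies describes, is where he sees the role for AI.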
Now in clinical use at ARUP are AI-driven tools he and his colleagues have built to detect preanalytical error, such as IV fluid contamination and wrong blood in tube. “That will help internally in our operations but could have effects more universally as well.”
On the anatomic pathology front, laboratories seeking AI-driven quality improvement tools need not have the technological know-how of Dr. Spies and his colleagues.
Marilyn Bui, MD, PhD, senior member and professor of pathology and scientific director of the analytic microscopy core at Moffitt Cancer Center, recommends several AI-driven workflow and QC solutions. “I think we’re not paying enough attention to these tools,” says Dr. Bui, chair of the CAP Digital and Computational Pathology Committee.

One is Visiopharm’s Qualitopix, which uses AI to monitor immunohistochemical stain quality and consistency. How it works: Using standardized reference material as controls, the laboratory creates a control slide and stains it following routine protocol, scans the control slide, and uploads the image to a cloud-based Qualitopix account. The tool measures staining intensity daily, monitoring the measurements through a Levey-Jennings plot. Any variation from the norm is flagged, allowing the laboratory to act immediately. Laboratories without a full digital setup can still benefit—all that’s needed is one small scanner, the cloud, and the control material, Dr. Bui says.
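The flagging logic a Levey-Jennings plot encodes can be shown in a few lines: compare each day's measurement against limits derived from baseline runs and flag anything outside them. This is a generic sketch of Levey-Jennings monitoring, not Qualitopix's implementation; the intensity readings and the two-SD limit are invented for the example.

```python
from statistics import mean, stdev

def levey_jennings_flags(baseline, daily, n_sd=2.0):
    """Flag daily staining-intensity readings outside mean +/- n_sd SDs.

    The limits come from the baseline runs, mirroring how points outside
    the bands on a Levey-Jennings chart prompt immediate action.
    """
    m, s = mean(baseline), stdev(baseline)
    lo, hi = m - n_sd * s, m + n_sd * s
    return [(day, x) for day, x in enumerate(daily, start=1)
            if not (lo <= x <= hi)]

# Hypothetical intensity readings from the daily control slide;
# days 3 and 4 drift upward and would be flagged for review.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
daily = [100.5, 99.0, 104.0, 110.0]
outliers = levey_jennings_flags(baseline, daily)
```

The point of the daily cadence is exactly this immediacy: variation is caught the day it appears, before a drifting stain affects patient slides.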
Another is DigitCells’ Tissue Workflow Optimization for Digital (TWOD) system. TWOD is a complete preanalytical and grossing solution that creates a digitally tracked chain of custody for biopsies, addressing what Dr. Bui calls “the biggest bottleneck in the digital workflow—preanalytical error.” Its tech-enabled cassette design and digital mapping “eliminate manual data entry errors,” she says, and reduce consumable waste by 70 percent. The system automates the grossing step: an AI-enabled station captures digital images of cores within the cassette, measures them, and logs the data into the LIS. This step reduces tissue handling and fragmentation risk and allows for the deployment of advanced computational pathology algorithms.
As with Qualitopix, Dr. Bui says, TWOD offers advantages even for laboratories without a full digital setup, but to realize the system’s full benefits, especially for high-volume prostate and GI biopsies, a complete digital setup is needed.
There’s a misconception, she says, around AI-driven quality improvement solutions—that the laboratory must already have undergone a full digital transformation to take advantage of them. “It’s totally wrong,” she says.
With much still up in the air, there’s little consensus on how laboratories can get the most bang for their buck when it comes to an AI investment.
Dr. Spies recommends expectation management. “It’s a good question, and unfortunately my answer is going to be a little disappointing. But I think the most important thing we need to learn is to start monitoring costs up front regardless of what AI is going to be involved,” he says. “Having a robust way to assign dollars and cents to each of your workflow steps, either through the FTE or reagent approach or by some other metric, is a valuable first step in trying to identify your high-value use cases.”
It’s best not to assume that introducing AI in any one step of the testing process will reduce FTE workload, he says. “I think that’s just setting us up for disappointment.”
The math on digital pathology is a little more forthcoming. Spectrum Healthcare’s partial digital conversion reduced the organization’s FTE count by about 0.6 percent, Dr. Ahmad says. “We’re seeing gains in productivity by virtue of less travel and more instant access, and that’s without the use of AI,” he says. “The hope is once we have AI there will be a measurable increase in productivity.”
But digital pathology has its own unknowns, as Dr. Goldstein and his colleagues at Northwestern discovered in their search for a new IMS. “One of the vendors we initially talked to went out of business while we were doing our investigation,” Dr. Goldstein says. “We know there’s going to be rapid change in the digital pathology ecosystem, and we have to be prepared for the idea that three years down the road, seven years down the road, we may need a different system.”
Yet there’s a difference between being prepared and staying out of the game entirely.
“If we waited for these systems to settle down, we’d be doing this in 10 years or something. We’d be totally behind everyone,” he says. “So I think we need to adapt. We need to adopt, and we need to adapt.”
Charna Albert is CAP TODAY senior editor.