Editors: Raymond D. Aller, MD & Dennis Winsten
Pathologist develops feedback tool to help trainees with case reports
October 2025—For pathology residents, there are easier tasks than writing a perfect preliminary case report on the first attempt. Nailing Jell-O to a tree, for example, or stapling sunlight to a cloud.
“When the attendings make no changes or just sign out the case as we wrote it, there’s a sense of accomplishment,” says Jingjing Cao, MD, pathology resident in the Departments of Pathology and Laboratory Medicine, University of California San Francisco.
Of course it’s far more common for attending pathologists to make significant corrections to residents’ preliminary reports. And it’s useful for residents to review those corrections for educational purposes. Unfortunately, however, many laboratory information systems overwrite or delete a trainee’s draft report once a case is signed out, and they lack workflows for structured feedback.
That’s why Dibson Dibe Gondim, MD, vice chair of computational pathology and AI, associate professor of pathology and laboratory medicine, and director of genitourinary pathology at University of Louisville School of Medicine, has created customized software that allows residents to easily flag and save the cases they want to review.

“This was the tool that I wished to have when I was training as a resident,” says Dr. Gondim, who introduced an early iteration of the pathology trainee report-feedback system at Louisville three years ago. In 2024, Dr. Gondim began working with Dr. Cao, a former Louisville medical student, to analyze data from the system and thereby improve it.
The software, which is integrated with Louisville’s Oracle Cerner Millennium LIS and Paige digital pathology platform, allows trainees to flag and store cases by simulating a sign-out command, so they can go back and compare their preliminary diagnoses to attending pathologists’ final reports. After finishing a report, the residents simply type the letter R four times and press “enter” to save their draft version in the report-feedback system.
The software is also programmed to send automated summary reports to the residents at 6 PM each day. The reports present trainees’ preliminary reports side by side with the corresponding final reports signed out by faculty, along with links to relevant digital slides. The software also sends end-of-day summaries to the attendings, showing all cases flagged that day by the trainees they supervise.
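In rough terms, the flag-and-digest workflow described above could be sketched as follows. This is an illustrative Python sketch only — the case structure, the in-memory store, and the exact flag handling are assumptions; the production system is integrated with the Cerner Millennium LIS and sends the digest by email rather than returning a string.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the LIS integration.
FLAG_TOKEN = "RRRR"  # typing R four times marks a draft for feedback

@dataclass
class Case:
    accession: str
    preliminary: str = ""
    final: str = ""

def flag_draft(report_text: str, case: Case, store: dict) -> bool:
    """Save the trainee's draft if it ends with the flag token."""
    text = report_text.rstrip()
    if text.endswith(FLAG_TOKEN):
        case.preliminary = text.removesuffix(FLAG_TOKEN).rstrip()
        store[case.accession] = case
        return True
    return False

def daily_digest(store: dict) -> str:
    """Build a side-by-side summary of flagged cases for the 6 PM email."""
    lines = []
    for acc, case in sorted(store.items()):
        lines.append(f"Case {acc}")
        lines.append(f"  Preliminary: {case.preliminary}")
        lines.append(f"  Final:       {case.final or '(pending sign-out)'}")
    return "\n".join(lines)
```

The key design point the article describes is that flagging reuses the trainee's existing report-writing keystrokes, so capturing a draft costs essentially nothing at sign-out time.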
The idea is that residents get in the habit of opening an email to check for discrepancies between the attending pathologist’s final diagnosis and their preliminary diagnosis, Dr. Gondim says. “They can access these in a safe manner from any computer using their digital pathology credentials.”
The software is more efficient than other, similar systems, Dr. Cao says. “Those methods either require the residents to enter a bunch of different filters and then get the final report, or it requires them to copy and paste their diagnosis into a [separate] platform,” she explains. The Epic Beaker LIS, for one, contains functionality similar to that of Dr. Gondim’s offering, but it’s “not as automated and as customized,” Dr. Cao adds.

Louisville’s report-feedback system is not only a boon for residents who want to keep track of their errors in order to learn from them, without having to handwrite information in notebooks, but also a benefit to administrators in search of quantifiable data on trainee accuracy, its developers assert. “The program can assess individual trainee’s performance in different subspecialties, and [attending pathologists] can even track their trainees’ performance over time to see . . . the overall diagnostic accuracy,” Dr. Cao said in a poster presentation on the software at the 2025 United States and Canadian Academy of Pathology annual meeting. “This data-driven approach can also further aid in curriculum development to ensure adequate subspecialty exposure for every trainee to prepare them for future board exams and also for their independent practice in the future,” she added.
To elaborate, Drs. Cao and Gondim cite a study in which they randomly selected a sample of 3,854 cases from 13 trainees and analyzed discrepancies between residents’ preliminary reports and attendings’ final reports. They found an overall accuracy rate of 68.8 percent, with the highest accuracy rate in neuropathology, at 84 percent, and the lowest in placenta, at 46 percent. By analyzing performance not just by resident but also by organ system, a program director could “encourage trainees to review more cases in subspecialties where they demonstrate lower accuracy,” Dr. Cao says.
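The subspecialty breakdown the study describes amounts to grouping concordance outcomes by organ system. A minimal Python sketch, assuming each reviewed case is reduced to a (subspecialty, concordant) pair — the actual study's discrepancy criteria are not spelled out here:

```python
from collections import defaultdict

def accuracy_by_subspecialty(cases):
    """cases: iterable of (subspecialty, concordant: bool) pairs.

    Returns percent accuracy per subspecialty, rounded to one decimal.
    """
    totals = defaultdict(lambda: [0, 0])  # subspecialty -> [concordant, total]
    for subspecialty, concordant in cases:
        totals[subspecialty][1] += 1
        if concordant:
            totals[subspecialty][0] += 1
    return {s: round(100 * c / n, 1) for s, (c, n) in totals.items()}
```

A program director could run the same grouping per trainee over time to spot subspecialties, such as placenta in the study above, where extra case review would help most.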
While use of the software is voluntary, an estimated 80 percent of residents are using it, Dr. Cao found. Within that group, however, usage varies widely: one resident reported using it for more than 700 cases, another for only seven.
Data aside, Drs. Gondim and Cao feel strongly that the software should be used on a voluntary basis and “in a way that always creates a positive environment,” Dr. Gondim says. “What if this tool gets used in a way that’s not productive, but in a way to punish people? In order to avoid that, we give control to the user.”
Dr. Gondim plans to eventually use large language models to make the software even more productive. An LLM could read a resident’s report, compare it to the attending’s report, and calculate the degree of discrepancy. If a resident flags more than one case in a day, the LLM could list those cases according to degree of discrepancy “so people would be able to see very quickly what the cases are that they need to work on,” he says.
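The ranking step Dr. Gondim envisions can be illustrated without an LLM. In the sketch below, a simple text-similarity ratio from Python's standard library stands in for the LLM's discrepancy grade — a crude proxy that the planned model would replace with a semantic comparison:

```python
import difflib

def discrepancy_score(preliminary: str, final: str) -> float:
    """Stand-in for an LLM grader: 0.0 = identical text, 1.0 = fully different."""
    matcher = difflib.SequenceMatcher(None, preliminary.lower(), final.lower())
    return 1.0 - matcher.ratio()

def rank_cases(flagged):
    """flagged: list of (accession, preliminary, final) tuples.

    Returns the cases sorted with the largest discrepancies first, so a
    resident sees at a glance which cases most need review.
    """
    return sorted(flagged, key=lambda c: discrepancy_score(c[1], c[2]),
                  reverse=True)
```
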
Right now, however, software maintenance is the challenge du jour. “Whenever changes occur in the LIS, the Paige digital pathology platform, or network certificates, the configuration must be updated and the system redeployed,” Dr. Gondim explains.
Still, Louisville pathology residents at a recent educational retreat reported positive experiences with the software. Some even reported that using the tool led them to try to do a better job at assessing cases, Dr. Gondim says. “It incentivizes the residents,” he adds, “given that we are going to have a final copy of their report [as a mock sign-out] that they’re not going to be able to change. It becomes similar to how a pathologist practices.”
— Anne Ford
AI tool from Mount Sinai detects and reduces biases in data sets
Researchers at the Icahn School of Medicine at Mount Sinai have developed AEquity, a tool to detect and reduce biases in health care data sets before those data sets are used to train machine-learning algorithms and other forms of artificial intelligence.
In tests of AEquity, in which data from medical images, patient records, and the National Health and Nutrition Examination Survey were applied to a variety of machine-learning models, the tool identified well-known and previously overlooked biases across data sets.
The researchers reported that AEquity is adaptable to machine-learning models that vary widely in power and complexity and data sets of varying sizes and complexity. The tool can be used to assess input data, including laboratory results and medical images, as well as such outputs as predicted diagnoses and risk scores.
“It may be used during algorithm development, in audits before deployment, or as part of broader efforts to improve fairness in health care AI,” according to a Mount Sinai press release.
Details of the researchers’ study of AEquity were recently published in the Journal of Medical Internet Research (Gulamali F, et al. Published online Sept. 4, 2025. doi:10.2196/71757). The study was funded by the National Center for Advancing Translational Sciences and the National Institutes of Health.
Tempus AI purchases Paige
The technology company Tempus AI has acquired the digital pathology firm Paige.
“As we embark upon building the largest foundation model that’s ever been built in oncology, the acquisition of Paige substantially accelerates our efforts,” said Tempus founder and CEO Eric Lefkofsky, in a company press release. “Paige is a leader in digital pathology and has amassed one of the most comprehensive digital pathology data sets in the world through its relationship with Memorial Sloan Kettering Cancer Center. We believe both the Paige team, with their deep generative AI experience, and the data set they have built will [catalyze] all of our AI efforts.”
Paige created an anonymized data set made up of almost 7 million digitized pathology slides through an earlier collaboration with Memorial Sloan Kettering. The company recently released the PRISM2 whole slide foundation model for multimodal AI in pathology and cancer care, which was built on more than 2.3 million whole slide images.
Tempus AI, 800-976-5448
LigoLab and Docus AI partner
The laboratory information systems and lab billing solutions company LigoLab has entered a strategic partnership with Docus AI, a provider of artificial intelligence-driven medical report interpretation.
Under the partnership, LigoLab’s all-in-one informatics platform can automatically push test results from any laboratory testing discipline to Docus.ai, which, in turn, instantly generates narrative summaries and biomarker insights from those results for clinicians and patients.
The digital workflow sends result notifications to patient portals or mobile apps after final validation, with customizable messaging for normal and abnormal results. It also automates appointment reminders for follow-up testing based on result triggers and reduces the need for medical laboratory scientists to draft reports manually.
“This partnership underscores our dedication to advancing AI across the LigoLab platform,” said LigoLab CEO Suren Avunjian, in a company press release.
LigoLab, 800-544-6522
Gestalt Diagnostics and Voicebrook collaborate
Voicebrook has integrated its VoiceOver Pro reporting platform with Gestalt Diagnostics’ PathFlow digital pathology solution, allowing PathFlow clients to navigate images and create reports via speech recognition.
“Pathologists can dictate report sections, complete CAP protocols when required, and seamlessly move between reviewing slides and capturing structured data—all without interrupting their diagnostic flow,” according to a press announcement from Voicebrook.
Gestalt Diagnostics, 509-492-4912
PathAI teams up with Moffitt Cancer Center
PathAI and Tampa, Fla.-based Moffitt Cancer Center have announced a multi-year strategic collaboration to deploy PathAI’s digital pathology platform, AISight Dx, across Moffitt’s pathology programs.
Under the partnership, the entities will collaborate on research, undertake clinical trials and biopharma initiatives, codevelop next-generation artificial intelligence-based diagnostics, and collate real-world multimodal data.
“Adopting cutting-edge digital pathology technologies is key to driving Moffitt’s vision for a true digital transformation, one that unlocks a wealth of pathology data and seamlessly integrates it with imaging and clinical insights to deliver truly personalized patient care,” said William Westra, MD, vice chair of pathology at Moffitt, in a press statement.
AISight Dx is a cloud-native digital pathology image-management system that supports laboratory workflow and translational research use cases.
PathAI, 617-500-8457
Dr. Aller practices clinical informatics in Southern California. He can be reached at rayaller@gmail.com. Dennis Winsten is founder of Dennis Winsten & Associates, Healthcare Systems Consultants. He can be reached at dennis.winsten@gmail.com.