Editors: Raymond D. Aller, MD, & Dennis Winsten
Vanderbilt making strides with 3D scanning project
November 2024—A picture may be worth a thousand words, but a conventional two-dimensional photograph of a surgical specimen can convey only so much information to a pathologist or surgeon. Overall geometry and margin status are difficult to interpret from static, single-perspective images.
A team at Vanderbilt University Medical Center, led by Michael Topf, MD, is addressing this issue using a 3D protocol for surgical pathology. The protocol yields “a 3D image file, which allows the user to move and manipulate and rotate and tilt the specimen on your screen,” says Dr. Topf, assistant professor, Department of Otolaryngology–Head and Neck Surgery, at Vanderbilt.

The team first used 3D scanning for head and neck specimens, as noted in a Journal of Pathology Informatics article (Perez AN, et al. Published online Jan. 2, 2023. doi.org/10.1016/j.jpi.2022.100186).
“Since the anatomy is complicated and the specimens are pretty small, it was a sweet spot for starting to use this technology in pathology,” says James Lewis Jr., MD, who worked on the project with Dr. Topf while at Vanderbilt. (Dr. Lewis is now professor of laboratory medicine and pathology and lead for the Head and Neck Center, Mayo Clinic, Scottsdale and Phoenix.)
A majority of head and neck cancers at Vanderbilt are now being 3D scanned, along with numerous extremity sarcoma cases. “I have dreams of expanding to other solid malignancies,” says Dr. Topf, “but we’re trying to do this in a methodical and thoughtful manner. We have done a couple breast cancer cases, and that’s gone well.”
Dr. Topf’s group uses a commercial 3D scanner and its companion software. They scan the ex vivo specimen in the pathology lab in two phases: first the scanner platform rotates through eight 45-degree turns to capture a full revolution; then the specimen is flipped to expose the opposite surface and the capture is repeated. The two point clouds are aligned into a single model, which is exported to a computer-aided design workspace in the 3MF file format, a format developed specifically for 3D printing.
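Merging the two half-scans is, at its core, a rigid point-cloud registration problem. The following is a minimal sketch of one standard approach, iterative closest point (ICP) with a Kabsch least-squares step, written in NumPy for illustration only; it is not the Vanderbilt team's software, and commercial scanner packages use far more robust variants of this idea.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=50):
    """Alternate closest-point matching and rigid fitting; return aligned copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbor correspondences (fine for small demo clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

Given a small initial misalignment, the loop converges to the transform that superimposes the two surfaces, after which the clouds can be fused into one model for export.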

After overnight formalin fixation, a pathology assistant routinely processes the specimen while a research assistant annotates the 3D scan. “As the prosector is cutting the specimen, a research team member is following along and using computer-aided design software, similar to Microsoft Paint, to indicate on the 3D model exactly where each cassette was taken,” explains Alexander Perez, MD, a pathologist who worked on the 3D scanning project at Vanderbilt as a resident. (Dr. Perez is now completing a fellowship in soft tissue pathology at the University of Texas MD Anderson Cancer Center.)
The digital ink is matched in color and location to the actual specimen ink. Different colors and patterns are used for perpendicular versus shave or en face sections, and the research assistant assigns each tissue block a letter. “They write that on the 3D model,” Dr. Perez says, “so anyone looking at it would know section 1B came from the lateral side at the midpoint of the mandible, or whatever the particular section may be. We want to be as granular as possible when explaining where a margin could be positive.”
“To see the virtual mapping and know where the specimens are and how they were cut up—it pops your eyeballs out of your head,” Dr. Lewis adds. “It’s the next best thing to holding it in your hand.”
Vanderbilt’s final pathology report integrates the typical reporting style of its Cerner laboratory information system with 2D still images taken from the 3D scan and the 3D specimen map. A printed or onscreen 3D model can be used to augment communication of final surgical margins.
Dr. Perez believes the greatest benefit of 3D scanning is in cases where there are no intrinsic anatomic landmarks.
“You can be much more specific if you have to convey something regarding a margin or a surface of the specimen that was of interest. You can give [the surgeon] a more exact area.”
Dr. Lewis cites numerous benefits, including the usefulness of 3D scanning when margins are close or the surgeon has cut through the tumor, deliberately or not, and has to take additional tissue. Three-dimensional scanning also creates a permanent, easily storable record of the postoperative specimen, he says. Therefore, “you can use it for all manner of discussions. Ultimately you can 3D print it and use it for teaching; you can have a discussion with the patient—[telling them] this is where your tumor was. You can use these images for radiation dosing and correlation with radiographic features or intraoperative photographs.”

Yet the use of 3D scanning in pathology is not without drawbacks. It takes eight to 10 minutes to make the two-part scan, Dr. Perez says. “We have to be mindful of how many specimens the pathology assistants have to process,” Dr. Topf adds. That’s the rationale, he says, for having a separate research team perform the virtual inking alongside their pathology colleagues. “But our data suggests that we don’t slow anything down and this is purely additive. We also don’t lengthen time for permanent sectioning and processing the specimen.”
Furthermore, the off-the-shelf computer-aided design software Dr. Topf’s team uses is not specifically designed for the work they’re doing. “It’s not intuitive; it’s not necessarily easy,” he says. “There is a learning curve.” Becoming adept with 3D scanning and specimen mapping requires “a lot of work and a lot of repetition,” Dr. Perez adds.
Consequently, the Vanderbilt group is creating custom software so pathology assistants eventually will be able to work on tablet computers. They will mark on the 3D model how they are processing the specimen, “and they won’t need to worry about all that dictation,” Dr. Topf says.
Dr. Topf and his colleagues have also recently completed a proof-of-concept trial of the feasibility of incorporating augmented reality into their 3D scanning protocol, allowing them to superimpose a computer-generated image onto the patient.

The new procedure involves communicating frozen section analysis results in an augmented reality environment, with Dr. Topf staying in the operating room while the research team 3D scans and processes the specimen and the research assistant annotates the 3D model to show sites of margin sampling. Then, with Dr. Topf wearing a Microsoft HoloLens augmented-reality headset and the pathologist stationed at his computer, the two of them review the model jointly via a videoconference. “Together we put the holographic specimen into the patient to show its normal anatomic orientation and then review microscopic images that are also uploaded to the HoloLens,” Dr. Topf says. “We have a much more visual and interactive conversation than the standard-of-care telephone call.”
Dr. Topf encourages surgeons and pathologists at other institutions to follow his lead by pursuing the visualization of surgical specimens. “In 2024,” he says, “with all of the graphics, 3D scanning, and different technology that is available and is utilized in other fields, I think we can do better.” —Faith Reidenbach
ASTP publishes findings about lab interoperability
The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT recently released a data brief that provides insights into laboratory participation in health information exchange organizations, or HIOs.
The document, based on a 2023 national survey of leadership at 77 HIOs, reported that four out of five such entities nationwide make laboratory test results available to participating organizations, but the proportion of HIOs that make such data available varies by hospital service area. Furthermore, hospital-based laboratories share test results with HIOs and view or receive data from these organizations at higher rates than other types of medical laboratories.
Approximately one-third of HIO survey participants reported that laboratories have limited or prevented access to health information or its exchange or use. But “while these instances represent impediments in access to data, they cannot be qualified as acts of information blocking without a fact-based case-by-case assessment of the circumstances,” according to the data brief.
The survey respondents reported that independent labs, in particular commercial labs, were significantly more likely to generate impediments to HIOs accessing electronic health information. Ninety-six percent of those surveyed indicated that they encountered such issues with independent labs, followed by nine percent for mobile laboratories and nine percent for public health labs. None of the respondents reported that physician office-based labs limited or refused access to EHI.
Respondents that reported encountering issues accessing lab data noted that labs gave a variety of reasons for limiting or refusing access. The most commonly cited reason (61 percent) was that labs derive no value from serving only as contributors of EHI. Other reasons included laboratories’ position that their reporting obligations end once patient results are returned to ordering providers (52 percent) and labs’ concerns about needing consent from individual providers before supplying EHI, a potentially complex process requiring multiple disclosure forms (48 percent).
The full ASTP data brief, titled “Laboratory Interoperability Through Health Information Exchange Organizations,” is available at www.bit.ly/404sYEx.
PathPresenter secures Series A funding
The global pathology-focused image-sharing platform PathPresenter has closed a Series A funding round, raising $7.5 million from investors to accelerate adoption of the company’s enterprise pathology workflow solution.
“The investment will enable PathPresenter to enhance the features and interoperability of its vendor-agnostic platform, broadening access for hospitals, laboratories, and pharmaceutical companies globally,” PathPresenter founder and dermatopathologist Rajendra Singh, MD, told CAP TODAY magazine.
The pathology platform has more than 50,000 users in 172 countries.
Proscia expands digital platform with tools for AI development
Proscia has released Concentriq Embeddings and the Proscia AI Toolkit to further the use of artificial intelligence via its Concentriq enterprise digital pathology platform.
Concentriq Embeddings, which is integrated into the Concentriq platform, allows pathology and data science teams to generate high-dimensional numerical representations, or embeddings, from whole slide images. The embeddings are derived from the foundation models DINOv2, PLIP, ConvNeXt, and CTransPath. Researchers can select a foundation model based on their specific needs, with applications ranging from image classification and segmentation to risk scoring and multimodal data integration, supporting rapid prototyping and development of large-scale AI models within the Concentriq platform.
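The workflow being automated here follows a common pattern: a whole slide image is split into fixed-size tiles, each tile is passed through a foundation model, and the resulting vectors feed downstream classifiers or risk models. The sketch below illustrates that pattern only; it is not Proscia's API, and a random linear projection stands in for a real backbone such as DINOv2 (all names and dimensions are illustrative assumptions).

```python
import numpy as np

TILE = 224        # common input size for vision backbones (assumption)
EMBED_DIM = 768   # e.g., a ViT-B output dimension (assumption)

def tile_slide(slide, tile=TILE):
    """Split an H x W x 3 image array into non-overlapping tile x tile patches."""
    h, w, _ = slide.shape
    tiles = [slide[y:y + tile, x:x + tile]
             for y in range(0, h - tile + 1, tile)
             for x in range(0, w - tile + 1, tile)]
    return np.stack(tiles)

def embed_tiles(tiles, seed=0):
    """Stand-in 'foundation model': flatten each tile and project to EMBED_DIM.
    A real pipeline would run the tiles through a pretrained network instead."""
    rng = np.random.default_rng(seed)
    flat = tiles.reshape(len(tiles), -1).astype(np.float32) / 255.0
    W = rng.standard_normal((flat.shape[1], EMBED_DIM)).astype(np.float32)
    return flat @ W   # one EMBED_DIM-dimensional vector per tile
```

Once each slide is reduced to an N-tiles-by-EMBED_DIM matrix, downstream tasks such as classification or risk scoring can train on the compact embeddings rather than the gigapixel images themselves, which is what makes the rapid iteration described above possible.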
“This tight integration with [Concentriq’s] existing data infrastructure allows organizations to immediately generate embeddings and rapidly iterate on AI models, cutting development time and enabling faster experimentation,” according to a press release from Proscia.
The newly released Proscia AI Toolkit is a suite of open-source resources to help integrate Concentriq Embeddings into users’ workflows. It includes a Python client for application programming interface integration with Concentriq Embeddings; tutorials paired with Python code in Jupyter notebooks; and a library of helper functions for such tasks as image tiling and organizing API outputs.
Proscia, 215-608-5411
Dr. Aller practices clinical informatics in Southern California. He can be reached at rayaller@gmail.com. Dennis Winsten is founder of Dennis Winsten & Associates, Healthcare Systems Consultants. He can be reached at dennis.winsten@gmail.com.