
Pathology informatics selected abstracts


Editor: Liron Pantanowitz, MD, PhD, MHA, director of anatomical pathology, Department of Pathology, University of Michigan, Ann Arbor.

Relevance of CAP guidelines for validating whole slide imaging for diagnostic purposes in cytopathology

March 2023—Whole slide imaging (WSI) is increasingly being adopted by pathology laboratories worldwide. In 2013, the College of American Pathologists published guidelines on validating WSI for diagnostic purposes, and the CAP updated the recommendations in 2021. The guidelines comprise three strong recommendations and nine good-practice statements. Their purpose is to ensure that a WSI system performs as intended in a particular clinical environment before it is used in patient care. In other words, the validation process is intended to confirm that pathologists can render diagnoses with WSI that are at least comparable to those made with traditional light microscopy and that there are no interfering artifacts or technological risks to patient safety. In this way, the guidelines promote safety, standardization, and the adoption of digital pathology. The application of WSI to cytopathology, however, is beyond the scope of the CAP guidelines, largely because of limitations of the technology for digitizing cytology slides. To address this gap, the authors systematically reviewed the published literature on WSI validation studies in cytology. Their search of the PubMed-Medline and Embase databases retrieved 3,963 articles, of which only 25 met the inclusion criteria and were included in the review. The authors reported that only four (16 percent) of these studies satisfied all three strong recommendations and only nine (36 percent) fulfilled all good-practice statements of the CAP guidelines. Although the CAP guidelines for WSI validation in clinical practice have contributed to the widespread adoption of digital pathology, more evidence is required to support the routine use of WSI for diagnostic purposes in cytopathology. Specifically, additional dedicated validation studies that satisfy all strong recommendations or good-practice statements, or both, are needed to expedite the use of WSI for primary diagnosis in cytopathology.
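At the heart of such a validation is a paired comparison between diagnoses rendered on glass slides and on WSI for the same cases. The following Python sketch is purely illustrative and is not taken from the cited study or the CAP guideline; the case data and function names are hypothetical. It computes intraobserver concordance and flags discordant cases from a set of paired reads.

```python
# Illustrative sketch only (hypothetical data, not the cited study's protocol):
# measure agreement between glass-slide and WSI diagnoses for the same cases.

# Each tuple: (case_id, diagnosis_on_glass, diagnosis_on_wsi)
paired_reads = [
    ("C-001", "adenocarcinoma", "adenocarcinoma"),
    ("C-002", "benign", "benign"),
    ("C-003", "atypical", "benign"),
    ("C-004", "adenocarcinoma", "adenocarcinoma"),
]

def concordance(reads):
    """Fraction of cases in which the WSI diagnosis matches the glass-slide diagnosis."""
    agree = sum(1 for _, glass, wsi in reads if glass == wsi)
    return agree / len(reads)

def discordant_cases(reads):
    """Return discordant cases so they can be reviewed and resolved."""
    return [(cid, glass, wsi) for cid, glass, wsi in reads if glass != wsi]

if __name__ == "__main__":
    print(f"Intraobserver concordance: {concordance(paired_reads):.0%}")
    for cid, glass, wsi in discordant_cases(paired_reads):
        print(f"Discordant case {cid}: glass={glass!r} vs WSI={wsi!r}")
```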

Antonini P, Santonicco N, Pantanowitz L, et al. Relevance of the College of American Pathologists guideline for validating whole slide imaging for diagnostic purposes to cytopathology. Cytopathology. 2023;34(1):5–14.

Correspondence: Dr. Albino Eccher at albino.eccher@aovr.veneto.it

Recommendations for compiling test data sets to evaluate AI solutions in pathology

Developing an artificial intelligence solution for pathology requires large amounts of data, such as a data set of whole slide images (WSIs). In the absence of a large archive of WSIs, these data sets must be created. Such data can be labeled (for example, assigned a diagnosis such as adenocarcinoma) or linked with metadata (for example, pathology tumor stage or patient response to therapy), or both. For supervised learning, a human expert can annotate specific features within images, such as mitotic figures. While many of the images in a data set are used to train algorithms, it is necessary to retain some of them (referred to as hold-out or test data sets) for subsequent analytical validation. Another data set, ideally not derived from the original test set, should be used to gauge the performance of an AI-based model before the algorithm is used in clinical practice or submitted for regulatory approval. Concerns have recently surfaced about how many images are needed to train an algorithm, how to handle low-prevalence subsets, how to recognize biased algorithms, and the limited generalizability of AI systems. To address these concerns, the authors published recommendations for compiling test data sets to evaluate AI solutions in pathology, intended to help AI developers demonstrate the utility of their products and to help pathologists and regulatory agencies verify reported performance measures. Their advice is based on an extensive literature review and input from a committee of stakeholders, including commercial AI developers, pathologists, and researchers. Among their recommendations is that data sets be diverse enough to cover the biologic, technical, and observer variability in a target population of images. Furthermore, test data sets should cover the relevant subsets and be unbiased (for example, by being collected prospectively). They should also be large enough to avoid sampling error, undergo annotation by multiple people or by consensus, and contain real-world rather than purely synthetic or altered images. The authors' guidelines do not offer recommendations on how to collaborate with data donors or on the legal aspects of collecting test data.
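As a rough illustration of the data partitioning described above (training, hold-out test, and an independent evaluation set), the following Python sketch is a hypothetical example rather than the authors' protocol: it keeps one contributing site entirely out of development to serve as an external evaluation set and splits the remaining slides into training and hold-out test sets.

```python
# Illustrative sketch only (hypothetical records, not the authors' recommended workflow):
# partition labeled WSI records into training, hold-out test, and external evaluation sets.
import random

# Each tuple: (slide_id, contributing_site, label)
slides = [
    ("S-0001", "hospital_A", "adenocarcinoma"),
    ("S-0002", "hospital_A", "benign"),
    ("S-0003", "hospital_A", "adenocarcinoma"),
    ("S-0004", "hospital_B", "benign"),
    ("S-0005", "hospital_B", "adenocarcinoma"),
]

def split_by_site(records, external_site, test_fraction=0.2, seed=42):
    """Hold one site out entirely as an external evaluation set, then split the
    remaining slides into training and hold-out test sets."""
    external = [r for r in records if r[1] == external_site]
    internal = [r for r in records if r[1] != external_site]
    rng = random.Random(seed)
    rng.shuffle(internal)
    n_test = max(1, int(len(internal) * test_fraction))
    return internal[n_test:], internal[:n_test], external

train, holdout, external = split_by_site(slides, external_site="hospital_B")
print(len(train), "training,", len(holdout), "hold-out,", len(external), "external slides")
```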

Homeyer A, Geißler C, Schwen LO, et al. Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology. Mod Pathol. 2022;35(12):1759–1769.
