Editors: Liron Pantanowitz, MD, director of anatomical pathology, Department of Pathology, University of Michigan, Ann Arbor, and David McClintock, MD, associate chief medical information officer for pathology, Department of Pathology, University of Michigan.
A computer display QA tool for remote digital pathology
March 2021—A significant benefit of whole slide imaging is the ability to view digital slides remotely. This benefit has been reinforced during the COVID-19 pandemic as pathologists render pathology diagnoses from home. At the same time, the FDA has temporarily relaxed regulations governing the modification of FDA-cleared digital pathology devices and the marketing of devices that are not FDA 510(k) cleared. This contrasts with previous requirements that digital pathology systems use computer displays whose specifications have satisfied regulatory or institutional approval, or both. This, in turn, raises concern about pathologists working in unregulated home settings, where they use a variety of monitors that differ in visual quality and, therefore, in image clarity. The authors, all of whom are from the section of pathology and data analytics, Leeds Institute of Medical Research, University of Leeds, United Kingdom, developed a point-of-use quality assurance (POUQA) tool to address variation in display screen equipment and environmental factors when remotely viewing digital pathology slides. Their free Web-based tool is a psychophysical test that combines prior successful quality assurance tools adopted by pathology and radiology. End users are tasked with performing a minimally intrusive display screen validation. The test requires visual discrimination between colors derived from H&E staining with a perceptual difference of ±1 delta E (dE), a range set by the authors for evaluating the color accuracy of monitors in the absence of globally accepted minimum standards for primary diagnosis using digital pathology. The test was designed to require minimal pathologist effort. After accessing the POUQA pathology tool at www.virtualpathology.leeds.ac.uk/research/systems/pouqa, a user is prompted to identify four randomly generated letters of the alphabet that are faintly displayed on four noncontiguous 7- by 7-cm tiles in shades of purple and white.
If the string of letters is entered correctly, the tool will indicate that the device and environment in which it is being used support the discrimination of contrast with a minimum value of 1 dE, which is considered to be a reasonable level for visually assessing digital slide images. In this way, the test can serve as a computer display self-validation tool for remotely viewing digital slides.
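The delta E metric referenced above is a standard measure of perceptual color difference in CIELAB space. As a rough illustration of what a 1 dE difference means numerically, the sketch below computes the simple CIE76 form of delta E (Euclidean distance in L*a*b*); the Lab values and the choice of the CIE76 formula are illustrative assumptions, not details taken from the article.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two colors in CIELAB space (CIE76 delta E)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical H&E-like purple background and a letter color nudged
# by one unit on the L* (lightness) axis -- values are illustrative only.
background = (60.0, 30.0, -25.0)
letter = (61.0, 30.0, -25.0)

print(round(delta_e_cie76(background, letter), 2))  # → 1.0
```

A monitor and viewing environment that let a user reliably distinguish colors this close together would, in the spirit of the test, support discrimination at the 1 dE level.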
Wright AI, Clarke EL, Dunn CM, et al. A point-of-use quality assurance tool for digital pathology remote working. J Pathol Inform. 2020;11:17. doi:10.4103/jpi.jpi_25_20
Correspondence: Alexander Wright at a.wright@leeds.ac.uk
Challenges of adopting artificial intelligence in anatomic pathology
While artificial intelligence has received much attention over the past few years when applied to anatomic pathology, it is not infallible. Failures can be common when machine learning algorithms are applied to expanded sets of unlabeled pathology image data. Therefore, AI tools for AP should, for the foreseeable future, be viewed as supplementary to pathologists’ practices, serving to augment their operational or diagnostic performance, or both. Integral to the adoption of artificial intelligence in AP is understanding the complicated and variable process of developing AI algorithms. These algorithms should address specific clinical or practical workflow issues, or both, in pathology. Once a problem is identified, the data set used to develop the algorithm must be matched to the problem (for example, micrometastasis identification in lymph nodes) and the algorithm type (for example, a convolutional neural network). Then the data set is labeled, either manually or via the chosen algorithm, and split into multiple subsets to be used in training, testing, and validating the algorithm. This is followed by AI deployment, which has many technical and operational challenges, including acquiring pathologist buy-in; transitioning from glass to digital workflows; determining the proper information technology infrastructure needed, such as cloud-based versus on-premises; integrating AI experience and training into pathology resident or fellow education; determining the proper AI use cases for a practice; assessing how best to clinically validate AI algorithms for diagnostic use; and determining how to integrate AI tools into the clinical workflow of the AP laboratory and, more specifically, the pathologist. The regulatory environment surrounding the use of AI in pathology also poses challenges given the lack of guidance for adaptive technologies that, by design, undergo frequent modifications.
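The labeling-and-splitting step described above can be sketched in a few lines. The split proportions, seed, and file names below are illustrative assumptions; the article does not specify how the subsets should be sized.

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle labeled items and partition them into training,
    validation, and test subsets. Fractions are illustrative;
    the remainder after train and validation goes to test."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Hypothetical labeled whole-slide image tiles:
tiles = [f"tile_{i}.png" for i in range(100)]
train_set, val_set, test_set = split_dataset(tiles)
print(len(train_set), len(val_set), len(test_set))  # → 70 15 15
```

Keeping the three subsets disjoint, as here, is what allows the held-out test set to estimate how the algorithm will perform on slides it has never seen.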
The authors noted that the regulatory landscape, including obtaining FDA approval, European Union CE mark approval, and CLIA validation, is evolving, so no definitive recommendations regarding the validation and clinical use of AI in AP can be made at this time. Yet the authors concluded that AI shows promise for advancing the practice of anatomic pathology.
Cheng JY, Abel JT, Balis UGJ, et al. Challenges in the development, deployment, and regulation of artificial intelligence in anatomic pathology [published online ahead of print November 24, 2020]. Am J Pathol. doi:10.1016/j.ajpath.2020.10.018
Correspondence: Dr. Jerome Cheng at jeromech@med.umich.edu