Making a match between pathology, mammography

 

CAP Today

May 2011
Feature Story

Karen Lusky

The findings of a Q-Probes study on the correlation of mammography with pathology reports, released in March, give pathologists reason to be pleased, and they point to a few areas where pathologists could do more follow-up when findings don’t quite add up.

On the upside, the CAP Q-Probes found that pathologists at 48 institutions attempted to correlate all image-guided core needle biopsies, and 95 percent of the time their findings correlated with radiography, says Raouf E. Nakhleh, MD, co-author of the study and professor of pathology at Mayo Clinic, Jacksonville, Fla., and chair of the CAP Quality Practices Committee.

To put that 95 percent figure in perspective, Dr. Nakhleh compares it with the 62 percent correlation rate found in a 1997 Q-Probes study (Nakhleh RE, et al. Arch Pathol Lab Med. 1997;121:559–567). The 1997 study involved lumpectomy specimens for mammographic calcifications rather than core needle biopsies, he points out. And “obviously technology and practices have changed, so you can’t directly compare these results. But the findings demonstrate some progress.”

The new Q-Probes study, in which pathologists from the 48 institutions retrospectively reviewed 30 needle core breast biopsy cases (for a total of 1,399 cases), measured correlation in two ways. “We asked pathologists to judge for themselves whether correlation was present between the pathologic and mammographic findings,” and this yielded correlation 95 percent of the time, Dr. Nakhleh says.

The correlation rate dropped to 83 percent when the expected pathologic diagnoses were simply matched with radiologic findings. Even so, Dr. Nakhleh finds that rate to be reasonable, considering that the study may not have accounted for instances in which a pathologist, for example, might have talked to the radiologist about a case, resulting in correlation.

Reviewing cases at an interdepartmental, multidisciplinary breast conference produced the highest correlation rate: just 1.7 percent shy of perfect. The study found that 89.4 percent of the institutions have such conferences. Yet, overall, only 25.7 percent of cases were discussed at one.

Dr. Nakhleh thinks the case discussion at a conference raises correlation rates, in part, by allowing pathologists and radiologists to clarify more complex cases. For example, he says, the mammogram may show three lesions in the same breast. But unless the pathologist reads the radiology report or talks to the radiologist about the case, he or she won’t know which lesion to correlate with the needle biopsy.

Carefully correlating needle core breast biopsy histology with the imaged lesion can head off the “disaster case,” says pathologist Kenneth Bloom, MD, who is not an author of the study. That’s one where “the biopsy misses the lesion and the report comes back benign, and the woman goes away,” says Dr. Bloom, chief medical officer at GE Healthcare’s Clarient.

Dr. Nakhleh cites the example of a radiology report indicating an ill-defined mass while the pathologist sees a smooth-contoured mass, such as a fibroadenoma. Although a lesion is found, the findings do not correlate, and further investigation is required.

Susan Lester, MD, PhD, chief of breast pathology services at Brigham and Women’s Hospital and head of the Breast Cancer Review Panel for the CAP Cancer Committee, says she learned breast imaging by sitting down with radiologists. Dr. Lester, who is not an author of the Q-Probes, advocates that institutions support staff attendance at the conferences.

Technology could help improve participation, in Dr. Bloom’s view. He predicts that the increasing availability of digital radiology and digital pathology might soon allow virtual conferencing of all breast biopsies.

Other opportunities for improving communication are the pathology report itself and the requisition.

This Q-Probes collected only limited data on the practice of documenting a lack of correlation in the pathology report. But it is good practice for pathologists to provide such documentation, says Q-Probes co-author Michael Idowu, MD, MPH, director of breast pathology and quality management, Division of Anatomic Pathology, Virginia Commonwealth University Health System.

Providing such information on reports is important, Dr. Idowu says, “because you never know who may need to act on the pathology report(s). It may be a primary care physician who may not have time to figure out whether there is radiologic-pathologic correlation.”

Dr. Lester agrees it’s helpful for the pathology report to note lack of correlation with the imaged lesion. “It’s up to the radiologist to go back and further investigate what might have happened,” she says. Sometimes the radiologist may not have adequately conveyed the type of lesion. “The lesion might have been described as an irregular mass when it was actually better described as an ill-defined density,” she notes.

“In other cases, the lack of correlation could mean that the lesion was missed.”

According to the data analysis, “Most of the biopsies were performed for radiologic findings of ‘mass not otherwise specified’” (34.1 percent), “calcifications not otherwise specified” (21.5 percent), “calcifications with specific pattern and/or distribution” (11.6 percent), “mass with smooth contours” (7.6 percent), “spiculated mass” (7.6 percent), and “new calcifications” (4.6 percent).

To determine correlation between histology and imaging, Dr. Lester says she needs to know whether the targeted lesion was calcifications or a mass and, if a mass, whether it was irregular or circumscribed. It can be helpful to have a requisition form for core needle biopsies with a menu of options that the radiologist can circle, she says.

In the Q-Probes, 52 percent of the biopsies were guided by mammography, 46.4 percent by ultrasound, and 1.2 percent by MRI. All of the lesions were imaged initially using mammography, Dr. Nakhleh says.

“Ultrasound is used for masses and may provide a better estimate of size,” Dr. Lester says. “Stereotactic biopsies could be for calcifications or a mass.” When either of those modalities is used, she adds, pathologists can expect the shape of the mass on imaging to correspond to what they are seeing under the microscope. For example, almost all masses with irregular borders on mammography are invasive cancer, though rare benign lesions also can cause irregular masses, she says.

MRI is a different story in that regard because “the shape of a mass shown by enhancement reflects the blood flow going into it, so the correlation with the pathologic shape of the mass is much lower,” Dr. Lester says. “A lesion can be circumscribed but the blood flow coming into the lesion makes the margins look irregular.”

Also helpful is the description of the calcifications on mammography. A linear distribution, Dr. Lester says, is almost always ductal carcinoma in situ. Rarely, linear calcifications have other causes—for example, “secretions in a duct that look linear.” But the key information the pathologist needs to know is that the lesion sampled contained calcifications, she says.

The Q-Probes study took a look at whether separating core biopsies that contain calcifications from those that do not significantly affected correlation rates. The answer was no. Dr. Idowu says, “While we only have data on 42 percent of the cases, 65 percent of these separated out the cores with calcifications in some way.” Some experts believe that’s helpful to do, he notes, but others point out that pathologists are supposed to examine all of the cores thoroughly anyway.

Dr. Bloom notes that when calcifications identified by the radiology report can’t be found in the tissue, the question, of course, becomes: What happened to them? “Did they dissolve during processing? Are they there but not visible on the H&E slide? Are they still in the paraffin-embedded block waiting to be sectioned? That’s why many pathology departments insist on receiving the cores with the calcifications separately. If no calcifications are identified on the microscopic slides, the tissue block can be x-rayed to ensure that the calcifications have been sampled, and if not, additional levels can be prepared.”

The study’s participants were asked how they managed specimens that did not initially appear to contain the calcifications identified by the radiology report. The answers ranged from doing no further workup before signing out the case (4.3 percent) to various levels of effort to find them. The largest group (36.2 percent) said they x-ray the paraffin tissue block(s) and then cut additional sections if calcifications are present on the specimen x-ray images. Other responses were: “cut deeper sections until calcifications are identified without tissue block(s) x-ray” (27.7 percent); “level through the tissue block(s) without tissue block x-ray” (19.1 percent); “cut a specific number of deeper levels and verify the case if there is still no calcification” (4.3 percent); and an unspecified “other” category (8.5 percent).
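For readers who want the logic laid out explicitly, the sketch below models the most commonly reported approach as a simple decision function. It is an illustrative Python sketch only; the function name and its inputs are hypothetical and are not drawn from the study or from any laboratory information system.

def calcification_workup(calcs_on_slides: bool, calcs_on_block_xray: bool) -> str:
    """Hypothetical sketch of the most commonly reported approach:
    if the calcifications targeted by the radiologist are not seen on
    the initial H&E slides, x-ray the paraffin block(s) and cut
    additional levels only if the specimen x-ray shows calcifications
    remain in the block."""
    if calcs_on_slides:
        return "Sign out: calcifications confirmed on H&E."
    if calcs_on_block_xray:
        return "Cut deeper levels and re-examine for calcifications."
    # Calcifications absent from slides and block x-ray: document the
    # lack of correlation so the radiologist can investigate further.
    return "Report discordance and recommend radiologic-pathologic review."

# Example: calcifications not on the first slides but still in the block.
print(calcification_workup(calcs_on_slides=False, calcs_on_block_xray=True))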

To keep calcifications from falling out of the biopsy tissue, it can be helpful for radiologists to wrap the core biopsies in tissue paper and to place them in a cassette before putting the specimen in formalin, Dr. Lester says. “Otherwise, whoever is processing the tissue has to be meticulous in checking the cap and so on to make sure they aren’t missing anything,” she says, noting, “It’s very easy for small fragmented specimens to be difficult to find.”

The Breast Imaging Reporting and Data System, or BI-RADS, score can provide pathologists with a way to estimate the risk of malignancy. The study found that the BI-RADS score was provided in the pathology requisition sheets in 41 percent of the cases, Dr. Idowu says.

When the scores were provided, “3.2 percent, 79.5 percent, and 12.9 percent of the biopsies were performed for BI-RADS 3, 4, and 5, respectively, with 1.4 percent of the biopsies performed for BI-RADS score 2,” the authors write in their analysis of the data.

The study did not attempt to correlate biopsy histology with BI-RADS scores. But the likelihood of malignancy for BI-RADS 3 is two percent, Dr. Bloom says; for BI-RADS 4, it’s two percent to 95 percent; and for BI-RADS 5, the risk exceeds 95 percent.
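To make those categories concrete, here is a minimal Python sketch of the likelihood-of-malignancy ranges Dr. Bloom cites; the mapping and function name are illustrative only and are not a substitute for the ACR BI-RADS atlas.

# Approximate likelihood-of-malignancy ranges for the BI-RADS categories
# cited above (illustrative values only, expressed as fractions).
BIRADS_LIKELIHOOD = {
    3: (0.00, 0.02),  # probably benign: about 2 percent
    4: (0.02, 0.95),  # suspicious: roughly 2 to 95 percent
    5: (0.95, 1.00),  # highly suggestive of malignancy: over 95 percent
}

def likelihood_range(birads_score: int) -> tuple:
    """Return the (low, high) fraction of lesions expected to be malignant."""
    return BIRADS_LIKELIHOOD[birads_score]

print(likelihood_range(4))  # (0.02, 0.95)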

Of course, a lower BI-RADS score doesn’t rule out a pathologist finding a cancerous lesion that did not show up on the mammogram. “Sometimes,” Dr. Nakhleh says, “you see subtle things like a growth within a duct that’s not calcified,” which the mammogram will not detect. “On occasion, the mammographic finding is a smooth contoured mass and the pathologic diagnosis is fibroadenoma. But adjacent to the fibroadenoma is ductal carcinoma in situ which was not detected on mammographic examination.”

While the BI-RADS scores may not be on the pathology requisition, Dr. Idowu says, most pathologists participating in the study (80.3 percent) reported they had access to electronic clinical history that included radiologic findings. (In 93.7 percent of cases, the clinical history included the radiographic findings.)

However, Dr. Idowu says, less than half of the pathologists participating in the study said they reviewed the radiologic findings before signing out the cases.

In addition, the patient’s actual breast imaging was available for pathologists to review 73.4 percent of the time. But only 22.3 percent of pathologists looked at it before verifying pathology reports.

“While reviewing the radiologic images/reports is a good practice, there is no indication that this practice significantly improves the correlation rates,” the authors say in the data analysis.

In explaining that finding, which the study didn’t probe, study co-author Lindsay Hardy, MD, a pathology resident at Beth Israel Deaconess Medical Center in Boston, says the pathologists may sign out a case without reviewing the electronic clinical record and imaging when the histology correlates “perfectly well” with information provided on the requisition. But when pathologists find that a case doesn’t correlate or they don’t get the clinical information required, they will take a look at the radiology report or the breast imaging, or both, she says. “Some may review the radiology report without reviewing the images.”

The Q-Probes uncovered a couple of findings that do not match conventionally held expectations.

For one, Dr. Idowu says, correlation rates did not vary significantly based on whether an institution had a designated breast pathologist, a finding the Q-Probes authors cannot explain. One possibility, the authors write in their data analysis: Institutions may have pathologists with interests in breast pathology who do not necessarily have the designation of breast pathologist.

Furthermore, Dr. Idowu says, pathologists may have a second pathologist review some of the difficult/borderline cases without documenting that information in the pathology report.

The study also did not show that correlation rates were significantly better or worse based on whether patients had digital or film-based mammograms. “Many radiologists believe that digital mammography is better at cancer detection than film-based mammography,” Dr. Idowu says, noting this belief remains “somewhat controversial.”

And while the study did not compare cancer detection rates of digital versus film-based mammography, it is reasonable to assume that the interpretation of the images is critical regardless of the imaging techniques used, he says.

“As pathologists, we are correlating the interpretation of the imaging with the histologic findings.”


Karen Lusky is a writer in Brentwood, Tenn.