
No ifs, ands, or buts on IHC assay validation

  1. When a new lot of antibody is opened, “We don’t believe you need to completely revalidate the assay,” says Dr. Fitzgibbons. “We just think you need to confirm that the new antibody is working as expected.” One known positive and one known negative case should suffice.
  2. If there are minor changes to the assay itself—antibody dilution, using the same antibody clone but purchasing it from a different company, or making changes in incubation times—labs should run two known positives and two known negatives. “Slightly more stringent,” as Dr. Fitzgibbons puts it. “But still a fairly easy confirmation that the assay performs as expected.”
  3. When a lab uses an entirely different clone or antibody, the assay needs to be completely revalidated, as if it were a brand-new assay.
  • The committee spent a fair amount of time discussing the best approach for specimens other than routinely formalin-fixed paraffin-embedded tissues, including cytology specimens and decalcified specimens. They eventually decided not to specify the number of cases needed for validation sets. Given the wide variety of cytology specimens, for instance, it was too hard to come up with a number that worked for all situations. Labs do need to take steps to prove that the assays work on the alternative specimen types, however.
  • Tissue microarrays, also known as multitissue blocks, presented another mind-bender. Labs are increasingly using tissue microarrays as an efficient means of validating assays, Dr. Fitzgibbons notes. But does the literature support that? “Our conclusion was these are acceptable specimens for validation purposes for the majority of cases,” he says, “though there are limitations to their use.”
  • A final highlight, says Dr. Fitzgibbons, is also the most obvious. Recommendation No. 14 reminds labs that they need to document all validations and verifications in compliance with regulatory and accreditation requirements.
  • “It’s a no-brainer,” he says. But it’s worth noting because it was the least controversial item when the guideline was put out for public comment.

    Not every recommendation met with such a genial response, which made the workgroup sit up and take notice.

    The guideline’s first incarnation had 18 recommendations. The group winnowed it down after the public comment period, which garnered some 1,000 comments from more than 200 individuals, Dr. Fitzgibbons reports. “We deleted some; we consolidated some others. And we really rewrote quite a few of them.”

    “The guideline was made better by the comment period,” says Dr. Goldsmith, noting that in many cases the feedback focused on practical concerns.

    Loykasek was the only laboratory technologist in the guideline group. As such, she also brought pragmatic views to the discussions. “Sometimes things on paper sound very doable, but in practicality, in the lab, it’s almost impossible,” says Loykasek, who previously was involved in validating new IHC assays at PhenoPath Laboratories, Seattle.

    One lesson from PhenoPath, she says, was that validation requires labs to think about specificity. It’s easy to fall into the trap of looking only for an antibody to stain a specific cell type. “You need to look beyond that—what should this antibody be negative on, and can we prove that it’s indeed negative?” Labs also need to look for cross-reactivity, Loykasek says, given that an antibody will oftentimes stain more than one thing. “See how your tissues are fixed and processed, and what kinds of cross-reactivities you’ll have. And document those.”

    She says that whenever a new antibody came on board at PhenoPath, it was always assigned to one technologist and one pathologist who would do the workup together. Likewise, she says, “Technologists can play a huge role” in helping labs follow the new guideline. At PhenoPath, she says, validation was most successful when technologists were involved and when the process was well organized. “Before they started, they knew how they were going to capture and track their data and had the forms ready.”

    Dr. Swanson urges medical directors to involve all laboratory personnel in the design of validation protocols. “They’re invested in the quality of the lab, and they want to understand why we’re making changes.” That’s another responsibility for the lab director, in fact—making the argument clearly to others in the lab and being receptive to feedback and suggestions. For example, he says, laboratorians might more readily recognize that a 10-positive and 10-negative validation set doesn’t accurately represent the expected stain distribution of a given marker in the clinical population tested in their lab. “Maybe you want to do 12 positives and eight negatives to better reflect that distribution,” Dr. Swanson says. “This is the sort of conversation we’ve had in our laboratory. It gives the laboratory director greater insight into the nuances of the testing environment, and provides a bigger role in the validation process to those who actually run the tests.”

    The feedback during the comment period helped the group reconsider the discretion laboratory directors have in ensuring validation. “We’re basically stressing, more than we had initially, that the lab director has to be responsible for making some of the decisions,” Dr. Fitzgibbons says.

    Commenters also took issue with the numbers used for the validation sets. “People didn’t like having a minimum,” says Dr. Fitzgibbons. Some wanted no number given at all; others said one or two cases should suffice. “We had some individuals comment that as long as you’re doing positive and negative controls, you don’t need to validate your assay, which we of course disagreed with,” says Dr. Fitzgibbons.

    There were also some comments that fell along the “states’ rights” axis. “A lot of the negative comments focused on not having an organization like the CAP tell a lab how to do its business,” says Dr. Fitzgibbons, who adds, “We anticipated that there would be people who don’t like guidelines at all. But there were quite a few comments to that effect.”

    Dr. Swanson offers advice to those naysayers, which, in blunt terms, is: Get used to it. With more interdisciplinary guidelines likely to appear—the ASCO/CAP collaborations on HER2, ER, and PgR testing are prime examples—there will be added pressure on lab directors to more objectively define how they determine the quality of their IHC assays, he says.

    Then there were the comments that revealed a lack of understanding about basic validation tenets. “Some people didn’t recognize it’s a CLIA requirement—they thought it was more a discretionary thing,” says Dr. Fitzgibbons.

    Is it surprising that some labs view validation as optional? “I don’t really know the answer to that, because we were surprised, too,” says Dr. Fitzgibbons. The guideline became more than an attempt to bring order out of chaos. It’s also an effort to build something from nothing. “Some labs weren’t validating their assays at all,” says Dr. Fitzgibbons.

    He and others in the workgroup turn to history for answers—specifically, the history of special stains. Not everyone views, say, a keratin stain as a laboratory test, but rather as a special stain. “With these, we’re usually referring to histochemical stains like trichrome and PAS, stains that have been around for a hundred years,” says Dr. Fitzgibbons. Some pathologists may not view them as tests because they’re stains that permit better assessment of tissue but don’t provide stand-alone results. Some may then reason that validation isn’t needed. But, says Dr. Fitzgibbons, “There are good reasons why that’s not true.”

    Twenty-five years ago, at the dawning of the IHC era, pathologists—who already had plenty of experience doing special stains—didn’t consider the new assays to be all that different. IHC was seen more as a special stain than a quantitative analyte such as serum glucose.

    “We now know, of course, that they’re identifying specific analytes and even quantifying those analytes,” says Dr. Fitzgibbons. IHC tests are different from special stains, in other words, especially with predictive markers, where a single test result can drive therapy. And validation is critical.

    The 2007 HER2 guideline toppled the first domino, asking labs to validate IHC tests like they would any other clinical lab test. “In other words, doing everything the right way,” says Dr. Fitzgibbons.

    Initially, predictive markers were thought to be more important from a validation standpoint, which is partly borne out by the aforementioned CAP survey. Validation of nonpredictive markers was much less consistent, he says. “It’s not like the predictive markers were perfect,” he says. “But clearly we were further along in that category.”

    At the same time, the boundary between predictive and nonpredictive markers is a fluid one, much as it can be hard to define what, exactly, constitutes a molecular test. Some nonpredictive markers are used individually, “and they may make a huge difference,” says Dr. Fitzgibbons. A keratin stain alone might be used on an undifferentiated malignant tumor to identify a poorly differentiated carcinoma; the patient would then be treated for carcinoma, not lymphoma. “It’s not a simple adjunct,” Dr. Fitzgibbons argues. “It’s completely changed how you interpreted the case.”

    As the relationship between the diagnosis and targeted therapy becomes more precise, traditionally nonpredictive lineage-selective markers effectively become predictive in certain clinical settings. So it’s reasonable, Dr. Swanson says, to keep open the discussion about whether a “diagnostic” test poses less risk to a patient than a predictive one. “You can still make that argument in most settings, but it is becoming increasingly difficult as the lines between predictive and nonpredictive markers are blurred,” he says.

    CD117 (c-kit) offers one well-characterized example, says Dr. Swanson, noting the marker is considered both diagnostic of gastrointestinal stromal tumor and predictive, generally, of response to anti-c-kit (Gleevec) therapy.

    The lines could very easily blur even more with the rise in targeted therapies based on molecular and morphologic analysis. The guideline says that for a marker with both predictive and nonpredictive applications, labs should validate it as a predictive marker when it is used as such.

    The guideline doesn’t purport to have all the answers, and, being a guideline, it is by definition something that will be revised. Dr. Swanson is fine with that. “Basically, we want to get our foot in the door and remind laboratories of their responsibility to the patient in providing an assay with reproducibility and high predictive value.” And, he adds, “This was designed, in part, to make it as palatable as possible to a laboratory, allowing it to comply with what we regard as reasonable expectations for developing clinically precise and confident assays.”

    The guideline will, the group hopes, stimulate more research. The need is there, says Dr. Fitzgibbons, noting, “We didn’t have the strength of evidence for most of these [recommendations] that we hoped to find. There isn’t a lot of level one evidence for IHC.”

    Adds Loykasek: “When papers are published on new antibodies, they tend to gloss over how they were validated.”

    The HER2 guideline, once again, could be a good model to follow. “When that was published, there was a bit of an uproar,” Dr. Goldsmith recalls. “And as a result, people started publishing research that addressed the various points of contention, and the guideline changed.” In its first incarnation, for example, the guideline called for a fixation interval of six to 48 hours. “Almost everyone in pathology thought that was too strict, that there were no downsides to testing specimens fixed for longer than 48 hours,” Dr. Fitzgibbons recalls. “But we could not prove it.” Since then, plenty of published evidence has made the case for a longer interval, and the updated guideline recommends a six- to 72-hour interval.

    The IHC guideline isn’t meant to usher in a Day of Wrath for labs. Dr. Swanson notes that when the group began its work, the goals were to re-emphasize the notion that all tests have to be validated and to provide basic guidance for the general immunohistochemistry laboratory.

    “I would hope,” says Dr. Swanson, “that people would look at this as something that will help them do their job well.”

    Karen Titus is CAP TODAY contributing editor and co-managing editor. Jeffrey Goldsmith, MD, will present a CAP webinar on the guideline for analytic validation, to take place April 1 from 11 am to noon CDT. Register at https://www1.gotomeeting.com/register/801536592.
