Clinical Abstracts


September 2011

Editor:
Michael Bissell, MD, PhD, MPH

Evaluating PSA as a marker of prostate cancer risk

Prostate-specific antigen is the most widely used and validated marker of prostate cancer risk. With more than 20 years of experience using PSA, the medical community recognizes that it is a good, if imperfect, marker for prostate cancer risk. Early evaluations of PSA were flawed and overestimated its sensitivity and specificity for predicting prostate cancer because, often, only men with an elevated PSA underwent biopsy. The magnitude of these limitations was illustrated in several series, notably the placebo arm of the Prostate Cancer Prevention Trial, which demonstrated that widely recommended thresholds for biopsy were not very sensitive and would miss many cancers of low and high clinical significance. Furthermore, the lack of specificity of so-called elevated PSA levels is documented in everyday practice, where only one in four men with an elevated PSA has cancer on biopsy. Although several factors can confound the ability of PSA to predict prostate cancer, resulting in low specificity, the main confounder is benign prostatic hyperplasia. One way to eliminate the confounding effect of benign prostatic hyperplasia on PSA, and on PSA increases, is to treat affected men with a 5α-reductase inhibitor (5-ARI). Three prospective, randomized phase III trials showed that 5-ARIs improve the diagnostic performance of PSA. These agents do so by stabilizing the amount of PSA from benign prostatic hyperplasia, and probably from low-grade cancers, that enters the serum. Consequently, a rising PSA in men on a 5-ARI should be more indicative of a clinically important cancer than a rising PSA in untreated men. This was verified in the Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial, in which men with a rising PSA on dutasteride were significantly more likely to have high-grade cancer—that is, Gleason score seven to 10 tumors (13.2 percent)—than men with a rising PSA in the placebo arm of the study (7.7 percent). Alongside this improvement in specificity lies the potential concern of missing high-grade disease. To examine this issue, the authors highlighted data from the REDUCE trial. One study pointed out that 93 high-grade cancers (43 percent) in the dutasteride arm of the REDUCE study would be missed using PSA changes from the post-treatment baseline. These data used month-six post-treatment PSA levels as the baseline for judging subsequent changes in PSA. However, PSA values continue to decrease beyond month six in many men taking dutasteride. If, instead of considering rises from month six, one uses any PSA rise from the nadir PSA value as a reason for biopsy (the recommendation included in the dutasteride label), only 54 of the high-grade cancers (25 percent) would be missed. If men in the placebo arm of REDUCE had been biopsied only according to the National Comprehensive Cancer Network guidelines for using PSA changes—that is, a PSA rise greater than 0.35 ng/mL per year or 0.75 ng/mL per year, depending on baseline PSA level—36 percent of the high-grade cancers would have been missed in the placebo arm. This suggests that, in clinical practice, using PSA rises may lead to fewer missed high-grade cancers in men on dutasteride. Despite this improvement, PSA remains imperfect. Some small-volume, high-grade cancers detectable by study-mandated biopsies leak very little PSA into the serum. This is true in placebo-treated and dutasteride-treated men. Consequently, the medical community is challenged to determine how best to use a good but imperfect test in the real world.
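
The two biopsy-trigger rules contrasted above can be summarized in a short sketch. This is a minimal illustration, not clinical software: the function name is invented, the velocity thresholds follow the NCCN figures quoted in the abstract, and the baseline-PSA cutoff separating the two thresholds is a placeholder, since the abstract does not specify it.

```python
def biopsy_flag_from_psa(history, on_5ari=False, low_baseline_cutoff=4.0):
    """Illustrative biopsy-trigger check based on PSA kinetics.

    history: time-ordered list of (years_from_start, psa_ng_ml) tuples.
    on_5ari: True if the patient is taking a 5-alpha-reductase inhibitor.
    low_baseline_cutoff: placeholder value; the abstract does not state
    which baseline PSA separates the 0.35 and 0.75 ng/mL/yr thresholds.
    """
    times, values = zip(*history)

    if on_5ari:
        # Rule described for the dutasteride label: any rise above the
        # nadir (lowest observed) PSA is a reason for biopsy.
        nadir = min(values)
        return values[-1] > nadir

    # Untreated men: simplified NCCN-style PSA velocity rule.
    years = times[-1] - times[0]
    if years <= 0:
        return False
    velocity = (values[-1] - values[0]) / years   # ng/mL per year
    threshold = 0.35 if values[0] < low_baseline_cutoff else 0.75
    return velocity > threshold
```

For example, for a man on dutasteride whose PSA fell to a nadir of 0.8 ng/mL and then rose to 1.0 ng/mL, `biopsy_flag_from_psa([(0, 1.2), (0.5, 0.8), (1.0, 1.0)], on_5ari=True)` returns True, mirroring the rise-from-nadir recommendation described above.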

Freedland SJ, Andriole GL. Making an imperfect marker better. Eur Urol. 2011;59:194–196.

Correspondence: Gerald Andriole at andrioleg@msnotes.wustl.edu


A multiplex assay for specific allergen extracts

Of more than 1,200 allergenic extracts on the market in the United States for diagnosing and treating allergies, only 19 are standardized; the remainder are marketed without a scientifically valid measure of potency. The biological potencies of most reference-standardized allergen extracts were established using the ID50EAL (intradermal dilution for 50-mm sum of erythema diameters determines the allergy unit) technique. The replacement of these references, and their comparison with manufactured lots, is achieved by one of several surrogate assays. For standardized short ragweed pollen and cat hair extracts, for which there are single predominant allergens, a radial immunodiffusion assay with a monovalent antiserum is used to determine the potency of the extracts relative to their respective standards. When multiple relevant allergens are present, as in dust mite and grass pollen allergen extracts, a competition enzyme-linked immunosorbent assay (ELISA) with human antisera is used to assess the overall allergen content of these products. The competition ELISA is sensitive and specific for determining overall allergen content, but its performance characteristics for each component allergen are uncertain. When an individual allergen is removed from the extract, the polyvalent sera cannot reliably discern its absence. Until each relevant allergen in a mixture is identified, regulators have no choice but to ask manufacturers to measure overall potency. Once the important allergens are determined, measuring them requires changes in technology that can be time-consuming and difficult. The authors conducted a study to develop assays that combine the strengths of radial immunodiffusion and competition ELISA and allow manufacturers and regulators to determine the amounts of specific allergens in a mixture as well as its overall allergenicity. Multiplex platforms have been used successfully for a variety of applications, such as detecting allergen-specific IgE and measuring the amount of allergen in the environment. Following this approach, the authors designed a multiplex allergen extract potency assay (MAEPA). Multiple monoclonal antibodies to the predominant allergens in cat hair and short ragweed pollen allergenic extracts were used to measure the potencies of these extracts. The authors used short ragweed pollen and cat hair extracts because both have been standardized and the major allergen of each is well known. For the study, six anti-Fel d 1 and six anti-Amb a 1 recombinant antibodies were generated and covalently bound to carboxy-labeled beads. Antibody-bound beads were then used to measure Fel d 1 and Amb a 1 levels in commercial cat hair and short ragweed pollen extracts, respectively, using bead-based flow cytometry. These major allergen levels were compared with those obtained using a conventional antibody-based method. Allergen levels were calculated by comparing the half-maximal effective concentrations (EC50) of dose-response curves analyzed using four-parameter fits. Bead-antibody pairs were tested to determine whether additional bead-antibody pairs affected the apparent potency of the extract. The authors found that the allergen content of cat hair and short ragweed pollen extracts determined using the MAEPA and the anti-Fel d 1 and anti-Amb a 1 antibodies was comparable with potencies determined using conventional methods. Cross-interference from the concurrent use of multiple beads was minimal.
The authors concluded that the MAEPA can determine Fel d 1 levels in cat hair allergenic extracts and Amb a 1 levels in short ragweed pollen extracts, with results that are reproducible and consistent with those obtained using conventional methods.
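
For readers interested in the potency calculation itself, the sketch below shows one way EC50 values from four-parameter logistic fits might be compared to yield a relative potency, as described above. It is an illustration under assumptions, not the authors' code: the function names and starting estimates are invented, and SciPy's curve_fit is used here simply as a generic curve-fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def fit_ec50(doses, responses):
    """Fit the 4PL curve and return the EC50 (half-maximal) estimate."""
    doses = np.asarray(doses, dtype=float)
    responses = np.asarray(responses, dtype=float)
    # Crude but serviceable starting values for the four parameters.
    p0 = [responses.max(), responses.min(), float(np.median(doses)), 1.0]
    popt, _ = curve_fit(four_pl, doses, responses, p0=p0, maxfev=10000)
    return popt[2]

def relative_potency(doses, resp_test, resp_ref):
    """Relative potency of a test extract versus a reference standard,
    taken as the ratio of their half-maximal effective concentrations."""
    return fit_ec50(doses, resp_ref) / fit_ec50(doses, resp_test)
```

Under this convention, a relative potency greater than 1 means the test extract reaches its half-maximal response at a lower concentration than the reference standard.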

DeVore NC, Huynh S, Dobrovolskaia EN, et al. Multiplex microbead measurements for the characterization of cat and ragweed allergen extracts. Ann Allergy Asthma Immunol. 2010;105:351–358.

Correspondence: Dr. Jay E. Slater at jay.slater@fda.hhs.gov


A novel prognostic biomarker for Alzheimer’s disease

Clinicopathological studies suggest that the pathological hallmarks of Alzheimer’s disease, amyloid plaques and neurofibrillary tangles, begin to appear approximately 10 to 20 years before the synaptic and neuronal loss that accompany the onset of dementia. Identifying and treating individuals during this preclinical stage will maximize the benefit from disease-modifying therapies. By definition, this preclinical phase of Alzheimer’s disease (AD) will elude detection by conventional clinical examination and, therefore, require the use of biomarkers for diagnosis. Beyond diagnosis, biomarkers also may provide prognostic information and facilitate the monitoring of disease progression and response to treatment. Furthermore, novel biomarkers may advance the medical field’s understanding of Alzheimer’s disease pathophysiology and thereby influence future treatment strategies. Because many proteins expressed in the brain are present in cerebrospinal fluid, the CSF proteome is a logical source of potential AD biomarkers. CSF amyloid-β42 (Aβ42), tau, and phosphorylated forms of tau have already shown promise in AD diagnosis and prognosis. Nevertheless, a need exists for supplemental biomarkers that represent different aspects of AD pathophysiology and can improve diagnosis and prognosis at early disease stages. To identify additional CSF biomarkers for early AD, the authors used two-dimensional difference gel electrophoresis (2-D DIGE) in conjunction with liquid chromatography-tandem mass spectrometry (LC-MS/MS) to find proteins that increase or decrease in individuals with early AD relative to age-matched, cognitively normal subjects. One protein found to be significantly more abundant in the CSF of Alzheimer’s disease patients, YKL-40 (also known as chitinase-3-like-1, human cartilage glycoprotein-39, and chondrex), is a secreted 40-kDa glycoprotein with sequence homology to bacterial and fungal chitinases and chitin-binding ability but no chitinase activity. Reports suggest YKL-40 plays a role in inflammation and tissue remodeling and is upregulated in the AD brain, but its physiological function remains unclear. Nevertheless, plasma/serum or CSF levels of YKL-40 have been proposed as candidate biomarkers for arthritis, asthma, multiple sclerosis, and myriad cancers. The authors conducted a study to evaluate the potential of CSF and plasma YKL-40 as diagnostic and prognostic biomarkers for AD, and they used immunohistochemistry to investigate the sources of YKL-40 in the brains of subjects with Alzheimer’s disease. The authors measured CSF YKL-40 by enzyme-linked immunosorbent assay in a discovery cohort (n=47), validation cohort (n=292), frontotemporal lobar degeneration cohort (n=9), and progressive supranuclear palsy (PSP) cohort (n=6). Immunohistochemistry was performed to identify sources of YKL-40 in the human AD brain. The authors found that, in the discovery and validation cohorts, mean CSF YKL-40 was higher in subjects with very mild and mild AD-type dementia (Clinical Dementia Rating [CDR], 0.5 and 1) than in control and PSP subjects. The CSF YKL-40/Aβ42 ratio predicted risk of developing cognitive impairment (conversion from CDR 0 to CDR greater than 0). Mean plasma YKL-40 was higher in CDR 0.5 and 1 subjects than in CDR 0 subjects and correlated with CSF levels. YKL-40 immunoreactivity labeled astrocytes near a subset of amyloid plaques, implicating YKL-40 in the neuroinflammatory response to Aβ deposition.
The authors concluded that these data demonstrate that YKL-40, a putative indicator of neuroinflammation, is elevated in AD and, together with Aβ42, has potential prognostic utility as a biomarker for preclinical AD.
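As a worked note on the prognostic index, the YKL-40/Aβ42 ratio is a simple quotient of the two CSF measurements. The sketch below computes it; the function names are invented, and the cutoff is purely hypothetical, since the abstract reports no diagnostic threshold.

```python
def csf_ykl40_abeta42_ratio(ykl40, abeta42):
    """CSF YKL-40/Abeta42 ratio, the index that predicted conversion
    from CDR 0 to CDR > 0 in the study. The two analytes need only be
    reported in consistent units across subjects."""
    return ykl40 / abeta42

def flags_elevated_risk(ykl40, abeta42, cutoff=0.5):
    """Illustrative risk flag. 'cutoff' is a placeholder, not a value
    from the study; any real threshold would be cohort-specific."""
    return csf_ykl40_abeta42_ratio(ykl40, abeta42) > cutoff
```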

Craig-Schapiro R, Perrin RJ, Roe CM, et al. YKL-40: a novel prognostic fluid biomarker for preclinical Alzheimer’s disease. Biol Psychiatry. 2010;68:903–912.

Correspondence: David M. Holtzman at holtzman@neuro.wustl.edu


Three ways of estimating baseline creatinine for RIFLE classification

Acute kidney injury is a common condition in critical illness and is associated with a significantly increased risk of death. This type of injury is classified using RIFLE criteria, although a slight modification recently was proposed. RIFLE classification provides three grades that reflect the increasing severity of acute kidney injury (AKI)—risk, injury, and failure—and two outcome classes—loss and end-stage kidney disease (from which the acronym is derived). The severity grades for AKI are based on urine output or changes in serum creatinine from baseline. However, many patients may present with an elevated creatinine but without a baseline measure of renal function. When the Acute Dialysis Quality Initiative initially proposed the RIFLE criteria, it also proposed that, for patients without recognized chronic kidney disease, an estimated baseline creatinine could be obtained by solving the four-variable Modification of Diet in Renal Disease (MDRD) equation for a low-normal glomerular filtration rate (GFR) of 75 mL/min./1.73 m2. Most current data concerning AKI defined by RIFLE criteria rest on this assumption that baseline creatinine can be estimated by solving the MDRD equation using a low-normal GFR of 75 mL/min./1.73 m2. However, this assumption may not be valid. The MDRD formula was derived from a population of outpatients with renal disease. It is considered to perform poorly in patients with GFR above 60 mL/min./1.73 m2, severely ill hospitalized patients, and malnourished patients. Furthermore, scant data exist on GFRs—whether determined by estimation equations or direct measurement—in critically ill patients. The authors sought to test the performance of the MDRD-based approach and explore alternative methods of estimating unknown baseline creatinine using data from geographically distinct cohorts of critically ill patients. They analyzed four cohorts of intensive care unit patients from three centers—two cohorts from the University of Pittsburgh Medical Center and one cohort each from the Mayo Clinic and Austin Hospital, Melbourne, Australia. Three of the cohorts consisted of preselected patients without AKI (Pittsburgh 1, n=1,048; Mayo, n=737; Austin, n=333). Creatinine values measured in these cohorts represented baseline creatinine values. The fourth cohort (Pittsburgh 2, n=468) consisted of unselected ICU patients with baseline creatinine values recorded within one year of admission to the intensive care unit. Using the Pittsburgh 1 cohort, the authors derived an equation using the same anthropometric variables as the MDRD equation: baseline creatinine = 0.74 − 0.2 (if female) + 0.08 (if black) + 0.003 × age (in years). They then compared the measured creatinine in the Mayo and Austin cohorts, and the recorded creatinine in the Pittsburgh 2 cohort, with the creatinine estimated from the MDRD equation, from their new equation, and from a gender-fixed creatinine of 0.8 mg/dL for females and 1.0 mg/dL for males. The authors found that, with any of the three methods, the median absolute error of the estimates was on the order of 0.1 to 0.2 mg/dL, and overall accuracy was similar. When the definition of AKI was limited to the severity grades of injury and failure, all three methods generated reliable results in 78 percent to 90 percent of the preselected normal-range cohorts and 63 percent to 70 percent of the unselected cohort of ICU patients.
The authors concluded that estimates of the incidence of AKI in critically ill patients using RIFLE classification can be affected by the bias and limited accuracy of the methods used to estimate baseline creatinine. Whenever possible, recorded creatinine values should be used as the baseline reference. When baseline creatinine is unknown, using the MDRD equation to estimate it may overestimate or underestimate some mild (risk grade) AKI cases but is unlikely to misclassify patients at the injury and failure grades.
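
The three estimation methods compared in the study can be written out directly, as in the sketch below. The regression coefficients and gender-fixed values come from the abstract itself; the MDRD coefficients (186, −1.154, −0.203, 0.742, 1.212) are the commonly published four-variable values and are assumed here, since the abstract does not list them.

```python
def baseline_cr_mdrd(age, female, black, gfr=75.0):
    """Back-calculate baseline serum creatinine (mg/dL) by solving the
    four-variable MDRD equation for an assumed low-normal GFR of
    75 mL/min/1.73 m^2, as proposed by the Acute Dialysis Quality
    Initiative. Assumed form of MDRD: GFR = 186 * Scr^-1.154 *
    age^-0.203 * 0.742 (if female) * 1.212 (if black)."""
    factor = 186.0 * age ** -0.203
    if female:
        factor *= 0.742
    if black:
        factor *= 1.212
    return (gfr / factor) ** (-1.0 / 1.154)

def baseline_cr_regression(age, female, black):
    """The authors' regression derived from the Pittsburgh 1 cohort:
    baseline Cr = 0.74 - 0.2 (if female) + 0.08 (if black)
    + 0.003 * age (in years)."""
    return (0.74 - (0.2 if female else 0.0)
            + (0.08 if black else 0.0) + 0.003 * age)

def baseline_cr_fixed(female):
    """Gender-fixed estimate: 0.8 mg/dL for females, 1.0 for males."""
    return 0.8 if female else 1.0
```

For a 60-year-old white man, for example, the MDRD back-calculation yields roughly 1.07 mg/dL while the regression yields 0.92 mg/dL, illustrating how the choice of method shifts the baseline against which RIFLE creatinine changes are judged.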

Zavada J, Hoste E, Cartin-Ceba R, et al. A comparison of three methods to estimate baseline creatinine for RIFLE classification. Nephrol Dial Transplant. 2010;25:3911–3918.

Correspondence: John A. Kellum at kellumja@ccm.upmc.edu


Blood concentrations as predictors of chloroquine poisoning severity

Chloroquine is responsible for rare but life-threatening poisonings. The incidence of intentional chloroquine ingestion for suicide has remained elevated in France since the publication of a well-known suicide how-to guide, as well as in malaria-endemic countries. Chloroquine toxicity is attributed to sodium-channel inhibition, resulting in intraventricular conduction blockade, ventricular rhythm disturbances, and cardiovascular collapse. Early recognition of prognostic factors is essential because the only effective treatment for severe poisonings is a combination of epinephrine, mechanical ventilation, and diazepam. Criteria associated with risk of fatal outcome include an ingested dose of 5 grams or more, systolic blood pressure of 80 mmHg or less, and a QRS complex duration of 0.120 seconds or more on electrocardiogram (EKG). The peak blood chloroquine concentration has been shown to predict increased mortality in suicidal ingestions, although cases of survival in patients with levels above 10.5 mg/L (25 µmol/L) and deaths in patients with lower levels have been reported. In contrast to the situation with most other toxicants, the prognostic value of plasma chloroquine concentrations has been poorly investigated. Conflicting data exist regarding the prognostic value of sodium-channel blocker concentrations. Serum carbamazepine levels of 170 µmol/L or more are significantly associated with an increased risk of serious complications, such as coma, seizures, respiratory failure, and cardiac conduction defects. However, no significant correlations have been found between plasma carbamazepine concentrations and heart rate or PR, QRS, or corrected QT intervals on EKG. Interestingly, plasma tricyclic antidepressant levels have also failed to predict risk of ventricular arrhythmias in poisoning. Therefore, to investigate the prognostic value of blood and plasma concentrations in acute chloroquine poisoning, the authors designed a prospective study based on modeling of the concentration/effect relationships. They performed a prospective study of consecutive patients with chloroquine poisonings admitted to an intensive care unit from 2003 to 2007 and simultaneously measured blood and plasma chloroquine (chloroquine and desmethylchloroquine) concentrations. A population pharmacokinetic-pharmacodynamic model described epinephrine infusion rate, the authors’ surrogate marker of cardiovascular toxicity, as a function of blood or plasma chloroquine concentrations. The study included 44 patients (29 female and 15 male; median age [25th–75th percentile], 33 years [25–41]); 34 percent had experienced cardiac arrest. Management included mechanical ventilation (80 percent), 8.4 percent sodium bicarbonate (66 percent), epinephrine (73 percent; maximal rate, 2.8 mg/hour [0.8–5.0]), and extracorporeal life support (16 percent). Seven patients died. Blood (6.7 mg/L [4.0–13.0]) and plasma (1.5 mg/L [1.2–2.9]) chloroquine concentrations were weakly, although significantly, correlated (r=0.66; P<0.0001, Spearman test). Admission chloroquine concentrations correlated with the reported ingested dose (r=0.70 for blood versus 0.48 for plasma), QRS duration (r=0.82 versus 0.64), lactate concentrations (r=0.63 versus 0.47), and epinephrine infusion rates (r=0.70 versus 0.62). Chloroquine concentrations differed significantly between patients who did and did not experience cardiac arrest (P=0.0002 for blood versus 0.02 for plasma). A one-compartment pharmacokinetic model adequately described blood chloroquine concentrations, and an effect compartment linked to the blood compartment adequately described plasma chloroquine concentrations. With a sigmoidal Emax pharmacodynamic model, the epinephrine infusion rate was better predicted by blood than by plasma concentrations (P<0.01), suggesting that the time course of blood concentrations has greater prognostic value than that of plasma concentrations.
The authors concluded that immediate and serial measurements of blood chloroquine concentrations are better than plasma measurements for predicting the cardiovascular severity of chloroquine poisonings.
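
The model structure the authors describe, a one-compartment kinetic model for blood and a sigmoidal Emax model linking concentration to the epinephrine infusion rate, can be sketched in a few lines. This is a schematic under assumptions: the function names are invented, all parameter values are hypothetical placeholders that would come from population fitting, and the effect-compartment link for plasma concentrations is omitted for brevity.

```python
import math

def blood_conc_one_compartment(dose_mg, vd_l, ke_per_h, t_h):
    """Blood chloroquine concentration (mg/L) at time t_h (hours) under
    a one-compartment model with first-order elimination after a bolus;
    absorption is treated as instantaneous for illustration only."""
    return (dose_mg / vd_l) * math.exp(-ke_per_h * t_h)

def epinephrine_rate_sigmoid_emax(conc, e0, emax, ec50, hill):
    """Sigmoidal Emax model linking chloroquine concentration (mg/L) to
    the epinephrine infusion rate (mg/h), the authors' surrogate marker
    of cardiovascular toxicity. e0 is the baseline rate, emax the
    maximal increment, ec50 the half-effect concentration, and hill
    the sigmoidicity coefficient."""
    return e0 + emax * conc ** hill / (ec50 ** hill + conc ** hill)
```

In the study's framework, the pharmacodynamic step is driven by the blood (rather than plasma) concentration, which is why serial blood measurements carried the better prognostic signal.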

Megarbane B, Bloch V, Hirt D, et al. Blood concentrations are better predictors of chloroquine poisoning severity than plasma concentrations: a prospective study with modeling of the concentration/effect relationships. Clin Toxicol. 2010;48:904–915.

Correspondence: Bruno Megarbane at bruno-megarbane@wanadoo.fr



Clinical pathology abstracts editor: Michael Bissell, MD, PhD, MPH, professor, Department of Pathology, Ohio State University, Columbus.