Clinical Abstracts

November 2011

Clinical pathology abstracts editor:
Michael Bissell, MD, PhD, MPH, professor,
Department of Pathology, Ohio State University, Columbus.

Automated microscopy versus dipsticks for diagnosing UTI

The diagnosis of urinary tract infection in children remains a contentious issue. Different collection methods have different contamination rates, and a variety of methods exist for processing urine. An accurate diagnosis was important in the United Kingdom because every child with a proven urinary tract infection (UTI) was investigated according to recommendations published in the guidelines of the Royal College of Physicians, London. The recent National Institute for Health and Clinical Excellence (NICE) guideline indicated that timely diagnosis and treatment was the best method of preventing renal parenchymal injury and suggested a more targeted investigation strategy in those children confirmed to have a UTI. Diagnostic methods play a crucial role in ensuring that prompt treatment is commenced appropriately. The NICE guideline recommended manual microscopy for those under three years of age and urine dipstick for those over three years. Manual microscopy is labor intensive and should be conducted on fresh urine samples. Maintaining a 24-hour service is restricted by changes in laboratory staffing, financial pressures, and workload. For example, the authors’ laboratory processes up to 750 samples per day (nine percent in patients under 16 years of age). Therefore, automated urine microscopy using flow cytometers is increasingly used. Automated microscopy can issue an instant negative report using preset criteria based on bacterial and white blood cell counts. Only samples that are positive using automated microscopy criteria are sent for culture, reducing the burden on laboratory staff as well as laboratory costs. Studies indicated that automated microscopy performed in adult patients was superior to urine microscopy and dipstick methods for diagnosing UTI. To the authors’ knowledge, no studies have been done exclusively with children. The NICE guideline considered automated microscopy to be an alternative diagnostic test, stating, “There is not enough evidence to draw conclusions about alternative diagnostic tests for identifying UTI in children.” Urine dipstick methods are an established screening tool for UTI. They have been shown to be sensitive and specific but need to be interpreted in the appropriate clinical context. Following an internal audit, which was in agreement with previously published data, the authors’ practice has been to use urine dipstick as a screening tool and to send for culture only urine samples positive for leukocytes, nitrites, or both. A urine sample negative for leukocytes and nitrites is rarely cultured, unless there are overriding clinical factors. The authors conducted a study to determine the accuracy of automated microscopy and urine dipstick in screening for UTI in children. A pure growth of more than 10⁵ cfu/mL of a pathogenic organism is accepted as a definition of a positive culture and was used as the gold standard. However, the culture result still requires interpretation in children as this standard is based on data from adult populations. In practice, a diagnosis of UTI is made when a child has positive laboratory findings and clinical findings consistent with UTI. Therefore, the authors collected data on clinical outcomes and the physician’s decision to treat the patient to allow interpretation of results in conjunction with clinical context. For the study, 280 urine samples were collected from 263 patients (143 male; median age, 10.2 years; range, 0.1 to 19.75 years) during a six-week period.
Of those, 221 were midstream or clean-catch samples and 57 were bag specimens. Automated microscopy identified 42 of 186 samples as requiring culture, including 17 of 19 samples that had a pure growth of more than 10⁵ cfu/mL. (Two patients were not identified by automated microscopy: one was treated for vulvovaginitis, and one commenced prophylactic antibiotics prior to the culture result being obtained.) The sensitivity, specificity, and positive and negative likelihood ratios were 0.89, 0.85, 5.98, and 0.17, respectively, compared with 0.95, 0.72, 3.34, and 0.29, respectively, for urine dipstick. The authors concluded that automated microscopy performed comparably to urine dipstick in the diagnosis of UTI, with improved specificity and likelihood ratios and slightly reduced sensitivity. The findings support the use of automated microscopy for screening urine samples for culture in children, but different automated microscopy methods and algorithms require evaluation locally.
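
The reported test characteristics follow from a standard 2×2 comparison of screening result against the culture gold standard. The sketch below shows how sensitivity, specificity, and likelihood ratios are derived; the counts are illustrative reconstructions from the figures quoted above (42 of 186 samples flagged, 17 of 19 culture-positive samples detected), so they approximate rather than reproduce the published ratios.

```python
# Minimal sketch of how sensitivity, specificity, and likelihood ratios are
# derived from a 2x2 table of screening result vs. culture gold standard.
# The counts below are illustrative reconstructions, not the study's raw data.

def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, and positive/negative likelihood ratios."""
    sensitivity = tp / (tp + fn)                    # proportion of culture-positive samples flagged
    specificity = tn / (tn + fp)                    # proportion of culture-negative samples passed
    lr_positive = sensitivity / (1 - specificity)   # how much a positive screen raises the odds of UTI
    lr_negative = (1 - sensitivity) / specificity   # how much a negative screen lowers the odds of UTI
    return sensitivity, specificity, lr_positive, lr_negative

# Counts loosely reconstructed from the figures quoted above
# (42 of 186 samples flagged, 17 of 19 culture-positive samples detected).
sens, spec, lr_pos, lr_neg = screening_metrics(tp=17, fp=25, fn=2, tn=142)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```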

Lunn A, Holden S, Boswell T, et al. Automated microscopy, dipsticks and the diagnosis of urinary tract infection. Arch Dis Child. 2009;95:193–197.

Correspondence: Alan R. Watson at judith.hayes@nuh.nhs.uk

Refining the use of serum ferritin measurements

Plasma ferritin concentrations reflect the concentration of stored iron in the liver. Most investigators accept that serum ferritin concentrations of less than 12 µg/L in those younger than five years and less than 15 µg/L in those older than five years indicate iron deficiency. Plasma ferritin concentrations respond well in iron-intervention studies, and measuring ferritin was the principal recommendation of a 2004 World Health Organization (WHO) meeting convened to discuss the best way of assessing iron status in populations. However, ferritin is also a positive acute-phase protein that is elevated in the presence of infection or inflammation. Therefore, the WHO working group recommended that ferritin measurements be accompanied by analysis of one or more acute-phase proteins to detect infection or inflammation. Yet there is uncertainty about how acute-phase proteins should be used. Regression analyses of data from African-American infants and Guatemalan school-age children showed that serum ferritin correlated with acute-phase protein concentrations but found poor positive predictive values. Investigators have suggested raising ferritin thresholds to higher values in the presence of inflammation to discriminate iron deficiency, but others have suggested that such action is fraught with uncertainty. Likewise, excluding results from subjects with inflammation could bias the results if iron-deficient persons are more prone to infection. It is also impractical if the number of people with elevated acute-phase proteins in a study population is high, such as in The Gambia, where more than 90 percent of apparently healthy infants had elevated acute-phase protein concentrations. The authors believe that regression analysis is poorly predictive of ferritin concentrations because the increase in ferritin after infection follows a different pattern than that of C-reactive protein (CRP) or α1-acid glycoprotein (AGP). At the onset of infection, CRP rises rapidly and reaches maximum concentrations between 24 and 48 hours, whereas AGP may take four or five days to plateau. As the intensity of infection diminishes, CRP falls rapidly, whereas AGP remains elevated. In contrast, ferritin rises rapidly within a few hours of trauma and remains elevated after CRP concentrations have subsided and while AGP concentrations are still increased. Plasma retinol concentrations are also influenced by inflammation, and to overcome the different decay times of the inflammatory proteins and avoid excluding data, the authors had previously devised a way to use elevated acute-phase proteins to categorize apparently healthy subjects by their inflammatory state; that method also produced correction factors to remove the influence of inflammation. In the present study, the authors applied the same method to plasma ferritin concentrations. The authors estimated the increase in ferritin associated with inflammation, defined as CRP greater than 5 mg/L, AGP greater than 1 g/L, or both. The authors used 32 study groups that comprised infants (five studies), children (seven studies), men (four studies), and women (16 studies) (n=8,796 subjects). In two-group analyses (using either CRP or AGP), the authors compared the ratios of log ferritin with or without inflammation in 30 studies. In 22 studies, the data allowed a comparison of ratios of log ferritin between four subgroups: reference (no elevated acute-phase protein), incubation (elevated CRP only), early convalescence (elevated CRP and AGP), and late convalescence (elevated AGP only).
The authors found that in the two-group analysis, inflammation increased ferritin by 49.6 percent (CRP) or 38.2 percent (AGP; both P<0.001). Elevated AGP was more common than elevated CRP in young people compared with adults. In the four-group analysis, ferritin was 30 percent, 90 percent, and 36 percent (all P<0.001) higher in the incubation, early convalescence, and late convalescence subgroups, respectively, with corresponding correction factors of 0.77, 0.53, and 0.75. Overall, inflammation increased ferritin by approximately 30 percent and was associated with a 14 percent (confidence interval, 7 to 21 percent) underestimation of iron deficiency. The authors concluded that measures of both acute-phase proteins, CRP and AGP, are needed to estimate the full effect of inflammation and can be used to correct ferritin concentrations. Few differences were observed between age and gender subgroups.
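
The correction factors quoted above are consistent with taking the reciprocal of each subgroup's relative increase in ferritin (for example, a 30 percent increase gives a factor of about 0.77). The sketch below is a minimal illustration of how such factors might be applied to a measured ferritin value using the CRP and AGP cutoffs given in the abstract; it approximates, rather than reproduces, the authors' log-ferritin-based derivation.

```python
# Minimal sketch of inflammation-adjusted ferritin, assuming each correction
# factor is roughly the reciprocal of the subgroup's relative ferritin increase.
# (The authors derived their factors from ratios of log ferritin, so this is an
# approximation of the published method, not the method itself.)

CRP_CUTOFF_MG_L = 5.0   # CRP > 5 mg/L -> elevated
AGP_CUTOFF_G_L = 1.0    # AGP > 1 g/L  -> elevated

# Correction factors reported in the four-group analysis above.
CORRECTION_FACTORS = {
    "reference": 1.00,            # neither CRP nor AGP elevated
    "incubation": 0.77,           # CRP elevated only    (~30% higher ferritin)
    "early_convalescence": 0.53,  # CRP and AGP elevated (~90% higher ferritin)
    "late_convalescence": 0.75,   # AGP elevated only    (~36% higher ferritin)
}

def inflammation_group(crp_mg_l: float, agp_g_l: float) -> str:
    """Classify a subject's inflammatory state from CRP and AGP."""
    crp_up = crp_mg_l > CRP_CUTOFF_MG_L
    agp_up = agp_g_l > AGP_CUTOFF_G_L
    if crp_up and agp_up:
        return "early_convalescence"
    if crp_up:
        return "incubation"
    if agp_up:
        return "late_convalescence"
    return "reference"

def corrected_ferritin(ferritin_ug_l: float, crp_mg_l: float, agp_g_l: float) -> float:
    """Multiply measured ferritin by the correction factor for the subject's group."""
    return ferritin_ug_l * CORRECTION_FACTORS[inflammation_group(crp_mg_l, agp_g_l)]

# Example: ferritin 20 ug/L with elevated CRP and AGP -> ~10.6 ug/L,
# below the 12-15 ug/L deficiency thresholds quoted above.
print(corrected_ferritin(20.0, crp_mg_l=12.0, agp_g_l=1.4))
```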

Thurnham DI, McCabe LD, Haldar S, et al. Adjusting plasma ferritin concentrations to remove the effects of subclinical inflammation in the assessment of iron deficiency: a meta-analysis. Am J Clin Nutr. 2010;92:546–555.

Correspondence: D. I. Thurnham at di.thurnham@ulster.ac.uk

Abnormal liver panel in acute Kawasaki disease

Kawasaki disease is a multisystem inflammatory disease of childhood with vasculitis involving medium-sized arteries. The inflammatory lesions develop not only in coronary arteries but also in abdominal arteries. Hepatic dysfunction and hydrops of the gallbladder have been reported in the disease, and several studies have reported on select liver function abnormalities. The authors conducted a study in which they sought to define the spectrum of abnormalities in liver panel tests performed on children with Kawasaki disease (KD). They studied the characteristics of KD patients who presented with an abnormal liver panel and the patients’ response to treatment. The authors retrospectively reviewed the medical records of all KD patients admitted to The Children’s Hospital, in metropolitan Denver, between 2004 and 2009 who had one or more liver function tests performed at presentation. Based on a panel that included alanine aminotransferase, aspartate aminotransferase, gamma-glutamyl transferase, and bilirubin, the patients were divided into two groups: those with normal liver function tests at presentation (normal liver function test group) and those with at least one abnormal liver function test at presentation (abnormal liver function test group). The authors found that 240 (92.7 percent) of the 259 KD patients reviewed had one or more liver function tests performed. Of these 240 patients, 109 (45.4 percent) had at least one abnormal liver panel test. Patients in the abnormal liver function test group presented earlier (P=0.01) and were more likely to have intravenous immunoglobulin (IVIG)-resistant disease (P=0.01). No significant difference between groups was noted with regard to development of coronary artery abnormalities or aneurysms. Multivariate analysis identified C-reactive protein and total bilirubin at admission as significant predictors of IVIG-resistant disease. The authors concluded that abnormalities of liver function tests are frequently found in patients with acute KD and that children with abnormal liver function tests are at higher risk for IVIG resistance.
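
The multivariate analysis mentioned above can be read as a multivariable logistic regression of IVIG resistance on admission laboratory values. The sketch below illustrates that kind of model on synthetic data; the variables, coefficients, and data are placeholders for illustration, not the study's actual model or results.

```python
# Minimal sketch of a multivariable logistic regression of IVIG resistance on
# admission CRP and total bilirubin. All data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240
crp = rng.gamma(shape=2.0, scale=5.0, size=n)    # admission CRP (synthetic)
bili = rng.gamma(shape=1.5, scale=0.5, size=n)   # admission total bilirubin (synthetic)

# Synthetic outcome: higher CRP and bilirubin -> higher odds of IVIG resistance.
logit = -3.0 + 0.08 * crp + 0.9 * bili
ivig_resistant = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit the logistic model and report coefficients for each admission predictor.
X = sm.add_constant(np.column_stack([crp, bili]))
model = sm.Logit(ivig_resistant, X).fit(disp=False)
print(model.summary(xname=["intercept", "CRP", "total_bilirubin"]))
```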

ElAdawy M, Dominguez SR, Anderson MS, et al. Abnormal liver panel in acute Kawasaki disease. Pediatr Infect Dis J. 2011;30:141–144.

Correspondence: Mary P. Glode at glode.mary@tchden.org

PCR for Aspergillus detection

Invasive aspergillosis is a significant cause of morbidity and mortality in immunocompromised patients, despite the development of effective antifungal drugs. Aspergillus fumigatus is the species most commonly involved in cases of invasive aspergillosis. The rationale for routine environmental sampling to detect Aspergillus spp. in hospitals is controversial. Fungal surveillance is usually carried out during construction and renovation, which are constants in large hospitals. In France, several hospitals conduct systematic fungal surveillance. Time to results, which is usually seven days with culture-based techniques, is a major issue at these hospitals. Quantitative polymerase chain reaction (QPCR) provides rapid detection and accurate quantification of individual species, and results are available in 48 hours. The use of QPCR could reduce the delay in initiating preventive measures and thereby prevent Aspergillus exposure and infection in immunocompromised patients. The authors conducted a preliminary study to evaluate QPCR detection of A. fumigatus in air samples collected by impaction on low-melt agar and to assess the potential benefits of this technique for systematic fungal surveillance of the hospital environment. For the study, fungal DNA was extracted from 43 samples of impacted low-melt agar using a three-step extraction method and was amplified by QPCR. Identification was made using an A. fumigatus probe. The authors found that with QPCR, 19 of the 43 samples were positive for A. fumigatus. With culturing, seven of the 19 samples were positive and 12 were negative. The cycle threshold values for the 12 culture-negative samples were between 39 and 43 cycles, and the cycle threshold values for six of the seven culture-positive samples were less than 38 cycles, suggesting that the amount of DNA detected by QPCR was higher in the presence of viable conidia. The authors concluded that QPCR detection of airborne A. fumigatus in impacted low-melt agar significantly reduces the time between sample collection and results, suggesting that this new approach can be beneficial for routine environmental sampling.
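
The interpretation of the cycle threshold (Ct) values rests on the exponential nature of PCR: with roughly 100 percent amplification efficiency the target doubles each cycle, so a sample crossing the threshold five cycles earlier contains on the order of 32 times more target DNA. The sketch below illustrates this relationship; the efficiency and Ct values are assumptions for illustration, not data from the study.

```python
# Minimal sketch of why lower cycle-threshold (Ct) values imply more target DNA:
# with ~100% amplification efficiency the product roughly doubles each cycle,
# so relative starting quantity scales as 2 ** (Ct_reference - Ct_sample).
# The efficiency and Ct values here are illustrative assumptions.

def relative_quantity(ct_sample: float, ct_reference: float, efficiency: float = 1.0) -> float:
    """Fold difference in starting template relative to a reference Ct (delta-Ct reasoning)."""
    return (1.0 + efficiency) ** (ct_reference - ct_sample)

# A culture-positive sample at Ct 38 vs. a culture-negative sample at Ct 43:
print(relative_quantity(ct_sample=38, ct_reference=43))  # ~32-fold more A. fumigatus DNA
```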

Bellanger A-P, Reboux G, Murat J-B, et al. Detection of Aspergillus fumigatus by quantitative polymerase chain reaction in air samples impacted on low-melt agar. Am J Infect Control. 2010;38:195–198.

Correspondence: Anne-Pauline Bellanger at apbellanger@chu-besancon.fr

Rapid testing for malaria

Falciparum malaria is one of the most significant diseases in Africa in terms of mortality and burden on health services. Many Africans who have malaria are not being treated with effective drugs. The authors studied the impact of rapid diagnostic tests on the prescribing of antimalarials and antibiotics in West Africa in two settings serving the same population: one in which microscopy was routinely available (to allow comparison with previous studies) and one in which microscopy was not available (peripheral clinics, which represent the majority of clinics in sub-Saharan Africa). Most deaths from malaria are in children, but the majority of antimalarials in Africa are administered to adults, so the authors examined the impact of rapid diagnostic tests on children and adults. The authors conducted a randomized, controlled, open-label clinical trial. The setting was four medical clinics in the rural Dangme West district of southern Ghana: one in which microscopy was used to diagnose malaria (microscopy setting) and three in which microscopy was not available and diagnosis was made based on clinical symptoms (clinical setting). Study participants with suspected malaria were randomly assigned to a rapid diagnostic test or the diagnostic method used at the clinic (microscopy or clinical diagnosis). A blood sample for a research microscopy slide was taken for all patients. The primary outcome was the prescription of antimalarials to patients of any age whose double-read research slide was negative for malaria. The major secondary outcomes were correct prescription of antimalarials, impact of test results on antibiotic prescription, and correct prescription of antimalarials to children younger than five years. Of the 9,236 patients screened, 3,452 were randomized in the clinical setting and 3,811 in the microscopy setting. Followup to 28 days was 97.6 percent (7,088 of 7,263). In the microscopy setting, 722 (51.6 percent) of the 1,400 patients with negative research slides in the rapid diagnostic test arm were treated for malaria compared with 764 (55 percent) of the 1,389 patients in the microscopy arm (adjusted odds ratio, 0.87; 95 percent confidence interval [CI], 0.71–1.1; P=0.16). In the clinical setting, 578 (53.9 percent) of the 1,072 patients in the rapid diagnostic test arm with negative research slides were treated for malaria compared with 982 (90.1 percent) of the 1,090 patients with negative slides in the clinical diagnosis arm (odds ratio, 0.12; 95 percent CI, 0.04–0.38; P=0.001). The use of rapid diagnostic tests led to better targeting of antimalarials and antibiotics in the clinical, but not the microscopy, setting for children and adults. No deaths occurred in children under five years of age at 28 days’ followup in either test arm. The authors concluded that where microscopy is available, introducing rapid diagnostic tests has limited impact on prescriber behavior. In settings where microscopy is not available, however, using rapid diagnostic tests leads to a significant reduction in the overprescription of antimalarials, without any evidence of clinical harm, and to better targeting of antibiotics.
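
The odds ratios quoted above compare the odds of antimalarial treatment between arms among patients with negative research slides. The sketch below shows the crude calculation from the counts given in the abstract; the published ratios were adjusted, so the crude values only approximate them.

```python
# Minimal sketch: crude (unadjusted) odds ratio for antimalarial treatment of
# slide-negative patients, computed from the counts quoted above. The published
# ratios were adjusted for clustering and covariates, so crude values are only
# an approximation of them.

def odds_ratio(treated_a: int, total_a: int, treated_b: int, total_b: int) -> float:
    """Odds of treatment in arm A divided by odds of treatment in arm B."""
    odds_a = treated_a / (total_a - treated_a)
    odds_b = treated_b / (total_b - treated_b)
    return odds_a / odds_b

# Microscopy setting: rapid diagnostic test arm vs. microscopy arm.
print(round(odds_ratio(722, 1400, 764, 1389), 2))  # ~0.87
# Clinical setting: rapid diagnostic test arm vs. clinical-diagnosis arm.
print(round(odds_ratio(578, 1072, 982, 1090), 2))  # ~0.13 (reported adjusted OR, 0.12)
```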

Ansah EK, Narh-Bana S, Epokor M, et al. Rapid testing for malaria in settings where microscopy is available and peripheral clinics where only presumptive treatment is available: a randomised controlled trial in Ghana. BMJ. 2010;340:c930–c939.

Correspondence: E. K. Ansah at ansahekdr@yahoo.co.uk

Using ALT to screen for chronic liver disease in children

Chronic liver disease is an increasingly common clinical problem in children and adolescents. Screening for chronic liver disease is most commonly done using serum alanine aminotransferase (ALT) activity. Pediatric clinical trials use ALT to exclude potential subjects with liver disease. Numerous national guidelines recommend the use of ALT to screen for nonalcoholic fatty liver disease (NAFLD) in overweight and obese children. Despite widespread use of ALT in pediatrics, the threshold value for detecting liver disease in children is unknown and the proper interpretation of ALT assays performed in children is unclear. The SAFETY (Screening ALT for Elevation in Today’s Youth) study was conducted to help develop a uniform standard for normal values of ALT in children. The study’s goals were to determine the ALT thresholds used in acute care children’s hospitals in the United States; to develop new gender-specific, biology-based ALT thresholds derived from a national population sample of children free of known liver disease or established risk factors for liver disease; and to compare the sensitivity and specificity of the ALT thresholds in use with the study-derived ALT thresholds for classifying children as having or not having three common forms of chronic liver disease: hepatitis B virus (HBV) infection, hepatitis C virus (HCV) infection, and NAFLD. The SAFETY study collected observational data from acute care children’s hospitals, the National Health and Nutrition Examination Survey (NHANES, 1999–2006), overweight children with and without NAFLD, and children with chronic HBV or HCV infection. The study compared the sensitivity and specificity of ALT thresholds used by children’s hospitals versus study-derived, gender-specific, biology-based ALT thresholds for detecting children with NAFLD, HCV, or HBV. The authors found that the median upper limit of ALT at children’s hospitals was 53 U/L (range, 30–90 U/L). The 95th percentile levels for ALT in healthy weight, metabolically normal, liver disease-free NHANES pediatric participants were 25.8 U/L for boys and 22.1 U/L for girls. The concordance statistics of these NHANES-derived thresholds for liver disease detection were 0.85 (95 percent confidence interval [CI], 0.74–0.96) for boys and 0.91 (95 percent CI, 0.83–0.99) for girls for NAFLD; 0.80 (95 percent CI, 0.70–0.91) for boys and 0.79 (95 percent CI, 0.69–0.89) for girls for HBV; and 0.86 (95 percent CI, 0.77–0.95) for boys and 0.84 (95 percent CI, 0.75–0.93) for girls for HCV. Using current ALT thresholds from children’s hospitals, the median sensitivity for detecting NAFLD, HBV, and HCV ranged from 32 percent to 48 percent; median specificity was 92 percent for boys and 96 percent for girls. Using the NHANES-derived thresholds, the sensitivities were 72 percent for boys and 82 percent for girls; specificities were 79 percent for boys and 85 percent for girls. The authors concluded that the upper limit of ALT used in children’s hospitals varies widely and is set too high to reliably detect chronic liver disease. Biology-based thresholds provide higher sensitivity with only slightly less specificity. Therefore, clinical guidelines for the use of screening ALT and exclusion criteria for clinical trials should be modified.
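
The trade-off reported above, in which the lower NHANES-derived thresholds gain sensitivity at a modest cost in specificity, is a direct consequence of moving the cutoff. The sketch below illustrates the effect using the thresholds quoted in the abstract applied to synthetic ALT values; it is not SAFETY study data.

```python
# Minimal sketch of how the choice of ALT threshold trades sensitivity against
# specificity. The thresholds come from the summary above (median hospital upper
# limit, 53 U/L; NHANES-derived, 25.8 U/L for boys); the ALT values below are
# synthetic placeholders, not data from the SAFETY study.

def sensitivity_specificity(alt_with_disease, alt_without_disease, threshold):
    """Classify ALT > threshold as a positive screen and summarize performance."""
    tp = sum(alt > threshold for alt in alt_with_disease)
    tn = sum(alt <= threshold for alt in alt_without_disease)
    return tp / len(alt_with_disease), tn / len(alt_without_disease)

# Synthetic ALT values (U/L) for boys with and without chronic liver disease.
diseased = [28, 34, 41, 47, 55, 62, 74, 90, 110, 150]
healthy = [12, 15, 17, 19, 21, 23, 24, 26, 30, 35]

for threshold in (53.0, 25.8):   # hospital-based vs. biology-based cutoff
    sens, spec = sensitivity_specificity(diseased, healthy, threshold)
    print(f"threshold {threshold:>5} U/L: sensitivity={sens:.2f} specificity={spec:.2f}")
```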

Schwimmer JB, Dunn W, Norman GJ, et al. SAFETY study: alanine aminotransferase cutoff values are set too high for reliable detection of pediatric chronic liver disease. Gastroenterology. 2010;138:1357–1364.

Correspondence: Jeffrey B. Schwimmer at jschwimmer@ucsd.edu
