Bud Wiederman, MD, MA, Evidence eMended Editor, Grand Rounds
This prospective study found poor reproducibility of physical exam reports by different
providers evaluating children with pneumonia. Should we just order radiographs on
everyone with even mild respiratory distress, regardless of other clinical details?
Anyone who knows me, or reads this blog frequently, knows my answer to the question
I posed. The authors of course aren't suggesting we abandon the physical exam, but
I believe they have glossed over a potentially fatal flaw in their study that limits
what we do with the results.
The study plan actually is very sound, and I was excited to delve deeper into it.
Investigators at a children's hospital emergency department prospectively enrolled
128 children 3 months to 18 years of age who received a chest radiograph for suspected
pneumonia. Each child's primary provider completed a detailed assessment form, and
then a second provider was brought in to evaluate the same patient and fill out the
same form, without knowledge of the first assessor's findings. The researchers compared
the findings of the 2 assessors for agreement on several physical exam findings and
looked at some patient details. They found poor interrater reliability, using
the kappa statistic, for most physical examination findings; only wheezing, retractions, and respiratory
rate showed acceptable agreement. This calls into question a highly regarded clinical practice guideline that advocates avoiding chest radiographs in outpatient community-acquired
pneumonia in children, relying instead on clinical features.
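For readers less familiar with the kappa statistic, it measures how much two raters agree beyond what chance alone would produce. A minimal sketch is below; the counts are hypothetical, invented for illustration, and are not data from the study.

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for two raters on a yes/no exam finding (e.g. crackles).

    both_pos: both raters report the finding
    a_only / b_only: only one rater reports it
    both_neg: neither rater reports it
    """
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n
    # Chance agreement, from each rater's marginal rate of calling the finding positive
    p_a_pos = (both_pos + a_only) / n
    p_b_pos = (both_pos + b_only) / n
    p_chance = p_a_pos * p_b_pos + (1 - p_a_pos) * (1 - p_b_pos)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 128 children: both raters hear crackles in 20, neither in 70,
# and they disagree on 38. Raw agreement looks decent at 70%, but kappa is
# only about 0.30 -- conventionally "fair," well short of good agreement.
kappa = cohens_kappa(20, 18, 20, 70)
```

This is why raw percent agreement flatters the exam: when a finding is common or rare, two raters will agree often just by chance, and kappa strips that out.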
It surely is a mark of my medical generation that I find particular dismay in any
study that downgrades the efficacy of history and physical exam. In some respects,
though, the authors' findings confirm my anecdotal observations that, in spite of
medical school curricula that include attention to basic interviewing and physical
exam skills, in practice these labor-intensive skills are marginalized by an over-reliance
on laboratory, imaging, and other testing. I still pride myself (probably in a biased
manner) on my auscultatory skills, but at the same time I have noticed many practitioners,
including some experienced attending physicians, listening to a child's chest with
the stethoscope outside the hospital gown or clothing. I was taught this was a no-no,
and it's easy to demonstrate on any child how different a chest or heart sounds with
the stethoscope on bare skin versus clothing. I was even a bit saddened that, in the
current study, researchers didn't bother to have assessors evaluate the children for
egophony or tactile fremitus. (Now I've really dated myself!)
But enough of my whining about the loss of the good old days: what about this so-called
fatal flaw? First of all, let me state that pulling off this type of study is extremely
difficult in a busy emergency department. Because of that, the researchers used a
"convenience sample" of patients, presumably enrolling the patients at a time when
research assistants were available and when the emergency department was still staffed
so that the second assessors could have time to participate. Convenience samples are
a well-recognized source of bias in clinical studies. What bothered me most, however,
is that 71% of first assessors and 63% of second assessors knew the results of their
patients' chest radiographs prior to performing their assessments. The researchers
didn't want this to happen; it's just another reality of a busy emergency department.
However, I would argue (and I think the authors agree with me) that knowledge of radiographic
results can influence the interpretation of physical exam findings and completion
of the assessment sheets. In the discussion section of the article, the authors dismissed
this as a source of bias "... given the random selection of the second assessor,"
stating that any bias contributed by knowledge of the radiographic results
would be randomly distributed. Unfortunately, nowhere in the article or the supplemental
information is any mention of a random selection of the second assessor. Either the
authors neglected to include this information, or they are applying a loose interpretation
of "random," but either way the journal editors should have caught this and asked
the authors to provide clarification. I suspect the selection of the second assessor
also was a "convenience sample."
Another important point is that the authors give us no outcome data
for the children. Did they receive antibiotics, and were they admitted to the hospital?
In our wonderful vaccine era, most children evaluated for pneumonia have viral infections.
With the relatively small sample size over a very broad age range in this study, it's
likely that only a few patients would actually benefit from antibiotic therapy.
BTW, a recent article in the JAMA Rational Clinical Examination series was a meta-analysis of 23 prospective
cohort studies of children with pneumonia, also demonstrating the failure of some
auscultatory findings to predict pneumonia. Moderate hypoxemia and increased work
of breathing were most strongly associated with pneumonia, but their likelihood ratios
were too weak to be clinically helpful.
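To see why a weak likelihood ratio barely moves the diagnostic needle, apply Bayes' rule in odds form. The numbers below are illustrative only, not values from the meta-analysis.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability of disease using a likelihood ratio.

    Odds form of Bayes' rule: post-test odds = pre-test odds x LR.
    """
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative: with a 10% pre-test probability of pneumonia, a weak positive
# finding with LR 1.5 raises it only to about 14% -- hardly decision-changing.
# A strong finding with LR 10 would raise the same 10% to about 53%.
weak = post_test_probability(0.10, 1.5)
strong = post_test_probability(0.10, 10)
```

The rule of thumb follows directly: likelihood ratios near 1 leave the post-test probability close to where it started, which is why findings with LRs in that range can be "associated with" pneumonia yet clinically unhelpful.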
Until more detailed studies are performed, I'll continue my antiquated approach to
patient care, being careful to perform histories and physical examinations to make
important decisions in children with respiratory and other illnesses. In the spirit
of the season, Bah, Humbug to anyone who disagrees!