I hope everyone is enjoying a wonderful holiday season. In the spirit of the holidays,
I wanted to share this example of a study that hits exactly the right notes in the
discussion, neither under- nor over-hyping the results.
It's a retrospective study of 1660 central line devices placed in 1255 children in
4 pediatric intensive care units in Brazil over a 3-year period. The investigators
looked at central line-associated bloodstream infection (CLABSI) in these children,
specifically trying to determine if peripherally inserted central catheters (PICCs)
had a lower infection rate than central venous catheters (CVCs). They did find that
PICCs were less likely than CVCs to become infected, 1.4% versus 4.2%. But, here's
the kicker. Remember this was a retrospective study, and the choice of whether the
child received a PICC or CVC was decided by the medical team caring for each patient.
Anyone who has been around intensive care units knows that PICCs and CVCs are very
different animals in terms of how they are used and accessed. It is common sense that
CVCs would have higher infection rates because they tend to be utilized in sicker
kids, have multiple ports, and are used to infuse more complex treatments.
That's where the propensity-adjusted analysis comes in. I've highlighted this issue
a number of times in this blog, most extensively last summer. However, this current study had a bit of a different twist to the analysis, and
it's just such a good study I couldn't help but highlight it here.
A simplistic way to look at propensity scores, or propensity adjustments, is that
they are a statistical method to make a non-randomized study look more like a randomized
controlled trial. Statisticians would scream if they heard me say that, but I use
that analogy because some authors seem to be so sold on propensity scores that they
believe the technique can correct for a bunch of confounding variables in nonrandomized trials.
Be certain of this: no amount of statistical maneuvering can turn a retrospective
study into a prospective randomized trial, but the analysis still can be helpful.
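To make that idea concrete, here is a minimal sketch in Python of one common flavor of propensity adjustment: fit a model of the probability of receiving a CVC given severity of illness, then reweight each child by the inverse of the probability of the line type actually received. To be clear, the cohort, the model, and every number below are simulated for illustration; this is not the study's data or the authors' covariate-balancing method.

```python
import math
import random

random.seed(0)

# Hypothetical synthetic cohort: sicker children are more likely to get a
# CVC, and severity itself drives infection risk. Line type has NO effect
# by construction, so any crude CVC-vs-PICC gap is pure confounding.
n = 20000
severity = [random.random() for _ in range(n)]   # 0 = well, 1 = very sick
cvc = [int(random.random() < 0.2 + 0.6 * s) for s in severity]
infected = [int(random.random() < 0.01 + 0.08 * s) for s in severity]

# Propensity model: P(CVC | severity) by logistic regression, fit with
# Newton-Raphson (a handful of iterations suffice for two parameters).
b0 = b1 = 0.0
for _ in range(15):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for s, t in zip(severity, cvc):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * s)))
        g0 += t - p                 # gradient of the log-likelihood
        g1 += (t - p) * s
        w = p * (1.0 - p)           # negative Hessian entries
        h00 += w
        h01 += w * s
        h11 += w * s * s
    det = h00 * h11 - h01 * h01     # solve the 2x2 Newton step by hand
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

def propensity(s):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * s)))

# Inverse-probability weights: each child counts as 1 / P(line type actually
# received), which balances severity across the two groups.
weights = [1.0 / propensity(s) if t else 1.0 / (1.0 - propensity(s))
           for s, t in zip(severity, cvc)]

def infection_rate(group, weighted):
    ws = [(w if weighted else 1.0) for w, t in zip(weights, cvc) if t == group]
    ys = [y for y, t in zip(infected, cvc) if t == group]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

crude_gap = infection_rate(1, False) - infection_rate(0, False)
adjusted_gap = infection_rate(1, True) - infection_rate(0, True)
```

Because line type truly does nothing in this toy cohort, the crude CVC-versus-PICC gap is entirely confounding by severity, and the weighted comparison shrinks it toward zero. The covariate-balancing approach in the current paper pursues the same goal by a different (and more sophisticated) route.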
In the current article, the authors used covariate-balancing rather than the more
common regression methods to analyze their data. (Honestly, the statistical arguments
for the choice of method are above my pay grade!) However, they found that after accounting
for important covariates that could alter infection risk (e.g. severity of illness,
underlying diagnosis, reason for admission, cardiac arrest), PICCs still were less
likely than CVCs to be associated with infection, with an adjusted hazard ratio of
2.18 (95% confidence interval 1.02-4.64). If you have access to Intensive Care Medicine, look up this study in last August's issue and just read the last paragraph of the
discussion. They explain the study limitations very well, including the fact that
residual confounding persists no matter how carefully the propensity analysis is done.
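For readers who like to see the arithmetic behind that statistic: a hazard ratio compares the instantaneous event rates of the two groups, so (reading the result with CVC in the numerator, since PICCs fared better) the adjusted estimate says

\[
\mathrm{HR} = \frac{\lambda_{\mathrm{CVC}}(t)}{\lambda_{\mathrm{PICC}}(t)} = 2.18,
\qquad 95\%\ \mathrm{CI}\ (1.02,\ 4.64).
\]

Because the interval's lower bound of 1.02 sits only just above 1 (the value meaning "no difference"), the finding clears conventional statistical significance, but not by much, and the wide interval reflects how few infections there were to count.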
Although I've waxed endlessly on the joys of reading a well-done clinical report,
I must admit I still prefer curling up in front of a warm fire (or heating vent) with
the new holiday novels my mother-in-law sent me. Everyone please have a safe and wonderful
remainder of the holiday season, enjoying family, friends, and fun pastimes.