I am a self-professed data geek. Math, comparing data sets, the mechanics of how to draw valid conclusions, statistics – all were early interests of mine. As my education progressed through college, medical school, residency and fellowship, the variety of data available grew, and the process of deciding what was relevant and valid – and which elements should be discarded rather than used to inform decision-making – became increasingly complex, important and interesting.
Regardless of the type of information – clinical, financial or administrative – a great deal of rigor must be applied when making comparisons to avoid the “apples-to-oranges” problem. For instance, what if one compared the length of stay and cost of care for pneumonia patients between hospitals A and B? Suppose that in hospital A the length of stay is shorter and the cost of care lower than in hospital B, so one might conclude that hospital A provides better care for these patients. On closer examination, the average age of patients in hospital A is 60 and they have no other medical problems, while in B the average age is 75 and the vast majority have two or more other significant medical conditions. The initial, superficial analysis reached an invalid conclusion, because the two patient populations are very different. This is the apples-to-oranges issue, and it occurs much too frequently when comparisons in health care are undertaken casually.
In my previous roles as Chief Quality Officer, Chief Medical Officer and a clinical researcher at Fletcher Allen, I was frequently responsible for making valid comparisons. Whether considering outcomes for different surgical procedures for a peer-reviewed publication or clinical results for different physicians or organizations, one rule was paramount: always use risk adjustment to avoid comparing apples to oranges. In the example above, risk adjustment would take into account the age and medical condition of the patient populations. The science of risk adjustment is very complex and evolving, but the better the method, the better the comparisons and the more ironclad the conclusions.
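To make the idea concrete, here is a minimal sketch of one simple form of risk adjustment – direct standardization, in which each hospital's stratum-specific outcomes are averaged over a common reference case mix. The patient counts and lengths of stay below are hypothetical numbers invented for illustration, not data from any real hospital:

```python
# Hypothetical data: (number of patients, mean length of stay in days)
# for each risk stratum at each hospital. Hospital A treats mostly
# younger, healthier patients; hospital B mostly older patients with
# multiple comorbidities.
hospital_a = {"younger_healthy": (90, 3.0), "older_comorbid": (10, 8.0)}
hospital_b = {"younger_healthy": (10, 2.5), "older_comorbid": (90, 7.0)}

def crude_mean(hospital):
    """Overall mean length of stay, ignoring patient mix."""
    total = sum(n * los for n, los in hospital.values())
    count = sum(n for n, _ in hospital.values())
    return total / count

def adjusted_mean(hospital, reference_mix):
    """Mean LOS the hospital would see if it treated the reference mix."""
    return sum(reference_mix[s] * los for s, (_, los) in hospital.items())

# Reference population: equal shares of each stratum (an assumption
# made only for this sketch; real methods derive the mix from data).
reference = {"younger_healthy": 0.5, "older_comorbid": 0.5}

print(crude_mean(hospital_a), crude_mean(hospital_b))    # 3.5 vs 6.55
print(adjusted_mean(hospital_a, reference),              # 5.5
      adjusted_mean(hospital_b, reference))              # 4.75
```

The crude comparison makes hospital A look much better (3.5 vs 6.55 days), yet within every stratum hospital B's stays are shorter, and after standardizing to a common case mix the ranking reverses. Real risk-adjustment methodologies are far more elaborate, but the principle is the same.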
We belong to an organization, University HealthSystem Consortium (UHC), that helps us compare our clinical, operational and financial outcomes with those of other academic medical centers. UHC has spent over three decades refining its risk-adjustment methodology so that the comparisons are valid. In its rigorously validated annual “Quality and Accountability Survey,” Fletcher Allen has ranked higher in each of the past three years – 41st, 33rd and 14th this past year – an excellent record of performance and year-over-year improvement.
The most recent issue of Consumer Reports attempted to compare the nation’s hospitals on a variety of quality and safety measures. Because it drew data from a variety of sources, did not disclose its methods and failed to do careful risk adjustment, its ratings and conclusions are invalid at best and damaging at worst. Other respected health care organizations have come to a similar conclusion about this survey. This venerable publication, which many of us depend on when comparison shopping for refrigerators and microwaves, dramatically missed the mark in its first try at health care and has shaken consumer confidence in excellent health care organizations. If they attempt this again – and I hope they do – they should apply the necessary rigor to their methods. Some may say this is just sour grapes from a harried health care executive; I would say it’s apples and oranges.
John R. Brumsted, M.D. is the President and Chief Executive Officer of Fletcher Allen Health Care and Fletcher Allen Partners.