ICYMI: “The New York Times Misrepresented a Shoddy Study Claiming Private Equity Worsened Hospital Care”

“The study’s strongest finding shows that lives were saved in hospitals acquired by private equity”

Washington, D.C. – Reason recently published a story by Aaron Brown correcting the record on a New York Times article that misrepresented the findings of a study examining private equity investment in hospital care. Brown, a regular columnist for Bloomberg and a statistics instructor at New York University and the University of California, San Diego, holds an M.B.A. in Finance and Statistics from the University of Chicago and an S.B. in Applied Mathematics from Harvard. In his piece, Brown calls out the New York Times article and the underlying study for being littered with factual inaccuracies and for displaying inherent bias from the outset.
 
Notable excerpts from Reason’s article include: 

  • Lives were saved in hospitals acquired by private equity – “The study’s strongest finding shows that lives were saved in hospitals acquired by private equity—the opposite of what Kannan expected to find. Patient mortality, the most important measure, dropped a statistically significant 9 percent in the study group, which represents nearly 500 lives saved.”
  • Study never defines private equity – “Even its premise is fuzzy. The authors never say what they mean by ‘private equity,’ which has no formal definition. Half of the hospitals in the study were already privately owned, for-profit hospitals before they were acquired. The authors suggest that what they call ‘private equity’ is characterized by excessive leverage and short horizons, but present no data on either factor.”
  • Bias was built into the study design – “Bias was built into the study design. Research that looks only at ‘adverse’ events and outcomes is designed to dig up dirt and will tend to come up with meaningless conclusions. Serious investigators study all events and outcomes—good and bad—in search of accurate, balanced conclusions.”
  • Experts don’t call for regulation – “So what can we conclude from this study? The Times reporters seem to have gone on a second fishing expedition, this one for a scholar willing to conclude from the study’s findings that we need more government regulation, or perhaps a ban on private equity hospital acquisitions. To their credit, none of the experts they quoted fully delivered, forcing the reporters to blandly conclude that the study ‘leaves some important questions unanswered for policymakers.’”

Notably, the study’s authors excluded the fact that private equity plays a critical role in supporting quality, affordable health care in the United States, with private equity-funded innovations delivering more effective treatments, helping lower health care costs, and increasing access to life-saving care for millions of Americans.
 
Please see the full article in Reason below:

The New York Times Misrepresented a Shoddy Study Claiming Private Equity Worsened Hospital Care
Reason
By Aaron Brown | January 5, 2024
 
“Serious Medical Errors Rose After Private Equity Firms Bought Hospitals” was the headline of a New York Times article looking at the findings of “a major study of the effects of such acquisitions on patient care in recent years” published in the December issue of JAMA. The paper was also written up in USA Today, MarketWatch, Common Dreams, and The Harvard Gazette.
 
“This is a big deal,” Ashish Jha, dean of the Brown University School of Public Health, told Times reporters Reed Abelson and Margot Sanger-Katz. “It’s the first piece of data that I think pretty strongly suggests that there is a quality problem when private equity takes over.”
 
Abelson, Sanger-Katz, and their fellow reporters misrepresented the findings of the study, which suffers from its own “quality problems.”
 
Even its premise is fuzzy. The authors never say what they mean by “private equity,” which has no formal definition. Half of the hospitals in the study were already privately owned, for-profit hospitals before they were acquired. The authors suggest that what they call “private equity” is characterized by excessive leverage and short horizons, but present no data on either factor. Times readers may interpret the phrase private equity to mean “evil Wall Street greedheads,” in which case it seems logical that patient care would deteriorate.
 
Even the paper’s lead author started with that assumption. “We were not surprised there was a signal,” Massachusetts General Hospital’s Sneha Kannan told the Times. “I will say we were surprised at how strong it was.”
 
Bias was built into the study design. Research that looks only at “adverse” events and outcomes is designed to dig up dirt and will tend to come up with meaningless conclusions. Serious investigators study all events and outcomes—good and bad—in search of accurate, balanced conclusions.
 
The study’s strongest finding shows that lives were saved in hospitals acquired by private equity—the opposite of what Kannan expected to find. Patient mortality, the most important measure, dropped a statistically significant 9 percent in the study group, which represents nearly 500 lives saved.
 
The paper could have been headlined “Patient Mortality Fell After Private Equity Firms Bought Hospitals,” except JAMA might not have published it, The New York Times certainly wouldn’t have bothered to write it up, and Common Dreams couldn’t have run with the headline, “We Deserve Medicare for All, But What We Get Is Medicare for Wall Street.” So the study authors fell over themselves to explain this finding away. They theorized, without any evidence, that maybe private equity hospitals routinely transfer out patients who are near death. Though the authors raise legitimate reasons for skepticism that private equity acquisition saved patient lives, those reasons apply equally to the negative findings trumpeted in both the study and the news write-ups.
 
Another one of the 17 measures the study authors looked at was length of stay. They found that stays at the private equity hospitals were a statistically significant 3.4 percent shorter, another finding the authors were quick to downplay.
 
Falls are the most common adverse events in hospitals, and the study found that they were more likely to occur in hospitals acquired by private equity. According to the Times, the “researchers reported…a 27 percent increase in falls by patients while staying in the hospital.”
 
This isn’t what the study says. The rate of falls at hospitals acquired by private equity stayed the same after acquisition, at 0.068 percent. Falls there simply did not decline the way they did at hospitals in the control group, where the rate fell from 0.083 percent to 0.069 percent; that relative difference is where the 27 percent number came from.
 
In other words, the situation improved in the control group but didn’t get worse or better in hospitals acquired by private equity. So the authors assumed that there was some industrywide drop in hospital falls and that this positive trend didn’t take place at the private equity hospitals.
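To make the arithmetic concrete, here is a minimal sketch of how a flat fall rate at the acquired hospitals can still be reported as a sizable relative increase. It assumes the figure is a simple difference-in-differences estimate expressed against a counterfactual in which the acquired hospitals would have matched the control group’s decline; the rates are the ones quoted above, and the study’s actual regression model, with its covariates, is not reproduced here.

```python
# Back-of-the-envelope difference-in-differences on the fall rates quoted above.
# Assumption (not taken from the study): the reported increase is measured
# relative to a counterfactual in which the acquired hospitals would have
# seen the same decline as the control group.

pe_pre, pe_post = 0.068, 0.068        # acquired hospitals: rate unchanged (percent)
ctrl_pre, ctrl_post = 0.083, 0.069    # control hospitals: rate declined (percent)

# Change at acquired hospitals minus change at control hospitals.
did_points = (pe_post - pe_pre) - (ctrl_post - ctrl_pre)   # about +0.014 points

# Counterfactual post-acquisition rate if the acquired hospitals had
# followed the control group's trend.
counterfactual = pe_pre + (ctrl_post - ctrl_pre)           # about 0.054 percent

relative_increase = did_points / counterfactual            # roughly 0.26

print(f"Difference-in-differences: +{did_points:.3f} percentage points")
print(f"Relative to the counterfactual: about {relative_increase:.0%} more falls")
```

This crude version comes out around 26 percent; the exact 27 percent figure depends on the study’s regression specification, which is not reproduced here.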
 
What this finding actually suggests is that the control hospitals were badly chosen and run worse (at least when it comes to preventing patient falls) than the acquired hospitals both before and after private equity acquisition. That falls could change by 27 percent without any cause (the control hospitals were not purchased by anyone) makes nonsense of claiming statistical significance for much smaller changes in other factors.
 
Let’s even assume that there was an industrywide decline in falls and that private equity hospitals didn’t see the improvement that would have taken place had their greedy new owners not been allowed to acquire them. If that improvement had taken place, there would have been 20 fewer falls in the study group. Doesn’t that matter less than the 500 deaths prevented—the stat that the authors chose to downplay?
 
The Times article mentions that bed sores increased at the private equity hospitals even though that wasn’t a statistically significant finding, meaning the study’s data were not sufficient to distinguish the reported increase from random chance. The study authors acknowledged that this finding wasn’t significant, but the Times journalists chose to report it anyway.
 
The study authors did claim that another one of their adverse findings was statistically significant: Bloodstream infections allegedly increased in private equity hospitals from about 65 cases to 99 cases. This is indeed serious, as such infections can easily be fatal. However, the finding had marginal statistical significance, meaning it was unlikely, but not completely implausible, to have arisen by random chance if private equity acquisition did not affect the rate of bloodstream infections. If the only hypothesis that the authors had tested was whether private equity acquisition increased bloodstream infections, then the finding would meet standard criteria for statistical significance.
 
If you run a fishing expedition for adverse events and outcomes, you are very likely to turn up some results that occur by random chance. The authors were aware of this and adjusted the claimed significance of this result as if they had tested eight hypotheses. But the paper reported 17 measures, and the authors may have tested more. If we adjust for 17 hypotheses, the bloodstream infection result loses its statistical significance.
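As an illustration of that point, the sketch below applies a Bonferroni-style correction, which multiplies a raw p-value by the number of hypotheses tested. The p-value is hypothetical and the study’s own adjustment method may differ; the point is only to show how a result that survives a correction for eight comparisons can fail one for seventeen.

```python
# Hypothetical illustration of multiple-comparison adjustment.
# The raw p-value below is invented for illustration; it is not taken
# from the study, and the study's own correction method may differ.

raw_p = 0.004      # hypothetical unadjusted p-value for one outcome
alpha = 0.05       # conventional significance threshold

for n_tests in (1, 8, 17):
    # Bonferroni correction: scale the p-value by the number of hypotheses.
    adjusted = min(raw_p * n_tests, 1.0)
    verdict = "significant" if adjusted < alpha else "not significant"
    print(f"{n_tests:>2} hypotheses tested: adjusted p = {adjusted:.3f} ({verdict})")
```

With eight tests the hypothetical result clears the 0.05 bar; with seventeen it does not, which is the pattern described above.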
 
The rigorous way to do studies is to pre-register hypotheses, which ensures that authors can’t go fishing in a large amount of data and pick out a few favored conclusions that happen to appear statistically significant by random chance. The authors did not report pre-registration.
 
So what can we conclude from this study? The Times reporters seem to have gone on a second fishing expedition, this one for a scholar willing to conclude from the study’s findings that we need more government regulation, or perhaps a ban on private equity hospital acquisitions. To their credit, none of the experts they quoted fully delivered, forcing the reporters to blandly conclude that the study “leaves some important questions unanswered for policymakers.”
 
“This should make us lean forward and pay attention,” was the best Yale economist Zack Cooper was willing to give Abelson and Sanger-Katz, adding that it shouldn’t lead us to “introduce wholesale policies yet.” Rice economist Vivian Ho told the Times that she “was eager to see more evidence.”
 
Setting out to find “more evidence” of a conclusion that researchers already believe to be true, instead of going where the data lead, is what leads to such sloppy and meaningless research in the first place.