Evidence-based medicine is a great idea, except for the evidence part. In a powerful op-ed piece in the Boston Globe, Sylvia Pagán Westphal makes the seemingly obvious point that “evidence-based medicine is only as strong as the evidence used to support it. The stark reality is that evidence can be weak, biased, or even fraudulent.”
A key weakness in the chain is the use of guidelines in evidence-based medicine, since expert opinion instead of “solid clinical trial evidence” plays a large role in the guidelines. The problem is that “many physicians who write these recommendations have financial ties to drug companies.” Unfortunately, she observes, “there’s a systemic lack of transparency in the guideline-setting process, which means that low-quality data gets portrayed as much better evidence than it really is.”
Perhaps no recommendations carry more weight — and the burden of more life-and-death decisions — than those in cardiology, issued jointly by the American College of Cardiology and the American Heart Association. Of the 2008 guidelines, involving over 2,700 treatment recommendations, only 11 percent were backed by strong evidence from multiple randomized clinical trials, according to a 2009 study by Duke University researchers published in the Journal of the American Medical Association. Meanwhile, 48 percent of the recommendations were based on the personal opinions of experts in the field.
Sometimes the experts disagree. In 2008, guidelines from the Endocrine Society recommended against using the apoB test to assess risk, while the American Diabetes Association and the ACC supported the test.
And sometimes the experts are plain wrong, causing real damage. Westphal cites a recommendation in the pneumonia guidelines from the Infectious Diseases Society of America that patients should receive antibiotics within four hours of hospital arrival, instead of eight hours as in the past, although there was no good data to support the recommendation. But because it was included in the guidelines, it was adopted as a performance standard by CMS and the JCAHO. The result:
But it wasn’t long before reports began to surface of emergency rooms over-diagnosing patients with pneumonia and giving them antibiotics just to comply with the four-hour rule. Studies also came out showing no survival advantage from giving the antibiotic earlier. By 2007, the government announced that it would stop using the four-hour performance measure in its tracking of hospitals. In the interim, many thousands of people were affected, plus there was the wasted cost incurred for many unnecessary doses of antibiotics. Moreover, the Medicare study’s senior author was one of the six men in charge of writing the society’s pneumonia guidelines — and he, along with two more members of the very same panel, also sat on the national pneumonia panel. Allowing the same experts to sit on multiple treatment panels, rubber-stamping their own recommendations, violates any system of checks and balances.
Westphal notes that many organizations have “moved to shore up” their internal controls over their guidelines, including more transparency about the process and the degree of agreement and disagreement among guideline committee members. “The bottom line is that expert opinion, as well intentioned as it might be, shouldn’t masquerade as evidence-based medicine,” she writes.
But what to do when “the experts themselves are clouded by financial ties to companies making the drugs under consideration”? She points out that over 50% of the panel members of the DSM had financial ties to industry, and “in areas where drugs are the treatment of choice, such as mood disorders and schizophrenia, all guideline panel members had financial links to industry.”
Disclosure itself isn’t enough, argues Westphal:
Doctors with ties to companies making the drugs under consideration shouldn’t be assessing them. It’s a basic ethical assumption, but it hasn’t been widely embraced by guideline-setting organizations, which think the problem is fixed by allowing panel members to disclose their financial ties or asking them to recuse themselves from voting. The truth is — and companies know this — that having a “thought leader” speak on behalf of a treatment is influential to his or her peers, even if disclosures have been made.
Westphal also discusses the limitations of data derived from clinical trials sponsored by industry, citing examples of companies hiding adverse events data or “publishing only studies that are positive for a drug, while burying the negative results.”
Here’s the real dilemma:
Evidence-based medicine functions on the assumption that strictly analyzing all evidence available for a treatment leads to an accurate judgment of the treatment’s effect. When negative studies are not published, or when they are published in a way that makes them seem positive, or when adverse outcomes are omitted from the medical literature, the “evidence” is no longer a reflection of a treatment’s real effect but a skewed, artificially positive illusion.
I highly recommend you read the original article in the Boston Globe.