This appeared a little while ago.
Cardiac Practice Guidelines Have High Turnover
Published: May 27, 2014 | Updated: May 28, 2014
By Salynn Boyles, Contributing Writer, MedPage Today
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and Dorothy Caputo, MA, BSN, RN, Nurse Planner
Action Points
- One in five cardiology class I clinical practice guideline recommendations published since the late 1990s has been downgraded, reversed, or omitted.
- Point out that after accounting for guideline-level factors, the probability of being downgraded, reversed, or omitted was three times greater for recommendations based on opinion or on one trial or observational data versus recommendations based on multiple trials.
One in five cardiology class I clinical practice guideline recommendations published since the late 1990s has been downgraded, reversed, or omitted, with recommendations not supported by strong clinical trial evidence the most likely to get the axe, an analysis of more than 600 recommendations found.
Among recommendations with available information on level of evidence, 90.5% (95% CI 83.2%-95.3%) supported by multiple randomized studies were retained, versus 81% (95% CI 74.8%-86.3%) supported by one randomized trial or observational data, and 73.7% (95% CI 65.8%-80.5%) supported by opinion (P=0.001), wrote Mark D. Neuman, MD, of the University of Pennsylvania in Philadelphia, and colleagues, in the May 28 issue of the Journal of the American Medical Association.
After accounting for guideline-level factors, the probability of being downgraded, reversed, or omitted was about three times greater for recommendations based on opinion (odds ratio 3.14, 95% CI 1.69-5.85, P<0.001) or on one trial or observational data (OR 3.49, 95% CI 1.45-8.41, P=0.005) versus recommendations based on multiple trials, the group reported.
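As a rough sanity check (an illustrative calculation of mine, not one from the paper), the unadjusted odds ratio implied by the retention percentages quoted above (90.5% retained for multiple-trial recommendations versus 73.7% for opinion-based ones) lands close to the adjusted figure:

```python
# Back-of-the-envelope check using the retention percentages quoted in the
# article; the paper's OR 3.14 additionally adjusts for guideline-level
# factors, so an exact match is not expected.
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

retained_multiple_trials = 0.905  # recommendations backed by multiple RCTs
retained_opinion = 0.737          # recommendations backed by opinion

# Odds of being downgraded/reversed/omitted = odds of NOT being retained.
or_opinion_vs_trials = odds(1 - retained_opinion) / odds(1 - retained_multiple_trials)
print(round(or_opinion_vs_trials, 2))  # roughly 3.4, near the adjusted OR 3.14
```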
The findings revealed that shifts in cardiology guidelines over time are largely predictable, with recommendations based on just one clinical trial or on retrospective studies much more likely to be changed than those made on the basis of multiple clinical trials, Neuman told MedPage Today.
He said there were clear implications for policymakers charged with identifying quality and performance measures for cardiology practices.
"I would say the safest bet in terms of ensuring that these measures will endure would be to build them around areas of medicine where there are multiple clinical trials to show something works," he said.
Study Details
The analysis included clinical practice recommendations jointly produced by the American College of Cardiology (ACC) and the American Heart Association (AHA). All were current as of Sept. 1, 2013, and all had at least one prior version.
The sample included 11 guidelines addressing:
- Atrial fibrillation
- Perioperative cardiovascular evaluation
- Cardiac pacemakers and antiarrhythmia devices
- Secondary prevention of coronary artery disease
- Coronary artery bypass graft surgery
- Cardiovascular disease prevention in women
- Heart failure
- Percutaneous coronary intervention
- Chronic stable angina
- Unstable angina and non-ST-segment elevation myocardial infarction
- Valvular heart disease
For each guideline, the researchers considered the version immediately preceding the current one to be the index. They identified 619 class I recommendations in the 11 index guidelines published between 1998 and 2007. The median number of years between the index guideline and the next full revision was 6, the number of listed writing committee members for index guidelines ranged from 11 to 33 (median 14), and the percentage of members retained between versions ranged from 0% to 75% (median 30.8%).
The durability of class I ACC/AHA guideline recommendations for procedures and treatment varied significantly across individual guidelines and levels of evidence, with the most omissions by topic seen for perioperative cardiovascular evaluation (nine of 13 recommendations omitted, 69.2%) and congestive heart failure (25 of 66 recommendations omitted, 37.9%).
Downgrades or reversals were most common among level B recommendations (single randomized trial or nonrandomized studies), occurring in 12.8% (95% CI 8.5%-18.3%, 25 of 195), while omissions were most common among level C (consensus opinion, standard of care, or case studies) evidence, occurring in 16.9% (95% CI 11.2%-23.9%, 25 of 148).
When the researchers assessed changes over time in the level of evidence for downgraded or reversed recommendations whose initial level of evidence was B or C, they found that the level of evidence increased for eight (20.5%) and decreased or stayed the same for 31 (79.5%).
"While our results highlight the overall durability of cardiovascular disease guideline recommendations, they also emphasize that particular subsets of recommendations may be more fragile than others as a basis for changes in practice and policy," the researchers wrote. "For example, one of eight recommendations that was based on a single trial or observational data was either downgraded or reversed in the subsequent guideline version, versus one of 26 recommendations based on two or more randomized trials."
Lots more here:
The implication of this study is that not only is clinical knowledge advancing rapidly, but that this good news means quality evidence-based practice is getting harder and harder, as the evidence base is not all that stable.
If an Electronic Health Record has guideline-based decision support, it is vital to ensure that these guidelines are the most current available; if this is not done, there are risks of liability etc.
A secondary implication of this is that clinicians must ensure they are using the most current version of their EHR software, and that their EHR provider is actively keeping up with new releases of the guidelines it has integrated. This is a very good reason to pay annual maintenance fees etc.
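As a minimal sketch of what such a currency check might look like (the data structures, topic names, versions, and function are all hypothetical assumptions of mine, not a real EHR API):

```python
from datetime import date

# Hypothetical metadata an EHR decision-support module might track for its
# embedded guidelines. All topics, versions, and dates here are illustrative.
EMBEDDED_GUIDELINES = {
    "afib": {"version": "2006", "published": date(2006, 8, 1)},
    "heart_failure": {"version": "2005", "published": date(2005, 9, 1)},
}

# What the guideline publisher currently lists as the latest full revision.
LATEST_KNOWN = {
    "afib": "2014",
    "heart_failure": "2013",
}

def stale_guidelines(embedded, latest):
    """Return topics whose embedded guideline version lags the latest one."""
    return sorted(topic for topic, meta in embedded.items()
                  if latest.get(topic, meta["version"]) != meta["version"])

print(stale_guidelines(EMBEDDED_GUIDELINES, LATEST_KNOWN))
# Both topics are out of date in this example.
```

A real system would pull the latest-revision data from the guideline publisher or the EHR vendor rather than a hard-coded table, but the core check is the same: compare what is embedded against what is current.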
I have to say the other issue that strikes me is just how hard it is for even super-specialists to stay on top of their field. It was really much easier even a decade or two ago!
And to top it off, have a browse of this post from the KevinMD blog.
http://www.kevinmd.com/blog/2014/05/problem-evidencebased-health.html
As Dr Lowinger says:
"Yet lately the inadequacies with evidence have become more and more glaring to me. Lately it seems to me that we need to start paying better attention to what evidence can’t do — as much as to what it can do. And I wonder if it isn’t time for a better approach in developing and transmitting health information via any channel.
Evidence, for one thing, doesn’t last. It is a fluid beast — forever slithering under our grasp. Recommendations change over time and there seems to be a growing fuzziness around the edges."
Tricky what?
David.
1 comment:
"Evidence, for one thing, doesn’t last."
Indeed. So all the old information in a Health Record may not only be past its use-by-date, but could be so wrong that, if relied upon, could lead to bad health decisions.
Any healthcare professional who assumes the data in a health record is correct is a fool. And the people who think that a health record is going to make a significant difference to the practice of medicine are also IMHO, fools.
Health care professionals don't really give a toss as to the medical history of a patient. What they really want to know is the current state of the patient. Then they can make real healthcare decisions with confidence.
Reading an eHR is like watching last week's weather forecast. Interesting, but essentially useless if you want to know what to wear tomorrow.