There has been something of a rush of academic studies lately suggesting that the use of EHRs is not associated with any improvement in the quality and safety of care delivered.
The following summarises the issue:
January 27, 2011 — 2:55pm ET | By Janice Simmons - Contributing Editor
A quick peek at the past few days in the literature on whether electronic health records are capable of improving quality care shows EHRs taking it on the chin a few times. Despite the efforts and expense of installing EHRs in practices, EHRs are not improving overall quality as much as might be expected, several researchers said. But taking a closer look, it's important to ask ourselves: Are we all on the same page when it comes to defining quality?
After covering the issue of healthcare quality for the past two decades, it's become apparent to me that there's no single definition of quality. It can mean many things, such as improving the overall well-being of a patient, or creating a better standard of living for a group of individuals or a population.
What we all can agree on, though, is that achieving quality care is an important goal. But exactly how do we monitor and measure it--and can EHRs provide the means to do it?
In a study appearing online in the Jan. 24 issue of the Archives of Internal Medicine, Stanford University researchers Max Romano and Randall Stafford, MD, PhD, reviewed guideline adherence for 250,000 outpatient visits using data from the National Ambulatory Medical Care Survey and from the National Hospital Ambulatory Medical Care Survey from 2005 to 2007.
Overall, what they found was that among 20 indexes of care quality, only diet counseling for high-risk adults showed "significantly better performance" in visits where EHRs were used when compared with visits using other types of record-keeping systems. "There were no other significant quality differences" regarding the clinical benefits of EHRs and clinical decision support, they said.
However, in a commentary appearing in the same journal, two National Library of Medicine (NLM) researchers--Clement McDonald, MD, and Swapna Abhyankar, MD--said that they suspected that the EHR and clinical decision support systems in use at the time of the Stanford study were "immature," failed to cover many of the guidelines that the study targeted, and had incomplete patient data.
They also said that EHRs without clinical decision support do not affect guideline adherence because without that support, "most EHRs function primarily as data repositories that gather, organize, and display patient data--not as prods to action."
Most of the guidelines in the Stanford study concerned medication use, but none dealt with such areas as immunizations or screening tests. "In our experience, care providers are less willing to accept and act on automated reminders about initiating long-term drug therapy than about ordering a single test or an immunization," they wrote.
In another study appearing online Jan. 18 in the Public Library of Science (PLoS), British researchers--looking at the use of eHealth technologies including EHRs--said that little empirical evidence was found to substantiate the claims made for their quality and safety benefits.
Extra details of the studies are provided here:
Posted: January 25, 2011 - 11:15 am ET
A pair of researchers at Stanford University, Palo Alto, Calif., has released results of a three-year study that indicates EHRs did little to improve the quality of care.
"There's a lot of enthusiasm and money being invested in electronic health records," senior author Dr. Randall Stafford said in a news release. "It makes sense, but on the other hand it's an unproven proposition. When the federal government decides to invest in healthcare technology because it will improve the quality of care, that's not based on evidence. That's a presumption."
Stafford is an associate professor of medicine at the Stanford Prevention Research Center. A seven-page article based on the study, "Electronic health records and clinical decision support systems: Impact on national ambulatory care quality," appears online in the Archives of Internal Medicine.
In the new study, Stafford and former Stanford undergraduate student Max Romano, who is now a medical student at Johns Hopkins University in Baltimore, analyzed data from nearly 250,000 patient visits in 2005 through 2007. They looked at whether computerized, clinical decision-support tools in EHR systems improved the quality of care.
Their conclusions? There was "no consistent association between EHRs and CDS and better quality," according to the report. "These results raise concerns about the ability of health information technology to fundamentally alter outpatient-care quality."
Here is a comment from the site:
I think that this study was done too soon to accurately gauge how effective an EHR will be in improving the quality of medical care. The clinical decision-support tools are still too new, and as Dr Stafford noted, "These are complicated systems used by individuals who have received little formal training, at least until recently." Once the training has been completed and the physicians, nurses, pharmacists, and all other medical ancillary personnel increase their expertise in using these systems, the quality of medical care should increase. The magnitude of the increase will still be determined by how well each individual on the health care team does their job. These tools and systems will make those tasks easier to do and will have the capability to highlight possible errors. We just need a little bit of patience with the industry as the systems come together and everyone starts to use them effectively. A similar study in the next decade should provide a much better picture of how much an EHR did to improve medical care.
January 21, 2011, 12:34 PM ET
With the U.S. and the U.K. heading full steam towards electronic medical records and other health IT applications, how much evidence is there that they improve care?
Not a whole lot, according to a review of existing research on the topic published this week by PLoS Medicine. While governments and other proponents are claiming that digitizing health records can save lives and increase efficiency, the review’s “key conclusion is that these claims need to be scrutinized before people invest quite large sums of money in these technologies,” Aziz Sheikh, lead author of the study and a professor of primary care research and development at the Center for Population Health Sciences at the University of Edinburgh, tells the Health Blog.
Sheikh and his colleagues scrutinized 53 reviews of the evidence surrounding technologies including electronic medical records, computerized provider order entry and computerized decision-support systems. The strength of the evidence varied from technology to technology, but in general the review found that “many of the clinical claims made about the most commonly deployed [digital health] technologies cannot be substantiated by the empirical evidence,” the authors write.
Regular readers will remember some comments on a paper with a similar message here:
And regular readers will also remember my comments on this latter study found here:
All this in the last month prompted me to wonder just what might be happening here and why we are seeing such a diversity of study outcomes.
The first point that needs to be made is that there is a pretty large evidence base supporting the use of EHRs. The Agency for Health Care Research and Quality (AHRQ) has assembled a good range of this material.
The key elements of this - and studies of accepted high quality - can be found here:
There are a large range of topics covered with the coverage of clinical decision support being most useful.
There are a few defining characteristics of these reports as best I can tell.
1. They are all retrospective analyses of data that were created for other purposes and with other intended uses, i.e. the core research data were not collected with the study in mind, so their relevance and accuracy for that purpose cannot be assured.
2. They are typically at least 4-5 years behind current practice.
3. They all rely on large data sets that are not all that well defined or definable.
Additionally, it could be that there is a bias in news reporting towards bad outcomes, and hence we hear more about these studies.
To me this is not the way science progresses. We need work undertaken that is prospective, designed to answer specific questions that are actually clearly defined and capable of answer, and work that actually addresses current practice.
It is clear some work of this sort has been analysed by the AHRQ and found to be quite at odds with the reports cited above. My view is that well designed, prospective and controlled studies will show quality Health IT makes a positive difference. What is really needed is more analysis of the successful implementations around the world. Of course, those with the successes are more interested in improving than in spending time touting their success, so it may be we do not hear enough about them (Kaiser, Intermountain and Partners come to mind; they publish, but maybe not enough to get their message out!).
The successes in Africa with the use of simple systems to better manage AIDS care similarly argue that simple things done well can really help!
I simply do not believe the studies cited above are examples of anything approaching conclusive evidence, as the authors, to their credit, point out. More, and better, work is needed to nail this down!
I advise a very critical and sceptical mind-frame when looking at e-Health research, especially aggregate studies of disparate entities and functionality.
Health IT has enough issues to address in utility, quality and safety without having to respond to methodologically challenged research!