This appeared a few days ago:
ED patient encounters not accurately documented in EHR
September 19, 2019, 12:08 a.m. EDT
Documentation in electronic health records did not match observed and recorded physicians' behaviors during patient encounters, according to a study of emergency department residents.
The study, published on Wednesday in the journal JAMA Network Open, involved emergency departments in two academic medical centers where residents’ patient encounters were observed to compare real-time performance with EHR documentation.
“No other study has attempted to quantify the accuracy of electronic physician documentation using concurrent observation,” the authors contend.
In the study, nine residents were shadowed by trained observers for 20 encounters—10 encounters per physician per site—to gather real-time observational data, with the associated EHR data subsequently reviewed.
According to the authors, there were “inconsistencies” between the documentation of review of systems (ROS) and physical examination (PE) findings in the EHR and observational reports.
The study makes the case that, while physicians "may commonly dictate or type a customized history of present illness or medical decision-making note," the ROS and PE sections of the EHR may be prone to inaccuracy because of the use of autopopulated text.
More here:
Here are the Abstract and Key Points from the article:
September 18, 2019
Concordance Between Electronic Clinical Documentation and Physicians’ Observed Behavior
Carl T. Berdahl, MD, MS; Gregory J. Moran, MD; Owen McBride, MD; Alexandra M. Santini, BS; Ilya A. Verzhbinsky, BS; David L. Schriger, MD, MPH
Question How closely does documentation in electronic health records match the review of systems and physical examination performed by emergency physicians?
Findings In this case series of 9 licensed emergency physician trainees and 12 observers of 180 patient encounters, 38.5% of the review of systems groups and 53.2% of the physical examination systems documented in the electronic health record were corroborated by direct audiovisual or reviewed audio observation.
Meaning These findings raise the possibility that some physician documentation may not accurately represent actions taken, but further research is needed to assess this in more detail.
Abstract
Importance Following the adoption of electronic health records into a regulatory environment designed for paper records, there has been little investigation into the accuracy of physician documentation.
Objective To quantify the percentage of emergency physician documentation of the review of systems (ROS) and physical examination (PE) that observers can confirm.
Design, Setting, and Participants This case series took place at emergency departments in 2 academic medical centers between 2016 and 2018. Participants’ patient encounters were observed to compare real-time performance with clinical documentation.
Exposures Resident physicians were shadowed by trained observers for 20 encounters (10 encounters per physician per site) to obtain real-time observational data; associated electronic health record data were subsequently reviewed.
Main Outcomes and Measures Number of confirmed ROS systems (range, 0-14) divided by the number of documented ROS systems (range, 0-14), and number of confirmed PE systems (range, 0-14) divided by the number of documented PE systems (range, 0-14).
Results The final study cohort included 9 licensed emergency medicine residents who evaluated a total of 180 patients (mean [SD] age, 48.7 [20.0] years; 91 [50.5%] women). For ROS, physicians documented a median (interquartile range [IQR]) of 14 (8-14) systems, while audio recordings confirmed a median (IQR) of 5 (3-6) systems. Overall, 755 of 1961 documented ROS systems (38.5%) were confirmed by audio recording data. For PE, resident physicians documented a median (IQR) of 8 (7-9) verifiable systems, while observers confirmed a median (IQR) of 5.5 (3-6) systems. Overall, 760 of 1429 verifiable documented PE systems (53.2%) were confirmed by concurrent observation. Interrater reliability for rating of ROS and PE was more than 90% for all measures.
Conclusions and Relevance In this study of 9 licensed emergency medicine residents, there were inconsistencies between the documentation of ROS and PE findings in the electronic health record and observational reports. These findings raise the possibility that some documentation may not accurately represent physician actions. Further studies should be undertaken to determine whether this occurrence is widespread. However, because such studies are unlikely to be performed owing to institution-level barriers that exist nationwide, payers should consider removing financial incentives to generate lengthy documentation.
End Abstract.
Here is the link to the Abstract.
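As a purely illustrative aside (my own sketch in Python, not from the article), the study's outcome measure is simply the number of confirmed systems divided by the number of documented systems, and the aggregate figures quoted in the abstract fall out directly:

    def confirmation_rate(confirmed: int, documented: int) -> float:
        # Fraction of documented systems that observers could confirm,
        # as defined in "Main Outcomes and Measures" above.
        # Undefined if nothing was documented at all.
        return confirmed / documented if documented else float("nan")

    # Aggregate figures quoted in the abstract:
    print(f"ROS: {confirmation_rate(755, 1961):.1%}")  # 38.5%
    print(f"PE:  {confirmation_rate(760, 1429):.1%}")  # 53.2%

In other words, well under half of the documented ROS systems, and only just over half of the documented PE systems, could be confirmed by observation.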
This is important stuff, as it says that if you incentivise physicians using an EMR to produce so-called ‘comprehensive’ notes, they will do so, possibly at the expense of accuracy.
There is a clear message here for system designers – build systems that make it easy to capture what you actually observe with the patient, and don’t expect hundreds of negatives (e.g. no pain in the abdomen, etc.) just to cover the tail and increase billing. These negative tail-covering notes may seem great but may actually fall into the NAD category (Not Actually Done! Or assessed, etc.). A rough sketch of what such a design might look like follows below.
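To make that concrete, here is a minimal sketch (in Python; all names and structure are hypothetical, my own assumption rather than any real EHR's design) of a note model where every system defaults to "not assessed" and only explicit clinician entries appear in the output – no auto-populated wall of negatives:

    # Hypothetical sketch only: not any real EHR's API or data model.
    from enum import Enum

    class Status(Enum):
        NOT_ASSESSED = "not assessed"   # the default: nothing is documented
        NORMAL = "normal"
        ABNORMAL = "abnormal"

    ROS_SYSTEMS = ["constitutional", "cardiovascular",
                   "respiratory", "gastrointestinal"]

    class ExamNote:
        def __init__(self):
            # Everything starts NOT_ASSESSED; there is deliberately
            # no "select all normal" shortcut to auto-fill negatives.
            self.findings = {s: (Status.NOT_ASSESSED, "") for s in ROS_SYSTEMS}

        def record(self, system: str, status: Status, detail: str = ""):
            # Each documented system requires an explicit clinician entry.
            self.findings[system] = (status, detail)

        def render(self) -> str:
            # Only systems the clinician actually addressed appear in the note.
            lines = [f"{s}: {st.value}" + (f" - {d}" if d else "")
                     for s, (st, d) in self.findings.items()
                     if st is not Status.NOT_ASSESSED]
            return "\n".join(lines) or "No systems documented."

    note = ExamNote()
    note.record("cardiovascular", Status.NORMAL)
    note.record("gastrointestinal", Status.ABNORMAL,
                "tenderness in right lower quadrant")
    print(note.render())

The design choice here is the point: the note can only ever contain what was explicitly entered, so a long list of normal findings means the clinician actually asserted each one.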
This really is a classic example of incentive use that produces unanticipated consequences! I wonder whether the DoH and the ADHA will notice the trouble and inaccuracy they might be causing?
David.
2 comments:
To be honest, I think you'll find the same problem with recording of information in a patient history using any technique, including hand-written records.
So EHRs are no better than existing manual methods.
Well that's good to know. A pity ADHA doesn't know it. Or maybe they do.