The Medical Journal of Australia published an important paper and editorial last week.
Quality of drug interaction alerts in prescribing and dispensing software
Michelle Sweidan, James F Reeve, Jo-anne E Brien, Pradeep Jayasuriya, Jennifer H Martin and Graeme M Vernon
Abstract
Objective:
To investigate the quality of drug interaction decision support in selected prescribing and dispensing software systems, and to compare this information with that found in a range of reference sources.
Design and setting:
A comparative study, conducted between June 2006 and February 2007, of the support provided for making decisions about 20 major and 20 minor drug interactions in six prescribing and three dispensing software systems used in primary care in Australia. Five electronic reference sources were evaluated for comparison.
Main outcome measures:
Sensitivity, specificity and quality of information; for major interactions: whether information on clinical effects, timeframe and pharmacological mechanism was included, whether management advice was helpful, and succinctness.
Results:
Six of the nine software systems had a sensitivity rate ≥ 90%, detecting most of the major interactions. Only 3/9 systems had a specificity rate of ≥ 80%, with other systems providing inappropriate or unhelpful alerts for many minor interactions. Only 2/9 systems provided adequate information about clinical effects for more than half the major drug interactions, and 1/9 provided useful management advice for more than half of these. The reference sources had high sensitivity and in general provided more comprehensive clinical information than the software systems.
Conclusion:
Drug interaction decision support in commonly used prescribing and dispensing software has significant shortcomings.
The full paper is found here (if you are a subscriber – otherwise bad luck for 12 months):
http://www.mja.com.au/public/issues/190_05_020309/swe11286_fm.html
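As an aside on the metrics: in this study "sensitivity" is presumably the fraction of the 20 major interactions for which a system raised an alert, and "specificity" the fraction of the 20 minor interactions for which it correctly stayed quiet. A minimal sketch in Python of the arithmetic (the counts are invented for illustration, not taken from the paper):

```python
# Sketch of per-system sensitivity and specificity, assuming the study's
# design: 20 major interactions (should alert) and 20 minor interactions
# (should not produce a prominent alert). Counts are invented examples.

def sensitivity(major_alerted: int, major_tested: int = 20) -> float:
    """Fraction of major interactions the system alerted on."""
    return major_alerted / major_tested

def specificity(minor_silent: int, minor_tested: int = 20) -> float:
    """Fraction of minor interactions the system correctly left un-alerted."""
    return minor_silent / minor_tested

# Hypothetical system: alerts on 19 of 20 major pairs,
# but over-alerts on 6 of 20 minor pairs.
print(f"sensitivity: {sensitivity(19):.0%}")      # 95%
print(f"specificity: {specificity(20 - 6):.0%}")  # 70%
```

On those definitions a system can look excellent on sensitivity while still drowning users in unhelpful alerts, which is exactly the pattern the paper reports.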
The editorial begins thus:
Quality of prescribing decision support in primary care: still a work in progress
Farah Magrabi and Enrico W Coiera
Clinical software governance and real-world testing involving users are urgently needed
In this issue of the Journal, a study from the National Prescribing Service (NPS) examines the quality of drug interaction alerts generated by nine clinical software systems currently used by general practitioners and pharmacists in Australia for prescribing or dispensing medications (Sweidan et al). The findings will come as no surprise to those who have repeatedly expressed concern about the shortcomings of clinical decision support software. Only half of the six prescribing systems examined by the NPS alerted users to all 20 of the major drug–drug interactions tested, interactions that can occur with commonly used drugs and have the potential to trigger serious adverse reactions. The best of the three dispensing systems detected 19 of these drug interactions. Yet Australian GPs are heavily reliant on such software alerts: 88% of respondents to a recent national survey reported relying on their prescribing software to check for drug–drug interactions. Any failure of decision support systems to provide adequate drug safety alerts is thus likely to pose risks to patient safety.
The rest of the editorial is found here:
http://www.mja.com.au/public/issues/190_05_020309/mag11315_fm.html
To take this from a slightly different perspective, it seems to me that the justification for the use of e-prescribing systems rests on their reducing the risk of poor clinical outcomes by ensuring, as far as possible, that the drugs prescribed to an individual are, taken as a whole, at least safe and hopefully effective.
If they don’t work optimally, then that justification, and indeed the rationale behind the use of such systems, is challenged.
No one would put up with a banking system that got your account balance wrong 20 or 30% of the time or an airline booking system that got departure times wrong 20% of the time!
It is not beyond the wit of man to consistently take the information in a database and reliably transform that information into an accurate and consistent response. If this is not done properly then the product is simply not fit for purpose and should be returned for a refund!
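To illustrate just how mechanical the core check is, here is a minimal sketch of a deterministic drug-pair lookup in Python. The table entries are invented placeholders, not clinical data; a real system would load a curated, versioned interaction database:

```python
# Minimal sketch of a deterministic drug-interaction check: the same
# inputs always produce the same alerts. Table entries are placeholders.

from itertools import combinations

# Keyed on an order-independent pair of drug names.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "major",
    frozenset({"simvastatin", "clarithromycin"}): "major",
    frozenset({"paracetamol", "caffeine"}): "minor",
}

def check_interactions(prescribed: list[str]) -> list[tuple[str, str, str]]:
    """Return (drug_a, drug_b, severity) for every known interacting pair."""
    alerts = []
    for a, b in combinations(sorted(d.lower() for d in prescribed), 2):
        severity = INTERACTIONS.get(frozenset({a, b}))
        if severity is not None:
            alerts.append((a, b, severity))
    return alerts

print(check_interactions(["Warfarin", "Aspirin", "Paracetamol"]))
# [('aspirin', 'warfarin', 'major')]
```

The hard clinical work is in curating the table, but the lookup itself is entirely deterministic – there is no excuse for a system missing an interaction that is sitting in its own database.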
So system reliability and predictability should be a given. It is that simple. The NPS should just name names here and say which system is best, and let the market (and firm regulation) rip – although I can understand their reluctance to do so!
There is also a second, and to me much more difficult, issue. This is the question of how to ensure the knowledge in the database and software is effectively transferred to usable knowledge in the mind of the clinician, so the right decisions are made. There are all sorts of issues involved in achieving this outcome, including interface design, alert and alarm presentation, user control, machine learning, user capability and so on.
We have an obligation to get the software and database information correct. Are you listening, NEHTA and the TGA? They need to work on this together and fix this problem. It is really simple: regulation just specifies which systems can be used for e-prescribing, and after reasonable notice it becomes illegal to use the second-rate products. The risk to individuals is just too high to ignore the issue.
The second issue needs to be the subject of a lot of thinking, research and evaluation. The outcome needs to be evidence-based usability design parameters that really make the linkage between the knowledge database and the prescriber as effective as possible.
The following link provides a useful starting point (Thanks Scot Silverstein) – as mentioned last week.
http://hcrenewal.blogspot.com/2009/02/are-health-it-designers-testers-and_27.html
Doing nothing is both dangerous and not really an option!
David.
Small Note:
I note the National Prescribing Service (NPS) has re-commenced its attack on advertisements in prescribing software. I support this stance 100% and they are to be commended for taking a strong stand! We don’t want decision support distorted by advertising!
A report is available here if you have access:
Australian NPS renews ad ban call
Posted 9 March 2009
The Australian National Prescribing Service (NPS) is renewing its call for drug advertising to be banned from prescribing software, saying it breaches state and federal law.
In a submission to Medicines Australia's Code of Conduct Review, the NPS maintains that software advertising "appears to contravene State and Commonwealth legislation that prohibits direct-to-consumer advertising of prescription medicines".
Full report here:
http://www.pharmainfocus.com.au/news.asp?newsid=2656
D.
2 comments:
This raises the more basic issue around the quality of software used in the health sector. While there has been significant movement in the US under the FDA to improve the quality of health-focussed software, the same cannot be said for Australia.
For most health software, testing by the vendors amounts to not much more than cursory component and integration testing, and even less functional testing. Try to get documentation out of a vendor as to what testing has been done. Good luck.
Testing of patches to "fix" acknowledged defects is even worse, and the concept of regression testing is often met with blank stares. Because of this, patches always, without exception, break other functionality in the product.
Decision support is a good example.
Testing how well a business process is modelled in the software is simply not done by vendors. This falls to the customer. If you are a GP, do you even recognise this is required, let alone have the resources to do it?
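To make that concrete, here is a minimal sketch (with an invented checker and invented drug pairs) of the kind of automated regression test that could run against every patch before it ships:

```python
# Sketch of a regression test for drug-interaction alerting, runnable
# with the standard library. The checker and pairs are placeholders.

import unittest

INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "major"}

def alert_for(drug_a: str, drug_b: str):
    """Return the severity of a known interaction, or None."""
    return INTERACTIONS.get(frozenset({drug_a.lower(), drug_b.lower()}))

class AlertRegressionTests(unittest.TestCase):
    def test_major_interaction_still_alerts(self):
        # A known major pair must keep alerting after every code change.
        self.assertEqual(alert_for("Warfarin", "Aspirin"), "major")

    def test_non_interacting_pair_stays_silent(self):
        # Over-alerting is a defect too: specificity must not regress.
        self.assertIsNone(alert_for("Paracetamol", "Amoxicillin"))

if __name__ == "__main__":
    unittest.main()
```

Run on every build, a suite like this is precisely what catches the "patch breaks other functionality" failures described above.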
It really is time for software vendors to be called on the carpet about the quality of the product they deliver. Any other industry would die if it produced products of such poor quality and sold them with such inflated statements of the product's capabilities.
As mentioned in the report, it should be noted that the iterations of the software that were tested are now nearly 3 years old, which is the equivalent of 21 dog years and about 100 software years.
If the testing methodology used in this research is believed to be sound, perhaps the researchers could salvage some extra bang for the taxpayer's buck and apply the tests to the current versions of the software, releasing updated, unblinded results later in the month?