Sunday, February 14, 2016

Enrico Coiera Demands We Stop It Before We All Go Blind! And Proposes A Fix!

In a fantastic intervention into E-Health, Prof. Enrico Coiera of Macquarie University points out that Australian (and much global) E-Health lacks an evidence base and is being made up as we are all unwillingly dragged along.
Here is his blog (republished with permission):

Evidence-based health informatics

February 11, 2016

Have we reached peak e-health yet?

Anyone who works in the e-health space lives in two contradictory universes.
The first universe is that of our exciting digital health future. This shiny gadget-laden paradise sees technology in harmony with the health system, which has become adaptive, personal, and effective. Diseases tumble under the onslaught of big data and miracle smart watches. Government, industry, clinicians and people off the street hold hands around the bonfire of innovation. Teeth are unfeasibly white wherever you look.
The second universe is Dickensian. It is the doomy world in which clinicians hide in shadows, forced to use clearly dysfunctional IT systems. Electronic health records take forever to use, and don’t fit clinical work practice. Health providers hide behind burning barricades when the clinicians revolt. Government bureaucrats in crisp suits dissemble in velvet-lined rooms, softly explaining the latest cost overrun, delay, or security breach. Our personal health files get passed by street urchins hand-to-hand on dirty thumb drives, until they end up in the clutches of Fagin-like characters.
Both of these universes are real. We live in them every day. One is all upside, the other mostly down. We will have reached peak e-health the day that the downside exceeds the upside and stays there. Depending on who you are and what you read, for many clinicians, we have arrived at that point.

The laws of informatics

To understand why e-health often disappoints requires some perspective and distance. Informed observers again and again see the same pattern of large technology-driven projects sucking up all the e-health oxygen and resources, and then failing to deliver. Clinicians see that the technology they can buy as a consumer is more beautiful and more useful than anything they encounter at work.
I remember a meeting I attended with Branko Cesnik. After a long presentation about a proposed new national e-health system, focusing entirely on technical standards and information architectures, Branko piped up: “Excuse me, but you’ve broken the first law of informatics”. What he meant was that the most basic premise for any clinical information system is that it exists to solve a clinical problem. If you start with the technology, and ignore the problem, you will fail.
There are many corollary informatics laws and principles. Never build a clinical system to solve a policy or administrative problem unless it is also solving a clinical problem. Technology is just one component of the socio-technical system, and building technology in isolation from that system just builds an isolated technology [3].

Breaking the laws of informatics

So, no e-health project starts in a vacuum of memory. Rarely do we need to design a system from first principles. We have many decades of experience to tell us what the right thing to do is. Many decades of what not to do sits on the shelf next to it. Next to these sits the discipline of health informatics itself. Whilst it borrows heavily from other disciplines, it has its own central reason to exist – the study of the health system, and of how to design ways of changing it for the better, supported by technology. Informatics has produced research in volume.
Yet today it would be fair to say that most people who work in the e-health space don’t know that this evidence exists, and if they know it does exist, they probably discount it. You might hear “N of 1” excuse making, which is the argument that the evidence “does not apply here because we are different” or “we will get it right where others have failed because we are smarter”. Sometimes system builders say that the only evidence that matters is their personal experience. We are engineers after all, and not scientists. What we need are tools, resources, a target and a deadline, not research.
Well, you are not different. You are building a complex intervention in a complex system, where causality is hard to understand, let alone control. While the details of your system might differ, from a complexity science perspective, each large e-health project ends up confronting the same class of nasty problem.
The results of ignoring evidence from the past are clear to see. If many of the clinical information systems I have seen were designed according to basic principles of human factors engineering, I would like to know what those principles are. If most of today’s clinical information systems are designed to minimize technology-induced harm and error, I will hold a party and retire, my life’s work done.
The basic laws of informatics exist, but they are rarely applied. Case histories are left in boxes under desks, rather than taught to practitioners. The great work of the informatics research community sits gathering digital dust in journals and conference proceedings, and does not inform much of what is built and used daily.
None of this story is new. Many other disciplines have faced identical challenges. The very name Evidence-based Medicine (EBM), for example, is a call to arms to move from anecdote and personal experience towards research and data-driven decision-making. I remember in the late ‘90s, as the EBM movement started (and it was as much a social movement as anything else), just how hard the pushback was from the medical profession. The very name was an insult! EBM was devaluing the practical, rich daily experience of every doctor, who knew their patients ‘best’, and every patient was ‘different’ to those in the research trials. So, the evidence did not apply.
EBM remains a work in progress. All you need to do today is to see a map of clinical variation to understand that much of what is done remains without an evidence base to support it. Why is one kind of prosthetic hip joint used in one hospital, but a different one in another, especially given the differences in cost, hip failure and infection? Why does one developed country have high caesarean section rates when a comparable one does not? These are the result of pragmatic ‘engineering’ decisions by clinicians – to attack the solution to a clinical problem one way, and not another. I don’t think healthcare delivery is so different to informatics in that respect.

Is it time for evidence-based health informatics?

It is time we made the praxis of informatics evidence-based.
That means we should strive to see that every decision that is made about the selection, design, implementation and use of an informatics intervention is based on rigorously collected and analyzed data. We should choose the option that is most likely to succeed based on the very best evidence we have.
For this to happen, much needs to change in the way that research is conducted and communicated, and much needs to happen in the way that informatics is practiced as well:
  • We will need to develop a rich understanding of the kinds of questions that informatics professionals ask every day;
  • Where the evidence to answer a question exists, we need robust processes to synthesize and summarize that evidence into a practitioner-actionable form;
  • Where the evidence does not exist and the question is important, then it is up to researchers to conduct the research that can provide the answer.
In EBM, there is a lovely notion that we need problem oriented evidence that matters (POEM) [1] (covered in some detail in Chapter 6 of The Guide to Health Informatics). It is easy enough to imagine the questions that can be answered with informatics POEMs:
  • What is the safe limit to the number of medications I can show a clinician in a drop-down menu?
  • I want to improve medication adherence in my Type 2 Diabetic patients. Is a text message reminder the most cost-effective solution?
  • I want to reduce the time my docs spend documenting in clinic. What is the evidence that an EHR can reduce clinician documentation time?
  • How gradually should I roll out the implementation of the new EHR in my hospital?
  • What changes will I need to make to the workflow of my nursing staff if I implement this new medication management system?
EBM also emphasises that the answer to any question is never an absolute one based on the science, because the final decision is also shaped by patient preferences. A patient with cancer may choose a treatment that is less likely to cure them, because it is also less likely to have major side-effects, which is important given their other goals. The same obviously holds in evidence-based health informatics (EBHI).

The Challenges of EBHI

Making this vision come true would see some significant long term changes to the business of health informatics research and praxis:
  • Questions: Practitioners will need to develop a culture of seeking evidence to answer questions, and not simply do what they have always done, or what their colleagues do. They will need to be clear about their own information needs, and to be trained to ask clear and answerable questions. There will need to be a concerted partnership between practitioners and researchers to understand what an answerable question looks like. EBM has a rich taxonomy of question types, and the questions in informatics will be different, emphasizing engineering, organizational, and human factors issues amongst others. There will always be questions with no answer, and that is the time experience and judgment come to the fore. Even here, though, analytic tools can help informaticians explore historical data to find the best historical evidence to support choices.
  • Answers: The Cochrane Collaboration helped pioneer the development of robust processes of meta-analysis and systematic review, and the translation of these into knowledge products for clinicians. We will need to develop a new informatics knowledge translation profession that is responsible for understanding informatics questions, and finding methods to extract the most robust answers to them from the research literature and historical data. As much of this evidence does not typically come from randomised controlled trials, methods other than meta-analysis will be needed. Case libraries, which no doubt exist today, will be enhanced and shaped to support the EBHI enterprise. Because we are informaticians, we will clearly favor automated over manual ways of searching for, and summarizing, the research evidence [2]. We will also hopefully excel at developing the tools that practitioners use to frame their questions and get the answers they need. There are surely both public good and commercial drivers to support the creation of the knowledge products we need.
  • Bringing implementation science to informatics: We know that informatics interventions are complex interventions in complex systems, and that the effect of these interventions vary depending on the organisational context. So, the practice of EBHI will of necessity see answers to questions being modified because of local context. I suspect that this will mean that one of the major research challenges to emerge from embracing EBHI is to develop robust and evidence-based methods to support localization or contextualisation of knowledge. While every context is no doubt unique, we should be able to draw upon the emerging lessons of implementation science to understand how to support local variation in a way that is most likely to see successful outcomes.
  • Professionalization: Along with culture change would come changes to the way informatics professionals are accredited, and reaccredited. Continuing professional education is a foundation of the reaccreditation process, and provides a powerful opportunity for professionals to catch up with major changes in the science, and with how those changes should alter the way they approach their work.


There comes a moment when surely it is time to declare that enough is enough. There is an unspoken crisis in e-health right now. The rhetoric of innovation, renewal, modernization and digitization makes us all want to be believers. The long and growing list of failed large-scale e-health projects, and the uncomfortable silence that hangs when good people talk about the safety risks of technology, make some think that e-health is an ill-conceived if well-intentioned moment in the evolution of modern health care. This does not have to be.
To avoid peak e-health we need to not just minimize the downside of what we do by avoiding mistakes. We also have to maximize the upside, and seize the transformative opportunities technology brings.
Everything I have seen in medicine’s journey to become evidence-based tells me that this will not be at all easy to accomplish, and that it will take decades. But until we do, the same mistakes will likely be rediscovered and remade.
We have the tools to create a different universe. What is needed is evidence, will, a culture of learning, and hard work. Less Dickens and dystopia, more Star Trek and utopia.

  1. Slawson DC, Shaughnessy AF, Bennett JH. Becoming a medical information master: feeling good about not knowing everything. The Journal of Family Practice 1994;38(5):505-13
  2. Tsafnat G, Glasziou PP, Choong MK, et al. Systematic Review Automation Technologies. Systematic Reviews 2014;3(1):74
  3. Coiera E. Four rules for the reinvention of healthcare. BMJ 2004;328(7449):1197-99
Here is the link:
Regular readers will know I have been asking for evidence of value from the PCEHR for years now - and I am sure the lack of such evidence is part of what concerns Enrico, along with other similar interventions that equally lack evidence.
The recent poll results reported at the link below, I am sure, reflect the failure of DoH to actually manage Australian E-Health in an evidence-based way. We can only hope the Australian Digital Health Agency is listening and will do better!
See here:


Anonymous said...

Good luck with that! E-health to date has been technology- and supplier-driven. It will remain so until funding for such initiatives is substantially reduced to cause systematic change. Sure, there has been a bout of buyer's remorse from the clinical and doctor fraternity - claiming "we were robbed". But anyone who has worked in the sector also knows doctors are pretty good at gaming the system too. So when the IT suppliers, consultants and contractors actively game it for their own ends, there is little in the health culture to correct such behaviours. Indeed, the model of "clinical leads" perpetuated it, because the money to pay for them was a bribe from the IT side of the fence.

I will believe a "clinical lead" when they themselves are not on the same gravy train.

Just as you think no more money will be spent on dud proposals; sure enough there are too many interests to not have the money continue to flow. There is now a whole industrial complex built up around health IT projects. This is the reality.

Enrico Coiera said...

Re: February 14, 2016 4:59 PM

What you describe is no different to the world of clinical medicine and big pharma. Lots of vested interests (that persist to this day) distort the clinical decision process. But despite that, there have been major changes to prescriber and purchaser behaviour because of the EBM movement. But it has taken at least a decade or more for that to happen and repeating this in informatics would be no easier a challenge.

So, yes, there are lots of challenges, and reasons to not start, but I doubt the IT/consulting industry is any harder to refashion than the medical-industrial complex. It's a very big job. Not an impossible job.

Anonymous said...

Yes, I am describing how the model of clinical medicine and big pharma has been adopted by IT in healthcare, or equally how healthcare (with such practices) has approached big IT projects.

Bernard Robertson-Dunn said...

IMHO, Enrico is completely correct when he talks about vested interests.

I recently came across a report from the RAND Corporation which seems to have been used over and over again to justify electronic medical record systems. The claim is "that interoperable EMR systems could produce efficiency and safety savings of $142-$341 billion" (2005 dollars).

I suspect this article and/or the thinking in it has been reused over and over again.

Their logic goes like this:

Other industries have used IT to become more efficient; therefore, if medicine uses IT, it will save large amounts of money.

This has been jumped upon by consultants, and a whole range of vendors, all with vested interests. The message has got through to politicians who are keen to reduce costs and will clutch at straws.

Unfortunately, the argument is naïve, simplistic and demonstrates a lack of understanding of the complexity of the use of IT. To put it briefly, their modelling is flawed.

Slightly less briefly:

In clinical health care, there are very few repeatable processes that can be automated, so you are unlikely to get any significant efficiency benefit from using computers. You can improve health care with health information systems (proper ones) and administrative systems, but that will just add to cost.

In pathology and diagnostic testing, innovative use of IT means that more expensive tests are created which are used more frequently, once again, in order to improve health care. More cost increases.

Build a hospital and one of the major costs is IT and communications.

In other words, using IT in health care will drive up costs, not reduce them. The effectiveness of health care may rise (it might rise even more if EBM is properly implemented) but efficiency won’t.

The health industry is not like those industries where automation has brought about great reductions in cost. Caring for a sick person is not like manufacturing a smartphone or a laser printer or selling a book on-line.

Fighting the myth will be very difficult and will take a long time.

Terry Hannan said...

To Enrico and David, thank you for allowing this posting out into a wider public domain (it was already there, but I guess most of us did not realise it). Enrico has a wonderful gift for rattling our neurological cages in the most appropriate ways. I know he has shaken up how I have been looking at informatics. One important statement in his blog, as I see it, is that there are volumes of documentation out there on the evidence for informatics, and where possible it should be read or sought out before major policy decisions and public statements are made, because we are involved in a new scientific discipline where the health of individuals, groups and populations is involved. We should not be here to be seduced by the technology underlying informatics, but should see its use as bringing access to effective health care management (treatment, prevention and education) to those who can often least afford it (the philosophies of Francois Gremy and Kendall Ho).

Grahame Grieve said...

Bernard: "In pathology and diagnostic testing, innovative use of IT means that more expensive tests are created which are used more frequently, once again, in order to improve health care".

Evidence please? I've seen IT systems - expert systems - used to improve care by recommending additional clinically appropriate tests, or to reduce costs by pointing out unnecessary tests on reports, or to reduce costs by identifying duplicate tests during the ordering process. But I'm not aware of where 'innovative use of IT' has led to the creation of new, more expensive tests. Unless you generically mean that all research into anything is assisted by IT one way or another.

Oliver Frank said...

Very well said, as always, Enrico.

I would add that there is much to be gained from clinical practitioners who also have an understanding of clinical informatics, because their daily working experience can provide to the professional health informaticians many of the questions that may be worth trying to answer, and many of the problems that may be worth seeking to address.

However, it also often takes somebody from 'outside' (who may or perhaps should be a non clinician) who is not tainted with the 'this is how we have to do things, or how we have always done things' mindset, and who has a knowledge of where problems in the design and/or operation of information systems are likely to be, to be able to see what could be changed for the better, and then work to generate the evidence for the change.

Bernard Robertson-Dunn said...


I'm not referring to information systems; I'm talking about the use of IT in medical and associated laboratory gear such as:

Medical imaging, e.g. MRI, fMRI, CT scans, PET scans, bone scans, ultrasound etc.

Biometric testing, e.g. electroencephalography, magnetoencephalography, electrocardiography etc

Pathology tests such as tissue and fluid testing, drug testing, genetic/DNA testing etc.

I went to
and they have a list of over 1500 tests. My guess is that most of them are either automatic IT based tests or human but IT assisted/enabled.

Bernard Robertson-Dunn said...

And then there's IT used in surgery, in biomedical engineering, prosthesis, exoskeletons, cochlear implants, vision systems etc etc.

Grahame Grieve said...

OK, so IT is used widely for lab tests. That's certainly true - and I used to be development lead for Kestral, a lab systems vendor, so I know a little about this. IT is used to drive down costs and improve testing efficiency and throughput.

I do not see how you can claim "innovative use of IT means that more expensive tests are created which are used more frequently". The tests are created based on their clinical efficiency and utility. Of course, there are tests that would not be possible without IT-based support, but so what? What's your argument?

Anonymous said...

Too often I seem to see variations of the phrase "it will be better if it's done on a computer". Vendors are particularly prone to this as it does not require them to answer the difficult questions around clinical delivery models; it implies that these won't change, and so the horrible spectre of "change management" can be reduced or removed entirely.

It's obvious what a train wreck the result is.

We currently (yes, I work in the area) have two major projects (and others) running across several acute care hospitals in two major service delivery areas. Both areas have stated that the service delivery models will not change under any circumstances because the vendors have stated that this is not needed, yet it is apparent that change will be required. I can already hear the post go-live bleatings about how the new systems don't work with the current workflows.

Bernard Robertson-Dunn said...

Expanding what I said above:

In other words, using IT in health care will drive up costs, not reduce them. This is because IT enables more tests, more interventions, more solutions.

The effectiveness of health care may rise (it might rise even more if EBM is properly implemented), but overall costs won't fall and efficiency won't rise. Existing processes and tests may reduce in cost, but because there are more of them, total costs go up.

And governments around the world are very worried about this trend. I suspect a few health economists are fully aware of this, but the vested interests don't really want to see health budgets shrink.

And for a politician to say "we are going to cut back on tests, medical treatments and research" is suicide.

Bernard Robertson-Dunn said...

2011, USA numbers:

"The increasing cost of medical technology is a significant contributor to higher health care spending. The implementation of new medical technology accounts for between 38 percent and 65 percent of health care spending increases. New technology expands the range of treatment options available to patients, but it does so by replacing lower-cost options with higher-cost services.

Many technologies that have very high value for some patients – meaning, improved outcomes in relation to costs – are applied too broadly, with little benefit for many patients."