Monday, March 21, 2011

Professor Jon Patrick Refines And Expands Publication On Cerner FirstNet Based on ED Director Interviews.

The following announcement appeared this morning.

----- Begin Announcement E-Mail

I've just finished a new critique that I hope you will enjoy.

I have issued a new analysis of the discussions with 7 ED Directors in NSW. The Analysis can be found here

http://sydney.edu.au/engineering/it/~hitru/index.php?option=com_content&task=view&id=120&Itemid=116

Or it is linked from the main report page, which can be found here

http://sydney.edu.au/engineering/it/~hitru/index.php?option=com_content&task=view&id=91&Itemid=146

cheers

jon

Professor Jon Patrick

jon.patrick@sydney.edu.au

Health Information Technologies Research Laboratory: www.it.usyd.edu.au/~hitru

----- End Announcement E-Mail.

This is certainly a very long list of issues Jon was alerted to via the interviews.

I suggest readers browse slowly, consider the possible options and form their own view on just what should happen next with this system. It seems pretty clear to me that the status quo is probably not an option!

David.

13 comments:

Anonymous said...

Sadly I have to disagree - the status quo is an option, simply because Cerner is now so widespread in NSW Health that, regardless of whatever problems exist, if nothing is done to fix them it will still remain entrenched for a very long time. So NSW, by dint of its circumstances, is locked in regardless.

In Victoria, however, where Cerner is being implemented under HealthSmart, the option to wave Cerner bye-bye remains alive until such time as Cerner gets a firm foothold there. Then VIC will be in the same position as NSW.

Anonymous said...

Hmmm...I guess Methodist Le Bonheur Healthcare are wrong...
http://bit.ly/gK6QIT

Dr David More MB, PhD, FACHI said...

On the basis of what we see in NSW, it looks like the system differences really matter - that may just be the reason, among others.

Anonymous comments citing Cerner publicity pieces will not be published in this thread from now on. Use your name or don't get published!

David.

Scot M Silverstein MD said...

One might start asking the questions I raised in my post at http://hcrenewal.blogspot.com/2011/03/real-medical-informatics-what-does.html :

* How did a government for an entire state of a major country come to allow themselves to believe an EHR system such as this would improve conditions in the most mission-critical section of their hospitals, the EDs?

* What testing and validation of the software was done by officials and representatives of said government, and who were they, exactly?

* What experience and background did the validators possess?

* How were clinician complaints during implementation, which has apparently been underway for several years now, handled?

* What other countries are going down the same path?

* Why is not all health IT subject to the same type of government regulator-led validation that this system was put under by a private academic researcher? (Note that in the U.S., pharma IT validation is led by the FDA, but that same agency has essentially shied away from healthcare IT validation.)

* Would a country buy software as ill suited to purpose for, say, mitigating disaster risk in their nuclear power facilities?


Finally, I asked:

If the purpose of Medical Informatics is the improvement of healthcare (as opposed to career advancement of a small number of academics through publishing obscure articles about HIT benefits while ignoring downsides in rarified, echo-chamber peer reviewed journals), then:

* Who are the "real" medical informatics specialists, and;

* Who are the poseurs?

Anonymous said...

Can I ask again, if not Cerner, then what should have been selected?

Anonymous said...

The brand is irrelevant.
The software is only the enabler for clinical transformation – if steps were not taken to map and redesign the processes before choosing and implementing a solution, don’t be surprised that no transformation has been achieved.

Anonymous said...

Browsing the list, there seems to be a large number of configuration issues, such as holding times for medical imaging orders, alert structures, etc.
It would be interesting to know how these "incorrect" configurations had been arrived at, who designed them, who reviewed them, and who signed off on them. These could be a failure of the project, rather than the software itself.

It also seems that some (not all) of the items are "fishing" items, logged in order to boost the number of issues so someone can place a large document on the table and say "see, look at how many issues we have found". This almost never works for getting things resolved, as there is no priority, impact or severity to these items. It really does read like just a list of gripes, and no real analysis of these has been done.

For instance, there is the statement:
"There are a great number of non-relevant content screens, that have to be clicked through to progress the work."
But there is no analysis of what this means. Who has decided these are non-relevant? How big is a "great number"? Does this have any real impact, or is it just one person's view?

For an "analysis", it's remarkably free of any analytical technique.

Anonymous said...

Part 5: I lost count at 30 of the number of times the paper uses words such as "seems", "may", "might", "appears", "could", "potentially", "believe" and "can". Can this paper really be claimed to be a "systematic review"?

Anonymous said...

This is a crucial point:"There are a great number of non-relevant content screens, that have to be clicked through to progress the work."

Usability is a key determinant of clinician user acceptance.

Any more than three clicks and they walk away, and I don't blame them.

Anonymous said...

"Any more than three clicks and they walk away".

A bit like web pages loading, really. If it takes more than 5 to 10 seconds at the outside to load, I cancel and move on. The rationale being that if the site developers knew what they were doing, or had anything really useful to offer me, they would ensure a speedy load to lock in my eyeballs.

Scot M Silverstein MD said...

To those who keep finding reasons to attack the report rather than consider its observations and findings, I offer this:

How Academic and Government Eggheads Kill People

http://hcrenewal.blogspot.com/2011/03/how-academic-boneheads-kill-people.html

Anonymous said...

"To those who keep finding reasons to attack the report"

If you put something like this out for public comment, you've got to expect some comments to be critical, particularly when the report fails to present evidence to back up its conclusions.

The report lacks rigor and a clear analysis that links the long list of concerns to the conclusions.

As to the "government eggheads" article, this is so far away from reality it is astonishing. Jon Patrick's article is not presented as an investigation. It is presented as an academic paper, and, as such, must be peer reviewed to be taken seriously.

As for the validity of anecdotal evidence, then I guess homeopathy must be real since so many people say it has helped them.

And to set the record straight, I believe that Cerner software (like most HIS software) suffers from a large range of problems, most due to the enormous complexity of the software, which makes it virtually unmanageable using any processes commonly employed today.

Scot M Silverstein MD said...

> If you put something like this out for public comment, you've got to expect some comments to be critical, particularly when the report fails to present evidence to back up its conclusions.

Most of the "critique" I've seen is just ad hominem in one form or another, and spends considerable ink on attacking the author and not the issues.

> The report lacks rigor and a clear analysis that links the long list of concerns to the conclusions.

Your statement is, in fact, what lacks rigor. No examples, no references, no citations. Nothing except your bald assertion.

> As to the "government eggheads" article, this is so far away from reality it is astonishing.

In fact, it does not get any more real than this in terms of pointing out the logical fallacy of ignoring all anecdotal evidence. See Jon Patrick's article on that issue.

> Jon Patrick's article is not presented as an investigation. It is presented as an academic paper, and, as such, must be peer reviewed to be taken seriously.

I'm sorry, but generally academic papers are not published first on the Web. It is presented as a journalistic investigation report by a domain expert, for review in the court of public opinion. Academics, of course, as citizens, are permitted to publish such reports, given that they have the academic freedom to publish what they wish.

> As for the validity of anecdotal evidence, then I guess homeopathy must be real since so many people say it has helped them.

May I suggest you study logical fallacies at this site: http://www.nizkor.org/features/fallacies/ . Fallacy of Composition comes to mind.

> And to set the record straight, I believe that Cerner software (like most HIS software) suffers from a large range of problems, most due to the enormous complexity of the software, which makes it virtually unmanageable using any processes commonly employed today.

The best evidence for your assertion comes from a familiar place: Dr. Patrick's work.

-- SS