We have had a dump of material delivered late last week. Here are the links to the files:
The seven actual documents are all pretty imposing and complicated, and extend to 100+ pages each - not that there isn't a heap of boilerplate within each.
I wondered just what these specifications were for and who was expected to use them.
Going to the NEHTA Site makes things a lot clearer:
Data Specifications
The data specifications aim to standardise the information structure and language that names and describes clinical concepts, and to provide a basis for the development of further, context-targeted specifications that can be implemented by system designers.
They are not intended to be software or messaging design specifications. Instead, they represent the clinical information requirements for data collection and information exchange to facilitate safe and effective continuity of care across healthcare settings - for example, General Practice and Acute Care.
Intended Audience
This resource is targeted at:
- jurisdictional ICT managers
- clinicians involved in clinical information system specifications
- software architects and developers, and
- implementers of clinical information systems and other relevant applications in various healthcare settings.
The content is reasonably technical in nature and expects the audience to be familiar with the language of health data specification, and to have some familiarity with Australian Standards for health messaging and/or repositories of data specifications.
This information is found here:
Looking at the actual files I note, for example, that the Pathology Detailed Clinical Model first appeared as Version 1.0 on 29 May, 2007.
Some 4.25 years later we get a recast Version 2.0.
The questions which rush into my mind are:
Who is actually using and implementing these specifications after 4 years?
If anyone has implemented them, what value have they seen from implementation and use?
Why might software developers choose to use them in isolation - as there does not seem to be any ongoing plan for them?
How will these specifications be maintained over time, and who takes over if NEHTA is not funded in perpetuity?
Is semantic interoperation achievable without an agreed data model, and is that model part of these specifications? I do understand the need for data and information clarity if information is to be exchanged between systems, but with SNOMED CT-AU and AMT both in a less than finalised state, where does this all fit?
It is by no means clear to me just what the underlying data model for all this is, who owns it and maintains it, etc.
With PCEHR software being sourced internationally, just where do these data groups etc. fit?
Overall, thus far, there seems to have been a lot of work done for no obvious outcome. I look forward to having all this explained to me.
I do note that, talking about Detailed Clinical Models (DCMs), NEHTA says:
“The collaboration process in the NEHTA Clinical Knowledge Manager (CKM) will result in a library of archetypes (initially openEHR archetypes) based upon requirements identified by Australian clinicians and other health domain experts, and drawing from comparable work overseas. To create the DCMs, these archetypes will be transformed into platform and reference model agnostic models (based upon ISO 11179). They will then be uploaded to the National Information Component Library that NEHTA is in the process of building.
Initially, the DCMs will be available only in human-readable PDF format. In the medium term we intend to make them available in a number of machine-readable formats, and we will consult the community to determine what formats are required. CKM is being used to gather and formalise requirements for the DCMs and to support the life cycle management of each DCM through a collaborative, online review process. This provides an important vehicle for clinicians and domain experts to validate that the clinical requirements have been met, and warrant that the resulting published DCMs are safe, high quality and fit for purpose. They will then be uploaded to the National Information Component Library that NEHTA is in the process of building.”
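For readers unfamiliar with the ISO 11179 reference in the quote above, the core idea is that a data element is defined independently of any software platform: a concept (an object class plus a property of it) combined with a value domain (how the value is represented). A minimal sketch of that structure - all class names and the sample values here are purely illustrative, not taken from the NEHTA models:

```python
# Sketch of the ISO 11179 metadata-registry idea: a data element couples
# a concept (object class + property) with a value domain (representation).
# Names and sample values are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValueDomain:
    """How the value is represented, independent of any platform."""
    datatype: str
    units: Optional[str] = None

@dataclass
class DataElement:
    """Concept (object class + property) plus its representation."""
    object_class: str   # the thing being described
    property: str       # the characteristic of that thing
    value_domain: ValueDomain

# An illustrative pathology-flavoured data element
serum_creatinine = DataElement(
    object_class="Pathology result",
    property="Analyte level",
    value_domain=ValueDomain(datatype="Quantity", units="umol/L"),
)
print(serum_creatinine.object_class, "-", serum_creatinine.property)
```

The point of the separation is that the same concept can later be bound to different representations (an openEHR archetype node, a CDA entry, a v2 field) without redefining the concept itself.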
More here:
I wonder: is the National Information Component Library the data model that seems to be missing to underpin all this - and a range of other initiatives - or is there something else at a more structured level?
I suspect we are yet to see the full picture of where this is all headed. I would be happy for any brief explanations as to what these really mean and who will actually deploy them.
David.
13 comments:
hi David
Some answers to your questions:
> I wonder: is the National Information Component Library the data model
> that seems to be missing to underpin all this
yes, this is what it is intended to be - the underlying formal definitions that underpin the rest of the actual exchange specifications.
> Who is actually using and implementing these specifications
> after 4 years?
Certainly the gestation of these things has been slower than hoped and desired. This is hardly unusual. The NEHTA Clinical Information team does use them - in fact, they are canonical. And therefore anyone implementing NEHTA specifications today - and there are many people doing so - is using these. Since the exchange specifications do not provide clarity as to the underlying data model, these are published.
The primary utility of these models is as a guide to the design of applications; they represent widely agreed models of understanding the problems, and they capture the outcomes of an extensive business analysis (>4 years, as you point out, with quite wide consultation, including a process open to the public). As such they are suitable for direct implementation in software, and in their archetype form (on the NEHTA CKM at dcm.nehta.gov.au - anyone interested should participate) they can and will actually be used directly by openEHR systems. As you quote, additional usable forms will be provided *when we know what formats will be useful*.
> If anyone has implemented what value have they seen from implementation and use?
I personally have implemented the pathology archetype, and it's very useful (though not in a production system yet), and it draws directly on many years of commercial experience from many people. I can't personally speak to the others.
> Why might software developers choose to use them in isolation
> - as there does not seem to be any ongoing plan for them?
Well, as long as the NEHTA process is ongoing, they will be maintained, and there is a process going forwards (hopefully more about this will be published in due time).
> How will these specifications be maintained over time and
> who takes over if NEHTA is not funded in perpetuity?
Well, that's an open question about everything NEHTA is doing right? But there's a set of foundation blocks that NEHTA is building now that will need to continue somehow whatever happens. These are part of those.
> Is semantic interoperation achievable without an agreed data model
yes, it is. We have done so for many years in v2. However, an underpinning agreed data model offers dramatically cheaper implementation. These models are mapped to CDA (published) and partially to HL7 v2 (to the degree possible), and are available as openEHR archetypes - that's a pretty good achievement, though it will take time for this to deliver home runs.
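For context on the v2 point: HL7 v2 achieves exchange without a shared formal model because meaning is carried by agreed delimiter positions within segments. A rough sketch of the idea - the sample OBX segment and the field positions used here are for illustration only, not drawn from any NEHTA specification:

```python
# Minimal illustration of HL7 v2-style exchange: meaning lives in agreed
# field positions, not in a shared formal data model. The sample OBX
# (observation result) segment below is invented for illustration.
obx = "OBX|1|NM|14682-9^Creatinine^LN||90|umol/L|45-90||||F"

fields = obx.split("|")                     # v2 fields are pipe-delimited
code, name, system = fields[3].split("^")   # components are ^-delimited

result = {
    "code": code,               # observation identifier
    "name": name,
    "system": system,
    "value": fields[5],
    "units": fields[6],
    "reference_range": fields[7],
    "status": fields[11],       # F = final
}
print(result["name"], result["value"], result["units"])
```

Both sender and receiver must agree out-of-band on what each position means, which is workable but is exactly the per-interface negotiation cost that an agreed underlying data model is meant to reduce.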
> and is that model part of these specifications?
yes, these are part of it.
> with SNOMED CT-AU and AMT both in a less than
> finalised state where does this all fit?
Well, we're all frustrated by the length of time it is taking to get the terminology stuff done.
> It is by no means clear to me just what the underlying data-model
> for all this is, who owns it and maintains it etc.
as I said, hopefully this will be published in the future.
> With PCEHR software being sourced internationally just where
> do these data group etc. fit?
The infrastructure might be based on projects from overseas, but the services and the documents and requirements must be those that come from Australian Standards, NEHTA specifications - all the work we have already done.
Hope this clears up the questions.
Thanks Grahame,
In summary, still a work in progress with no clearly defined end point.
David.
> In summary still a work in progress with no clearly defined end point
welcome to interoperability!
Grahame - hence all the discussion on complexity and my suggestions to really start very simple and very basic... but that is no fun, I realise!
The sad thing is I have watched this since 1978 and have not actually been convinced we have seen much get better. Some interoperation, but little that might be called semantic!
David.
David, there's a tension here. You can continue working in a linear fashion, chipping away at the edges. And there's certainly a lot to be said for that. But eventually you come to the point where you're stuck because you actually need to rework the infrastructure. And you can't do that very simple and very basic.
This applies to all IT projects, of course. We can point to projects that failed because they did go the big bang, and also projects that failed because they didn't (and a number that sort of survived, whichever they chose).
For better or worse, NEHTA chose (or had chosen for them) the big rework project. As a community we'll be arguing about whether that was right for many years to come. But it was decided, and now we just have to make the best of it.
Grahame,
Sorry, the 'community' was not asked, was not told what the risks were, and was not told the available options. Neither the Health IT Community nor the Clinical Community was.
Just like the super profits tax, the MRRT, the carbon tax, the Malaysian solution and so much else it was just dropped on an unsuspecting public. My sense is they are pissed.
E-Health had a plan, which could have been done better, and that plan was just filed (by NEHTA etc., which were searching for relevance) - while rubbish is being thrust on us, as far as I am concerned. Hence this blog - which started hopeful and has now become deeply depressed!
David.
Researchers the world over have been wrestling with these issues since computers and their fervent advocates entered the realm of healthcare as far back as 1970 in the northern hemisphere.
Notable early project leaders and sites at the time were St Thomas' Hospital (Barry Barber), King's College Hospital (Professor John Anderson), and University College Hospital (Professor Freddie Flynn) in London. In the USA: Massachusetts Institute of Technology (Octo Barnett), Kaiser Permanente California (Maurice Collen), Vermont (Larry Weed of POMR fame) and Werner V Slack, as well as Boeing, Westinghouse and a few others.
The claims at the time were not much different from the claims of today. Coding systems have matured and in the process developed new problems to be solved. At that time no-one envisaged the capacity, power, speed, sophistication of tools, languages and operating systems we have available to work with today.
Coding systems like SNOP, SNOMED, ICD, READ, etc. have expanded and matured, and new systems have emerged embracing increasingly broader boundaries of the clinical healthcare environment; leading perhaps to greater fragmentation of the task - and the dream - of introducing all-encompassing universal standards for the coding of clinical medicine and healthcare.
To some degree the terminology of clinical medicine is consistently the same the world over, with some semantic differences; yet that too has changed significantly over the decades referred to above, increasing the complexity of the problem at hand for today's dreamers and blue-sky advocates.
Today's dreamers are no different from those of yesteryear. They continue to build on the work of those who came before them. The issues may be broader and even more complex. Will they ever be completely solved? Most likely not.
Research and development will continue in this elusive, complex field for many more decades. That surely is what NEHTA is, or has become: an R&D hive of busy bees.
The only constant is change. And the only option is to deploy what is available today.
Good health to all.
Will such efforts at standardisation stifle innovation?
I really would rather the Govt play a supporting role and let private enterprise sort it out.
Rather than large investments, how about the Govt foregoing tax inputs for companies/entities that meet certain levels of basic infrastructure.
Where the Govt does intervene, let it be where the standardisation is useful (national identifiers), not red tape that crushes.
Govts have a very bad record in implementing large complex programs; let the private sector fund it - that way you know the investments and equations add up (and if not, bad businesses go out of business).
The Govt can also spend in their own sector, the Govt health sector should be a leading industry light. The interactions and touch points between Govt and private/primary sector should be a high priority.
Spending cash this way may not get us to the top of the list but it would build.
The cost to benefit of the PCEHR is questionable - even if it is a success (which I doubt).
A simple view from yes a simple person.
Napolean
Grahame,
Do you know how interoperable these models are with other models internationally? In particular, are there any implications from choosing SNOMED codes from the AU extensions rather than from the international version?
Was/Is RxNorm being considered by NEHTA?
#Larry
There's small questions, and there's large questions. And then there's your question. I'm not sure how you'd judge their interoperability. They're consistent with models from other countries and other contexts - but they're also different, based on consultation within Australia, and based on their purpose, which is a little more ethereal than many. The pathology and radiology archetypes, for instance, are more demanding than v2, so it requires some clean up of the information to represent it properly. But this clean up will pay off later.
I don't know the answer to the SNOMED CT question. I've never reviewed the differences between SNOMED CT and SNOMED CT-AU in depth. Perhaps someone else can comment on that.
#Larry
You asked: "In particular, are there any implications from choosing SNOMED codes from the AU extensions rather than from the international version?"
Beyond the Australian Medicines Terminology, I don't think NEHTA has 'extended' SNOMED CT in any significant way to produce the SNOMED CT-AU 'extension'. Of the 1,167,656 terms in the most recent (May 2011) release, only 109 are AU additions. Most of these are names of Reference Sets, rather than clinical concepts per se.
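To put the figures quoted above in proportion, the AU-specific share of that release works out to well under a hundredth of one per cent:

```python
# Proportion of AU-specific terms in the May 2011 SNOMED CT-AU release,
# using the figures quoted in the comment above.
total_terms = 1_167_656
au_additions = 109

share = au_additions / total_terms * 100
print(f"{share:.4f}% of terms are AU additions")  # roughly 0.0093%
```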
'Funnel web spider' is often cited as a peculiarly Australian term, but this has been in the SNOMED CT international core for a long time. Less well known organisms like 'Irukandji' or 'blue-ringed octopus' or the venom or effects of these are poorly supported in SNOMED CT core, and so also poorly supported in the current Australian 'extension'.
This AU extension is a big con job - all amendments should be made to the one International version. Another example of the result of a major brain-fart by the PhD idiots at NEHTA.