Common Interface Mechanism project

Ian, Tom

Yep, bigger scope than I thought, but I understand it conceptually. From what
I can tell it has some aspects of an ORM like Entity Framework or
NHibernate for managing the back end, i.e. you don't worry about the
implementation of the back-office/persistent data, but there is a
standardised way of extracting that data in the middle, both for submitting
queries and templates and for their return. FHIR is just a standard way of
parsing the data.

A lot of the work would be in maintaining a persistent, synchronised back end
from several sources, in whatever way you need to store the data. I assume
the hooks to link into these databases so they can be queried with AQL are in place.

Two more queries, then I'll leave you:

  1. Going back to the original thread, does CIM implement openEHR, or are
    there thoughts of doing this? From what I can gather CIM takes data from
    the source and uses FHIR to standardise, or maybe it does have openEHR but I
    can't see it mentioned on the web page.

  2. Are there any examples of openEHR in .net that I can play around with,
    that you know of?

Thanks.

Thomas,

I don’t agree with your assertion that FHIR and openEHR are in competition or fighting over the same problem domain.

FHIR is providing a resource API and a canonical data model, it’s not a clinical model.

I'm beginning to see where I would use FHIR with openEHR and where I wouldn't, conversely where I would or wouldn't be inclined to use openEHR, and also where it would be useful to use both.

I’d go with the best tool for the job.

p.s. The 5% point is very misleading - I've not found a major dataset FHIR can't cover within the NHS, and that's stable FHIR, not draft.

The dissonance comes where we expose openEHR (or indeed any clinical) data that have a native published format. To put them into FHIR requires mapping to FHIR resources, e.g. Observation, which creates a certain amount of complexity. For easy data it's easy; for more complex data it's harder. Converting, say, all the openEHR vital signs to FHIR (probably 50+ data points in various structures) requires a reasonable amount of work.
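To make that mapping exercise concrete, here is a small sketch (in javascript, the UI stack mentioned elsewhere in the thread) of turning one flattened openEHR SpO2 data point into a FHIR Observation. The input shape and the LOINC binding are my own illustrative assumptions, not a published openEHR-to-FHIR mapping - multiply this by 50+ data points and the work adds up.

```javascript
// Hypothetical sketch: one flattened openEHR pulse-oximetry data point
// mapped to a FHIR Observation. The input shape and LOINC binding are
// illustrative assumptions, not taken from any published mapping.
function spO2ToFhirObservation(openEhrPoint) {
  return {
    resourceType: "Observation",
    status: "final",
    code: {
      coding: [{
        system: "http://loinc.org",
        code: "59408-5", // assumed LOINC binding for SpO2 by pulse oximetry
        display: "Oxygen saturation by pulse oximetry"
      }]
    },
    effectiveDateTime: openEhrPoint.time,
    valueQuantity: {
      value: openEhrPoint.spO2,
      unit: "%",
      system: "http://unitsofmeasure.org",
      code: "%"
    }
  };
}

const obs = spO2ToFhirObservation({ time: "2014-03-03T07:28:33.000+01:00", spO2: 97.5 });
console.log(obs.valueQuantity.value); // 97.5
```

Each openEHR archetype needs its own hand-written function like this, chosen case by case - that is the manual mapping cost being discussed.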

In an openEHR environment, we publish the data in its native form, i.e. directly based on openEHR Reference Model. So just pointing out the obvious here.

In terms of coverage, there are about 8,000 clinical data points in the openEHR.org CKM, produced by hundreds of clinical people. I have not counted FHIR's recently, but I think it is still in the low hundreds. If it were 400, that would be a 20x difference, i.e. about 5% coverage.

I have no intention to mislead, only to try to find ways to get them working together. That means looking objectively at the facts of technologies, not hype.

Hi Raza,

If you want to play around with openEHR, I can easily set up a Code4Health platform domain for you. This is being gradually built out, but for now exposes an openEHR service layer as a Restful interface, which you should be able to consume/write to easily enough from .net.

This guide to using Postman with the C4H Platform should give you enough info to re-work it for .net.

Although the specifications needed to develop a back-end openEHR server are freely available, I also recommend that people get familiar with the technology first of all by consuming an existing service. Indeed the philosophy is that the vast majority of 3rd party developers will make use of an existing service, rather than try to develop their own full stack, which is doable but not easy engineering. It needs to be seen in the same light as "why would I want to build my own new SQL / NoSQL engine?" - definitely required, but most devs will use an existing product.

Just ping me privately and I can get your domain setup.

Ian

Thanks Ian, will be in touch.

HAPI FHIR might be useful: http://jamesagnew.github.io/hapi-fhir/doc_jpa.html. It stores the FHIR resources as JSON, so it's suitable for NoSQL; you can recode the JPA/Hibernate layer to store in traditional SQL tables. [We're evaluating HAPI as an API to use with our clinical portal (javascript). We're changing our PAS system to an EPR and we need to change existing APIs over to the EPR, and FHIR/HAPI offers very similar APIs (the EPR doesn't have a FHIR API in the UK, just the US). We will feed HAPI via HL7v2 messages.]
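To illustrate the "stores the FHIR as JSON" point: conceptually the server just persists each resource verbatim as a document keyed by type and id, which is why a NoSQL backend is such a natural fit. HAPI's real JPA layer is Java; this javascript sketch shows only the concept, with all names invented for illustration.

```javascript
// Concept sketch of a document-style FHIR store: the resource is kept
// verbatim as JSON, keyed by "ResourceType/id". All names are illustrative.
const store = new Map();

function put(resource) {
  // Serialise and store the whole resource untouched.
  store.set(`${resource.resourceType}/${resource.id}`, JSON.stringify(resource));
}

function get(type, id) {
  const doc = store.get(`${type}/${id}`);
  return doc ? JSON.parse(doc) : null;
}

put({ resourceType: "Patient", id: "123", name: [{ family: "Smith" }] });
console.log(get("Patient", "123").name[0].family); // "Smith"
```

Swapping the Map for a document database (or, as above, recoding the persistence layer to relational tables) changes nothing for the API consumer.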

I've used similar Java open source (apache* and Spring) to put FHIR front ends on a Lab IS, an ED system and Document Management Systems. The main reason for using FHIR in these cases is that it normally matches the underlying database structure or API - it's a simple mapping exercise and it is what developers expect to see. FHIR tends to match the database structure (and API) because both FHIR and PAS/LIS/EPR databases have been strongly influenced by HL7v2. So in a way FHIR is not new and not immature; it's a combination of a number of trends.

[Note: I'm a bit unsure about a number of the newer resources (1) in FHIR - do we (/UK) have the experience to make these decisions? Is openEHR a better solution? Is the Pathway/CarePlan/workflow just going to work for the US/Canada?]

CIM probably used FHIR for the same reasons. (As an ex-EMIS developer I was probably conditioned to make the same decision :slight_smile: ). They also use Java, Apache Camel, Spring, etc. [As does Ripple].

Would it not be easier for openEHR to support retrieving data via FHIR, so develop once and reuse many times? It would help avoiding all the legacy interface mechanisms to source systems.

(1) I'm classing resources with a zero next to them as new: Resourcelist - FHIR v5.0.0

Thanks Kev

I'll look this up, thanks for the pointers. The problem I face is that most
open source technology isn't .net, so it's a steep learning curve to jump
into a new language and work on a full stack. Being a GP, I'd be stupid to
attempt anything more than to:

  • be aware of the technology
  • be able to handle calls to and from a Restful service and inject IP into
    the UI
  • help out with mapping of the middle stack
  • help with any system analysis

Raza
www.razatoosy.com

Understand the learning curve.

We use the HAPI FHIR JPA Server to get around some of that; it's simple to install (no Java knowledge needed) and our UI developers (javascript) are insulated from the Java/open-source layer.

We do similar with our FHIR facades: UI developers work with the REST API only (FHIR), and the (os/java) servers work with the other systems (SQL, API (REST or SOAP), CSV, etc). The idea is to keep the API simple and move complexity away from the UI.
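The facade idea boils down to a single translation layer: the UI sees only a simple FHIR-style REST path, and the facade turns it into whatever the source system actually needs. A minimal javascript sketch, where the path, table and column names are all hypothetical:

```javascript
// Hypothetical facade routing: translate a simple FHIR-style request from
// the UI into a backend-specific lookup. The UI never sees the backend.
function routeFhirRequest(path) {
  // e.g. "/Patient?identifier=1234" arriving from the UI layer
  const [resource, query] = path.replace(/^\//, "").split("?");
  const params = new URLSearchParams(query);
  if (resource === "Patient" && params.has("identifier")) {
    // The facade hides the backend detail: could be SQL, SOAP, CSV, HL7v2...
    return {
      backend: "pas-sql",
      sql: "SELECT * FROM patients WHERE nhs_number = ?",
      args: [params.get("identifier")]
    };
  }
  throw new Error("Unsupported request: " + path);
}

console.log(routeFhirRequest("/Patient?identifier=1234").args[0]); // "1234"
```

Swapping the PAS for the EPR later means changing only this routing layer; the UI contract stays fixed.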

Well, openEHR already has a standard way of exposing resources - as native openEHR RM data. You can obtain them in a number of ways:

  • via a standard EHR service interface (e.g. Java; C#)
  • the above includes AQL Query results, which are essentially tables of openEHR RM objects
  • via EhrScape.com (API Explorer > Electronic Health Record APIs > /view > choose an example and do ‘Try it out’)

If you do the last of these, and choose SpO2, you’ll see the following result. The URIs in this interface were generated via a standard algorithm from the archetype paths.

[
  {
    "time": "2014-03-03T07:28:33.000+01:00",
    "spO2": 97.5
  },
  {
    "time": "2014-02-27T14:37:43.000+01:00",
    "spO2": 96.5
  },
  {
    "time": "2014-02-27T01:35:11.000+01:00",
    "spO2": 99.9
  },
  {
    "time": "2014-02-25T21:17:11.000+01:00",
    "spO2": 97.3
  },
  {
    "time": "2014-02-06T08:26:28.000+01:00",
    "spO2": 98.5
  },
  {
    "time": "2014-01-22T17:30:10.000+01:00",
    "spO2": 96.7
  }
]
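For what it's worth, once you have that payload, consuming it from any REST-capable stack (.net included) is just ordinary JSON handling. A javascript sketch using the example result above, with the kind of derived values a UI might compute:

```javascript
// The readings array is the example /view result quoted above.
const readings = [
  { time: "2014-03-03T07:28:33.000+01:00", spO2: 97.5 },
  { time: "2014-02-27T14:37:43.000+01:00", spO2: 96.5 },
  { time: "2014-02-27T01:35:11.000+01:00", spO2: 99.9 },
  { time: "2014-02-25T21:17:11.000+01:00", spO2: 97.3 },
  { time: "2014-02-06T08:26:28.000+01:00", spO2: 98.5 },
  { time: "2014-01-22T17:30:10.000+01:00", spO2: 96.7 }
];

// Most recent reading and a simple mean, as a UI layer might compute them.
const latest = readings
  .slice()
  .sort((a, b) => new Date(b.time) - new Date(a.time))[0];
const mean = readings.reduce((s, r) => s + r.spO2, 0) / readings.length;

console.log(latest.spO2);        // 97.5
console.log(mean.toFixed(1));    // "97.7"
```

The point is that the client works directly with the published openEHR view shape; no intermediate message format is involved.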

All I am saying with respect to FHIR is that now we, like everyone apparently, must map to a new set of message content. Each of these mappings will necessarily be manual (that’s the same for everyone), and therefore create work that could have been avoided.

In my view, the missed opportunity here is that we already have large libraries of openly published content models (and as I said, 20 times the size of FHIR’s resources), as well as the benefit of all openEHR data obeying a single schema globally. There are actually two such libraries in the UK - HSCIC and clinicalmodels.org.uk, as well as openEHR.org; these already contain significant UK-developed content. Using these libraries as a basis for data access as per the EhrScape approach above means no translation into any foreign formats, or data conversion work.

The openEHR AQL query language is also mature and widely used, enabling access to ad hoc collections of data.

So it seems we are back to mapping to messages again. Of course we’ll all do it and declare success somewhere along the line. But I do wonder when people will realise the message mentality has far outlived its usefulness, and that we should be working in a single-source, model-based paradigm.

Yes, we've applied messaging to the wrong problems.

They (HL7v2/IHE PDQ queries) don't read well, so they're not easy to follow and understand; API replacements have had similar issues (e.g. IHE PDQv3/HL7v3/ITK Spine + SOAP), mostly around the bloated XML needed for simple lookups.

REST is a far better solution.

Well, REST is a mechanism for obtaining content. It's technically better in various ways than SOAP or messages (who ever thought XML was a sensible idea for a high-performance network interface?! But let's not go there for now…), but the question remains: should we be imposing specific manually built content, i.e. payload, in the REST interface, when the source system already has its data completely and comprehensively modelled? This is still a message mentality, where the wire protocol is imposing the content definition.

My proposal to avoid this is to make FHIR open, rather than a closed (and rather limited) content silo. It would mean we all agree to the following from FHIR:

  • data types
  • identifiers
  • other infrastructure types
  • terminology referencing approach (no OIDs!)
  • profiling, assuming it works properly (it is a copy of some elements of the archetype formalism)

This can be understood as an open FHIR protocol platform.

Then, clinical content should be modelled in distinct partitions containing resources specific to the model base in use - it may be openEHR, VA FHIM, 13606, CDISC, CIMI, and so on.

It’s important to remember that FHIR is just one concrete technology in use in clinical systems. Don’t forget - object-level APIs, message XSDs, JSON UI proto-forms, display HTML, interactive HTML, PDF and much more.

The key thing is to have the model libraries upstream of all of that.

I’ve been talking to Grahame recently about this; it seems that HL7 is interested although wary, as one might expect.

If one accepts the principle that nothing should be stored that cannot be communicated, and that all concepts should be present in a taxonomy of some kind, then the only question is how things will be communicated and what taxonomies will be used.

Both Snomed-CT and FHIR have momentum in the UK. Thus if one accepts that Snomed-CT IS the taxonomy and FHIR IS the transport mechanism for transactional data items in the UK, then openEHR-UK must adapt.

I think that means making sure that archetype attributes are Snomed concepts with Snomed relationships, and that all archetypes and templates have FHIR transforms built in at the authoring and review stage, to validate that they can be communicated.

I accept that this means that all of the compromise comes from the openEHR community which of course is irksome but I see no alternative if it is going to survive in the UK.

I applaud attempts to build bridges with, for example, open FHIR, but suspect it's not enough.
I remember a similar debate in 2002, when openEHR lost out to HL7v3 and Snomed. I urge a compromise this time, to avoid a repeat of history.

Hi David,

it’s not only a question of ‘how things get communicated’, and in any case, there are many ways to communicate data. Apart from ‘communication’, there is visualisation, data entry, programming APIs, and so on - you know better than anyone. Most of these require design-time models or schemas to be realised at runtime. That’s the simple argument for single-source modelling, where the semantics are separated out from each of the many and changing concrete implementation technologies, so that they can be machine mapped into those same technologies as we routinely do today in openEHR, and as is routinely done outside of e-health for decades (at least since RPC was invented).

There is no question of ‘not adapting’ - of course EHR technologies like openEHR need to work with messaging and terminology. But while SNOMED CT may be ‘the taxonomy’ for the UK, it is not for the world, and openEHR therefore was designed from the outset to enable SNOMED CT and all other terminologies and ontologies. This is what enables an openEHR archetype to be used in 150 countries rather than only IHTSDO member countries (most of which are not close to fully implementing SNOMED CT).

If you look at the DNACPR archetype in clinicalmodels.org.uk, you’ll see SNOMED CT codes right there. If you look at other archetypes, you’ll see LOINC codes, ICD codes, ICPC2 codes and many others. It just depends on what is available and what is in use.

With respect to FHIR, as I stated earlier, it’s easy to see how to map openEHR content to FHIR - it’s the same as it is for anyone else: each mapping is a case by case situation, and creates its own work. As I have pointed out, openEHR currently has around 25-30 times the number of formally defined, reusable clinical models as FHIR, so manual mappings have to be found from all of those over time to FHIR resources. This is the same problem that anyone faces with mapping to a communications standard that imposes its own idea of content. It’s completely doable, it’s just totally inefficient. The method I have been discussing with some in HL7 is a far more scalable approach that solves the problem properly.

With respect to HL7v3 in the UK, I would not say ‘openEHR lost out…’. Common sense, UK GP vendors and the British taxpayer lost out. openEHR as such was not even in consideration to my knowledge. All some of us argued was for some basic modelling, software engineering and IT principles to be observed. Instead we were all told we were wrong, and that the silver bullet had arrived. We all know how well that went. Today the talk is all of the new silver bullet.

In sum, openEHR is just there to provide a useful semantic-models based EHR platform. It works with SNOMED CT and other terminologies, and works with FHIR, just like anything else does. However, I believe we should be interested in leveraging semantic models and terminologies to drive all aspects of clinical computing. That’s where real scalability and sustainability can come from.

Hi David,

Interesting :slight_smile:

Whilst that is, of course true as an aspiration, I don’t see it as a reality for many years, particularly outside of the very constrained GP world. Requirements to build data models for use inside systems will continue to far outstrip that capacity / need for them to be communicated. Having said that, I agree that in principle when a data model is created we want to maximise shareability.

I agree that both FHIR and SNOMED-CT have momentum and that any models created by openEHR should take those into account and I would expect a number of archetypes and templates to have both SNOMED-CT bindings and to be easily transformed to/from FHIR profiles.

However I don’t agree that this should be routine, that all archetype attributes should be SNOMED-CT concepts + relationships or that all openEHR concepts should immediately be expressed as FHIR profiles.

We have been down the road of trying to 'do' clinical models with terminology and relationships already - remember the Logical Record Architecture project? SNOMED-CT is a great asset when used within its comfort zone, but one of the things that has really delayed progress in this space, IMO, has been an over-expectation of how much will be handled by terminology. One of the strengths of FHIR has actually been to push back on this kind of blind commitment to terminology, towards a much more nuanced usage.

I am tempted to say that ‘we know how well that worked out’, and indeed ‘avoid a repeat of history’. :wink:

Just to be clear, I am very happy to support and promote the uptake of both SNOMED-CT and FHIR in the UK. I just want to make sure that we do so with an understanding that…

Modelling clinical data, particularly in secondary situations, registries, feral systems etc is way more complex than in primary care.

SNOMED-CT excels at representing bio-medical concepts and their relationships. It is however a much poorer fit for the documentation of care, which is as much about clinical context, methodology, circumstances, timing etc. Where SNOMED-CT concepts exist they can be helpfully bound to documentation of care concepts but there are very many gaps, and the binding activity itself is hugely resource-intensive, both in working out the correct bindings or in filling the gaps by asking TC for new terms.

FHIR is a very welcome development which will predominate in the market for the next few years. Where it makes sense for openEHR-UK (as currently resourced) to align with FHIR resources and UK profiles, that will happen, but I am not convinced that this automatically means that openEHR-UK should commit to creating FHIR profiles for every modelling artefact. If there is demand, I would expect these to emerge quickly.

If, of course, that was part of a broader resourced and supported strategy in the UK, e.g for PRSB to use openEHR to underpin clinical modelling, and for that then to be used to understand the sensible/optimal use of SNOMED-CT and from which to derive appropriate FHIR profiles, then that sounds like a very sensible compromise position, which plays to the strengths of all 3 formalisms.

Ian

Both

Valid arguments and well elaborated.

Which comes first?

  1. Communication and sharing of the data already being recorded, in a form that is as consistent as we can get it, via the use of a common message structure and common taxonomy mapped from legacy data.

  2. The construction of new clinical structures for use in clinical systems

The two are not of course exclusive but perhaps that is where I am coming from, attempting to converge both when I should accept the fact that the second will evolve independently of the first, which of course it always has.

In a sense that makes life easier as I am focused on the first because I believe that there is a set of reasonable solutions to that problem and that gets us to the starting point for the next generation of IT systems, which are likely to be different from that which we can predict just now anyway.

David,

the only flaw in item 1 is that the ‘common message structures’ (FHIR resources and profiles) are all ‘new’ with respect to any real system such as EMIS or TPP or Cerner - they are imposed content that you can’t escape, and that don’t look much like the real system data except in a very generic way.

So given that new models are being created, doesn’t it make sense to adopt a strategy whereby:

a) we use models that can be re-used across entirely different technologies, of which FHIR is just one?
b) the models are created by clinical people (FHIR models are by their own admission, IT-developer oriented)?
c) we don’t impose any payload content in the REST layer, we just use the models already developed, appropriately serialised?
d) we start with an already available, extensive library of models?

Point c) is just normal comms engineering. A protocol layer should never impose content. If it does, it just competes with the real content and causes extra work. What’s the sense in that?

Hi David,

I do not see either option as having priority, or as being in conflict.

We do, of course, have a pressing need to get systems talking to each other, and certainly in and around GP systems we do have a pretty good idea of exposable legacy system content. Better still, in the UK, through the work of GP2GP, it is pretty mature and can take advantage of substantial terminology use.

For the immediate future, I think we can easily have the best of both worlds. Using the GP Connect profiles as an example, I can see lots of good work but, particularly around medications, lots of gaps and difficulties. Many of these could be resolved by referring, either to the UK medication models developed with PRSB (and based on GP2GP) or indeed the international medication archetypes, which fold in most of the UK ideas. Looking at the other profiles, I think we are pretty close but need work e.g. clarifying the SNOMED bindings for causative agent, and making sure this is aligned to PRSB.

So I think we can forge ahead and meet your aims, but at the same time couch this work and the much bigger tranche of work coming up in a way that helps us as clinicians and clinical informaticians directly define and elaborate our content requirements as they emerge, whether for use within systems or moving data between systems, where these require adaptations to existing content models or completely new content models.

INTERopen and GP Connect are great initiatives, but let's do them right. By bringing together a sensible blend of openEHR, FHIR and SNOMED-CT, I believe we can do that without compromising agility or speed of delivery, and while keeping in line with PRSB aims.

Tom

FHIR resource profiles match extremely well to GP (legacy) system content, which is very simplistic and generic. Being relatively simple to implement as well helps. One issue will come with Snomed-CT post-coordination, but that's easily tackled down the line, as no one is doing it currently.

Ian,

I'm alarmed by your comment on the FHIR medication profiles and the gaps and difficulties. Have you checked both the HSCIC and Endeavour profiles, and what is the problem?

Hi David,

I remain a bit confused by the GP Connect profiles, which seem like only a
partial representation of GP meds, and have resources for administration
and dispense that don’t seem like a good fit for GP (certainly core). I
guess I am struggling to understand the scope of this work.

Some of the naming is also a bit confusing, e.g. Review date under
Dispensing, and including structured timing seems premature if, as we know,
all GP dosage is for now text-only. I'm not meaning to be critical, as I am
aware that 'GP Connect' is at an early stage. I just want to make sure
that it draws upon the considerable expertise that already exists out there
in terms of understanding current GP system data around meds.

I have just had a quick look at the Endeavour meds profiles (I have not done
so for a while) and feel much more encouraged and on comfortable ground :slight_smile:
Authorisation, rather than Admin, dispensing. Extra fields for patient
guidance, endorsements etc.

I will have a more detailed look now and see how well these marry up with
the PRSB meds archetypes and international meds archetypes.

Ian

Dr Ian McNicoll
mobile +44 (0)775 209 7859
office +44 (0)1536 414994
skype: ianmcnicoll
email: ian@freshehr.com
twitter: @ianmcnicoll

Co-Chair, openEHR Foundation ian.mcnicoll@openehr.org
Director, freshEHR Clinical Informatics Ltd.
Director, HANDIHealth CIC
Hon. Senior Research Associate, CHIME, UCL

Ian

Yes, you are right. I have checked and it seems there is a problem with some of the GP Connect profiles, as it would appear they have started from scratch. I have now offered to go through a reconciliation process with HSCIC between the HSCIC and the Endeavour profiles, as the Endeavour profiles fully map to EMIS and SystmOne, and therefore to GP2GP.

It will be interesting to see which way the suppliers go. GP suppliers are working with HSCIC, and 30 suppliers are working with INTERopen (including the GP suppliers), so I suspect that more collaborative working will be essential.