Common Interface Mechanism project

The confusion arises because the Geeks (us lot), after 30 years, have still not properly defined the boundaries between the semantic and structural concepts. Instead of constantly trying to simplify, we end up creating confusion that no one in the real world can understand - and by the real world I include suppliers and clinical informaticians. In other words: intellectual arrogance.

FHIR risks moving from a simple-to-understand set of NoSQL fields covering clinical system content to a mega-bucket of unintelligible scope creep. openEHR should move back towards simplification, either by constraining itself or by dividing into its component parts, each of which needs to be recognisable as separate from the others. Both tribes should stop claiming the intellectual high ground - as, of course, should the SNOMED lot!


David,
I am very happy to take criticism of this sort from people with long and serious implementation experience (indeed, UK GP computing still has a great deal to teach the world), although I would need some details to better understand what, concretely, you think openEHR should do better. [Aside: openEHR has in the last year split its specifications into well-defined components - see here for a picture and the ‘Specifications’ table on this page; whether this is the kind of separation you would like is another question.]

Re: intellectual high ground: I’m not interested in being on high ground. I am very interested in having solid theories underpinning architectural design and consequent implementations. Implementations with no theories will always hit walls in the end, sometimes so large as to require a ground up rebuild / relaunch. We are in desperate need of good theories in health informatics, but there are not many documented. Instead, we have a mass of standards taking the place of theories and architectures. The costs of the NPfIT era attest to this.

In openEHR I think it is reasonable to say that we have made a solid effort to be theoretically as well as practically grounded. The specifications have been created based on deep knowledge from European health informatics projects over the years, our own implementations, as well as the incomparable pool of knowledge from what I might call the ‘British health informatics intelligentsia’ - mostly from the GP computing arena, for which I have great respect.

Even as recently as two years ago, Ian McNicoll and I were analysing the structures of EMIS problem lists and rethinking how the problem list archetype should be designed. Please don’t underestimate our dedication to understanding good principles and design ideas and incorporating them where we can in the openEHR architecture.

There is never any guarantee we have done a good enough job of course, and we can only rely on concrete critique and advice from the rest of the world.

On where we should be trying to go collectively: it has to be a health computing platform framework of some description. We should forget the obsession with (formal) standards clubs as such, and concentrate on standardising elements of this open platform architecture, including sustainable processes for generating shareable, computable models of clinical semantics forever (that includes archetypes, terminology value sets, guidelines e.g. proForma, workflow definitions, and much else). Without a coherent platform approach, I fear we will forever be paying the significant costs of trying to stitch together siloed standards that don’t on their own come anywhere near solving the whole problem, but may like to think they do…

I was thinking more along the human side of things. When I worked in primary/military health care it was simple to get clinical input into a problem and so prove the model worked.

Since I’ve moved into secondary+community care, from my perspective the clinician is now at the centre of an ‘onion’. Yes, you’re correct: the technical architect/senior developer and clinical modeller can understand CDA/HL7v3 and openEHR, but we have several layers to work through.

At each layer someone will want to use a ‘Manchester screwdriver’ [using a hammer to drive a screw]. Another way of saying this is that people will try to bend the requirements towards FTP, CSV, text files or PDF. The non-‘Manchester screwdriver’ version is HTTP/REST/SOAP in place of FTP, XML or JSON in place of CSV, messaging (HL7v2/FHIR), resource APIs (FHIR) or APIs (HL7v3) in place of CSV+text, and data transfer objects (openEHR/CDA) in place of PDF. It’s another set of layers to work through (more onions!).

Another point to consider here: in primary care IT the managers/senior staff will be from a software/health background; in secondary/community care they are from a hardware/SQL/health-admin background.

The experience I’ve had over the last few years is that the model needs to be described in easy-to-understand chunks, and how you present that model changes depending on the audience: e.g. UML sequence diagrams for technical staff, BPMN for project staff and an overview BPMN diagram for all; FHIR terminology (NOT FHIR itself) works for BPMN/process-flow/sequence-flow discussions. Also stick to known software patterns when discussing technical interactions - staff in other sectors (e.g. council) should be able to understand these [quite a few use the Martin Fowler series of books, e.g. Enterprise Integration Patterns].

It’s all about communication and getting through the ‘onion’ layers. I did mention using FHIR terminology but that tends to be natural, even if staff don’t know FHIR - this is the biggest reason why I expect FHIR to move forward quickly.

Hi,

Having built what was to become one of the larger GP system suppliers and remained its Managing Director for 10 years (AAH Meditel - number 2, with 1,500 practices, when Torex took it over in 1999 and I left), I would echo your comments about primary care (military primary care was surprisingly similar).

There are many reasons why UK GP computing was a success, both compared to other care settings in the UK and in a global context - we led the UK and the world in the use of computers at the point of care. Putting aside the obvious prime reason - my own personal brilliance! - some of these related to the specifics of the environment (business and clinical) of UK primary care. We were just lucky they worked in our favour, and they are not easily repeatable elsewhere. However, there are some other factors that we can learn from and apply elsewhere.

At the top of this list was the fact that GPs drove system design. My original system was written by Dr David Marwell (then a GP trainee, now the high priest of SNOMED) with a bit of help from Dr James Read (a Loughborough GP and originator of the Read Codes, which became part of SNOMED). Lots of GPs worked in our development team who have since become famous (or infamous) in health IT: Dr Glyn Hayes, Dr Mike Bainbridge, Dr Peter Johnson, Dr Jon Rogers and many others.

In VAMP (now INPS) we had Dr Alan Dean, succeeded for many years by Dr Mike Robinson.

EMIS had Dr Peter Sowerby and Dr David Stables - “Written by Doctors for Doctors”.

In TPP we have Dr John Parry, and Frank Hester’s wife is a GP (which I bet keeps even Frank focussed on delivering systems that work for GPs).

My problems in AAH Meditel started (some will remember the pain with System 6000, which was eventually to become iSoft Synergy) when I and my medical colleagues started to believe that the techies (who clearly knew lots of clever techie stuff that we didn’t understand) might have a better idea of how to design a GP system. You can add that to the top of my long list of mistakes over the past 35 years in this business.

openEHR provides an abstraction from the technical shit that clinicians find comfortable to work with. Its focus is on content rather than workflow (as is FHIR’s), so I find it interesting that much of what Kevin talks about is workflow. Here I think things like BPMN, Drools, ProForma and openEHR GDL have a place. Exactly what that place is is something I and others are trying to work out in Synapta.org.uk.

Ewan

Ewan Davis - Director - Woodcote Consulting

Voice +44(0)207 148 7170 Mobile +44(0) 7774 272724 USA +1 347 688-8950
Skype ewan.davis (by prior arrangement only)
Read my blog at http://www.woodcote-consulting.com/blog
Follow me on Twitter https://twitter.com/WoodcoteEwan
View my profile on LinkedIn https://www.linkedin.com/pub/ewan-davis/0/742/747
Director, HANDI Health http://www.handihealth.org

Interesting terms :slight_smile: I think I’m following a similar train of thought, but we may also need a system that non-techies and non-clinicians are comfortable with.
I would be interested to hear other people’s views, but the non-tech/clinical side of projects seems to take 95%+ of the time. A lot of this is workflow- and consent-related.
I’m not mentioning BPMN2 as a computer system, just as a tool to convey ideas between parties (so it doesn’t have to be technically correct). I don’t mind what we use to express these ideas - I’d just suggest we use something that most people understand (it isn’t any of the HL7 standards). IHE has used UML in the past and has done some more recent work using BPMN2.

Could I go back a bit and ask a question about openEHR?

Over the last weekend I generated a number of observations about myself (a bit of a selfish post!). I generated weights, blood pressures, heart rate and exercise data - where would they go in openEHR?

It sounds like they would go into the reference model, but it also seems archetypes are the answer?

@mayfield.g.kev all openEHR data are always instances of the reference model (which is an orthodox information model, i.e. what you see expressed in UML in the specs).

The RM is like most fairly generic information models: you can express sensible stuff in it, and also nonsense. Mathematically, the nonsense possibilities are astronomical in number (think of how many ways your kids can build nonsense structures out of Lego bricks). Archetypes and templates enable the sensible possibilities to be modelled.

In a real openEHR implementation, the vital signs you mention will be recorded in an openEHR template, which is an artefact that combines bits and pieces of certain archetypes. You might use a ‘nurses observations’ template that has BP and HR in it, and maybe another exercise related one that has the weight and exercise information in it. But you could use any template that had the right data points - e.g. a GP visit template. The underlying archetypes define the data, in terms of valid RM structures (think: Lego instruction sheets). You can find the first three you mention here:

Depending on what you mean by ‘exercise data’, there may be something but probably not much yet, e.g. if you want a record of times for 2k on the rowing machine, 5k running (with start and finish heart rates maybe?) etc, I don’t think there is an archetype for that yet. Modelling that properly might be done by sports medicine specialists, but clearly some ad hoc archetypes for it could easily be built.

Anyway, assuming the archetypes you wanted all existed, you would create one or more templates using the bits you wanted (i.e. cutting out the bits you don’t want) and deploy those in an openEHR system. You would build app screen forms based fairly closely on the template data set(s). Ian McNicoll (@ian) regularly demonstrates this kind of stuff, and it’s routine in openEHR implementation environments.

When you run your app, fill out the forms (or the data sets might be created by an app that talks to wearable devices - no UI input forms needed), and save the data, you are saving openEHR RM data. It all contains markers indicating which bit of which archetype each piece of data relates to. But it conforms to the RM - i.e. all openEHR data, all around the world, conforms to the one (stable) Reference Model.
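
To make the ‘markers’ point concrete, here is a much-simplified sketch (a Python dict, loosely modelled on openEHR canonical JSON) of what a single committed pulse reading might look like. The at-codes and field selection are illustrative rather than copied from the real pulse archetype:

```python
# Simplified, illustrative fragment: every node carries its RM type ("_type")
# and the archetype node it was validated against ("archetype_node_id").
pulse_observation = {
    "_type": "OBSERVATION",
    "archetype_node_id": "openEHR-EHR-OBSERVATION.pulse.v1",
    "name": {"value": "Pulse/Heart beat"},
    "data": {
        "_type": "HISTORY",
        "archetype_node_id": "at0002",            # illustrative at-code
        "events": [{
            "_type": "POINT_EVENT",
            "archetype_node_id": "at0003",        # illustrative at-code
            "time": {"value": "2016-05-04T09:28:10Z"},
            "data": {
                "_type": "ITEM_LIST",
                "items": [{
                    "_type": "ELEMENT",
                    "archetype_node_id": "at0004",    # illustrative at-code
                    "name": {"value": "Heart rate"},
                    "value": {"_type": "DV_QUANTITY", "magnitude": 176, "units": "/min"},
                }],
            },
        }],
    },
}
```

The point is simply that the archetype markers travel with the data, which is what makes it uniformly queryable later regardless of which app wrote it.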

@mayfield.g.kev That stable and consistent reference model is one reason why the openEHR clinical information structures are so popular with clinicians.

Thanks. Would openEHR accept an Observation (with a SNOMED code) and place it in the correct archetype?
I was thinking of data I’ve recorded automatically in the Strava cycling app and wondering how it would be added to an openEHR system. So I’d have heart rate values and maximum heart rates - I’d want to use an API, not a UI, to add data.

Am I correct in thinking openEHR acts as a recipe or Lego instruction set for presenting information to clinicians? I might be getting the wrong impression, but I would assume I’d use FHIR to add my observations (from Strava/devices) into a database and use openEHR as a set of instructions to get the data (via FHIR) from the database to present it to a clinician. This doesn’t match what I’d previously heard about openEHR, though.

Hi Kevin,

If you send me or point me to the Strava dataset, I can set up an openEHR template for you and quickly take you through how you would get data into openEHR via the Code4Health Ehrscape API.
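
Roughly - and treating the endpoint details, authentication style and flat-path names below as assumptions for illustration (they depend entirely on the actual server and the template that gets built) - posting a flat-format composition through the Ehrscape REST API looks something like this:

```python
import requests

BASE = "https://rest.ehrscape.com/rest/v1"   # assumed Code4Health/Ehrscape sandbox base URL

session = requests.Session()
session.auth = ("your-username", "your-password")   # assumed basic auth; some servers use session tokens

# Flat-format field names depend entirely on the template;
# the 'strava_vitals' keys below are made up purely for illustration.
flat_composition = {
    "ctx/language": "en",
    "ctx/territory": "GB",
    "strava_vitals/pulse_heart_beat/rate|magnitude": 176,
    "strava_vitals/pulse_heart_beat/rate|unit": "/min",
    "strava_vitals/pulse_heart_beat/time": "2016-05-04T09:28:10Z",
}

response = session.post(
    BASE + "/composition",
    params={"ehrId": "an-existing-ehr-id", "templateId": "strava_vitals", "format": "FLAT"},
    json=flat_composition,
)
response.raise_for_status()
print(response.json())   # typically echoes back the id of the committed composition
```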

Ian

Dr Ian McNicoll
mobile +44 (0)775 209 7859
office +44 (0)1536 414994
skype: ianmcnicoll
email: ian@freshehr.com
twitter: @ianmcnicoll

Co-Chair, openEHR Foundation ian.mcnicoll@openehr.org
Director, freshEHR Clinical Informatics Ltd.
Director, HANDIHealth CIC
Hon. Senior Research Associate, CHIME, UCL

openEHR is a representation for the DB. FHIR is essentially a web message format. I guess you could store that if you need to store such things, but it won’t be very queryable, nor does it obey a standard reference model - it would be like storing HL7v2 messages instead of writing into (say) EMIS or INPS in their native formats.

openEHR isn’t specifically to do with presentation (I am intrigued as to where that idea came from), it’s all about back-end EHR data and querying. Of course, having well-defined data sets means that UI forms can be easily designed, and in some cases generated straight from the template.

Thanks Ian. Can I take up the offer at a later date? I’m swamped with work at the moment and I want to read up on openEHR.

I think I’m not getting the openEHR vs FHIR discussion - I don’t think the question is valid; each has a purpose, and the purposes may sound similar but they aren’t the same.

wolandscat…

You wouldn’t store the message in the database. With EMIS systems, any observations within the HL7v2 message would be stored in the Observations table or global (they would all be quite similar).
I would assume an openEHR system receiving HL7v2 or FHIR would behave in the same way - it would store the data in a format it wants, and so we have loosely coupled systems.

For the heart-rate example: my device just sends a stream of observations to my phone, so the raw data is something like:

2016-05-04 09:28:10, 176
2016-05-04 09:28:20, 179
2016-05-04 09:28:30, 180
2016-05-04 09:28:40, 188

Strava would supply the data in a similar format but with a person identifier. If I were to convert it to FHIR/HL7v2, I’d add a SNOMED code to indicate the observation type.

(The process has a number of similarities to lab point-of-care testing - see http://wiki.ihe.net/index.php/Laboratory_Point_Of_Care_Testing. The Strava API can be found here: http://strava.github.io/api/ and the heart rate device here: http://api.wahoofitness.com/interface_w_f_heartrate_data.html)
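
As a rough illustration of that conversion - the SNOMED code and the exact FHIR Observation fields below are my best guess and would need checking against the FHIR spec and a terminology server:

```python
# Map raw (timestamp, beats-per-minute) samples to FHIR Observation resources.
samples = [
    ("2016-05-04T09:28:10Z", 176),
    ("2016-05-04T09:28:20Z", 179),
    ("2016-05-04T09:28:30Z", 180),
    ("2016-05-04T09:28:40Z", 188),
]

def to_fhir_observation(patient_id, timestamp, bpm):
    """Wrap one raw heart-rate sample as a FHIR Observation (illustrative field choices)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://snomed.info/sct",
                "code": "364075005",          # SNOMED CT 'Heart rate' - verify before use
                "display": "Heart rate",
            }]
        },
        "subject": {"reference": "Patient/" + patient_id},
        "effectiveDateTime": timestamp,
        "valueQuantity": {
            "value": bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",
        },
    }

observations = [to_fhir_observation("example-strava-user", t, v) for t, v in samples]
```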

Your heart-rate example is an example of a query-result data structure. The data might be stored like this in some simple app, but in an EMR/EHR system there will be a lot more data - units, context, potentially information on the measuring device, audit, and so on. So the data structure <timepoint, heart_rate/value> is just one possible query result from the underlying data; it’s not a definition of the underlying data - that requires a proper model.

In openEHR, we define these structures very easily as AQL queries (based on archetypes), and we can obtain any of the underlying data like that. No special models are needed to query - you just query what you want.
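
For example, a query over heart-rate samples like the ones above might look something like the AQL below (assuming the published pulse archetype, openEHR-EHR-OBSERVATION.pulse.v1; the at-code paths are illustrative and would need checking against the actual archetype):

```python
# AQL held as a string; $ehrId would be supplied as a query parameter at execution time.
heart_rate_aql = """
SELECT
    o/data[at0002]/events[at0003]/time/value AS sample_time,
    o/data[at0002]/events[at0003]/data[at0001]/items[at0004]/value/magnitude AS heart_rate
FROM EHR e[ehr_id/value = $ehrId]
    CONTAINS COMPOSITION c
    CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.pulse.v1]
"""
```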

Hi Tom

Reading this thread with interest - thanks for getting further clarity on openEHR/FHIR.

Just another query if I may. Would openEHR need a hosted server to manage the AQL queries, or would they be managed on the client? I suspect the former, and if it is a hosted environment, how would this be synced with the data from the supplier feed?

Well there will already be a hosted server of some kind with an openEHR implementation - that’s where the EHR data are. Now, when I say that, I don’t mean it’s necessarily the EHR data of course. In a UK location with say Cerner, or a GP location with one of EMIS, TPP, etc, those systems contain what is considered the ‘EMR’ in the business sense of the word. The openEHR system will have been deployed in these cases as some kind of additional EHR, or ‘EHR extract’, to do some specific job, e.g. support smart querying, some particular apps or whatever. So think of the openEHR server as an ‘EHR cache’ in these kinds of environments.

As with any cache or ‘overlay server’, there is the question of a data bridge, synchronisation, and the event model driving that. That will usually differ according to needs, but one method is for application ‘commit’ events to be trapped for certain apps, and the data set written to both main EMR and openEHR persistence. Another model is for a job (say in an IE, on a timer, or event driven) to run and extract needed data from the EMR to openEHR. It depends heavily on whether the openEHR service is being used for a ‘thin slice’ of data (e.g. just diabetic patient related) or ‘everything’.
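
A bare-bones sketch of that second, timer-driven pattern - the functions below are hypothetical placeholders standing in for whatever supplier API or feed is actually available, not a real interface:

```python
import time
from typing import Iterable, Optional

def extract_changed_records(since: Optional[float]) -> Iterable[dict]:
    """Placeholder for 'ask the source EMR for anything committed since the last run'
    - in reality a supplier API call, an HL7v2 feed, a database view, etc."""
    return []

def write_to_openehr(record: dict) -> None:
    """Placeholder for 'map the record onto a template and commit it as a composition'."""
    print("would commit to openEHR:", record)

def sync_loop(interval_seconds: int = 300) -> None:
    """Timer-driven bridge: poll the EMR feed and push anything new into the openEHR cache."""
    last_run: Optional[float] = None
    while True:
        for record in extract_changed_records(last_run):
            write_to_openehr(record)
        last_run = time.time()
        time.sleep(interval_seconds)
```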

In other environments, openEHR really is the main EMR.

So - in all cases, there will be an openEHR DB and EHR services sitting on top of it. To get an idea of what these might be, see this picture. That happens to be the Ocean picture, but all the suppliers (Marand, DIPS, Code24 etc) have similar core services.

So you will see in there a ‘Query service’. This is AFAIK on the server side for all the openEHR implementations, and for serious systems you have to do that in order to manage load balancing and latency of responses (i.e. making sure some difficult research query doesn’t kill nurses’ ward screens). This query service takes AQL queries and runs them over the physical database, which will look different in each implementation.

Is that a typical system? If so, it explains a lot. FHIR would likely become the common format for your Integration Engine (or, more realistically, most of those systems are likely to provide a FHIR API). Also, my heart rate example wouldn’t need to use openEHR - the data would go to a LIS or EPR system instead.

Is that how Endeavour is expected to be used (it will be part of the Integration Engine, or be the Integration Engine)?

So under the hood, is the openEHR server a normalised database (e.g. SQL Server), and does the AQL query service just map to SQL queries to return the required datasets in the form of JSON/XML/DTOs, which are returned to their destination via FHIR?

Raza
www.razatoosy.com

Within openEHR environments, we don’t need another format for integration; the openEHR models provide all the formats for messaging and documentation.

Creating FHIR-like data can be done in services, and is being done in some environments, particularly in the UK. But FHIR only covers about 5% of the clinical content defined in openEHR. So there’s no simple trick to turn everything in openEHR into FHIR. Manual mapping is required, and that takes time. FHIR resources are also a moving target at the moment. So while putting up a simple heart-rate or BP is easy enough, serving the 5,000 - 10,000 data points needed in a real system using FHIR is not possible today.

Your heart rate data might be in the LIS, more likely in the EPR. The problem is that it will be in a different form in every system you want to interrogate. There are hundreds of these systems in any hospital. For heart rate, maybe it’s easy to figure out which one to talk to, but for a lot of other data it is not. And you are unlikely to be able to obtain a published model of what the clinical data you want actually is.

What openEHR is aimed at doing is creating very flexible models of this content, making those available so that there is a standard, public, model-based representation of clinical content. The various Clinical model repositories around the world have something like 8,000 - 10,000 clinical data points modelled.

From those, various downstream formats can be generated, including XSD, JSON, Java APIs, C# APIs, HTML and so on. FHIR is more complicated because it imposes its own clinical content resources, so it tends to require a manual mapping.

Hi Raza,

Not necessarily!! There is definitely some kind of database in there, but it can be SQL, NoSQL, or some sort of O-R Hibernate-type layer. There are examples of all of these out there, including efforts based on MUMPS, MongoDB and XML databases, but the industry-leading examples tend to use non-normalised SQL tables with some kind of binary or native structured data handling.

See https://github.com/ethercis for an example. This is one of the back-ends being used by the Ripple project https://github.com/RippleOSI, using the Code4Health Ehrscape API (a simple RESTful wrapper around the common openEHR service API). (AQL support is in the process of development.)

You are correct that AQL is ultimately mapped to the native query method of whatever physical db is used.

The clever bit is that adding or adapting new clinical content does not require any technical re-factoring - no new technical information modelling, no new db re-design. Upload the archetype definitions (as templates) and you are ready to go right away.

Ian

Yep. Ian beat me to it, but the varieties of physical storage go a long way outside of a classical 3NF database schema approach, including blobs, path-based, and so on. Some are implemented in an RDBMS, others not.