Really this is a set of questions (so please bear with me), and I think the answer is a set of open source infrastructure projects. Also, is this the right place for questions like this?
A problem I have at the moment is updating a list of GPs and Practices on a Document Management System. The normal solution is to provide updates via traditional HL7 but our PAS system doesn’t support this.
So what I’m planning to do is a system where:
* A member of staff (or computer) downloads the relevant files
* Unzips the files and loads the data into a database (probably the HAPI HL7 FHIR Server I mentioned in another thread)
* When this is done again I would be able to detect changes in GPs or Practices and possibly generate traditional HL7 messages.
So questions:
Can I get this list elsewhere via an API? [SDS does provide an API (LDAP) for these but I need a list]
Could I just process a list of changes from somewhere?
If not, would the solution above be useful for others? (The HAPI FHIR Server would provide an API, which most modern systems would prefer over file uploads and traditional HL7.)
Data can be processed very quickly now (millions of records in seconds). Update-only runs should be avoided (look at Twitter: you can create and delete but not update - KISS).
Think new … use BI tech.
Moving files around manually is as out of date as spreadsheets are (the less duplication of data and files moving around the system the better).
> A problem I have at the moment is updating a list of GPs and Practices on a Document Management System. The normal solution is to provide updates via traditional HL7 but our PAS system doesn’t support this.
>
> So what I’m planning to do is a system where:
> * A member of staff (or computer) downloads the relevant files either …
I think you’ve misread the requirement. I have limited/no control over the source data or how data is added to 3rd party systems other than messaging.
How would you avoid the ETL phase into your BI solution? I would love to avoid it; my line of work is normally measured in milliseconds, so yes, months of delay is poor.
I’m not an NHS Information/Data Analyst. So if you were thinking I was planning something using DTS or a SQL Server Integration Services solution, I’m not.
So there are many techniques for downloading the GP Data.
Using a mechanism where you download the file, load it into a database and compare it against an extract from your EDMS is a great idea and could easily be done. Broken down, it falls into three steps:
1. Downloading the data.
2. Comparing the data and creating an exception dataset.
3. Creating and updating the data in your destination system based on the exception dataset.
Downloading the data can be done several ways:
* Batch process (PowerShell or Linux script) - HTTP GET, unzip, then parse and insert into a holding table.
* Integration engine - HTTP GET from within your integration engine, unzip, then process the data into a holding table.
* Custom-built API that allows you to download and distribute national files to a specified holding table.
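The batch-process option above could be sketched like this, assuming Python is acceptable glue. The download URL, CSV column layout and table name are all placeholders for whatever the real TRUD/ODS release uses (the real download needs a TRUD account and key):

```python
import csv
import io
import sqlite3
import urllib.request
import zipfile

# Hypothetical URL - the real TRUD release endpoint requires an account
# and an API key, so treat this purely as a placeholder.
TRUD_URL = "https://example.org/ODS/egpcur.zip"

def download_zip(url):
    """HTTP GET the zipped release file and return its raw bytes."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def load_holding_table(conn, zip_bytes, member):
    """Unzip one CSV member and insert its rows into a holding table.

    The column layout is an assumption (first column a code, second a
    name) - adjust to the real ODS file specification.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS gp_holding (code TEXT PRIMARY KEY, name TEXT)"
    )
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        with zf.open(member) as fh:
            reader = csv.reader(io.TextIOWrapper(fh, encoding="utf-8"))
            rows = [(r[0], r[1]) for r in reader]
    conn.executemany("INSERT OR REPLACE INTO gp_holding VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)
```

SQLite stands in for the holding database; the same shape works against SQL Server or anything else with a driver.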
Comparing the data is really a task for whichever database engine you employ; you would need the download from TRUD and an extract from your EDMS. Once you have both, use your database engine to do the comparison and produce the exceptions list (a series of records for update, insert and delete).
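The comparison step could be sketched in SQL like this (table and column names — `gp_holding`, `edms_gp`, `code`, `name` — are illustrative; any database engine’s set operations will do the same job):

```python
import sqlite3

def exception_dataset(conn):
    """Compare the national holding table against an EDMS extract and
    return the exception dataset: codes to insert, update and delete.

    Table and column names are illustrative - substitute whatever your
    download and your EDMS extract actually use.
    """
    inserts = [r[0] for r in conn.execute(
        "SELECT code FROM gp_holding WHERE code NOT IN (SELECT code FROM edms_gp)")]
    deletes = [r[0] for r in conn.execute(
        "SELECT code FROM edms_gp WHERE code NOT IN (SELECT code FROM gp_holding)")]
    updates = [r[0] for r in conn.execute(
        "SELECT h.code FROM gp_holding h JOIN edms_gp e ON e.code = h.code "
        "WHERE e.name <> h.name")]
    return {"insert": inserts, "update": updates, "delete": deletes}
```

In practice the update comparison would cover every field you care about, not just the name.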
Once you have done this you can get the data into your system in the following ways:
* Use your integration engine to process the exceptions list into HL7 and send it to your EDMS.
* Use your SQL engine to dump structured files (HL7 or otherwise) for your EDMS to pick up (it’s likely that your EDMS is using Mirth, so you could get your supplier to do this).
* Custom-built application to parse the data, using NHAPI to generate HL7.
If your HL7 interface is not up to the task, even though it says so in the specification, you could always use products like Blue Prism to act as a robotic interface into the application, directly updating your GPs (slow but effective).
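For illustration, here is one row of the exceptions list rendered as a skeletal HL7 v2 MFN^M02 (practitioner master file) message. This is a hand-rolled sketch rather than the NHAPI route mentioned above; the event codes (MAD = add, MUP = update, MDL = delete), sender/receiver names and field positions are assumptions to verify against the HL7 Master Files chapter and your EDMS vendor’s spec:

```python
import datetime

def practitioner_to_mfn(gp_code, family, given, event="MAD"):
    """Render one exception-list row as a skeletal HL7 v2 MFN^M02 message.

    Segment and field choices are a sketch - in production you would
    build this with a library such as NHAPI or HAPI and follow the
    receiving system's interface specification.
    """
    ts = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: sending/receiving applications are placeholders
        "MSH|^~\\&|PAS|TRUST|EDMS|TRUST|" + ts + "||MFN^M02|MSG0001|P|2.4",
        # MFI: PRA = practitioner master file, UPD = update event
        "MFI|PRA||UPD",
        # MFE: record-level event code plus the record's key (the GP code)
        "MFE|" + event + "|||" + gp_code,
        # STF: staff identification - code and name
        "STF|" + gp_code + "||" + family + "^" + given,
    ]
    return "\r".join(segments)
```

The same function driven by the insert/update/delete lists gives you a file (or feed) your integration engine can forward.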
Just my thoughts but there are so many ways that you can do this.
It’s the first bit; I have done it several times, as I’m sure many of us have (which makes me think it could be useful to share the code). How the last bit works will vary between systems and trusts.
None of the methods of getting the data in the first place are ideal. You can use LDAP queries to get the data, but I’m not aware of any systems that request data like that except for core NHS systems. (I keep meaning to write a FHIR demo which does this - don’t have the time.)
I hear you. It would be great if we could have a FHIR interface for this with OAuth2, or even an OData interface for downloading that we could wrap with FHIR. Hmm, we will need to do something like this for one of the projects we are working on, so it may be worth putting something together. We code in C# MVC 5 WebAPI. If we created something over the next 3 months and put it on a TFS repository, would that be of interest?
This is me carrying on from that… I came away thinking we need to do more to kick off FHIR in the NHS and open source community - once started it will run by itself, I’m pretty sure of it.
I don’t know why it’s not taken off here like it has in Europe and the rest of the world.
One thing you said is definitely true … hackday.
Re: OAuth2 - I understand a solution is being purchased for Code4Health; it would be sensible to get a few (FHIR) projects that use it.
As you may know, I’m already using Spring Security to secure APIs. I didn’t see any problems moving that to point to OpusVL libraries. I don’t think I’d have been able to use those libraries without Spring or another Java product (OpusVL is PHP or something like that).
I was told OpusVL could use any LDAP database for authentication, or other sources. Not sure about authorisation - probably LDAP again, or within the OpusVL database.
The Code4Health platform requirement was primarily about developing an OAuth2/OpenID approach to allow Code4Health ‘users’ to connect to various 3rd party APIs on the C4H platform, e.g. the Ehrscape back-end, terminology services, NHS Choices etc. So this was mostly about single sign-on for developers.
However it was also recognised that this would have some value in allowing developers to get familiar with OAuth2/OpenID in the context of SMART-on-FHIR, which will be part of the Platform etc, but still in the context of a demonstrator environment, not for production use, or for allowing SSO of real clinicians/patients.
However, I understand that some real-world projects are now at a stage where OAuth2/OpenID is seen as a viable approach, and I think there are some discussions about how we align the limited C4H ‘dev space’ requirements with emerging real-world requirements. It would certainly make sense to align this work as much as possible.
So is this OpusVL work off, or still under discussion? If not, do we have a timescale?
I’m not after this for login security/authentication (I can use OAuth2+Spring+(LDAP, Kerberos or OpenID) instead) but for role-based access management (leading to SMART across the enterprise).
No definitely not off. I am just not clear if the work that OpusVL have agreed to do for the C4H Platform is suitable for real-world deployment, in terms of clinical providers/RBAC etc. That was not part of the original remit. I am not saying it is not suitable, just not clear of the suitability.
Yep, it seems to be the case. The GP file lists many doctors as Active and assigned to a practice, but many also have an end date (the dates imply retirement).
I think Active means they are still registered as a doctor, and the end date means they are no longer practising medicine (with the NHS) - so as FHIR Practitioners these would be inactive? [I think this is the approach I’ve used in the past.]
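That interpretation could be sketched as a mapping rule. The status code “A”, the YYYYMMDD date format and the identifier system URI are all illustrative assumptions, not the official profile:

```python
import datetime

def to_fhir_practitioner(gp_code, name, status, end_date=None, today=None):
    """Map a GP-file row to a minimal FHIR Practitioner resource (as a dict).

    Rule assumed from the discussion: "Active" in the file means still
    registered as a doctor, so a past end date forces active=False even
    when the file says Active.
    """
    today = today or datetime.date.today()
    ended = (end_date is not None and
             datetime.datetime.strptime(end_date, "%Y%m%d").date() <= today)
    return {
        "resourceType": "Practitioner",
        # Placeholder identifier system - use the correct NHS system URI
        "identifier": [{"system": "https://fhir.example/Id/gp-code",
                        "value": gp_code}],
        "name": [{"text": name}],
        "active": status == "A" and not ended,
    }
```

The `today` parameter just makes the rule testable; a real load would compare against the run date.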