Faced with the growth and diversity of health data, what role does interoperability play?
How are these data produced?
Healthcare data is growing exponentially. According to a recent study published by the LIR, the amount of data worldwide is expected to reach 2.3 billion gigabytes by 2020. This production has been boosted in part by the proliferation of new sources: medical devices and connected objects, home automation sensors, mobile apps, specialised software, etc.
If we look at the Internet of Things (IoT), the health sector is a pioneer according to an international study released in September 2017. It reveals that patient follow-up and equipment maintenance top the list of IoT applications, cited by two fifths of the professionals surveyed.
Nevertheless, most healthcare data still stems from encounters between patients and healthcare professionals, whether directly or indirectly, once the results of biological and imaging examinations, which account for a huge share of the overall volume, are taken into consideration.
Scientific progress has brought new developments, especially in the field of genome analysis. The 2025 French Genomic Medicine plan aims to improve access to genetic diagnoses in France and is set to produce several tens of petabytes of data per year over a five-year period. We have entered the era of bioinformatics, and so-called ‘omic’ technologies can now generate huge amounts of data on multiple biological levels: “from gene sequencing to protein expression [proteomics] and metabolic structures, this data can cover every mechanism involved in the variations that occur in cellular networks, and which influence the functioning of entire organic systems.”
In an interview with Sciences et avenir in March 2018, Nicolas Garcelon, Head of the Data Science Platform at the Imagine Institute in Paris, estimated that a single hospital produces 10 gigabytes of data a year, the equivalent of 20,000 copies of Stendhal's The Red and the Black.
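As a quick sanity check on that comparison (the byte counts below are assumptions, not figures from the interview), 10 gigabytes divided by 20,000 copies works out to about 500 kilobytes per book, which is a plausible size for a plain-text novel of that length:

```python
# Sanity check: 10 GB per year vs 20,000 copies of a novel (decimal units assumed).
total_bytes = 10 * 10**9          # 10 gigabytes
copies = 20_000                   # copies of The Red and the Black
per_book = total_bytes / copies   # implied size of one plain-text copy
print(f"{per_book / 1_000:.0f} KB per copy")  # prints "500 KB per copy"
```

At roughly 1 KB of plain text per printed page, 500 KB matches a ~500-page novel, so the order of magnitude holds.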
“Our data centre today houses the administrative, medical and social information of eight million patients, 163 million biological test results and five million medical reports,” added Claire Hassen-Khodja, Healthcare Data Warehouse Manager for the AP-HP Clinical Research and Innovation Delegation.
Interoperability challenges to be addressed:
The more the sources diversify, accentuating the heterogeneity of the data, the harder it becomes to make information systems interact with one another.
In the healthcare sector, data acquisition benefits from the growing use of voice recognition (particularly in radiology). However, healthcare workers reportedly still spend 40% of their time documenting patient records, and more than half of them consider this far too much.
Furthermore, unstructured data entry is still mixed in with integrated, coded and standardised data in electronic patient records. No institution or private practice has moved away from paper (or fax, for that matter), and professionals (or their secretaries) spend a considerable amount of time scanning external documents to keep electronic records as complete as possible.
A significant step was taken when the conditions were specified under which digital copies of medical files now carry the same probative value as the original paper document.
By specifying the conditions under which paper medical records may be destroyed, an order published in January 2017 removed one of the obstacles preventing healthcare from becoming more computerised.