Jon Payne, Director of Sales Engineering and Education at InterSystems, discusses the use of AI in wearables, monitoring, and imaging.
The use of artificial intelligence (AI) in wearables, monitoring, and imaging is forecast to save hundreds of thousands of lives and billions of pounds by helping individuals make healthy lifestyle decisions and helping physicians detect disease early.
A 2023 study by Accenture in the US estimates that generative AI could automate 28% of work time in healthcare and augment another 11%. An earlier study by Deloitte estimated that the combined effect of eight AI applications could save 400,000 lives annually across Europe, deliver €200 billion in annual savings, and free up 1.8 billion hours of healthcare professionals' time.
Yet AI poses specific challenges for the HealthTech industry, and on its own it cannot fuel success. Companies seeking to incorporate AI into their product portfolios will only realise its potential if they surmount the hurdles of data acquisition, interoperability, data cleansing, and privacy.
Overcoming the constraints of the training model
Overcoming these constraints is necessary because AI is only as good as the data used to train its models. AI in medical applications must also go beyond the bounds of its training model to aggregate clinical and non-clinical data from external sources. Those sources could include as many as ten different electronic health record (EHR) environments, along with next-generation sequencing labs, patient questionnaires, and more. This data has to be cleansed, labelled, and rendered interoperable through conformity with a standard such as HL7 FHIR, so that senders and receivers understand the information in the same way. The data must also be protected in line with all regulations and privacy laws that apply in each territory where it is handled.
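As a minimal sketch of what that normalisation step can look like, the following Python maps a hypothetical raw EHR export onto an HL7 FHIR R4 Patient resource. The field names in the raw record are illustrative assumptions, not any specific vendor's schema:

```python
# A minimal sketch: normalising a hypothetical raw EHR export into an
# HL7 FHIR R4 Patient resource (plain dict). Field names in `raw` are
# illustrative assumptions, not any real vendor's export format.
from datetime import datetime

def to_fhir_patient(raw: dict) -> dict:
    """Map a raw, non-standard patient record onto FHIR Patient fields."""
    # Cleansing step: normalise the date of birth to ISO 8601,
    # which FHIR requires for the `birthDate` element.
    dob = datetime.strptime(raw["dob"], "%d/%m/%Y").date().isoformat()
    return {
        "resourceType": "Patient",
        "identifier": [{"system": "urn:example:mrn", "value": raw["mrn"]}],
        "name": [{"family": raw["surname"], "given": [raw["first_name"]]}],
        "birthDate": dob,
        "gender": raw.get("sex", "unknown").lower(),
    }

raw_record = {"mrn": "12345", "surname": "Smith",
              "first_name": "Anna", "dob": "03/07/1980", "sex": "Female"}
print(to_fhir_patient(raw_record))
```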
While an AI application can sift through masses of research or clinical data, it remains constrained by its model and will not have the full context. This means, for example, that AI can only assess a patient's risk of stroke based on the patient data that has actually been entered; anything missing from the record is invisible to the model.
An example from imaging diagnostics
Take the increase in diagnostic imaging, for example. A study in the Journal of Digital Imaging suggested that nearly 60% of radiology orders made no mention of important chronic conditions, despite the increasing prevalence of those conditions. There is an obvious need here to include contextual medical record information within diagnostic imaging workflows. Imaging AI and ML applications can ease workloads and cognitive burdens on radiologists by analysing curated clinical data. However, to do this they need to bridge siloed imaging data and disparate picture archiving and communication systems (PACS) that use proprietary technology.
Achieving this requires knowledge of healthcare data and systems, and interoperability with the approaches of companies such as Epic, GE Healthcare, 3M Health Care, INFINITT, Guerbet, Ricoh, Canon Medical, and Roche Diagnostics. Compliance with the full range of healthcare protocols and standards is necessary to facilitate information flows across the imaging data silos and sources.
Once achieved, this puts the right information in front of radiologists at the right moment and eliminates the need for complex searches in the electronic patient record to retrieve specific patient data. Clinicians have an accurate presentation of current diagnoses, doctors' notes, wearables data, and even genomic information prior to reading a study.
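Here is a sketch of what such pre-fetching of context might look like against a standard FHIR REST endpoint. The base URL and patient identifier are placeholders, and error handling is omitted for brevity:

```python
# A sketch of pulling contextual FHIR resources ahead of an imaging read.
# The server URL is a placeholder; the queries follow the standard FHIR
# REST search pattern (GET [base]/[type]?patient=[id]).
import requests

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR endpoint

def context_for_study(patient_id: str) -> dict:
    """Gather active conditions and recent observations for one patient."""
    conditions = requests.get(
        f"{FHIR_BASE}/Condition",
        params={"patient": patient_id, "clinical-status": "active"},
    ).json()
    observations = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "_sort": "-date", "_count": 10},
    ).json()
    return {
        "conditions": [e["resource"].get("code", {}).get("text")
                       for e in conditions.get("entry", [])],
        "recent_observations": [e["resource"].get("code", {}).get("text")
                                for e in observations.get("entry", [])],
    }
```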
The multiple challenges of making data usable in AI
Multi-source datasets of the type we are discussing here depend on interoperability and compliance with multiple data standards within the healthcare sector. An AI application may be able to exploit the full potential of HL7 FHIR but may also need to work with legacy standards such as HL7 V2 and non-standard, or even non-clinical, data sources. Relying on a single standard is unlikely to gain a new application or device widespread adoption, because in the real world older, long-standing systems remain in use, generating flat files and other non-standard formats.
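To make that concrete, here is a minimal sketch of reading a legacy HL7 V2 PID segment into the same kind of intermediate record a FHIR pipeline might produce. The message is fabricated, and production code would use a mature HL7 library rather than naive string splitting:

```python
# A minimal sketch of parsing demographics from a fabricated HL7 V2 message.
# Real code must handle escaping, repetitions, and component rules, which
# this deliberately skips for clarity.
V2_MESSAGE = (
    "MSH|^~\\&|LEGACY_LAB|HOSP|HUB|HOSP|202401150830||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||12345^^^HOSP^MR||Smith^Anna||19800703|F\r"
)

def parse_pid(message: str) -> dict:
    """Extract basic demographics from the PID segment of an HL7 V2 message."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            family, given = fields[5].split("^")[:2]   # PID-5: patient name
            return {
                "mrn": fields[3].split("^")[0],        # PID-3: identifier list
                "family": family,
                "given": given,
                "birth_date": fields[7],               # PID-7: date of birth
                "gender": fields[8],                   # PID-8: administrative sex
            }
    raise ValueError("No PID segment found")

print(parse_pid(V2_MESSAGE))
```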
The requirement for comprehensive, aggregated data means that HealthTechs face real challenges in training models on highly disparate types of data, whatever the purpose. Data pulled in from a wide range of sources, including devices or systems with legacy messaging standards, has traditionally had to be cleaned and checked for errors by data scientists, because it rarely arrives in structured tables. This preprocessing and labelling transforms data into a format that is suitable for use in AI applications.
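A hedged illustration of that cleaning step, using pandas on a small fabricated dataset with assumed column names, covering deduplication, unit normalisation, and simple imputation:

```python
# A sketch of typical preprocessing before training: deduplication,
# missing-value handling, and unit normalisation. Column names and
# values are illustrative assumptions about a multi-source dataset.
import pandas as pd

df = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", "p3"],
    "weight": [82.0, 82.0, None, 154.0],      # p3 was recorded in pounds
    "weight_unit": ["kg", "kg", "kg", "lb"],
})

df = df.drop_duplicates()                       # remove verbatim repeats
lb_rows = df["weight_unit"] == "lb"             # normalise units to kg
df.loc[lb_rows, "weight"] = df.loc[lb_rows, "weight"] * 0.4536
df["weight_unit"] = "kg"
df["weight"] = df["weight"].fillna(df["weight"].median())  # impute gaps
print(df)
```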
Robust machine learning algorithms commonly used in AI, such as neural networks, can automate much of this work. They can take care of some of the necessary preprocessing and cleaning by interpreting patterns in the training data. This is especially helpful when the data includes natural language text or other data types that are challenging to deal with programmatically.
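As a toy illustration of learned labelling over free text, the sketch below trains a simple model to tag fabricated clinical notes. A linear classifier stands in here for the neural networks mentioned above, and real use would require de-identified, properly governed data:

```python
# A toy sketch of using a learned model to label free-text clinical notes.
# Notes and labels are fabricated; a simple TF-IDF + linear model stands
# in for the larger neural networks discussed in the text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports chest pain radiating to left arm",
    "routine follow-up, no complaints",
    "sudden weakness on right side, slurred speech",
    "annual check-up, vitals within normal limits",
]
labels = ["urgent", "routine", "urgent", "routine"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["slurred speech and facial droop"]))
```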
AI applications need to be built on a single data platform
The full range of onerous preprocessing requirements, however, can only be fully addressed through a single-platform approach that normalises all sources of data and provides the connective tissue with other devices and systems.
It is an approach designed for the challenges of the HealthTech industry, providing a patient-centric model that is ready for analysis. A platform can include numerous trusted pipelines that aggregate data across sources and formats, complemented with AI-based techniques. It will take care of labelling, which is critical for training supervised machine learning models. A platform should also track data lineage, allowing developers to use subsets to train predictive models while keeping the link back to the full dataset to ensure context is retained.
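One way to picture lineage tracking is a record wrapper that carries its source identifiers and a log of applied transforms. The structure below is purely illustrative, not any specific platform's API:

```python
# A sketch of lineage-aware records: each training item keeps a link back
# to its source in the full dataset plus a log of every transform applied.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    payload: dict                 # the normalised, labelled data item
    source_system: str            # e.g. "ehr-a", "pacs-3", "wearable-feed"
    source_record_id: str         # key back into the full dataset
    transforms: list = field(default_factory=list)

    def apply(self, name: str, fn):
        """Apply a transform and log it, preserving provenance."""
        self.payload = fn(self.payload)
        self.transforms.append((name, datetime.now(timezone.utc).isoformat()))
        return self

rec = LineageRecord({"hr": 61}, source_system="wearable-feed",
                    source_record_id="obs-778")
rec.apply("hr_to_label",
          lambda p: {**p, "label": "normal" if p["hr"] < 100 else "elevated"})
print(rec)
```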
With encryption at rest and in flight, this approach gives HealthTech companies what they need for all-round success in the health sector when they incorporate AI into their solutions. New applications benefit immediately from easier, more streamlined deployments, with the ability to evolve the solution rapidly and achieve the genuine scalability on which commercial success depends. The challenges of data preparation for AI applications are often underestimated, but the single platform approach is how they are best overcome.