This work attracted the attention of the London Medical Imaging and AI Centre for Value Based Healthcare, which adopted the software as one of the flagship platforms for its AI Deployment Engine (AIDE), funded in part by the Centre's £16m programme to scale up AI in the NHS. AIDE allows multiple AI models to be deployed and integrated into clinical systems, lowering the cost and effort of delivering cutting-edge medical technology to the frontline.
AIDE currently provides access to 20 applications, including brain tumour and liver MRI, Covid-19 chest X-rays and mammography screening, allowing clinicians to access AI analysis within seconds.
Another application measures lactate (lactic acid) in the brains of newborn babies who have had a difficult birth and suffered a hypoxic-ischaemic event (a loss of oxygen to the brain), a potentially serious or life-shortening condition. Previously that analysis was done manually by a team available only on weekdays, 9am to 5pm, so parents could wait a considerable time to learn their baby's condition. With the app the analysis is fully automated and returns a result in 30 seconds, with a real impact on the quality of care those parents receive.
AIDE is being rolled out across 10 NHS Trusts in London and the South East. The first to go live was King's College Hospital NHS Foundation Trust in October 2021, and the remaining nine Trusts will follow by 2023.
Evaluation of patient benefit is highly application-specific. Some applications can readily demonstrate that automation saves patients and MRI departments time: a near-live score of image quality removes the need to recall and rescan a patient at a later date. More operational departments, such as diagnostics, are still affected by Covid-19 backlogs, so it is difficult to get good data with which to compare the impact of automation.
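The article does not describe how the image-quality check is implemented. As a minimal sketch only, assuming a simple sharpness heuristic (variance of the Laplacian) scored while the patient is still on the table, the idea might look like this in Python; the function name and threshold are hypothetical, and real cutoffs would be tuned per scanner and sequence.

```python
import numpy as np
from scipy import ndimage

def quality_score(image: np.ndarray) -> float:
    # Variance of the Laplacian: a standard, crude sharpness proxy.
    # Blurry or motion-corrupted acquisitions tend to score low.
    return float(ndimage.laplace(image.astype(float)).var())

RESCAN_THRESHOLD = 5.0  # illustrative value only, not a clinical cutoff

# Stand-in for a reconstructed slice; a real pipeline would read DICOM data.
scan = np.random.default_rng(0).normal(size=(256, 256))

score = quality_score(scan)
if score < RESCAN_THRESHOLD:
    print("flag for rescan while the patient is still on the table")
else:
    print(f"quality ok (score={score:.1f})")
```

The point of scoring during the appointment, rather than at later review, is that a failed scan can be repeated immediately instead of triggering a recall weeks later.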
The optimisation of oncology and surgical pathways is now being explored, along with more operational functions such as triaging surgical lists and endoscopy scheduling. These applications use natural-language machine learning to work out a patient's acuity and when their next appointment should be; the recommendation is sent to an administrator, freeing gastroenterologists for other tasks.
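The article names natural-language machine learning for this triage step but not a specific method. A minimal sketch, assuming a supervised text classifier (TF-IDF features plus logistic regression) over free-text referral notes: the example notes, labels and appointment windows below are entirely hypothetical, and a deployed system would train on thousands of clinician-labelled notes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: free-text referral notes paired with an
# acuity label an administrator would act on.
notes = [
    "ongoing rectal bleeding and weight loss, urgent review requested",
    "dysphagia worsening over two weeks, anaemia on recent bloods",
    "stable Barrett's oesophagus, routine surveillance due",
    "mild reflux controlled on PPI, annual follow-up",
]
acuity = ["urgent", "urgent", "routine", "routine"]

# TF-IDF features feeding a linear classifier: a common, simple baseline
# for free-text triage before anything more sophisticated is attempted.
triage = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
triage.fit(notes, acuity)

# Map predicted acuity to a suggested appointment window; these windows
# are illustrative, not clinical guidance.
windows = {"urgent": "within 2 weeks", "routine": "within 6 months"}

new_note = "new iron-deficiency anaemia, family history of colorectal cancer"
label = triage.predict([new_note])[0]
print(f"acuity: {label}; suggested appointment: {windows[label]}")
```

In the workflow the article describes, output like this would go to an administrator for booking rather than being actioned automatically, which is what frees up the gastroenterologists' time.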
The project has also led to Haris collaborating with an international community tackling the same problem, particularly in the United States, including joint work on technical design with Stanford University, the Mayo Clinic and Massachusetts General Hospital.