Artificial intelligence (AI) is revolutionizing medical diagnostics. State-of-the-art results have already demonstrated that software can deliver fast, accurate image-based diagnosis of conditions affecting the skin, eye, ear, lung, breast, and more. These advances can help automate diagnosis and triage, speeding up referrals especially in urgent cases, freeing up expert resources, delivering consistent accuracy regardless of local skill levels, and making diagnostic services more widely available. This is a ground-breaking development with far-reaching consequences, and many innovators are naturally scrambling to capitalize on it.
Furthermore, this report considers the trend of digital health more generally. It provides a detailed overview of the ecosystem and offers insights into the key trends, opportunities, and outlooks for all aspects of digital health, including: telehealth and telemedicine; remote patient monitoring; digital therapeutics (digiceuticals, software as a medical device); diabetes management; consumer genetic testing; the smart home as a carer; and AI in diagnostics.
Significant funding is flowing to start-ups, and to the R&D teams of large corporations, that develop AI tools to accelerate and improve the detection and classification of various diseases. These tools draw on numerous data sources, ranging from RGB images to CT scans, ECG signals, mammograms, and pathology slides. State-of-the-art results demonstrate that software can perform these tasks faster, more cheaply, and often more accurately than trained experts and professionals.
This is an important development which, if successful, could have far-reaching consequences: it could make diagnostics much more widely available, and it could free up medical experts' time for more complex tasks that currently sit beyond the capabilities of AI-based automation. The technology is making rapid progress, but technology is only one piece of the puzzle, and many other challenges will need to be overcome before such software tools are widely adopted. The direction of travel, however, is clear.
Naturally, there is a strong business case here, and many are seeking to capitalize on it. One example is IDx, based in Iowa in the US, which has designed and developed an algorithm to detect diabetic retinopathy. Their AI system achieves a sensitivity of 87% and a specificity of 90%. As early as 2017, it was tested on 900 patients across 10 sites in the US.
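To make the two headline figures concrete, the sketch below shows how sensitivity and specificity are computed from a confusion matrix. The counts used here are invented for illustration and are not IDx's actual trial data.

```python
# Illustration only: sensitivity and specificity from a confusion matrix.
# The counts are hypothetical, chosen to reproduce the 87% / 90% figures.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of diseased cases correctly flagged.
    Specificity = TN / (TN + FP): share of healthy cases correctly cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical trial: 100 diseased and 100 healthy patients.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=90, fp=10)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=87%, specificity=90%
```

The two measures trade off against each other: a screening tool for a sight-threatening disease is usually tuned toward high sensitivity, accepting more false positives in exchange for missing fewer urgent cases.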
A particularly insightful test in eye clinics is optical coherence tomography (OCT), which creates high-resolution (5 µm) 3D maps of the back of the eye that require expert analysis to interpret. OCT is now one of the most common imaging procedures, with 5.35 million OCT scans performed in the US Medicare population in 2014 alone. This volume creates a backlog in processing and triage, and the resulting delays can be harmful when they hold up treatment in urgent cases.
This two-stage design, in which a first network segments the raw OCT scan into a tissue map and a second network makes the referral recommendation from that map, is beneficial because when the OCT machine or image definition changes, only the first stage needs to be retrained. This helps the algorithm become more universally applicable across devices. In an end-to-end network, by contrast, the entire network would need to be retrained.
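A minimal sketch of the decoupling idea is shown below. The function names and the placeholder logic are entirely hypothetical, not DeepMind's actual implementation; the point is only that the device-specific stage and the device-independent stage are separate components with a fixed interface between them.

```python
# Hypothetical sketch of a two-stage pipeline. Only segment() depends on the
# scanner hardware; refer() consumes a device-independent representation, so
# a new OCT machine requires retraining stage 1 alone.

def segment(raw_scan):
    """Stage 1 (device-specific): in reality a segmentation network mapping a
    raw OCT volume to a tissue map; here a trivial placeholder."""
    return {"tissue_map": raw_scan}

def refer(segmentation):
    """Stage 2 (device-independent): in reality a classifier over the tissue
    map; here a placeholder referral rule on an invented 'fluid' flag."""
    return "urgent" if segmentation["tissue_map"].get("fluid") else "routine"

def diagnose(raw_scan):
    # The fixed interface between the stages is the segmentation map.
    return refer(segment(raw_scan))
```

In an end-to-end design, `diagnose` would be a single network from raw scan to referral, and any change in scanner output would invalidate all of its learned weights at once.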
DeepMind demonstrated that the performance of their AI in making referral recommendations reaches or exceeds that of experts on a range of sight-threatening retinal diseases. The error rate on referral decisions is 5.5%, matching or exceeding specialists even when the specialists are given fundus images and patient notes in addition to the OCT scan. Furthermore, the AI beat all retina specialists and optometrists on sensitivity and specificity in referring urgent cases. This is clearly only a first step, but an important one that truly opens the door.
The business cases are not limited to disease detection. Haut.AI, an Estonian company, proposes to use images to track skin dynamics and offer recommendations. For example, their AI can serve as a simple and accurate predictor of chronological age using only anonymized images of eye corners. The networks were trained on 8,414 anonymized high-resolution images of eye corners labelled with the correct chronological age. For people aged 20 to 80 in a specific population, the model achieves a mean absolute error of 2.3 years.
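The mean-absolute-error figure is the average, over the test set, of the absolute gap between predicted and true age. A short sketch makes the metric explicit; the ages below are invented for illustration, not Haut.AI's data.

```python
# Mean absolute error (MAE): the metric behind the "2.3 years" figure.
# Sample ages are invented for illustration only.

def mean_absolute_error(true_ages, predicted_ages):
    """Average absolute difference between true and predicted values."""
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

true_ages      = [25, 40, 63, 71]
predicted_ages = [27, 38, 66, 70]  # hypothetical model outputs
print(mean_absolute_error(true_ages, predicted_ages))  # 2.0
```

An MAE of 2.3 years thus means that, on average, the model's age estimate is within about two to three years of the subject's true chronological age.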
There are naturally many more start-ups active in this field. Some firms focus on health diagnostics, while others seek to use AI to create tailored skincare regimes and product recommendations. The path to market, and the regulatory barriers, will naturally differ for each target function.