The Future of AI in Radiology: There’s an App for That!

Easy access to algorithms accelerates development of apps – what will it mean for radiology?

By Eliot Siegel, MD, FACR, FSIIM.

Want to improve detection of intracranial hemorrhage in a trauma patient? Or quantify the progression of multiple sclerosis? There will be an app for that – and likely sooner than you expect!

[Image: cell phone with apps]

What’s the future of AI apps in radiology? And what are their implications for radiologists?

Within the next five years, there will be a staggering number of new and useful apps for diagnostic imaging. Radiologists will also consume them very differently – downloading a la carte algorithms for diagnostic imaging from “app store” platforms. What’s behind this fast-paced development? A sharp decrease in the time and specialized expertise it takes to develop new AI applications. Read on to understand this transformation – and its implications for radiology.

What’s accelerating the development of AI apps in radiology?

Let’s start with a quick look at the technology developments that are fast-tracking AI applications. To some people, the application of artificial intelligence in diagnostic imaging sounds like a new concept. However, radiology has been applying a form of AI – computer-aided diagnosis (CAD) – for decades. The AI applications emerging now are no better and no worse than those CAD tools; what has changed is how they are built.

However, developing CAD applications is a multi-step, time-consuming, and complex process. It requires sophisticated knowledge of image segmentation, feature extraction, and statistical analysis. First, a CAD developer needs access to a large number of cases of a given pathology, such as lung cancer. Then the developer must choose from a variety of advanced image analysis techniques to define the boundaries of the lung, segment out normal anatomy, identify structures outside the realm of normal anatomy, and then isolate those structures. Finally, the developer must analyze multiple parameters such as size, border contours, symmetry, texture, and many others.
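The hand-engineered pipeline above can be illustrated with a vastly simplified sketch: segment candidate regions, extract hand-crafted features, then score them with fixed, expert-tuned rules. Everything here – the toy 5×5 “slice,” the thresholds, and the rules – is an invented stand-in, not a real CAD implementation.

```python
def segment_candidates(image, threshold=0.5):
    """Return connected bright regions (candidate lesions) via flood fill."""
    rows, cols = len(image), len(image[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or image[r][c] < threshold:
                continue
            stack, region = [(r, c)], []
            while stack:
                y, x = stack.pop()
                if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                    continue
                if image[y][x] < threshold:
                    continue
                seen.add((y, x))
                region.append((y, x))
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
            regions.append(region)
    return regions

def extract_features(region):
    """Hand-crafted features: size and a crude compactness measure."""
    ys = [y for y, _ in region]
    xs = [x for _, x in region]
    box = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    return {"size": len(region), "compactness": len(region) / box}

def flag_suspicious(image):
    """Apply fixed, hand-tuned rules to each candidate region."""
    return [f for f in map(extract_features, segment_candidates(image))
            if f["size"] >= 3 and f["compactness"] > 0.6]

# Toy "slice": one compact bright blob plus one isolated bright pixel.
slice_ = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
]
print(flag_suspicious(slice_))  # only the 2x2 blob passes the rules
```

The point of the sketch is that every stage – segmentation, features, decision rules – had to be designed and tuned by the developer, which is what made classical CAD development so slow.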

Today, algorithms are generated based on the data – eliminating the lengthy development process.

Here’s how the process looks today, using lung cancer as an example. The developer accesses a large cohort of cases that are positive or negative for lung cancer. Then the developer applies open source tools for deep learning that are readily available from Google and elsewhere to create an algorithm that can discriminate between positive and negative cases. The algorithm is generated from the data – eliminating the lengthy development process described above. It is important to note that high-quality, detailed annotations and large numbers of cases are required to optimize this process.
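The essential shift is that the decision rule is learned from labeled examples rather than hand-coded. In practice this is done with deep learning frameworks on thousands of annotated studies; the minimal sketch below only illustrates the idea, using a simple logistic-regression classifier trained by gradient descent on two synthetic, hypothetical “features” (say, lesion size and border irregularity) – none of this is real clinical data or a real radiology algorithm.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    return 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))

def make_case(positive):
    # Synthetic feature vector; positives cluster away from negatives.
    base = (3.0, 2.0) if positive else (1.0, 0.5)
    return [base[0] + random.gauss(0, 0.5), base[1] + random.gauss(0, 0.5)]

cases = ([(make_case(True), 1) for _ in range(50)]
         + [(make_case(False), 0) for _ in range(50)])

w, b = [0.0, 0.0], 0.0            # parameters learned from the data
for _ in range(500):              # plain stochastic gradient descent
    for x, y in cases:
        err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b -= 0.1 * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

accuracy = sum(predict(x) == bool(y) for x, y in cases) / len(cases)
print(f"training accuracy: {accuracy:.2f}")
```

Notice that no stage of the classifier was hand-designed: the same few lines of training code would produce a different discriminator given a different labeled cohort – which is also why the quality and representativeness of the annotated cases matter so much.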


Who’s minding the app store?

Without a doubt, the difficulty and length of development time have decreased. But subsequent discussions aren’t so black and white. A hotly researched topic now is how many cases – medical images – are sufficient to “train” an algorithm. Some developers – likely those who are not from the medical profession – think as few as 20 cases are sufficient. They don’t understand that generalizing and learning from a limited dataset is, at this point in time, a skill unique to humans.

Although it takes far less time and expertise to develop a deep learning algorithm, the testing, verification, and regulatory approval take as long or even longer than for more traditionally developed CAD applications. So far, the FDA has cleared fewer than a handful of deep learning applications. One hurdle is that evaluating the efficacy of AI applications requires new skill sets. And soon, the FDA will be flooded with AI applications as new developers enter the market.

Other questions have yet to be answered. For example, who will test the algorithms to see if they actually work? Many recent studies – typically published without clinical peer review – have suggested that deep learning outperforms radiologists. However, these studies have major clinical flaws.

Which apps should be approved by the FDA? And who is responsible from a medico-legal perspective?

Also, which apps should be approved by the FDA? Currently there are more than 100 do-it-yourself smartphone dermatology diagnostic apps, none of which has been extensively tested in clinical trials. Will an analogous, unregulated set of apps be developed for the public to use with the CDs they are given after their MRI, CT, or other imaging studies? We have well-developed, strict requirements for certifying our radiologists. We need to be equally strict about testing and validating medical imaging algorithms, especially those offered directly to the public.

And last but not least, who is responsible from a medico-legal perspective for a missed diagnosis or misdiagnosis when computers perform the primary reads? This substantial concern drives cardiologists in the U.S. to overread EKGs, even though computers have been interpreting them for 40 years.

Exploring AI in the short term

The answers to these big questions will take time. What will happen in the short term that will impact radiologists and imaging directors?

Over time, you will be presented with a fair number of algorithms you haven’t seen before. And some will come from companies you have never heard of – at least not in the medical imaging space. Easy access to algorithms has democratized the development process. Innovative startups and universities, as well as current providers, will become medical imaging app providers.

You will also be able to consume the new applications in a way that is much less constrained than before. You won’t necessarily need an extensive contract with a single group of algorithm developers, as you do now for your PACS and other software. You will be able to purchase the applications you want a la carte – picking one app from one vendor and other apps from different providers.

AI questions to explore in your practice

Start by asking yourself what kinds of apps would improve your workflow. Which interpretations are most common in your patient population? Then look to see if there is an app for them. As you evaluate an app, consider the dataset. Was it based on a population similar to yours? An algorithm developed using a database from a single or limited source might not reflect your patient population.

Have a conversation with current providers of diagnostic imaging software. Ask about their roadmap for app development. Tell them the types of software you are interested in, whether it involves imaging findings, hanging protocols, image quality assessment, clinical information extraction, etc. And ask about their plans for deploying the apps. Will you go to their website, where the algorithm resides, and upload your images? Or will you install their software on your workstations?

Today, our imaging centers have multiple workstations – one for each vendor’s software. Will this be the case for apps? Or is there a new workstation or software on the horizon that will be more flexible? Get your IT department involved in the conversation, too.

AI for non-pixel data holds more promise than analysis of images

Despite all of the attention focused on AI for image interpretation, we can lay to rest the discussion of whether AI will replace radiologists. It won’t.

There are a handful of effective apps that are replacing some of the mundane functions in radiology interpretation, such as lung nodule or rib fracture detection on thoracic CT. Other deep learning algorithms provide a preliminary interpretation that brings cases suspicious for pathology to the top of a worklist. But in these cases, AI is our partner in diagnosis rather than an alternative reader.

In the short term, the majority of apps will be for tasks other than pixel-based interpretation. These are apps that improve image quality and accelerate image generation in MR, CT, and other modalities; or improve workflow, communication, and follow-up. I personally would like to see more time and effort spent on developing algorithms for enhanced image quality, safety (including dose), and more effective communication. Hopefully, there will be an app for that!

Eliot Siegel, MD, FACR, FSIIM, is Professor and Vice Chair at the University of Maryland School of Medicine Department of Diagnostic Radiology and Nuclear Medicine; and Chief of Radiology and Nuclear Medicine for the Veterans Affairs Maryland Healthcare System, both in Baltimore, MD.  He is also adjunct Professor of Computer Science and Biomedical Engineering at the University of Maryland undergraduate campuses.  He pioneered the world’s first hospital-wide filmless radiology department and has written over 300 publications on topics related to digital imaging, big data and high performance computing, and artificial intelligence applications in medicine. Dr. Siegel is on Carestream’s Medical Advisory Board and on Carestream’s Board of Directors.

Learn about Carestream’s advanced artificial intelligence and imaging analytics software tools that are designed to enhance both the quality and speed of diagnosis and reporting for radiology imaging exams.

#AI #algorithm #deeplearning #radiology

Can’t get enough of AI in radiology? Read on!

Increasing the Value of Enterprise Imaging Platforms with Analytics and Productivity Solutions

Defining Artificial Intelligence, Deep Learning, and Machine Learning in Diagnostic Imaging

Will Radiologists Be Replaced by Computers? Debunking the Hype of AI

