True or false? Five common myths about AI in healthcare

Trust is widely recognized as a key component in the adoption of artificial intelligence in healthcare, as in other industries. Step one is to confront over-claims, explode a few myths and establish a basic understanding of AI and its relationship to standards. Pat Baird, Head of Global Software Standards at Philips, explains more.

Myth 1: AI works like a human brain

When people see software performing tasks that previously only humans could do, they assume that the software operates like the human brain.
It doesn’t.
The artificial neural nets of AI are connected in much simpler ways than the complex biological neurons in our own brains, and the number of neurons is much smaller.
How we learn, and what we do with that learning, is very different. For instance, an artificial neural network in an image recognition system is given a library of labelled images and gradually learns how to classify them. The system finds patterns in the data – whether those patterns are genuine or not is a different topic. Animals and humans take a hybrid approach: we are born with some innate skills and also learn by exploration. The range of tasks we can perform, and the experiences we accumulate, give us a much broader view of the world and much broader capabilities than the current state of machine learning (ML) systems.
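To make “finds patterns in the data” concrete, here is a heavily simplified sketch: a single artificial neuron that learns to separate bright images from dark ones. Everything here – the synthetic four-pixel “images”, the labels and the learning rule – is chosen purely for illustration and bears no resemblance to a real medical imaging system.

```python
# Toy sketch: one artificial neuron learning a pattern from labelled
# examples. Entirely illustrative – the data is synthetic.
import random

random.seed(0)

def make_image(bright):
    """A 4-'pixel' image; bright images have higher pixel values."""
    base = 0.8 if bright else 0.2
    return [base + random.uniform(-0.1, 0.1) for _ in range(4)]

# The "library of images", each labelled 1 (bright) or 0 (dark)
library = [(make_image(True), 1) for _ in range(50)] + \
          [(make_image(False), 0) for _ in range(50)]

weights = [0.0] * 4   # one weight per pixel
bias = 0.0
lr = 0.1              # learning rate

def classify(pixels):
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

# "Gradually learns how to classify": nudge the weights whenever the
# current prediction disagrees with the label (the perceptron rule).
for _ in range(20):
    for pixels, label in library:
        err = label - classify(pixels)
        weights = [w + lr * err * p for w, p in zip(weights, pixels)]
        bias += lr * err

print(classify(make_image(True)), classify(make_image(False)))
```

The neuron ends up with large weights on bright pixels – a statistical regularity extracted from the examples, not understanding. Whether that regularity holds outside the training library is exactly the question the article raises.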

Myth 2: AI will replace workers

AI is a helper, not a replacement for human participation and knowledge.
Our current state of technology has been described as “narrow AI”: it can know a lot about a little. This is in contrast to people’s expectations of “general AI”, where software can perform as well as, or better than, human intelligence.
This difference is the reason why some people prefer the term “Assistive Intelligence” or “Augmented Intelligence”, rather than “Artificial Intelligence.”

Myth 3: AI is new

The term “artificial intelligence” was first used over 65 years ago, at a research conference where John McCarthy, Marvin Minsky and others proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Myth 4: AI is old. It has failed before and it will again

This myth stems from the fact that there have been several cycles of high expectations followed by failures of AI technology to deliver.
But two things differentiate our current era from previous ones: an explosion of available data and significant improvements in computational power. This combination is largely why there has been a dramatic increase in the use of neural networks in the past few years.

Myth 5: AI needs a new regulatory approach

For traditional software, developers create an algorithm whose behaviour can be explained to the user. For ML systems, we often don’t know why a neural network produces a particular output. This opaqueness can make people nervous, which is why the topics of “explainability” and “trustworthiness” come up so often in ML discussions.
However, we already face issues of opaqueness with pharmaceuticals. For some drugs, we have a relatively good understanding of their chemistry and mechanism of action. For others, we don’t know why they work; we just have a large amount of clinical data showing that they do.
One consequence is that when we know how something works, we also understand the circumstances where it does not work or is less effective, and the side effects we should expect. For opaque systems, unfortunately, we won’t know those things ahead of time; we will likely only discover the shortcomings through experience of using the product.
Although some people think we need a completely new regulatory approach to ML systems, I’ve found that, many times, we already know how to address the issues; we’ve simply not considered applying those techniques to ML systems.

Re-humanizing healthcare

There is a clear need to provide more healthcare than in the past, and to do it more efficiently and effectively. ML applications can help.
There are many ways in which ML systems can help re-humanize healthcare: providing administrative support, helping with patient history review, highlighting points of interest in a diagnostic image, performing screening activities, and suggesting referral to a human specialist for further investigation.
AI can take on these narrow, well-defined activities, freeing caregivers to spend more time on what they do best – applying common sense, considering context and extenuating circumstances, and identifying exceptions.
ML systems will enable caregivers to spend less time with computers and more time with their patients.
The faster we can eliminate myths, the faster this technology will help our caregivers and our patients.

Interested in AI and its potential? Hear from leading thinkers at BSI's The Digital World: Artificial Intelligence online event.

AI and standards: to discover published standards and those in development, look here.

 
