Healthcare researchers must be careful not to misuse AI


Model overfitting. Given a dataset with data points (green dots) and a true effect (black line), a statistical model aims to estimate the true effect. The red line illustrates a good estimate, while the blue line illustrates an overfitted ML model that over-relies on outliers. Such a model seems to give excellent results on this particular dataset, but does not do well on another (external) dataset. Credit: Nature Medicine (2022). DOI: 10.1038/s41591-022-01961-6
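The overfitting pattern the figure describes can be reproduced in a few lines. This is an illustrative sketch on synthetic data (not taken from the paper): a straight-line fit plays the role of the "red line," and a high-degree polynomial that interpolates a small noisy sample plays the overfitted "blue line."

```python
# Illustrative sketch (synthetic data, not from the paper): a flexible model
# wins on the data it was fitted to, but loses on an external dataset.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)  # true effect: y = 2x
x_test = np.linspace(0, 1, 200)                           # "external" dataset
y_test = 2 * x_test + rng.normal(0, 0.3, x_test.size)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on a dataset."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # the "red line": appropriate model
flexible = np.polyfit(x_train, y_train, deg=9)  # the "blue line": chases the noise

train_gap = mse(simple, x_train, y_train) - mse(flexible, x_train, y_train)
test_gap = mse(flexible, x_test, y_test) - mse(simple, x_test, y_test)
print(f"flexible model wins on training data by {train_gap:.3f} MSE")
print(f"but loses on the external data by {test_gap:.3f} MSE")
```

With ten points and a degree-9 polynomial, the flexible model essentially memorizes the sample, which is exactly why its apparent performance does not survive external validation.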

An international research team, writing in the journal Nature Medicine, advises that great care must be taken to ensure that machine learning (ML) is not abused or overused in health research.

“I absolutely believe in the power of ML, but it has to be a relevant supplement,” said Dr. Victor Volovici, neurosurgeon trainee and statistics editor, first author of the commentary, from Erasmus MC University Medical Center, The Netherlands. “Sometimes ML algorithms do not perform better than traditional statistical methods, leading to the publication of articles that lack clinical or scientific merit.”

Real-world examples have shown that misusing algorithms in healthcare can perpetuate human biases or inadvertently cause harm when machines are trained on biased datasets.

“Many believe that ML will revolutionize healthcare because machines make decisions more objectively than humans. But without proper oversight, ML models can do more harm than good,” said Associate Professor Nan Liu, senior author of the commentary, from the Center for Quantitative Medicine and Health Services & Systems Research Program at Duke-NUS Medical School, Singapore.

“If we use ML to uncover patterns that we wouldn’t otherwise see — like in radiological and pathological images — we should be able to explain how the algorithms got there to allow controls and counterbalances.”

Together with a group of scientists from the UK and Singapore, the researchers emphasize that while guidelines have been formulated to regulate the use of ML in clinical research, these guidelines only apply once a decision to use ML has been made, and do not ask if or when its use is even appropriate.

For example, companies have successfully trained ML algorithms to recognize faces and road objects from billions of images and videos. But when it comes to their use in healthcare, algorithms are often trained on only tens, hundreds, or thousands of data points. "This underscores the relative poverty of big data in healthcare and the importance of working to match sample sizes achieved in other industries, as well as the importance of a concerted, international effort to share health big data," the researchers write.

Another problem is that most ML and deep learning algorithms (which do not receive explicit instructions about how to reach a result) are often still considered a "black box." For example, at the beginning of the COVID-19 pandemic, scientists released an algorithm that could predict coronavirus infections from images of the lungs. In retrospect, it turned out that the algorithm had drawn conclusions based on the imprint of the letter "R" (for "Right Lung") on the images, which was always in a slightly different place on the scans.
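The "R"-marker failure is an instance of shortcut learning, and it can be demonstrated with a toy model. The sketch below uses invented synthetic data (not the actual COVID model): a simple one-feature classifier latches onto a spurious marker that happens to track the label perfectly in the training data, then collapses on external data where the marker no longer lines up.

```python
# Illustrative sketch (synthetic data): a model picks a spurious "shortcut"
# feature -- analogous to the "R" marker on the lung scans -- and fails
# as soon as that shortcut stops matching the label.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Training data: feature 0 is a weak genuine signal; feature 1 is a marker
# that, by accident of data collection, perfectly tracks the label.
y_train = rng.integers(0, 2, n)
X_train = np.column_stack([
    0.2 * y_train + rng.normal(0, 1, n),  # weak real signal
    y_train.astype(float),                # spurious marker
])

def best_stump(X, y):
    """Exhaustively pick the single feature/threshold with lowest training error."""
    best = (0, 0.0, 1.0)  # (feature, threshold, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            err = float(np.mean((X[:, j] >= t).astype(int) != y))
            if err < best[2]:
                best = (j, t, err)
    return best

feat, thresh, _ = best_stump(X_train, y_train)
train_acc = float(np.mean((X_train[:, feat] >= thresh).astype(int) == y_train))

# External data: the marker is now independent of the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    0.2 * y_test + rng.normal(0, 1, n),
    rng.integers(0, 2, n).astype(float),  # marker no longer tracks the label
])
test_acc = float(np.mean((X_test[:, feat] >= thresh).astype(int) == y_test))

print(f"chosen feature: {feat} (1 = the spurious marker)")
print(f"train accuracy: {train_acc:.2f}, external accuracy: {test_acc:.2f}")
```

The classifier scores perfectly in training yet no better than chance externally, which is why the authors argue that an algorithm must be able to show what steps it has taken.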

“We need to move away from the notion that ML can discover patterns in data that we can’t understand,” said Dr. Volovici on the incident. “ML can very well discover patterns that we can’t see directly, but then you have to be able to explain how you arrived at that conclusion. To do that, the algorithm needs to be able to show what steps it has taken, and that requires innovation.”

The researchers advise that ML algorithms (if applicable) should be compared to traditional statistical approaches before being used in clinical research. And when deemed appropriate, they should complement, rather than replace, physician decision-making. “ML researchers should recognize the limitations of their algorithms and models to avoid overuse and abuse, which could otherwise create mistrust and harm patients,” the researchers write.

The team is working to organize an international effort to provide guidance on the use of ML and traditional statistics, and also to set up a large database of anonymized clinical data that can leverage the power of ML algorithms.


More information:
Victor Volovici et al, Steps to avoid overuse and misuse of machine learning in clinical research, Nature Medicine (2022). DOI: 10.1038/s41591-022-01961-6

Provided by Duke-NUS Medical School

Citation: Healthcare researchers must be careful not to misuse AI (2022, September 13), retrieved September 13, 2022 from

This document is protected by copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.
