Developing trustworthy AI solutions for healthcare

The use of AI has steadily increased in healthcare, a development that is both promising and worrying if left unchecked.

AI technology has made remarkable advances over the past decade. Computers can accurately classify images and map their surroundings, enabling cars, drones and robots to navigate real-world spaces. AI has enabled human-machine interactions that were not possible before.

Because of this, AI is being studied for a wide range of healthcare applications. These include improving patient care, accelerating drug discovery, and enabling efficient operations and management of healthcare systems.

For patient care, the key goals include analyzing X-ray images and tissue samples for detection and diagnosis, as well as delivering individualized precision medicine for treatment and therapy. But special caution is warranted before positioning a machine to make life-or-death decisions.

The focus should be on AI that supports, rather than replaces, human decision-making in healthcare. A framework in which humans collaborate with machines to reach such decisions is worth striving for, recognizing that machines can provide important insights that complement the expertise of medical professionals.

It’s also worth remembering that machines can make serious errors in judgment. Depending on the AI tools used, they may also be unable to explain the reasons for a particular decision in a way that patients and doctors can trust.

Factors affecting the trustworthiness of AI decisions in healthcare

There are numerous factors that influence the trustworthiness of AI systems. Bias has been widely cited as one of the key issues in AI-based decision-making systems.

A blog post by Michael Jordan, a professor of computer science and statistics at UC Berkeley, recounted how his pregnant wife was told she was at increased risk of giving birth to a child with Down syndrome: her ultrasound showed white patches around the baby’s heart, a marker of the condition. However, the risk statistic had been derived from images taken on much lower-resolution machines; on newer, higher-resolution equipment, such patches can simply be measurement noise. The flawed inference led to a recommendation to perform a risky amniocentesis. Fortunately, they chose not to have the procedure, and Jordan’s wife gave birth to a healthy baby a few months later. Others might not have been so lucky.

Experiences like these underscore the need for a principled approach to building and validating AI-based decision-making systems. Beyond the issues of data quality, bias, and robustness, there is a need to develop systems that are explainable and interpretable, and risk management strategies to identify priorities and make decisions. A good framework and policies help AI systems make better decisions and build trust with stakeholders.

Other factors involve ethical and societal concerns. These matter for any AI-based decision-making system and are critical for systems responsible for patient safety. Imagine, for instance, a health management system that decides which patients receive a treatment in limited supply, or which are sent to intensive care ahead of others with more urgent needs.

There are concerns about privacy and the expectation that AI systems will have some level of transparency and accountability. Some of these questions do not have a clear answer and require further reflection.

Certification to the rescue?

Many industries have benefited from standards that support some level of guidance for the development, production, and distribution of a product or service. The International Organization for Standardization (ISO) has established numerous management system standards that specify requirements to help organizations manage their policies and processes to achieve specific goals.

The AI community is developing a set of standards to guide industry best practices. Methods for assessing the robustness of neural networks and bias in AI systems have already been developed. Others under development will specify risk management processes, methods to handle unwanted biases, and approaches to ensure transparency. Compared to other industries, healthcare systems will certainly face more stringent requirements for data quality, reporting, and more.

While standards and certification programs will not be a silver bullet, they will ultimately provide a framework for organizations to use AI responsibly, measure the effectiveness and efficiency of their systems, manage risk, and continuously improve processes. That goal is still a few years away, but the community is working toward it.

Supporting the decision-making process

So what can we do in the meantime? We should focus on AI that can support the decision-making process, including tools that can help medical professionals make informed decisions.

Systems that perform or support routine tasks help healthcare professionals spend more time on pressing issues and create opportunities for more face-to-face interaction with patients.

For example, consider a technology solution that performs non-contact line-of-sight monitoring of vital signs such as heart rate, respiratory rate, and body temperature in places where people congregate. Installing such camera systems in nursing homes or residences where seniors “age in place” allows for continuous monitoring of their condition and can alert caregivers or healthcare professionals to any changes in a person’s health that may require attention.
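To make the alerting side of such a monitoring system concrete, here is a minimal sketch in Python. The vital-sign names, normal ranges, and the `check_vitals` function are all hypothetical illustrations, not taken from any specific product; a real system would personalize thresholds per resident and account for sensor noise rather than using fixed cutoffs.

```python
# Hypothetical normal ranges for an adult at rest (illustrative only).
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 100),
    "respiratory_rate_bpm": (12, 20),
    "body_temp_c": (36.1, 37.8),
}

def check_vitals(reading):
    """Return a list of alert messages for any vital sign outside its range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is None:
            continue  # sensor dropout: no reading, no alert
        if value < low or value > high:
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# Example: an elevated temperature triggers a single alert for caregivers.
print(check_vitals({"heart_rate_bpm": 72,
                    "respiratory_rate_bpm": 16,
                    "body_temp_c": 38.4}))
```

Note that this sketch only flags out-of-range values; it deliberately leaves the decision about what to do with an alert to a human caregiver, in line with the support-not-replace framing above.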

As the technology advances and our understanding of AI-based decision-making improves, we can certainly expect AI to play a bigger role in healthcare decisions.

According to the American Hospital Association, the nation will face a shortage of 124,000 doctors by 2033, and will need to hire at least 200,000 nurses a year to meet the increased demand. The American Health Care Association and the National Center for Assisted Living (AHCA/NCAL) also found that 99% of nursing homes and 96% of assisted living facilities face staffing shortages.

Given these sobering numbers, the growth of AI and automation for healthcare applications will be crucial in the coming decades. It underscores the need for AI that helps healthcare professionals, today and in the future, work smarter and more efficiently without sacrificing safety.

Within a few years, AI-driven solutions aligned with emerging industry standards will provide tools that safely assess and monitor residents, assist with patient diagnosis and recommended treatments, and dramatically improve the quality of patient care.

