Ethical considerations of AI in healthcare


June 22, 2023 - In times of unprecedented demand, hospitals and medical institutions are using artificial intelligence (AI) to help ease pressures on staff while maintaining excellent quality of care. However, for AI models to make accurate predictions, they need to be trained on real-world data, which is currently riddled with the many biases that plague society.

AI is revolutionizing healthcare

AI has great potential to accelerate medical research and preventative healthcare. For example, early signs of disease can be detected quickly and accurately using medical imaging tools overlaid with AI technology. Another notable example is DeepMind's AlphaFold system, which can predict a protein's three-dimensional structure from its amino acid sequence, solving the infamous “protein-folding” problem and potentially transforming the fields of medical research and drug discovery.

A moral dilemma

While AI is a transformational tool in healthcare, developing systems that align with ethical principles and values is critical for widespread adoption. Ethical AI involves ensuring that the technology is designed, deployed, and governed in a manner that respects human rights and privacy and maintains accountability. Ethical concerns with AI include:

  • Under-representation of minority groups. Diseases can manifest different symptoms depending on the patient's gender and race. The under-representation of minority groups in training data therefore exacerbates disparities in an AI model's performance when it is applied to diverse patient populations; one way to surface such gaps is sketched after this list. Moreover, training models to predict healthcare risk using healthcare spending data can perpetuate socio-economic biases, as such data predominantly reflects economically advantaged demographics who can afford healthcare.
  • Data privacy concerns. Healthcare data comes from many sources, ranging from hospital patient records to data collected by wearable sensors. This highly personal data carries a network of ethical concerns around privacy and confidentiality, informed consent, and patient autonomy (read more in Keeping up with cyber risks in AI-powered healthcare wearables by Jeanne Greathouse). Access to this data must be governed carefully, as, in the hands of bad actors, it can be exploited to limit economic opportunities for individuals with higher health risks.
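
As a concrete illustration of the representation problem, the sketch below computes a screening model's recall separately for each demographic group, the kind of check that can reveal when an under-represented group is being served worse. It is a minimal sketch: all function names and data are hypothetical, and in practice this would run against a real validation set.

```python
# Minimal sketch: per-group recall for a binary screening model.
# All names and data here are hypothetical and purely illustrative.
from collections import defaultdict

def recall_by_group(records):
    """Compute recall (sensitivity) separately for each demographic group.

    Each record is a (group, true_label, predicted_label) triple with
    binary labels, where 1 marks the condition being screened for.
    """
    positives = defaultdict(int)       # actual positives seen per group
    true_positives = defaultdict(int)  # correctly flagged positives per group
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 1:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives if positives[g]}

# Illustrative toy data: the model misses more cases in group "B",
# as might happen when that group was under-represented in training.
toy_records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
print(recall_by_group(toy_records))  # e.g. {'A': 0.75, 'B': 0.333...}
```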

Examining AI under an ethical lens

AI doesn’t exist in a vacuum. It involves organizational change, with model predictions often tightly coupled to decision-making processes. It is this systemization of AI that fuels its transformative power to scale operations; but when these decisions directly impact human lives, we risk scaling the consequences of coded discrimination as well. For example, in 2019 a widely used algorithm was found to prioritize patients based on predicted health risk in a way that constrained access to care for marginalized communities. This is why it is crucial that input data and model outputs are examined critically under an ethical lens, to recognize their limitations and define the situations within which a model’s performance is sufficient for rollout at scale.
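
One hedged way to turn "define the situations within which a model's performance is sufficient" into practice is a pre-rollout gate that blocks deployment when any group falls below a performance floor, or when the gap between the best- and worst-served groups is too wide. The function and thresholds below are illustrative assumptions, not drawn from any standard, and build on the per-group recall computed in the earlier sketch.

```python
# Minimal sketch of a pre-rollout gate over per-group metrics.
# Thresholds are hypothetical placeholders, not recommended values.
def approve_rollout(recall_per_group, min_recall=0.80, max_gap=0.05):
    """Approve deployment only if every group clears a performance floor
    and the disparity between best- and worst-served groups stays small."""
    if not recall_per_group:
        return False
    worst = min(recall_per_group.values())
    best = max(recall_per_group.values())
    return worst >= min_recall and (best - worst) <= max_gap

metrics = {"A": 0.91, "B": 0.74}
print(approve_rollout(metrics))  # False: group B falls below the floor
```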

Establishing responsibility in the complex decision-making pathways of AI use is difficult, especially under unexpected circumstances or when unintended outcomes lead to medical error or harm. Fostering transparency across the AI development and deployment lifecycle, and instilling accountability by identifying the parties responsible for adverse events, is imperative to alleviate these concerns.
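
One practical mechanism for that traceability is to record, for every model-assisted decision, the exact model version, its recommendation, and the accountable clinician's final call, so responsibility can be reconstructed after an adverse event. The record below is a minimal sketch with hypothetical field names, not a prescribed schema.

```python
# Minimal sketch of an audit-trail record for a model-assisted decision.
# Field names and values are hypothetical and purely illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class DecisionRecord:
    patient_ref: str      # pseudonymized identifier, never raw PII
    model_version: str    # exact model build that produced the output
    model_output: str     # the recommendation, not the final decision
    clinician_id: str     # the accountable human decision-maker
    final_decision: str   # what the clinician actually decided
    timestamp: str        # when the decision was recorded (UTC)

record = DecisionRecord(
    patient_ref="anon-4821",
    model_version="risk-model-2.3.1",
    model_output="flag-for-follow-up",
    clinician_id="dr-0042",
    final_decision="follow-up scheduled",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))  # append to a write-once audit log
```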

AI systems should complement, not replace, the judgment and expertise of healthcare providers, and patients should always retain the right to make decisions about their own care. A patient's right to the privacy and security of their personal health information shouldn't infringe on their right to healthcare: upholding individual human rights such as access to treatment shouldn't require that the patient be processed by an algorithm.

Responsibly applying AI

The AI standardization landscape is evolving at pace to cater to the rise in industrial AI adoption. With new AI standards and regulatory frameworks underway, there is more guidance than ever for organizations to establish controls and maximize the value of AI use while minimizing risk. A broader organizational shift in mindset is required so that these ethical concerns are no longer addressed as afterthoughts for compliance adherence.

The use of AI will undoubtedly push the frontiers of medical research and bring forth a new era of healthcare delivery, but it must be well-governed and have patients at the heart of all solutions. If these considerations are integrated early into the design process, organizations can foster responsible innovation and apply AI safely while protecting patients' rights and improving care.

For more insights on advancing technologies in healthcare, read Keeping up with cyber risks in AI-powered healthcare wearables and Digital trust in healthcare: Innovation vs protection. For further insights on digital trust, supply chain, sustainability and environmental, health, and safety topics that should be at the top of your organization's list, visit BSI's Experts Corner.