Champion safety and ethics: A new era for AI in healthcare
A wide body of literature now exists on how to assess AI technologies for use in healthcare settings, but this knowledge doesn’t lend itself to practical application. This blog post introduces a new national standard for AI in healthcare that synthesizes what’s been learned into a single, practical and auditable framework.
A body of guidance and academic literature has recently emerged that attempts to identify the key technical, professional, organizational and ethical criteria for the use of AI systems in healthcare. This is all valuable, but not necessarily easy to work with, particularly since much of it takes the form of high-level guidance that is neither practical nor directly actionable.
With this in mind, Guy’s and St Thomas’ Cancer Centre conducted a review of published validation frameworks, evidence standards and methods for evaluating AI technology in healthcare, supplemented by discussions with a range of people working in AI and machine learning. The picture that emerged was of no nationally or internationally agreed validation framework for AI development and clinical evaluation in healthcare.
The role of standards
The review also identified that this gap creates problems both for suppliers of AI systems for healthcare and for healthcare organizations looking to assess systems for incorporation into their provision of care. Suppliers developing AI systems may lack the capacity, or the full range of expertise, needed to gather the separate pieces of existing guidance and translate them into practical steps in their production process. Healthcare organizations looking to procure AI systems may not have the expertise to carry out their own assessments of those systems to make sure they meet sufficient standards of quality, safety and equity in outcomes.
In parallel, BSI started working with regulators, healthcare organizations and other bodies to consider the role of standards in supporting the regulation and governance of AI in healthcare. It was determined that what was missing from the current landscape was a concretely auditable standard focused on the use of AI in healthcare.
Such a standard would translate the assessment of complex functionality and ethical considerations into an actionable and informative framework, where, if necessary, responsibility for assessment could be devolved to a third-party auditor.
Building on the initial work from Guy’s and St Thomas’, and to fill the identified gap in the landscape, BSI convened a diverse panel of experts and stakeholders from industry, government and the healthcare sector to develop BS 30440:2023 Validation framework for the use of AI within healthcare – Specification.
A single, practical framework
This standard synthesizes the aforementioned literature into a single, practical framework setting out a comprehensive set of requirements covering key evaluation criteria: clinical benefit, standards of performance, safe and successful integration into the clinical working environment, ethical considerations, and socially equitable outcomes from system use. The framework can serve as a comprehensive guide for AI suppliers during their development process.
In addition, the standard consists of a set of concretely auditable clauses against which an AI system can be assessed for conformity. This means that AI systems for healthcare can be certified by third-party auditors with domain expertise as compliant with the specification laid out in BS 30440:2023. Healthcare organizations can then mandate BS 30440 certification as a requirement in their procurement processes, ensuring these systems have met a known standard.
The net result is a document that can help users ensure that AI systems in healthcare settings offer demonstrable clinical benefits, meet sufficient standards of technical performance, integrate safely into the clinical working environment and lead to socially equitable outcomes. It is hoped that, in addition, this new standard can help accelerate innovation in AI healthcare systems, facilitate their trade and improve the efficiency with which solutions are developed and deployed.
Learn more about BS 30440.