The emergence of the COVID-19 pandemic brought healthcare systems worldwide to a crossroads, leading to health system failure in some countries. The burden of chronic diseases, an aging population, humanitarian crises, limited finances, and the demand for accessible, affordable, available and sustainable healthcare services have led to exponential growth in healthcare costs that outpaces GDP growth. Post-pandemic, AI has created a revolution in healthcare.

Artificial Intelligence (AI) refers to the integration of algorithms into technology, i.e. systems or tools that learn from data so that tasks are performed automatically, without explicit step-by-step programming by humans.

AI possesses huge potential to serve as a support structure for improving healthcare; however, like all technologies, it carries risks and can cause enormous harm if its design, development and implementation are not regulated. In 2021, WHO published Ethics and Governance of Artificial Intelligence for Health, which focused on maximizing the benefits of AI technologies without compromising the ethical obligations owed to healthcare systems, practitioners and beneficiaries. The guidelines emphasized six principles: protecting human autonomy; promoting human well-being, safety and the public interest; ensuring transparency, explainability and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable. The report pointed out that the benefits and opportunities of AI are linked to serious challenges, threats and risks, and it called for regulation of AI technologies that might otherwise be shaped by the powerful commercial interests of large technology companies or by governments' interest in social control and surveillance, leading to unethical collection and use of health data, biased algorithms, threats to patient safety and rights, and cyber security vulnerabilities.

The rapid development of large multimodal models (LMMs) and their use in healthcare is a matter of great concern. LMMs can accept one or more types of data input and generate new outputs that may or may not be associated with the input data. Adoption of LMMs has been rapid and compelling because they facilitate human-computer interaction that mimics human communication, generating responses to inputs and queries that appear human-like and authoritative. Various regulatory and ethical challenges and risks have come into the picture, for example: are LMMs regulated by international human rights law, national data protection laws, or consumer protection laws? Others include insufficient corporate commitment to ethics and transparency by the large technology companies building these LMMs, and their dominance over governments owing to their significant control of the financial resources, manpower, data and computing required to develop LMMs. Further risks include carbon footprints contributing to climate change, and the undermining of human epistemic authority, even in healthcare, science and medicine. In January 2024, WHO published Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-modal Models. Governance is required at three levels of the AI value chain: design and development, provision, and deployment.

Various risks need to be addressed during:

  1. Development – Bias, privacy, labour concerns, carbon and water footprint, misinformation, erosion of the epistemic authority of humans, safety, cyber security and exclusive control of LMMs
  2. Provision – System-wide bias, misinformation, privacy, manipulation, automation bias
  3. Deployment – Inaccurate or false responses, bias, privacy, accessibility, affordability, automation bias, skill degradation, labour and employment impacts, quality of patient-professional interaction

Various actions need to be taken by:

  1. Developers – Training and data protection, transparency, fair wages, involving diverse stakeholders in design, and designing for accuracy and reliable prediction based on consensus principles and ethical norms.
  2. Governments – Enact data protection and consumer protection laws; mandate outcome audits during early AI development; promote open-source LMMs with funding; focus on ethical risks, certification, and carbon and water footprints; require early registration of algorithms by developers; invest in and provide infrastructure; establish regulatory agencies for assessment and approval; mandate ethical and human rights standards; require source code, data inputs, and proof of performance and compliance; prohibit non-trial experimental use; conduct post-release audits and impact assessments; enforce operational disclosures; train healthcare workers; facilitate public participation; improve AI literacy; and ensure responsible practices by value-chain actors.
  3. Deployers – Use LMMs only in appropriate settings and avoid their use in inappropriate ones; communicate clear warnings and mitigation measures for known risks, errors and harms; and adopt inclusive pricing and language inclusion to ensure affordability and availability.

The guidance paves the way for regulated AI in healthcare that reaps maximum benefit without compromising ethics and principles.

Source: WHO