WHO flags AI’s “enormous potential” for healthcare despite ethical challenges
29 Jun 2021 --- The World Health Organization (WHO) is warning that artificial intelligence (AI) must have ethics and human rights at its heart if it is to improve global healthcare. After almost two years of expert consultations, WHO has released its first global report on AI’s role in health, which lays out six guiding principles for its design and use.
“We know that countries are only now just starting to make use of AI, for health care and in many other domains. We hope that this report can help countries to consider the best ways to both design and use AI to benefit their own people and other countries and communities where appropriate,” WHO spokesperson Tarik Jašarević tells NutritionInsight.
The report aims to guide stakeholders – whether ministries of health, companies, providers or civil society groups – in thinking through how to maximize the public health impact of these technologies.
WHO plans to disseminate the report and its recommendations through country-level work in the coming period, both engaging with governments on this topic and supporting the design of appropriate frameworks, laws and policies for the future. It is also working with other UN agencies and the OECD to ensure alignment.
Laying down key principles
WHO’s report ultimately lays down six principles to limit the risks and maximize the opportunities intrinsic to the use of AI for health.
- Protecting human autonomy: People should remain in control of health care systems and decisions, with data being protected.
- Promoting human well-being, safety and the public interest: AI designers should satisfy regulatory requirements for safety, accuracy and efficacy.
- Ensuring transparency, explainability and intelligibility: Sufficient information should be published before the design or deployment of AI technology.
- Fostering responsibility and accountability: Stakeholders must ensure AI is used under appropriate conditions by trained people.
- Ensuring inclusiveness and equity: AI for health should encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics.
- Promoting AI that is responsive and sustainable: AI should be continuously assessed during use to determine whether it responds appropriately to expectations and requirements.
“We hope the principles and recommendations can be put into practice by governments through new policies and laws,” notes Jašarević.
Dynamic applications
Jašarević explains that while the health industry is still in the early stages of AI adoption, uses of the technology are accelerating on a pilot basis and, in some cases, as a standard of care, especially for diagnosis.
Other uses of AI include strengthening health research and drug development and supporting public health interventions.
It can also help empower patients to take greater control of their own health care and better understand their evolving needs.
Within nutrition, AI has moved from an abstract buzzword to a key tool. In January, DSM partnered on the Artificial Intelligence Lab for Biosciences, which applies AI to full-scale biomanufacturing, from microbial strain development to process optimization and scheduling.
Meanwhile, Brightseed’s Forager AI is used by both Danone North America and Pharmavite to identify undiscovered plant phytonutrients for their health potential.
Pinpointing risks
The COVID-19 pandemic has also accelerated both the willingness to use AI and the range of its uses, according to Jašarević. These include contact tracing, disease surveillance, drug development and AI-guided chatbots.
“Like all new technology, AI holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm,” warns Dr. Tedros Adhanom Ghebreyesus, WHO director-general.
The report cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.
Key risks also include unethical collection and use of health data, biases encoded in algorithms, and risks of AI to patient safety, cybersecurity and the environment. Many of these themes are similar to concerns around genomic testing – the topic of a recent UK House of Commons report.
WHO also emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.
AI over the next decade
Looking to the future, Jašarević acknowledges that the AI space is hard to predict. “However, we know it will evolve rapidly – as it has in just the eighteen months the expert group spent putting together this guidance.”
AI’s evolution will depend in part on the ability of different stakeholders to imagine appropriate uses of AI, and on the appropriate testing and evaluation of such technologies to ensure they work as intended.
“AI’s future will also rest in part on ensuring the right investments are made to ensure that we are avoiding bias in the use of AI, and also are overcoming a lasting digital divide that prevents many countries and communities from making full use of these technologies,” Jašarević concludes.
By Katherine Durrell