36th International Conference of the European Society for Philosophy of Medicine and Healthcare.
The claimed opportunities of using Artificial Intelligence (AI) in population screening could challenge established norms for the responsible implementation of screening programs. The recent development of AI-based imaging systems whose performance can match or even surpass that of expert clinicians makes these tools attractive for implementing screening programs, especially given the possibility of employing them to automate parts of the process through triage or prescreening. However, in light of the phenomenon of techno-moral change, these prospects are likely to alter earlier considerations concerning social acceptance and ethical acceptability, as well as introduce novel challenges to the ethical and legal norms of responsible screening.
Population screening for disease has been a topic of discussion in healthcare ethics for decades. The assessment of screening programs is often carried out by means of screening criteria, which usually refer back to the classic principles for screening developed by Wilson & Jungner for the World Health Organization in 1968. These criteria have stood the test of time and have repeatedly been reconfirmed as the gold standard for assessing screening programs. Nevertheless, over the last half-century, several authors have challenged them, attempting to adapt or reinvent them to better fit their specific screening contexts, particularly in the field of genetics.
In this article, we will briefly reconstruct the debate around Wilson & Jungner's principles for screening to show how they have been challenged and how they have developed in current practice. Subsequently, we will outline the promises and expectations of using AI in imaging-based population screening as presented in the literature. Based on these anticipated developments, we will critically analyze the renewed Wilson & Jungner criteria to shed light on whether and how they could accommodate responsible screening that uses AI, and we will contribute to their possible adjustment for the AI age.
Throughout this analysis, we will draw examples from different types of AI-enabled screening currently under study, especially lung cancer screening and diabetic retinopathy screening. Furthermore, we will consider critical aspects arising from the use of AI in screening programs, such as the issue of automation, the management of incidental findings and informed consent, as well as the potential soft impacts of AI in the context of screening.