An innovative project led by GLIDE Fellow Arsenii Alenichev and colleagues Koen Peeters and Patricia Kingori has explored how AI may perpetuate existing tropes and prejudices in global health visuals.
Research has shown that stereotypical global health tropes, such as the so-called suffering subject and white saviour, can be perpetuated through the images chosen to illustrate publications on global health (Charani E, Shariq S, Cardoso Pinto AM, et al. The use of imagery in global health: an analysis of infectious disease documents and a framework to guide practice. Lancet Glob Health. 2023; 11: e155-e164).
In this project, the team used Midjourney Bot Version 5.1 (released in May 2023) to attempt to invert these tropes and stereotypes, entering a range of image-generation prompts intended to produce visuals of Black African doctors or traditional healers providing medicine, vaccines, or care to sick, suffering, White children.
However, the AI proved incapable of avoiding the perpetuation of existing inequality and prejudice in the images it produced. Instead, the team unwittingly generated hundreds of visuals reproducing white saviour and Black suffering tropes, as well as gendered stereotypes.
The researchers conclude that this case study suggests, yet again, that global health images should be understood as political agents, and that racism, sexism, and coloniality are embedded social processes that manifest in everyday settings, including AI. They highlight that global health actors are already using AI for their media, reports, and promotional materials, making this an urgent, complex, and highly relevant problem for science and society.
The research was presented at the 2023 Oxford Global Health and Bioethics International Conference, with a commentary, "Reflections before the storm: the AI reproduction of biased imagery in global health visuals", published in The Lancet Global Health in August 2023.