

An innovative project led by GLIDE Fellow Arsenii Alenichev, and colleagues Koen Peeters and Patricia Kingori, has explored how AI may be contributing to the ongoing use of existing tropes and prejudices within global health visuals.

Research has shown that stereotypical global health tropes, such as the so-called suffering subject and the white saviour, can be perpetuated through the images chosen to illustrate publications on global health (Charani E, Shariq S, Cardoso Pinto AM, et al. The use of imagery in global health: an analysis of infectious disease documents and a framework to guide practice. Lancet Glob Health. 2023; 11: e155-e164).

Image caption: The rendered image produced by the prompt 'Black African doctor is helping poor and sick White children, photojournalism' shows a White doctor helping Black African children.

In this project, the team used the Midjourney Bot Version 5.1 (released in May, 2023) to attempt to invert these tropes and stereotypes, entering a variety of image-generating prompts to create visuals of Black African doctors or traditional healers providing medicine, vaccines, or care to sick, White, and suffering children.

However, the AI proved incapable of avoiding the perpetuation of existing inequality and prejudice in the images it produced. Instead, the team unwittingly created hundreds of visuals reproducing white saviour and Black suffering tropes and gendered stereotypes.

The researchers conclude that the case study suggests, yet again, that global health images should be understood as political agents, and that racism, sexism, and coloniality are embedded social processes that manifest in everyday settings, including AI. They highlight that global health actors are already using AI for their media, reports, and promotional materials, making this an urgent, complex, and highly relevant problem for science and society.

The research was presented at the 2023 Oxford Global Health and Bioethics International Conference, with a commentary published in The Lancet Global Health in August 2023: Reflections before the storm: the AI reproduction of biased imagery in global health visuals.