Academics are warning charities against using the “high-tech shortcut” of artificial intelligence imagery in their campaigning.
They warn that while AI images offer charities cost efficiency, flexibility and speed, especially as budgets in the sector tighten, they also risk reputational damage.
In their report Artificial Authenticity, academics at the University of East Anglia (UEA) warn that when AI images are used “the humanitarian cause effectively disappears from the conversation”.
Their research is based on analysis of 171 AI-generated images and more than 400 public comments on campaigns from 17 organisations, including Amnesty International, Plan International, the World Health Organization (WHO) and WWF.
Researchers found in some cases the public welcomed AI imagery as a way of protecting vulnerable individuals from exploitation.
But others saw it “as a distraction from real solutions, particularly in emotionally sensitive campaigns such as cancer or famine”, the researchers said.
“When AI is used, discussion often shifts away from the cause and towards debates about technology and trust.”
Of the public comments analysed, 141 focused on AI ethics and authenticity concerns rather than the charitable cause, 122 criticised the quality of the images, and just 80, representing a fifth of comments, engaged with the charity’s cause.
Emotional connection at risk
“Charities exist because people care about other people,” said report co-author David Girling of UEA’s School of Global Development.
“The moment audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk.”
He added: “The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency.”
Of the images analysed, seven in ten were designed to appear photorealistic, and poverty was a dominant theme in 51 of the 171 images, which often featured children.
While more than four in five images were appropriately captioned as AI-generated, “this disclosure did not protect the cause and organisations from backlash, even when transparently labelled”, the researchers warn.
“Ultimately, the future of charity storytelling will not hinge on technological capability alone,” added report co-author, media, communications and development consultant Deborah Adesina, who is a former Master’s student at UEA’s School of Global Development.
“It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.
“For communications teams who opt to include generative AI in their workflow, proper training in ethical prompt engineering will be crucial to avoid reputational harm and unintended bias.”