An experiment in participatory grant‑making shows what AI can learn from communities – and what it can’t, writes Natasha Friend, Director of Camden Giving.
___________________________________________________________________
Everyone in the charity sector is talking about what AI can and can't do right now. We decided to find out - and what we learned says as much about the limits of traditional grant-making as it does about the limits of artificial intelligence and LLMs.
At Camden Giving, funding decisions aren't made by professionals. They're made by community members with first-hand, lived experience of the issues we're addressing: young people, residents, people who've lived the problems that charities like ours are trying to solve, and the people they're trying to support. We call it participatory grant-making. And earlier this year, we ran an experiment to understand what that approach actually adds.
We fed ChatGPT our young panellists' funding criteria alongside standard grant-making principles - things like maintaining a balanced risk portfolio - and asked it to select four organisations to fund from a set of real applications. Then we ran our normal panel session without telling the young people about the AI. We compared the results afterwards.
Two of the four organisations matched. The divergences are where it gets interesting.
The AI recommended an organisation that our young panellists rejected. But the young people had good reason to choose differently: they knew of equivalent services already available for free in Camden. That is exactly the kind of local knowledge that no algorithm, and no professional funder sitting outside a community, could ever hold.
In the end, the young people funded an organisation that the AI didn't select: one that works with young people involved in gang violence. It resonated because several panellists knew it personally. "I've seen people go into that room and come out fundamentally different," one said. The reason that organisation works isn't necessarily in its application. It's in the background of its leader, and the trust it has built on very specific streets in a very specific borough. That doesn't show up in a grant assessment framework, and it doesn't show up in an LLM prompt. We didn't tell the young people that we'd consulted AI before they awarded the grants, though we did gain permission from all our applicants to use AI on their applications. We wanted the exercise to be a kind of ‘blind test', with no artificial persuasion in either direction.
The AI also surprised us in a different way. One organisation that our own team had privately felt submitted a weaker-than-usual application was still ranked highly by the model. That made us examine our own assumptions and familiarity bias.
This is what participatory grant-making forces into the open: not just what communities know that funders don't, but what everyone in the room - including Camden Giving staff - is bringing to the table that has little or nothing to do with the application paper in front of them.
We'll keep running these experiments. We're building a voting platform that will eventually let us analyse patterns across hundreds of decisions - identifying where community panels diverge most from algorithmic recommendations, and what that divergence tells us about whose knowledge the sector is currently leaving out of funding decisions.
The question for funders, then, isn't whether AI is useful. It probably is: for due diligence, for summarising applications, and for the administrative weight that currently consumes so much of our time.
The question is whether we're using it in ways that add to community decision-making power, or inadvertently replacing it. At Camden Giving, we think we're starting to find out.