From 2015 to 2021, I directed criminal justice grantmaking for Open Philanthropy, whose funding approach is grounded in effective altruism. This philosophy favors applying the best evidence and reasoning to maximize the impact of giving. To make the case for grants, I had to apply rationalist tools and methods to giving money for a social justice cause.
The two worlds of rationalism and social justice don’t often intersect. Indeed, funders in social justice may view rationalist tools with suspicion. I’ve seen funders recoil at the idea of setting a numerical target and measuring progress toward that goal. Philanthropy colleagues expressed sympathy when I described making percentage predictions for grant success, calculating the return on investment for grants, and rigorously assessing where dollars would make the most impact. Such approaches can seem very alien to work that prioritizes social justice, raising fears that the strategies and campaigns of organizers and movement leaders will be dismissed in favor of bean counting.
Contrary to such worries, these metrics did not hold me back. During my six years at the foundation, I made recommendations for Open Philanthropy to fund more than $150 million in grants, including tens of millions of dollars to organizing, advocacy, and political work, very often led by Black leaders and formerly incarcerated people. Nowadays I use the same tools as the head of Just Impact Advisors, a grantmaking and donor advisory group working to end mass incarceration, with seed funding from Open Philanthropy and other donors. Rationalist methods help me make smarter decisions on innovative investments that I might miss with more conventional methods.
I propose that social justice philanthropy could benefit from adopting rationalist approaches. We could start with Bayes’ theorem, named after Thomas Bayes, an 18th-century English statistician. Bayesianism helps correct our biases by offering a framework for examining what we think and why we think it, and what it would take to change our minds.
Bayesian Questions for Philanthropy
In probability theory and statistics, Bayes’ theorem is a formula for calculating the probability of something being true, based on prior knowledge of or beliefs about conditions that might be related to that thing. It then provides a way to “update” existing predictions or theories given additional information or evidence.
For example, imagine that someone shows you a picture of a US resident and asks whether you think they are more likely to be a farmer or a librarian, and how confident you are in that guess. They tell you this person is an avid reader. You might reasonably think that most librarians are avid readers, so you're 70 percent sure the person is a librarian. But this reasoning skips a crucial step: How many librarians live in the United States, compared with farmers? It turns out there are about 12 times more farmers than librarians. Even if librarians are twice as likely as farmers to be avid readers, an avid reader is still much more likely to be a farmer.
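The arithmetic behind this example can be sketched with Bayes' theorem. The base rates and likelihoods below are illustrative assumptions (12 farmers per librarian, librarians twice as likely to be avid readers), not census figures:

```python
# Illustrative priors: among {farmer, librarian}, assume roughly
# 12 farmers for every librarian.
prior_farmer = 12 / 13
prior_librarian = 1 / 13

# Assumed likelihoods: librarians are twice as likely as farmers
# to be avid readers (the specific rates are made up).
p_reader_given_farmer = 0.30
p_reader_given_librarian = 0.60

# Bayes' theorem: P(librarian | reader) =
#   P(reader | librarian) * P(librarian) / P(reader)
p_reader = (p_reader_given_farmer * prior_farmer
            + p_reader_given_librarian * prior_librarian)
posterior_librarian = p_reader_given_librarian * prior_librarian / p_reader

print(f"P(librarian | avid reader) = {posterior_librarian:.2f}")
# -> P(librarian | avid reader) = 0.14
```

Even after the favorable evidence, the posterior probability of "librarian" is only about 14 percent; the base rate dominates the update.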
The likelihood of any random person being a farmer or librarian is called a “prior”—it’s a fact or belief about the world before we get evidence (such as reading habits). Before we know how evidence should shape our beliefs, we have to dig up and examine our priors. Sometimes priors are facts about the world we can look up, such as the actual ratio of farmers to librarians. Sometimes they are facts that are hard to pin down, but we can make some educated guesses. And sometimes these priors are beliefs about the world that we have built through experience.
Based on priors, we can make predictions, such as whether a particular person is a farmer or a librarian. New evidence, such as whether someone is an avid reader, allows us to update our prediction about a particular case. We can also update our priors, whether by looking something up or by examining whether the evidence we’ve seen over time about different cases really matches our priors.
This process of figuring out our priors, and then deciding how evidence is going to update us on our prediction or our priors, is something we’re doing implicitly all the time, yet we don’t usually examine the process. This tendency leaves a lot of assumptions untested, opening up room for bias, and can obscure the reasoning for a decision.
Adopting Bayes-influenced thinking can help funders overcome the risk aversion and bias that hinder our efforts to invest in justice work.
The Bayes approach does not require us to measure everything quantitatively. Rather, it’s a way of using numbers to express degrees of uncertainty. For example, if we think something is fairly likely to happen, we might give it a 60 percent probability. In 6 out of 10 reruns of a scenario, we could imagine it happening. If it’s not likely but still has a real chance, we might assess a 20 percent probability, whereas if we’re extremely confident, we might say 90 percent. Think of the framework as a tool for making our intuitions, beliefs, and predictions transparent. Over time, we can compare reality with the probabilities we’ve given, learn how accurate we were, and adjust accordingly. The whole process is about comparing what we expected with what we saw. So if the 20 percent predictions come true 6 out of 10 times, we’re being overly cautious, and if 90 percent predictions come true only half of the time, we can adjust our confidence downward for those types of conditions.
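The calibration check described above can be made concrete with a small sketch. The `calibration_report` helper and the track-record numbers are hypothetical, chosen to mirror the 20 percent and 90 percent cases in the text:

```python
from collections import defaultdict

def calibration_report(predictions):
    """Given (stated probability, outcome) pairs, return the observed
    success rate and prediction count for each stated confidence level."""
    buckets = defaultdict(list)
    for stated_prob, happened in predictions:
        buckets[stated_prob].append(happened)
    return {p: (sum(v) / len(v), len(v)) for p, v in sorted(buckets.items())}

# Hypothetical track record: the 20 percent calls came true 6 out of 10
# times (overly cautious), the 90 percent calls only 5 out of 10
# (overconfident).
history = ([(0.2, True)] * 6 + [(0.2, False)] * 4
           + [(0.9, True)] * 5 + [(0.9, False)] * 5)

for stated, (observed, n) in calibration_report(history).items():
    print(f"stated {stated:.0%} -> happened {observed:.0%} "
          f"over {n} predictions")
```

A forecaster whose 20 percent predictions come true 60 percent of the time should shift those estimates upward; one whose 90 percent predictions come true half the time should shift downward.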
So how do we apply Bayesianism to philanthropy? Think of a grant as being like a prediction—a bet that a certain outcome will happen if we add money. Here are some questions we should ask when deciding whether to fund and how much to give:
- What are my priors?
- What are my predictions, at what confidence level?
- What type of evidence could update my prediction?
- What type of evidence could update my priors?
If you aren’t used to thinking this way, it can feel awkward at first, but it can really clarify your thinking.
For example, suppose we make a grant to fund a campaign to close a jail. Our assessment (based on grantee statements) is that the jail is 20 percent likely to close if there is a funded campaign. We assess a 25 percent chance the mayor will make a public statement that the jail should close, and if this happens, it will make it 50 percent likely that the jail will actually close (prediction). After a lot of work, the lead campaign organizers secure a meeting with the mayor, where she makes positive statements about the campaign and says she will make a public announcement (evidence). If we think the mayor usually follows through on this type of commitment made at a meeting (prior), we may update our prediction that the mayor would declare closure to 90 percent. That adjustment also positively updates our prediction on the jail campaign succeeding from 20 percent to almost 50 percent—a huge improvement! This result could lead us to increase the grant. However, if we know that the mayor regularly makes upbeat comments in meetings but doesn’t follow through on them (different prior), then we’d probably want to wait for more tangible results before updating our view. Contextual knowledge gleaned from on-the-ground assessments is crucial for determining the impact of new information.
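The jail-campaign update can be sketched numerically. All figures come from the scenario above except the 10 percent chance of closure without a mayoral statement, which is implied by the other numbers (0.25 × 0.5 + 0.75 × x = 0.20 gives x = 0.10):

```python
# Probabilities from the scenario (illustrative, not empirical).
p_close_given_statement = 0.50
p_close_given_no_statement = 0.10  # derived from the 20% overall prior

def p_jail_closes(p_statement):
    """Law of total probability over whether the mayor speaks out."""
    return (p_statement * p_close_given_statement
            + (1 - p_statement) * p_close_given_no_statement)

# Before the meeting: 25% chance the mayor makes a public statement.
print(f"before the meeting: {p_jail_closes(0.25):.0%}")  # 20%

# Evidence: the mayor commits in a meeting, and our prior says she
# usually follows through, so P(statement) updates to 90%.
print(f"after the meeting:  {p_jail_closes(0.90):.0%}")  # 46%
```

The single piece of evidence moves the campaign's odds from 20 percent to 46 percent, the "almost 50 percent" in the text. With the skeptical prior instead (the mayor rarely follows through), P(statement) would barely move, and so would the bottom line.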
Bayesianism as a Justice Framework
Bayes’ theorem provides a powerful method for defining and updating our beliefs based on evidence and questioning long-held assumptions about how change happens. Funders’ priors may, for example, lead them to improperly assess grants as having a “high risk” of failure or to set metrics that aren’t relevant for updating predictions on grant success, which can distort the grantee’s work.
Consider two priors I’ve heard many times before and their corrections. First: “Doubling an organization’s budget creates disaster.” This claim can be true, but is not always true, and should be refined to allow for cases where doubling could lead to transformative growth and high impact. Second: “Organizers can’t succeed in working behind the scenes with inside players in government.” This bias against funding organizers, on the grounds that they aren’t strategic, shows up frequently in policy-oriented philanthropy. The prior is based on existing political arrangements and doesn’t consider that the calculation could change. For example, when community organizers in Los Angeles raised and spent a large amount of 501(c)(4) dollars on a local ballot measure, the County Board of Supervisors altered their political calculations, began responding positively to the organizers, and reversed their position on building a multi-billion-dollar jail. Many funders had previously dismissed that campaign as unwinnable.
The Bayesian approach helps us choose relevant metrics to use when projecting whether a grant is likely to succeed. For example, is the number of meetings held with a coalition the type of evidence that will accurately update our assessment of whether the grant will succeed (in reducing incarceration, or whatever our goal is)? If not, we should not signal to grantees that we particularly care about such metrics, we should not count them, and we should not make grant renewal decisions based on them. Instead, we should ask grantees and other knowledgeable people what evidence would update their prediction about whether they were more or less likely to succeed. We should then measure those things.
Funder priors based on past experiences should be reexamined in the face of new evidence. If we could push ourselves to be transparent about our assumptions, making careful predictions about what should happen if we are right, then we could more clearly see when the evidence proves us wrong. We would then have the opportunity to update our views, recalibrating them to match reality. Adopting Bayes-influenced thinking can help funders overcome the risk aversion and bias that hinder our efforts to invest in justice work.
Read more stories by Chloe Cockburn.
