
Trustees of charitable foundations in the United Kingdom are predominantly white, male, and above retirement age. Experience also suggests that most of them are posh, as are their staff. These traits make trustees a poor match, demographically, for the communities that their foundations serve. This disparity surely impedes their understanding of the problems they work to alleviate and of the organizations they support, and it may deter some prospective applicants for grants and jobs.

Foundations are uniquely unaccountable. They don’t need to compete for anything, because most of them already have their resources. Government does not mandate any assessment of foundations’ performance or practices, and most existing assessments are opt-in: Foundations decide whether to have themselves evaluated or to survey their grantees, and then whether to publish the results.

A group of UK grantmaking foundations believes that this is not enough: that more progress is needed, and that external scrutiny might prompt it. Together, they have funded the Foundation Practice Rating, a groundbreaking initiative that each year selects 100 of the UK’s community foundations and large foundations and assesses their practices in diversity, accountability, and transparency. Defining the criteria, creating the system, and conducting the research and analysis are outsourced to Giving Evidence, an independent research and consultancy organization that I lead. The research uses only the foundations’ publicly available information, such as what is published on their websites and in statutory annual reports, i.e., the same kind of information available to prospective grantees or job-seekers.

Foundations selected for inclusion cannot opt out, nor can they influence the process or findings. Each foundation assessed receives a rating for each of three domains, plus an overall rating: A (highest), B, C, or D. The Foundation Practice Rating (FPR) is not a ranking, and it is not based on a curve: Everybody can receive an A or everybody can get a D, and one foundation can rise without another needing to fall. We publish all the results, together with analyses.
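
To make the non-curved nature of the scheme concrete, here is a minimal sketch in code. The thresholds and foundation names are invented purely for illustration; the article does not publish FPR’s actual scoring rules.

```python
# Illustrative sketch only: FPR's real scoring rules are not described
# here, so the grade cut-offs below are invented for demonstration.
# The point is that grades are absolute, not ranked on a curve: each
# foundation's grade depends only on its own score.

def letter_grade(fraction_of_criteria_met: float) -> str:
    """Map a foundation's share of criteria met to a grade (hypothetical cut-offs)."""
    if fraction_of_criteria_met >= 0.85:
        return "A"
    if fraction_of_criteria_met >= 0.65:
        return "B"
    if fraction_of_criteria_met >= 0.40:
        return "C"
    return "D"

# Because the scale is absolute, every foundation can earn an A at once,
# and one foundation rising never forces another down.
scores = {"Foundation X": 0.90, "Foundation Y": 0.90, "Foundation Z": 0.30}
grades = {name: letter_grade(s) for name, s in scores.items()}
print(grades)  # {'Foundation X': 'A', 'Foundation Y': 'A', 'Foundation Z': 'D'}
```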

The rating represents an unprecedented audit of these powerful organizations. We noticed that years of meetings and discussions about diversity and accountability have had little effect on foundations’ practices. We reasoned that by creating such an audit, we could augment the incentives for progress.

Methods and Findings

In the interests of assessing a heterogeneous set of foundations, we review practices in diversity, accountability, and transparency that are broadly applicable and visible to outsiders. We seek to be as neutral and objective as possible, so our criteria draw on those in existing rating systems, such as GlassPockets (a now-retired US website run by Foundation Center to make foundations’ public information more findable, including information about foundations’ priorities, their staff, and whether/how they enable grantees to provide feedback) or the Racial Equity Index (a tool being created to increase accountability for racial equity in global development). Rather than defining for ourselves the characteristics by which to assess diversity, we use the three characteristics on which the UK Equality and Human Rights Commission advises for pay-gap reporting: ethnicity, disability, and gender.

We do not assess funding practice as such (e.g., grant sizes, grant restrictions, or monitoring processes), not because those issues are unimportant, but because best practices typically vary by sector and goals, and they are normally not visible from the outside. Similarly, we do not investigate impact: Establishing a foundation’s impact is a major task in itself (Giving Evidence does such work elsewhere), and resourcing it for 100 foundations, while making meaningful comparisons across their heterogeneous work, is not realistic.

Each year, we run a public survey to define and refine the FPR process and criteria. We then publish the criteria, along with advice on doing well, before starting the research: FPR is not intended to catch people out. Each year, our set of 100 foundations comprises the foundations that fund FPR (so that nobody feels that FPR’s funders are pointing the finger at others), the UK’s five largest foundations by giving budget (because they dominate grantees’ experience), and a random selection drawn from UK community foundations and from an independently published list of the country’s 300 or so largest foundations. Each foundation is researched by two investigators working independently, and their findings are compared and moderated by a third. Foundations are exempted from criteria that do not apply to them (e.g., foundations with no or few staff are exempt from reporting pay-gap data).
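
The annual cohort construction described above can be sketched in code. This is a hypothetical rendering, assuming simple deduplication and random sampling; the list variables are placeholders, and FPR’s actual data sources, the split between the two random pools, and any tie-breaking rules are not specified in this article.

```python
import random

# Hypothetical sketch of the annual cohort selection described above.
# All inputs are placeholder lists of foundation names; FPR's actual
# sources and sampling proportions are not published here, so the two
# random pools are combined for simplicity.

def select_cohort(funders, five_largest, community_foundations,
                  largest_300_list, total=100, seed=None):
    """Build a cohort of `total` foundations: FPR's funders, the five
    largest by giving budget, then random picks to fill the remainder."""
    rng = random.Random(seed)
    # Funders and the five largest are always included; dedupe, keep order.
    cohort = list(dict.fromkeys(funders + five_largest))
    # Fill remaining slots at random from community foundations and the
    # independently published list of the ~300 largest foundations.
    pool = [f for f in community_foundations + largest_300_list
            if f not in cohort]
    remaining = total - len(cohort)
    cohort += rng.sample(pool, min(remaining, len(pool)))
    return cohort
```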

We collected the first year of data in autumn 2021 and published them in March 2022; the second-year data were collected in autumn 2022 and published this past March. We did not change the criteria between the two years (though we did move one exemption threshold), because we wanted to avoid confusion, maintain consistent messaging, and make year-over-year comparisons.

Perhaps our most surprising finding was that foundations have welcomed this initiative. Many have said that it has provided impetus for reform within their organizations, that it has bumped some issues up their agenda, and that it has highlighted some concerns hitherto overlooked. In both years, we heard from foundations using the FPR criteria as a checklist for self-assessment.


In the first year, just three foundations achieved an A overall; in year two, that number rose to seven. In both years, the top-rated foundations included a large foundation (Wellcome, the largest in Europe), a small endowed foundation, and a randomly selected community foundation. Clearly, foundations can score well irrespective of their structure or financial size.

Every criterion has been met by at least one foundation: Evidently, we are not asking for anything impossible.

Diversity has been the weakest area by far. In neither year has any foundation scored an A on diversity—whereas many did so in the other two areas—and in both years, almost all of the 10 lowest-scoring criteria concerned diversity. Our diversity criteria cover accessibility (e.g., whether the website is usable by visually impaired people), because that affects the range of people who can engage with the foundation.

On the actual diversity of foundations’ staff and trustees, we can make no comment because so few report that data. In FPR’s first year, we found only four foundations that published any breakdown of staff diversity (e.g., with respect to gender, ethnicity, and/or disability) and only one that published a breakdown of trustee diversity. In year two, the numbers rose a little: Six foundations reported diversity of staff, and five reported diversity of trustees. (Liban Abokor, cofounder of the Foundation for Black Communities, based in Canada, suggests that government mandate foundations to report board diversity.)

Many foundations are hard to contact. Many have no website. In the first year, 27 of the 100 assessed foundations had none (including one attached to Goldman Sachs, a large, multinational investment bank). In year two, 22 had none. Keep in mind that almost all the foundations we assess are among the country’s largest.

Furthermore, we send each assessed foundation its own data to check, using the contact details that it publishes. For disappointingly many, the contact information is a postal address, not an email address. For many others, the email address is a generic one, such as info@ or enquiries@. Foundations quite often say that those emails are not received; presumably they land in spam folders that go unchecked. To put this another way: For many foundations, the published contact details that outsiders, including prospective applicants, might use do not actually enable people to reach them.

Financial size, whether by total assets or by giving budget, does not correlate with the ratings. But the number of personnel does. In both years, all foundations scoring D overall had 10 or fewer staff (except one, which had 13). The same generally holds for trustees: Having more trustees correlates with better performance; in both years, for example, only one foundation with 10 or more trustees scored D overall. (Again, remember that foundations with few staff or few trustees are exempt from some criteria.) We don’t know why these patterns arise, but we suspect that good practice in diversity, accountability, and transparency requires work, and having too few personnel precludes doing that work.

Next Steps

Is the FPR affecting foundations’ practices? It is too early to say, and we lack a robust counterfactual. Any relatively large UK foundation may be included in FPR in any given year, so all of them have an incentive to improve. Furthermore, all foundations can see the criteria and use them to scrub up, and we signpost resources on improving.

However, we can see that, in aggregate, foundations that were rated in both years improved in all three areas. By contrast, among randomly selected foundations, changes in scores between the two years were mixed. We also hear many accounts of the process and criteria being useful to foundations, as outlined above.

Some people in the sector suggest that we broaden our scope—e.g., to examine funding practices or foundations’ environmental impact. We won’t do that, at least not for now, for the reasons given above.

The FPR will continue to rate 100 foundations each year and publish the results. We will also continue to collect anecdotal evidence of effect. We invite any foundation—in the United Kingdom or beyond—to use our criteria to assess itself and identify practices to sharpen up.

Other countries could make their own versions of FPR. Our system might need adapting—e.g., around how foundations are regulated in those countries, what they are already required to disclose, and existing national benchmarks and policies. Our experience thus far is that the hypothesis holds: Publicly rating foundations indeed creates an incentive, and consequently foundations are improving their practices in these important areas.
