Multicolored water drop and its ripples, close-up (Photo by iStock/kertlis)

This is a scenario I have repeatedly participated in or heard described in funder circles: A discussion about the importance of narrative change, often including examples of innovative work, generates great excitement and interest. Then, somebody speaks up: This is all great, but how do you know it works? How do you measure and assess progress? Somebody else attempts a half-hearted answer, and then the discussion peters out. This question comes up so often—and the responses to it are so predictably lacking—that I believe it is one of the major obstacles in the way of greater funding for this growing field.

There are usually two aspects to the question of how to measure and assess progress in narrative change: First, how can you know whether you are headed in the right direction, given that this is long-term work where significant results might take 10 years or more to achieve? Second, how can you tell whether it is your efforts that are contributing to any change, given the number and complexity of forces operating on widespread social shifts? I don’t believe answering each of these needs to be as difficult as it’s often made out to be.

Obstacles, Real and Imagined   

When it comes to the long-term nature of the work, I’m not convinced that narrative change is so different from many more established approaches to social change. Policy and legal advocacy and systems change work can also take many, many years to yield results, if they ever do—and often pose similar problems of attribution (which is unavoidable when dealing with complex challenges in complex societies over long time frames).

Narrative change is also not some mysterious thing that cannot be assessed, even though much of the current jargon, with its competing, overlapping, and sometimes contradictory terminology used by different actors, can make it seem that way (perhaps to be expected from a field still in its “forming” phase). It involves a sequence of interrelated changes in observable things—such as the types of stories in circulation; the images, metaphors, and symbols (including language) in the world around us; and various types of behaviors and norms—and less observable things such as mindsets, attitudes, and beliefs. There are many existing approaches and tools that can be, and regularly are, used to measure all of these things. ORS Impact’s publication, “Measuring Narrative Change,” lists a number of these, such as content analysis for assessing shifts in media discourse and various types of surveys aimed at picking up changes in attitudes, beliefs, behaviors, and norms. A range of organizations use social listening tools to pick up trends in conversation on social media.

But if what I say is true—that narrative change is not so much more difficult to measure than other types of social change strategies, and that there are many tools available to do it—why does the question about measurement keep getting asked?

In my more cynical moments, the repeated questions—and lack of answers—about measurement strike me as a multi-layered avoidance mechanism. On the one hand, the objections about measurement represent a pseudo-empirical perspective that enables funders to avoid making a decision about “risky” but potentially game-changing investments (“we can’t fund this because we don’t know whether it works”). On the other hand, actually finding ways to do the measuring and assessing might compel funders to make major changes to how they work. As long as we keep saying we don’t know how to measure progress, we can cling to what the Center for Artistic Activism (where I recently joined the board) calls magical thinking, which is “characterized by lack of knowledge, or even concern, of the relationship between cause and effect.” A genuine commitment to measurement and assessment would require a learning mindset in which grantees as well as foundation program staff feel safe enough to say, “this didn’t work, we have to try something else.” That would require a degree of trust and vulnerability in the donor-grantee relationship, as well as within funder organizations themselves, which in turn would require some internal culture change work.

That doesn’t mean there aren’t genuine barriers to assessing progress in narrative change. Many of the tools and approaches I mentioned earlier require specialized expertise and are expensive to implement rigorously and at scale. There are very few organizations large enough and sufficiently well-resourced to do this. Even many of those with relatively large budgets struggle to earmark funds for learning and assessment. Despite the frequent questions about measurement, the incentives in the funder and nonprofit world favor a focus on product and content over process, outcome, or impact.

While the envisaged outcome of narrative change work is often described as fundamental social transformation at a massive scale, the field is full of small organizations, each plugging away at its specific piece of the puzzle. Organizations such as The Story Kitchen, focused on changing the narrative about women’s involvement in the political history and development of Nepal; IllumiNative in the US, which works to shift narratives about Native Americans; or Cattrachas, which among other things is trying to change the “social imaginary” about the LGBTQI community in Honduras, are doing brave, innovative, and often dangerous work. They do what they can to measure, assess, and learn, because their existence, and often their lives, depend on their success (and success for some might mean deeper rather than wider change, or narrative shift at a local or regional level, rather than national or global). They would love to do more, but they and their budgets are often severely stretched.

The question about assessing progress in narrative change has to become less theoretical and much more applied. How does a small organization with a limited budget assess progress? What sort of evidence is appropriate and “good enough” for them while being compelling enough to convince funders to invest in their work? What tools might we develop or adapt that would enable such an organization to gather useful evidence to help it learn and become more effective, without imposing a huge extra burden?

Measuring the Ripple Effect

It could be helpful to think about measuring progress in terms of concentric circles. Organizations can start off by finding ways to measure and assess what is closest to them and their power to influence in the near term, and then they can expand in ever-increasing circles as time goes on and as resources are available.

For example, closest to an organization, and over a shorter time frame, one might investigate questions such as: Are our organization’s narrative capabilities improving? Have we developed compelling new language, stories, and frames? Have we managed to collaborate with others in the field on developing these? Have we tested them with our envisaged audience and adapted or changed them accordingly? The BROKE project, for instance, is a collaboration between The Center for Public Interest Communications, an academic institute; The Radical Communicators Network, a nonprofit affinity group; and Milli, a creative agency. It offers research-based recommendations and toolkits for organizations seeking to advance new stories and frames about poverty, inequality, and economic justice.

A little further out: Are our collaborators making use of the new language, stories, and frames? Are new voices speaking out on the issue in new ways? For example, after seeing research showing that 69 percent of people in Latin America say God is very important to them, the organization PUENTES identified a need to forge greater connections between religious communities and human rights organizations—which had tended to steer clear of anything related to spirituality. Testimonials by human rights activists show they are taking note of PUENTES’ recommendations for communicating with people of faith, and have started to adapt their language and imagery accordingly.

Moving further out: Are the new language, stories, frames, and voices being heard, quoted, and repeated? Are they reaching our collective audiences? Can we see any shift in the way others are talking about the issue, and in the nature and amount of media content? The US organization TransLash is an example here. Led by Imara Jones, TransLash uses truth-based storytelling to change popular narratives about transgender people of color. TransLash is able to track a growing audience on its own platforms, while Jones’ regular appearances in a range of news outlets, such as MSNBC, the Guardian, The Nation, and NPR, are easy to observe. Outside affirmation of her growing reach and impact came when Jones was named one of Time’s 100 most influential people of 2023, for her role in highlighting the voices of Black trans women in the face of rising anti-trans policy and rhetoric.

Rippling out even further: Is all of this leading to a shift in the public discussion? Are decision-makers starting to talk or respond differently? Is the broader culture changing in any way? Are we seeing changing beliefs and mindsets, people taking action, changing the way they behave? An example of the sort of measurement work that aims to track such long-term and widespread shifts is the FrameWorks Institute’s Culture Change Project, which looks at the shifting balance between individualistic and systemic thinking among Americans, with respect to issues such as the economy, racism, health, and gender.

As the examples above hopefully illustrate, there is a great deal that can be tracked and measured with relatively minimal effort and resources. This should be an iterative process: if measurement and assessment show something is not working at any circle of impact, it becomes necessary to return to an earlier stage and adjust. However, with each ripple out, the time frame is longer, it likely takes more resources and effort to research and measure, and it becomes increasingly difficult to attribute any changes directly to the efforts of any one set of actors. While complete tracking of causation is impossible at these outer reaches, it should still be possible to infer some degree of causal relationship by tracking the ripples back in sequence.

Finally, we have to accept that the results we are looking for may not happen for a long time. Sometimes we just have to keep on working at it, trying as best we can to assess, learn, and adapt along the way, but not knowing whether or not we will ultimately succeed. In his book How Minds Change, David McRaney argues that success often depends on chance events—but you have to keep doing the work, so that you are prepared and can take advantage when the conditions happen to align, enabling large-scale changes to take off. Again, this is not so different from everything else we do. For example, sex workers and their allies in South Africa have been advocating for decriminalization for many, many years, supported by a small group of funders. At times it seemed hopeless; promised reviews of the laws never materialized, and every few years, questions arose about whether this work still deserved support and funding. But the sex workers kept at it—they had no choice, because this was about their lives and their livelihoods. A small group of funders kept supporting it. In late 2022, the draft law to end criminalization of sex work was finally published for public comment and discussion.

Sometimes we have to keep planting the seeds and tending the soil, not knowing how long it will take for the plants to grow and bear fruit, or even whether anything will germinate. Funders and grantees may need to accept that ultimately only so much is knowable, and this is an inherent part of the sort of emergent strategic work that many narrative change actors are engaged in. What we do know for certain though is that if we don’t do the work, we can’t expect anything to change at all.

Read more stories by Brett Davidson.