
Advocating for human rights is tough. Frontline activists dedicate years of commitment to the deeply held belief that all people deserve a life of dignity. Yet for large human rights and advocacy organizations, it’s rare that we can be sure positive changes in people’s lives are a direct or causal result of human rights activism. It may take years of tireless organizing to see legislative change, and even more years before those policies create real impact in affected communities. And even when observing the positive outcomes of a successful campaign, can an organization know whether the change would have occurred even without its intervention?

This is a particular challenge for human rights advocacy, where goals are prescriptive and driven by a moral imperative, which can be at odds with what is measurable. Because the desired impact is transformative, planning and monitoring for a specific result can mask necessary nuance and risk oversimplifying the work. There is also a lack of standardized definitions for human rights indicators, so when qualifiers like “widespread” or “prevalent” are used, it can be difficult to track concrete progress toward long-term change.

As an internal evaluator at Amnesty International USA, I build the capacity of staff to design strategies that are informed by evidence and set measurable outcomes, oversee evaluations of AIUSA’s programming, develop knowledge management processes, and lead workshops and trainings that equip staff for evaluative thinking. I have come across many methods that help to assess the impact of advocacy, from policy tracking and media framing analysis to public polling and stakeholder interviews.

But even though progress has been made in evaluation guidance for human rights organizations, much of it is difficult to put into practice, and antiquated perceptions about evaluation persist. Evaluators know that traditional scientific methods are not well suited to assessing advocacy, yet the belief lingers that qualitative evidence lacks methodological rigor, as does the expectation of conclusive attribution or proof of contribution. And while internal evaluators are taught to factor uncertainty into their processes and to consider context—because evidence related to advocacy is often subjective and rarely definitive—these guiding principles are typically met with skepticism, especially by staff.


Where is the “hard” data, the clear-cut answers, the absolute proof that their work is making a difference?

Evaluators must perform a balancing act. On the one hand, we must educate others about the limitations of advocacy evaluation: impact is nonlinear and combinational, reflecting the efforts of many actors working together to drive change. And unlike organizations that provide direct services—such as emergency relief, access to schooling, or medical care—human rights advocacy must account for significantly greater external forces that are beyond its sphere of control.

But on the other, we must also demonstrate the essential value of evaluation for making informed decisions about what meaningful and realistic outcomes to seek from human rights advocacy, how well strategies are working, and what progress has been made toward ultimate goals. Harnessing evaluation data while advocacy efforts unfold can empower activists to monitor their progress in real time and adjust their strategies as needed without waiting for a project to end. A single summative evaluation will not offer a comprehensive overview of a campaign’s impact, but it provides a piece of the puzzle by offering plausible explanations of how activism efforts generated results.

There are no quick-fix solutions for these challenges, but there are a few practices that have guided my evaluator journey:

Launch a Pre-Evaluation Design Workshop With Key Stakeholders

Organizations typically set aside time to interpret findings once an evaluation has concluded, but they tend to place less emphasis on collaborative spaces for designing the evaluation and its objectives beforehand. A pre-evaluation design meeting, however, is equally critical for managing expectations about which questions the evaluation can reasonably answer and where further exploration may be needed to probe outcomes for their significance. These spaces can also allow authentic collaboration to take place: with a simple dialogue among staff about what would be most helpful to learn (and what data could answer that question), expert hats can come off and the evaluation process can feel less daunting.

For example, I have adopted this practice before launching any evaluation at AIUSA to create space for developing useful learning questions in partnership with the respective subject matter experts. Conversations with staff center less on the evaluation methods than on what information will illuminate opportunities for growth and future planning. This includes engaging deeply with staff at all levels, including the executive team, and asking them about institutional knowledge gaps and decisions under consideration that the evaluation could guide. I also work to reframe traditional concepts and terms—recasting an “evaluation report,” for instance, as an “impact and learning review”—to explore how AIUSA is making an impact (and gather any organizational learnings), rather than simply distilling complex work into a performance assessment.

Shift the Narrative to a Paradigm of Growth and Exploration

Language has power. The more we report impact through a binary lens of wins and losses, the more we limit our ability to measure the true influence of our work and learn from it. And while advocacy is, by nature, defined by a win-or-lose mentality, evaluating its impact does not need to be. We can debate what a “true win” looks like, or how a perceived “loss” can become a “win” through continued activism. Advocacy evaluators should therefore highlight all significant outcomes, examining the extent to which they are indicative of social progress and the perspective they offer for strategic learning.

At AIUSA, we still take note of “wins” we have achieved, but we are learning to unpack these outcomes by detailing AIUSA’s contribution and recognizing the many actors that played a role in achieving them. Outcome harvesting, for example, is a utilization-focused evaluation method designed to formulate, verify, and make sense of outcomes, particularly in situations where cause-effect relationships are unknown. Such a framework allows AIUSA staff to reflect on the tactics that enable them to create change, the type of contribution their work produces, and other factors that enhance the depth or quality of their outcomes. Outcome harvesting also encourages staff to verify their contribution and examine their blind spots by collecting stakeholder feedback as an external source of evidence. By making sense of our sphere of influence, we are able to document our footprint within the human rights space.

Employ and Elevate Quantitative Methods When Opportunities Arise

Certain quantitative methods can be of practical use to advocates if applied in a timely and relevant manner. Social Policy Research Associates has done great work around evaluating social media content to assess narrative change that results from advocacy initiatives. Using the rtweet package and various data science tools, the evaluation team at AIUSA has analyzed the social media profiles of many target actors (decision-makers behind harmful policies) to assess narrative change around a particular human rights issue and the extent to which that change referred to AIUSA and its advocacy or campaigning efforts. Social network analysis can also be a noteworthy method to apply in practice if your organization has initiatives that focus on developing and strengthening networks of activists.
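For teams that want to try something similar, even a simple keyword-frequency trend can start the conversation. The sketch below is purely illustrative and is not AIUSA’s actual pipeline: it assumes a target actor’s posts have already been exported (for example, via rtweet or a platform’s data-export tools) to a CSV with created_at and text columns, and the file name and keywords are hypothetical.

```python
# A minimal sketch (not AIUSA's actual pipeline): given a CSV export of a
# target actor's posts with "created_at" and "text" columns, count how often
# the organization and an issue-specific framing appear each month.
# The file name, column names, and keyword patterns are illustrative assumptions.
import pandas as pd

posts = pd.read_csv("target_actor_posts.csv", parse_dates=["created_at"])

keywords = {
    "org_mention": r"amnesty",          # references to the organization
    "issue_frame": r"asylum|refugee",   # language tied to the campaign's framing
}

# Flag each post that matches a pattern (case-insensitive).
for label, pattern in keywords.items():
    posts[label] = posts["text"].str.contains(pattern, case=False, na=False)

# Monthly counts show whether the issue framing (and references to the
# organization) rise or fall as advocacy efforts unfold.
monthly = (
    posts.set_index("created_at")[list(keywords)]
         .resample("MS")
         .sum()
)
print(monthly)
```

A rough trend like this will not prove narrative change on its own, but it can show whether an issue’s framing is shifting over time and whether references to the organization track with its campaign moments, which is often enough to guide a deeper qualitative look.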

In addition, many grassroots human rights organizations train and equip individuals to become compelling advocates for policy change. Evaluating the leadership development of advocates through the Kirkpatrick model can help to measure their growth at an individual level and discern their impact at a broader societal level: the model assesses training at four levels (Reaction, Learning, Behavior, and Results) and the degree to which the knowledge gained has been applied in ways that produce social change. Each year, AIUSA hosts a Lobby Day where activists lobby members of Congress on select human rights issues. Before Lobby Day, participants undergo a series of trainings on how to discuss these issues in their meetings with Congressional staff. Following the Kirkpatrick model, AIUSA evaluates not only participants’ knowledge of how to advocate for select human rights issues, but also how they develop the confidence to lobby their local governments for policy change beyond AIUSA’s Lobby Day.
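As a purely hypothetical illustration of how those four levels can be operationalized, the sketch below scores a post-Lobby Day follow-up survey in which each question is mapped to one Kirkpatrick level. The file name, question names, and numeric rating scale are assumptions for the example, not AIUSA’s actual instrument.

```python
# A minimal sketch (illustrative only): summarizing follow-up survey responses
# mapped to the four Kirkpatrick levels. The file name, column names, and
# numeric response scale (e.g., 1-5 agreement ratings) are assumptions.
import pandas as pd

responses = pd.read_csv("lobby_day_followup.csv")  # one row per participant

# Hypothetical mapping of survey items to Kirkpatrick levels.
levels = {
    "Reaction": ["training_satisfaction"],
    "Learning": ["knows_talking_points", "knows_ask"],
    "Behavior": ["met_with_staffer", "lobbied_locally_since"],
    "Results":  ["observed_policy_movement"],
}

# Average each participant's items within a level, then average across
# participants to get one summary score per level.
summary = {
    level: responses[items].mean(axis=1).mean()
    for level, items in levels.items()
}
print(pd.Series(summary).round(2))
```

Comparing these level summaries across cohorts, or against a pre-training baseline, is what turns the scores into evidence about behavior change and results rather than just satisfaction with the training.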

The practice of assessing human rights impact will never be as simple as applying a few best practices, as progress for human rights cannot always be bound by rigid metrics. Impact is transformational, and transformation rarely happens right in front of you. However, having a tenacious spirit—prepared to test and fail—and being my own advocate while staying true to the organizational mission has kept me afloat. And evaluation is itself a form of activism: intended not simply to collect information for the sake of inquiry, but to trigger change that is meaningful, significant, and aimed at advancing the social condition of the world.

