COMMENTS
BY David Devlin-Foltz
ON May 19, 2011 09:53 PM
Teles and Schmitt are right: Advocacy is different. And advocacy evaluation is hard. Advocates and funders can rarely predict exactly how change will happen. In our advocacy evaluation practice, we help clients clarify their assumptions about how change will happen—so that we can check back with them along the way and help figure out which of their assumptions were wrong. As Teles and Schmitt indicate, many traditional evaluation methods are no match for the complexity of policy change. But the evaluation field has more than randomized controlled trials at its disposal. Some of the insights from social science offered by the authors are already reflected in the work of the hundreds of advocacy and policy change evaluators in the American Evaluation Association. Smart funders recognize that advocacy can add leverage to their work, and that smart evaluation can help advocates better define their contribution to change. Can evaluation fully capture precisely who made change happen, and how? Rarely. Can evaluators help funders and advocates learn and improve? Yes.
David Devlin-Foltz, Director, Advocacy Planning and Evaluation Program, The Aspen Institute
Co-Chair, Advocacy and Policy Change Topical Interest Group, American Evaluation Association
BY Julia Coffman
ON May 20, 2011 08:01 AM
We want to thank the authors for bringing this important topic to the attention of SSIR readers. We think it is also important, however, to recognize that the last decade has been a time of tremendous growth in the advocacy evaluation field. Where few resources and little expertise existed before to address the challenges the authors raise, multiple tools and a growing base of experience now exist.
Pioneering funders that have been testing advocacy evaluation approaches and sharing their lessons in many publications and venues include The Atlantic Philanthropies, The California Endowment, Annie E. Casey Foundation, David and Lucile Packard Foundation, The Colorado Trust, and many others. Evaluation firms like Innovation Network, TCC Group, the Advocacy Planning and Evaluation Program at Aspen Institute, Organizational Research Services, Alliance for Justice, Blueprint Research and Design (now Arabella), and others have been sharing their innovative thinking with the field for several years now. Our own nonprofit—the Center for Evaluation Innovation—is working to build the field of advocacy evaluation in collaboration with others—supporting research, communications, training, and convening on this topic. And of course there are many cutting-edge advocacy organizations that recognize the important contribution evaluation can make to the success of their strategies, and are trying out both internal and external evaluation approaches that are tailored to their efforts.
Because the essay does not cite the wealth of existing work on this topic, we’d like to direct readers interested in learning more to Innovation Network’s free electronic clearinghouse of advocacy evaluation resources at their Point K Learning Center. It has publications by a variety of authors on, for example:
- Principles for evaluating advocacy
- Unique methods in advocacy evaluation
- Advocacy outcomes that make sense to measure
- Advocacy capacity assessment frameworks and tools
- Print and electronic resources that help to build advocacy evaluation plans
- Case studies of advocacy evaluations
While the momentum generated by the hard work of these innovators has moved the field of advocacy evaluation well beyond the early stages of challenge identification, there is much left to do to reach those who are eager to learn more. As we continue to tackle the unique challenges of advocacy evaluation together, there is a solid foundation of methods, tools, and experience on which to build as we move forward.
Julia Coffman, Director, Center for Evaluation Innovation
Tanya Beer, Associate Director, Center for Evaluation Innovation
BY Lisa Ranghelli
ON May 23, 2011 08:29 AM
Thanks to David, Julia and Tanya for lifting up the many innovative tools that foundations, advocates and evaluators already have at their disposal to tackle the unique challenges of evaluating advocacy organizations and campaigns.
In addition to the excellent resources cited by Julia and Tanya, another tool that readers may find useful is NCRP’s series of seven reports, “Strengthening Democracy, Increasing Opportunities,” which documents the impacts of advocacy and community organizing in 13 states across the country.
As the authors note, it may be more appropriate for a grantmaker to spread risk across a portfolio of advocacy groups to achieve policy success, and to assess progress through an aggregate return on investment over an extended period of time, rather than focus on what can be accomplished by one grantee in one grant cycle. This is precisely what our reports do, by using quantitative and qualitative methods to determine the policy impacts of a set of organizations over a five-year period. The methodology draws on the same strengths of “network evaluation” mentioned by the authors to verify an advocate’s role in policy change through contact with policy makers, peer groups, the media, etc.
It is indeed true that no one measure is appropriate in advocacy evaluation. We are fortunate that the field has evolved so much over the last decade, and we now have a variety of tools at our disposal.
Lisa Ranghelli, Director, Grantmaking for Community Impact Project
National Committee for Responsive Philanthropy
BY Mark Schmitt and Steven Teles
ON May 25, 2011 02:09 PM
We appreciate the comments from David Devlin-Foltz, Julia Coffman and Tanya Beer, and Lisa Ranghelli, and their long and diligent work to provide funders and advocates with new tools to assess and evaluate their work. As we note, the sense that advocacy is elusive and its results hard to pin down can be one among several factors that discourage foundations and nonprofits from fully embracing advocacy as one of their tools, and the work of these organizations has been essential in helping funders and advocates get over that barrier.
We write not as professionals in the field of evaluation, but from our backgrounds in political science, government, journalism, and philanthropy, and based on sustained experience in and conversation with the objects of advocacy evaluation—think tanks, interest groups, public interest law firms, and university-based research organizations. We should be clear that we are suggesting, from the outside, a somewhat different approach. Many of the tools and resources offered by, for example, the Center for Evaluation Innovation seem intended to provide some of the room for adaptability, intuition, and wisdom that we’re encouraging, but they are mapped onto familiar evaluation structures. One of the more recent resources available at the Point K Learning Center, for example, suggests a “logic model” for understanding advocacy, but the model consists of 43 “impacts,” “activities,” and “interim outcomes” from which the evaluator can pick at will to create his or her own “logic model” (http://www.hfrp.org/evaluation/publications-resources/a-user-s-guide-to-advocacy-evaluation-planning). The evaluator or grantmaker who follows this logic model may feel validated by the rigor of having used something called a logic model, but in fact she is using her judgment and instinct in the very choice of components to assess. We would encourage her to acknowledge that reality, which, if nothing else, will make the process simpler and more accessible for everyone involved.
We also see these new tools as somewhat limited in their acknowledgement of politics. Policy advocacy, as we note, is politics, and politics is conflict. (Conflict doesn’t mean partisanship: even when debates follow party lines, as in health reform, the underlying conflict among constituencies and interests is more complex.) Most of the new tools look principally at positive outcomes, such as increasing public awareness of a problem. But at least as important as such forward steps is understanding the forces mobilized to stop such progress, and whether the organizations and advocates involved are prepared to adapt their strategies to those challenges, even when it means abandoning their previous models. Finally, the new evaluation tools seem focused largely on evaluating progress toward a particular policy goal, and on assessing the individual organizations engaged in work toward that goal. The unit of analysis is taken for granted. We would argue that the unit of analysis should not be the cause or the project, but the organization, and the overall capacity of a network of organizations to adapt over time to opportunities and challenges around fairly broadly defined goals.
We appreciate the opportunity to be part of this overdue discussion about methods, and commend the evaluators who have commented for their thoughtful and important work in developing a new form of evaluation that is better suited to the subtleties and surprises of advocacy. We look forward to continuing to learn from one another.