We appreciate Patrick Lester’s recent response to our op-ed “Getting ‘Moneyball’ Right in the Social Sector,” in which we call for a broadened evidence base to guide social funding decisions and argue that to achieve stronger outcomes, we must embrace—rather than control for—adaptive integration and real-world complexity. 

We cannot agree, however, with Lester’s contention that our call to rethink prevailing practices for identifying what is worth funding and scaling up would “define evidence down” and promote “a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence.” Because we suggest that decisions about what to fund and what to implement should be based on the potential effectiveness of the interventions, rather than on the elegance of the method of determining effectiveness, Lester accuses us of advocating “a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo.”

Quite the contrary. Rather than “defining evidence down,” we call for stronger evidence. Our approach to generating and applying a broad range of evidence from multiple sources would strengthen, not weaken, the social sector’s capacity to improve outcomes. It would build a sturdy knowledge infrastructure to undergird the many increasingly promising social policy efforts, which:

  • Emphasize continuing adaptation and improvement
  • Ground strategies in solid theory and in research that includes but goes beyond examining the effectiveness of individual programs
  • Take account of local values, context, strengths, resources, and history
  • Bring together organizations that set shared goals and use common measures of success
  • Align their implementation efforts
  • Bring about the systems-level changes that will support, not undermine, their interventions

Initiatives with these characteristics are likely to achieve many of the goals we value most highly (including reducing race- and income-based disparities in health and education outcomes, and responding to the needs of populations that have been poorly served in the past), and we can best understand those characteristics through stronger and deeper evidence derived from multiple methods and sources. We cannot achieve these ambitious goals if we base decisions primarily on the results of individual programs, which are most readily assessed with experimental evaluations that randomly assign participants to treatment and control groups.


While Lester appears to agree with our viewpoint on an inclusive definition of research, he states that “all evidence is not equal” and argues that “biases” often cloud judgment when people rely on forms of evidence not rooted in randomized controlled trials (RCTs). We would counter that RCTs are not free of biases; their biases simply manifest differently. In our original piece, we cited a recent American Enterprise Institute report that chronicles the various “threats to usefulness” that RCTs face, especially when applied to complex interventions. As Lester acknowledges, we recognize the usefulness of RCTs for many purposes. We also fully embrace the notion of “experimentation” in the form of A/B testing and rapid iteration. What we oppose is the notion that RCTs are the “gold standard” for assessing every kind of social policy, including the full range of complex, multi-faceted interventions.

In Lester’s view, the answers to questions like whether charter schools are good or bad, or whether social impact bonds represent a step backward or forward, usually “depend on who you ask”; the implication is that only an RCT-based approach can answer such questions confidently. In our view, these are the wrong questions. The right questions would be: “In what circumstances, under what conditions, for what populations, and in what ways have charter schools or social impact bonds proved effective, and why?” When we ask the questions that way, it becomes readily apparent that RCTs alone are unlikely to provide the answers that would lead to better outcomes.

As we observe and learn from those who are designing and implementing the most promising complex interventions, we are impressed with their commitment to rigor in generating and applying evidence. Far from returning to a past “driven by fads, ideology, and politics,” they:

  • Use multiple methods of evaluating their progress in real time
  • Draw on many sources of evidence beyond program evaluations to design their interventions and improve their impact
  • Use data as part of ongoing management and regular feedback to shape and reshape implementation, and assure continuous learning
  • Involve those most affected by the intervention and those implementing the interventions in generating and analyzing their successes and failures
  • Document their processes and results, both to improve their impact and to share what they learn in ways that others can use

While we disagree with Lester on many points, we are pleased to see such robust and rich dialogue around these questions. We need this dialogue if we are to build a knowledge base that can inform breakthrough efforts to improve lives on a large scale.

