Over the past few years, the term “Moneyball”—pioneered in the field of baseball—has become synonymous with a data-driven, evidence-based approach to decision-making. The social sector has embraced the term with vigor and now uses it as shorthand for the rigorous use of data to guide social policy decisions. At first blush, its appeal is obvious. After all, how could anyone be against a more scientific process for allocating scarce public and philanthropic resources?

However, the Moneyball concept, as currently applied by some to social policy, is flawed. It ignores the fundamental realities of how complex social change happens. It is weighted in favor of single, simple, circumscribed interventions that have been shown to work by randomized controlled trials (RCTs), the perceived gold standard of evaluation methodology. It fetishizes the notion that a program certified as “effective” in one setting will, when replicated with fidelity to the original model, be effective elsewhere.

We have both written previously about the advantages of expanding the social sector’s horizons beyond RCTs as the sole credible evaluation method, and about the value of recognizing and incorporating other forms of evidence. Beyond the methodological limitations of RCTs themselves, the main reasons are worth restating:

  1. Context matters. Context can make or break an initiative. What worked in Peru may not work in Poughkeepsie, even if the same intervention is delivered with the same “dosage.” This holds even for clinical trials, which are in many ways the intellectual forebears of social sector RCTs: “adaptive clinical trials,” which allow for ongoing changes in design, delivery, and dosage based on the characteristics of target populations, are becoming increasingly popular.
  2. We cannot truly isolate interventions. The popularity in recent years of multi-faceted, often multi-actor initiatives (including “collective impact,” place-based, and “intersector” approaches) to solve chronic social problems reflects a growing understanding of a fundamental principle: Complex problems demand complex interventions. “Island interventions” with clear boundaries and fixed practices may be ideal for experimental evaluations but are unlikely to solve serious problems.

Despite these reservations, we aren’t asking anyone to discard the idea of Moneyball. We agree that we need stronger, more data-informed ways to make decisions about how we spend public and philanthropic dollars. This is a call to get it right, and in the context of social change efforts, getting it right requires that we:

  1. Broaden the base of evidence. Prominent health policy reformer Don Berwick describes the situation this way: “The world we live in is a world of true complexity, strong social influences, tight dependence on local context—a world of uncertain predictions, a world less of proof than of navigation, less of final conclusions than of continual learning.” To get better results, we must be willing to shake off the intuition that certainty should be our highest priority. We must draw on, generate, and apply a broader range of evidence that takes account of factors we have neglected in the past: the complexities of the most promising interventions, the practice-based evidence that highlights the subtleties of implementation, and the importance of fitting interventions to the circumstances of particular populations and communities.
  2. Focus on principles of practice. Renowned evaluator Michael Quinn Patton has described why “best practices aren’t.” In the social sector, the idea that there can be a single best way to do something, irrespective of context and target population, is simply misguided. Instead, we should focus on effective “principles of practice.” Principles, as opposed to models, rules, or recipes, provide guidance but don’t mandate a lock-step design or way of operating. For example, while each collective impact initiative is different, collective impact principles of practice provide customizable guidance to practitioners.
  3. Embrace adaptive integration over fidelity. With complex interventions, insisting on fidelity of implementation is often counterproductive. In their book Learning to Improve, the Carnegie Foundation’s Anthony Bryk and colleagues describe using “adaptive integration” as a guide to implementing complex efforts across diverse settings. In their view, whether to focus on the narrower standard of fidelity or the broader standard of integrity depends on the nature of the intervention and the demands it places on its context.

Increasingly, we are seeing examples around the country of practitioners, community-based organizations, reformers, advocates, and other experts who take a more inclusive view of credible evidence, focus more on principles than models, and are committed to adaptive integration. To support their efforts, however, we believe the sector as a whole needs to make some practical shifts:

  1. Strengthen today’s “what works” directories. The Friends of Evidence at the Center for the Study of Social Policy has identified three steps to enhance the amount and quality of evidence that directories offer policymakers deciding what’s worth funding and implementing. First, encourage program directories to address whether and under what circumstances programs are likely to be effective in new settings and with different populations. Second, compile and disseminate evidence about solutions that consist not just of individual programs but of a range of interactive strategies aimed at community change and systems reform. Third, develop the capacity to field “evidence coaches” who support decision makers in using the best available evidence, because neither the expanded directories nor the new compilations of evidence can speak entirely for themselves.
  2. Rethink the current “evidence hierarchy” in determining what is worth funding and scaling up. We are skeptical of the prevailing enthusiasm for a social change strategy that relies on scaling up model programs, with preference going to those with the most elegant evaluation methodology. When the method of determining effectiveness takes precedence over other ways of assessing impact, it sidelines promising interventions, such as this systemic approach to improving college access and success, that are too complex and adaptive to fit experimental evaluations. The continuing popularity of scaling up “evidence-based” programs, and the pressure to assess all interventions with experimental evaluations, probably stem from the appeal of what Harvard University professor Jal Mehta describes as the “allure of order”: the longstanding faith among policymakers in the potential of rational management principles to discipline otherwise “soft” social interventions. As Mehta suggests, we need to extend the political discourse and build an “improvement infrastructure” that ensures the allure of order doesn’t overshadow the allure of improvement.
  3. Address structural impediments that combine to incentivize individual “proof” studies. The research community is increasingly coming to terms with the fact that there is a “replication crisis” in most fields: researchers conduct individual, one-off studies in controlled settings that often fail to achieve the same results when applied elsewhere. In the field of psychology, for example, Stanford University’s John Ioannidis estimates that the replication failure rate may be 80 percent or more. Current incentive structures, including funding, publication standards, and tenure, all push researchers toward individual “proof” studies. Addressing these structural impediments will be essential if we are to take a more balanced, meta-analytical approach to, for example, identifying “common elements” that seem to work across multiple interventions.

In sum, our enthusiasm for applying the Moneyball approach to social policy is tempered by the understanding that stronger outcomes require that we embrace, rather than control for, real-world complexity. We can apply Moneyball’s ingenious use of statistical analysis—to identify players undervalued by the market and assemble them into a winning ball club—to some social sector activities. But we can’t reduce the variables that determine the success of today’s most promising social interventions, which are predominantly complex and continuously improving, to balls, strikes, and box scores. That is why we are encouraged to see signs of the social sector broadening its approach to evidence.
