Measurement & Evaluation

Reconsidering Evidence: What It Means and How We Use It

The tide that has swept experimental program evaluation to the forefront of knowledge building about social policy is suddenly ebbing.

The latest winner of the Nobel Prize in Economics, Princeton’s Angus Deaton, was described by Justin Wolfers in the New York Times as “an influential counterweight against a popular strand of econometric practice arguing that if you want to know whether something works, you should just test it, preferably with a randomized control trial. In Mr. Deaton’s telling, the observation that a particular government intervention worked is no guarantee that it will work again, or in another context.”

Vincent DeVita, MD, former head of the National Cancer Institute and physician-in-chief of the Memorial Sloan Kettering Cancer Center, is also skeptical, but in a medical context. In his book, The Death of Cancer, he characterized evidence-based guidelines for the treatment of cancer as “backwards looking.” He wrote, “With cancer, things change too rapidly for doctors to be able to rely on yesterday’s guidelines for long. Reliance on such standards inhibits doctors from trying something new.”

Evaluation guru Thomas Schwandt also urges caution in how we approach documenting effectiveness. In the 2015 book Credible Evidence in Evaluation and Applied Research (S. Donaldson, C. Christie & M. Mark, Eds.), he wrote, “... the field of evaluation seems captivated by discussions of methods needed to produce evidence of impact ... [distracting] us from carefully attending to a variety of important issues related to evaluative evidence and its use.” He suggests that “the term evidence base must be interpreted with caution: To claim that evidence may figure importantly in our decisions is one thing; to claim it is the foundation for our actions is another. We would be well advised to talk about evidence-informed decision making instead.”

From a philanthropic perspective, Vivian Tseng, vice president of the William T. Grant Foundation, writes in a similar vein in “Evidence at the Crossroads”: “A narrow focus on evidence-based programs encourages people to run after silver bullet solutions that are not necessarily aligned with the myriad other interventions that they are running.”

These are compelling points of view. When it comes to addressing serious problems such as poverty, and race- and income-based disparities in health and education, the world is beginning to discover that the most effective interventions consist of far more than individual, circumscribed programs. This may help to explain why the tide seems to be shifting away from a narrow focus on experimental evidence of program impact.

Thinking that we’re probably only in the early stages of this realization, I was surprised that, in a session on this subject at last November’s American Evaluation Association meeting, the message that we need a broader approach to evidence was enthusiastically received. There seemed to be considerable agreement that a narrow focus on trying to identify which programs “work” is actually keeping us from getting better results, and that the social sector’s program-centric focus has been based on several erroneous assumptions: that individual, stand-alone programs can achieve ambitious goals; that if we know from RCTs that a program works in one place, it will work everywhere; and that innovation won’t be discouraged by an overarching reliance on programs that have been shown to work in the past.

No one questions the importance of evidence. But it is time for all of us to think more expansively about evidence as we strive to understand the world of today and to improve the world of tomorrow.

Don Berwick, health policy reformer extraordinaire (and my colleague in the Friends of Evidence), describes the situation this way: “The world we live in is a world of true complexity, strong social influences, tight dependence on local context—a world of uncertain predictions, a world less of proof than of navigation, less of final conclusions than of continual learning.” (“Eating Soup with a Fork,” Keynote, 2007 Forum on Quality Improvement in Health Care.)

To get better results in this complex world, we must be willing to shake the intuition that certainty should be our highest priority. We must draw on, generate, and apply a broader range of evidence to take account of at least five factors that we have largely neglected in the past:

  1. The complexities of the most promising interventions
  2. The practice-based evidence that spotlights the realities and subtleties of implementation that account for success
  3. The importance of fitting interventions and strategies to the strengths, needs, resources and values of particular populations and localities
  4. The heavy context-dependence of many of the most promising interventions
  5. The systematic learning and documentation that could inform future action

One way to accomplish this goal is for all those involved in intentional social change—including philanthropies, public policy makers, and nonprofit organizations—to approach knowledge development in a way that would enable us to reliably achieve greater results at scale in tomorrow’s world, by making sure that all public and philanthropic funding is evidence-informed. For a start, this would require:

  • Investment in structures that could identify the common underlying elements of diverse attempts to reach similar goals
  • The development and maintenance of directories that address contextual factors, indicate whether and under what circumstances programs are likely to be effective in new settings and with new populations, and include work on systems and community change
  • A means of identifying ways to make systems more hospitable to interventions that are evolving and improving, and of taking seriously the challenges of implementation

This approach to knowledge development and learning, in the United States at least, would contribute substantially to the nation’s capacity to solve big problems. Of course, solving big problems takes political will, not just more and better knowledge. But by becoming smarter in how we approach the generation, analysis, and application of knowledge and evidence, we can contribute mightily to building the needed political will.

COMMENTS

  • Amen!

  • BY Nancy L. Seibel

    ON January 9, 2016 05:36 PM

    This is such an important article. Thank you, Lisbeth Schorr for this thoughtful, well informed analysis.

  • BY Debra Natenshon

    ON January 9, 2016 09:53 PM

    This more expansive approach is not easy, but not much worthwhile ever is.  As to the “how”, it is very much aligned with the recent collaborative work on high performance (http://www.performanceimperative.org).  Thank you for publishing such a succinct and brilliant article.

  • BY Gabriele Bammer

    ON January 10, 2016 01:09 AM

    Thanks for this thoughtful article. Another angle on this issue is that we need a more balanced research system, where the tools we have for thinking about and dealing with complex problems are as well-developed as the tools we have for straightforward problems. The research tools we have, like RCTs, are great, but not enough. We need to develop more and better concepts and methods for dealing with systems, unknowns, context and imperfection. Advances are being made, but the communities working on this are highly fragmented, so there is more wheel-reinvention than real progress. Fragmentation is of two types. One is between problem areas, so population health researchers developing tools for dealing with complex problems don’t talk to environmental researchers or security researchers or education researchers and each community develops its own dialogue, modelling and other techniques. There’s also fragmentation between communities of practice, so systems thinkers don’t talk to action researchers or transdisciplinarians or implementation scientists and so on. Readers interested in overcoming this fragmentation to improve practices for dealing with complex problems are welcome to also contribute to this blog: http://I2Insights.org.

  • BY Lawrence W. Green

    ON January 11, 2016 03:26 PM

    Bravo, Lee. To Gabriele Bammer’s comment on fragmentation, I would add that researchers too often apply their evaluation methods to hothouse versions of interventions that are poor representations of the realities of implementation. This has led me to argue that if we want more evidence-based practice, we need more practice-based evidence.


  • BY Jason Hahn

    ON January 13, 2016 01:02 PM

    Thanks for this thoughtful piece. I do note, though, that you don’t present any evidence to back your thesis, which seems to be that we are overly dependent on RCTs. Just in the case of medicine: the doctor you quote states that doctors always need to be trying new things. From my perspective, I would much rather have an evidence-based approach to medicine (which I have seen work in my own health) than someone trying new things. Your argument would be stronger if you could present some examples of RCTs that have failed and connect them to the five factors you have laid out, as well as evidence of programs not evaluated by RCTs that were known to be successful and replicated.

  • BY Patrick Lester

    ON January 14, 2016 09:39 AM

    While I always appreciate Lisbeth’s contrarian point of view (which is important), it is not clear to me that many of these arguments hold up under further scrutiny. In fact, some of them strike me as straw men.

    First, it is not clear to me that the evidence movement is as binary as is suggested here (i.e., does a program work or doesn’t it). In fact, considerable attention is being paid to the “how” and “why” of whether programs work. For evidence, I would refer you to the following link, among others:
    http://www.opremethodsmeeting.org/2014presentations.html

    Second, it seems to me that too much emphasis is being placed here on the issue of replicability, as if this has not occurred to those who support more rigorous evidence. Indeed, this is a major focus. Attempts to break down this barrier are evident not only in the “black box” discussions in the link above, but also in the growing field of implementation science.

    Third, I would argue strongly that those who support rigorous evidence do not discount less rigorous evidence. In fact, those who support high levels of rigor would *insist* that we have such evidence before attempting the more expensive and difficult work of an RCT. Qualitative and other forms of evidence are absolutely important. They are the basis of formative evaluations, which properly precede summative evaluations.

    Fourth, the observation that RCTs suffer from external validity issues (i.e., that something that works in one place may not work in another) is not new. Better understanding how, why and when certain interventions work is absolutely something that those who support stronger evidence are aware of, which is why (for example) strong preference is given to multi-site evaluations as one of the highest forms of evidence.

    Finally, the notion that those who support RCTs are too narrowly focused on singular “silver bullet” interventions to the exclusion of system-wide efforts is also untrue. Systemic change can be analyzed, even if not through an RCT. This again is a false choice. Both are important and, in fact, synergistic.

    Many of these arguments strike me not only as false, but arguably defeatist (i.e., the world is simply too complex for us to really draw any conclusions) – which to me seems more akin to saying that because cancer is complicated maybe we should stop trying to cure it. Rather than bolstering evidence, they strike me as retreating from evidence, or defining it down so thoroughly that it begins to lack meaning. If anyone can define evidence in whatever way suits them, this plays into the hands of those who would support the status quo. I think we can agree that the status quo is not good enough.
    Thankfully, the assertion that “the tide seems to be shifting away from a narrow focus on experimental evidence of program impact” seems similarly untrue.  In fact, at the national level there is considerable evidence that Democrats and Republicans are working to strengthen and extend the use of rigorous evidence, not reduce it.
    What counts as evidence is an important topic of conversation, as indeed it should be.  But we need to be careful to make sure that such arguments advance our understanding and do not act as excuses and cover for defending the status quo.

  • BY Bernadette Wright

    ON January 14, 2016 01:35 PM

    Amen! Thank you.

  • BY Andrew Frishman, Big Picture Learning

    ON January 14, 2016 06:26 PM

    This resonates deeply with the way that we think about our work at Big Picture Learning: “One Student at a Time,” “One School at a Time”... While we are highly “data driven” and care most deeply about the longitudinal outcomes for our students->alumni, we know that it’s much more complicated than the prescriptive application of some perfect one-size-fits-all model. - http://www.bigpicture.org/data/

  • Thanks to Patrick Lester for his remarks!

  • BY Julie DiBari

    ON January 18, 2016 09:59 AM

    Reminds me of the Evaluating Complexity Framework out of FSG that we have been using at The Capacity Group. With complex change efforts, we need evaluators who seek to understand and describe what is working well, what the patterns and barriers are, etc., at regular intervals, so the group can continually reflect and make improvements as things move forward, instead of simply developing, implementing, and measuring. I like that the writer still notes the importance of what we in the nonprofit sector call “promising practices” and what the writer calls “evidence-informed practice,” because too many orgs - government and nonprofit - still try to completely reinvent the wheel!

  • BY Andrew Taylor

    ON January 20, 2016 01:47 PM

    Great points!  I don’t think the shift from an “evidence based” mindset to one that is more evidence-informed and context sensitive has to be complicated.  One of the simple things that we can do is to put more thought and energy into building strong, reciprocal, respectful working relationships with the people involved in an intervention.  This isn’t a new idea, of course, but I think it is one that bears repeating!

  • BY Jossie O'Neill

    ON February 12, 2016 09:46 AM

    Great article! I couldn’t agree more with Mr. Taylor’s comments. He stated, “One of the simple things that we can do is to put more thought and energy into building strong, reciprocal, respectful working relationships with the people involved in an intervention.”

  • The most insightful aspect of this article was the comments by Patrick Lester above, which were spot on.

