COMMENTS
BY Chester Davis
ON July 24, 2015 09:50 AM
Great article! I’ve long felt that nonprofit executives and social entrepreneurs could benefit from more scientific thinking about social policies and programs that they design, run or promote. There is probably a ton of under-utilized knowledge about what works or doesn’t work in what circumstances and lots of social innovations that aren’t being evaluated.
BY Caroline Fiennes
ON July 28, 2015 01:45 PM
Caroline Fiennes here, author of the article above. To be clear, this article was written on the basis of the two LSTHM studies and the Cochrane paper. Kremer and Miguel have since published a response to the studies, a summary (by them) of which is here: http://emiguel.econ.berkeley.edu/assets/miguel_research/63/Deworming-summary_Kremer-Miguel_2015-07-24-CLEAN.pdf
BY Liana Downey
ON July 31, 2015 12:07 PM
Thanks, Caroline, for raising such an important issue and bringing attention to the importance of really doing one’s research before rolling out large-scale initiatives. Your article raises a number of incredibly important points. I wish the example you shared were the only one, but of course there are many examples of well-intentioned groups ‘jumping on the bandwagon’ of one approach or another, at enormous cost and investment, only to find, years later, that the outcome they are seeking to drive is not shifting.
Having said that, I am very nervous about the impact of your message on many organizations. I have heard time and time again from nonprofit leaders that measuring is ‘all too hard’, and despite the best efforts of funders, and the shift that is starting to happen, the reality is that there are hundreds of thousands of nonprofits in America alone that do not measure their impact in any meaningful way.
My experience is that these organizations are doing so NOT because they are dismissive of the scientific method. In fact, it is precisely because they are so convinced that the ONLY meaningful research is a peer-reviewed controlled study that they do nothing, as Megan Golden and I wrote about in http://www.ssireview.org/blog/entry/just_do_it. Of course you are right, such research is the gold standard, and well it should be. But suggesting that nonprofits wait around for someone else to do the research and challenge themselves only to monitor implementation (as per your previous articles) has me breaking out in sweats.
I’ve witnessed the enormous power of encouraging organizations to take on responsibility for asking the question — is what we are doing working for our clients?
When organizations start to monitor outcomes (not process), they start to ask much smarter questions. They start to ask not just “how many attendees did you have at your after-school program?” but “what was the change in (obesity), (vocabulary), (school attendance) across different program areas?” When they do that, they notice differences. These observed differences then lead to meaningful conversations, which drive critical innovation and improvement. While of course in the day-to-day running of a program it is difficult to account for a range of control variables with a great degree of rigor, powerful insights are still generated. It also shifts the dynamic for staff on the ground to actually drive innovation and accountability.
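The kind of outcome comparison described here need not be sophisticated. As a minimal sketch, here is what comparing pre/post changes across program areas might look like in Python; the site names and attendance rates are entirely invented for illustration:

```python
# Compare pre/post change in one outcome (say, a school attendance rate)
# across hypothetical program sites. All numbers are made up.
sites = {
    "Site A": {"pre": 0.78, "post": 0.86},
    "Site B": {"pre": 0.80, "post": 0.81},
    "Site C": {"pre": 0.75, "post": 0.88},
}

for name, rates in sites.items():
    change = rates["post"] - rates["pre"]
    print(f"{name}: attendance changed by {change:+.2f}")
```

Even a table this simple surfaces the differences between sites that, as the comment notes, prompt the meaningful conversations.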
Another example is how this can play out in the medical world. Of course hospitals implement only clinically proven and safe interventions. They don’t feel obliged to run control studies at every turn. But even so, if they do not hold themselves accountable for monitoring outcomes, they put people’s lives at risk.
While at McKinsey, I worked with hospital management teams to collate and review comparative data on outcomes across a range of standardized operations. What we saw were meaningful variations in life/death outcomes. This information of course drove much deeper investigation, uncovering both positive (and negative) innovations by surgeons and nurses. By encouraging management to track and understand these differences, all kinds of innovations and improvements can be made which literally save people’s lives. Sometimes it is implementation (hand washing), sometimes it is innovation (checklists a la Atul Gawande). But if an organization does as you suggest and holds itself accountable only for monitoring implementation, there is little incentive to innovate and improve.
You are right on so many fronts. Funders should think about getting behind testing interventions in a rigorous way. Information should be made open and shared. But I believe organizations should absolutely be encouraged and supported to check whether their work is making a difference in people’s lives.
BY Caroline Fiennes
ON August 2, 2015 10:09 AM
Thanks.
The key is in your concluding sentence: most operating charities should CHECK that their work is making a difference, i.e., check that the change in outcomes pre- and post-intervention is in line with rigorous evidence. That is precisely what happens in hospitals, the good example that you cite.
I really think that most operating charities should not ‘measure their impact’ because they have neither the skills nor the money to deal with confounding variables, i.e., to ascertain whether changes they are seeing are due to them, or something else, or random chance.
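The role of random chance here can be made concrete with a quick simulation: even when an intervention has no effect at all, a pre/post comparison on a small client group will often show an apparent shift. This is only a sketch with invented parameters (30 clients, outcome scores drawn from the same distribution before and after):

```python
import random

random.seed(1)

def apparent_change(n_clients=30):
    """Pre and post scores come from the same distribution,
    so any observed change is pure noise."""
    pre = [random.gauss(50, 10) for _ in range(n_clients)]
    post = [random.gauss(50, 10) for _ in range(n_clients)]  # no true effect
    return sum(post) / n_clients - sum(pre) / n_clients

changes = [apparent_change() for _ in range(1000)]
big_moves = sum(1 for c in changes if abs(c) > 2)
print(f"Runs with an apparent shift of more than 2 points: {big_moves}/1000")
```

A large share of these null runs show a shift of more than two points in either direction, which is exactly why ascribing an observed change to the program itself requires the skills and money the comment mentions.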
As for what charities can do if nobody has yet provided rigorous evidence: that certainly is a problem, but two things they can do are:
- use relevant evidence: almost nothing is completely innovative and new
- ensure that they’re giving ‘beneficiaries’ what they want.
I’ve written more about this at http://www.giving-evidence.com/m&e
It’s a longer topic; Dean Karlan is currently co-authoring a book about it: see http://www.ssireview.org/blog/entry/measuring_impact_isnt_for_everyone