COMMENTS
BY DR. PAUL W. HORN, PH.D.
ON September 29, 2006 06:08 PM
First, it seems that the main points could have been made more succinctly.
Second, quoting from the New York Times immediately moves things to the left. (Hard to believe, but articles in the Times must follow the editorial party line that is set every Monday morning.)
Nothing is said about the plush lives that some foundation heads live, lives perpetuated by giving the appearance of success.
Perhaps some hard data in the article would have helped.
In short, I agree that the management and conduct of foundations are a major problem.
BY Marianne Philbin
ON September 30, 2006 09:29 AM
Add to all that the article's point that most nonprofits are understaffed and under-resourced in every way, and the challenges that “traditional” evaluation presents become even greater. We love that the kicker to your article used the word “frenzy” to describe the current state of accountability. It has felt just that way, which is part of what led us to write our new book, LEVEL BEST: How Small and Grassroots Nonprofits Can Tackle Evaluation (Wiley, 2006). Evaluation can be done and be meaningful without the cost, pain, waste, and confusion associated with approaches that just don’t fit the operating or programming realities of vast numbers of nonprofits. One director of a grassroots agency who recently read the book told us, “We love this—we get this—but the ‘evaluation police’ are going to hate it!”
BY Alice Leibowiz, MA
ON October 6, 2006 08:03 AM
This is a great article, and lays out the facts very clearly. I know of one other solution (to the scientific problem at least; there’s no guarantee funders will like it) that was not mentioned. That is, replicating scientifically evaluated programs.
This approach has really caught on in the teen pregnancy prevention field in the past five years. The National Campaign to Prevent Teen Pregnancy commissioned researcher Doug Kirby to review all existing well-designed evaluations of teen pregnancy prevention programs. He summarized these evaluations and created a guide for service providers on how to use the information, in a monograph called “Emerging Answers.” For the same reasons mentioned in the article, he recommended that providers not even try to evaluate their programs unless they could afford to use a control group and a large sample size. This kind of evaluation normally costs more than the entire budget of most programs. Instead, he suggested replicating one of the handful of programs that had been shown to be effective.
Since then, several more reviews have been written and popularized among service providers. Other fields, such as early childhood intervention and adult employment services, could pool their resources to use a similar approach.
BY Tracy K.
ON October 6, 2006 04:57 PM
It’s the monetary arrogance of the philanthropic community that leads to its sense of entitlement to meddle in the organizations it funds. Before there was “evaluation” there were “collaborations,” and before that some other trendy catchphrase foundations tried to impose on nonprofits. How many program officers are real organizational development experts, let alone experts in the fields they fund? The “evaluation” trend moving through the philanthropic community is just another in a long line of “improvements” that foundations think they can make in the nonprofit sector. Nonprofits aren’t guinea pigs; we’re mostly real organizations serving real communities with real needs, and the fact that we can’t always quantify them in some nicely packaged white Western evaluation framework isn’t reason to impose this structure. All this article says is what nonprofit folks on the front lines have been experiencing for the past few years: that foundations make reporting such a priority that it gets in the way of the work being done. And worse, foundations see no power dynamic in their role as holders of the purse strings, which leads nonprofits to jump when they say “frog.”
BY Stephen J. Gill, Ph.D.
ON October 9, 2006 11:25 AM
Alana Conner Snibbe has written an excellent summary of the state of nonprofit evaluation. And she has done this with input from some of the best thinkers in the field: Patrizi, Cook, and Patton, among others. Every funder and grantee should be discussing the issues raised in this article. However, I feel compelled to disagree with two central points made by the author. First, it isn’t a matter of one evaluation method being better or worse than another; it’s a matter of what question you are trying to answer and how confident you need to be in the evidence. If I want to know how much better one intervention is than another, or how much better an intervention is than no intervention, and I need statistically significant results, then I have to use some form of quasi-experimental design (control groups, comparison groups, time series, etc.). But it is quite rare that funders and nonprofits want to know the answers to those particular questions. Most of the time, what they really want to know is whether an intervention can have positive outcomes given the right conditions, and whether the results are worth the investment, and they only need to know these answers “beyond a reasonable doubt.” These questions can be answered by collecting data from stakeholders that tell the story of change and make logical links between the intervention and changes in individuals, groups, organizations, or communities. Second, this kind of evaluation need not be costly: usually, this approach doesn’t require a great deal of time or money. It does, however, require being very clear about what you want to know, why you want to know it, and how you are going to use the information.
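[Editor's note: the link between statistical significance and sample size that this comment invokes can be sketched numerically. The data below are hypothetical, and the Welch t-statistic calculation is an illustration, not anything from the comment or the article.]

```python
# Sketch: the same modest program effect (about +3 points on a 0-100 outcome
# scale, with a spread of 10) is nearly invisible with a small sample but
# easily detected with a large one. All data are simulated/hypothetical.
import math
import random
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

random.seed(0)

# Small program: 20 participants per group, the scale most nonprofits can afford.
control_small = [random.gauss(50, 10) for _ in range(20)]
treated_small = [random.gauss(53, 10) for _ in range(20)]
t_small = welch_t(treated_small, control_small)

# Large study: 2,000 per group, the scale a commissioned evaluation might reach.
control_big = [random.gauss(50, 10) for _ in range(2000)]
treated_big = [random.gauss(53, 10) for _ in range(2000)]
t_big = welch_t(treated_big, control_big)

# With 20 per group, |t| typically falls short of the conventional ~2 cutoff;
# with 2,000 per group, the identical effect clears it comfortably.
print(round(t_small, 2), round(t_big, 2))
```

This is why the choice of question matters: demanding statistical significance from a small program quietly demands a sample the program cannot afford.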
BY Catherine Crystal Foster and Justin Louie
ON October 17, 2006 10:25 PM
Ms. Snibbe raises some excellent points that foundations, nonprofits, and evaluators should heed. Furthermore, we wholeheartedly agree with Dr. Gill’s comment that the focus should be on what question one is trying to answer, with what amount of confidence. We have recently been working with foundations in the area of evaluating policy and advocacy work. As more foundations are funding advocacy, they are confronting unique challenges in measuring the effectiveness of any single organization’s contribution to work that is—by its very nature—collaborative, complex, subject to forces beyond the organization’s control, long-term, and evolutionary. Experimental and quasi-experimental models don’t make sense in these environments, and as Ms. Snibbe and Dr. Gill point out, the questions an experimental design sets out to answer may not be the right questions to ask.
Our work on behalf of The California Endowment, “The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach” (Kendall Guthrie, Justin Louie, Tom David and Catherine Crystal Foster) http://www.calendow.org/reference/publications/pdf/npolicy/51565_CEAdvocacyBook8FINAL.pdf offers some tools for coping with the difficulties of evaluating advocacy and promoting meaningful evaluation that is useful to both non-profits and their funders.
To their credit, many foundations are wrestling with these issues and are truly seeking to develop evaluation approaches that ease grantee burden and promote learning for both the foundation and the grantee. When funders, evaluators, and non-profits all are willing to take risks, move beyond traditional notions of success and failure, and seek out new strategies for measuring impact, everyone can win.
BY ALLEN
ON October 23, 2006 10:00 AM
An accurate depiction of foundations’ lack of certainty and clarity about most things they call evaluation.
If there continues to be a movement to make causation the gold standard for “value” or “worth” in the efficacious use of foundation dollars, that’s OK, if those foundations want to occupy the very narrow niche of efforts that allow for it.
The rest of philanthropy can then be left to organizations that want to continue the legacy of a hundred years of supporting and stimulating some of the most important social, health, and educational changes of the past century. But don’t terrorize nonprofits that can’t prove causation, and don’t lock out the ones that won’t pretend to.
Evaluation, as the article suggests, should be for the joint learning of the grantee, the community, and philanthropy, not to prove to your board how hard-ass you are, or to have something to take to the next foundation gab-fest to brag about your “science-based rigor.”
BY purushothaman pillai
ON December 4, 2006 06:27 AM
I am inspired by the title of the article and happy to add a reference to it in the conclusion section of my article on the empowerment of construction workers, an action research project:
Conclusion:
The feedback obtained from the subjects gave an indirect measure of the efficacy of the training program. The subjects could be termed special students who require continued support for their intended development, especially the daily-wage labor category, to further their skills as masons.
The shock the project received at a critical time also points to the fact that while the villagers are very well prepared to receive developmental inputs from us, we are not well prepared to meet their expectations. One can blame the author of this project, who was the manager at the field level, for not foreseeing the bureaucratic delays. Two months of construction time for a demonstration project appears to be a very ambitious schedule when we consider the time lost in between, which came to about three months before the project resumed. Cost overrun in staff salary alone would be enough to kill such projects. Sadly, payments have still not been made to the groups who worked in the initial phase, through no fault of their own, as final approval of the revised budget is perhaps still pending.
The managers of such projects also face great risk of under-performing against the budget and time schedule, and of damage to their reputations. Maybe this is one reason we do not find many action-based projects within government and multilateral systems with strict accounting procedures. This also raises the issue of accountability for performance. The MOU (reference 1) is very clear on the measurable outcome, though perhaps not on the time element, or that is implied. In contrast to “Drowning in Data” (reference 8), this is a situation caught up in system procedures. We struggle to meet the expectations of the remote villagers.
An action research project such as this, in which the teacher and subjects continuously interact, appears to be a very live platform, providing evaluation in many dimensions (as enumerated in “Lingo To Go,” reference 8), such as formative evaluation, summative evaluation, efficacy, etc.
BY Amy Dyslex
ON October 21, 2011 11:38 AM
Hi,
The article itself is very impressive, and the explanations it gives undoubtedly make us realize some very important issues. Individuals can really make it possible through their active participation and coordination.