I see evaluations and data as tools. Used correctly, they can help us effectively analyze social change programs, services, and communities. They can also create transparency and accountability. But I have become increasingly frustrated by them. People often consume evaluations as if they represent absolute truth, stripped of context, and this can hinder both immediate improvements and future developments. Evaluations also paint only part of the picture; they scratch the surface of how a person thinks or a community operates, but rarely get to the why. An evaluation may show us that a program is not meeting its goals but offer no insight into the underlying reasons.

Unfortunately, data on Muskegon Heights, Michigan—my community, my home—paints a picture that doesn’t attract people, businesses, or investment. It does quite the opposite. In recent months, for example, my city has made statewide news as one of the most violent in the state. Examined closely, the violence statistics reflect murders mainly among young black males; they are not “random acts of violence” that would affect the average resident. Nevertheless, these headlines deter businesses, which means fewer jobs—and yet jobs are precisely what the poverty-stricken population needs. The data essentially deprives the next generation in our community of even a glimpse of what the world has to offer. Data on Muskegon Heights also says that we live in a “food desert” and need a grocery store, but it does not tell you that we want jobs, which have the potential to build self-esteem and restore hope.

Essentially, in using evaluations, we risk exposing only the problems within our current systems and structures, instead of questioning the systems and structures themselves. 

Part of why this happens—and one powerful thing to keep in mind—is that systems and structures are never neutral; public policies and services, private practices, and people with power all have built-in biases. We can see this in the glaring similarities of most urban communities in the United States; while geographically distinct, they operate from the same blueprint, influenced by the same biases from established systems. These biases flow through every aspect of the evaluation process and greatly affect the three main groups involved in evaluations:

  1. Evaluators are part of “the system” and develop their approaches within the prevailing perspective. In essence, the evaluator’s mind is skewed toward valuing particular outcomes that are determined by the requirements of the system.

  2. Those who are evaluated are “measured” for how they perform within the system’s framework, using tools that evaluators develop in response to built-in system biases. 

  3. Those who read and use evaluation results, however consciously they read, carry subconscious biases shaped by the current system, its perspective, and the framework it imposes.

But what about the things that happen outside of that prevailing framework? Data cannot tell you about the lady who had a stroke but bakes cookies for volunteers when they come together to clean her yard. Data cannot tell you about the 86-year-old farmer who teaches children in his neighborhood to farm and who sells produce to pay for his medication. For me, this is real “data” about my community—people surviving outside of “the system” and untouched by “the structures.” How do we capture these “facts”? 

There are a few ways to make community evaluations more effective:

  1. Leaders of initiatives can develop evaluation tools that measure both outcomes and the program itself. Start with the end in mind: What is the desired outcome? And, using the ideas below, work to remove potential bias from the beginning.

  2. Evaluators can spend time with people from the community they are evaluating, allowing for more context-sensitive results. Community members can help ensure that evaluators understand the community before deciding what data to collect. They can also act as interpreters, ensuring that the data is read with local context in mind.

  3. Evaluators can go back to good, old-fashioned conversation. In the technology age, we tend to stare at screens and data, and shy away from the realities of the people we engage. Knowing the community and building relationships should be a component of program evaluation, as it encourages evaluators to become invested in the information they share.

  4. Evaluators should compare the mindsets and behaviors of the community before, during, and after a program. I believe this is the only way to measure the true effectiveness of a program. Observation of this kind needs to occur over time. It can be time-consuming, but it is likely essential to creating sustainable outcomes.

  5. When evaluators and organizations communicate results, they should do so in partnership with agents of the community and include a history of its people. This context can help minimize misinterpretation and negative consequences. 

How we approach and report data can mean life or death for communities like mine. It can empower or further marginalize; it can help build or erode. As we develop and implement evaluations, we need to consider the powerful effects they can have on communities.
