(Illustration by Raffi Marhaba, The Dream Creative)

The social sector generally considers leadership development a good investment, especially when it comes to cultivating systems-level change. But ask leadership program developers and evaluators, “What is leadership?”, and you will often get vague responses rife with potential bias: “You’ll know it when you see it,” or “It’s the special sauce,” are two surprisingly common answers.

Yet clearly defining leadership is essential to determining the quality and efficacy of a development program. It establishes the “what” and “why”—concrete intentions that help guide the way to understanding what does and does not work.

Recognizing Leadership in All Its Forms
This article series, presented in partnership with the Robert Wood Johnson Foundation and other organizations involved in the Beyond the Hero leadership initiative, explores the social sector’s need to broaden its narrative of leadership so that it supports leadership in all its complex, dynamic forms.

So why is it so hard to do? The reason is that leadership for social change is complex, dynamic, and contextual. Approaching leadership development evaluation with a traditional theory lens ensures that the evaluation is well-grounded and intentional. But simultaneously approaching it with an “emergence” lens expands the focus to include the contexts, adaptations, and insights that arise from the realities of operating within complex systems. Combining these approaches gives evaluators a richer and more dynamic picture of what allows participants to exercise leadership and how it can support systems change.

As other essays in this series have highlighted, conventional approaches to leadership development, which tend to focus on a linear path from non-leader to leader, are outdated. Rather than continuing to operate under this dichotomy model, more organizations need to think of leadership development as a continuum. The exercise of leadership—as opposed to the existence of leaders—ebbs and flows over time. Exercising leadership may vary across contexts, require collaboration, and even include ceding power to others. Therefore, an emergence approach allows practitioners to look not only at what they planned and theorized would be the result of their leadership development actions, but also at the process and results of what actually emerged, including unexpected realities, in the complex act of leading and changing systems.

Incorporating an emergence approach into leadership development programs presents both challenges and opportunities in their evaluation. Allowing for multiple definitions of success complicates measurement, but it also enables better contextualization, which increases accuracy, deepens insight, and supports improvement. The Handbook of Leadership Development Evaluation, published in 2006, laid a solid foundation for a general evaluation approach. Since then, evaluators have become even more comfortable with complexity, participatory methods (such as involving direct stakeholders in all parts of the evaluation), and multi-method frameworks (including using a variety of methods within the same evaluation). In addition to these, four practices can help organizations keep up with the complexity, nuance, and emergence in leadership development programming: adapting program logic models, mixing evaluation methods that have different intents, shifting how we think about the utility of evaluation, and rightsizing evaluations.

Adapt, Do Not Abandon, Your Logic Model

A logic model lays out assumed, causal relationships between a program’s resources and activities and its intended results. Our use of the term encompasses related terms, including theory of action (how specific actions create change) and theory of change (how change happens more broadly). As a tool, logic models flesh out assumptions that underlie programs by examining questions such as: Given potential participants’ existing strengths, why do we think leadership development is necessary? Do we expect that individuals will be able to use skills they develop during the program in their existing context, or do they need help to facilitate the use of those skills? In a complex environment, how does our intervention help individuals navigate complexity? And, importantly, when we say leadership (or network or community), what do we really mean?

Organizations that do not take the time to work through these kinds of questions will likely have less impact. For example, if a program aims to change communities but is primarily designed to build individual leaders’ skills, program designers will likely be disappointed by program results that disproportionately show individual change. A program using a logic model, by contrast, would identify this mismatch from the beginning. We therefore encourage organizations to use logic models as tools to connect the dots across their programs, keeping the following three things in mind.

First, give yourself permission to change your logic model as it makes sense. Some people feel logic models lock them into one way of thinking, but they do not need to. If you learn something is not working, don’t just change your program; change your logic model and the relevant assumptions as well. For example, if participants are not using a program’s training modules, rather than just scrapping them, dig into the reasons why and consider adjusting your logic model so that the modules generate the outcomes you intend.

Second, invite program participants to develop the logic model with you. They can provide valuable insights into how change might happen, and the process helps align everyone’s expectations of and approach to the program. Participants in one program we evaluated, for example, learned that building a network was an important goal for the designers. They identified creative ways to build one that the designers had not thought of and then set their own goals for success.

Finally, rather than focusing on one macro view, break the model up. What does the logic model look like from a community perspective? An individual’s perspective? An organizational perspective? What, beyond funding, is the funder contributing? At the end of this exercise, rather than picking one view, see if they can coexist. For one evaluation, we asked the team implementing the program to describe what they thought the funder’s logic model looked like. This simple exercise revealed that the funder was underutilizing its non-financial resources and highlighted new ways it could support the program.

Use Multiple Methods

Incorporating a variety of methods that serve different purposes paints a fuller picture of what is working and not working, and what matters to everyone involved. Traditional, deductive methods like pre-post surveys, participant interviews, and network analysis, for example, help evaluate whether the program has achieved its intended outcomes, while sense-making methods reveal important context and system elements and opportunities that program designers initially did not consider. Sense-making methods give voice to a greater variety of collaborators, including participants, and maintain their relevance even as programs evolve. Here are a few examples of methods and their potential uses.

  • Competency assessment (deductive): This involves using self, peer, or manager insights to identify strengths or gaps in skills, knowledge, and abilities relevant to an opportunity. It serves as a learning tool for professional development, opportunity alignment, and team composition.
  • Community partner interviews (deductive): This method gathers perspectives from people in a participant’s network about the participant’s role in the community and/or changes in their mutual working ecosystem that the participant’s work may influence. Note: We do not encourage asking network members whether they think of the participant as a leader or whether the participant has improved.
  • Leadership action self-report (sense-making): This involves understanding how program participants define leadership in their context, based on a specific question such as, “Thinking about situations where you have taken a leadership role during the last year, what actions are you most proud of?” It centers participants’ voices and indicates what kinds of leadership they think are noteworthy.
  • Most significant change (sense-making): This method asks participants what they feel has changed as a result of the program, and then systematically reviews the stories to understand how the people implementing, funding, and participating in the program view significant change. It allows implementers and funders to reflect on and articulate why they think participants’ stories constitute success, and to compare those judgments with their own views on success. This provides an opportunity for different groups to build consensus and clarity about program intention, and then refocus and re-align around program goals.
  • Rapid-cycle learning (sense-making): This gathers real-time data that allows for continuous program improvement and adaptation to changing environments. It includes tools such as A/B testing and After Action Reviews that shorten the evaluative feedback loop and provide immediate program feedback.

Integrate Findings Along the Way

Using methods like these, especially sense-making methods, allows organizations to interpret what is happening in the program environment and make adjustments to achieve the desired impact.

When organizations understand how participants describe their leadership activities, for example, they can design their programming more effectively. In response to participant feedback, for instance, the Robert Wood Johnson Foundation’s Change Leadership Initiative began incorporating regional convenings to foster collaboration and work toward broader health equity goals. And Global Health Corps, another health-focused leadership development program, uses similar methods to communicate, target, and triage programs so that participants can grow across a variety of leadership areas.

These approaches can also help people running leadership development programs understand how participant needs may be changing. For example, using rapid-cycle learning methods gave the Robert Wood Johnson Foundation an advantage when COVID-19 struck and racial justice movements gained momentum in 2020. Data showed that many leadership program participants were taking on new roles, including providing telehealth services, raising awareness of the pandemic, advocating for equity in education and health care, and more intentionally integrating a racial justice lens in their work. These data allowed the foundation and leadership program implementers to consider changes to their interventions, such as increasing financial support and connecting participants working on similar challenges or in similar geographies.

Finally, these approaches can help maintain consensus among participants, the people implementing the program, and funders about a program’s focus and purpose, especially as it adapts to change and complexity. One evaluation we conducted found that, over time, the team running the program and its funders had developed very different understandings of what program success looked like, leading to confusion and frustration. We used the most significant change method to help both groups articulate what they thought success meant and, by using concrete examples, brought them into closer alignment.

Rightsize the Evaluation

Even with a strong belief in collecting and using data, conducting evaluation can still feel daunting, especially for small organizations. Moreover, organizations can become mired in repetitive cycles of evaluations that feed donor requests or program learning, without really understanding the impact of their efforts or where to adapt. Effective evaluation through an emergence lens is within reach for any leadership development program, but not every method fits. Evaluation size, method, and scope must match the level of staffing and monetary investment, demands on participants, and intervention timeframes.

One way to plan an adaptive, meaningful evaluation of any size is to frame it as part of the intervention. Organizations often put evaluation in its own silo, underutilizing it as a resource. Rather than conducting interviews about a single issue (such as program efficiency or participant expectations), for example, evaluators can consider whether others in their organization have related questions (such as about brand awareness or upcoming change management communication) and, if it makes sense, fold them into the same process. It can also be helpful to share quotes from participants on the program’s impact with development or communications staff (within the constraints of confidentiality agreements). Organizations can also make data collection part of the program experience. Competency assessments can help gauge the impact the program is having on participants’ skills, as well as give participants a chance to reflect on their strengths, skill gaps, and growth. Finally, joint analysis of evaluation findings that includes participants and those implementing the program can yield both evaluation insights and individual growth.

It’s also important to focus the evaluation. Rather than trying to assess every possible aspect of a program, evaluators should concentrate on high-value questions. A program that aspires to create networks, for example, may find that carefully documenting each relationship within a network yields little benefit relative to the level of effort.

Leadership is complex and dynamic, and development programs looking to support social change need to find new ways to respond to the wide-ranging contexts and changing environments in which leaders operate. Continuously evaluating and improving programs is an essential part of this work. An emergence lens makes evaluations as dynamic as the leaders and systems in which they operate. By using evaluation practices such as adapting program logic models, combining traditional and emergence methods, expanding the use value of evaluations, and strategically focusing evaluation scopes, leadership program developers, participants, and funders can develop a fuller and more nuanced picture of what it takes to create systems-level change and then find more effective ways of getting there.

