Defining Positive Outcomes
What do we really mean when we talk about "positive outcomes"? In this series, produced in partnership with Third Sector Capital Partners, contributors from a variety of sectors discuss how they apply the term to programs and policies.

Improving US education requires a deliberate managerial focus on short-term outcomes for students. Unfortunately, many outcomes that accountability systems currently measure are based on data collected too infrequently and on too broad a level to be of much use in helping managers and system leaders make practical decisions. We suggest the sector refocus on actionable near-term results that facilitate more efficient and effective management of interventions, and of the system as a whole.

The goals for primary and secondary education in the United States have been articulated in myriad ways. Departments of education tend to talk in terms of providing students with the knowledge and skills to be productive, global citizens. Economists tend to focus on individual earnings, college matriculation and graduation, or employment. These are valuable long-term goals, but are too vague (knowledge and skills) or too distant (earnings and college matriculation) to allow for meaningful decision-making.

As a result, many education leaders and managers focus on somewhat more proximal achievement and attainment measures, such as student performance on state assessments and high school graduation rates. While these measures are correlated with the ultimate outcomes we care about, are cheap and easy to measure, and are directly related to schools and educational interventions, they have significant limitations.

One such limitation is that these measures are collected infrequently. State tests occur annually, so data are available for decision-making only once a year; graduation rates are measured for a single cohort every four years. Decision-makers are therefore unable to identify problems and target interventions in a timely manner. State assessments are also quite broad and can be relatively insensitive to whether a particular, focused intervention is working. And when assessment results finally reach local decision-makers, it is often over the summer or even in the fall of the following school year, leaving managers too little time to adjust mid-year or plan for the upcoming academic year. Lagging yearly feedback is insufficient for management to make smart decisions.

While state assessment results and graduation rates are critical outcomes, education leaders would do better to focus most of their management attention on nearer-term outcomes that are more directly responsive to management interventions and can change within a single year. Such outcomes include student performance and growth on interim assessments, attendance patterns (including chronic absenteeism) and suspensions, grades and credit accumulation, and within-year dropout rates. These are measures that system leaders and principals can affect more quickly and, as importantly, use to see whether they are having an impact.
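Several of these near-term indicators can be computed directly from data districts already collect. As a minimal sketch, consider flagging chronically absent students from routine attendance records; the record format and the 10 percent threshold (a commonly used definition) are illustrative assumptions, not any particular district's schema.

```python
# Sketch: flagging chronic absenteeism from routine attendance records.
# Record layout and the 10% threshold are assumptions for illustration.

def chronic_absentee_rate(records, threshold=0.10):
    """records: list of (student_id, days_enrolled, days_absent).
    Returns the flagged student IDs and the share of students flagged."""
    flagged = [sid for sid, enrolled, absent in records
               if enrolled > 0 and absent / enrolled >= threshold]
    return flagged, len(flagged) / len(records)

# Hypothetical mid-year snapshot for three students.
records = [("s1", 90, 4), ("s2", 90, 12), ("s3", 85, 9)]
flagged, rate = chronic_absentee_rate(records)
print(flagged, round(rate, 2))  # → ['s2', 's3'] 0.67
```

Because attendance is recorded daily, an indicator like this can be refreshed weekly or monthly, giving principals a signal they can act on within the same school year.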

Shifting to a new, and more practical, outcomes mindset would require researchers and practitioners to work differently to overcome several systemic barriers.

First, school districts often lack the human and technical capacity to monitor outcomes and determine which programs or interventions are responsible for causing changes. Even near-term results in education are often connected to multiple programs or strategies. Disentangling the impact of one program often requires substantial methodological and analytic skills—skills not always present within districts. 

Several programs are attempting to fix this. The Strategic Data Project at the Center for Education Policy Research, where we work, trains “data strategists” and places them in educational agencies, where they support a culture of outcomes and evidence. For instance, Matthew Linick, an SDP fellow and executive director of research and evaluation at the Cleveland Metropolitan School District, has created a catalogue of programs and interventions available to principals, with information on their efficacy and impact in Cleveland. Bo Yan, another SDP fellow with the Jefferson County Public Schools in Kentucky, has created a budgeting process that requires budget requests to come attached to a set of clear outcomes to be achieved and a timeline for achieving them.

A second challenge facing education agencies is the lack of infrastructure for steady, careful measurement and evaluation. There are really three problems here. First, evaluating whether something “worked” on an ad hoc basis typically takes days (or even months) of data processing and complex statistical programming by analysts, which severely limits the number of programs and interventions that can be evaluated on a timely basis. Second, most education agencies are not large enough to conduct rigorous evaluations on their own: the number of classrooms or schools is simply too small for analysts to determine whether changes in outcomes are the result of a particular program or just random noise in the data. As a result, it may not even be possible to tell whether certain programs are helping to improve outcomes. Third, agencies often lack a network of peer agencies from which to learn and consistently improve their practice. Because they don’t know how interventions are working elsewhere, they may either continue implementing a program that is not working or give up on a program that might work if it were tweaked.
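The scale problem can be made concrete with a back-of-envelope power calculation. The sketch below uses the standard two-sample normal approximation; the effect sizes and the significance and power levels are illustrative assumptions, not figures from any district.

```python
# Sketch: why small districts struggle to separate program effects from
# statistical noise. Standard two-sample normal approximation; effect
# sizes and alpha/power levels below are illustrative assumptions.

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Units (e.g., schools) needed per arm to detect an effect of
    `effect_size` standard deviations with ~80% power at a two-sided
    5% significance level."""
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# A 0.2 SD gain is a solid result for an education intervention, yet
# detecting it at the school level requires roughly 400 schools per
# arm -- far more than a single district can supply on its own.
print(round(n_per_group(0.2)))
print(round(n_per_group(0.5)))  # larger effects need far fewer units
```

This is precisely why pooling across a network of agencies, as described below, changes what is detectable: the relevant sample becomes the network, not the individual district.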

Here again, there are some signs of hope. New tools and networks are being developed specifically to help education agencies make faster progress. The Center for Education Policy Research’s Proving Ground project is organizing a network of 13 participating districts and charter management organizations to overcome the scale issue, allowing even relatively small systems to gain insight into what is working and what is not. In less than one year, members of the Proving Ground network have engaged in a full continuous improvement cycle: Agencies in the network have seen the impact of their selected interventions, individually and collectively strategized on how to improve implementation, and rolled out new strategies to help improve student progress. Twelve out of the 13 sites have made management decisions that affect program implementation across the network. 

Other organizations are offering solutions as well. Mathematica Policy Research, through a grant from the Institute of Education Sciences, has recently released new tools designed to guide agencies through the development of simple randomized controlled trials and rapid-cycle evaluations. The Carnegie Foundation for the Advancement of Teaching is establishing networked improvement communities to mobilize “improvement science” in service of better outcomes.
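The analytic core of a rapid-cycle evaluation can be quite simple. The sketch below compares interim-assessment gains in treated versus comparison classrooms with a permutation test; it is a generic illustration under made-up data, not the interface or method of any of the tools named above.

```python
# Sketch: a permutation test comparing mean outcomes in treated vs.
# comparison classrooms -- a generic rapid-cycle evaluation kernel.
# The classroom score gains below are fabricated for illustration.
import random

def permutation_p_value(treated, control, n_perm=10_000, seed=0):
    """Two-sided p-value for the difference in group means, estimated
    by randomly reshuffling group labels n_perm times."""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = treated + control
    k = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed - 1e-12:  # tolerance for float round-off
            hits += 1
    return hits / n_perm

# Hypothetical interim-assessment score gains for two small groups.
p = permutation_p_value([4.1, 5.0, 3.8, 4.6], [3.2, 3.9, 3.1, 3.5])
print(p)
```

A test like this needs no distributional assumptions and runs in seconds, which is the point: the barrier to rapid-cycle evaluation is rarely the statistics itself, but the data plumbing and sample sizes discussed above.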

These examples illustrate how we can move toward a culture focused on practical outcomes in education. Ongoing, systemic change, however, will require researchers to keep providing timely and actionable information, and practitioners to follow the evidence where it leads.