By 2013, according to a study by the Pew Research Center, more than 170 US-based nonprofit news outlets had established a presence online. These organizations cover everything from hyper-local issues to matters of global concern. Significantly, more than 70 percent of them came into being in or after 2008.1 This boom represents a new path for media, and it raises a new set of questions for those who seek to understand the impact of these outlets. Whether one is an editor who needs to gauge the real-world ripples of an investigative journalism project or a funder who needs to evaluate the case for supporting such work, access to accurate and meaningful metrics is critical to navigating this nascent industry.

It’s an industry that has risen from the ashes of traditional newspaper publishing. An initial crop of a dozen or so nonprofit outlets—including our organization, Grist—sprang up in the late 1990s, joining well-established predecessors such as National Public Radio and the Center for Investigative Reporting. Just a few years later, the post-millennial implosion of the newspaper business and the explosion of social media led to a sea change in how journalists create and disseminate their work.

(Illustration by Curt Merlo) 

Philanthropic support has played a critical role in this transformation. Among outlets that took part in the Pew survey, 74 percent reported that they had secured grant funds to launch or maintain their operations. And the scale of such funding has accelerated. Between 2009 and 2011, foundation support for media grew by 21 percent, compared with a 5.8 percent increase in overall domestic grantmaking, according to a report by the Foundation Center.2 During that period, more than 1,000 foundations made a total of $1.86 billion in media-related grants.

The rapid growth of foundation-supported media makes the question of impact keenly relevant to journalists and philanthropists alike. How can those who operate and fund these organizations measure the full impact of journalistic work? What are the best methods for determining the connection between published content and real-world change? What, fundamentally, is the role of a journalist in the 21st century? Across the United States, efforts are under way to address these questions—efforts that range from newsroom experiments to ambitious research projects. The result is a conversation that can be staggeringly complex and vaguely navel-gazing, and so far it has revealed exactly one truth: There is no easy answer.

Questions, Questions

For nonprofit media, the act of measuring impact is not nearly as straightforward as it is for other nonprofit organizations: There are no trees planted, no cans of soup distributed, no lawsuits won. Existing resources for nonprofits have little to say about media. The IRIS catalog, for example—a project of the Global Impact Investing Network that offers a bevy of options for evaluating work in areas such as banking, health care, and conservation—doesn’t include a media category. What’s more, the numbers held in high esteem by old media—circulation figures and advertising dollars, in particular—have minimal relevance in this new world. Operating somewhere between the mission-driven world of traditional nonprofits and the profit-driven realm of traditional media companies, nonprofit media are a fence-straddling lot.

Many observers have noted a misalignment between traditional metrics and new-media needs. In 2013, the John S. and James L. Knight Foundation issued a report that offered this assessment: “The near-universal perception is that standard metrics … used by nonprofit news organizations are simplistic and often misleading.”3 The view inside those organizations isn’t much different. “The large majority of [nonprofit media outlets surveyed by Knight] feel completely lost when it comes to measuring their impact,” says Jonathan Sotsky, director of strategy and assessment at the foundation.

So what’s a news outlet to do? As a starting point, many of them rely on the same metrics that other Web-based organizations use. These metrics include page views (which count, as the name suggests, the number of times that visitors request a single Web page), unique visitors (a tally of each device that accesses a site over a given time period), and time on site (the length of time that visitors keep a particular site open on their browsers). Data of this kind are relatively easy to access, thanks to widely available tools like Google Analytics. They are wonderfully tangible. But they are flawed. Time on site is especially problematic; a better term for it might be “time on site while intending to read an article but wandering away to put the kettle on, then taking a phone call from Aunt Midge, then—wait, what was I doing?”
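All three of these standard measures can be derived from raw request logs. The following Python sketch uses an invented log format (device ID, page, timestamp) to show how each number comes about, and why time on site is such a blunt instrument:

```python
from datetime import datetime

# Hypothetical simplified request log: (device_id, page, timestamp).
log = [
    ("device-a", "/article-1", datetime(2015, 3, 1, 9, 0)),
    ("device-a", "/article-2", datetime(2015, 3, 1, 9, 6)),
    ("device-b", "/article-1", datetime(2015, 3, 1, 10, 0)),
]

# Page views: every page request counts, repeat visits included.
page_views = len(log)

# Unique visitors: distinct devices seen during the period.
unique_visitors = len({device for device, _, _ in log})

# Time on site: the span between a visitor's first and last request --
# a crude proxy that cannot tell reading from an open, idle tab.
def time_on_site(device):
    times = [t for d, _, t in log if d == device]
    return (max(times) - min(times)).total_seconds()
```

Note that `time_on_site` returns zero for a visitor who requests a single page and leaves, and six full minutes for one who opens two tabs and walks away: the kettle-and-Aunt-Midge problem in miniature.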

Standard metrics tell only part of the story. Yes, it’s vital to know how many people a news outlet is reaching and which links those people are clicking. But other questions are equally important, if not more so: Are people actually reading or watching the content that they access? Are they sharing or commenting on it? Does their engagement with the content spur offline conversation and action? Does that engagement ultimately lead to shifts in public opinion or policy? The answers to those questions are much harder to determine, but they are essential to understanding the impact of a media organization.

Trial and Error

Our experience at Grist offers an instructive example of what it has meant to be a nonprofit news outlet in this brave new millennium. Initially, in the absence of other options, we relied on the existing online metrics to chart our progress. We were thrilled to be able to point to hard numbers: We’ve grown from an audience of 100 unique visitors to an audience of 10,000! 100,000! 250,000! (Today our total monthly audience, including unique visitors and those who interact with us via social media, is close to 2.5 million.)

As our readership mushroomed, we began to focus on another factor that signaled progress toward our goal of shaping the national environmental conversation: influence. We started tracking indicators such as media mentions, awards, testimonials, public-speaking invitations, and interactions with notable decision makers. During our first decade, this suite of metrics offered strong evidence—to our team, to our board, and to our financial supporters—that Grist was having an impact. We were reaching a growing number of people, they were clicking on our links, and influencers were discussing and acting on the ideas and stories that we put into the world.

Given the social mission that underlay our journalism, however, we yearned for more information about how our work was resonating with readers and translating into real change. The occasional anecdote made its way to us—a Grist-inspired debate that took place behind closed doors at the US Environmental Protection Agency, a shift in the farming practices of a Native American tribe, a clean energy referendum in a US city—and we treasured these bits of qualitative data. In many ways, they told us more about our impact than hard numbers could ever do. But we needed more reliable ways to evaluate how readers were engaging with our content, both online and offline.

We created a metric that we called—tongue firmly in cheek—the Very Special Index of Goodness. This complex amalgam, designed to improve our understanding of reader engagement, combined external and internal data to yield a single number that we could track over time. The intentions behind this tool were as earnest as its name was wry, and we weren’t the only ones who were thinking along such lines: In 2010, the online arm of the Philadelphia Inquirer released a reader engagement formula of its own: Σ(Ci + Di + Ri + Li + Bi + Ii + Pi). That formula took into account several factors—clicks, duration, “recency,” loyalty, brand, interaction, and participation—and, like the one that we had concocted, resulted in a single number.4
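The mechanics of such an index are simple: score each visitor on each component, weight the components, and sum. A minimal Python sketch follows, with component names echoing the Inquirer-style formula but with weights and scores that are entirely invented for illustration:

```python
# Illustrative only: the component names follow the Inquirer-style formula
# (clicks, duration, recency, loyalty, brand, interaction, participation),
# but the weights and the visitor scores below are invented.
WEIGHTS = {
    "clicks": 1.0,
    "duration": 1.0,
    "recency": 1.0,
    "loyalty": 1.0,
    "brand": 1.0,
    "interaction": 1.0,
    "participation": 1.0,
}

def engagement_index(visitors):
    """Collapse every visitor's weighted component scores into one number."""
    return sum(
        WEIGHTS[name] * score
        for visitor in visitors
        for name, score in visitor.items()
    )

visitors = [
    {"clicks": 3, "duration": 2, "recency": 1, "loyalty": 0,
     "brand": 1, "interaction": 0, "participation": 0},
    {"clicks": 1, "duration": 4, "recency": 2, "loyalty": 3,
     "brand": 0, "interaction": 1, "participation": 1},
]
```

The convenience of the single output is also its weakness: very different audiences—many shallow visits versus a few deeply engaged readers—can produce the same score.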

For us, the limits of this single-number approach quickly became apparent. It reminded us of the assertion in The Hitchhiker’s Guide to the Galaxy—the cult novel by Douglas Adams—that the “Answer to the Ultimate Question of Life, the Universe, and Everything” is 42. So we shifted course and focused anew on qualitative methods for measuring engagement. We now conduct online surveys and carefully track the flow of social media, and what we’ve found has pleased staff members and financial supporters alike: In surveys, up to 70 percent of readers say that they recently took action on the basis of Grist content. We aren’t an advocacy organization, but our storytelling has clearly inspired change on the ground.

After more than a decade of trial and error, we arrived at a set of metrics that work for us. For these metrics, we use terms now familiar to most people who work in nonprofit media. We measure reach, which covers the size of our audience—the number of people who access our content either at our site or elsewhere online. We measure impact and engagement, which involve reader activity both online (in the form of likes, shares, and comments) and offline (in the form of behavior change). And we measure influence, which encompasses media citations, policy changes, and other elements that make up the environmental conversation.

The meaning of these terms, like the field of nonprofit media as a whole, is fluid. As yet, people are not using them consistently. In their work on this topic, for example, Anya Schiffrin and Ethan Zuckerman define “influence” in a way that resembles our use of the term “impact”—and vice versa. (See “Surveying the Field.”) But the core idea is the same in each case: How users respond to your content is distinct from how your content affects the larger media or policy environment, and both of those variables are distinct from how many people simply read or view your content.

Although we sometimes felt alone in our explorations, other players in this field were also experimenting with ways to evaluate the connection between content and social impact. Over the past several years, a national conversation on this topic has started to develop—one that includes practitioners in nonprofit media, funders who support them, and a growing cadre of researchers. Recently, we spoke with several influential figures who are contributing to that conversation.

The Search for Solutions

Jessica Clark has been thinking about how to chart media impact since 2004. She first ventured into the fray in a moment of journalistic upheaval: “In the wake of the [2000 US presidential] election and the Iraq War, there was a wave of new media projects that expanded the possibilities for different kinds of journalism,” says Clark, who is now the research director at Media Impact Funders, a network of more than 50 funding institutions. Amid those developments, she notes, journalists were being asked to leave objectivity behind and to express opinions about the news. Over the next several years, Clark explored that trend while serving as editor of In These Times, a progressive magazine, and as director of the Future of Public Media Project at the Center for Social Media at American University. She then co-authored a book, Beyond the Echo Chamber: How a Networked Progressive Media Can Reshape American Politics (2010).

When the book came out, Clark and her co-author, Tracy Van Slyke, opted out of a conventional book tour. Instead, they organized a series of “impact summits” that took place in seven US cities. Drawing on insights gathered at these events, Clark and Van Slyke developed a report titled “Investing in Impact.” The report included strong advice for funders, journalists, and other stakeholders: “Shifts in technology and user habits mean that old assumptions about what constitutes impact must be reconsidered. Simply reporting on an issue or community is no longer the final outcome in an era of multi-platform, participatory communication.”5

It isn’t just technology that has changed, Clark argues. By partnering with foundations, nonprofit news outlets have carved out a new business model. And funders, having entered what Clark calls “uncharted territory,” are raising questions about the industry in which they are investing. They are eager for insights on “how to understand the impact dynamics of emerging platforms, how to build rigorous case studies that track the movement of coverage across platforms and contexts, and how the increased ability of users to participate in production shifts the impact equation,” she explains. More to the point, funders are also investing in serious efforts to address these questions.

Major players such as the Knight Foundation, the Bill & Melinda Gates Foundation, and the Ford Foundation have directed significant funding to this area. In 2013, Gates and Knight created the Media Impact Project (MIP), a $3.25 million initiative that is housed at the Annenberg School for Communication and Journalism at the University of Southern California. (The project now also receives funding from the Open Society Foundations.) MIP bills itself as nothing less than a “hub for best practices, innovation and thought leadership in media metrics.”6

New Challenges, New Tools

Dana Chinn, who runs MIP, is a media analytics strategist who serves as a lecturer at the USC Annenberg School. Previously, she worked at organizations such as the Gannett newspaper chain and the Los Angeles Times. According to Chinn, the nonprofit media industry could learn a lot from industries such as e-commerce and technology. “Analytics are essential to any business, and they are integrated into the operations and management philosophy of most companies,” she says. “If the very survival of the news industry is at stake here, shouldn’t we be taking the same approach?”

MIP is now collaborating with nonprofit and for-profit news organizations that include The Seattle Times, Southern California Public Radio (KPCC), and a trailblazing outlet called The Texas Tribune. Together, these partners are testing ways to improve their capacity to gather and analyze impact data. Among other projects, MIP served as a consultant to Participant Media (an entertainment company founded and led by philanthropist Jeff Skoll) on the creation of the Participant Index, a tool that measures the effectiveness of films, TV shows, and online videos that feature social causes.7 “We’re not going to get the 100 percent answer” to the impact question, says Chinn. “But we can get one level above where we’ve been in the past, which is throwing up our hands and saying, ‘It can’t be done.’” A signature project of MIP is the Media Impact Project Measurement System, a data repository that will combine public and proprietary sources of information. The system will likely be operational by the fall of 2015. As the repository grows, Chinn says, MIP and its partners will be able to analyze impact over time and across different media types.

A similar effort is in progress on the other side of the country. Researchers at the Tow Center for Digital Journalism at the Columbia University Graduate School of Journalism, with funding from the Tow Foundation and the Knight Foundation, have created a tool called NewsLynx. The tool collects quantitative and qualitative data in one central place. It aggregates data from sources such as Facebook, Google Analytics, and Twitter; it offers a way to track anecdotal evidence; and it provides a system for monitoring links and discussion threads related to news content. By using keywords and alerts that apply to a specific organization, topic, or piece of coverage, users can create a custom dashboard that offers a full-spectrum report on the impact of their work.

Over the past year, about a dozen US news organizations—from small-city newspapers to national outlets—have participated in a pilot test of the NewsLynx tool. (Those beta testers include organizations that work with MIP, and the head of the Tow Center sits on the MIP advisory board. It is, as Chinn notes, “a small news-metrics world.”) “People had never been able to get easy access to things like share counts of an article over time,” says Brian Abelson, co-creator of NewsLynx. (Abelson, a former fellow at the Tow Center, now works at Enigma, a data analytics company.) This tool provides a fix for that problem, he explains: “Now anyone can keep track, with very little effort, of how many times an article has been shared, when it was shared most [widely], and how that information lines up with how many people visited the article over time.” Abelson and Michael Keller, a data journalist at Al Jazeera America who helped create NewsLynx, recently produced a research paper on the project. They concluded that the increasing flow of open source data will provide newsrooms with an unprecedented amount of information about media impact.8
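The alignment Abelson describes—share counts laid against visits over time—is essentially a join of two daily time series. A small sketch, with invented data standing in for the Twitter and Google Analytics feeds that a NewsLynx-style tool would pull:

```python
# Invented daily counts for one article; a real tool would pull these
# from social APIs and web analytics rather than hard-coded dicts.
shares = {"2015-03-01": 40, "2015-03-02": 310, "2015-03-03": 55}
visits = {"2015-03-01": 900, "2015-03-02": 4200, "2015-03-03": 1100}

def merge_daily(shares, visits):
    """Join the two series by date so their peaks can be compared."""
    days = sorted(set(shares) | set(visits))
    return [(day, shares.get(day, 0), visits.get(day, 0)) for day in days]

timeline = merge_daily(shares, visits)
peak_share_day = max(timeline, key=lambda row: row[1])[0]
```

Once the series share an index, questions like “when was this shared most, and did visits follow?” become one-liners instead of a manual spreadsheet exercise.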

But the prospect of navigating a mighty river of data is a mixed blessing. In 2014, Grist undertook an experiment—funded, like NewsLynx, by the Knight Foundation—in which we developed a prototype open-source tool that measures “attention minutes.” Pioneered by for-profit media sites such as Medium and Upworthy, this metric tracks how far users actually make it into an article or a video. Our use of this metric has yielded data that give us new insight into how readers engage with our content. In the past, we might have assumed that two articles with the same number of page views had performed equally well. Now, by looking at how long each article held readers’ attention, we can see that one piece may have gripped readers more deeply than the other. We can then apply that information on what makes an article “sticky” to other items of content. It’s a promising tool, but there’s a catch: It delivers more data than we can feasibly store and regularly digest. As a next step, we are working to partner with an organization that can help us manage and analyze this rich lode of data. In the meantime, we have news to cover and a site to produce.
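The attention-minutes idea is commonly implemented with heartbeat events: the browser pings the server every few seconds while the tab is focused and the user is active, and summing the pings approximates genuine attention. A sketch under those assumptions (the five-second interval and article names are invented):

```python
# Sketch of the "attention minutes" metric: each heartbeat represents a
# few seconds of focused, active reading. The interval is an assumption.
HEARTBEAT_SECONDS = 5

def attention_minutes(heartbeats_per_article):
    """Convert raw heartbeat counts into attention minutes per article."""
    return {
        article: count * HEARTBEAT_SECONDS / 60
        for article, count in heartbeats_per_article.items()
    }

# Two articles with identical page views can now be told apart:
stats = attention_minutes({"/sticky-piece": 96, "/skimmed-piece": 12})
```

In this toy example, both articles might show the same page-view count, yet one held readers for eight minutes and the other for one—exactly the distinction page views alone cannot make.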

Resource constraints, of course, are a common challenge for nonprofit newsrooms. But another obstacle to the widespread adoption of data-tracking tools is the fact that most news organizations operate in a self-imposed silo. “Everyone is slightly different and interested in slightly different things,” Abelson says. “So how do we build something that can accommodate all those needs, while still being coherent and workable and easy to start using?”

A Shared Language

Lindsay Green-Barber arrived at the Center for Investigative Reporting (CIR) in 2013. She had recently completed a dissertation on the use of communications for political mobilization in Ecuador. Now, under a fellowship awarded by the American Council of Learned Societies, she took on the newly created role of media impact analyst at CIR. Her first assignment: to define what “impact” actually means to the organization.

Green-Barber spent two months surveying various stakeholders about that question. She then spent a year creating and refining systems that allow CIR journalists and other staff members to track data related to audience feedback, requests for interviews, and social media activity. “Rather than think about analytics and metrics being the end measure of success, we started thinking about them as part of the broader picture,” she says. Green-Barber also used her understanding of social movements to help CIR expand its notion of success to encompass more than just a shift in law or policy. “An investigation of a vulnerable community is not going to lead a lawmaker to ‘do a 180,’” she says. “If you’re looking just at legal change, you’ll miss a lot of other important change.”

Indeed, the simple act of informing and engaging readers can be among the most important forms of impact that a media outlet can pursue. “The fact that a user not only visits a site but visits it regularly, and engages through sharing or commenting, means that [the user has] an emotional connection to the organization,” says Elise Hu, a culture and technology reporter for NPR. (Hu cofounded The Texas Tribune, and serves both as an advisor to the Knight Foundation and as a member of the Grist board.) “That emotional connection will lead to other actions.”

The sense that there’s more to life than policy change led Green-Barber to identify three types of impact for CIR to track: macro, which includes legal and regulatory changes; meso, which includes social shifts, such as a change in public opinion; and micro, which includes changes at an individual level, such as increased knowledge. Using this framework, she collaborated with MIP, the Tow Center, and other organizations to create a taxonomy of impact. This tool, known as the Offline Impact Indicators Glossary, “is giving people a methodology to look at things they’ve been thinking of as unmeasurable or unknowable,” Green-Barber says. The glossary is broad in scope, encompassing everything from the reversal of a legal decision to an increase in social capital.9 These aren’t the sorts of things that can be measured by Google Analytics, but they are critical to understanding the full impact of journalism.
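A taxonomy like this is also a practical data structure: each qualitative impact event gets filed under one tier, so the ledger can be queried later by level of change. A minimal sketch, with invented example events:

```python
from collections import defaultdict

# CIR's three tiers of impact, per Green-Barber's framework.
TIERS = {"macro", "meso", "micro"}

def log_event(ledger, tier, description):
    """File one qualitative impact event under its tier."""
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    ledger[tier].append(description)

ledger = defaultdict(list)
log_event(ledger, "macro", "Regulation amended after investigation")
log_event(ledger, "micro", "Reader reports new knowledge in survey")
```

Even this trivial structure enforces the framework's main lesson: an outlet that only ever writes to the "macro" list is, by definition, missing two tiers of change.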

Abelson, a collaborator on the glossary project, hopes that it will help news organizations develop both a shared language and a habit of sharing data. “This work has to be done on an inter-newsroom level,” he says. “More newsrooms have to be willing to share information in a more transparent way.”

The Conversation Continues

“If there is one thing that seemingly all media organizations can agree on, it is that impact is not any one thing,” Green-Barber wrote last year in a report for CIR.10

That’s not just a Zen koan. For nonprofit media, metrics pose an especially knotty challenge because they must serve multiple purposes. They must offer meaningful evidence for foundations and other impact-oriented investors. They must make sense to advertisers who still think in terms of CPM (cost per thousand impressions) and other traditional standards. (Not all nonprofit media organizations rely on income from advertising as part of their revenue stream, but many do.) They must convey organizational progress to board members and other internal stakeholders. Ideally, moreover, they will offer information that’s relevant to journalists and others in the newsroom.

During a period that overlapped with Green-Barber’s stint at CIR, Grist also dedicated a position to studying the question of impact. Our self-dubbed “actionable metrics engineer” was able to track data and unravel mysteries in ways that even the most well-meaning editor would never find time to do. One of his most important conclusions was that the topline numbers that we track—the ones that help make the external case for Grist—didn’t always resonate with individual team members. Today, like many other outlets, we are working to resolve that tension between external and internal needs.

But the core problem that nonprofit outlets face may not lend itself to resolution. After all, any metrics that work today might cease to be relevant tomorrow. News organizations must therefore be flexible and innovative when it comes to measuring impact. Philanthropists, meanwhile, must understand that impact metrics in this field might never be as black-and-white as those in other sectors. “The best we can do is find out which organizations are doing interesting things in this area and which practices can be replicated,” says Sotsky.

The real solution to this challenge most likely will not arrive in the form of cutting-edge tools or complicated formulas. In fact, it might resemble what journalists already do best. “Impact analysis is like reporting: You have to cover the five Ws [who, what, when, where, and why],” says Clark. “If I were an editor and I were assigning a story on what happened with your site, I would want to know the numbers. They are important to measure, and they make your newsroom smarter. But measuring impact is not the only way to think about it. You can also share data, information, or strategic intelligence about a project. What you are doing is storytelling.”
