In Isaac Asimov’s Foundation trilogy, Hari Seldon invents the science of “psychohistory.” He uses it to predict the actions of billions of people and thereby predict the future history of civilization. But he keeps his ultimate predictions and responses to them secret, which sets up the conflicts that play out in the series.

Seldon has inspired untold numbers of data-heads (including the prominent Nobel Prize-winning economist Paul Krugman), and his spirit—though fictional—was a strong presence among the 260 programs recognized at the recent Computerworld Honors 2013 gala. Laureates included UN Global Pulse, which seeks to use “big data” and “open data” for the common good; Grameen Foundation, for its use of text and voice to deliver vital prenatal and early childhood health information to women in Ghana; and the Unique Identification Authority of India, which is using biometric data to identify millions of low-income people so that they can access banking and other services, and receive direct electronic transfers of benefits. (Disclosure: Our United Way/California 2-1-1 network partnership with California Emerging Technology Fund also was a laureate.)

In contrast to Seldon’s reticence, the “World Wide Computer,” to use Nicholas Carr’s term, uses our interactions with it—searches, social media, mobile voice and data services, and more—to try to sell us what it predicts we will need. Companies that want to sell us something use our data whenever we search the web or log in to Facebook or Twitter, regardless of our wishes. Twitter clients, for example, seem to know just what age- and profession-appropriate ads to show us. Sometimes this is convenient, but often it is creepy.

Of course, search engines and social media platforms do more than push products. They offer us the ability to connect, to find community. In the process, though, they take the data we provide about our interests and the content we create to share with friends and communities, and use that to make money for themselves and their shareholders. Whether we call it “digital sharecropping” (Nicholas Carr) or “digital serfdom” (Jaron Lanier), in many ways we serve the machine’s needs at least as much as our own.

We seem increasingly resigned to this kind of intrusion, in the marketplace at least (our feelings about NSA data gathering notwithstanding). But if we’re stuck with this reality, is there some way to put those abilities to good use?

Why don’t philanthropy and nonprofits similarly use their access to personal information to better target funding and services? Is it because they respect and protect privacy, or is the reason less noble—that we simply don’t know how and can’t afford to do it? Or worse, are we just being squeamish—do we shrink from the hard question of whether we have an obligation to use personal data targeting to prevent harm?

In human services, everyone agrees that it would be great if, whenever a low-income family or individual interacted with a provider (a school, clinic, or social services agency), they could be connected with every relevant resource; if a mother comes in looking for job training, for example, she could also enroll in health coverage, SNAP food assistance, and subsidized child care. This vision of a “no wrong door” system is still an unattained ideal. Many funders, nonprofits, and advocates struggle to create such a holistic, integrated system, but bureaucracy, limited funding, legacy human and IT systems, and even privacy policies frustrate them. The “no wrong door” ideal requires that a consumer present a need, so it is a “pull” model in the typology of push vs. pull strategies in marketing: when a consumer comes to “pull” a service, the system then seeks to meet more than just the presented need by connecting them with as many relevant benefits as possible.

The 2-1-1 information and referral service aims to advance this “no wrong door” goal, and to date, it also is a “pull” model—consumers call or search 2-1-1 sites seeking services. These 2-1-1 programs use rich local resource data, trained information and referral specialists, and the three-digit dialing code authorized by the Federal Communications Commission to connect more than 16 million people annually to resources (such as food, housing, health and mental health care, and education) by phone; many more also use 2-1-1 databases to find resources online. With smart search and “opt-in” permission from users, we could use that information to send follow-up messages that connect people to other resources or support beneficial lifestyle changes. The scale of data collected also could inform government and philanthropy about trends and gaps or disparities in access to resources. (This is something the 2-1-1 field is working toward.)

But can we get more aggressive? Might it be more effective to use big data to push resources to people who haven’t sought them out? Opt-in is important ethically, but it also takes work and is a bit of a barrier. If we could get results on a greater scale by pushing messages without an opt-in (the way search engines and social media applications do), would we? Take San Bernardino County in California, where approximately 140,000 people are eligible for SNAP food assistance but do not receive it. A supermarket chain or a grocers’ association probably could purchase or collect data on these families and reach out to them, and we wouldn’t criticize them—in fact, we might honor them for it. Shouldn’t a hunger charity be able and willing to do the same?

Big data has a lot of potential for good, and there is great excitement in the philanthropic world about its macro, system-level potential. But it clearly poses a significant threat. The annoyance of targeted ads is likely only the visible tip of the iceberg; using big data to exclude people or deny them services may be the huge, unseen base. In “Big Data Is Our Generation’s Civil Rights Issue and We Don’t Know It,” Alistair Croll provides a great summary of the promise and perils of big data:

We’re great at using taste to predict things about people. OkCupid’s 2010 blog post “The Real ‘Stuff White People Like’” showed just how easily we can use information to guess at race. It’s a real eye-opener. … They simply looked at the words one group used which others didn’t often use. The result was a list of “trigger” words for a particular race or gender.

Now run this backwards. If I know you like these things, or see you mention them in blog posts, on Facebook, or in tweets, then there’s a good chance I know your gender and your race, and maybe even your religion and your sexual orientation. And that I can personalize my marketing efforts towards you.

That makes it a civil rights issue.

If I collect information on the music you listen to, you might assume I will use that data in order to suggest new songs, or share it with your friends. But instead, I could use it to guess at your racial background. And then I could use that data to deny you a loan.
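The analysis Croll describes is, at bottom, a simple distinctive-vocabulary comparison: find the words one group uses far more often than everyone else, then score a new post against each group’s word list. Purely for illustration, a minimal sketch of that kind of inference might look like the following (the function names, the frequency-ratio scoring, and the smoothing are assumptions made for this example, not OkCupid’s actual method):

```python
from collections import Counter

def trigger_words(group_docs, other_docs, top_n=20, min_count=5):
    """Words a group uses far more often than everyone else.

    group_docs / other_docs: lists of token lists (e.g., lowercased words
    from profiles or posts). A crude stand-in for the analysis Croll describes.
    """
    group_counts = Counter(w for doc in group_docs for w in doc)
    other_counts = Counter(w for doc in other_docs for w in doc)
    group_total = sum(group_counts.values()) or 1
    other_total = sum(other_counts.values()) or 1

    def ratio(word):
        # Relative frequency in the group vs. everyone else, with add-one
        # smoothing so words absent from one side don't divide by zero.
        g = (group_counts[word] + 1) / group_total
        o = (other_counts[word] + 1) / other_total
        return g / o

    candidates = [w for w, c in group_counts.items() if c >= min_count]
    return sorted(candidates, key=ratio, reverse=True)[:top_n]

def guess_group(tokens, triggers_by_group):
    """'Run it backwards': score a new post against each group's trigger list."""
    scores = {group: sum(1 for w in tokens if w in triggers)
              for group, triggers in triggers_by_group.items()}
    return max(scores, key=scores.get)
```

Nothing in this sketch needs a name, an address, or a checkbox disclosing race or religion; a handful of word lists and a counter are enough to make a guess—which is precisely Croll’s point.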

Croll suggests that the answer “is to somehow link what the data is with how it can be used”—and that this will be extremely hard.

UN Global Pulse director Robert Kirkpatrick wrote in response: “Big data is a human rights issue. We must never analyze personally identifiable information, never analyze confidential data, and never seek to re-identify data.”

In my gut, I agree with Kirkpatrick; businesses and governments shouldn’t use that data in negative ways. But they do. Every day, millions of us give away the kind of information that allows companies to target ads toward us or deny us services because of a risk profile.

In that light, though we have qualms, we must take seriously the potential power of using data to push benefits to low-income and vulnerable people, while at the same time working toward Croll’s idea of coupling data with conventions on acceptable use. No doubt using data to target individuals poses serious logistical and ethical challenges for nonprofits and philanthropy, but it also could save and improve millions of lives. Let’s not avoid those challenges, but instead see if we can make good use of the dark arts of information technology.

