Close up of ChatGPT and other app icons on a mobile screen (Photo by iStock/Robert Way)


The brouhaha at OpenAI is just another reminder about Silicon Valley: It’s all about the money. Once OpenAI took hundreds of millions of dollars in for-profit investment funding, the capitalists were going to be in the driver’s seat. Understanding these dynamics is crucial for a community of social innovators under pressure to “do more AI.”

Much of the hype and attention around generative AI is calculated to serve financial objectives. Painting AI as a technology so incredibly exciting and powerful that it poses an existential threat to humanity (even if it is currently far from that) does wonders for the financial valuation of generative AI companies like OpenAI.

The leadership struggle at OpenAI last year was almost certainly not initiated with a financial intent, but the nonprofit board members who tried to assert what they saw as their responsibility lost in the end. Of course, the nonprofit mission of OpenAI was atypical: to ensure the safer creation of artificial general intelligence, a.k.a. our future robot overlords. With the CEO reinstated and the board replaced by one seen as much friendlier to the investors (especially Microsoft, OpenAI’s major tech company partner), it’s now clear that helping themselves make money has taken precedence over helping humanity.

The fiasco has certainly not hurt OpenAI. Their next round of investment is reported to be based on a valuation of over $100 billion. This has all the makings of a classic investment bubble, a textbook Gartner hype cycle event. Just because there’s a feeding frenzy among investors and companies around getting rich with generative AI does not mean that every nonprofit should stop what it is doing and invest charitable funding in this technology. Let the investors drop billions of their funds to help these companies search for the most valuable applications of their innovations. As amazing as the technology is, social impact leaders need to evaluate it today against real needs with clear-eyed intention.


The social good sector greatly needs better technology, particularly basic software and data improvements, or even plain old regular AI technology. It’s not clear that starting with generative AI before addressing these more foundational technology needs is a wise investment of scarce nonprofit resources. Our social impact missions to serve people and the planet should remain our North Star, not helping well-funded AI startups concentrate wealth.

Replacing Humans With AI?

Tech pundits like Cory Doctorow have pointed out that the amounts of money being plowed into the latest AI technologies cannot be justified unless these technologies become immensely profitable. And the only plausible path to immense profits is replacing humans with machines. As impressive as the latest AI tech is, it’s not yet ready to replace human beings at scale. Even if it were, it isn’t at all clear why wholesale job losses are in the interest of society.

Replacing humans with robotic systems is problematic because the AI systems actually aren’t that smart. They don’t have judgment (or empathy, or compassion). Not that replacing humans hasn’t been tried!

  • Self-driving cars have been the coming thing for years, but the price of this experiment became clear when a Cruise self-driving taxi maimed a pedestrian in San Francisco by doing something a human driver wouldn’t do: after a collision, it decided to drive 20 feet to park at the side of the road, unaware that it was dragging a woman caught under the car.
  • The National Eating Disorders Association fired its human counselors (who were in the process of unionizing) and put Tessa, a generative AI-based chatbot, on the lines. Predictably, Tessa was caught a week later repeatedly giving helpline texters the exact opposite of the advice that a professional eating disorder counselor would give. That’s probably because the average advice on the internet (where chatbots get their training content) about weight and eating isn’t sound. The result? Helpline shut down, people in need shortchanged, and an organization with immense reputational damage.

A dramatically underappreciated aspect of AI solutions is the inevitable cost of their mistakes. The press is full of other examples where AI has failed to measure up, and no AI tool is perfect in real-world applications. For the social sector, it’s crucial to choose applications where the cost of an AI error is minimal, or can be actively mitigated by humans. Applications where an error could cause significant harm to your stakeholders or your organization, as in the case of Tessa, should be avoided. The best approach is to keep a human in the loop to catch errors and fix them. Even better, imagine how to use AI solutions to make the human beings in your organization (and those you serve) smarter, more effective, and more powerful. Do not hand life-or-death situations over to an unsupervised robot!

What Is a Social Innovator to Do?

First, don’t buy into the hype. Seven years ago, the tech industry was similarly abuzz about blockchain. That hype resulted in, as far as I can find, zero examples of blockchain tech delivering social impact at scale. Don’t get me started on the metaverse! Think twice about whether developing an AI application would require putting data from the vulnerable communities you serve into a for-profit company’s databases or models, which might enable those companies to betray the interests of poor and disadvantaged people. It is not Silicon Valley’s explicit goal to fail these people; it just happens frequently as a by-product of the relentless pursuit of profits. Unlike for-profit tech companies, the highest obligation of nonprofit organizations is to act ethically in the interests of the people we serve. Don’t let the for-profits get their hands on data from your communities.

Second, stop, look and listen. Do not give in to the exhortations from industry to run around with an AI hammer looking for nails. Projects created with a primary focus on the tech to be used, rather than the real-world need, tend to be doomed from the start. Look at solving the real problems you have with the best and most affordable tech that is a good match for the job, which might not be AI-based at all. Listen to your peer leaders for case studies of what has worked with AI-based technology, and even more importantly, listen to where it failed to work. And don’t listen to technologists or companies who promise miracles they are unlikely to deliver. At least not for social impact applications.

Third, start slow and experiment. Readily available generative AI tools are free or modestly priced, and can be quite useful for assisting with writing tasks. You are highly likely to get some value out of the standard products relative to their cost, especially if you are not trying to replace your staff wholesale. These tools are not ready to replace humans.

Almost all nonprofits lack the staff capacity to build AI solutions themselves. Investing in custom AI deployments is quite expensive, thanks to data scientists commanding big salaries. The case for investing in AI has to be truly compelling before spending hundreds of thousands of dollars paying consultants to build something for your enterprise.

Real-World Examples of Generative AI Tools in Social Impact

As a longtime AI technologist, I love what AI can do. Although there is currently an outsized bubble of hype thanks to OpenAI and its peers, regular AI has a far better track record of actually delivering value in the social sector. It’s just that AI will probably work for only 5-10% of the flashy applications I hear bandied about these days. By keeping ethics and mission in mind, it gets easier to come up with successful applications. Here are a few examples:

Spell-Checker on Steroids

First, ChatGPT and its multiplying cousins and competitors have been derisively called “stochastic parrots” and “spicy auto-complete.” My nickname for them is “spell-checkers on steroids.” That may sound derisive too, but I mean it positively. If a modern spell-checker is an indispensable writing tool, imagine a next generation that is five or ten times more powerful for certain writing tasks!

Joan Mellea, the co-founder of my nonprofit, Tech Matters, figures that ChatGPT saves her 20-25% of her time on writing tasks. It’s very handy for squeezing a 300-word answer to a grant question down to a 250-word limit, or for taking an essay or explanation drafted by someone on the team and simplifying it. She’s used it to create policies needed to comply with government or funder requirements. One critical point: as with a spell-checker, Joan never trusts the unedited output of ChatGPT. But unlike a spell-checker, where you just accept or reject its recommendations, Joan uses ChatGPT as a source of ideas for saying things more clearly. Her bottom line: it’s great for people who understand their subject material and want a tool to help them communicate more clearly. However, it is going to create big problems for someone who doesn’t know what they are writing about, because they are likely to miss the errors.
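Joan works in the ChatGPT web interface, but the same “tighten to a word limit” task can be scripted by teams that want to build it into a workflow. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name, prompt, and helper function are my own illustrative assumptions, not Tech Matters’ actual setup, and the output still needs review by a human who knows the subject matter.

    # Minimal sketch: asking an LLM to tighten a draft to a word limit.
    # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
    # in the environment; the model name and prompt are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def tighten(draft: str, word_limit: int) -> str:
        """Ask the model to compress a draft under a word limit, preserving meaning."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable chat model; an assumption
            messages=[
                {"role": "system",
                 "content": "You edit nonprofit grant answers. Preserve meaning, cut filler."},
                {"role": "user",
                 "content": f"Rewrite this in at most {word_limit} words:\n\n{draft}"},
            ],
        )
        return response.choices[0].message.content

    # The result is a suggestion, not a final answer: a knowledgeable human
    # still reviews and edits it, as the article recommends.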

Guide by the Side

The problems with the Tessa chatbot on the weight disorder helpline were all too predictable. The large language models behind tools like ChatGPT don’t understand what they can and can’t say to people in crisis, and today it would be unethical to inflict them on help-seekers. I live in fear that someone is going to reach out about possibly harming themselves and an open-ended chatbot will encourage them to do so.

Keeping the cost of errors in mind, though, it’s not hard to imagine many exciting AI applications for the helpline movement, where I have been working for the last five years. For example, the Danish child helpline Børns Vilkår is staffed by volunteers. They have created an AI “Guide by the Side” for their volunteers, which watches the chat conversation between a volunteer and a young person seeking counseling. The AI guide spots up to three conversational topics (parents getting a divorce, worries about COVID, substance abuse) and pops up helpful suggestions to the volunteer to do a better job of counseling (reminding the texter of their rights during a parental divorce, explaining health facts). If the AI guide surfaces an issue which is not relevant, the volunteer just ignores the suggestion.
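The published description doesn’t include implementation details, but the core pattern is simple: watch the conversation, flag known topics, and surface an optional tip that the volunteer can ignore. Here is a deliberately simplified Python sketch of that pattern; the keyword matching stands in for whatever classifier Børns Vilkår actually uses, and the topics and tips are hypothetical examples, not their real content.

    # Simplified sketch of the "guide by the side" pattern: scan each incoming
    # message for known topics and surface an optional tip to the volunteer.
    # Keyword matching stands in for a real classifier; topics and tips are
    # hypothetical examples.
    TOPIC_KEYWORDS = {
        "parental_divorce": ["divorce", "custody", "separated"],
        "covid_worries": ["covid", "lockdown", "quarantine"],
        "substance_abuse": ["drinking", "drugs", "alcohol"],
    }

    VOLUNTEER_TIPS = {
        "parental_divorce": "Remind the texter of their rights when parents divorce.",
        "covid_worries": "Share age-appropriate health facts and coping strategies.",
        "substance_abuse": "Ask open questions; offer referral resources if relevant.",
    }

    def suggest_tips(message: str, max_tips: int = 3) -> list[str]:
        """Return up to max_tips suggestions for the volunteer; they may ignore them."""
        text = message.lower()
        tips = []
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(word in text for word in keywords):
                tips.append(VOLUNTEER_TIPS[topic])
        return tips[:max_tips]

    # The tip pops up for the volunteer, never for the young person directly.
    print(suggest_tips("my parents are getting a divorce and I don't know what to do"))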

Another great example is the Trevor Project, whose helpline supports LGBT youth and which had a bottleneck in training volunteers. They needed more human trainers than they had to deal with rapid growth and the predictable turnover in volunteers. So they built an AI-driven conversational simulator for training, which role-plays a young person reaching out for counseling. New volunteers would start with training sessions where the AI chatbot played the help-seeker; if the simulator made a mistake, it was unlikely to have a negative impact on a real LGBT youth looking for counseling. After practicing with the chatbot, the volunteers would graduate to training sessions with a human trainer to confirm they were ready to take real counseling conversations. This allowed Trevor to train many more volunteers than when human trainers ran all of the training sessions.

More Good AI Examples

Beyond fundraising and helplines, other nonprofits are using generative AI tools for user support. Rather than using open-ended chatbots which can be asked about anything (and might end up saying anything!), the responsible applications are more close-ended. This means that the topics being discussed are limited to the task being performed. For example, if you have 100 help articles on your website and a chatbot isn’t allowed to do more than point you at an article, that is not a risky application. The cost of an error is that the user is shown a help article which is not particularly useful.
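To make the close-ended idea concrete, here is a minimal Python sketch of a support bot whose only possible action is pointing at an existing article. The article list and the simple word-overlap scoring are hypothetical (a real system might use embeddings or a vendor tool), but the safety property is the point: the bot cannot compose advice, so the worst-case error is an unhelpful link.

    # Sketch of a close-ended support bot: it can only point to an existing
    # help article, never compose free-form advice. Articles and the overlap
    # scoring are illustrative placeholders.
    HELP_ARTICLES = {
        "How to reset your password": "https://example.org/help/reset-password",
        "Updating your donation amount": "https://example.org/help/update-donation",
        "Finding your tax receipt": "https://example.org/help/tax-receipt",
    }

    def best_article(question: str) -> tuple[str, str] | None:
        """Return the (title, url) sharing the most words with the question."""
        question_words = set(question.lower().split())
        scored = []
        for title, url in HELP_ARTICLES.items():
            overlap = len(question_words & set(title.lower().split()))
            scored.append((overlap, title, url))
        overlap, title, url = max(scored)
        return (title, url) if overlap > 0 else None

    # Worst case: the suggested article isn't useful and the user asks a human.
    match = best_article("Where can I find my tax receipt from last year?")
    print(match or "No matching article; routing to a human.")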

Of course, the OpenAI fervor is based on the latest AI technology, generative AI. But many other AI applications are already widely deployed. I started my career with Benetech making reading machines for the blind, with AI technology that was leading edge 30 years ago. MapBiomas is a Brazilian Skoll Award-winning organization using AI to analyze land use from satellite imaging. They can recognize a new logging road going into a protected rainforest within a day or two, hopefully reducing illegal logging. My team at Tech Matters is even using reasonably basic AI to design an app for recognizing soil types, so that farmers and ranchers can quickly understand what can grow in a given field.

Conclusion

The responsibility of social change leaders to the people we serve is central to ethical and effective action. Unlike the commercial tech industry, our North Star is not making money, it’s making positive change. Our communities are counting on us to apply new technology mindfully, with their best interests in mind. I have no doubt that AI is going to play a larger and larger role in social change, but it’s not going to happen this year, and it’s not going to have the positive impact being promised by industry. I hope you join me and other nonprofit technologists in helping to see that AI gets applied ethically for maximum positive social impact.

