The Smart Nonprofit: Staying Human-Centered in An Automated World

240 pages, Wiley, 2022


“Smart tech” is an umbrella term we created to describe advanced digital technologies that make decisions for people, instead of people making them. It includes Artificial Intelligence (AI) and its subsets and cousins, such as machine learning, natural language processing, smart forms, chatbots, robots, and drones.

Right now, smart tech is best at rote tasks like filling out intake forms and answering the same questions over and over again (“is my contribution tax-deductible?”). However, the technology is quickly embedding itself into the heart of nonprofit work in a wide variety of functions. As a result, we anticipate that staff will be freed to focus on other activities. We call this benefit the “dividend of time,” which can be used to, say, reduce staff burnout, get to know clients in deeper ways, and focus on problem-solving, like addressing the root causes of homelessness in addition to serving homeless people.
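To make the rote end of the spectrum concrete, here is a minimal Python sketch of the kind of repetitive question-answering a donor-facing chatbot automates. The FAQ entries, keywords, and fallback message are our own illustrative assumptions, not drawn from any particular product.

```python
# A minimal sketch of rote Q&A for a nonprofit chatbot. The entries and
# keyword matching below are illustrative assumptions, not a real product.
FAQ = {
    "tax-deductible": "Yes! We are a 501(c)(3) organization, so your "
                      "contribution is tax-deductible to the extent allowed by law.",
    "receipt": "A receipt is emailed automatically after every donation.",
    "volunteer": "You can sign up for volunteer shifts on our website.",
}

def answer(question: str) -> str:
    """Return the canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    # Anything the bot cannot match is routed to a person, which keeps
    # humans handling the non-rote questions.
    return "Let me connect you with a staff member who can help."

print(answer("Is my contribution tax-deductible?"))
```

Every question a sketch like this absorbs is a few minutes returned to staff, which is where the “dividend of time” comes from.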

Smart tech has recently reached an inflection point common to technologies that enter everyday use: an enormous increase in computing power meets a dramatic decrease in the cost of the technology. As a result, technology that was previously available only to elite institutions like NASA, or embedded in wildly complicated systems, has suddenly become available to everyday people and organizations for fundraising, accounting, human resources, service delivery, and more.

Grabbing “smart” software off the shelf may look like a technical decision, but at its heart it is a deeply and profoundly human challenge that requires informed leadership to do well. Organizations need to identify the sweet spot that balances the capability of the technology with the interests and needs of the people inside and outside the organization. Some people call this convergence “co-botting.” The responsibility for identifying this sweet spot cannot rest with the IT department alone. Organizational leaders need to be interested, knowledgeable, and engaged enough to ensure smart tech is used in human-centered ways.

In the following excerpt from our chapter on staying human-centered in our new book, The Smart Nonprofit, we discuss how being human-centered means prioritizing the interests, strengths, and unique talents of people over the speed and wizardry of the technology. Valuing humans has never been more important as our workplaces become more and more automated.—Allison Fine and Beth Kanter

* * *

Cesar Chavez said, “It was never about grapes or lettuce and always about people.”1 The same holds true for smart tech. It is not about the code or the wizardry; it’s about ensuring that people matter the most. Being human-centered means prioritizing the interests, strengths, and unique talents of people over the speed and wizardry of the technology. Valuing humans has never been more important as our workplaces become more and more automated.

Smart tech is a fundamentally new way of working and has the potential to do more harm than good if treating people well, inside and outside the organization, isn’t the top priority. This chapter explores the differences between human and machine intelligence, describes how to marry people and bots inside of organizations, and outlines steps for designing human-centered efforts to ensure smart tech is enhancing, not subjugating, the needs of people.

Man vs. Machine

Since the 1950s, experts have been forecasting that smart tech will reach human-level intelligence in 20 years. “In other words, it’s been 20 years away for 60 years,” according to MIT Professor Thomas Malone.2

Over the past few years, Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have expressed concerns in the media about the potential for AI to be smarter than humans. There have been a number of surveys asking when this will happen and they all reach the same conclusion: we simply don’t know. But we can say with absolute confidence that right now human and machine intelligence are not equal.3

In a gross oversimplification, intelligence has two components: fact-based knowledge and emotional intelligence. Smart tech is clearly gaining ground on fact-based knowledge, but it is only in the very early stages of incorporating emotional intelligence. At the heart of emotional intelligence is empathy: understanding what other people are feeling. Smart tech is not as empathetic as people yet, and may never be, but it can mimic empathy through sentiment analysis.

Smart technologies are more accurate, faster, and more consistent than people at particular tasks like filling out forms. They never get tired or need to take a lunch break or vacation. However, bots are currently not empathetic. What they can do is simulate an emotional response. For instance, a customer support chatbot may be taught to apologize in a caring or helpful tone, even calling you by your name. Imitating emotions is not the same as having them or understanding them.
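To see the distinction in miniature, here is a minimal Python sketch of how sentiment analysis can drive a templated “empathetic” reply. The word lists, scoring rule, and response templates are our own illustrative assumptions; real systems use trained models, but the principle is the same: the bot matches patterns and fills in a template rather than feeling anything.

```python
import string

# A minimal sketch of how a support bot "mimics" empathy: score the
# sentiment of a message, then pick a templated reply. The word lists
# and templates are illustrative assumptions, not any real product's logic.
NEGATIVE = {"frustrated", "angry", "upset", "terrible", "worst"}
POSITIVE = {"thanks", "great", "happy", "wonderful"}

def sentiment(message: str) -> int:
    """Crude lexicon-based score: positive minus negative word hits."""
    cleaned = message.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def reply(name: str, message: str) -> str:
    if sentiment(message) < 0:
        # The bot sounds caring, but it is only matching words to a template.
        return f"I'm so sorry you're having trouble, {name}. Let me help."
    return f"Glad to hear it, {name}! Is there anything else you need?"

print(reply("Maria", "I'm frustrated, the donation page keeps failing"))
```

The apology reads as warm, but nothing in the code understands distress; that gap is exactly why imitating emotions is not the same as having them.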

People have the unique ability to imagine, problem solve, anticipate, feel, and judge changing situations, which allows us to shift perspectives. Our memories, hopes, concerns, and personality also contribute to how we react to the world around us. Smart technologies simply are not capable of empathy, love, or other emotions, yet. Stuart Russell, professor of computer science at the University of California, Berkeley, writes, “. . . while AI systems may be able to mimic human empathy, they can’t truly understand what empathy is like. It’s a distinction that nonprofits may not understand, but it is an essential tenet of being human-centered.”4

The gap between human and bot intelligence is reflected in the growing field of therapy chatbots. Dr. Freud in a Box and other therapy chatbots are attractive products because they are inexpensive and always available. But research has shown that bots make terrible therapists because of smart technology’s limited ability to understand subtext.5

There are other significant challenges to therapy bots. Private companies are often not transparent about how their algorithms work, amplifying the potential for the chatbot therapist to provide bad or biased advice.

If all this were not bad enough, there is the potential to weaponize private information by sharing it with marketing companies. For instance, Woebot is a chatbot therapist that provides cognitive behavioral therapy through Facebook Messenger. (Editorial note: Woebot is no longer on Facebook Messenger and is available via the app store on a smartphone.) It is not regulated or licensed as a therapist, and although the company has no plans to do so today, it could choose to sell users’ data to pharmaceutical companies or employers in the future.6

Co-Botting

Getting the balance right between people and smart tech is called co-botting or augmented intelligence.7 H. James Wilson and Paul R. Daugherty have conducted research with over 1,500 companies and found that significant performance improvements happen when humans and machines work together. “Through augmented intelligence, humans and AI actively enhance each other’s complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter.”8

Smart tech is an equal opportunity job disrupter and doesn’t care if a job is low paying or high paying. If it involves analysis of large amounts of data, the job is going to change. Curtis Langlotz, a radiologist at Stanford, predicts, “AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t.”9

Most experts doubt AI will replace doctors any time soon because even if an algorithm is better at diagnosing a particular problem, combining it with a doctor’s experience and knowledge of the patient’s individual story will lead to a better treatment and outcome.

The Trevor Project provides crisis counseling to young lesbian, gay, bisexual, transgender, queer, and questioning (LGBTQ+) people. It created Riley, a chatbot that helps train counselors by providing real-life simulations of conversations with potentially suicidal teens. Riley is always available for a training session with volunteers, which helps the staff scale the number of trained counselors without adding more resources. Riley will never work on the front line directly with youth in crisis because The Trevor Project sees this role as a human-centered one.10

Co-botting goes beyond working with chatbots. Benefits Data Trust is a Philadelphia-based poverty reduction organization. It created a co-botting system that integrates smart tech into its call-in center, helping staff assist clients in navigating and completing public benefits application processes. The pain point they were trying to solve was the enormous amount of time and documentation it takes for clients to apply for and receive benefits. The computer system was trained on thousands of interactions between call-in staff and clients to make recommendations among dozens of possible public benefits. The system also pre-populated forms for clients, saving staff an enormous amount of time.
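Here is a simplified Python sketch of those two jobs: recommending likely benefits and pre-populating an application. Benefits Data Trust’s actual system was trained on thousands of staff-client interactions; the eligibility rules, thresholds, and form fields below are invented purely for illustration.

```python
# A simplified sketch of the two jobs described above: recommending likely
# benefits and pre-populating application forms. The rules and fields here
# are hypothetical, not actual program criteria.
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    household_size: int
    monthly_income: float
    age: int

# Hypothetical eligibility thresholds, invented for illustration.
RULES = {
    "SNAP": lambda c: c.monthly_income < 1500 * c.household_size,
    "LIHEAP": lambda c: c.monthly_income < 2000 * c.household_size,
    "Medicare Savings": lambda c: c.age >= 65 and c.monthly_income < 1800,
}

def recommend(client: Client) -> list[str]:
    """Suggest benefits a counselor should discuss with the client."""
    return [name for name, rule in RULES.items() if rule(client)]

def prefill(client: Client, benefit: str) -> dict:
    """Pre-populate an application so staff verify fields instead of retyping."""
    return {"applicant": client.name, "benefit": benefit,
            "household_size": client.household_size,
            "monthly_income": client.monthly_income}

c = Client("J. Rivera", household_size=3, monthly_income=2800, age=41)
print(recommend(c))          # ['SNAP', 'LIHEAP']
print(prefill(c, "SNAP"))
```

The design point is that the output is a suggestion handed to a human counselor, not a decision delivered to the client; the staff member stays in the loop.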

Ravindar Gujarl, chief data and technology officer at Benefits Data Trust, told us, “At the end of the day, our role as a nonprofit is to create a human connection. We won’t replace our call-in staff who directly interface with our clients. Our nonprofit’s work is about building relationships with our clients. They come to us in distress, and we want them not to have to worry about collecting documents or wading through a complicated application process.”11

These examples involved careful planning to ensure that the technology augmented and didn’t replace the work of staff. There is no special formula for ensuring you get the right balance between people and technology. It takes careful planning, monitoring, and continuous adjustments to ensure your organization is staying human-centered and getting the best out of both. Without this kind of care and thoughtfulness, your nonprofit could end up adding a bot like Flippy to your staff.

In March 2018, Miso Robotics and CaliBurger, a fast-food franchise in Southern California, announced the public debut of “Flippy,” the world’s first autonomous robotic kitchen assistant powered by artificial intelligence. Flippy’s job was to flip burger patties and remove them from the grill for its human co-workers, who would put the cheese on top at the right moment and add the extras, such as lettuce and sauce, before wrapping the sandwiches for customers.

The press release for the launch described how Flippy would disrupt and transform the fast-food industry by taking over the hot, greasy, and dirty task of flipping burgers. The company touted Flippy as a cost-effective and highly efficient solution that could flip 150 burgers per hour, far more than the cooks it was replacing. What the press release didn’t mention was that Flippy also wouldn’t complain about the low pay, scanty benefits, and long hours.12

After two days on the job, Flippy was fired. News of Flippy, the robot cook, had gone viral on social media, prompting a surge in interest, and while Flippy flipped away, the human kitchen staff could not keep up with the demand. The restaurant realized it needed to spend more time on its internal systems and on training people to work side-by-side with the robots.

This story shows how easy it is for an organization to choose a bot to solve a problem without engaging staff in the process and keeping the entire system human-centered.13,14

Human-Centered Design

COVID-19 highlighted the bad habit some organizations have of not staying human-centered during stressful times. A hospital system in Washington State welcomed donors who had given at least $10,000 to set up vaccination appointments on an “invite-only” basis. The chief executive of a high-end nursing home and assisted-living facility in West Palm Beach, Florida, invited board members and major donors to receive immunizations. These were not the only examples of hospitals and care facilities offering donors first shot at the shots.

Mike Geiger, president of the Association of Fundraising Professionals, said in response, “The idea of hospital systems, or any charity, ignoring protocols, guidance, or restrictions—regardless of origin—and offering certain donors and board members the opportunity to ‘skip the line’ and receive vaccinations ahead of their scheduled time is antithetical to the values of philanthropy and ethical fundraising.”15

While this example is not specifically about smart tech, it illustrates how easy it is for organizations to slip away from keeping clients and patients front and center. The use of smart tech makes staying human-centered even more pressing. We recommend engaging with end users through human-centered design techniques noted in the sidebar. Human-centered design focuses on developing deep empathy for end users or those who are impacted by smart tech. At the heart of this process is designing processes and services with people, not at them, through interviews, observation, and developing personas or models of end users to test processes and assumptions.

There are many excellent tools and resources for human-centered design. The essence of these processes is to:

  1. Get input from key stakeholders about what issues are most important to them.
  2. Outline an idea, process, or service that delineates responsibilities.
  3. Test, reflect, improve.

Benefits Data Trust used this kind of process to determine which parts of its process should be automated. Ravindar said, “You can’t build an algorithm that powers a public benefit system without getting feedback from the people using it.”16

Conclusion

The very first step nonprofits must take when embracing smart tech is to put humans first and to understand deeply how machines and people can work together. As we have discussed throughout this chapter, human-centered principles and approaches are critical for the successful use of smart technologies by nonprofits.

* * *

Human-Centered Design Resources

Step-by-step guidance for implementing a human-centered design process is beyond the scope of this book, but there are many excellent resources that offer it. Many are techniques that any nonprofit can use without hiring an expensive consultant.

If you want to quickly get up to speed, we recommend these resources for additional reading about human-centered design. Many of these organizations also offer training.

IDEO Design Kit: IDEO has been a thought leader in human-centered design methods. The design firm has a nonprofit spinoff (ideo.org) that focuses on methods for nonprofits and social change and offers many free, practical resources and examples. In addition, IDEO has developed specific human-centered design methods for artificial intelligence, including a set of cards to help teams understand the unintended consequences of smart technologies.

IDEO.org Design Kit: Methods
https://www.designkit.org/methods

AI & Ethics: Collaborative Activities for Designers
https://www.ideo.com/post/ai-ethics-collaborative-activities-for-designers

LUMA Institute: The LUMA System is one of the most practical, flexible, and versatile approaches to design thinking. It offers a playbook with simple techniques that anyone can use.

LUMA System
https://www.luma-institute.com/about-luma/luma-system/

Stanford Design School: In 2018, the d.school (as it is known) launched an initiative called “Radical Access,” a program and set of resources to develop fluency in emerging technologies as a medium of design for all people. The rationale is that for any use case of artificial intelligence to serve us, we must be involved in the design. These two techniques are especially useful for designing human-centered algorithms or mapping problems to solutions for artificial intelligence.

I Love Algorithms
https://dschool.stanford.edu/resources/i-love-algorithms

Mapping Problems to Solutions: Artificial Intelligence
https://dschool.stanford.edu/resources/map-the-problem-space

Participatory Machine Learning: The practice of using human-centered design methods to inform the design and iteration of automation projects. Google recently published a guide that actively involves a diversity of stakeholders (technologists, UX designers, policymakers, end users, and citizens) in giving feedback on a project. The guidebook provides an overview of how human perception drives every facet of machine learning and offers worksheets on how to get user input.

People + AI Guidebook
https://pair.withgoogle.com/

Agentive Design: The design of chatbots and “intelligent agents for automation” must be grounded in human-centered design principles. The concept was developed by Chris Noessel, an interface designer at IBM Watson. Its principles include a focus on easy setup and informative touch points: when the agent is working, it stays out of sight, and when a user must engage its touch points, those moments deserve attention and consideration. Even so, well-designed chatbots and agents require constant attention to manage. Effectively designing a chatbot or intelligent agent also requires extensive user testing and feedback to train it properly; the more it interacts with humans, the better it learns to respond. This is different from designing other types of technology, where the user, rather than the programming code, performs the actions. The metaphor often used is that it is less like designing a hammer and more like designing a butler.