In 1983, at the height of the Cold War, just one man stood between an algorithm and the outbreak of nuclear war. Stanislav Petrov, a lieutenant colonel of the Soviet Air Defence Forces, was on duty in a secret command center when early-warning alarms went off indicating the launch of intercontinental ballistic missiles from an American base. The systems reported that the alarm was of the highest possible reliability. Petrov’s role was to advise his superiors on the veracity of the alarm, which in turn would affect their decision to launch a retaliatory nuclear attack. Instead of trusting the algorithm, Petrov went with his gut and reported that the alarm was a malfunction. He turned out to be right.

This historical nugget represents an extreme example of the effect that algorithms have on our lives. The detection algorithm, it turns out, mistook sunlight reflecting off high-altitude clouds for a missile launch. It is a sobering thought that a poorly designed or malfunctioning algorithm could have changed the course of history and resulted in millions of deaths.

Algorithms in Modern Society

Technologies such as artificial intelligence (AI), augmented reality (AR), self-driving vehicles, and the Internet of Things (IoT) are leading us into the Fourth Industrial Revolution. They are bridging the chasm between the physical world and the digital realm, hinting at a radical change in the way technology mediates human communication and experiences. The behaviors of these technologies are the result of complex algorithms. Code making up AI agents determines whether a customer sees a particular ad or receives a product offer while another customer does not. Rules determine the information AR overlays onto the world in front of our eyes. Programs guide self-driving vehicles on congested roads and make them brake if a child crosses their path. Rules in IoT devices decide when to trigger a tsunami alert. These examples highlight the hidden yet central role algorithms play in modern societies.


An autonomous vehicle. (Photo courtesy of Stanford University Dynamic Design Lab)

A computer algorithm—a digitally coded set of rules that solve a problem—is ethical when the procedures it encapsulates or the solutions it presents do not breach our moral principles. While we do not subscribe to any particular set of ethical principles, we believe algorithm behaviors and outcomes should be consistent with the ethics of their stakeholders. Without delving into the particulars of what can make an algorithm right or wrong, which may vary by setting or geography, we call on the technology and business communities to evaluate the ethics of algorithms against their own standards. Technology and algorithms can help humankind solve its big social and environmental problems. At the same time, as algorithms drive the behavior of an increasing number of processes and technologies embedded in our daily lives, we should be deliberate and avoid the creation of new and unintended problems: Are we paying attention to ensuring that the ethical principles of our societies also drive the behavior of our algorithms? Are we accurately assessing the social impact of algorithms? How can we develop algorithms that follow our ethical principles?

When Algorithms Challenge Our Ethical Principles

Social media platforms Facebook and Twitter are under scrutiny for their use of algorithms that target individuals with political ads that may accentuate their fears and sway their views. And Google and Facebook are again in the thick of controversy in the wake of the deadliest mass shooting in modern US history, as they relayed unverified news reports from dubious sources that included false information. The algorithms behind these platforms apparently contribute to the spread of misinformation, but we can hardly blame algorithms for performing the tasks they were coded to execute: finding news and personalizing the information presented to their users. Eric Schmidt, Alphabet’s executive chairman and a fervent proponent of consumer targeting, explained the evolution of targeting algorithms in an interview: “The power of individual targeting—the technology will be so good it will be very hard for people to watch or consume something that has not in some sense been tailored for them.” Schmidt also shared his position on improving the targeting algorithms and monetization: “The only way the problem [of insufficient revenue for news gathering] is going to be solved is by increasing monetization, and the only way I know of to increase monetization is through targeted ads. That's our business.” What seems patently missing in his comments about how far the algorithms can take targeting is any consideration of consequences beyond revenue generation.


A facial recognition algorithm. (Image by Sheila Scarborough, CC-BY 2.0)

Only a few weeks ago, in mid-September, when Hurricane Irma battered the Florida peninsula, the algorithms airline companies use to price flights increased rates in response to peaks in demand, as affected residents tried to evacuate high-risk areas. This is how airlines typically operate, and some may argue they should not be penalized for doing so. Even scholars such as Matt Zwolinski—philosophy professor and director of the Center for Ethics, Economics and Public Policy at the University of San Diego—contend that price gouging is a good feature of free markets, rationing supplies that are in high demand. Even so, before Irma reached the mainland, the airlines fell in line and turned off the algorithms, capping prices for flights out of affected areas. The social impact of not doing so would have been too high.
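The pricing behavior described above can be sketched in a few lines. This is a hypothetical illustration, not any airline's actual system; the base fare, the demand multiplier, and the emergency cap are all illustrative assumptions.

```python
# Hypothetical sketch of demand-based fare pricing with an emergency cap.
# All numbers here are illustrative assumptions, not real airline pricing.

def fare(base, demand_ratio, emergency_cap=None):
    """Price a seat as the base fare scaled by demand.
    demand_ratio is seats requested / seats available; prices surge only
    when demand exceeds supply. An optional cap, switched on during a
    declared emergency, overrides the surge."""
    price = base * max(1.0, demand_ratio)
    if emergency_cap is not None:
        price = min(price, emergency_cap)
    return round(price, 2)

print(fare(99.0, 3.5))                      # surge pricing: 346.5
print(fare(99.0, 3.5, emergency_cap=99.0))  # capped during evacuation: 99.0
```

Note that the ethically relevant decision here is not in the arithmetic but in whether and when the `emergency_cap` parameter gets used.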

Algorithms also transgress ethical redlines when their application involves minority groups, as in the case of hiring algorithms, lending algorithms, and face recognition algorithms. Years ago, as part of a research project, one of us (Ruben) was searching for predictors of loan repayment to build a microlending algorithm for a nonprofit. The objective was to generate a model that did not make use of traditional credit scores: a fairer algorithm. The initial solutions had decent predictive power, but work on the project stopped once we realized the predictors the algorithm chose were proxies for the socioeconomic indicators we were trying to avoid, and the algorithm was using them to discriminate against minorities. As the African-American population in the United States faces continuous setbacks in fighting for equal treatment by law enforcement, racially-biased face recognition algorithms that disproportionately misidentify minorities and result in the targeting of innocent African-Americans only perpetuate existing patterns of discrimination.

Forget Smart and Efficient: Are Your Algorithms Ethical?

Examples of algorithms resulting in unethical outcomes are commonplace. For organizations, employing unexamined—potentially unethical—algorithms can result in the digitization of biases and blind spots around ethical principles. Far from a technical discussion relegated to engineers, the conversation about the ethics of algorithms is one that businesspeople, nonprofit organizations, civic action groups, and technologists must have. We offer five recommendations to guide the ethical development and evaluation of algorithms used in your organization:

  1. Consider ethical outcomes first, speed and efficiency second. Organizations seeking speed and efficiency through algorithmic automation should remember that customer value comes through higher strategic speed, not higher operational speed. When implementing algorithms, organizations should never forget that their ultimate goal is creating customer value, and fast yet potentially unethical algorithms defeat that objective.
  2. Make ethical guiding principles salient to your organization. Your organization should reflect on the ethical principles guiding it and convey them clearly to employees, business partners, and customers. A corporate social responsibility framework is a good starting point for any organization ready to articulate its ethical principles.
  3. Employ programmers well versed in ethics. The computer engineers responsible for designing and programming algorithms should understand the ethical implications of the products of their work. While some ethical decisions may seem intuitive (such as do not use an algorithm to steal data from a user’s computer), most are not. The study of ethics and the practice of ethical inquiry should be part of every coding project.
  4. Interrogate your algorithms against your organization’s ethical standards. Through careful evaluation of your algorithms’ behavior and outcomes, your organization can identify those circumstances, real or simulated, in which they do not meet those standards.
  5. Engage your stakeholders. Transparently share with your customers, employees, and business partners details about the processes and outcomes of your algorithms. Stakeholders can help you identify and address ethical gaps.
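Recommendation 4 can start with something as simple as comparing outcome rates across groups. The sketch below applies the widely used "four-fifths" disparate-impact heuristic to simulated approval decisions; the groups, the decisions, and the 0.8 threshold are illustrative assumptions, and passing this one check does not by itself make an algorithm ethical.

```python
# Hypothetical audit sketch: the "four-fifths" disparate-impact test
# applied to simulated approve/deny decisions (1 = approved).

def selection_rate(decisions):
    """Share of applicants approved."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below roughly 0.8 are a common red flag worth investigating."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Simulated decisions for two applicant groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved -> 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved -> 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("flag: outcomes differ sharply across groups; review the model")
```

An audit like this belongs in the same pipeline as accuracy testing, run on both real and simulated populations, so that ethical regressions surface as routinely as performance regressions do.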

Algorithms can be an asset to nonprofit organizations, reducing costs and making processes more efficient, but they can also be an ethical liability. The unexamined algorithm may encode existing biases and forms of unethical behavior, and perpetuate them. In an age where digitalization and automation are the norm, and algorithms accomplish increasingly advanced tasks without human input, it is essential that they abide by the same ethical standards as the rest of the organization.