The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking
Shannon Vallor
272 pages, Oxford University Press, 2024
“What happens to a person, or an intelligent species, when they stop telling their own story?” philosopher Shannon Vallor wonders at the beginning of her new book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. “What do we lose when self-knowledge and self-determination yield to the predictive power of an opaque algorithm?”
Emerging and rapidly developing technologies have always been met with existential panic about the future of humanity. In the early 19th century, Mary Shelley’s Frankenstein depicted the awesome potential of electricity to galvanize life. In the 20th century, the sci-fi and fantasy genres represented the imagined futures of nuclear technologies and alien encounters. Similarly, the anxiety about artificial intelligence (AI), Vallor says, stems not from a real, external threat but from humans and their choices. “AI does not threaten us as a future successor to humans,” she writes. “It is not an external enemy encroaching upon our territory. It threatens us from within our humanity.”
Indeed, it is not the technology that is inherently dangerous, Vallor contends, but its designers. AI is founded “on the values of the wealthy postindustrial societies that build them”; thus, it holds up “a mirror of ourselves, not as we ought to be or could be, but as we already are and have long been,” she explains. The automated technology’s algorithms encode patterns of human bias and bigotry that have led to well-known public AI failures. For example, Amazon’s internal recruitment tool, which used machine learning to assess candidate applications in an effort to remove human bias from the hiring process, was scrapped in 2018 because it was downranking women applicants. And a 2016 ProPublica study found that an algorithmic tool offering guidance to courts in bail and sentencing decisions was predicting almost twice the false-positive rate of recidivism for Black defendants as it did for white defendants.
Vallor’s primary claim is that AI’s biggest threat to humanity is its ability to make us forget our actual humanity—our agency, our creativity, our capacity for care—because it is “constructed entirely from the amalgamated data of humanity’s past and is based on optimizing algorithms that are mathematically guaranteed to reproduce the unsustainable patterns of the past.” To illustrate her argument, Vallor invokes the myth of Narcissus, a handsome youth who found himself enthralled with the beautiful boy he saw while bending over a reflecting pool. So enamored with the person he saw looking back at him, he starved himself to death while waiting for the person to emerge from the water. “Our dependence on these mirrors for self-knowledge risks leaving us captive like Narcissus, unwilling to move forward and leave behind what the mirror shows,” she writes. “At the very moment when accelerating climate change, biodiversity collapse, and global political instability command us to invent new and wiser ways of living together, AI holds us frozen in place, fascinated by endless permutations of a reflected past that only the magic of marketing can disguise as the future.”
Vallor asserts that our collective fascination and wholesale optimism about AI, like Narcissus’ enchantment with his reflection, have caused us to ignore the realities and limitations of what we are seeing. And again, like Narcissus, we cannot recognize who we see in the AI mirror, which can only offer a distorted view of our humanity. By allowing AI systems to shape what is an ideal expression of us, we are “surrendering every hope of making ourselves more than what we have already been,” Vallor argues, “[because] these tools are increasingly being used to tell us who we are, what we can do, and who we will become.”
Vallor’s use of the mirror metaphor is more complex than simply describing the nature of AI systems and their relationship to humans as that of reflection. Analogue mirrors create reflections of us when we look into them, she observes, while digital mirrors like AI continue to present something like a reflection even in our absence—creating the illusion of sentience, with the potential to deceive us into believing that the machine is like us or smarter than us.
“The AI mirror phenomenon is revealed in … data-powered machine learning models designed to collect, ingest, and project an image of what is nearest to our being—human words, movements, beliefs, judgments, preferences, and biases, our virtues and our vices,” she says. “It is these tools that are increasingly being used to tell us who we are, what we can do, and who we will become.”
AI systems are thus touted as more optimal—more efficient, more accurate, and more satisfying—than humans, not just in the workplace but in relationships. The growing belief in AI’s superiority, in Vallor’s estimation, is the death knell of humanity. “It is the gradual erosion of human moral and political confidence in ourselves and one another,” she writes. “In the coming years, we will hear the same song again and again: that humans are slower, weaker, less reliable, more biased, less rational, less capable, less valuable than our AI mirrors.”
For Vallor, the source of the problem is also the answer. “We are the source of the danger to ourselves from AI, and this is a good thing—it means we hold the power to resist, and the power to heal.” The very fact of human agency over technology is what Vallor demands we remember if we are to reimagine the purpose and uses of AI for the future. Yet if AI remains a mirror of humanity’s past, it will be incapable of helping us devise solutions to our most pressing issues. “We face planetary and civilizational crises that humanity has never encountered or navigated before,” she observes. “Would you chart your path up a dangerous and unfamiliar mountain while looking in a mirror that is pointing behind you?”
However, Vallor does not hold a doom-and-gloom view of AI—because, again, humans have created AI and therefore have the power to change it. Reorienting our relation to AI demands a shift in values and specifically a “shift in what technology means to us—what we think and are taught that it’s for,” she says. We must destroy the AI-human hierarchy that we have ourselves created. Vallor reminds us, first, that at the personal level, technology is not artificial but inherent to human creation. Here she invokes Spanish philosopher José Ortega y Gasset’s concept of autofabrication, or “the task of creating ourselves,” whereby technologies are some of the influences that materially affect how we craft our lives. The ethics of self-fashioning, in this sense, correlate with the ethical work required at the level of society. “We need to reclaim AI, and technological culture more broadly, for a sustainable moral vision,” she asserts. “We need … a shared heroic project—a movement of collective autofabrication, inspired by creative practical wisdom to jointly explore the renewal and expansion of new and better techno-moral possibilities.”
Vallor prefaces this section with a caveat: hers is not a book about how to govern and regulate AI, and her solutions are aspirational ideas rather than evidence-based prescriptions. Thus, Vallor’s first step toward shifting “what technology means to us” consists of the idealistic call to “change the economic incentives of the current AI ecosystem, which are aligned only with short-term profits and are directly incompatible with a sustainable human future.”
But rather than indict capitalism as the primary mover of the AI industry, Vallor turns her attention to what she considers the fallacious binary of regulation versus innovation. Politicians and entrepreneurs alike espouse this dichotomy to stymie necessary government regulation of industry, especially its negative externalities such as carbon emissions. Yet, Vallor argues, “the problem isn’t that we don’t know how to govern dangerous technologies. We do. The problem is that we gave up the political will to do it, in large part because we swallowed a story that told us that regulation is the enemy of innovation. We know that this is false because history tells us so,” she says, pointing to the regulation of the automotive, aviation, and aerospace industries in the 20th century as cases in point. For example, the existence of safety-engineering practices, driver licensing, and traffic-safety laws has not stopped innovation in the auto industry, as more and more manufacturers are converting to hybrid and all-electric vehicles to align with our climate and sustainability needs.
Vallor asserts that a positive use of AI requires us to consider the technology’s potential to provide care. AI could be used, for example, to find and remedy injustices in the health-care system, to investigate institutional corruption, or to establish and sustain networks for mutual aid. If AI becomes not a mirror of efficiency but an “act of generosity,” she claims, it can be used “to perform the necessary services for others to survive; to shield them from harm; to repair and heal; to educate and train; to feed, nurture, and comfort.”
The hard work of self-reflection and analysis undergirds the collective action needed to affect policy and shift societal norms. This is a significant undertaking, but nothing outside of the scope of what many communities are doing or attempting to do now. Some Indigenous communities, for example, are using AI to revive their native languages while developing governance practices that ensure that they maintain control over how their data is used and by whom. Other communities are using AI to help protect biodiversity. Still others are creating and governing their own data repositories to be used to combat health-care disparities.
While short on concrete recommendations or step-by-step instructions for how to stave off AI-induced terrors, The AI Mirror successfully shatters the mirror itself—its illusions, its myths of supremacy and godliness. Vallor compels readers to remember that AI is a tool that we have created, and it is up to us to decide to use this tool in service to others, as part of our collective responsibility to each other and to the planet.
