Whether rooted in science or metaphysics, people throughout history have often held a belief that hidden patterns beneath everyday life set the world’s course, and that revealing those patterns can unlock flourishing and prosperity.
Today, a version of this belief frequently appears in the public statements of major artificial intelligence company executives. For instance, Sam Altman, a co-founder of OpenAI, extolling the future impact of the work being done by his company, asked, “How did we get to the doorstep of the next leap in prosperity?” and answered, “humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying ‘rules’ that produce any distribution of data).”
Ilya Sutskever of Safe Superintelligence described training a large language model as “learning a world model” that captures an increasingly granular representation “of the human condition.” Coming from a slightly different angle, Dario Amodei of Anthropic portrayed the slow pace of change in the “outside world” as a bottleneck for AI’s pattern-recognition-based innovation.
Indeed, the capability of large language models to disclose, extend, and recombine such patterns appears likely to accelerate progress on known challenges in many fields of science and technology.
But as old as these beliefs is the question of how we humans fit into any such underlying order and potential of prosperity—are we just passive pieces in a predetermined puzzle, or are we actors who can fashion new puzzles? Are we merely discovering the patterns of existence, or are we also creating that existence as we go? Is the recipe for our economic flourishing hidden in the fabric of reality, awaiting an algorithm powerful enough to find it?
Innovation Requires a Desire to Create
Not all students of economic progress and innovation would agree with the view that the road to a society’s prosperity is “out there” to be exposed by superior computations, or that we could just outsource innovation to AI and step back to enjoy the yield. Innovation—meaning not just figuring out how something works, but also what is actually worth figuring out—is not a matter of finding but of forging.
Edmund Phelps, Nobel laureate and one of the most prolific and wide-ranging economic thinkers of our time, has spent the past two decades studying the economic vitality and innovativeness of nations. He argues that the underlying force in economic dynamism is the human drive to explore the unknown and to create the new.
The reason for this, as Phelps attests in the seminal work Mass Flourishing, is that genuine innovations that open up new realms of possibility and fuel dynamism “are not determinate from current knowledge” because “[b]eing new, they could have not been known before.” No amount of data analysis, hypothesis testing, or even superior knowledge of the present gives birth to the meaningfully new.
Using the terminology of AI, innovation is about moving outside of those distributions of data we have gathered in the past. As the complex systems scholars Teppo Felin and Matthias Holweg remind us: “AI uses a probability-based approach to knowledge and is largely backward-looking and imitative, while human cognition is forward-looking and capable of generating genuine novelty.”
True innovation is something that creates the very standard by which it will later be understood. Innovating cannot be driven by probabilistic reasoning, since the value and meaning of the end result can only be analyzed after the fact. Instead, it requires a sense of vision, but also the ability to venture into the unknown by starting to build something that initially seems, almost by definition, irrational. Whether innovation is driven by a desire to “make a difference,” or “to see whether their insights prove right,” or “to give something to their community,” Phelps maintains that “at the heart of a nation’s system for high dynamism are people with the desire or occasional urge to innovate.”
The AI founders have obviously expressed such a desire themselves. Sam Altman founded OpenAI at a time when the idea of general-purpose AI still seemed far-fetched even to most of Silicon Valley. Making a bet driven by belief rather than calculation, investors committed to spending $1 billion on the company in its early days, before the demonstration of any commercially viable products. Amodei and Sutskever, in turn, founded their companies on similar leaps, grounded in imagined futures of safer AI.
Of course, humans can and do fall into predictable patterns. Still, we cannot forget that it is also distinctively human to venture into the unknown pushed forward by open-ended curiosity and a sense that something else is possible.
The Innate Sense of What Might Make Sense for Other Humans
We should also not forget that venturing into the unknown is more than just a solitary leap. It’s something we do together, even if implicitly.
What takes place in human innovation is not only bold exploration, but a kind of dynamic regulation—an embedded, intuitive sense of what might work for others, and what might not, even without quite knowing why. The human will operates always in reference to the minds and wills of other people, in the thick of social life.
The ability of humans to intuit something about the needs and motivations of other humans is central when developing something novel, when the utility and meaning of the innovation cannot be extrapolated from prior experience. Current-generation AI simply does not have that capacity. While AI can effectively mimic socially intelligent behavior, it lacks the basic capacity for social cognition that humans (and even other animals) are born with, as widely evidenced, for instance, in neuroscience and developmental psychology. As a result, in venturing into the unknown, AI struggles to make decisions that make sense in the social context of the innovation, as that social context itself is still being created.
Even when innovators are motivated by the seemingly self-centered desire to bring into existence something that they themselves would want, what humans create is always to some extent attuned to other humans, simply by virtue of being humans themselves. For example, Linus Torvalds described the creation of Git by saying, “Okay, I will do something that works for me and I won’t care about anybody else,” but then, what worked for him turned out to work for lots of other people, becoming the global standard in software development. Similarly, the maker of the game Balatro said he wasn’t really intending to make a game that other people would play, but then “I gave this to one of my friends and he came back to me a couple of months later and was like ‘I played that game for like 20 or 30 or 40 hours’” and GQ called it “the best game of 2024.”
Because truly valuable innovations make little sense when projected from the present, a neural network cannot distinguish them from the many irrational projects that might lead to momentous but wholly detrimental outcomes. Humans, in contrast, have this ability essentially built in (even if this does not mean that human-made creations are always beneficial).
While there is not yet much documented history of granting AI full autonomy to innovate in the real world, analogous cases from more everyday situations offer valuable insights into how dramatically AI might fail if allowed to innovate independently. Consider, for example, an AI therapist recommending “a small hit of meth” to a recovering addict, or an AI girlfriend suggesting suicide. Humans immediately recognize that these are outrageously inappropriate behaviors. The AI, however, lacking any attunement to the tissue of social expectations that humans effortlessly move within, is clueless. Or, more mundanely, consider Chevrolet’s AI chatbot promising a Tahoe for $1 after a customer re-prompted it in a way that overrode its original logic. A human operator would effortlessly spot the discrepancy between the customer’s request and doing what “makes sense” in the social context of representing Chevrolet. (While that was in 2023, OpenAI still warns that the recently launched agent mode of ChatGPT is vulnerable to similar “prompt injection” attacks.)
In the worst case, the artificial innovator will confidently take actions that are completely detached from any notion of human interests and values. Guardrails and reliability engineering will not work at the frontier of knowledge because, by definition, the AI is venturing into a domain where no one has ever been. This does not mean AI cannot be helpful in the innovation process, but it is dangerous to place the AI in the driver’s seat.
Flourishing Cannot Be Automated
Phelps’ most important insight is that human flourishing in a flourishing economy is not merely about the utility of the economic or technological outcomes, but about the creative and explorative process itself.
While Phelps accepts that “success or attainments may well be gratifying,” he argues that when people get to search for new possibilities, venture into the unknown, and appreciate the journey, “thus allowing them to grow and express one’s self in the process,” they can be seen to experience “a life lived to the full.” In a dynamic innovative economy, this experience is enjoyed not only by the entrepreneur but by all participants, including: investors willing to act on a hunch; workers imagining and trying new methods; customers pioneering the adoption of new products whose value is not knowable beforehand, and so on. It is precisely in venturing beyond established data distributions—in the messy creation in real life—that humans experience fulfillment, meaning, and vitality.
Unfortunately, the AI executives’ vision appears to suggest that it’s not only possible but in a way desirable to override the human sensibilities involved in innovation, or to take over the innovation processes altogether. In Altman’s framing, humans appear as operators of AI agent armies that interact with other such armies. In Amodei’s image, the innovator is explicitly AI, telling humans what to do “as a Principal Investigator would to their graduate students.”
Effectively, in this vision, the work of delivery app drivers today can be seen to foreshadow that of future entrepreneurs: following instructions generated by an opaque algorithm with little real agency on the job. Rather than exploring new markets or boldly pursuing their unconventional ideas, startup founders may end up anxiously waiting for signals from AI-generated simulations that dictate what, when, and how to “innovate.”
We are naturally drawn to technologies that relieve us from burdensome effort. But as we outsource our perceptions and mediate our interactions using algorithms, we risk eroding the relational undercurrent on which true innovation builds. The more our actions are algorithmically guided and our collaborators and colleagues replaced by machines, the less we employ our senses fully to gather the courage and unconceptualized insight that drive innovation.
A Worldview, Not the World
So, what are we to do? Right now, the march of AI toward occupying the kind of role in the economy we have argued against seems inevitable. But the crux of the matter might be precisely how we perceive and understand the role we are granting to AI. The problem is not simply the changes that AI might bring, but that already in anticipation we limit our own understanding of the world and humanity to fit with what AI can do.
This challenge calls for two different responses.
Keeping AI in the Role of a Tool. The first response involves learning how to continually hold these progressively more powerful and alluring technologies as nothing more or less than tools. This means both asking corporations to consider how the tools are designed and asking all of us to be mindful of how insidiously such technologies can slide into the driver’s seat. The interventions needed in this regard are likely akin to the movement spearheaded by Jonathan Haidt to reduce teenage addiction to smartphones and social media. Following that model, we can decide when and where the use of such tools is appropriate and supportive of human flourishing, and then restrict their use wherever they are not. While such measures may not seem immediately enticing, they are likely vital for preserving connection to our sense of agency.
Recognizing the Worldview at Play. The second response involves an awareness that what we currently encounter in the rise of AI is not just a technological solution but a particular worldview—one that sees all problems as solvable by adding computational power, and human flourishing as the consumption of those solutions. This worldview, indeed, precedes the latest developments in AI. These are essentially the assumptions that Phelps has argued against within economics for decades. But the current framing and adoption of AI rests on a doubling down on that worldview.
Most existing AI regulation efforts operate within these assumptions, focusing on maximizing benefits and minimizing harms. While this is necessary, ultimately, for making a real difference we need to focus on what AI can’t do—to emphasize what this worldview gets plainly wrong about human life, innovation, and societal flourishing.
Humans are not just computationally inefficient utility optimizers but creative participants in collectives. Innovation is not pattern reconfiguration but the emergence of novelty in social contexts. Flourishing is not passive consumption but the consciousness of possibilities and the realization of a desire to create. All of those things are, and will for the foreseeable future remain, uniquely and importantly human. And as history obviously suggests, we can be human with or without AI. As we increasingly find ourselves being human alongside AI, we can also reflect on, and decide whether, it helps us to innovate and flourish, societally and individually.
Ultimately, the winning economies in the AI age won’t be those driven by fear of falling behind or by blind deployment, but those that are able to maintain humans at the helm throughout the fabric of business and society. Otherwise, there will be no one actually doing the winning.
Read more stories by Lauri Pietinalho, Jukka Luoma & Matt Statler.
