(Photo by iStock/necati bahadir bermek)
The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. This form of labor has led many to report severe psychological harm, including depression, anxiety, and post-traumatic stress disorder. Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies, but rather symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.
These cases are a stark reminder that when technological systems are designed solely for extraction and efficiency, they isolate and break the people who sustain them. As artificial intelligence (AI) accelerates, we face a similar precipice. Without deliberate intervention, these extractive logics will scale globally, further concentrating power at the top, unless we choose to build a fundamentally different system.
Participants at the Cooperative AI Conference, Istanbul, 2025. (Photo by NeedsMap. Licensed under Creative Commons Attribution–NonCommercial, CC BY-NC)
Previous contributions to the Stanford Social Innovation Review have argued for investing in enterprises that work for everyone and prioritizing community-centered AI collaborations. Others have highlighted how worker cooperatives drive civic engagement and the necessity of reconceptualizing the social economy. Building on this discourse and the momentum of a recent Cooperative AI conference in Istanbul, we argue that the response to AI’s harms cannot stop at regulating dominant platforms. The concentration of Big Tech power increasingly leaves even critical international institutions vulnerable to authoritarian politics and executive pressure, as the International Criminal Court’s dependence on Microsoft infrastructure amid threats issued under the Trump administration illustrates. Instead, cooperatives, public institutions, and social movements must actively construct and connect alternatives through what we call the “solidarity stack”—an emerging, cooperative digital economy. Already 1.2 million workers across 53 countries are engaged in building it, with potential for much wider participation and collective scaling.
A New Structure for Solidarity
Extraction is not just a matter of biased algorithms or privacy violations. It is a structural issue, and AI today operates through what we call a vertically integrated “extraction stack” that includes hardware, cloud infrastructure, models, labor, and applications. Only a few companies control how to build, govern, and use these technologies; the people who depend on them have no democratic say in any of it.
Critics and regulators rightly point out that ethical guidelines alone cannot resolve this issue. AI systems are built on ownership models, supply chains, and technical architectures that prioritize profit, scale, and control. These structural incentives determine how data is collected, how labor is treated, who makes decisions, and who captures value, leaving ethical guidelines with no power to override the system’s underlying logic. A democratic AI cannot simply rent space on the extraction stack. It requires that workers, communities, cooperatives, and public institutions reclaim ownership of the infrastructure itself, layer by layer, from the earth to the cloud. The solidarity stack also rejects the notion of artificial intelligence, which implies a magical, autonomous force, and reframes it as collective intelligence, acknowledging the human labor and communal knowledge that power these systems.
But achieving the coordination required to challenge global monopolies remains formidable. Even established organizations such as the International Cooperative Alliance, which represents a movement of roughly one billion members and cooperatives responsible for about 10 percent of global employment, are structured primarily for representation and advocacy rather than for coordinating and operating shared digital infrastructure.
Constructing the solidarity stack will take time and involve developing a set of concrete interventions across distinct layers of the AI economy. However, several early efforts by communities, cooperatives, and public institutions are already reclaiming control over these economic layers, including earth and infrastructure, data, labor, and knowledge.
Sovereignty Over Earth and Infrastructure
The extraction stack begins with the mining of critical minerals such as lithium, cobalt, and rare-earth elements across China, Australia, and Myanmar. These minerals are essential for components in batteries, cooling systems, and hard drives, but miners often work in hazardous and exploitative conditions. Meanwhile, large cloud providers that operate globally scaled data centers, such as Amazon Web Services and Microsoft, dominate computation. This centralization creates systemic fragility; when a single provider goes down, it can lock people out of their homes or erase public records, as seen in the 2025 South Korean data center fire. A solidarity approach would require transparent supply chains, community ownership of mineral resources, and equitable benefit-sharing arrangements. It would also distribute infrastructure across federated, community-owned servers that can interconnect without central control.
Butler Rural Electric provides a powerful historical precedent. Founded in the 1930s with federal support and cooperative governance, rural electric cooperatives enabled communities to finance, build, and manage their own power infrastructure, a model that continues to provide electricity to roughly 42 million people across rural America today. Digital cooperatives such as Hostsharing eG in Germany and Som Connexi in Spain, and several retail cooperatives in the United Kingdom apply this same logic by pooling member resources, using cooperative governance, and working with public partners to build and operate shared digital infrastructure. This enables communities to reduce their dependence on proprietary cloud providers, maintain local control over data, and take responsibility for managing the environmental costs of energy consumption. Although these initiatives are explicitly experimental and modest in scale, they suggest that computing capacity can function as a public good. Policy makers and municipal leaders could even apply this model to create a public option for computing power.
Data Stewardship
In the extraction model, personal data is a raw material pulled from users to fuel proprietary models. This logic treats people not as participants or rights-holders but as passive sources of value. The solidarity stack reimagines data as a shared resource managed through democratic stewardship.
For example, MIDATA, a Swiss health data platform owned and governed by patients who contribute and rely on their own medical information, operates as a fiduciary for its members. It maintains a secure infrastructure in which patients can view their aggregated data and democratically decide whether or not to share it for medical research. MIDATA demonstrates that creating high-quality, ethically sourced datasets is possible without surveillance; members willingly share their data because they trust the cooperative’s governance and data stewardship, eliminating the need for extraction or coercive monitoring.
Dignity and Support for Labor
AI implies automation, but as discussed earlier, it relies on a rigorous feedback loop between automated systems and human moderators. After automated systems flag potentially violative content, human moderators review and label images, videos, and text to decide whether material should be removed, restricted, or allowed. Their decisions are recorded and used as training data, teaching AI systems how to recognize and classify similar content in the future. This process forces individuals to absorb the psychological burden of toxic content to “cleanse” the digital environment for the platform’s primary users. A solidarity approach would ensure fair wages, psychological support, and worker ownership of the platforms they sustain. In the extraction stack, the hidden workforce of AI—the millions of data workers—is treated as a liability. In the solidarity stack, these workers are owners with power and voice.
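The feedback loop described above can be sketched in a few lines of code. Everything here is hypothetical for the sake of illustration: the class names, fields, and action categories are invented and do not reflect any platform’s actual moderation schema.

```python
# Illustrative sketch of the moderation feedback loop: automated flagging,
# human review, and the reuse of human decisions as model training data.
# All names and categories are invented, not any platform's real schema.
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    content_id: str
    model_score: float  # confidence of the automated flag

@dataclass
class ModeratorDecision:
    content_id: str
    action: str  # "remove", "restrict", or "allow"

def collect_training_data(flags, decisions):
    """Join automated flags with human rulings to build labeled examples."""
    ruling = {d.content_id: d.action for d in decisions}
    return [
        {"content_id": f.content_id, "label": ruling[f.content_id]}
        for f in flags
        if f.content_id in ruling
    ]

flags = [FlaggedItem("a1", 0.97), FlaggedItem("a2", 0.62)]
decisions = [ModeratorDecision("a1", "remove"), ModeratorDecision("a2", "allow")]
# Each human decision becomes a labeled example for the next model version,
# which is how the psychological burden of review is converted into model value.
examples = collect_training_data(flags, decisions)
```

The point of the sketch is structural: every traumatic judgment a moderator makes is captured as an asset for the platform, which is precisely the value workers would co-own in a solidarity arrangement.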
Kauna Malgwi, a content moderator working for a Meta subcontractor, challenged unsafe conditions and later helped initiate the Gamayyar African Tech Workers Cooperative in Kenya. The cooperative brings together content moderators, data labelers, and engineers to explore whether locally grounded AI models can be developed under worker-owned conditions. The effort represents an attempt to move beyond extractive labor arrangements by giving workers a collective stake in the value they help produce. Similarly, Facttic, a federation of tech cooperatives based primarily in Argentina, and Outlandish, a London-based technology cooperative, coordinate worker-owned software co-ops to manage democratic labor governance, shared technical capacity, and collective contracting with public and cooperative clients.
Democratizing Knowledge and Application
At the knowledge level, access to AI alone is insufficient if the tools operate as black boxes whose outputs can be neither explained nor challenged, and if they are grounded in imposed rather than democratically established values. The solidarity approach responds by reclaiming the knowledge layer as a collectively governed space—one that favors explainability, contestability, shared standards, and deep AI literacy.
AI4Coops in Argentina is a small, exploratory initiative bringing together cooperative practitioners and technologists to consider how artificial intelligence might support cooperative governance and shared learning. It ensures that algorithmic literacy is not confined to elite institutions or concentrated within dominant technology corporations such as Google or Meta, but is instead broadly accessible to workers, cooperatives, and communities. Meanwhile, the UK-based Animorph Co-op develops augmented-reality tools for dementia care, using immersive storytelling and visual prompts to support memory, communication, and emotional connection for people living with dementia and their caregivers. Because it is worker-owned, it refuses to monetize patient vulnerability, designing tools that prioritize care over engagement metrics.
These examples illustrate that “AI for good” cannot amount to virtue signaling, ethical branding, or forms of greenwashing or co-op-washing. Building the solidarity stack requires that alternative business models are genuinely anchored in workplace democracy, shared ownership, and accountable governance. Importantly, these localized models prioritize linguistic sovereignty and the preservation of cultural information that is often inaccessible or ignored by large-scale, global LLMs.
Implementation for Ecosystem Building
A resilient solidarity stack will emerge only through the strategic weaving together of policy, finance, and community organizing. Rather than coalescing around a single protocol or platform, the stack will develop as technologists and organizers repeatedly align on shared principles, material interdependence, durable institutions, rituals, and a common political story that people consciously choose to uphold—especially under pressure.
Though still experimental, decentralized autonomous organizations (DAOs), blockchain-based organizations that are collectively managed by their members, offer one possible way to encode shared governance and collective decision-making at scale. Initiatives such as Gitcoin and Breadchain use DAOs to support mutual aid and public-goods funding, while the Public AI Network advances similar principles of collective accountability and public ownership without relying on DAOs, instead pursuing policy- and institution-led approaches to AI as public infrastructure.
A central obstacle remains regulatory uncertainty, particularly under US securities law, which limits the ability of DAOs and cooperative digital organizations to govern assets and scale durable public infrastructure without clear legal safe harbors. For philanthropists, policy makers, civic leaders, cooperators, and technologists, the path forward requires three concrete shifts:
1. Supplementing regulation with public-cooperative partnerships. Experimental partnerships between local governments and cooperatives, such as the LESTAC AI initiative in France, show how municipalities can create real-world testing environments to trial ecologically responsible AI services with local businesses before broader deployment. Meanwhile, efforts such as .coop 2025—which convenes developers, cooperative members, and solidarity economy leaders around ethical, sustainable, and democratically governed uses of technology—suggest how more cohesive and supportive environments for these alternatives might take shape.
2. Investing in federated learning and open models. Examples include open-source, multilingual models such as Apertus, the Swiss large language model developed by the Swiss Federal Institute of Technology in Zurich (ETH Zurich) and the Swiss Federal Institute of Technology in Lausanne (EPFL), which offers a public alternative to extractive artificial intelligence systems and is trained on public infrastructure. Federated learning allows cooperatives to leverage AI while maintaining privacy by training algorithms locally rather than centralizing data. Shared protocols can further facilitate the technical integration of these efforts by providing the framework for a decentralized ecosystem of community-owned platforms.
The OpenCourier protocol, for instance, creates a common technical foundation that lets worker-owned delivery platforms connect and work together. It does this through three building blocks: a directory where platforms can find each other, a communication system that lets couriers talk to any platform using the same language, and a transparent way to manage orders. Like the early Internet standards that let any website talk to any browser, this approach uses open, publicly available tools so that independent, worker-owned platforms can share information and resources without needing a corporate middleman. This shared technical framework matters because it reduces fragmentation across the stack, allowing localized cooperatives to scale collectively while maintaining linguistic and operational sovereignty.
3. Cultivating alliances across movements. Here, a good example is the Co-operative Councils’ Innovation Network Cooperative Values-Driven AI project, a network of UK local authorities committed to cooperative principles in public service delivery. The project is bringing together councils, civil society groups, and technologists to prototype ethical technologies, as well as organize around shared political commitments that shape policy agendas and public procurement standards in favor of democratic technology.
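The federated learning named in point 2 can be illustrated with a minimal federated-averaging sketch. The two-cooperative setup, datasets, and learning rate below are invented for illustration; real systems use frameworks and far larger models, but the privacy logic is the same: only model parameters travel, never raw member data.

```python
# Minimal federated-averaging sketch: each cooperative updates a shared model
# on its own data, and only the parameters (never the raw data) are pooled.
# The datasets and hyperparameters are illustrative, not from any real system.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step of least-squares fitting on locally held data."""
    grad = sum(2 * (weights * x - y) * x for x, y in local_data) / len(local_data)
    return weights - lr * grad

def federated_round(global_w, cooperatives):
    """Average locally trained weights; member data never leaves each co-op."""
    local_weights = [local_update(global_w, data) for data in cooperatives]
    return sum(local_weights) / len(local_weights)

# Two co-ops hold private datasets drawn from the same relationship y = 2x.
coop_a = [(1.0, 2.0), (2.0, 4.0)]
coop_b = [(3.0, 6.0)]
w = 0.0
for _ in range(50):
    w = federated_round(w, [coop_a, coop_b])
# w converges toward 2.0 even though neither dataset was ever centralized.
```

The design choice worth noting is that the coordinating step sees only averaged weights, which is what lets cooperatives pool learning capacity without pooling the sensitive data itself.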
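The three OpenCourier building blocks described under point 2 (a directory, a shared message format, and transparent order state) can be sketched in miniature. To be clear, this is not OpenCourier’s actual code: every name, field, and endpoint below is a hypothetical stand-in for the kind of open primitives such a protocol provides.

```python
# Purely illustrative sketch of the three protocol building blocks described
# in the text; none of these names or fields come from OpenCourier itself.
import json

DIRECTORY = {}  # platform name -> endpoint: how co-op platforms find each other

def register_platform(name, endpoint):
    """Building block 1: a directory where platforms can discover each other."""
    DIRECTORY[name] = endpoint

def make_order_message(order_id, pickup, dropoff, status="open"):
    """Building blocks 2 and 3: a common message envelope any participating
    platform can parse, with order state visible to couriers and platforms alike."""
    return json.dumps({
        "order_id": order_id,
        "pickup": pickup,
        "dropoff": dropoff,
        "status": status,
    })

register_platform("coop-delivery-east", "https://east.example/api")
register_platform("coop-delivery-west", "https://west.example/api")
msg = make_order_message("o-17", "Bakery on 5th", "Main St. 12")
# Any registered platform can decode the same message, so worker-owned
# platforms interoperate without a corporate intermediary in the middle.
order = json.loads(msg)
```

As with early web standards, the value lies less in any single function than in the shared format: once every co-op speaks the same envelope, no central firm is needed to broker between them.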
Shared, open protocols such as OpenCourier also help ensure that community-owned platforms can interconnect and scale without succumbing to the fragmentation that often undermines decentralized efforts, encoding values of collective ownership and cross-platform cooperation in a standardized technical framework. Solidarity stack “circles”—small, local formations of researchers, technologists, and organizers who treat infrastructure-building as a form of collective political action, whether through municipal data trusts, cooperative cloud services, or publicly governed language models—can also help.
From Inevitability to Choice
The dominant AI narrative falsely suggests that centralized corporate control is inevitable. However, content moderators in Kenya, data stewards in Switzerland, and others show that the components of a democratic digital future are already at hand. Our task is to connect them, and in so doing, exercise agency, refuse despair, and create a system where technology serves the majority.
Read more stories by R. Trebor Scholz & Mark Esposito.
