women in brightly colored dresses sitting in a circle outdoors looking at a laptop (Image generated in DALL-E by the author)

The aftermath of the OpenAI governance controversy revealed the extent to which power has been consolidated by AI tech giants, a situation with dangerous implications for critical aspects of society. The potential of AI tools to provide societal benefits is real: we have already seen chatbots used to manage humanitarian disaster responses, AI-enhanced data analysis deployed for climate mitigation and adaptation, and data integration and textual analysis used to address gender-based violence, among other examples. But putting unchecked development in the hands of (primarily) male tech executives who espouse a particular Silicon Valley ethos oriented toward profit and dominance above all else will only intensify threats to our social systems and vulnerable communities. It will erode information systems, produce algorithmic bias, introduce gender and racial discrimination, facilitate sexual abuse, increase labor exploitation, allow for the exploitation of creative works, and create new risks of violence, death, and deprivation to civilians in war from autonomous AI decision-making.

In short, relying on tech companies to govern their own AI development carves a path toward societal collapse by repeating mistakes made in past development of the web and social media. We need a new roadmap.

Establishing effective AI governance, then, is the challenge facing civil society organizations and social innovators. It entails determining the frameworks and structures we need to build to effectively organize and govern society amid rapid technological change and unchecked power consolidation. To address this challenge, it is crucial to elevate the voices, perspectives, and solutions of communities who directly experience the harms of AI.

Community-Led Transformation

An important way to create community-led AI governance lies in supporting cooperative, collective, and collaborative structures. The first step in doing so will be building an enabling environment and establishing the conditions that can support an ecosystem of advocates, creatives, and practitioners who can build the AI sector toward justice, equity, and shared prosperity. In a perfect world, AI would be treated as a public utility, and we would foster a collaborative and equitable approach to its development and deployment through open-source frameworks and transparent governance structures. If we viewed AI as a communal resource, we would shift the focus from proprietary interests to the collective good, prioritizing accessibility and ensuring that AI benefits are shared across diverse communities.


At present, of course, this may be something of a stretch goal. But to move towards creating an enabling environment, we need to imagine the world we want to create. We need:

  • Strong regulatory environment: Public oversight across the globe is necessary, as are policies that strategically address issues like workforce reskilling, education, rights to data, data accessibility and privacy, responsible AI adoption, fraud, abuse, and discrimination.
  • Rights-based approach: This would require documentation of adherence to goals safeguarding human, civil, and cultural rights at every stage of AI development.
  • Public education: Addressing the AI knowledge gap requires fostering engagement and inclusion, and an emphasis on leadership, ethics, and informed public discourse. Whether integrated into public school systems or facilitated by civil society, educational initiatives are pivotal, and should prioritize plain language and cultural relevance to broaden access, particularly to individuals who may lack the capacity to navigate intricate AI concepts that directly impact their lives.
  • Governance framework: As I set out in a piece last year with co-author Scott Smith, the four tenets of a values system for just and inclusive web initiatives are intentionality, accountability, mutually affirmed norms, and an ethics framework. Ethics don’t mutate when they are applied to a different technology, and these tenets are transferable to the realm of AI. Based on these, civil society and community organizations should develop an AI governance framework for adoption by the government and corporate sectors. The framework should emphasize power dynamics, community engagement, and principles for ethical, transparent, accountable, and inclusive governance grounded in shared responsibility, so that it not only mitigates risks but empowers individuals to shape AI's trajectory.

Investing in Cooperatives, Collectives, and Collaborative Structures

Within this enabling environment, how would we build an ecosystem in which AI innovation serves humanity and the planet?

Ethical AI governance in this context would go beyond defense against AI’s harms, reimagining how we live together and care for each other, and then actively implementing ethical, effective, and creative AI applications to realize these visions. To do this, we need long-term, unrestricted investments in cooperatives, communities, and collaborative networks that transcend borders, sectors, and issues, promote the public good, and value access, justice, and responsible innovation. These kinds of structures have a long and storied history, but civil society, philanthropic institutions, and investors in our current landscape are still only beginning to provide robust support.

By emphasizing shared ownership, democratic control, and collaboration, cooperatives and collectives empower communities and cultivate shared responsibility. Their commitment to openness and transparency results in accountable decision-making, such as advocating for open-source software, transparent algorithms, and shared ethical guidelines. Assembling a network of diverse stakeholders—researchers, developers, ethicists, artists, advocates, and community representatives—ensures multifaceted perspectives and facilitates resource and knowledge sharing, democratizing AI tools and addressing hidden biases.

Investing in such cooperative structures would be instrumental for building responsible and responsive AI applications, fostering relationships of plurality and mutual care, and ultimately benefiting communities. Additionally, supporting cultural initiatives that empower under-resourced communities, alongside recognition and financial backing, becomes vital for achieving a more equitable and inclusive AI ecosystem.

This is what a global society-wide investment strategy might look like:

1. Invest in creative global collaboration for community transformation: We need to engage creative minds—designers, musicians, filmmakers, and photographers—in collaboration with diverse diaspora voices (and visionary funders) to design ethical AI. To do this, we need to facilitate connections among diverse stakeholders, including governments, NGOs, enterprises, advocates, artists, and community organizations, and to foster skill exchanges and collaborative AI training. With institutional support, strategic investor engagements, art and design showcases, and global market access, this type of investment allows stakeholders to create and champion projects that address local social issues and challenge dominant narratives for meaningful structural change.

There are organizations that exemplify or are taking this approach already. Electric South collaborates with artists and creative technologists across Africa who work in immersive media, AI, design, and storytelling technologies through labs, production, and distribution. Last year, the organization convened a group of African artists to develop a set of responsible and effective AI policies for the African XR ecosystem. The organizations Brown Girls Doc Mafia, Bitchitra Collective, and Center for Cultural Power each support and connect BIPOC independent media creators through funding, mentorship, industry database access and stewarded networking, and skills sharing. Metalabel is a collaborative space where creative people release, sell, and exhibit work together. Each of these groups has a unique model, yet they all support visionary leadership and distributed decision-making. Their emphasis on ethical and collaborative design, implementation, and cooperation offers insights into how caring communities can promote creativity, justice, and shared opportunities in AI.

2. Support intermediary organizations and communities of practice: The kinds of entities that serve as connective tissue among funders, organizations, and communities are pivotal in supporting cooperatives and collectives by fostering collaborative knowledge exchange, sharing best practices, and facilitating ongoing dialogue. These entities can help stakeholders collectively navigate ethical challenges, stay informed about emerging standards, and develop a shared understanding of responsible AI practices, ensuring a more inclusive and sustainable approach to AI development. Support might take the form of facilitating knowledge-sharing forums, providing technical expertise, directing and regranting funds, fostering collaboration among diverse stakeholders, and monitoring and advocating for responsible AI practices within these collaborative efforts.

Examples of these kinds of organizations and communities include the non-profit organization Promising Trouble and its sister social enterprise Careful Industries, which together empower communities to own, use, and adapt technologies through research, consulting, tool building, and policy advocacy; TechSalon has built a community of research and practice for applications of technology to international development and humanitarian aid; and Black in AI works to connect Black technologists to foster collaborations and increase the presence of Black people in the field of Artificial Intelligence.

3. Contribute to efforts for community capability building and mentorship: This type of investment fosters long-term impact by establishing a foundation of AI literacy and skills within communities. It enables responsible AI development and usage, democratizes access to knowledge, and empowers diverse communities to actively engage with and influence the technology. Moreover, it seeds economic opportunities and enhances social cohesion.

An example of the benefits of this approach is found in the work of the Cyber Collective, which aims to bridge the AI knowledge gap by promoting knowledge sharing, community engagement, and inclusion through plain, accessible language and tools.

4. Fund advocacy organizations: Donate and grant to organizations that take a rights and justice approach to ethical AI, and that promote equity, transparency, accountability, voice, and ethical considerations. Examples of organizations that are advocating for ethical uses of AI, as well as inclusion in the design process and leadership in the AI sector, include the Distributed AI Research Institute (DAIR), which uses community-centered research that reflects lived experiences to influence production, deployment, and access to AI tools; and The Algorithmic Justice League, which advocates for equitable and accountable AI, using art and research to illuminate AI's potential for discrimination, engage with policymakers and industry to prevent harm, and amplify the voices of those most affected.

5. Steward and capitalize efforts to build alternative business models: The organizations Transform Finance and Zebras Unite have both worked with their communities to develop knowledge examining alternative enterprise and financing structures. These emphasize alternative ownership models like cooperatives, employee share ownership, and decentralized autonomous organizations. Such structures aim at a just and sustainable economy by changing the way economic value is created and distributed and by democratizing decision-making power among non-investor stakeholders. An "Exit to Community" strategy, a recent structural innovation that departs from traditional startup exits like acquisitions or IPOs, prioritizes community ownership over profit-seeking investors. Insights from these types of economic structures can be applied to AI and digital governance.

A Call to Support Collective Action

We are navigating an era of collapsing systems and global uncertainty. AI, already deeply rooted in many of our social and economic systems, raises ethical concerns as its complexity and self-learning abilities expand. The pace of AI development through unchecked, profit-driven structures heightens the risk of further power consolidation by the already privileged and further marginalization of vulnerable communities. Urgent action is imperative to establish ethical governance for AI.

The responsibility of civil society in the realm of AI is to promote shared prosperity and distributed power through community initiatives that ensure just and collective outcomes for current populations and for future generations. Achieving this requires us to support community-led AI governance structures and foster the community-driven creation of compassionate, ethical, and equitable AI tools. Cooperatives and collectives play a vital role in driving ethical AI governance, facilitating proactive measures, embracing moral and legal responsibilities, engaging in dialogues on automation's costs and harms, designing innovative and creative alternatives, and sharing power and resources among many.


Read more stories by Lina Srivastava.