(Illustration by iStock/Paper Trident)

Several years ago, in an interview with Quartz, I argued that women are often mischaracterized as financially “risk-averse,” when a better description would be “risk aware.” Instead of being celebrated as mindful stewards of financial resources, women are often perceived as being “too cautious.” In another world—one where women define the terms—diligence would be seen as a strength rather than a weakness. Women often see flaws in systems they have not built, weigh the stakes carefully, and act accordingly.

The rise of generative AI, powered by large language models like ChatGPT, Claude, and Gemini, has been brisk and disorienting. For many women, the gut response has been skepticism. When I tell women my age that I’m working with women and tech-wary folks on how to navigate generative AI, I often see faces contort as if they have tasted something unpleasant. The anguished comments begin: “I hate that it exists.” “Will a college degree mean anything anymore?” Some excitedly tell me about a new tool or prompt they have tried, but more often, I hear feedback that results are generic and impersonal. A favorite game is to point out flagrant hallucinations, which for many people amount to evidence enough that these tools can never be trusted.

As in financial systems, women are attuned to the weaknesses in generative AI systems that designers didn’t notice or prioritize (bias, privacy risks, unreliable outputs) before putting their products out into the world. Some of the industry’s more misogynistic offerings (see Grok’s Ani fantasy chatbot) or disturbing policies (see Facebook’s leaked policies on children and illicit content) are enough to send most users into a catatonic depression spiral. But for women, beyond being offensive, such outputs are evidence of what gets built when development teams lack gender diversity. When women engage with systems that they’ve been largely left out of creating, the products can feel foreign, awkward, or even hostile.

But here is the catch: Opting out of generative AI is about as realistic as opting out of electricity or the internet.

The response to dizzying hype should not be rejection; it should be fierce ambivalence. That means passionately holding two seemingly contradictory truths at once: We should use generative AI to empower ourselves and others, and we should demand exacting standards of transparency, fairness, and safety from those building and governing these tools.

For better or for worse, generative AI is already reshaping how work is done, who advances, and whose perspectives guide the next generation of tools. Generative AI tools deliver incredible efficiency gains. A study by the Federal Reserve Bank of St. Louis found that the majority of respondents who had used generative AI tools in the past week saved between two and four hours, with more frequent users reporting even greater time savings. The irony is that the most overwhelmed people are too overwhelmed to learn the tools that would help them become less overwhelmed. More than half of professionals say learning AI feels like a second job, which, for most women, is actually a third job when you consider the continued disparities in time spent on child care and household work. However, if women disengage, they lose influence at precisely the moment when their experience is most needed. If they engage and advocate, their risk awareness can strengthen the technology itself.

What the Research Shows

Data confirms the patterns I’m seeing around me. Multiple studies have shown a significant gender gap in generative AI. While recent OpenAI and Deloitte studies show signs of more equal usage in the United States, this is not true for all ages or all countries. A Harvard Business School meta-analysis of 18 separate studies covering 143,008 individuals across 25 countries found women had 22 percent lower odds of using generative AI than men. The Deloitte study found that AI adoption declines with age, and the gender gap is most pronounced in the 45+ group.

The same Deloitte study and recent Pew research both show that women predict AI will bring less benefit and do more harm across personal, professional, and public life. Men tend to be more optimistic and more self-assured in their own competence.

There are plentiful reasons to be concerned:

  • Knowledge gaps: A Federal Reserve Bank of New York survey shows that when women say they want generative AI training, they aren’t just looking for skills; they’re signaling awareness of the technology’s opacity and their unwillingness to trust a system they don’t fully understand. A systematic review of gender and AI adoption found that women consistently cite lack of transparency and the opaque nature of AI tools as barriers to trust.
  • Hallucinations: A closely related issue is the propensity of large language models to generate what is colloquially known as a “hallucination,” or fabricated information that appears authentic. For newcomers, these moments are unsettling. They interrupt the fragile sense of mastery that comes with early experimentation and raise doubts about what, if anything, this powerful technology should be relied upon to produce. Case in point: While I was researching how women’s concerns about using generative AI are exacerbated by the production of inaccurate content, ChatGPT hallucinated a very relevant-sounding study and offered up a nonexistent hyperlink.
  • Expectations of sanction: Women believe they will be more harshly judged for using generative AI on the job. And they are right. A Harvard Business Review study found that female engineers who used AI to generate code were rated 9 percent less competent than their male peers, despite evaluators reviewing identical outputs.
  • Biased outputs: The impact of biased training data in generative AI systems ranges from the individual to the systemic. A study out of Germany showed that generative AI chatbots advised women to ask for significantly lower salaries than men with identical profiles. When models replicate patterns like these, they can exacerbate exclusionary outcomes for marginalized groups seeking housing, credit, or employment opportunities.
  • Privacy concerns: Most major AI systems and platforms require a specific opt-out to prevent user data from being used to train their models. Even data that could reasonably have been considered private have found their way into the public domain (see this Meta AI chat release earlier this year). As with other technology adoption, research shows women are more concerned than men about how AI handles data, in part because women experience higher rates of tech-facilitated abuse and harassment.

This adds up to a striking conclusion: Women’s generative AI hesitancy is rooted in rationality, not hysteria. It is not risk aversion. It is risk awareness.

Why Mature Women’s Engagement Matters

All of this is true across age groups, but the stakes, in my view, are unique for women over 40. These women are often at the height of their careers—leading teams, running organizations, influencing policy, and shaping strategy.

At this stage, disengagement is particularly costly. Research on productivity underscores the risk. A Harvard Business School study titled “Navigating the Jagged Technological Frontier” found that less-experienced consultants who used GPT-4 matched or even outperformed more seasoned colleagues who did not. In other words, opting out of generative AI use can erode the advantage that senior professionals have built over decades.

If experienced women do not build confidence with AI tools, boardrooms, executive suites, public debates, and AI itself could tilt even further male.

The Stakes for Equity and Organizations

This matters on multiple levels. For women, lower and slower adoption risks widening the gender pay gap and reducing promotion opportunities. Productivity gains will accrue disproportionately to men if women disengage.

For generative AI companies, failing to meaningfully involve women in development and deployment raises the continued risk of biased and harmful outcomes, heightens exposure to emerging regulatory scrutiny, and limits an organization’s capacity to innovate and compete in a rapidly evolving market. Just as a car manufacturer cannot survive if its vehicles are unsafe or a pharmaceutical firm will collapse if its products harm patients, generative AI companies will lose legitimacy if their systems are distrusted by half the population.

According to some predictions, women are nearly three times more likely than men to be in jobs that generative AI can easily automate due to their overrepresentation in clerical and routine cognitive roles. We need to be vigilant about who is gaining access to new skills, who is retaining (or losing) influence in the workplace, and how generative AI could widen existing inequalities across economies if women’s roles are disproportionately disrupted.

What We Can Do: Turning Risk Awareness Into Leadership

The solution is not to ask women to ignore their skepticism and accept the situation as beyond their control. The answer is for women—and everybody else—to embrace fierce ambivalence by creating spaces, programs, and policies where risk awareness is normalized and mindful, deliberate adoption is respected. Lessons from decades of women’s economic empowerment work offer a roadmap of potential solutions.

  • Community-based learning and peer support: Savings groups in low-income countries thrive because they build circles of trust. Women learn from each other, share risks, and support one another financially and emotionally. Generative AI adoption can follow the same model: peer-to-peer learning in small, supportive groups. I founded my organization, First Prompt, because I was inspired by the self-efficacy I saw women develop when they took financial matters into their own hands and taught others to do the same. In First Prompt workshops, participants experiment and learn together. There is power in learning how to “drive” these tools safely and skillfully. Being exposed to foundational practices for working with large language models, such as setting context, giving clear instructions, iterating, and exploring different perspectives, helps participants feel more in control and confident.
  • Flexible, accessible design: Mobile money platforms succeeded in building women’s financial control in part by accommodating women’s time constraints and mobility realities. She is AI and Women Defining AI are inclusive learning and networking communities that aim to invite women into the gen-AI revolution. Both run Slack channels bursting with encouragement and new connections, while also providing easy access to on-demand AI trainings for a low participation fee, making it easy for women to engage on their own schedules.
  • AI as a springboard for solo-preneurship: Financial inclusion programs such as Accion and Agora Partnerships paired access to credit with training in financial literacy and entrepreneurship. Similarly, AI adoption can be paired with skills-building for leadership, business growth, and productivity to provide new routes to flexible income generation. Imagine a train-the-trainers certification program for AI coaches as a form of solo-preneurship, equipping women to guide others through the AI adoption process, formalize their businesses, and market their services.
  • Role models and leadership: Role modeling is essential to drawing women into fields where they are underrepresented. The Women’s Leadership Alliance for Financial Advisors connects aspiring advisers with a community of established mentors. In AI, showcasing women who successfully build and use AI solutions in business, nonprofit, and academic settings can inspire others to engage. Initiatives such as Women Applying AI, launched in Boston this fall with nearly 40 cross-sector founding members, frame AI learning as a movement where women are invited to learn alongside peers they admire.
  • Building trust through enhanced transparency: Financial institutions such as Triodos Bank and Amalgamated Bank earn consumer trust through disclosures on lending and emissions that go above and beyond what is required by regulators. Companies in the generative AI space that advance data privacy, increase transparency, and actively address bias will capture the segment of the market that is holding back due to concerns about trust. For example, OpenAI and Anthropic recently collaborated to release the results of each lab’s internal safety tests on the other’s models, highlighting the strengths and weaknesses of each system as well as the efforts underway to mitigate identified risks.
  • Creating rules to prevent corporate abuse: Regulatory strengthening is needed to provide AI users with assurance at a systemic level. Just as the Consumer Financial Protection Bureau, created in the aftermath of the 2008 financial crisis, brought long-overdue clarity, oversight, and consumer recourse to financial markets, bills such as the Artificial Intelligence (AI) Civil Rights Act seek to protect consumers by systematizing bias surveillance in AI systems. Endorsed by leading digital rights and civil society organizations, the bill would require independent audits, algorithmic impact assessments, transparency reports, and human-appeal rights for individuals affected by automated decisions and enable legal action against companies that fail to comply.

Together, these strategies reflect a spectrum of change, from individual learning to systemic oversight. When these efforts reinforce one another, the AI era can expand opportunity rather than deepen inequality.

Reclaiming the Narrative

The misframing of women as “risk-averse” created a market-wide misunderstanding that has limited women’s access to credit and capital. Women’s caution, by contrast, offers valid, helpful signals to the generative AI industry: their perspectives highlight ways to make the technology stronger. While no single formula will ensure equity in generative AI adoption, experience from other sectors suggests that progress is likely when action is taken across the following fronts:

  • Learning: Build women’s confidence in generative AI through user-centric approaches.
  • Access: Design flexible, inclusive upskilling opportunities that recognize women’s time constraints.
  • Opportunity: Translate generative AI competence into leadership and income generation.
  • Accountability: Demand transparency and fairness from generative AI companies.
  • Oversight: Establish safeguards to ensure that this new technology serves the public good.

In an era of constant technological change, fierce ambivalence is the way forward for all AI adopters. Women are well-suited to lead the charge.

Read more stories by Mara Bolis.