Exploring AI at a Mile High

Human Input: The harassment tax: What women pay to be visible in AI

The future of AI needs more women building, questioning, explaining, and leading. That won’t happen if visibility keeps coming with an added tax.

Crys Black

Longmont, Colorado

Posted on Apr 27, 2026

When Reese Witherspoon recently told women on Instagram that they needed to learn more about AI, the reaction was immediate: While some people heard a practical warning, others heard celebrity tech evangelism dressed up as empowerment. After the backlash, Witherspoon clarified that she was not being paid to promote AI and that she did not believe computers should replace humanity. She also pointed to the same uncomfortable data many of us are seeing: Women are more exposed to AI-related job disruption while using AI tools less often than men. 

I keep coming back to the fact that this was Reese Witherspoon. Not because celebrity opinions should set our AI strategy, but because she has been so deliberate about backing women. Her career choices, production company, book club, and public conversations about entrepreneurship have all been part of a larger pattern: Women should have more agency over the stories, companies, and economies they help create.

So, when she said women should play with AI to protect our futures, it did not strike me as a random tech promo. It sounded like part of the same argument she has been making for years. The backlash is what got my attention. There is still deep discomfort when women tell other women to move toward power instead of away from it.

AI did not create the gender gap

The gender gap in technology did not begin with AI. AI is exposing it, speeding it up, and making it harder to ignore.

The U.S. Equal Employment Opportunity Commission’s research on the high-tech workforce found that women remained less than a quarter of the high-tech workforce in 2022, with Reuters summarizing the figure at about 23%. The EEOC also noted limited progress over time, despite the growth and wealth of the sector.

The AI numbers are not much better. Analyses drawing on World Economic Forum data indicate that women make up only about 22% of AI professionals globally and 18% of authors at leading AI conferences. In cybersecurity, where public speaking, threat research, and community credibility matter deeply, women remain underrepresented as well. Recent industry reporting puts women at roughly 22% of cybersecurity teams and of the global cybersecurity workforce.

Those numbers are not just workforce trivia. They shape who is seen as naturally technical, who gets invited to speak, and who gets sponsored, and, conversely, who gets doubted and who has to prove they belong before getting to the actual work.

Now layer AI adoption on top of all that. Harvard Business School’s Working Knowledge summarized research showing women are adopting generative AI at significantly lower rates than men, often because women raise more ethical concerns and worry about how AI use will be perceived. Lean In’s 2026 research found men are more likely to be encouraged by managers to use AI and more likely to be praised when they do.

So when someone says, “Women need to learn AI,” I do not disagree. I teach that very thing. But the sentence is incomplete.

Susan Frew, CSP. (Photo credit: Susan Frew)

As Susan Frew, CSP, an AI keynote speaker, entrepreneur, and author of RECODED: Upgrade Your Business DNA with AI, has stated, "We are entering a season where if we do not become emergent, we run the risk of becoming obsolete. Reskilling is no longer optional. It is essential."

Women do not need special handling to learn AI. They need fewer penalties for experimenting, speaking publicly, asking hard questions, and claiming technical authority.

The visibility tax

In my experience, what looks like hesitation is often intelligence.

I have watched women in AI classes ask the better questions. Not the loudest questions and not always the first questions. But the questions that connect the tool to consequences: Where did the data come from? Is this safe to use with client information? What happens to the junior person whose job is being automated? Will I get in trouble if I use this? Will people think I cheated?

I don’t read those questions as reluctance. I read them as competence: the kind that connects a tool to customers, teams, risk, and consequences.

The problem is that tech culture often rewards speed before it rewards judgment. The person who tries the tool first gets called an early adopter. The person who asks about governance gets labeled anxious. The person who posts a confident thread gets visibility. The person who worries about whether the post will attract abuse stays quiet.

That silence is not free. It compounds.

The harassment tax is not only the hateful email or the doxxing threat, although those are real. It is also the mental spreadsheet women run before becoming visible: Will this invite abuse? Will I be taken seriously? Will I have to defend my credentials? Will my appearance become part of the conversation? Will my employer be contacted? Will my family be dragged into this? Will I spend the next three days moderating comments instead of doing my job?

Men in tech can absolutely be targeted. The reported Molotov-cocktail attack at Sam Altman’s home is a stark reminder that anger at AI and tech power can become personal and dangerous. But there is still a structural difference. Men are often attacked for their decisions, wealth, influence, or institutional power. Women are attacked for those things plus their gender, voice, face, body, tone, credibility, ambition, and right to speak at all.

And even that sentence is incomplete. The tax is not distributed evenly among women. Race, age, disability, sexuality, religion, class, accent, immigration status, and whether someone is trans or gender-nonconforming can all add more weight. A white woman with a large platform may face misogyny and still have forms of protection that other women do not. A Black woman, an Indigenous woman, a Muslim woman, a trans woman, or an older woman speaking about AI may be read through more than one stereotype before she gets to finish the sentence.

Melissa Reeve, founder of Hyperadaptive Solutions, speaking at HubSpot's INBOUND 2025 conference. (Photo credit: Melissa Reeve)

I am writing from my own vantage point, so I want to be careful not to pretend I can account for every version of that cost. But the pattern matters: The farther someone is from the default image of who “belongs” in technical authority, the more expensive visibility can become.

Melissa Reeve, founder of Hyperadaptive Solutions and author of Hyperadaptive, sees the same issue in who gets handed the microphone. “We need a wide variety of voices in order to look at things holistically,” she told me. “When I see predominantly white males on stage, it makes me feel like we’re not thinking through things completely. There will be unintended consequences that won’t be brought forward and won’t be considered. AI represents a big change, and we all hold a piece of the puzzle.” 

That distinction matters.

Research on online violence against women shows the pattern is broad. The Economist Intelligence Unit found that 38% of women surveyed had personally experienced online violence and 85% had witnessed it against other women. UN Women reported in late 2025 that more than two-thirds of surveyed women journalists, activists, and human-rights defenders had experienced online violence, and 41% connected online abuse to offline harm.

Tech has its own long memory here. Long before generative AI became the public fight, women in security, open source, gaming, and software communities were already paying this tax. I think about women in security who were doxxed after giving talks. I think about the women I know who are right now getting hate mail for being visible. I think about how often women who explain technology are treated not as experts, but as targets.

And I think about the support work.

Women are often the ones creating the bridge into technology for others. They run the workshops. They explain the acronyms. They make the onboarding less hostile. They mentor the person who is too embarrassed to ask the basic question. They translate technical tools into business use cases. They build community around adoption.

That work is essential. It is also frequently undervalued.

Research on non-promotable tasks has found that women are asked to take on this kind of low-recognition work more often than men, say yes more often, and risk backlash when they decline. In AI, that pattern shows up as unpaid enablement labor: “Can you teach the team?” “Can you make this less scary?” “Can you review the policy?” “Can you mentor the women’s group?” “Can you explain this to leadership?”

It is not that these tasks are unimportant. They are very important. The issue is that organizations often treat them as generosity instead of infrastructure.

That is why the Reese Witherspoon moment is useful, even if imperfect. It showed the double bind in public. Women are told to get into AI before the market moves without them. But when a woman tells other women to get into AI, she is quickly accused of selling out, shilling, misunderstanding, oversimplifying, or betraying creative labor.

Some criticism of AI advocacy is fair. AI has real labor, environmental, copyright, bias, and governance concerns. I share many of them. But the pattern I am watching is bigger than one celebrity post. Women who become interpreters of technology are often expected to be technically fluent, ethically perfect, emotionally careful, publicly useful, and personally resilient under attack.

That is not a leadership standard. That is a tax structure.

Lower the cost of visibility

I am not interested in another version of “women need more confidence.” I have met plenty of confident women who are making rational decisions about when visibility is worth the cost.

The question for leaders is what they are doing to make that cost lower.

Reeve puts some of that responsibility squarely on the people who shape public AI conversations: “It is up to event organizers and media outlets to recognize this need for diversity, even when it’s not in vogue, and respond accordingly. Consciously seeking out diverse points of view so that we can all get a better sense of the big picture.” 

That is where the conversation needs to move: from telling women to be more confident to asking institutions to reduce the cost of visibility. The burden should not fall only on the people already paying the tax.

Leaders can start in five practical ways:

  1. Treat AI literacy as sponsored professional development, not personal homework. If AI matters to the business, give people time, tools, training, and clear usage policies. Do not make women experiment at the margins and then penalize them for being cautious.
  2. Measure the adoption gap inside your own organization. Who is using AI? Who is encouraged to use it? Who gets praised for it? Who gets questioned? Lean In’s data suggests the encouragement gap is already visible. Managers should assume the gap exists on their own teams until measurement shows otherwise.
  3. Reward the people doing AI translation work. Training, documentation, mentoring, prompt libraries, governance discussions, and internal enablement are not “soft” contributions. They are how adoption becomes real. Put that labor into performance reviews, promotion cases, and compensation decisions.
  4. Build a response plan for harassment before someone needs it. This applies to companies, conferences, accelerators, universities, and community groups. If you ask women to speak publicly about AI, cybersecurity, or technology, know what you will do if they are targeted. Silence from institutions often feels like a second injury.
  5. Stop treating ethical concern as lack of technical ambition. The people asking hard questions about AI are not slowing down progress. They are trying to make sure the work survives contact with customers, regulators, employees, and reality.

Finally, be careful with the “women need to catch up” narrative. It may be directionally true, but it is incomplete. Women are not behind because they lack curiosity. Many are moving carefully because they understand the social and professional risks of being wrong, visible, or publicly enthusiastic in a field that still does not evaluate everyone the same way.

Stop adding to the bill

I want more women in AI. I want more women building models, funding startups, writing policy, leading security teams, teaching classes, questioning bad deployments, and explaining the technology in plain English.

But I do not want the price of admission to be another layer of calculation.

Too often, the cost shows up quietly. A woman stops posting. She skips the panel. She lets someone else ask the question. She keeps the AI experiment private. She decides the inbox, the comments, the credibility tests, or the personal attacks are not worth it this time.

That is how a field loses talent while insisting the door is open.

Women do not need to be coaxed into AI. Many are already here, already learning, already teaching, already building, already doing the careful work of connecting tools to real consequences.

What they need is for the rest of us to stop adding to the bill.

The future of AI will not be better because women were told, one more time, to be brave. It will be better when the tax on their visibility gets too obvious to ignore, and too costly for serious leaders to keep passing along.
