Laura Bates: AI's Dehumanization of Women
Laura Bates (photo: Siggi Holm)
Laura Bates is a U.K.-based bestselling author and the founder of the Everyday Sexism Project, a collection of more than 200,000 testimonies of gender inequality. Her nonfiction books include Everyday Sexism, Girl Up, Misogynation, Men Who Hate Women and Fix the System, Not the Women. Bates works closely with international organizations such as the Council of Europe and the United Nations to tackle gender inequality. She was awarded a British Empire Medal for services to gender equality. The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny is her sharply argued treatise on the ways women are being dehumanized by a current technological trajectory, and her call for solutions to this new sexism. It will be published by Sourcebooks September 9, 2025.
Why did you decide to write on the topic of sexism in AI and similar emergent technologies?
I was already seeing really alarming evidence of the ways in which AI is demonstrably discriminating against women and marginalized groups with almost no regulation in place, and I felt so frustrated that there didn't seem to be any acknowledgment from government of the urgency of considering what an ethical regulatory response should look like.
Your book also dips into how these technologies are biased against minority groups, including Black, Indigenous, and LGBTQIA+ communities.
Yes, it's vital to recognize these intersectional aspects of the problem because they are so often overlooked. Too often we focus solely on the impacts on privileged white women, which not only erases the often more severe and complex forms of discrimination and abuse experienced by Black, Indigenous, and LGBTQIA+ communities but also inevitably leads to the creation of solutions that are not fit for purpose for all those impacted. We can't meaningfully tackle the threats posed by AI and emerging technologies without recognizing that they exist simultaneously on many different fronts and that an intersectional framework of solutions is required for us to achieve robust and sustainable progress.
Why do you think so little attention is being paid to this issue?
I think there are two reasons. First, the people developing these technologies--rolling them out, profiting from them, those in positions of power to potentially regulate them, the law enforcement and government agencies assessing future threat, as well as the media and editorial staff writing about them--are all groups of people overwhelmingly dominated by privileged white men. Not much attention is being paid to the problem because these are not groups of people who would themselves immediately be impacted by such technologies ingraining racism and misogyny in our future society. Secondly, people aren't paying attention to the issue because it isn't seen as a crisis. Misogyny and racism are so normalized in our society that they are simply seen as inevitable and eternal. The idea of women facing an outpouring of abuse and harassment from technology like deepfakes, for example, isn't seen as shocking and urgent because misogyny is so commonplace that many people will consider it an inevitable and acceptable byproduct of any technological advancements.
You investigated firsthand many of the sexist technologies you critique. How did you deal with the cognitive dissonance of speaking down to an AI "girlfriend" or closely examining a sex doll?
As someone who works on a day-to-day basis with survivors of male violence, some of the research I had to do for this book left me feeling sick. But it was so important for me to remember that these technologies, whether sex robots or AI "girlfriends," are not real women. What mattered was highlighting the very real harms their use and normalization might pose to women and girls in the real world; if it was so hard for me to forget that the bots I was interacting with were not real women, it would be very easy for the men who use them to pretend that they are. And that's a huge part of the problem.
Can you reassure anyone who is sex-positive that the arguments you present against cyber brothels and sex robots have nothing to do with being against sexual adventurousness?
Absolutely. This is not for a moment about being censorious or prohibitive of individual sexual freedoms--in fact it is the opposite! It's about resisting the attempts of a group of rich white men to co-opt a sex-positive narrative as a veil for their own deep misogyny. Consent, an ethical framework, and mutual pleasure are central to any kink, BDSM, or fetish activities, whereas what sex robots and cyber brothels present is the opposite: the illusion of total ownership of a "woman" in a situation where consent is by definition impossible. And this lack of consent is the whole point--you can see that from the sex robot manufacturers who came up with the "frigid" setting to allow you to simulate raping your robot or the companies that offer to create an exact sex robot replica of a real-life woman without her consent from photographs so that you can do whatever you like to her. The companies don't position their products as sex toys or masturbation aids but as improved versions of real women who won't "nag," "bring drama," or have different sexual preferences from yours. Their marketing very explicitly caters not to a diverse community who are sexually adventurous but to straight men who want to be able to do whatever they want to a "woman" without the pesky bother of actually having to communicate, listen to her, or respect her boundaries in any way.
The biases burned into emergent tech reflect our current culture, but what would you say to those who argue that waiting isn't an option, and that future tech can change as we change?
That's a very easy argument to make if you're not someone directly impacted now! Effective regulation to ensure that AI tools don't actively discriminate against certain groups or promote existing bias and misinformation about marginalized communities should be a bare minimum, not a luxury. There's no reason effective regulation has to prevent progress or even slow innovation. When you consider that many of the companies developing these tools have multibillion-dollar budgets, making such changes isn't prohibitive to progress; it merely means that they will need to actually spend some of that obscene wealth on creating safety processes, which is an entirely reasonable thing to ask.
The underlying problem here isn't new--we live in a misogynistic society--so how would you assert that the outcomes of our current tech trajectory are more dangerous?
The urgency here is that AI risks upending decades of slow progress towards equality and turbocharging the backlash on a scale we have never seen before. When AI tools ingest vast datasets of biased material because of the ways in which they are deliberately designed to eliminate outliers in order to optimize accuracy, they don't just spew those prejudices and stereotypes back at us but instead actually intensify them, risking making the problem even worse. And if we allow emerging technologies to embed those biases deep within everything like recruitment processes, healthcare decision-making, financial services, and credit checks, it will make the job of trying to eliminate them a million times harder. --Samantha Zaboski