How it feels to be sexually objectified by an AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

My social media feeds this week have been dominated by two hot topics: OpenAI's latest chatbot, ChatGPT, and the viral AI avatar app Lensa. I love playing around with new technology, so I gave Lensa a go. 

I hoped to get results similar to those of my colleagues at MIT Technology Review. The app generated realistic and flattering avatars for them: think astronauts, warriors, and electronic music album covers. 

Instead, I got tons of nudes. Of the 100 avatars I generated, 16 were topless, and another 14 had me in extremely skimpy clothes and overtly sexualized poses. You can read my story here.



Lensa creates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion is trained on LAION-5B, a massive open-source data set that was compiled by scraping images from the internet.

And since the internet is overflowing with images of naked or barely dressed women, and with pictures reflecting sexist and racist stereotypes, the data set is skewed toward these kinds of images. 

As an Asian woman, I thought I'd seen it all. I've felt icky after realizing a former date only dated Asian women. I've been in fights with men who think Asian women make great housewives. I've heard crude comments about my genitals. I've been mixed up with the other Asian person in the room. 

Being sexualized by an AI was not something I had expected, although it's not surprising. Frankly, it was crushingly disappointing. My colleagues and friends got the privilege of being stylized into artful representations of themselves. They were recognizable in their avatars! I was not. I got images of generic Asian women clearly modeled on anime characters or video games. 

Funnily enough, I found more realistic portrayals of myself when I told the app I was male. This probably applied a different set of prompts to the images. The differences are stark. In the images generated using male filters, I have clothes on, I look assertive, and, most important, I can recognize myself in the pictures.  

“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” says Aylin Caliskan, an assistant professor at the University of Washington who studies biases and representation in AI systems. 

This sort of stereotyping can be easily observed with a new tool built by researcher Sasha Luccioni, who works at the AI startup Hugging Face, which allows anyone to explore the different biases in Stable Diffusion. 

The tool shows how the AI model offers up images of white men as doctors, architects, and designers, while women are depicted as hairdressers and maids.

But it's not just the training data that is to blame. The companies developing these models and apps make active choices about how they use the data, says Ryan Steed, a PhD student at Carnegie Mellon University who has studied biases in image-generation algorithms.

“Someone has to choose the training data, decide to build the model, decide to take certain steps to mitigate those biases or not,” he says.  

Prisma Labs, the company behind Lensa, says all genders face “sporadic sexualization.” But to me, that's not good enough. Somebody made the conscious decision to apply certain color schemes and scenarios and highlight certain body parts. 

In the short term, some obvious harms could result from these decisions, such as easy access to deepfake generators that create nonconsensual nude images of women or children. 

But Aylin Caliskan sees even bigger longer-term problems ahead. As AI-generated images with their embedded biases flood the internet, they could eventually become training data for future AI models. “Are we going to create a future where we keep amplifying these biases and marginalizing populations?” she says. 
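The dynamic Caliskan describes can be sketched as a toy feedback loop. This is a purely illustrative simulation under made-up assumptions (the starting share, the amplification factor, and the `next_generation_share` helper are all hypothetical), not a model of any real training pipeline: if a generator slightly over-produces its training set's dominant depiction, and its outputs are then scraped back in as the next generation's training data, the skew compounds with each cycle.

```python
# Toy simulation of a bias feedback loop (illustrative assumptions only, not a
# real training pipeline): a generator over-represents the dominant depiction
# in its training data, and its outputs become the next training set.

def next_generation_share(share: float, amplification: float = 1.1) -> float:
    """Share of the stereotyped depiction after one scrape-train-generate cycle.

    Hypothetical dynamic: the model exaggerates the depiction's prevalence
    by a small factor, capped at 100% of the data set.
    """
    return min(1.0, share * amplification)

share = 0.30  # assume 30% of scraped images carry the stereotyped depiction
history = [share]
for _ in range(12):  # twelve scrape-train-generate cycles
    share = next_generation_share(share)
    history.append(share)

print(f"start: {history[0]:.0%}, after 12 cycles: {history[-1]:.0%}")
```

Even with a modest 10% exaggeration per cycle, the skew roughly triples within a dozen cycles; that compounding, not any single model's bias, is the long-term worry.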

That's a pretty scary thought, and I for one hope we give these issues due time and consideration before the problem gets even bigger and more embedded. 

Deeper Learning

How US police use counterterrorism money to buy spy tech

Grant money meant to help cities prepare for terror attacks is being spent on “massive purchases of surveillance technology” for US police departments, a new report by the advocacy organizations Action Center on Race and the Economy (ACRE), LittleSis, MediaJustice, and the Immigrant Defense Project shows. 

Shopping for AI-powered spy tech: For example, the Los Angeles Police Department used funding intended for counterterrorism to buy automated license plate readers worth at least $1.27 million, radio equipment worth upwards of $24 million, Palantir data fusion platforms (often used for AI-powered predictive policing), and social media surveillance software. 

Why this matters: For various reasons, a lot of problematic tech ends up in high-stakes sectors such as policing with little to no oversight. For example, the facial recognition company Clearview AI offers “free trials” of its tech to police departments, which allows them to use it without a purchasing agreement or budget approval. Federal grants for counterterrorism don't require as much public transparency and oversight. The report's findings are yet another example of a growing pattern in which citizens are increasingly kept in the dark about police tech procurement. Read more from Tate Ryan-Mosley here.

Bits and Bytes

ChatGPT, Galactica, and the progress trap
AI researchers Abeba Birhane and Deborah Raji write that the “lackadaisical approaches to model release” (as seen with Meta's Galactica) and the extremely defensive response to critical feedback constitute a “deeply concerning” trend in AI right now. They argue that when models don't “meet the expectations of those most likely to be harmed by them,” then “their products are not ready to serve these communities and do not deserve widespread release.” (Wired)

The new chatbots could change the world. Can you trust them?
People have been blown away by how coherent ChatGPT is. The trouble is, a significant amount of what it spews is nonsense. Large language models are no more than confident bullshitters, and we'd be wise to approach them with that in mind. (The New York Times)

Stumbling with their words, some people let AI do the talking
Despite the tech's flaws, some people, such as those with learning difficulties, are still finding large language models useful as a way to help express themselves. (The Washington Post)

EU countries' stance on AI rules draws criticism from lawmakers and activists
The EU's AI law, the AI Act, is edging closer to being finalized. EU countries have approved their position on what the regulation should look like, but critics say many important issues, such as the use of facial recognition by companies in public places, were not addressed, and many safeguards were watered down. (Reuters)

Investors seek to profit from generative-AI startups
It's not just you. Venture capitalists also think generative-AI startups such as Stability.AI, which created the popular text-to-image model Stable Diffusion, are the hottest things in tech right now. And they're throwing stacks of money at them. (The Financial Times)
