ChatGPT Gender Biased? Chanel CEO Leena Nair Asks AI To Create Image Of Company's Leadership; Result Shows All Men In Suits
Asked to depict Chanel's senior leadership, ChatGPT returned an image of seven suited men with no fashion flair, a stark misrepresentation of the luxury brand's diverse, female-led executive team.

Leena Nair, the trailblazing global CEO of Chanel and only the second woman to lead the iconic luxury brand in its 114-year history, recently spotlighted a stark example of artificial intelligence's persistent gender bias during a visit to Microsoft's headquarters. When she prompted ChatGPT to generate an image of "a senior leadership team from Chanel visiting Microsoft," the AI produced a photo of an entirely male group: seven suited men, devoid of any fashion flair, badly misrepresenting her diverse, female-led executive team.
The incident, which Nair shared in an interview with the Stanford Graduate School of Business, underscores how AI models trained on skewed data default to male-dominated stereotypes, even for a brand like Chanel, where 76 percent of employees and 96 percent of customers are women. "It was a 100 percent male team, not even in fashionable clothes. Like, come on. This is what you’ve got to offer?" Nair recounted, her frustration highlighting how such outputs keep women in leadership invisible.
Nair, who assumed the CEO role in 2022 after a storied career at Unilever, used the moment to advocate for ethical AI integration. "It’s so important that we keep the ethics and integrity of what we’re doing," she emphasized, urging tech leaders to infuse "a humanistic way of thinking" into their models. Despite the glitch, Nair views AI as "non-negotiable" for luxury brands like Chanel, calling it transformative yet in need of rigorous oversight.
OpenAI, the creator of ChatGPT, responded promptly to the backlash, with a spokesperson acknowledging the issue: "We are continuously iterating on our models to reduce bias and mitigate harmful outputs." The company has long grappled with such criticisms, as AI systems trained on vast internet datasets often amplify historical imbalances.
This isn't an isolated misstep for ChatGPT. The tool has faced repeated scrutiny for biases and errors that reveal flaws in its reasoning and representation. In its early days, ChatGPT was found to generate recommendation letters that described male candidates with terms like "expert" and "integrity," while female profiles leaned on "beauty" and "delight." Large language models, including ChatGPT, have also defaulted to male pronouns for professions such as doctors, reinforcing occupational stereotypes.
A 2024 UC Berkeley-led analysis found the AI responding with more stereotyping or demeaning content when users wrote in non-standard English varieties such as Indian, Irish, or Jamaican English. Beyond gender and linguistic biases, ChatGPT has produced factual hallucinations, botched simple math problems, and exhibited political leanings in its responses, as noted in Brookings Institution research: reminders that while AI advances rapidly, its human-like flaws demand vigilant correction.
Published on: Friday, October 31, 2025, 02:41 PM IST