One day in November, a product strategist we'll call Michelle (not her real name) logged into her LinkedIn account and switched her gender to male. She also changed her name to Michael, she told TechCrunch.

She was taking part in an experiment called #WearthePants, in which women tested the hypothesis that LinkedIn's new algorithm was biased against women. For months, some heavy LinkedIn users had complained about drops in engagement and impressions on the career-oriented social network. The complaints came after the company's vice president of engineering, Tim Jurka, said in August that the platform had "more recently" implemented LLMs to help surface content useful to users.

Michelle (whose identity is known to TechCrunch) was suspicious about the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who has only around 2,000. Yet she and her husband tend to get around the same number of post impressions, she said, despite her larger following. "The only significant variable was gender," she said.

Marilynn Joyner, a founder, also changed her profile gender. She has been posting on LinkedIn consistently for two years and noticed in the last few months that her posts' visibility declined. "I changed my gender on my profile from female to male, and my impressions jumped 238% within a day," she told TechCrunch.

Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and so on.

LinkedIn said that its "algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed" and that "a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias" within the Feed.

Experts on social algorithms agree that explicit sexism was likely not the cause, though implicit bias may be at work. Platforms are "an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly," Brandeis Marshall, a data ethics consultant, told TechCrunch.

"The changing of one's profile photo and name is just one such lever," she said, adding that the algorithm is also influenced by, for example, how a user has interacted, and currently interacts, with other content.

"What we don't know of is all the other levers that make this algorithm prioritize one person's content over another. This is a more complicated problem than people assume," Marshall said.

Bro-coded

The #WearthePants experiment began with two entrepreneurs, Cindy Gallop and Jane Evans. They asked two men to make and post the same content as them, curious to know whether gender was the reason so many women were seeing a dip in engagement. Gallop and Evans both have sizable followings: more than 150,000 combined, compared with the two men's roughly 9,400 at the time.

Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers.

Other women then took part. Some, like Joyner, who uses LinkedIn to market her business, became concerned. "I'd really love to see LinkedIn take accountability for any bias that may exist within its algorithm," Joyner said.

But LinkedIn, like other LLM-dependent search and social media platforms, offers scant details on how its content-picking models were trained.
Marshall said that most of these platforms "innately have embedded a white, male, Western-centric viewpoint" due to who trained the models. Researchers have found evidence of human biases like sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning. Still, how any individual company implements its AI systems is shrouded in the secrecy of the algorithmic black box.

LinkedIn says that the #WearthePants experiment could not have demonstrated gender bias against women. Jurka's August statement said, and LinkedIn's Head of Responsible AI and Governance, Sakshi Jain, reiterated in another post in November, that its systems do not use demographic information as a signal for visibility.

Instead, LinkedIn told TechCrunch that it tests millions of posts to connect users to opportunities. It said demographic data is used only for such testing, like seeing if posts "from different creators compete on equal footing and that the scrolling experience, what you se