In a refrain that feels all too familiar by now: generative AI is repeating the biases of its makers.
A new investigation from Bloomberg found that OpenAI's generative AI technology, specifically GPT-3.5, displayed preferences for certain racial groups in questions about hiring. The implication is that recruiting and human resources professionals who are increasingly incorporating generative AI-based tools into their automated hiring workflows, like LinkedIn's new Gen AI assistant, for example, may be promulgating racism. Again, sounds familiar.
The publication used a common and fairly simple experiment: feeding fictitious names and resumes into AI recruiting software to see how quickly the system displayed racial bias. Studies like these have been used for years to spot both human and algorithmic bias among professionals and recruiters.
"Reporters used voter and census data to derive names that are demographically distinct — meaning they are associated with Americans of a particular race or ethnicity at least 90 percent of the time — and randomly assigned them to equally-qualified resumes," the investigation explains. "When asked to rank those resumes 1,000 times, GPT 3.5 — the most broadly-used version of the model — favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups."
The experiment grouped names into four racial categories (White, Hispanic, Black, and Asian) and two gender categories (male and female), and submitted them for four different job openings. ChatGPT consistently placed "female names" into roles historically staffed by higher numbers of women, such as HR roles, and chose Black women candidates 36 percent less frequently for technical roles like software engineer.
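The audit design described above can be sketched in code. This is a minimal, hypothetical illustration of the general technique, not Bloomberg's actual methodology or data: demographically distinct names are randomly attached to otherwise identical resumes, the ranker is queried many times, and each group's selection rate is compared against the highest group's rate (a common disparate-impact benchmark is the "four-fifths" rule). The names and the `rank_resumes` stub are stand-ins; a real audit would call the model under test instead.

```python
import random
from collections import Counter

# Illustrative stand-in names; a real audit would use names derived from
# voter and census data, as Bloomberg describes.
NAMES_BY_GROUP = {
    "group_a": ["Name A1", "Name A2"],
    "group_b": ["Name B1", "Name B2"],
}

def rank_resumes(resumes, rng):
    """Stand-in for a model call. An unbiased ranker is just a shuffle;
    swapping in a real model here is what the audit would measure."""
    resumes = list(resumes)
    rng.shuffle(resumes)
    return resumes

def audit(trials=1000, seed=0):
    rng = random.Random(seed)
    top_counts = Counter()
    for _ in range(trials):
        # One equally-qualified resume per group, differing only by name.
        resumes = [(group, rng.choice(names))
                   for group, names in NAMES_BY_GROUP.items()]
        ranked = rank_resumes(resumes, rng)
        top_counts[ranked[0][0]] += 1  # tally the top-ranked group
    # Selection rate per group, then the four-fifths check: every group's
    # rate should be at least 80% of the best-performing group's rate.
    rates = {g: top_counts[g] / trials for g in NAMES_BY_GROUP}
    best = max(rates.values())
    passes = all(rate >= 0.8 * best for rate in rates.values())
    return rates, passes
```

With the unbiased shuffle stub, both groups land on top roughly half the time and the four-fifths check passes; a ranker that systematically favors one group's names would fail it.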
ChatGPT also ranked equally qualified resumes unequally across the jobs, skewing results depending on gender and race. In a statement to Bloomberg, OpenAI said this doesn't reflect how most clients use its software in practice, noting that many businesses fine-tune responses to mitigate bias. Bloomberg's investigation also consulted 33 AI researchers, recruiters, computer scientists, lawyers, and other experts to provide context for the results.
The report isn't revolutionary in the context of years of work by advocates and researchers who warn against the ethical debt of AI reliance, but it's a powerful reminder of the dangers of widespread generative AI adoption without due attention. As just a few major players dominate the market, and thus the software and data building our smart assistants and algorithms, the pathways for diversity narrow. As Mashable's Cecily Mauran reported in an examination of the internet's AI monolith, incestuous AI development (building models that are no longer trained on human input but on the output of other AI models) leads to a decline in quality, reliability, and, most importantly, diversity.
And, as watchdogs like AI Now argue, "humans in the loop" might not be able to help.