Submission Date

7-28-2025

Document Type

Paper - Restricted to Campus Access

Department

African American and Africana Studies

Faculty Mentor

Patricia Lott

Second Faculty Mentor

Edward Onaci

Project Description

This research project examines how racial bias and anti-blackness in artificial intelligence (A.I.) disadvantage African Americans and foreshadow problems for broader American society. Specifically, it analyzes the words and stereotypes that ChatGPT uses to describe African Americans and investigates whether those words and stereotypes are linked to anti-blackness. Terms like “blacks” and “colored people,” as well as racial stereotypes portraying African Americans as inherently lazy, unhealthy, and unintelligent, may communicate what Safiya Umoja Noble calls “algorithmic oppression” and reflect biased encoding by the conversational bot’s developers (Algorithms of Oppression). Secondary scholarship supporting this analysis includes Cathy O'Neil's Weapons of Math Destruction (2016), Ruha Benjamin's Race After Technology (2019), and Noble’s Algorithms of Oppression (2018). Combining summaries of this secondary scholarship with findings from experiments on ChatGPT helps develop tools that reveal and counter both the hidden and visible aspects of anti-blackness within algorithms, code, and evolving A.I. technologies. This effort can help empower African Americans by increasing awareness of anti-blackness in these technologies and by creating tools to counteract it.

Comments

Presented during the 27th Annual Summer Fellows Symposium, July 18, 2025, at Ursinus College.

Restricted

Available to Ursinus community only.
