January DEI – Addressing Linguistic Bias and Ageism in AI
January 2025
Like it or not, ChatGPT is here to stay. But could technology that helps us search, summarize, and research faster harbor unintentional biases? The same question was posed by researchers Eve Fleisig, Genevieve Smith, Madeline Bossi, Ishita Rustagi, Xavier Yin, and Dan Klein from the Berkeley Artificial Intelligence Research (BAIR) Lab, who published their findings in the study “Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination.” The study found that ChatGPT, while proficient in English communication, shows biases against non-standard varieties of English (Fleisig et al., 2024).
Only 15% of ChatGPT users are in the US, where Standard American English (SAE) is prevalent; the rest use the model in regions around the world where other English varieties are spoken. Research indicates that, while all English language varieties are equally complex and legitimate, speakers of non-standard English varieties often face discrimination. UC Berkeley’s study examined ChatGPT’s responses to 10 English varieties: SAE, Standard British English (SBE), and eight non-standard varieties (e.g., African American, Indian, and Nigerian English). Researchers found that ChatGPT retains SAE features more than those of non-standard dialects and defaults to American conventions, which can frustrate non-American users. Additionally, responses to non-standard varieties showed increased stereotyping, demeaning content, poorer comprehension, and condescension. ChatGPT’s biases against non-standard English varieties could exacerbate existing discrimination.
Linguistic bias is not the only prejudice ChatGPT exhibits. Because the programmers who develop these algorithms are mostly men in their late 20s, older adults are often excluded from the research and design of digital technologies, resulting in low adoption rates and negative user experiences. Older adults can and do learn new technologies, but they have fewer opportunities to do so than their younger counterparts. According to a 2022 Harvard Business Review article, “Harnessing the Power of Age Diversity,” multigenerational teams bring plenty to the table, including complementary skills, abilities, and networks, and can offer improved decision-making, collaboration, and overall performance (Gerhardt, 2022).
Despite legal protections against age discrimination, newly introduced AI software in the workplace often creates or perpetuates ageism. Technical biases arising from how data is collected and processed embed age stereotypes and prejudices in AI. A 2024 study points to both the individual-level biases held by those developing AI technologies and the invisibility of old age in AI discourse. Filtering out the language of underrepresented communities leaves little training data that portrays marginalized identities favorably (Czaińska, 2024).
Current practice therefore favors the hegemonic viewpoint at every stage, from initial data collection through training-data filtering. Because models are trained on data from the past, ChatGPT tends toward a regressive bias that fails to reflect societal progress toward inclusivity. The absence of older populations from AI-related conversations, in turn, leads to their exclusion or marginalization as users of AI technology.
Linguistic biases and ageism in AI are extraordinarily complex and often covert. Non-SAE speakers may not receive culturally appropriate responses, and older individuals’ health, cognition, and well-being can be negatively affected. According to an article by the American Psychological Association, over 90% of adults aged 50-80 in the United States reported experiencing ageism (DeAngelis, 2022). Reducing ageism and linguistic biases is crucial for promoting equity and well-being for all AI users. A multigenerational and multilingual workforce exemplifies the goals of diversity, equity, and inclusion (DEI) by promoting diversity across generations, countering stereotypes, and ensuring that everyone’s wisdom and experience are considered.
References:
Czaińska, K. (2024, September). Knowledge Transfer Across Peer and Multigenerational Teams of Employees. Scientific Papers of Silesian University of Technology. https://managementpapers.polsl.pl/wp-content/uploads/2024/09/201-Czaińska.pdf
DeAngelis, T. (2022, September 1). By the numbers: Older adults report high levels of ageism. Monitor on Psychology. https://www.apa.org/monitor/2022/09/older-adults-ageism
Fleisig, E., Smith, G., Bossi, M., Rustagi, I., Yin, X., & Klein, D. (2024, September 17). Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination. arXiv. https://arxiv.org/abs/2406.08818
Gerhardt, M. W. (2022, November 9). Harnessing the Power of Age Diversity. Harvard Business Review. https://hbr.org/2022/03/harnessing-the-power-of-age-diversity
Dana Fischel, ACP, CAS, has worked as a litigation paralegal for over 20 years in a San Bernardino, California, probate law firm. Dana obtained her BA in human development with a concentration in gerontology from California State University, East Bay, in Hayward. She has a Paralegal Certificate, a Professional Fiduciary Management Certificate, and an Accounting for Governmental and Nonprofit Organizations Certificate from the University of California, Riverside. She is currently enrolled in the University of Maryland, Baltimore’s master’s program in gerontology with a specialization in thanatology. Dana has served for over 15 years as an Inland Counties Association of Paralegals Board Director and Treasurer.