Artificial Intelligence Literacy

Sources and information about generative artificial intelligence

Bias in generative AI

Generative artificial intelligence models are not free from human bias. There are opportunities for bias to infiltrate generative AI at a variety of levels: models can include both intentional and unintentional biases in their algorithms and in their training data. Since the training data of LLMs was created by humans, all of whom hold a variety of value sets, the models will reflect and perpetuate existing human biases.
ChatGPT and other generative AI models use freely available information as training data. This information is unverified and contains biases, stereotypes, and hate speech, which can then be replicated and further spread through the use of generative AI. In addition, as of January 2024, 52% of the information available on the internet is in English, a bias that is built into these systems through their training data. About 70% of people working in AI are male (World Economic Forum, 2023 Global Gender Gap Report), and the majority are white (Georgetown University, The US AI Workforce: Analyzing Current Supply and Growth, January 2024). As a result, there have been numerous cases of algorithmic bias, in which algorithms make decisions that systematically disadvantage certain groups, in generative AI systems.
SOURCE: text adapted from University of Texas Austin's AI LibGuide

Scholarly Articles about AI Bias