Bias in AI Models

Jacob Johnston
2 min read · Feb 23, 2021

Merriam-Webster defines bias as "an inclination of temperament or outlook." It can be rather difficult to live a life that is completely devoid of bias. People form opinions as a result of their surroundings, and while some opinions can be good and some can be bad, not everyone is capable of overcoming them, or even willing to.

In her book Weapons of Math Destruction, Cathy O'Neil defines a model as "an abstract representation of some process" (O'Neil, 25). She opens the first chapter with an example from baseball, imagining several possible universes filled with different probabilities and events. "They include every measurable relationship among every one of the sport's components, from walks to home runs to the players themselves" (23). Models are meant to run through these different scenarios and analyze the results of each possibility; as O'Neil puts it, models are built to search for the optimal combination in any situation.

These models can show bias depending on what information they are given. O'Neil argues that baseball makes for a good statistical model because its data is public to anyone who wants to view or analyze it, and because information keeps flowing for as long as a team or player remains in the sport. She does point out, however, that mistakes are inevitable, because a model is only a simplification of the thing it represents.

O'Neil explores the bias of these models through the Washington, D.C. school system and its model for evaluating teacher performance. The model is unable to take into account many personal factors that affect how an instructor teaches their students. It looks solely at the test-score data it is given access to, not a teacher's ability to connect with their students or manage their classroom: two of the many factors that make a good teacher.

Bias exists in AI models because we allow it to be there. When we edit a model to remove one factor or add another, we build in our own personal bias and ideology about what the "optimal" outcome should be.
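That point can be sketched in a few lines of Python. The teachers, factors, and weights below are entirely invented for illustration; the idea is simply that the modeler's choice of which factors to weight determines who the model calls "best."

```python
# Hypothetical illustration: the factors we choose to include in a model
# encode our own assumptions about what "good teaching" means.
# All names and numbers here are invented for the example.

teachers = {
    "A": {"test_score_gain": 0.9, "student_engagement": 0.2},
    "B": {"test_score_gain": 0.4, "student_engagement": 0.9},
}

def rank(teachers, weights):
    """Score each teacher as a weighted sum of only the chosen factors."""
    scores = {
        name: sum(weights.get(factor, 0) * value
                  for factor, value in factors.items())
        for name, factors in teachers.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A model that only "sees" test scores ranks A first...
print(rank(teachers, {"test_score_gain": 1.0}))  # ['A', 'B']

# ...while a model that also weights engagement flips the ranking.
print(rank(teachers, {"test_score_gain": 0.5,
                      "student_engagement": 1.0}))  # ['B', 'A']
```

Neither ranking is objectively "correct"; each simply reflects the weights its author decided to supply, which is exactly where the bias enters.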
