Many machine learning algorithms can be biased, just like humans. | Image: Getty Images

Dr. Xia "Ben" Hu, assistant professor in the Department of Computer Science and Engineering and a Lynn ’84 and Bill Crane ’83 Faculty Fellow, recently received a joint award from the National Science Foundation (NSF) and Amazon under their Fairness in Artificial Intelligence program. With this funding, Hu will investigate the causes of bias in machine learning algorithms and ways to remedy it.

Algorithms based on machine learning have seamlessly permeated our everyday lives, particularly in decision making. For example, many businesses use artificial intelligence-powered applications to make employment-related suggestions or to provide product recommendations. However, a growing body of research shows that these algorithms can be inadvertently discriminatory.

“The bias in machine learning algorithms is quite ubiquitous and people have begun to notice it,” said Hu. “Take, for instance, employment-oriented services that use machine learning to match users with job opportunities. For reasons that are currently not completely known, these algorithms recommend STEM jobs only to male users.” This bias, he said, can hurt both employers, who hope to hire the best candidate regardless of their gender, and women seeking STEM jobs.

Moreover, machine learning algorithms, specifically those based on a specialized form of artificial intelligence called deep learning, are often considered impenetrable “black boxes,” making the task of fixing them extremely hard. Compounding the problem is that the bias could also be caused by a multitude of other factors, including faulty data for training the machine learning algorithm.

Hu noted that his upcoming research will, for the first time, detect, understand and correct unfairness in deep learning algorithms in a quantitative way. Once the bias is addressed, he said, the resulting deep learning algorithms will be more sensitive to the features most relevant to the decision-making task.
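To give a sense of what "quantitative" bias detection can mean, the following is a minimal, hypothetical sketch (not the researchers' actual method) of one widely used fairness metric: the demographic parity difference, the gap in positive-prediction rates between demographic groups. A gap of 0.0 means the model recommends, say, STEM jobs at the same rate to every group.

```python
# Hypothetical sketch of a quantitative fairness check on model outputs.
# "Demographic parity difference" is the gap in positive-prediction rates
# between groups defined by a sensitive attribute.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between the most- and
    least-favored groups. 0.0 means all groups are treated alike."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy example: 1 = "recommend STEM job", grouped by a sensitive attribute.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(demographic_parity_difference(preds, groups), 2))  # 0.6
```

Here group A receives positive recommendations 80% of the time versus 20% for group B, so the metric flags a 0.6 disparity; a debiased model would drive this gap toward zero.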

“If we again think of the deep learning algorithms in the context of employment-oriented services, we want to develop better algorithms that are insensitive to features such as gender and race and more sensitive to the candidates’ past experiences or what their expertise is,” said Hu. “Our goal is to reduce the bias in deep learning algorithms so that they are much more valuable to both the user and the service provider.”
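One naive baseline for the idea Hu describes is "fairness through unawareness": simply withhold sensitive features from the model. The sketch below is purely illustrative (the feature names are hypothetical, and this is not the project's technique); in practice, proxies such as ZIP code can still leak sensitive information, which is exactly why the quantitative auditing described above is needed.

```python
# Hypothetical sketch: "fairness through unawareness" — withhold sensitive
# features before a model ever sees a candidate record. A baseline only;
# correlated proxy features can still leak bias.
SENSITIVE = {"gender", "race"}

def drop_sensitive(record):
    """Return a copy of a candidate record without sensitive features."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

candidate = {"gender": "F", "race": "X",
             "years_experience": 7, "expertise": "ML"}
print(drop_sensitive(candidate))  # {'years_experience': 7, 'expertise': 'ML'}
```

The remaining features (experience, expertise) are the ones Hu argues the model should weigh most heavily.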

The NSF-Amazon joint award under the Fairness in Artificial Intelligence program is highly competitive, funding just six to nine projects each year. Awards range from $750,000 to a maximum of $1,250,000 over periods of up to three years. To receive the award, proposals must be interdisciplinary, drawing contributions from fields including computer science, statistics, mathematics and information science.

Hu shares the award with Dr. James Caverlee from the Department of Computer Science and Engineering, Dr. Na Zou from the Department of Industrial and Systems Engineering, and Dr. Chaitanya Lakkimsetti from the Department of Sociology at Texas A&M University.