Researchers at Penn State and Columbia University have created a new artificial intelligence tool to detect unfair discrimination based on attributes such as gender and race. Preventing the unfair treatment of individuals on the basis of gender, race, or ethnicity has been a long-standing concern of civilized societies.

However, detecting such discrimination in decisions, whether made by human decision-makers or by automated AI systems, can be extremely challenging. The widespread adoption of AI systems to automate decisions in many domains, including policing, consumer finance, business, and higher education, has further exacerbated this challenge.

Vasant Honavar, professor and Edward Frymoyer Chair of Information Sciences and Technology at Penn State, said that artificial intelligence systems, such as those used to select candidates for a job or for admission to a university, are trained on large amounts of data. If those data are biased, the bias can affect the recommendations the AI systems make.

As an example, he noted that if a company has historically never hired a woman for a particular type of job, then an AI system trained on that historical data will not recommend a woman for a new job.

Honavar noted that there is nothing wrong with the machine learning algorithm itself; it is doing what it is supposed to do, which is to identify good candidates based on certain desirable characteristics. But because it was trained on historical, biased data, it has the potential to make unfair recommendations.

The researchers created an AI tool for detecting discrimination with respect to a protected attribute, such as gender or race, by human decision-makers or AI systems. The tool is based on the concept of causality, in which one thing, a cause, brings about another thing, an effect. Framed this way, asking whether there is gender-based discrimination in salaries amounts to asking whether gender has a causal effect on salary, that is, whether a woman would be paid more if she were a man.
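As a rough illustration of this causal framing (a minimal sketch, not the researchers' published algorithm), one can probe whether a protected attribute affects an outcome by matching each member of the protected group to the most similar individual outside the group on the non-protected covariates and comparing their outcomes. The sketch below assumes Python with NumPy and scikit-learn; the function name, the nearest-neighbor matching scheme, and the interpretation are illustrative assumptions.

```python
# A minimal, illustrative sketch of a causality-style discrimination test
# via covariate matching. This is NOT the published algorithm; the function
# name, matching scheme, and interpretation are assumptions for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_outcome_gap(X, protected, outcome):
    """Average outcome gap between protected-group members and their
    nearest non-protected matches on the covariates X.

    X         : (n, d) array of non-protected covariates (e.g., experience)
    protected : (n,) boolean array, True for the protected group
    outcome   : (n,) numeric array of outcomes (e.g., salary)
    """
    X = np.asarray(X, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    outcome = np.asarray(outcome, dtype=float)

    # Index the non-protected group for nearest-neighbor lookup.
    nn = NearestNeighbors(n_neighbors=1).fit(X[~protected])

    # For each protected individual, find the most similar non-protected one.
    _, idx = nn.kneighbors(X[protected])
    matched = outcome[~protected][idx.ravel()]

    # A markedly negative average gap suggests the protected attribute may
    # have a causal effect on the outcome, i.e., possible discrimination.
    return float(np.mean(outcome[protected] - matched))
```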

The team tested its method on a variety of data, including income data from the U.S. Census Bureau, to determine whether there is gender-based discrimination in salaries, and data from the New York City Police Department's stop-and-frisk program, to determine whether there is discrimination against people of color in arrests made after stops. The team's results appear in the Proceedings of The Web Conference 2019.
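To give a toy sense of how such a matched comparison might be applied to income data, the snippet below runs the sketch above on invented numbers; the column names and values are hypothetical, not the Census Bureau data the team actually used.

```python
# Hypothetical toy check on census-style income data; the numbers and
# column names are invented, not the U.S. Census Bureau data the team used.
import pandas as pd

df = pd.DataFrame({
    "education_years": [12, 16, 16, 12, 18, 18],
    "experience":      [5, 10, 10, 5, 15, 15],
    "female":          [True, True, False, False, True, False],
    "salary":          [48000, 70000, 78000, 52000, 90000, 99000],
})

gap = matched_outcome_gap(
    df[["education_years", "experience"]].to_numpy(),
    df["female"].to_numpy(),
    df["salary"].to_numpy(),
)
print(f"Average salary gap, women vs. matched men: {gap:+,.0f}")
```

In this toy data, every woman earns less than a man with identical education and experience, so the estimated gap is negative.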

According to Honavar, as data-driven artificial intelligence systems increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups for criminal activity, how banks decide who gets a loan, whom employers choose to hire, and how colleges and universities decide who gets admitted or receives financial aid, there is an urgent need for tools such as the one he and his colleagues developed. He added that their tool could help ensure that such systems do not become instruments of discrimination, barriers to equality, threats to social justice, or sources of unfairness.