Algorithms are often unfair – researchers aim to change that

Artificial intelligence is often racist or sexist. NRP 77 researcher Christoph Heitz explains why that is and what he plans to do to prevent it.

A global corporation such as Amazon thrives on having talented developers working for it. But finding the right candidate among the thousands of applicants costs time and resources. This was why in 2014 the company developed an artificial intelligence (AI) tool to automate the preselection of applicants. In the following years, however, it became apparent that the algorithm was quite sexist. The AI tool primarily identified men rather than women as suitable candidates.

Sexist and racist – that is, prejudiced – algorithms keep hitting the headlines. How to develop AI applications that are not only good but also fair is the topic of the NRP 77 research project “Socially acceptable and fair artificial intelligence”. Project leader Christoph Heitz from Zurich University of Applied Sciences (ZHAW) recently gave a presentation on the subject at the Natural Scientific Society in Winterthur.

The data scientist began with an example to demonstrate why algorithms can be unfair in the first place: when people plan to buy a house, it is usually a bank that decides whether or not to give the homebuyer a loan. Banks used to make that decision on the basis of a set of simple rules. Nowadays, data-based algorithms are increasingly used for such decisions. They are trained on historical data to recognise patterns that allow them to assess whether a potential borrower will be able to pay off a loan. The crux is that these algorithms are optimised for one specific purpose: enabling the bank to earn as much money as possible.
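In code terms, such a data-based lending model is essentially a classifier fitted to past lending outcomes. The following Python sketch is purely illustrative – the dataset, the column names (“income”, “debt”, “employment_years”, “repaid”) and the choice of logistic regression are assumptions made for the example, not details from Heitz’s presentation:

```python
# Minimal sketch of a data-based credit model (hypothetical data and features).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

history = pd.read_csv("past_loans.csv")                 # hypothetical historical loan data
features = history[["income", "debt", "employment_years"]]
repaid = history["repaid"]                              # 1 = loan was paid back, 0 = default

X_train, X_test, y_train, y_test = train_test_split(features, repaid, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                             # learn patterns from past outcomes

# For a new applicant, the model outputs an estimated repayment probability.
p_repay = model.predict_proba(X_test)[:, 1]
```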

The fact that the algorithm may make a wrong decision in an individual case – that is, treat a potential borrower unfairly – seems, at first glance, immaterial. “Decision-based algorithms are optimised for frequent use,” explains Heitz. “It’s never about making the right decision in each individual case, but about whether frequent application will increase the aggregate benefits.”
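Expressed in numbers, “aggregate benefit” usually means expected profit summed over many decisions. The sketch below illustrates this logic with invented payoffs; the figures and the uniform stand-in scores are assumptions, not numbers from the presentation:

```python
# Sketch of the bank's aggregate view: pick the approval threshold that
# maximises total expected profit over many applicants, regardless of
# whether each individual decision is the right one. Payoffs are invented.
import numpy as np

PROFIT_IF_REPAID = 1_000    # hypothetical interest earned on a repaid loan
LOSS_IF_DEFAULT = -5_000    # hypothetical loss when a granted loan defaults

def expected_profit(p_repay: np.ndarray, threshold: float) -> float:
    """Total expected profit if every applicant scoring above the threshold is approved."""
    approved = p_repay >= threshold
    per_loan = p_repay * PROFIT_IF_REPAID + (1 - p_repay) * LOSS_IF_DEFAULT
    return float(per_loan[approved].sum())

p_repay = np.random.default_rng(0).uniform(0, 1, size=10_000)   # stand-in repayment scores
thresholds = np.linspace(0, 1, 101)
best = max(thresholds, key=lambda t: expected_profit(p_repay, t))
print(f"profit-maximising threshold: {best:.2f}")
```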

This kind of utility optimisation without explicit consideration of fairness is, in Heitz’s view, one reason for the unfairness of algorithms. A second reason lies in the data used to train an AI tool. In the case of the bank, this data consists of decisions made earlier by people, who in turn are not free of prejudice. With Amazon’s recruiting algorithm, one factor was that the company had also tended to favour male candidates in the past.

In his presentation, Heitz also points out that fairness is not an absolute concept. As the researcher underlines, “What constitutes fairness depends on the situation and is usually contentious.” Fairness might, for instance, mean equal opportunities for everyone, or equal opportunities for everyone who has earned them – for example by acquiring certain qualifications. “Determining what is fair in a given context is a social negotiation process,” points out Heitz.
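Two of the competing notions Heitz alludes to have standard formalisations in the fairness literature: “equal opportunities for everyone” roughly corresponds to demographic parity (equal approval rates across groups), while “equal opportunities for everyone who has earned them” roughly corresponds to equality of opportunity (equal approval rates among those who would actually repay). A minimal sketch, with entirely synthetic data:

```python
# Sketch of two common formal fairness criteria; "group" could encode a
# protected attribute such as gender. All data here is synthetic.
import numpy as np

def demographic_parity_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Difference in approval rates between the two groups."""
    return float(abs(approved[group == 0].mean() - approved[group == 1].mean()))

def equal_opportunity_gap(approved: np.ndarray, repaid: np.ndarray, group: np.ndarray) -> float:
    """Difference in approval rates among applicants who would in fact repay."""
    rate_0 = approved[(group == 0) & (repaid == 1)].mean()
    rate_1 = approved[(group == 1) & (repaid == 1)].mean()
    return float(abs(rate_0 - rate_1))

# Toy usage with random decisions and outcomes.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1_000)
repaid = rng.integers(0, 2, size=1_000)
approved = rng.random(1_000) > 0.5
print(demographic_parity_gap(approved, group), equal_opportunity_gap(approved, repaid, group))
```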

In light of this, Heitz’s research project is structured on an interdisciplinary basis, bringing together researchers from IT, ethics and business. Heitz is convinced that fair algorithms can be built – by constructing them in such a way that they not only maximise benefit but at the same time also guarantee fairness.
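One way such a combination can be made concrete – offered here as a generic illustration, not as the project’s actual method – is constrained optimisation: maximise the bank’s expected profit over all decision rules whose unfairness stays below a tolerance. A toy sketch with group-specific thresholds and a demographic-parity constraint:

```python
# Toy sketch of "maximise benefit subject to a fairness constraint":
# search group-specific approval thresholds, keep only those whose
# demographic-parity gap stays below a tolerance, and pick the most
# profitable among them. Payoffs and data are invented for illustration.
import itertools
import numpy as np

PROFIT_IF_REPAID = 1_000   # hypothetical gain on a repaid loan
LOSS_IF_DEFAULT = -5_000   # hypothetical loss on a defaulted loan

def constrained_best_thresholds(p_repay, group, tol=0.05):
    """Most profitable pair of per-group thresholds whose approval-rate gap is below `tol`."""
    candidates = np.linspace(0, 1, 51)
    best, best_profit = None, -np.inf
    per_loan = p_repay * PROFIT_IF_REPAID + (1 - p_repay) * LOSS_IF_DEFAULT
    for t0, t1 in itertools.product(candidates, repeat=2):
        approved = np.where(group == 0, p_repay >= t0, p_repay >= t1)
        gap = abs(approved[group == 0].mean() - approved[group == 1].mean())
        if gap > tol:
            continue                      # fairness constraint violated
        profit = per_loan[approved].sum()
        if profit > best_profit:
            best, best_profit = (t0, t1), profit
    return best, best_profit

# Toy usage with synthetic repayment scores and a synthetic protected attribute.
rng = np.random.default_rng(0)
p_repay = rng.uniform(0, 1, size=5_000)
group = rng.integers(0, 2, size=5_000)
print(constrained_best_thresholds(p_repay, group))
```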

Initial project findings were recently presented at the Conference on Fairness, Accountability, and Transparency (FAccT), the world’s foremost conference on algorithmic fairness. Two of the 82 papers presented at the conference were from Heitz’s NRP 77 project. “We were especially delighted that one of these papers won the best student paper award,” adds Heitz. The lead author was Corinna Hertweck, a doctoral student at the ZHAW and the University of Zurich.

Video (in German)