GMV recognized for its ability to identify and mitigate Artificial Intelligence bias
Artificial Intelligence is making increasing inroads into today’s society, and pundits expect it to make even further headway in the future, until it becomes part of our daily decision-making procedures or even replaces them. We are speaking about everyday cases like granting a mortgage, assessing the likelihood of a criminal reoffending or deciding on the best way of distributing medical resources. These developments have sparked off an ethical debate about leaving certain decisions up to technology, especially in view of recent studies and publications that have pinpointed discriminatory bias in these smart systems. This debate has in turn led to social concern about the ethical use of data, over and above its privacy and security. To confront this problem, Telefónica’s Data Unit (LUCA) organized an international challenge to encourage the responsible use of Artificial Intelligence.
A passion for taking on new challenges and seizing every chance to innovate is hard-wired into GMV’s mindset, so the company didn’t hesitate to take up LUCA’s challenge. GMV’s Artificial Intelligence and Big Data team, comprising Alexander Benítez, Paloma López de Arenosa, Antón Makarov and Inmaculada Perea, and led by José Carlos Baquero, presented a proposal that was awarded 2nd prize in the challenge. “As a society we have to progress towards a less discriminatory world. Machine learning offers us a perfect chance to do so. Every day more and more decisions are delegated to machines, so we are duty bound to pay due heed to how these machines learn, just as we do when bringing up children. It is in our power to make sure these algorithms are fair and guarantee we are all treated equally,” argues Antón Makarov, GMV Data Scientist.
The team’s work involved analyzing an open data set from Spain’s National Statistics Institute (Instituto Nacional de Estadística, INE) on salaries in Spain, which shows a gender-based salary gap, with men more likely to reach highly paid positions. First the analysis showed that this inequality persists even when gender information is removed from the data, since other variables act as proxies for it. A model was then trained on this data, demonstrating that it learns the bias: if this first salary-forecasting model were used to make decisions about a person, it would produce discriminatory outcomes. Finally, the team put forward a solution that lessens the bias in the data and trained a new model on it, generating fairer predictions while hardly affecting performance, thus reducing gender discrimination. “We have replicated the experiment using different algorithms and obtained similar results. This proves that the bias is learned regardless of the classifier used. Luckily, there is a growing volume of research into this matter and better bias-mitigation algorithms are now being developed, meaning the future bodes well,” says Alexander Benítez, GMV Data Scientist.
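The article does not disclose GMV’s actual implementation, but the workflow it describes (train a baseline model, measure the disparity it learns, then mitigate the bias in the training data and retrain) can be illustrated with a minimal sketch. The file name ine_salary_survey.csv, the column names and the choice of reweighing (Kamiran and Calders’ pre-processing method, one common mitigation technique) are all illustrative assumptions, not the team’s actual choices.

```python
# Minimal sketch of the bias-detection-and-mitigation workflow described above.
# Data set, column names and mitigation method are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical salary survey: 'gender' is the protected attribute and
# 'high_salary' a binary target (1 = highly paid position).
df = pd.read_csv("ine_salary_survey.csv")       # assumed file, not GMV's data
X = df.drop(columns=["gender", "high_salary"])  # gender excluded from features
y = df["high_salary"].to_numpy()
g = df["gender"].to_numpy()

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)

def demographic_parity_gap(y_pred, groups):
    """Difference in positive-prediction rates between the groups."""
    rates = [y_pred[groups == v].mean() for v in np.unique(groups)]
    return max(rates) - min(rates)

# 1) Baseline: even with the gender column dropped, the model can still learn
#    the bias through correlated features that act as proxies for gender.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline gap:", demographic_parity_gap(base.predict(X_te), g_te))

# 2) Mitigation by reweighing: weight each (group, label) cell so that group
#    membership and outcome become statistically independent in training.
def reweighing_weights(y, groups):
    w = np.empty(len(y))
    for gv in np.unique(groups):
        for yv in np.unique(y):
            cell = (groups == gv) & (y == yv)
            w[cell] = ((groups == gv).mean() * (y == yv).mean()) / cell.mean()
    return w

fair = LogisticRegression(max_iter=1000).fit(
    X_tr, y_tr, sample_weight=reweighing_weights(y_tr, g_tr)
)
print("mitigated gap:", demographic_parity_gap(fair.predict(X_te), g_te))
```

In experiments of this kind, the mitigated model’s disparity typically shrinks markedly while overall accuracy changes little, which is consistent with the trade-off the team reports.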
GMV’s proposal thus sheds light on the possible ethical consequences of improper use of data and represents a stride towards a less discriminatory world, one in which machines that make important decisions affecting individual rights do so while guaranteeing that everyone is treated fairly.