Algorithmic bias: leaving behind the biased world of yesterday and building a fairer tomorrow

José Carlos Baquero, Director of Artificial Intelligence and Big Data at GMV’s Secure e-Solutions, analyses the thorny issue of algorithmic bias

For decades we have witnessed the great benefits of algorithms in decision-making. In the real world, their applications range from medical diagnosis and court judgments to professional recruitment and the detection of criminals. However, as their uses have spread with technological advances, many have demanded greater responsibility in their implementation, with particular concern about the transparency and fairness of machine learning. Specifically, this concern arises from the ability of these systems to reproduce historical prejudices, normalising and amplifying social inequality through algorithmic bias. The subject was analysed by José Carlos Baquero, Director of Artificial Intelligence and Big Data at GMV’s Secure e-Solutions, in a talk that made those attending Codemotion Madrid stop and think.

Advances in machine learning have led companies and society to place their trust in data, on the basis that its correct analysis gives rise to more efficient and impartial decisions than those taken by humans. Yet “despite the fact that a decision taken by an algorithm is arrived at on the basis of objective criteria, the result may be unintentional discrimination. Machines learn from our prejudices and stereotypes, and if the algorithms they use are becoming a key part of our daily activities, we urgently need to understand their impact on society,” argues Baquero. This is why we must insist on systematic analysis of algorithmic processes and on the creation of new conceptual, legal and regulatory frameworks to guarantee human rights and fairness in a hyperconnected and globalised society, a task that must obviously be undertaken jointly by organisations and governments.

During his presentation, José Carlos Baquero described some recent cases of this problem, such as Amazon’s AI recruiting tool, which systematically discriminated against women. In that case, the program concluded that men were better candidates and tended to give them higher scores when reviewing their CVs. This is just one example of the growing concern about the loss of transparency, accountability and fairness in algorithms, driven by the complexity, opaqueness, ubiquity and exclusivity of the environment in which they operate.

In search of fair forecasting models

Regardless of how an algorithm is tuned, all of them have biases: ultimately, forecasts are based on general statistics, not on somebody’s individual situation. Even so, we can use them to take wiser and fairer decisions than those made by individual humans. To get there, we urgently need new ways to mitigate the discrimination being found in these models, and we must make sure that their predictions do not unfairly penalise groups defined by sensitive characteristics (gender, ethnicity, etc.).
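
In practice, the first step is simply measuring such group-level disparities. As a minimal sketch, with made-up predictions and group labels rather than an example from the talk, one can compare the rate of positive predictions between two groups, a quantity often called the demographic parity gap:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (0/1, e.g. two gender categories)."""
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical predictions for eight candidates (1 = "recommend hiring").
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, sensitive))  # 0.75 vs 0.25 -> gap of 0.5
```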

Amongst other things, José Carlos Baquero stressed the need to focus on interpretability and transparency, making it possible to interrogate complex models, and to build models that are more robust and fairer in their predictions by modifying the objective functions being optimised and adding constraints.
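
One way to realise that second idea, shown here as a rough sketch with assumed data, names and hyperparameters rather than the specific method presented in the talk, is to add a penalty to the training objective that grows when the model’s average score differs between groups:

```python
import numpy as np

def fair_logistic_regression(X, y, sensitive, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression trained by gradient descent with an extra penalty
    lam * gap**2, where `gap` is the difference between the mean predicted
    score of the two groups encoded in `sensitive` (0/1)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted probabilities
        grad_ce = X.T @ (p - y) / len(y)            # cross-entropy gradient
        mask0, mask1 = sensitive == 0, sensitive == 1
        gap = p[mask0].mean() - p[mask1].mean()     # group score gap
        dp = p * (1.0 - p)                          # sigmoid derivative
        grad_gap = (X[mask0] * dp[mask0, None]).mean(axis=0) \
                 - (X[mask1] * dp[mask1, None]).mean(axis=0)
        w -= lr * (grad_ce + lam * 2.0 * gap * grad_gap)
    return w
```

Raising `lam` shrinks the gap between group scores at the cost of some raw accuracy, which is precisely the trade-off Baquero refers to in his conclusion below.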

In short, “building impartial forecasting models is not simply a question of removing certain sensitive attributes from the data set. We clearly need ingenious techniques to correct the profound bias in the data and force models to make more impartial predictions. All of this involves a reduction in the performance of our models, but this is a small price to pay to leave behind the biased world of yesterday and build a fairer tomorrow,” concluded Baquero.
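
A toy illustration of the first point in that quote, using synthetic data that is purely hypothetical: even when the sensitive attribute itself is removed, a correlated proxy feature can let a model relearn the same bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # sensitive attribute, later dropped
postcode = group + rng.normal(0, 0.3, n)   # proxy feature tied to the group
# Historically biased outcome: one group was favoured 0.7 vs 0.3.
label = (rng.random(n) < 0.3 + 0.4 * group).astype(int)

# Even with `group` removed from the training data, `postcode` reveals it
# almost perfectly, so a model trained on (postcode, label) can relearn
# the discriminatory pattern.
print(np.corrcoef(group, postcode)[0, 1])  # roughly 0.86
```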

Source URL: http://www.gmv.com/communication/news/algorithmic-bias-leaving-behind-biased-world-yesterday-and-building-fairer