Sharing solutions to the challenge of fairness in predictive models of artificial intelligence

08/04/2019

Universidad CEU San Pablo has hosted the first conference on artificial intelligence and ethics, under the title "Looking for an ethical algorithm". Insightful papers were presented by outstanding experts in the field, including José Carlos Baquero, GMV's Big Data and Artificial Intelligence Manager, who drew on his AI experience to explain how to build fairer, bias-free algorithms.

From credit applications to online dating, machine learning models are automating our day-to-day decision making. Nonetheless, over and above the positive impact of artificial intelligence on business models, we also have to bear in mind its negative externalities (fairness, responsibility, transparency and ethics) in the algorithms used for these decision-making processes.

In his speech, José Carlos Baquero, Big Data and Artificial Intelligence Manager of GMV's Secure e-Solutions sector, stressed the importance of fairness as one of the mainstays of ethical artificial intelligence. He invited his audience to reflect on this while presenting the latest techniques for mitigating the discrimination that can emerge in our models. This means emphasizing interpretability and transparency, subjecting complex models to rigorous interrogation and, at the same time, making our predictions more robust and fairer by modifying the optimization objectives and adding constraints.

It is clear, however, that building impartial predictive models is not a simple matter of removing sensitive attributes from the training data, as the sketch at the end of this article illustrates. Ingenious techniques are required to correct the deep-lying bias in the data and force models to make more impartial predictions. Moreover, making impartial predictions comes at a cost: some loss of the model's predictive performance.

In short, by analyzing and better understanding both the predictive model and the machine learning process, we can head off problems and build a sense of fairness into the predictive models of artificial intelligence. It is a case of taking impartiality seriously and making sure our predictions do not unfairly harm our environment, in the interests of getting the very best out of artificial intelligence.

VIDEO OF THE ARTIFICIAL INTELLIGENCE AND ETHICS CONFERENCE
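The point that simply dropping the sensitive attribute ("fairness through unawareness") is not enough can be shown with a minimal sketch. The code below is an illustration under assumed conditions, not GMV's implementation: it builds a synthetic dataset in which a proxy feature is correlated with group membership, trains a model without the sensitive column, and measures the demographic parity gap. The feature names, the bias in the synthetic labels and the metric choice are all assumptions made for the example.

```python
# Minimal sketch: removing a sensitive attribute does not remove bias
# when another feature acts as a proxy for it. All names and data are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Sensitive attribute (group 0 vs. group 1), a correlated proxy feature
# (e.g. a postal-code-like variable) and one genuinely informative feature.
group = rng.integers(0, 2, n)
proxy = group + rng.normal(0, 0.3, n)   # strongly correlated with group
skill = rng.normal(0, 1, n)             # legitimate predictor

# Historical labels are biased: group 1 is favoured beyond what "skill" explains.
y = (skill + 0.8 * group + rng.normal(0, 0.5, n) > 0.4).astype(int)

def demographic_parity_gap(model, X, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    pred = model.predict(X)
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

# "Fairness through unawareness": train without the sensitive column.
X_unaware = np.column_stack([proxy, skill])
clf = LogisticRegression().fit(X_unaware, y)

print("Accuracy:", clf.score(X_unaware, y))
print("Demographic parity gap (sensitive attribute removed):",
      demographic_parity_gap(clf, X_unaware, group))
# The gap remains large because "proxy" leaks group membership, which is why
# the talk argues for explicit fairness constraints or modified objectives.
```

Running the sketch shows a sizeable gap in positive-prediction rates between the two groups even though the sensitive attribute was never given to the model, and closing that gap with constraints typically reduces accuracy somewhat, which is the fairness-performance trade-off mentioned above.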