The Government prepares mechanisms to measure the social impact of algorithms

Algorithms affect more and more spheres of our lives, and they sometimes harm citizens. The Government has decided to take action: it has launched a battery of measures aimed at analyzing the social impact of artificial intelligence (AI) systems before they become operational. With that objective, and to give the coming changes more momentum, Spain has offered to act as the guinea pig for the draft AI Regulation presented by the European Commission, which is expected to come into force in 2023.

The pilot test would mean bringing the implementation of the regulation forward by one year, to 2022. It would entail evaluating the risk posed by the application of each algorithm (facial recognition or scoring systems, for example, are considered high risk and are therefore prohibited with some exceptions) and preparing audit mechanisms to ensure that systems do not discriminate on the basis of race, gender or income (so-called algorithmic biases).
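What might such an audit check look like in practice? Below is a minimal sketch in Python, assuming a binary decision system and a recorded protected attribute; the four-fifths threshold and all function names are illustrative assumptions, not part of the draft regulation, which does not prescribe a specific metric.

# Hypothetical demographic-parity check: compares the rate of favorable
# decisions (loans approved, candidates shortlisted...) across protected
# groups. The 80% threshold mirrors the common "four-fifths rule".
from collections import defaultdict

def selection_rates(decisions, groups):
    """Favorable-decision rate per protected group.

    decisions: list of 0/1 outcomes produced by the algorithm.
    groups:    list of group labels aligned with decisions.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, groups, threshold=0.8):
    """Flag a disparity when any group's rate falls below `threshold`
    times the best-treated group's rate."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Toy data: a scoring system deciding on eight applications.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(decisions, groups))     # {'a': 0.75, 'b': 0.25}
print(passes_four_fifths(decisions, groups))  # False -> potential bias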

Spain would be the only country to apply this European regulation ahead of schedule. With this move, the Government intends to place the country at the forefront of this important regulation and accelerate “the changes that sooner or later will have to take place,” say sources at the Secretary of State for Digitalization and Artificial Intelligence (Sedia): preparing analysis teams, defining standards, developing procedures for action, and so on. The same sources confirm that the agreement between the Government and the European Commission is well advanced. If there are no setbacks, it will be announced after the summer break.

Early application of the draft European regulation would not impose additional legal obligations on the companies operating the algorithms to be monitored. The sector, in fact, has welcomed the initiative and has offered to collaborate as needed with Sedia, which sits under the Ministry of Economic Affairs and Digital Transformation headed by First Vice President Nadia Calviño. The Government's idea is to create a sandbox, a secure test environment similar to the one already developed for the fintech sector.

Risk evaluation

Both the National Artificial Intelligence Strategy, presented at the end of last year, and the European AI regulation identify algorithmic auditing as one of the pillars of the risk assessment system.

The Government has not yet decided which body will be in charge of carrying out these audits in Spain. “We have not reached that discussion,” say sources at the Secretary of State led by Carme Artigas. One option would be to create an independent office, as already exists in the field of privacy with the Spanish Data Protection Agency (AEPD).

Before reaching that point, and to prepare the ground, the Observatory on the Social Impact of Algorithms (Obisal) has just been launched. This entity, which depends on the National Observatory of Technology and Society (Ontsi), will be in charge of developing the benchmark indicators to be used in the audits. Obisal is currently recruiting experts and setting up sectoral working groups to analyze AI systems. “Algorithms are not sectoral, but their impact is,” they stress at Sedia.
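Since Obisal has not yet published its indicators, the following is only a sketch of that sectoral idea: the same disparity metric computed separately for each sector where a system is deployed. The sector names, record format and choice of metric are all assumptions made for illustration.

# Illustrative only: groups audit records by sector and reports, per
# sector, the gap in favorable-outcome rates between protected groups.
from collections import defaultdict

# (sector, group, outcome) triples as a stand-in for real audit data.
records = [
    ("credit", "a", 1), ("credit", "a", 1),
    ("credit", "b", 1), ("credit", "b", 1),
    ("hiring", "a", 1), ("hiring", "a", 0),
    ("hiring", "b", 0), ("hiring", "b", 0),
]

def parity_gap_by_sector(records):
    """Largest difference in favorable rates between groups, per sector."""
    # sector -> group -> [favorable, total]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for sector, group, outcome in records:
        stats[sector][group][0] += outcome
        stats[sector][group][1] += 1
    return {
        sector: max(f / t for f, t in groups.values())
                - min(f / t for f, t in groups.values())
        for sector, groups in stats.items()
    }

print(parity_gap_by_sector(records))
# {'credit': 0.0, 'hiring': 0.5} -- the same kind of system can show
# very different gaps in different sectors, which is Sedia's point.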

This observatory will produce reports that will feed into a specific methodology for algorithmic audits, expected next year. A registry of public sector algorithms, promoted by the Secretary of State itself, will also contribute to this. The objective, according to government sources, is for this process of evaluating the social impact of such systems to materialize in a kind of quality seal for algorithms. “All this work is preparing the ground for 10 or 15 years from now,” they say at Sedia.

