Observacom

A comprehensive analysis by AlSur examines Latin America’s AI regulation efforts

The lack of comprehensive regulation of artificial intelligence in Latin America opens the door to a range of risks, from the perpetuation of inequalities to the violation of human rights through mass surveillance and misinformation. Without clear frameworks, the technology advances without effective protection for citizens, which could entrench practices that are harmful to democracy in the region. The consortium of social organizations AlSur analyzed the cases of Brazil, Colombia, Mexico and Peru in a recent report, Regulatory Paths for AI in Latin America.
Photo: Pexels

In Latin America, the deployment of artificial intelligence systems has been promoted from a perspective of economic and social development, and that perspective has guided the debates on how to regulate a technology that crosses every sphere of citizens’ daily lives. “The regulatory scenarios in these countries are examples of legislative reactivity without a comprehensive legal framework, as well as a lack of knowledge of the material reality and the particular needs of each country,” a pattern the AlSur report finds repeated in every case studied.

The need for adequate regulation lies in the fact that, without clear limits defined by the State, in dialogue with civil society, academia and the technical community, AI can “lead to biased automated decisions that affect crucial aspects of people’s lives, perpetuating discrimination and inequalities in access to health services, education and employment.” Without clear guidelines, there is also the risk of “abusive use of the technology for mass surveillance, violating privacy and individual freedom,” as in cases involving facial recognition systems.

In the field of information and communication, it has been proven that AI systems can be used “to spread false or biased information, affecting public opinion and democracy.” Regulation is necessary in these cases because these practices destabilize trust in the essential institutions of any democratic society.

Finally, without adequate supervision there is a risk of exclusion due to digital gaps between those who develop and have access and the “vulnerable communities that do not have access to the technology or the education necessary to benefit from it.”

Among the different risks that arise, the lack of transparency stands out due to the opacity of the algorithms that drive AI systems. This makes it difficult for people to understand the criteria and processes with which automated decisions that affect them are made. This lack of transparency also prevents governments from auditing and supervising that their operation is compatible with the framework of compliance with the standards of the Inter-American Human Rights System.

A notable risk of insufficient regulation appears in the case of Brazil, where it is possible to observe that “until a comprehensive law is approved, bills appear aimed at specific uses of a certain type of artificial intelligence based on the cases of questionable uses that come to light.” An example is what happened after the scandal over deepfakes produced by university students that appeared to show fellow students in nude scenes: in 2023 alone, at least 25 bills were presented on this issue.

The report highlights how regulation adapts to “the needs of techno-solutionist projects” under the influence of international legislation, as the analysis of the Brazilian case shows. The problem is that the specific issues of each country and its population are not taken into account.

In the case of Mexico, which has a generic AI regulation that seeks to replicate the risk model of the European Union’s Artificial Intelligence Act, one of the problems is that “it focuses on risks that do not yet exist and takes its attention away from specific issues such as mass biometric surveillance under the premise of ‘national security’.”

On the other hand, the case of Peru highlights that different institutions have been created to promote the ethical use of various technologies, including AI, but “the analysis of the risks related to the use of AI and how the State should deal with them has been left aside”.

In Colombia, although there are ethical frameworks and regulatory mechanisms for an “ethical and sustainable adoption of AI,” these do not dialogue with existing regulations such as the personal data protection law or copyright legislation.

To achieve better regulation, one that takes the various problems and scenarios into account, the participation of civil society, academia and the technical community in public discussion forums is essential. In the cases of Peru, Mexico and Colombia, “the effective participation of civil society in the construction of legal frameworks for AI has been nonexistent, generating constant distrust regarding the guarantee of human rights,” AlSur states.


RELATED LINKS:

Caminos regulatorios para la IA en América Latina. Recopilación de estudios de caso de Brasil, México, Perú y Colombia

TikTok despide empleados y profundiza una moderación de contenidos automatizada a cargo de la IA

Nuevos compromisos hacia un enfoque regional para la gobernanza de la IA

Foro I&D insta a Estados garantizar la integridad de la información en la era de la IA
