The government of Luiz Inácio "Lula" da Silva is working on a new proposal for regulating digital platforms that includes "duty of care" obligations requiring platforms to remove content that violates existing laws, to combat hate speech and mass disinformation, and to increase transparency about their content moderation.
Discussions about the initiative began in recent weeks, following the crisis over Pix (a Brazilian electronic payment system recently targeted by a disinformation campaign claiming the tool was going to be taxed) and Meta's decision to remove or relax content controls on its platforms, including Facebook and Instagram.
According to Folha, the platforms' obligations would fall into three areas: transparency, a duty of "precaution and prevention," and the "reduction of systemic risks." Regarding the first point, the project says companies should be transparent about their terms of use, their policies, and how their content recommendation algorithms work, as well as about their moderation reports and external audits.
As for the "duty of care," the project introduces a concept similar to the corresponding responsibility in European legislation, which would oblige platforms to remove content considered illegal under Brazilian rules without the need for a prior court order.
The proposal establishes that individual content control, that is, the evaluation of specific publications, would be carried out by the platforms themselves, based on the rules defined by law. This self-regulation would be supervised by the State through a committee created to monitor the general behavior of each platform and determine whether it follows the established criteria.
In addition to self-regulation, the text sets out two other levels of responsibility. Under the first, companies would have the duty to act when they receive "extrajudicial notifications" about publications containing "disinformation about public policies." Under the second, platforms would only be required to act after a judicial decision on matters involving journalistic content, protection of reputation, and offenses against the honor of public officials.
It also provides for measures against "systemic risks," such as those generated by the platforms' own algorithmic design, which enables the mass amplification of disinformation, hate speech, and extremist content. In this context, how the word "disinformation" is defined is a critical point in the discussions on regulation.
Another central axis of the debate is the creation of a regulatory authority, conceived as a state supervisory committee, responsible for monitoring platforms' compliance and applying sanctions when they fall short. It has not yet been defined which bodies would form part of this committee, although options under consideration include the National Telecommunications Agency (Anatel), the National Data Protection Authority (ANPD), and the Administrative Council for Economic Defense (CADE).
In addition, the initiative proposes specific rules for electoral periods, establishing accelerated content moderation procedures during campaigns, which has been a controversial issue in Brazil following the 2022 elections.
The project still faces disagreements within the government, and it has yet to be decided whether it will be sent to Congress as a standalone initiative or incorporated into another legislative project already in progress. Furthermore, its viability will depend on the Federal Supreme Court's ruling on the Marco Civil da Internet (Brazil's civil framework for the internet), which could change platforms' liability for third-party publications.
RELATED LINKS:
Governo Lula discute novo projeto para redes com regras de remoção de conteúdo
La falsa narrativa de censura: Meta y su resistencia a la regulación democrática