Observacom

Less moderation, more misinformation: The impact of Meta’s new policies on Ecuador’s elections.

By OBSERVACOM.

Ecuador was the first country in Latin America to hold presidential elections following the shift announced by Meta in January 2025, which included abandoning independent fact-checking programs and reducing political content moderation. Local organizations warn that these new policies are already having concrete effects: fewer labels on verified disinformation, more circulation of AI-manipulated content, and minimal dialogue with key actors, in a context of growing opacity and polarization on its networks.

In January of this year, Meta announced significant policy changes that included, among other things, the elimination of third-party fact-checking (transferring that responsibility to "community notes"), reduced moderation of "political content" (including sensitive topics such as immigration and gender), and the relocation of moderation teams from California to Texas. The first of these changes applied immediately in the United States, but the company announced that it would not apply to the rest of the world.

This analysis by OBSERVACOM attempts to identify which of the newly announced policies were implemented in our region. Did the withdrawal of fact-checking apply only to the U.S., as Meta claimed, or was it extended to the region? Were agreements with regional fact-checkers maintained? What was the impact of the loosening of content moderation policies? The report also examines the level of transparency Meta and other platforms adopted regarding political advertising (or the lack thereof) on their networks.

To do so, we examined the recent presidential elections in Ecuador, the first national elections held in Latin America since the policy shift. The analysis draws on our own surveys and on interviews with local organizations that carried out content-verification work.

In the current context, platforms like Facebook and Instagram—among others—have not only facilitated the circulation of information of public interest, but have also become key channels for the dissemination of misinformation, as well as facilitating opaque political advertising practices by parties and candidates. 

The presidential elections, which culminated on April 13 with the reelection of Daniel Noboa, took place in a context marked by complaints about the platforms’ inaction in the face of electoral disinformation, according to interviewees. Added to this is an increasingly complex scenario, in which artificial intelligence and polarized discourse fuel disinformation on social media.

The two fact-checking organizations interviewed, Lupa Media and Fundamedios, warned of the lack of platform responsiveness, limited transparency regarding political ads, and the absence of effective content moderation mechanisms.

Fact-checking 

Carolina Bazante, director of Lupa Media, stated that although Meta still maintains agreements with independent fact-checkers in the region (unlike in the United States, where the company permanently abandoned the program), the effectiveness of these mechanisms has declined significantly in recent years. "Previously, platforms would label false content, but now they no longer do so, even when we report it," she stated.

A similar assessment was made by César Ricaurte, director of Fundamedios, the organization that manages the Ecuador Chequea portal and is still part of Meta’s independent fact-checking program. According to Ricaurte, “the platforms have been much less active” during the last elections compared to previous elections. 

This setback, in the case of Meta, aligns with the new policies announced in January 2025, which include the abandonment of the independent third-party verification program in the United States and a transition to a more lax content moderation model. Although contracts with fact-checking organizations remain formally active in Latin America, the fact that labels have been discontinued in Ecuador suggests that the institutional change is already having real effects in the region.

Content moderation

This lack of protection may be related to two key aspects of Meta’s new policies. On the one hand, the company announced that it would only intervene in cases of “illegal and highly serious violations,” although it did not specify what topics these new rules would cover. On the other hand, it also deliberately relaxed its interventions regarding “political content,” in line with an approach that prioritizes freedom of expression on topics considered sensitive. Under this framework, AI-generated electoral disinformation may have been left unmoderated—not by accident, but in line with the new policy.

For his part, Ricaurte emphasized that, between 2021 and 2023, there was a certain degree of dialogue between the platforms, electoral authorities, and civil society organizations. During that period, both Meta and TikTok expressed interest in supporting efforts against disinformation. However, in the 2024 campaign, that willingness to engage in dialogue disappeared. "Meta, which was previously one of the most active, has become very distant. It is very difficult to interact with them, even when removing journalistic content," Ricaurte explained. He also indicated that there was no engagement from TikTok or X, the latter described as the most inaccessible platform for institutional dialogue today.

Regarding Meta’s attitude in particular, the loss of dialogue with local stakeholders is not a side effect, but rather a foreseeable consequence of its new model: less human presence, more automation, and a redefinition of priorities that leaves regions like Latin America off the radar of direct attention.

Ricaurte also noted that the 2023 election had already been marked by high levels of violence—including the assassination of presidential candidate Fernando Villavicencio—and by a strong presence of TikTok in the strategies of the leading candidates. In this context of increasing polarization, the lack of engagement with platforms to monitor and mitigate disinformation is particularly worrying.

Transparency

On the other hand, both interviewees agreed that, although Meta's library of political ads offers some transparency, it remains insufficient. Bazante emphasized that pages that create and spread disinformation can be tracked, but it is impossible to know who their administrators are or how much they have spent on advertising. Furthermore, Noboa, the current president of Ecuador, reportedly spent more than a million dollars on advertising on Meta without that spending being officially counted as electoral expenditure, the director of Lupa Media points out. Ricaurte added that third-party-funded ads escape official oversight, making it difficult to trace the strategies behind these disguised opinion campaigns.

The situation with Google adds another layer of concern: the company prevents tracking of foreign-funded campaigns targeting Ecuadorian audiences, complicating efforts to monitor political ads and identify the sources of disinformation.

Artificial intelligence

Both organizations highlighted the particular impact of AI during Ecuador’s elections: deepfakes, AI-generated images, and automated narratives were used to spread misleading content. “This combination—less oversight by platforms and more AI-driven disinformation—creates an explosive situation,” Bazante warned.

It is important to highlight that the preliminary reports of the Electoral Observation Missions (EOM) also made observations on the use of AI in disinformation in the electoral context. The EOM of the Organization of American States (OAS) stated that in Ecuador, “it was observed during the first round that the electoral process was marked by political polarization, identity theft for the dissemination of false content, the circulation of already verified disinformation, and an increase in the use of artificial intelligence for the purpose of manipulating content.” In the case of the European Union’s EOM, it stated that “a wide dissemination of disinformation campaigns was detected on all monitored digital platforms, the reach of which was frequently amplified through paid ads and bot farms.”

Conclusions

As a result of the above, Bazante and Ricaurte share a concern: in Ecuador, digital platforms are failing to fulfill their responsibility to ensure a healthy information environment during electoral processes. The lack of transparency and dialogue, the weakness of appeals systems, and the growing sophistication of disinformation campaigns pose a fundamental challenge to democracy. As Ricaurte emphasized, "It has been a highly polarized campaign, with a lot of disinformation and a lot of camouflaged political advertising," with no accountability whatsoever for the platforms that sustain this system.

Although Meta has not formally announced the implementation of its new framework outside the United States, the practices observed during the Ecuadorian elections demonstrate a shift in direction: laxer moderation (less labeling of content reported as "disinformation" and greater circulation of AI-manipulated content) and less dialogue and space for independent fact-checkers (local partners still exist, but without the tools or channels that previously allowed them to escalate cases).

In short, Meta’s transition toward “less intervention and more freedom of expression” resulted, in the Ecuadorian case, in a reduced ability to curb disinformation during election periods and greater opacity in the platform’s actions and political propaganda on its networks, just as the widespread use of AI multiplied the risks. The new policies are already negatively impacting the quality of democratic debate and democracy in Latin America.
