Internal documents reveal that Meta, the parent company of Facebook and Instagram, implemented measures that exempt high-spending advertisers from standard content moderation processes. The decision seeks to prevent “false positives” that could affect its revenue and deepens concerns about the lack of fairness and transparency in content moderation.
Meta, the parent company of Facebook and Instagram, has implemented measures that exempt its largest advertisers from standard content moderation processes, according to 2023 internal documents obtained by the Financial Times. In other words, the company has given “preferential treatment to large advertisers,” whereby “these companies’ campaigns are not put through the usual filters, in order to protect the business they generate.”
Under the measures, companies spending at least $1,500 per day are classified as high spenders (“P95 Spenders”), placing them among the top 5% of advertisers by spend. These advertisers are exempt from automatic ad restrictions, and their campaigns instead undergo manual review to avoid false positives. Safeguards were also put in place for business accounts spending more than $1,200 over a 56-day period and for individual users spending more than $960 over the same period; in these cases, the processes are designed to “suppress detections” based on specific characteristics such as spending level.
Meta justifies these measures by arguing that higher ad spending means broader reach, so the consequences of mistakenly removing ads or blocking accounts would be more significant. The company has acknowledged that its automated systems have at times incorrectly flagged high-spending accounts for alleged violations of its internal policies. A company spokesperson noted that these advertisers face a disproportionately high risk of erroneous violation notices.
It has not been confirmed whether this practice is temporary or ongoing, as Meta has not provided details. However, the company has previously acknowledged the challenges associated with large-scale automated moderation and the errors that can result from it.
The strategy has raised concerns about the fairness and transparency of Meta’s content moderation processes. According to documents leaked in 2021 by former employee Frances Haugen, the company had previously implemented internal systems to protect posts by politicians, journalists, and public figures from erroneous removals, although in some cases those measures also shielded users who violated its internal rules.