Observacom

The challenges of implementing the Audiovisual Media Services Directive in Europe, according to Joan Barata

September of this year is the deadline for implementing the Audiovisual Media Services Directive (AVMSD) in the member states of the European Union. Joan Barata, a consultant on intermediary liability at the Center for Internet and Society at Stanford University, draws attention to the challenges in two areas: the new obligations for video-sharing platforms concerning hate speech and the protection of children, on the one hand, and the application of the country-of-origin principle and jurisdiction, on the other.

Barata warns that Article 28b of the Directive introduces a series of new obligations for video-sharing platforms concerning the prevention and moderation of content that constitutes hate speech or child pornography, affects children's physical and mental development, violates obligations on commercial communications, or can be considered "terrorist", obligations that he finds problematic.

According to the specialist, the main problem is that platforms will not only have to take down certain illegal hate speech content, but will also hold the power to define, through their terms of service, which expressions may constitute hate speech. They may remove such content and uphold the removal even when a user appeals and the public authorities hearing that appeal declare the content legitimate (in the legal sense). This is especially serious because platforms cannot be forced to restore content, even legal content, when they consider that it violates their terms of service.

In this regard, Barata writes in his opinion column published on the LSE blog: "Platforms' ability to prohibit lawful content based on ToS [terms of service] provisions intersects with the protective measures for platform users' rights established in the AVMSD. While users have a right to challenge before national authorities a provider's decision to remove a video based on law, that right of redress does not appear to exist where the provider has acted based purely on its ToS."

Regarding country of origin and jurisdiction, Barata notes a paradoxical situation: "throughout the EU, issues as nationally-sensitive as hate speech or terrorism content will be (almost) exclusively defined by Irish authorities", where most of the big platforms have their operations.

Barata points out that while the future Irish provisions applicable to video-sharing service providers will affect the services these operators provide across the European Union, regulation of non-audiovisual online content will remain the responsibility of each member state.

According to Barata, the Broadcasting Authority of Ireland (BAI) makes important observations in its submission to the Irish government's public consultation on the Regulation of Harmful Content and the Implementation of the Audiovisual Media Services Directive.

Regarding country of origin, the Irish regulator's document admits that the proposed scheme will inevitably weaken the power of EU member states to combat illegal audiovisual content in their own jurisdictions.

Its proposal develops the idea of giving the regulator the power to establish a "macro" regulatory framework, while companies will take the "micro" decisions on individual cases. According to Barata, this means that video-sharing service providers "will transform these 'macro' guidelines into specific ToS rules and, on such a basis, adopt the decisions... this means that most complaints on allegedly illegal content will actually be addressed by" video-sharing service providers.

"The proposal does not seem to properly acknowledge and establish adequate mechanisms to address the foreseeable confusion between takedowns based on legal and regulatory provisions and those based on ToS only", Barata concludes.

Below we share Joan Barata's full article (in English), Regulating content moderation in Europe beyond the AVMSD, published on the LSE blog.

Regulating content moderation in Europe beyond the AVMSD

The EU Audiovisual Media Services Directive (AVMSD) was adopted in November 2018, aiming to better reflect the digital age and create a more level playing field between traditional television and newer on-demand and video-sharing services. This is the third contribution made by Joan Barata, a consultant on intermediary liability at Stanford’s Cyber Policy Center, to a series of exchanges with former ERGA chair Lubos Kuklis on one of the most controversial, and perhaps most opaque provisions included in the latest version of the AVMSD.

As the deadline for implementation approaches (September 2020), the AVMSD will raise important challenges with regard to delineating a proper separation between:

  • Platform content moderation decisions based on the law, which are associated with a series of legal safeguards and protections for users

  • Measures adopted on the exclusive basis of their own terms of service (ToS), which are not.

In addition, the application of the principle of country of origin will presumably concentrate the responsibility for regulation of the most important video sharing platforms in the hands of Irish authorities. This means that moderation of something as nationally-sensitive as hate speech or terrorism content will be primarily defined by a single (and small) EU member State, no matter where in the Union the author of the content and its targeted audience are based. This creates a unique regulatory scenario which completely diverges from the application of the same principle to “traditional” audio-visual media.

New obligations for video sharing platforms

Article 28b encompasses a series of duties of so-called video sharing platforms (VSPs) concerning the prevention and moderation of content that constitutes hate speech and child pornography, affects children’s physical and mental development, violates obligations in the area of commercial communications, or can be considered as terrorist. National authorities (mainly independent media regulatory bodies) are given the responsibility of verifying that VSPs have adopted “appropriate measures” to properly deal with the types of content mentioned above (alongside other undesirable content). This includes the guarantee that platforms properly revise and enforce their ToS; have appropriate flagging, reporting, and declaring functionalities; implement age verification or rating and control systems; establish and operate transparent, easy-to-use and effective procedures to resolve users’ complaints; and provide media literacy tools.

In his last post, Lubos correctly points out that the new AVMSD represents a significant change for platform users, as it will create "real, enforceable rights" for the first time. I agree that this legal scheme is far better than "leaving the users entirely under the power of the platforms, with no real rights, no enforceable protections and no guarantees for fair treatment". In addition, Lubos underscores the fact that platforms would not only bear a duty to take down certain kinds of content, but they may also have an obligation to leave legitimate content online. This is, in my understanding, the most problematic area in our ongoing discussion.

The fact that, for example, there are legal provisions on hate speech that will necessarily need to be applied by platforms through their ToS does not mean that platforms do not have their own (and broader) rules applicable to the very same area, as can be found, for example, in Twitter’s Rules or Facebook’s Community Standards. These rules, in general (and in Europe at least), go beyond what is established by national legislation and even international standards.

Platforms will not only have the duty of taking down illegal hate speech, but they will also hold the power to eliminate legitimate (in the sense of fully legal) content that violates their own ToS. Even if a regulator, upon a user’s appeal, considers that something taken down does not fall under the category of illegal hate speech according to national legislation, the platform in question could not be forced, according to the AVMSD, to leave it online, inasmuch as it asserts that the decision was adopted according to its private and broader anti-hate speech rules.

In a recent report to the UN Human Rights Council, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression David Kaye emphasises that companies must “(d)efine the category of content that they consider to be hate speech with reasoned explanations for users and the public and approaches that are consistent across jurisdictions”, and that “any enforcement of hate speech rules involves an evaluation of context and the harm that the content imposes on users and the public”. At the same time, States must adopt “laws that require companies to describe in detail and in public how they define hate speech and enforce their rules against it”.

These recommendations are very much in line with the intentions behind the European audio-visual legislation. Kaye’s report also refers to the fact that “companies should define how they determine when a user has violated the hate speech rules” and, more importantly, that when “company rules differ from international standards, the companies should give a reasoned explanation of the policy difference in advance, in a way that articulates the variation. For example, were a company to decide to prohibit the use of a derogatory term to refer to a national, racial or religious group – which, on its own, would not be subject to restriction under human rights law – it should clarify its decision in accordance with human rights law”. Putting aside other considerations, this is important as it suggests that the Special Rapporteur accepts that ToS (even in areas as important as hate speech) may go further than legal standards (at least international ones), inasmuch as the requirements of certainty, necessity and proportionality are properly met. As discussed in a previous post, platforms’ ability to prohibit lawful content based on ToS provisions intersects with the protective measures for platform users’ rights established in the AVMSD. While users have a right to challenge before national authorities a provider’s decision to remove a video based on law, that right of redress does not appear to exist where the provider has acted based purely on its ToS.
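The redress gap described above can be modelled as a simple decision function. The following is a hypothetical Python sketch, not anything defined in the AVMSD itself; all names (`Takedown`, `appeal_outcome`, the ground labels) are illustrative. It shows why a takedown justified by law can be reviewed and reversed, while one re-justified or originally based only on ToS cannot:

```python
from dataclasses import dataclass

# Illustrative model of the redress gap: the outcome of a user's appeal
# depends on which ground the platform cited for the takedown.
LEGAL = "legal"  # takedown based on national law (e.g. illegal hate speech)
TOS = "tos"      # takedown based only on the platform's terms of service

@dataclass
class Takedown:
    video_id: str
    ground: str          # LEGAL or TOS
    tos_violation: bool  # does the platform also claim a ToS breach?

def appeal_outcome(t: Takedown, regulator_finds_legal: bool) -> str:
    """Outcome of a user's appeal to the national regulator."""
    if t.ground == LEGAL and regulator_finds_legal:
        # The legal basis fails on appeal, but the platform may still
        # invoke its own, broader ToS rules to keep the content down.
        if t.tos_violation:
            return "stays down (re-justified under ToS)"
        return "restored"
    if t.ground == TOS:
        # No right of redress appears to exist for ToS-only removals.
        return "appeal unavailable"
    return "stays down (takedown lawful)"

# A video removed as "illegal hate speech" is found lawful on appeal,
# yet remains down because the platform's broader ToS also prohibit it.
print(appeal_outcome(Takedown("v1", LEGAL, tos_violation=True), True))
```

The sketch makes the asymmetry concrete: the regulator's finding only controls the outcome on the `LEGAL` branch, and even there the platform retains a ToS fallback.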

Country of origin and jurisdiction issues

Another very sensitive area mentioned by Lubos in his recent article concerns jurisdiction issues. As he correctly points out, the application of the country of origin principle to platform regulation will most probably put the modest (in terms of resources) regulator in Ireland in charge of enforcement of the AVMSD rules that are applicable to entities like Google or Facebook. Moreover, the aforementioned regulator would apply these rules according to the parameters set by the Irish legislator in the transposition process.

The country of origin principle has been in place as part of the European Union broadcasting and other audio-visual media services regulation since 1989. It essentially concentrates jurisdiction over audio-visual content in the hands of State authorities of the country where the service provider is established, hosts its main office or takes fundamental editorial decisions. This can be considered as a reasonably manageable and applicable principle when it comes to audio-visual media services as such (that is, traditional broadcasting and on-demand audio-visual services such as those provided by OTT companies). Media companies (particularly television stations) in Europe, even if they tend to form continental conglomerates, still perform their activities, take their decisions and shape their offer on the basis of local markets (language being one, but not the only factor to explain this). VSPs, on the contrary, and particularly big platforms like YouTube, Facebook or others, tend to concentrate their main offices and strategic decisions in one single EU country, on the basis of fiscal considerations, data protection establishment principles, or other factors.

I thus believe that this creates a new regulatory scenario. European authorities count on a legal instrument aiming at establishing general audio-visual content rules, to be then adapted to the specific national legal systems and cultures within each Member State. However, the country of origin principle leads, at least with regards to the biggest and probably most influential VSPs, to the paradoxical consequence that I mentioned above: throughout the EU, issues as nationally-sensitive as hate speech or terrorism content will be (almost) exclusively defined by Irish authorities. Was this outcome anticipated by those who participated in the different drafting phases of the Directive?

Pending proposals and unanswered questions

Overview

Recent months have seen the introduction of important proposals from both ERGA and the Irish regulator, spelling out in more detail their intended interpretation of the AVMSD. These build on ERGA's 2016 opinion with regard to the newly proposed Directive, including new provisions applicable to VSPs, which highlighted:

  • The need to define the “practical steps VSPs and regulators will need to take under the provisions of Article 28a”,

  • The “ability of (national regulatory authorities) to establish jurisdiction”,

  • The "interpretation of 'editorial responsibility' and 'dissociable content' within the meaning of VSP providers".

These are three fundamental issues to be defined by national authorities in coordination with the European Union's bodies in order to create a level playing field for the effective application of the Directive across the continent.

At a meeting held in November 2019, in parallel with the approval of the Directive, the organization adopted a document entitled “A Framework for Effective Co-Regulation of Video-Sharing Platforms,” which raised many unresolved implementation questions relating to key issues such as openness and transparency of platforms, accountability of platforms vis-à-vis regulators, accessibility of complaint procedures, effective handling and resolution, and most importantly, backstop powers in the hands of regulators. As for this last question, it is crucial to define and provide legal certainty regarding the specific powers of intervention by regulatory authorities, procedural aspects, and the possible and legitimate ways for regulators to affect platforms’ content moderation policies.

In this context, the Broadcasting Authority of Ireland (BAI) published in June 2019 a submission to the Government's Public Consultation on the Regulation of Harmful Content and the Implementation of the Revised Audio-visual Media Services Directive, which was initiated by the Minister for Communications, Climate Action and the Environment. BAI's submission proposed a key role for itself in the regulation of online harmful content in general (including on platforms not covered by the AVMSD). It is important to note that while possible future Irish provisions applicable to VSPs will affect services provided by these operators across the European Union, regulation of non-audio-visual online content remains the responsibility of each member State, as shown by the emergence of national proposals such as the Online Harms White Paper in the UK or the French Government's report aiming at elaborating proposals to "make social media platforms more accountable".

With respect to the AVMSD, BAI lays out a number of important observations. It assumes that it will be in charge of most significant video sharing platforms in Europe, specifying a preliminary list that includes platforms whose main purpose is to provide VSP services (YouTube, TikTok, Vimeo, DailyMotion, Twitch) and those that provide them as part of their essential functionalities (Facebook, Twitter, Instagram, Snapchat, LinkedIn, Reddit). To further complicate matters, audio-visual content provided by VSPs will only be subject to the new provisions of article 28b when content is originally provided by users who cannot themselves be considered as AVMS providers. In other words, when VSPs become a transmission system to disseminate traditional television programs or to offer video on demand services under the editorial responsibility of an entity that itself qualifies under law as an audio-visual provider, the rules for linear and non-linear audio-visual services will apply instead. On the other hand, and as previously mentioned, when online platforms disseminate content different from audio-visual services or third party audio-visual content, a different set of rules will also apply.

Macro regulation versus micro decisions

Perhaps of greatest interest to observers outside Ireland is the proposal's discussion of BAI's responsibilities consistent with the country of origin principle. Somewhat remarkably, the submission admits that the proposed regulatory scheme will inevitably weaken EU member States' power (except in one case, obviously) to combat illegal audiovisual content online in their own jurisdictions. The proposal further develops the idea of giving the regulator the power to establish "macro" regulatory frameworks and handing over to companies the thousands of decisions on "micro" individual cases. The former means that the regulator will broadly define and develop the scope of the categories of illegal content included in the Directive. VSPs will then transform these "macro" guidelines into specific ToS rules and, on such a basis, adopt the decisions mentioned above, while the statutory regulator will only focus on basic and broad clarification issues of "macro" rules, as well as certain types of aggregated complaints. This means that most complaints on allegedly illegal content will actually be addressed by VSPs. Let's not forget that such decisions will not be taken on the sole basis of "macro" legal and regulatory guidelines incorporated and developed in their ToS, but also on the basis of any additional internal rules they may decide to adopt. The proposal does not seem to properly acknowledge and establish adequate mechanisms to address the foreseeable confusion between takedowns based on legal and regulatory provisions and those based on ToS only.
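The macro/micro split described above can be sketched as a routing function. This is a hypothetical illustration (the category names, rule labels and `route_complaint` helper are invented for the example, not taken from BAI's submission); it shows both why most complaints land with the VSP and why, once "macro" guidelines are folded into ToS alongside private rules, the two grounds become hard to tell apart:

```python
# Illustrative sketch of a "macro"/"micro" scheme: the regulator defines
# broad categories; the VSP folds them into its ToS next to its own rules.
MACRO_CATEGORIES = {"hate_speech", "terrorist_content", "child_protection"}

# A VSP's ToS mixes regulator-derived rules with purely private ones,
# which is the source of the confusion flagged in the text above.
VSP_TOS_RULES = {
    "hate_speech": "derived from macro guidelines",
    "terrorist_content": "derived from macro guidelines",
    "child_protection": "derived from macro guidelines",
    "graphic_violence": "platform's own rule",
    "nudity": "platform's own rule",
}

def route_complaint(category: str, aggregated: bool = False) -> str:
    """Decide who handles a complaint about allegedly illegal content."""
    if aggregated and category in MACRO_CATEGORIES:
        # The statutory regulator only sees aggregated/systemic issues.
        return "statutory regulator"
    if category in VSP_TOS_RULES:
        # The vast majority of individual complaints end up here.
        return "VSP (decides under its ToS)"
    return "no applicable rule"

# An individual hate speech complaint goes to the VSP, not the regulator;
# only an aggregated complaint in a macro category reaches the regulator.
print(route_complaint("hate_speech"))        # VSP (decides under its ToS)
print(route_complaint("hate_speech", True))  # statutory regulator
```

Note that from the user's side the function's answer is the same whether the ToS rule applied was "derived from macro guidelines" or the "platform's own rule", which is precisely the ambiguity the proposal leaves unaddressed.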

Complaints and appeal mechanisms

In order to compensate for the aforementioned almost complete loss of powers of national regulators, the submission also suggests that companies grant to certain bodies the status of “priority complainant,” which would create the responsibility to pay special attention to the public interest considerations brought up by such complainants. In particular, regulatory bodies from other member States would have such priority complainant status.

This complex design not only puts in the hands of VSPs the decision of when legal boundaries are crossed, but also seems to limit access to appeal mechanisms before public bodies. It is important to note, as briefly mentioned earlier, that according to paragraph 7 of new article 28(b) of the AVMSD, Member States "shall ensure that out-of-court redress mechanisms are available for the settlement of disputes between users and video-sharing platform providers". Putting aside practical considerations, BAI's proposal does not mention how platform users may exercise their basic right to appeal, before an independent or judicial body, individual or specific decisions taken on the grounds of a legal provision affecting no less than a fundamental right. The safeguards established in the AVMSD shall apply to any decision a platform makes to remove content based on a perceived violation of law, when it relates to the content provisions established in the Directive and affects the right to freedom of expression of any individual.

The immediate future

In December 2019, Ireland’s Department of Communications, Climate Action and Environment published the General Government Scheme of the Online Safety & Media Regulation Bill, previously adopted by the Irish Government.

This General Scheme only contains a few guidelines that should be developed in the text of a future and more articulated legal proposal. It is therefore important to note that this Scheme does not give a completely accurate view of the regulatory solutions that will finally be adopted with regards to online safety and the implementation of the provisions contained in the AVMSD.

From a formal point of view, the future legal instrument will amend the Broadcasting Act of 2009 currently in place and, in particular, will establish a new Media Commission that will assume all competences and staff of BAI. The Media Commission will also take over new functions in the areas covered by the new rules.

That said, it is also important to note that the Scheme basically follows the main principles and proposals included in BAI's submission. The aforementioned differentiation between macro and micro content decisions is included via the introduction of a requirement for the regulator to establish "a scheme wherein it can receive notice of systemic issues with relevant and designated online services from designated bodies". In other words, the scheme also opts for a mechanism of super complaints submitted by super complainants (using here the language of the explanatory notes included in the Scheme). It will be up to the regulator to define, in particular, the functioning of this complaint scheme, the nominating process, and the process to be followed and standards to be met by nominated complainants in notifying systemic issues (or super complaints). Other important elements to be noted from the proposal are the references to the need for the regulator to cooperate with other bodies, such as those "outside the State which perform similar functions to the Commission, including members of the European Regulators Group for Audio-visual Media Services", and the absence so far of an out-of-court independent mechanism to deal with specific controversies between VSPs and users.

As mentioned above, a few further steps still need to be taken in order to arrive at a fully-fledged and articulated legal proposal that provides answers to all the sensitive questions that have already emerged regarding the transposition of the AVMSD by Ireland. Much dialogue and reflection is still needed, both domestically and among EU partners, in order to reach a regulatory model based on a proper understanding of the role and capacities of VSPs, one that can provide adequate and proportionate solutions to effectively protect citizens' rights and create a higher degree of legal certainty when it comes to the regulation of online services.

RELATED LINKS:

Recommendations for democratic regulation of audiovisual services on the Internet. New OBSERVACOM document for UNESCO

Content moderation regulation on the agenda in several developed countries

Access Now: recommendations for content moderation on the Internet
