Robin Mansell *
Modern digital platforms are distinguished by their use of digital technologies to bind, coordinate and link multiple suppliers with consumers or citizens through their data. The dominant market positions achieved by Google, Facebook, Amazon and Twitter are echoed in the Chinese market by Tencent, JD.com, Alibaba, Baidu, YouKu, QQ.com and Weibo. These developments are widely attributed to the ‘intelligent’ capabilities of machines, market entry by companies that disrupt older ‘single-sided’ business models, and the scale of globally distributed end-users of platforms. These platform strategies enable datafication, information circulation and commodification, constituting key elements of ‘platformisation’. When the interests of governments and civil society stakeholders in the platform society do not align with the platforms’ ambitions, behavioural and/or structural remedies are considered.
The platform companies are continuously seeking to strengthen and diversify their revenue streams by exploiting global, regional and local markets. In the international rule-based system that facilitates trading relationships, they can organise their operations with relatively little regard for national boundaries, although they face specific national constraints in China and, increasingly, in other national jurisdictions. A common platform strategy is to use public relations communications and the courts to delay, deny or deflect public criticism. When confronted with proposed or implemented changes in laws aimed at curtailing platform practices, these companies highlight their commitments to public values – ‘Google cares deeply about journalism’.
Voluntary codes of practice and ethical codes are being developed by the platforms and by governments in response to calls for action. But a burden of proof generally rests on those concerned about the power of platforms to demonstrate that platform self-regulation is not yielding societal well-being and solidarity. The neoliberal argument is that if there is a need for policy to stimulate innovation, foster connectivity, build trust, and increase transparency and accountability, it should be the responsibility of the platform operators. Even when the potential for exclusion, inequality and harms to democracy or young people is acknowledged, the solution is often said to lie in the further development of technical innovations.
When pro-active responses to platform dominance are considered, a crucial question is whether new institutional rules and norms can be drawn up to yield outcomes that will be valued by society: as Van Dijck et al. put it, how can public values ‘be forced upon the ecosystem’s architecture’? What criteria, evidence and institutional set-up should be put in place if the choice is taken to seek structural and/or behavioural change? Formal regulation by the state, co-regulation by platforms and the state, and various forms of independent governance provide multiple options, but each raises questions about how to ensure that clashes between economic and public values do not yield outcomes that are disadvantageous or harmful to specific individuals or groups of stakeholders.
In the face of growing criticism of the dominant platforms, there has been much discussion of whether and how to impose a legal ‘information fiduciary’ obligation or a ‘duty of care’ on the platforms. In the UK, a Parliamentary Committee has concluded that ‘self-regulation by online platforms which host user-generated content, including social media platforms, is failing’. Yet, as Marsden points out, ‘all that law can achieve is to enforce against a few bad actors to prevent the most egregious overreaching by companies and users’. The dilemma is to ensure that governments do not use a recognition of threats as a justification for giving the state controls that infringe on people’s rights and freedoms. When there are calls – as in the LSE’s Commission on Truth, Trust and Technology Report or in the UK Government’s 2019 Online Harms White Paper – for a new regulatory institution to curtail platform power, the independence of any such institution is stressed and a high premium is given to achieving transparency of platform operation and to generating information about the impact of platform algorithms. Most proposals emphasise a proportionate approach, with action requiring evidence of the severity of harms and the size of risk as well as an assessment of the expected impacts on the platforms’ behaviour.
The desire to intervene in the platform market is giving rise to tough language suggesting that the political will is accumulating to redirect the platform business models. In the UK, the House of Commons’ report on disinformation says ‘companies like Facebook should not be allowed to behave like “digital gangsters” in the online world, considering themselves to be ahead of and beyond the law’. Behavioural remedies aiming to encourage proactive agency on the part of platform users include requiring data mobility and interoperability standards between platforms, investing in strengthening media and data literacies, oversight of platform Terms of Service agreements, insisting on algorithm neutrality, formal recognition of users’ ownership rights in their data, taxation and, in some cases, making platforms legally liable for harmful content. Structural remedies are also being considered as the dominant platforms in the western world come to be regarded as the robber barons of the 21st century and as an ‘evil’ that has created ‘a new digital road to serfdom’. In its 2019 report, the UK Digital Competition Expert Panel concluded that ‘competition for the market cannot be counted on, by itself, to solve the problems associated with market tipping and “winner-takes-most”’.
Remedies for platform power cannot, however, tell us specifically where accountability should rest for protecting freedom of expression while, simultaneously, limiting hate speech, harmful information or bad online behaviour. While multiple initiatives are being discussed to protect citizens from the power of dominant platforms, for the most part, the priority in Western countries remains economic growth and technology innovation. The preservation of human autonomy and dignity in the face of platform power (and biased algorithms) is downplayed in a competitive race towards increasing machine autonomy. It is what can be priced, quantified and calculated in the marketplace that is privileged in terms of investment. Even in the face of an ‘information crisis’ where individuals are seen as vulnerable to misinformation, easily nudged and manipulated, and lacking in critical media literacy, a platform model is being encouraged that fosters a pervasive ‘culture of surveillance’ which is inconsistent with values of fairness, solidarity, accountability and democratic control. Resistance comes from data activism and data justice movements, with some calling for digital technology to be deployed in a knowledge commons and for public service media to be sustained as a means of counteracting the platforms’ power, but these movements have yet to mount an effective challenge to the dominant platforms.
Even if some successes in restraining platforms are assumed through varying combinations of behavioural or structural remedies and social movement resistance tactics, these still seem to foster a platform logic of intense datafication, reducing human autonomy and weakening collective decision-making. They do not tackle the underlying logic of datafication, a logic based on subliminal influence and attitude change on a mass, but highly individualised, scale. If this logic is to be tackled, it will require an effort to denaturalise the neoliberal hegemony that justifies datafication practices using ever-advancing technology. It will require an open debate on the moral limits of digital platform markets, one that is generative of alternatives and that challenges the idea that platform dominance is inevitable. This is the only way that it is likely to be feasible to secure public values in the platform era.
* Professor at the Department of Media and Communications, London School of Economics and Political Science, email: firstname.lastname@example.org