Facebook recently announced the rollout of a new content and recommendation policy that will make it more difficult for political topics to appear on its platform and in users’ feeds. The measure was prompted by the massive growth of political content and viral activist groups, as well as by the strong criticism levelled at Facebook in recent years over its alleged inability to control the publication of political messages and false information. It will take a few months to refine, but it is expected to have very concrete consequences for users.
The objective is clear: to limit the number of political messages visible on users’ screens. Facebook will stop recommending “political and civic” groups (which can bring together activists or supporters who identify with a particular political cause). Mark Zuckerberg’s platform wants to put an end to the abuses of recent years and prevent the development of groups with several million members, capable of destabilizing entire electoral processes and massively spreading false information. Of course, the details of this new policy remain unknown, particularly with regard to smaller-scale, local discussion groups (although they, too, have sometimes served as particularly violent battlegrounds between activists).
This decision is all the more interesting when juxtaposed with another set of measures aimed at businesses. Campaign managers will now be able to block certain topics for the advertising they prepare, avoiding visual association with themes considered repellent or negative, such as “crime and tragedy” and “politics and news.” This is further evidence of the unpopularity of these topics with businesses and of their toxic nature, in a context where the platform (like Twitter, YouTube and other industry players) has been widely criticized for its inability to regulate hate speech and the radicalization fostered by its features.
These decisions by Facebook, which could take several months to implement and whose technical details remain uncertain, are in any case a clear indication of the company’s awareness of the increasingly brutal, radical and counterproductive nature of political content on its platforms. While relatively discreet, these measures can also be seen as the death of a long-standing dream: that of creating a genuine online society where users interact, share and debate calmly, complementing in-person public debate. On the contrary, the willingness of the major platforms to restrict these topics massively is above all an admission of failure: politics shared online has led to radicalisation and misinformation, and has even been blamed, sometimes outrageously, for some of the most shocking electoral outcomes of recent years (Facebook in the surprise Brexit victory of 2016, disinformation groups on WhatsApp in Jair Bolsonaro’s 2018 victory in Brazil, or of course the violence and at times surreal reality of the Trump campaigns from 2016 to 2020). While the model of the main social networking platforms may be more at fault than the actual ‘digitalization’ of democracy, it must be said that the power of the GAFAMs makes them the most visible and influential sites of public debate, and therefore the main drivers of the worrying trends currently at work.
Regulators in most democracies are already urging digital players to better control terrorist or criminal content published on their platforms. This was one of Theresa May’s major battles in 2017 after the London attacks, an international effort led primarily with Emmanuel Macron and then with Jacinda Ardern following the Christchurch attacks. The question today is whether public authorities in the United States and Europe are willing to go further: to demand or impose strict controls that minimize the role of politics online (in the broad sense), now associated with violence, radicalization, campaign lies and the latent collapse of liberal democracy. Doing so would amount to confirming the opposite of the original promise: that the advent of a digital political space on an international scale stifles expression in a field already exhausted by its presence.
In any case, the conclusion drawn by Facebook calls for further reflection. It should prompt us all to consider new strategies to free democratic political debate from the trap that digitalization has set for it over the past 20 years, one that has opened the door to radicalization and heightened social tensions, fuelled by online groups and extremist rhetoric. Otherwise, we increase the risk of letting private players decide for themselves how political debate in democracies should be controlled.
By Guillaume Alévêque, senior consultant at Antidox