“Do algorithms cause content moderation failures on social networks?” by Amélie Gillot

On January 22, a tweet by the feminist and anti-racist activist @Mélusine_2 was sanctioned by Twitter a few hours after its publication. The subject of the offending tweet? A question highlighting a burning societal problem, at the time of the #MeToo, #MeTooIncest and #MeTooGay movements: “How can we make men stop raping?” Some users complained to Twitter, claiming that the publication unfairly labeled all men as rapists. The reports were then processed and validated by the platform’s moderation algorithms. The result: @Mélusine_2 saw her account suspended for “hateful conduct,” with the possibility of reactivating it if, and only if, she deleted her post.

Suspending @Mélusine_2’s account: the tip of the iceberg

This sanction, which was later extended to other feminist accounts that retweeted the information, prompted numerous reactions online, most of them supporting the activist and questioning Twitter about the inner workings of its algorithms. Between January 22 and 26, @Mélusine_2’s account was mentioned more than 24,000 times, while the hashtag #HowToMakeMenStopHarming went viral on January 23 and gathered almost 5,000 mentions in four days.

Among these reactions were those of feminist accounts, including that of Caroline de Haas, of the collective Nous toutes. But the movement went beyond feminist accounts: one tweet out of three mentioning the hashtag was published by… a man. Faced with this groundswell, major media outlets such as France Inter and The Huffington Post were quick to cover the story, while Mélusine continued to make her voice heard, thanks in large part to an article in Libération on January 29.

Twitter has since apologized, but one question remains, more than ever, at the heart of the debate: how effective can we really expect social network moderation algorithms to be? Should we give all the power to these software tools, which sometimes lack context? Because what happened to Mélusine is far from anecdotal: all platforms and all spheres of expression (and not only activists and politicians) are affected.

“An algorithm is really just an opinion integrated into programs”

The mechanics of these algorithms still clearly have shortcomings, starting with the understanding of language, which comes so naturally to human beings and yet is so complex to build into technological tools. Distinguishing an assertion from a denunciation, identifying emotion or sarcasm, analyzing the context surrounding sensitive keywords such as “rape”: these are more than difficult tasks for an algorithm. All the more so in the face of societal controversies that bring to light the expression of opinions that are, to say the least, divergent but nonetheless legitimate.
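To make the limitation concrete, here is a deliberately naive, purely illustrative sketch of keyword-based flagging in Python. It is not Twitter’s actual system (whose models are not public); it simply shows why matching on sensitive words, without any notion of intent or context, treats a denunciation and a threat identically.

```python
# Illustrative sketch only: a deliberately naive keyword-based flagger.
# Real platform moderation pipelines are far more sophisticated and not public;
# the point is that keyword matching alone cannot see intent or context.

SENSITIVE_KEYWORDS = {"rape", "raping", "kill", "hate"}

def naive_flag(text: str) -> bool:
    """Flag any text containing a sensitive keyword, regardless of context."""
    words = {w.strip(".,?!:;\"'").lower() for w in text.split()}
    return bool(words & SENSITIVE_KEYWORDS)

# A denunciatory question and an actual threat are treated the same way:
print(naive_flag("How can we make men stop raping?"))      # True  -> flagged
print(naive_flag("I am going to rape you"))                 # True  -> flagged
print(naive_flag("Support survivors of sexual violence"))   # False -> ignored
```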

Let’s not forget that a platform’s moderation decisions are often based on member reports. The feeling that all men were being stereotyped and categorized as rapists is why some users took issue with @Mélusine_2’s tweet. So, who should prevail? A denunciatory tweet or the disapproval of a collective? Faced with these societal complexities, how can we expect fair and irreproachable decision-making from simple algorithms?

It would be wrong to point the finger only at the technological flaws without seeing the human flaws behind them. Mathematician Cathy O’Neil, a specialist in the field, explains: “an algorithm is really just an opinion integrated into programs.” Because yes, behind every piece of software lies an enormous amount of human work, in design as well as in moderation, and that work is undoubtedly biased. The weight of these biases must be taken into account from the moment the tools are created. Last March, the Montaigne Institute published a report reminding us that in France, “most developers have been trained in applied mathematics, statistics and computer science, without any specialized training in social sciences.” The complexities of societal issues are, not surprisingly, often forgotten in the design of these algorithms.

Humans are thus at the heart of the colossal work of moderation, feeding the algorithms with their decisions. Facebook has a total of 5,000 moderators, each one evaluating about 15,000 images per day. The thousands of independent workers who connect each day to the “Mechanical Turk” marketplace created by Amazon – also called “MTurk” – are also said to contribute to adjusting these algorithms. Their role? Performing micro-tasks such as reading tweets from women who say they have been harassed or flagging shocking images[1]. But is this human intervention really enough? In the Mélusine case, Twitter itself acknowledged that “while we strive for consistency in our systems, sometimes we lack context, generally provided by our teams, leading us to make mistakes.”
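As a rough illustration of what “feeding algorithms with their decisions” can mean in practice, the sketch below shows one common human-in-the-loop pattern: moderator verdicts on reported posts are converted into labeled examples that can later be used to retrain a classifier. This is a generic, hypothetical pipeline, not the actual workflow of Twitter, Facebook or MTurk.

```python
# Hypothetical sketch of a human-in-the-loop labeling step, not any platform's
# real pipeline: moderator decisions become training labels for a classifier.

from dataclasses import dataclass

@dataclass
class Report:
    text: str
    human_decision: str  # "remove" or "keep", as judged by a human moderator

def build_training_set(reports: list[Report]) -> list[tuple[str, int]]:
    """Convert moderator decisions into (text, label) pairs for retraining."""
    return [(r.text, 1 if r.human_decision == "remove" else 0) for r in reports]

reviewed = [
    Report("How can we make men stop raping?", "keep"),  # judged a denunciation
    Report("I am going to hurt you", "remove"),          # judged a threat
]
print(build_training_set(reviewed))
# [('How can we make men stop raping?', 0), ('I am going to hurt you', 1)]
```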

In other words, it is time to address the critical question of algorithmic control, and more broadly the control of information on social networks, which is becoming an increasingly important issue in our highly connected societies. One of the first avenues currently being explored is to increase the number and cultural diversity of human moderators. This summer, Facebook committed to having half of its staff composed of visible minorities by 2023. But will this be enough to guarantee diversity of opinion? Another area where we still have a long way to go is transparency, which requires platforms to genuinely educate people about their algorithms. Finally, governments also have a role to play, a role already embodied in the current legislative debates in Europe and across the Atlantic on the complex subjects of online hate and fake news, where algorithms have already shown their limits. In France, an amendment integrated into the draft “law against separatism” puts the enormous project of regulating social networks back on the table this year.


[1] “The World According to Amazon”, Benoît Berthelot, 2019


By Amélie Gillot, senior consultant at Antidox