Generative AI and democratic resilience: 2024, a year of electoral tensions

The year 2024 will be characterised by an unprecedented concentration of electoral deadlines across the world, posing a host of challenges for public authorities and digital platforms at a time of information warfare and generative AI.

The year 2024 will be one of challenges. According to US civic technology firm Anchor Change, 83 elections will take place over the course of the year, the largest global electoral concentration in 24 years. They include major contests: the Indian parliamentary elections in May 2024, the European elections next June and, of course, the US presidential election in November. Some are being held in regions that have commanded the world’s attention for months: Taiwan held its presidential election last January, and Ukraine may possibly vote next March.

All these factors, coupled with the multiplication of information warfare campaigns led by states such as Russia, China, Turkey and, on a more modest but very real scale, Azerbaijan (targeting France over its organisation of the 2024 Olympic Games), challenge public authorities and private players alike.

Social networks and the attrition of human resources

Recent years have been marked by Elon Musk’s October 2022 takeover of Twitter, since renamed X. The acquisition coincided with mass redundancies and departures, particularly in the ‘trust and safety’ division responsible for content moderation. These departures raise many questions about the platform’s ability to guarantee a healthy information environment: effective removal of content that is violent or incites violence, detection of information warfare campaigns by state actors, circulation of fraudulent content and deepfakes, and so on.

X therefore continues to represent a major challenge. Competing micro-blogging services Bluesky (founded by Twitter co-founder and former CEO Jack Dorsey) and Threads (Meta) have yet to reach X’s critical mass of users: Threads counts 130 million monthly users worldwide and Bluesky between four and five million, compared with 556 million for X.

Moreover, this attrition of human resources is not limited to X. Meta has made some 20,000 employees redundant since November 2022, while YouTube’s ‘trust and safety’ team was also affected, according to the New York Times*, when parent company Alphabet cut 12,000 jobs in January 2023.

This attrition of human resources at the world’s leading digital agoras is heightening concerns just as generative AI chatbots go live.

AI and misinformation: platforms seek to reassure

The public release of OpenAI’s chatbot ChatGPT, followed by the rival services Copilot (Microsoft) and Bard (Google), has raised new questions about the risks of exposing public opinion to false information and to coordinated manipulation campaigns. During the Slovak parliamentary elections in September 2023, the opposition candidate was targeted by an audio deepfake suggesting that the election would be rigged. As for Operation Doppelganger, the American cybersecurity company Recorded Future considers it ‘likely’ that generative AI is being used to feed content to the web pages of the Russian information apparatus. On a different note, the mass circulation on X of pornographic deepfakes targeting the singer Taylor Swift demonstrated the potential reach of this type of content, even though it was quickly detected.

For their part, the platforms insist on their ability to detect AI-generated content. In its transparency report for April–June 2023, YouTube claims to have removed 93% of videos that violated its terms of use. TikTok puts the figure at 62%, while Meta claims a 90% detection rate**.

Last week, Meta announced that it was working on a system for labelling AI-generated images and videos posted to Facebook, Instagram and Threads, to be rolled out over the coming months. The same day, OpenAI announced a similar mechanism for content generated by its DALL-E image generation service.
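In practice, this kind of labelling rests on provenance metadata embedded in the file, notably the C2PA standard and the IPTC digital-source-type vocabulary. The sketch below is a minimal, illustrative Python check for such a marker, not OpenAI’s or Meta’s actual pipeline; the file name is hypothetical, and a bare string match of this kind proves nothing on its own, since real C2PA verification involves cryptographic signature checks with a dedicated library.

```python
# Illustrative sketch only: look for provenance markers in an image's XMP metadata.
# Markers assumed here: "c2pa" (Coalition for Content Provenance and Authenticity)
# and the IPTC digital-source-type term for AI-generated media. Such metadata is
# trivially stripped, so absence of a marker is not evidence of authenticity.
from PIL import Image  # pip install Pillow

MARKERS = ("c2pa", "trainedalgorithmicmedia")

def has_provenance_marker(path: str) -> bool:
    with Image.open(path) as im:
        xmp = im.info.get("xmp", b"")          # raw XMP packet, if present
    if isinstance(xmp, bytes):
        xmp = xmp.decode("utf-8", errors="ignore")
    return any(marker in xmp.lower() for marker in MARKERS)

if __name__ == "__main__":
    print(has_provenance_marker("image.jpg"))  # hypothetical file name
```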

Betting on the resilience of societies?

Of course, private operators cannot be held solely responsible for guaranteeing balanced access to quality information, transparency and fairness in public debate. Guaranteeing these fundamental rights remains the prerogative of public authorities; the entry into force of the European Digital Services Act is one piece of the puzzle.

Major disinformation campaigns have also shown their limits. The relative failure, in the very first weeks of the invasion of Ukraine, of the information warfare apparatus deployed by Russia was analysed in these columns back in March 2022. And the intense information campaign waged by China to influence Taiwan’s presidential election did not prevent the victory of the pro-independence candidate Lai Ching-te.

Let’s hope, to borrow the expression of OpenAI co-founder Sam Altman, that free societies will have developed enough ‘societal antibodies’ to face the challenges ahead.

* https://www.nytimes.com/2024/01/09/business/media/election-disinformation-2024.html

** Ibid.