Ending the collective delusion over AI

Big Data, Open Data, artificial intelligence, machine learning, deep learning… the world of communication appears to have been heavily influenced by the Minority Report effect in its appreciation of these current issues. Although this fascination for such intangible forms of progress cannot simply be ignored, the situation becomes somewhat worrying when it starts to take the place of debate based on data-related issues.

“Can you use your methodologies to predict a future crisis?”, “Do you have an algorithm that automatically detects fake news?”, and “Is there any way of collecting all historical data about me?”. Although such questions doubtless have a certain value, and demonstrate a desire on the part of customers to stay abreast of the key trends that are shaping the digital ecosystem as a whole, they are nonetheless indicative of a clear disconnect between the inherent potential of data analysis and the effectiveness of currently available methodologies.
Questions of this kind logically lead agencies to a virtually inescapable binary choice. Should we place the emphasis on transparency – at the risk of showing a kind of weakness – or on a more or less well-substantiated rhetoric which may well sustain a convincing illusion for a short while? The latter approach, having previously been the sole preserve of consultants, consultancy heads and other agency managers, is now gradually becoming the “original sin” of the (to be fair, still nascent) industry, built on the promise of a symbiosis between the worlds of communication and data analysis. The Minority Report effect (a reference to a short story by the science fiction writer Philip K. Dick adapted for the silver screen by Steven Spielberg in 2002) is the logical result of the cumulative effect of these related opportunistic discourses.
This situation – which has a strong resonance with the political sociology concept of path dependence – is a perverse one, as a result of its negative externalities. Although the causes of the phenomenon are easy enough to understand, its symptoms deserve equal consideration, as they foster relationships hardly distinguishable from addiction and its consequences. How can we reconnect with the real (and admittedly somewhat prosaic and trivial) world, in an era when the application-fuelled one-upmanship of artificial intelligence is being driven to fantastical extremes in an onslaught of LinkedIn posts? Many stakeholders, in their desperate pursuit of likes and leads, are currently burning their boats, and – as a result – leading the communication sector merrily down a dead-end street.

There’s no such thing as artificial intelligence

This perpetual post-truth assertion, presented as a means of existence, survival and conquest, can be made only because – in contrast to the structured sectors of medicine, pharmaceuticals and law – there is no authority or council to oversee the vast sector of communication. Would the General Medical Council happily grant a licence to a practitioner who promised his patients eternal life? Of course not; sanctions would follow in short order, and this is how we separate the wheat from the chaff – and ultimately ensure that the legitimacy of an entire industry is not cast to the winds.

Although we are not necessarily calling for the creation of such a body, it remains our view that the issue of transparency in terms of discourse, methodology and process is now more topical than ever. There’s no such thing as artificial intelligence. Or at least, not in the form it tends to take in some narratives. More often than not, the promise of AI is replaced by a more prosaic reality that researchers in the mould of Antonio Casilli and Dominique Cardon refer to as “digital labor”. And this collective delusion over AI has been neatly deconstructed in a maxim adopted by many network developers: “If it’s written in Python, it’s probably machine learning. If it’s written in PowerPoint, it’s probably AI.”

One has only to consider the linguistic and image recognition setbacks experienced by Big Tech, through the combination of advanced models and the wholesale outsourcing of repetitive menial tasks, to realise that the supposed omnipotence of AI has more to do with sales rhetoric than actual programming and data analysis reality.

If communication is to move towards advanced hybridisation with data and metadata produced via social networks, the social web and data made available as Open Data, it will need to switch from a paradigm based on myths to one based on methods. The vague, superficial discourse surrounding AI needs to be challenged by conceptual and methodological transparency. We should, for example, point out that a machine learning model requires… human effort, whether in the labelling of corpora by consultants with humanities and social sciences qualifications, the creation of custom dictionaries devoted to a given topic, the methodological biases regarding the parameters to be taken into account when constructing models, or the adjustments required to increase model accuracy. As for the models to be used, two points should be emphasised: they are legion, and none is ever truly perfect. Residual errors between the “train and test” corpora are a reality that cannot be ignored. Pointing out the limitations of such practices does not equate to delegitimising them. In fact, the exact opposite is likely to be true over the next few years.
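The point about residual error between the “train and test” corpora can be made concrete with a minimal, entirely hypothetical sketch (the synthetic data, the simple linear model and all variable names are illustrative assumptions, not anything drawn from the article): even on clean data split into training and test sets, a fitted model retains a non-zero error on both, because the noise in the data bounds how accurate any model can be.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "labelled corpus": a noisy linear relationship (assumption for illustration)
X = rng.uniform(0, 10, 200)
y = 2.0 * X + 1.0 + rng.normal(0, 1.5, 200)

# Split into train and test sets (80/20)
split = 160
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Fit a simple linear model on the training corpus only
slope, intercept = np.polyfit(X_train, y_train, deg=1)

def mse(X_part, y_part):
    """Mean squared error of the fitted line on one data split."""
    pred = slope * X_part + intercept
    return float(np.mean((pred - y_part) ** 2))

train_error = mse(X_train, y_train)
test_error = mse(X_test, y_test)

# Neither error is ever zero: the residual reflects the noise
# in the data, not a defect that more hype can talk away.
print(f"train MSE: {train_error:.2f}")
print(f"test MSE:  {test_error:.2f}")
```

The human effort the paragraph describes sits around this loop, not inside it: someone labelled the data, chose the model family (`deg=1` here) and decided which error metric counts as “accurate enough”.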

By escaping from this Promethean, ethereal vision, we can once again anchor data analysis and its potential in the real world, even if in its virtual form. Of course, such a promise sounds less impressive, but at least it has the clear merit of transparency and honesty. These two characteristics provide a foundation for professional ethics which, if observed, should ensure that its practitioners are easily distinguishable from those who fish in troubled waters.

Xavier Desmaison, CEO of Antidox, Jean-Baptiste Delhomme, Associate director of IDS Partners and Damien Liccia, Associate director of IDS Partners
Source: https://www.maddyness.com/2019/11/18/finir-illusion-collective-ia/