When the machine wakes up…: Rethinking the role of the monitoring professional in the age of AI

On 14 September 2023, Onclusive, one of France’s leading media monitoring companies, announced that it was cutting 217 of its 383 jobs. The reason? To replace the majority of its workforce with artificial intelligence systems. While this announcement may reflect the strategic manoeuvres of an international group rather than a radical upheaval of the profession itself, it should give us pause for thought.

AI, a catalyst for the decline of the monitoring professional?

The event caused a stir because, for the first time, a company openly claimed the decision to replace tasks traditionally performed by humans with AI solutions. It embodies a growing trend in the digital sector, and more specifically in the monitoring and intelligence sector: the increasing reliance on algorithms to perform previously human tasks. Back in March, speculation was already rife around the advent of ChatGPT, with alarming predictions that certain professions would disappear by the end of the decade.

Beyond the headlines interpreting this announcement as the demise of a profession, the company’s customers also expressed reservations. The SIG (Government Information Service), for example, questioned ‘the guarantees it intends to provide to maintain the level of skills, expertise and quality of deliverables’, as reported by Mediapart.

Tag clouds associated with Onclusive’s announcement to replace half of its workforce with AI

The rapid transformation of the digital information landscape poses a growing challenge for traditional monitoring. The volume of information to be processed has increased exponentially – a phenomenon dubbed ‘infobesity’ – making source verification all the more complex. Added to this is the unconventional nature of the data, particularly on social networks, where it often takes the form of informal conversations, making it even more difficult to analyse.

The use of AI, capable of automating the processing of huge volumes of data at unprecedented speed, makes particular sense in this context. It rests on two fundamental technical pillars: machine learning (ML) and deep learning (DL). So AI is learning, and it is learning at a rate no human can match… or perhaps we should think of AI as the new kid on the block: a little overzealous, and not without its faults.

The limits of sociotechnical devices and AI

Publishers of monitoring and intelligence software are vying with each other to perfect their sociotechnical systems for gathering and processing information, and have largely integrated AI-related technologies: image and voice recognition, opinion mining, predictive analysis, and so on. Predictive analysis tools combine AI with historical data to forecast trends and thus anticipate market movements. Natural language processing analyses the sentiment associated with conversations in order to decode their subtleties.
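To make the sentiment-analysis idea concrete, here is a minimal lexicon-based sketch. The tiny lexicon and the function name are purely illustrative assumptions, not any vendor’s API; real opinion-mining tools rely on far larger lexicons or trained models.

```python
# Illustrative lexicon-based sentiment scoring (hypothetical, simplified).
# Each known word carries a polarity weight; the score is the average
# weight of the known words found in the text.

SENTIMENT_LEXICON = {
    "excellent": 2, "love": 2, "good": 1, "useful": 1,
    "bad": -1, "slow": -1, "terrible": -2, "hate": -2,
}

def sentiment_score(text: str) -> float:
    """Average lexicon score of the known words in `text` (0.0 if none match)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this excellent tool"))  # positive score
print(sentiment_score("terrible and slow service"))   # negative score
```

This toy approach already hints at the simplification criticised below: irony, negation (‘not bad’) and context all escape a word-by-word count.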

Infographics showing predictive analysis and sentiment on Talkwalker

Developments linked to AI and automation are not without their critics. Opinion mining, for example, is regularly criticised for its simplification of complex discourse (Lui, Buisson, 2015) and for the difficulty of guaranteeing the representativeness of the data collected, given the uncertainty over users’ exact identities. Similarly, AI predictions should be taken with a grain of salt: far from being an infallible predictive modelling system, a language model is essentially a tool that reproduces patterns in language.

The monitoring professional has not said their last word

These limitations redefine the role of the human within the intelligence process. Although AI can handle many repetitive tasks and continues to evolve, the human remains indispensable at the heart of this mechanism. It is the human who initiates the process, steers it and conducts the underlying strategic thinking. The monitoring professional, drawing on experience, retains control of the final validations and makes the necessary adjustments thanks to their critical faculties. As Albert Einstein put it: ‘Knowledge is acquired through experience, everything else is just information’.

Information, and by extension intelligence itself, is only of value when it is put into action. With this in mind, Davison (2001) states: ‘Information only has value in a decision problem if it results in a change in some action to be taken by a decision maker’. Intelligence is not limited to gathering and analysing information; it involves exchanges with the recipients of that information. While AI can genuinely assist the analyst on a daily basis, it is the analyst who draws the conclusions that guide and reassure the decision-maker in his or her choices.

Imagine for a moment receiving an alert from your system telling you that the number of brand mentions is exploding. You are then faced with a shapeless mass of information: figures, graphs and data of all kinds. At this stage, it is up to the monitoring professional to piece together the events and the thread of the story in order to give it shape, coherence and intelligibility. Producing meaning involves a mediation that only humans are (still) capable of performing.
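The kind of alert described above often boils down to a simple statistical threshold on mention counts. A minimal sketch, assuming a z-score rule over recent daily counts (the function name, threshold and data are hypothetical illustrations, not any product’s actual logic):

```python
import statistics

def is_mention_spike(history: list[int], today: int,
                     z_threshold: float = 3.0) -> bool:
    """Flag today's mention count as a spike when it exceeds the
    historical mean by more than `z_threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today > mean  # flat history: any increase stands out
    return (today - mean) / stdev > z_threshold

# Hypothetical daily mention counts for the past week
daily_mentions = [120, 95, 110, 130, 105, 98, 115]
print(is_mention_spike(daily_mentions, 480))  # True: worth investigating
print(is_mention_spike(daily_mentions, 125))  # False: normal variation
```

The system stops here: it can say *that* something is unusual, but explaining *why* – reconstructing the events behind the spike – is precisely the mediation work that remains human.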

Adopting a reflective stance

Looking beyond the alarmist forecasts about AI’s impact on intelligence professions, and beyond the assumption that one must jump on the innovation bandwagon, leads us to question our practice. Every step forward, however uncertain, is not merely a source of anxiety but a gateway to opportunity. It is the combination of technical innovation and the human spirit that gives intelligence work its full value. The point, then, is to question our practice and enrich it by integrating the possibilities offered by AI, with a view to continually producing operational, actionable results for decision-makers. Streamlining processes through AI, putting analysis back at the heart of intelligence work and emphasising the human contribution should allow us to view the future of the intelligence specialist in a more optimistic light.

By Bertille Chalut