OpenAI: a textbook case of the limits of governance in the face of serious risks

It became clear on the morning of Thursday 23 November that Sam Altman’s ouster as CEO of OpenAI was due to a disagreement over a major breakthrough in artificial intelligence. The Q* (pronounced Q-Star) project, mentioned in OpenAI’s internal communications, is seen as a potential breakthrough towards artificial general intelligence (AGI): a model capable of surpassing human performance in most tasks and not, as at present, in just one, such as chess, Go or predicting protein folding. To get there, the model’s learning capabilities would need to be made considerably more powerful than those of GPT-4 Turbo, whose performance is nonetheless dazzling.

Q* is only at a preliminary stage: it solves maths problems at primary-school level. But the toddler is arousing the greatest hopes among researchers. And equally great fears. So much so that some of them saw fit to write to the Board of Directors, and that the Chief Scientific Officer initially supported the dismissal of the CEO, accused of having ‘lacked transparency’ about Q*.

The sole statutory mission of OpenAI’s Board of Directors is to ensure that the models developed do not pose a risk to humanity. Directors may not hold shares in the underlying commercial companies.

The dismissal of chairman Greg Brockman and chief executive Sam Altman was initially decided by the other four directors: Chief Scientific Officer Ilya Sutskever, one of the fathers of neural networks and of everything that followed, right up to ChatGPT; the entrepreneurs Tasha McCauley and Adam D’Angelo; and finally Helen Toner, who had just left Georgetown University for the Centre for AI Governance, newly created in Oxford by the UK government.

Fears about artificial intelligence are more pronounced among professionals than among the general public. They fall into two broad categories: black boxes and the ‘FOOM’. No human can retrace the workings of a neural network and redo the calculations that ChatGPT uses to answer a question. And once a model (most likely an AGI) becomes capable of training others, the exponential growth in performance will see one of its last limits disappear: human control. This intelligence explosion, nicknamed ‘FOOM’, was anticipated as early as 1965 by Irving John Good, one of Alan Turing’s colleagues at Bletchley Park.

The list of potentially harmful consequences of such a phenomenon is limited only by human imagination. We have just devoted a whole week of Experts to reviewing the state of the art in scientific research on the subject. But one very simple thing emerges.

In exercising its right to dismiss Sam Altman, the Board was convinced that it was fulfilling its statutory mission.

Denial of reality

It had reckoned without the mobilisation of the staff behind their CEO. More than 700 of the 770 employees signed a petition calling for the reinstatement of the man who would enable them to sell their shares on the basis of an enterprise value of 86 billion dollars.

The board then risked finding itself at the head of an empty shell. Ilya Sutskever and then Adam D’Angelo both went to Canossa, surrendering to the irresistible force of the man they had dismissed a few days earlier. Whatever their reservations, the other two directors had no choice but to submit.

In his future reform of governance, the now all-powerful Sam Altman may – or may not – show that he too believes in the risks of AI. The new board includes two old hands: Bret Taylor and Larry Summers. Bret Taylor is best known for co-creating Google Maps and serving as CTO of Facebook; he later led Salesforce as co-CEO. Larry Summers served as US Treasury Secretary under President Bill Clinton and as President of Harvard University; he was also Chief Economist of the World Bank and directed the National Economic Council under Barack Obama.

In France, we often tend to underestimate the risks of AI – a sign that we are lagging behind. But in a country where public opinion expects the State to step in where private governance fails, it is urgent that we move beyond uncritical support for entrepreneurs and address, at the highest level, the profoundly political dimension of artificial intelligence.

By Jean Rognetta and Maurice de Rambuteau

Chronology

Friday 17 November: Sam Altman, CEO of OpenAI, is abruptly dismissed by the board of directors, who announce that Mira Murati will become interim CEO.

Saturday 18 November: The day after the dismissal, major OpenAI investors, including Microsoft, put pressure on the board to reinstate Altman. An agreement is reported for his return and that of Greg Brockman, but the Saturday evening deadline passes without confirmation.

Sunday 19 November: Altman returns to the OpenAI offices to negotiate his reinstatement. However, OpenAI announces the appointment of Emmett Shear as the new interim CEO, leaving the outcome uncertain.

Monday 20 November: Microsoft CEO Satya Nadella announces that Altman and Brockman will join Microsoft to lead a new AI research team. At the same time, hundreds of OpenAI employees threaten to leave the company following Altman’s dismissal.

Tuesday 21 November / Wednesday 22 November: OpenAI announces early on Wednesday morning the official return of Sam Altman as CEO, with a new initial board of directors comprising Bret Taylor, Larry Summers and Adam D’Angelo.

Thursday 23 November: It is revealed that shortly before Sam Altman’s dismissal, OpenAI researchers sent a letter to the Board of Directors, warning of a major discovery posing a risk to humanity, which Sam Altman had allegedly kept from the Board.