The abrupt dismissal of OpenAI chief executive Sam Altman sent shockwaves through the world of artificial intelligence. But after Greg Brockman, the company’s president, quit in solidarity with Altman, and more than 700 of its 770 employees threatened to do the same if Altman was not reinstated, it now appears Altman will return.
Instead, the OpenAI board, which claimed that Altman “was not consistently candid in his communications with the board” without elaborating further, is to be revamped with new members. The lack of clarity about the reasons behind the split fuelled considerable speculation, much of it focussed on ideological or philosophical differences about the future of artificial intelligence (AI).
Altman is known for pushing the AI industry to move quickly, releasing new AI-powered tools, such as ChatGPT, that others might have said were not yet ready for public use. It’s been suggested that the OpenAI board members who initially forced Altman out are more cautious; they worry about the potential ‘existential risks’ they believe are associated with powerful AI tools, and generally promote a slower approach to the development of increasingly larger and more capable generative AI models.
Boomers vs Doomers
These two ideological camps are sometimes referred to as ‘AI boomers’ – those who are ‘techno-optimists’, eager to hasten the benefits that they believe advanced AI will bring – and ‘AI doomers’ – those who worry that advanced AI poses potentially catastrophic risks to the survival of humanity.
The most extreme AI boomers decry any efforts to slow down the pace of development. Marc Andreessen, a billionaire venture capitalist and boomer, posted a ‘Techno-Optimist Manifesto’ in October in which he claimed that “social responsibility”, “trust and safety”, “tech ethics”, “risk management”, and “sustainability”, among other terms, represent “a mass demoralisation campaign… against technology and against life”. He also listed “the ivory tower” – in other words, our respected institutions of higher education – and “the precautionary principle”, which emphasises caution when dealing with potentially harmful innovations, as being among the techno-optimist’s “enemies”. You can see why someone might be concerned!
On the flip side, doomers are consumed with anxiety over the possibility that advanced AI might wipe out humankind. Some of OpenAI’s board members are affiliated with the Effective Altruism movement, which funds AI safety and AI alignment research and worries about the potential of this technology to destroy humanity.
The UK Government seems to be in thrall to the doomers. Ian Hogarth, who leads the UK’s Frontier AI Taskforce, formerly known as the Foundation Model Taskforce, penned a viral opinion piece for the Financial Times in April calling for a slow-down in “the race to God-like AI”. Rishi Sunak’s AI Safety Summit, held in early November 2023, was focussed on “existential risk”.
Shared doubts
Despite these differences, both boomers and doomers have one key belief in common: that we are just on the cusp of creating artificial general intelligence (AGI). You’ll be familiar with this thanks to the movies: HAL 9000 from 2001: A Space Odyssey, J.A.R.V.I.S. from the Iron Man and Avengers movies, Samantha from the movie Her, and of course the Terminator from the eponymous film are all examples of what Hollywood thinks AGI might look like. Boomers think it will bring amazing benefits, whereas doomers fear that, without precautions, we may end up with something more apocalyptic.
Oxford philosopher Nick Bostrom explains these fears in the form of his “paperclip maximiser” thought experiment: pretend an otherwise harmless advanced AI technology had been set the goal of making as many paperclips as it could. An AGI of sufficient intelligence might realise that humans could thwart its paperclip maximising by either turning it off or changing its goals. Plus, humans are made of the same things paperclips are made of – atoms! Upon this realisation, the AGI could take over all matter and energy within its reach and kill all humans to prevent itself from being shut off or having its goals changed; as a bonus, our atoms could then be turned into more paperclips. Truly a chilling thought experiment.
AI future is now
There is a third perspective, however – one that I share. I don’t believe we are anywhere close to the creation of AGI, and ‘existential risk’ is largely a bugbear and a distraction. But I still believe in the promise of AI, provided it is developed, governed, and applied responsibly in the areas where it can make a positive impact on human quality of life.
Given the emphasis on risk reduction, it might seem those who share my view have commonalities with the doomers. However, the main difference is that we believe current regulatory and safety efforts are best focussed on the many actual and present harms of AI tools, including, but not limited to, psychological harms suffered by gig workers hired to sanitise generative AI models, social harms caused by the persistent and endemic bias of generative AI models, and environmental harms such as the massive water and carbon footprint of generative AI models. Those who share my point of view were responsible for organising an ‘AI Fringe’ around the AI Safety Summit – focussed on addressing the real impacts of the technology, including on historically underrepresented communities and civil society, and diversifying the voices within the AI ecosystem.
While the ultimate fallout from Altman’s firing and rehiring is not yet clear, powerful actors at OpenAI, Microsoft, and other companies developing advanced AI are keen to direct public focus towards hypothetical ‘existential’ risks or the potential future benefits of their technologies. We would do well to remember that the harms of AI models are not just hypothetical, but all too real.
This article was first published in The Scotsman on 22 November 2023.