Could AI carry out coups next unless stopped now?

Experts fear the misuse of AI to destabilise economies and governments. Industry can’t be trusted to regulate itself.

Correction, 31 May 2023:
Kim Vigilia, vice president for strategy at the UK-based Humanising Autonomy, was incorrectly identified in an earlier version of this article.

Steve Wozniak is no fan of Elon Musk. In February, the Apple co-founder described the Tesla, SpaceX and Twitter owner as a “cult leader” and called him dishonest.

Yet, in late March, the tech titans came together, joining dozens of high-profile academics, researchers and entrepreneurs in calling for a six-month pause in training artificial intelligence systems more powerful than GPT-4, the latest model behind ChatGPT, the chatbot that has taken the world by storm.

Their letter, penned by the United States-based Future of Life Institute, said the current rate of AI progress had become a “dangerous race to ever-larger unpredictable black-box models” with “emergent capabilities”, and that development should be “refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal”.

ChatGPT, released in November 2022 by research lab OpenAI, employs what is known as generative artificial intelligence – which allows it to mimic humans in producing and analysing text, answering questions, taking tests and performing other language and speech-related tasks. Terms once confined to the world of computer programming, such as “large language model” and “natural language processing”, have become commonplace in a growing debate around ChatGPT and its many AI-based rivals, such as Google’s Bard or the image-generating tools of Stability AI.

But so have stories of unhinged answers from chatbots and AI-generated “deepfake” images and videos – like the Pope sporting a large white puffer jacket, Ukrainian President Volodymyr Zelenskyy appearing to surrender to Russian forces and, last week, an apparent explosion at the Pentagon.

Meanwhile, AI is already shaking up industries from the arts to human resources. Goldman Sachs warned in March that generative AI could wipe out as many as 300 million jobs in the coming years. Other research suggests that teachers could be among the most affected.

Then there are the more nightmarish scenarios of a world where humans lose control of AI – a common theme in science fiction that suddenly seems not quite so implausible. Regulators and legislators the world over are watching closely. And sceptics gained a powerful ally when computer scientist and psychologist Geoffrey Hinton, considered one of the pioneers of AI, left his job at Google to speak publicly about the “dangers of AI” and how it could become capable of outsmarting humans.

So, should we pause or stop AI before it’s too late?

The short answer: An industry-wide pause is unlikely. In May, OpenAI CEO Sam Altman told a US Senate hearing that the company would not develop a new version of ChatGPT for six months and has used his new mainstream fame to call for more regulation. But competitors could still be pushing forward with their work. For now, say experts, the biggest fear involves the misuse of AI to destabilise economies and governments. The risks extend beyond generative AI, and experts say it is governments – not the market – that must set guardrails before the technology goes rogue.

A delivery drone developed by Meituan is displayed at the World Artificial Intelligence Conference [WAIC] in Shanghai, China, on July 8, 2021 [File: Yilei Sun/Reuters]

Science fiction no more

At the moment, much of the technology on offer revolves around what the European Parliamentary Research Service calls “artificial narrow intelligence” or “weak AI”. This includes “image and speech recognition systems … trained on well-labelled datasets to perform specific tasks and operate within a predefined environment”.

While this form of AI cannot in itself escape human control, it has its own problems. It reflects the biases of its designers and of the data it is trained on, and some of this technology has been used for nefarious purposes, such as surveillance through facial recognition software.

On the positive side, this AI also includes things like digital voice assistants and the tech behind self-driving cars.

Generative AI like GPT-4 is also technically “weak AI”, but even OpenAI admits it does not fully understand how the chatbot works, raising some troubling questions and speculation. GPT-4 has also reportedly shown “sparks of artificial general intelligence”, according to a major study by Microsoft Research released in April, owing to its “core mental capabilities”, the “range of topics on which it has gained expertise” and the “variety of tasks it is able to perform”.

Artificial general intelligence, or strong AI, is still hypothetical – a scenario in which AI could operate autonomously and even surpass human intelligence. Think 2001: A Space Odyssey or Blade Runner.

It is exciting and troubling.

“It’s worth remembering that nuclear weapons, space travel, gene editing, and machines that could converse fluently in English were all once considered science fiction,” Stuart Russell, director of the Center for Human-Compatible AI at the University of California, Berkeley, told Al Jazeera. “But now they’re real.”

ChatGPT employs what is known as generative artificial intelligence, which allows it to mimic humans in producing and analysing text [File: Dado Ruvic/Illustration via Reuters]

‘Take it seriously’

So how far is AI from being able to truly build a mind of its own?

Russell asked Sebastien Bubeck, the lead author of the Microsoft study that claimed GPT-4 had shown signs of artificial general intelligence, whether the chatbot technology could have developed its own “internal goals”. After all, “it is trained to imitate human linguistic behaviour, which is produced by humans who have goals”, he said by email.

“The answer? ‘We have no idea.’”

It would be “irresponsible” to deploy such a powerful technology that “operates according to unknown principles” and “may or may not be pursuing its own internal goals”, Russell said, citing an example from his 2019 book, Human Compatible: AI and the Problem of Control.

In the book, Russell uses a hypothetical scenario to argue that a “future super-intelligent system”, assigned a task with the best of intentions, could still set off a crisis. He imagines a situation in which an AI-based system is asked to find a cure for cancer as quickly as possible – someone dies of cancer roughly every three seconds. Within a few hours, the machine has read all the “biomedical literature and hypothesised millions of potentially effective but previously untested chemical compounds”, Russell said, describing the scenario.

That’s when the danger kicks in.

“Within weeks, it has induced multiple tumours of different kinds in every living human being so as to carry out medical trials of these compounds, this being the fastest way to find a cure,” Russell said. “Oops.”

For Yoshua Bengio, another signatory of the Future of Life letter and the scientific director of the Montreal Institute for Learning Algorithms, these unanswered questions about the true capabilities of AI systems are a reason to pause before advancing beyond systems like GPT-4.

“There’s really not enough research to have a good handle on whether we are okay, or it’s very dangerous – and as a scientist who cares about humanity, if you tell me that there’s a lot of uncertainty about something that could be highly disruptive, I think we should care,” he said. “We should take it seriously.”

Tesla CEO Elon Musk attends the World Artificial Intelligence Conference [WAIC] in Shanghai, China, on August 29, 2019 [File: Aly Song/Reuters]

Or is it just maths?

John Buyers, head of the AI and machine learning client team at the British law firm Osborne Clark, sees things slightly differently.

“We are not yet at a stage where AI is developing consciousness. It is very much a technological tool,” he told Al Jazeera. “It is exponentially becoming more and more capable, but at the end of the day, it is still actually a very clever piece of maths.”

Current AI systems, he said, are essentially statistical engines that are very good at correlating patterns. ChatGPT, for instance, works out the statistically most probable text to follow whatever a user types into its interface.

But “that’s not engaging you in sentient conversation”, Buyers said. “I think we’re always in the development cycle before we can get to kind of the level of conscious artificial intelligence.”
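To make that “statistical engine” idea concrete, here is a minimal sketch in Python. It is not how ChatGPT itself is built, which OpenAI does not disclose; it is an illustration that assumes the open-source transformers and torch libraries and uses the small, publicly available GPT-2 model as a stand-in. It prints the handful of words the model considers most probable to come next after a short prompt.

# A minimal sketch of the "clever piece of maths" Buyers describes:
# a language model assigns a probability to every possible next token.
# GPT-2 is used here as a stand-in; ChatGPT's own model is not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# Convert the scores at the final position into probabilities for the next token
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

Run on a prompt like this, the model tends to rank a continuation such as “ Paris” near the top of the list. Chatbots are built by repeating that next-token choice over and over at enormous scale, rather than by reasoning the way a person does.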

Still, Buyers is concerned about the impact of AI and agrees on the need to begin regulating it.

He also believes that decisions about the future of AI should not be left up to the tech industry alone.

“It’s certainly not up to Elon Musk to decide whether or not there is a pause on the technology,” he said. “On the one hand, he’s telling people that there should be a pause, and on the other, he’s rolling out his investments in various AI businesses, and he’s obviously deploying fleets of cars that have artificial intelligence in them – it’s a bit of a hypocritical stance.”

So, if not industry, then who should regulate AI?

Buyers is clear about the answer.

“It’s up to nation states and societies to decide how these technologies should be tempered – not companies,” he said. “They should not be rolled out on the basis of market forces alone.”

OpenAI CEO Sam Altman testifies before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing, titled Oversight of AI: Rules for Artificial Intelligence, on Capitol Hill in Washington, DC, May 16, 2023 [Elizabeth Frantz/Reuters]

Where do regulators stand?

At the moment, state regulators are very much playing catch-up and debating how specific legislation should be, given the breakneck pace of change.

China is leading the way, drawing up near-bespoke regulations for each new advance. Last year, the Cyberspace Administration of China moved forward with a so-called “algorithm registry”, followed at the start of 2023 by rules on deep synthesis, or “deepfake”, technology.

In April, China released new draft rules – its most sweeping yet – to regulate generative AI products, requiring them to comply with the “core values of socialism” and making tech companies responsible for any mishaps. Commentators, however, said these rules may be difficult to enforce.

The European Union, meanwhile, has taken a much broader approach that builds on its already strict personal data protection laws. Its proposed Artificial Intelligence Act aims to classify AI by perceived risk, dividing the technology into three categories: acceptable; high-risk, and therefore subject to regulation; and so dangerous that it must be banned.

Canada and Brazil appear to be moving in a similar direction to the EU, with a push to regulate the acceptable use of AI. In the US, the federal government appears to be moving more slowly, but individual states such as Illinois, Colorado, California and Virginia have taken the initiative to regulate certain kinds of AI, like facial recognition and biometric data management.

Experts believe that the rest of the world will be watching these early attempts at building regulations for AI to see which approach appears to work best. That, in turn, could determine what other nations do.

Images created by Bellingcat founder Eliot Higgins with the use of artificial intelligence show a fictitious skirmish between Donald Trump and New York City police officers, March 23, 2023. The detailed, sensational images, which are not real, were produced using a sophisticated and widely accessible image generator [J David Ake/AP Photo]

The threat beyond AI – from humans

But some analysts believe that the real danger is not AI technology itself but the people and the companies who wield it – and how they use it.

Kim Vigilia, vice president for strategy at the UK-based Humanising Autonomy, a company that aims to take a “human-centric” approach to developing AI software, said that boundaries need to be set on the technology’s use now.

“The AI predominantly out there right now – including some generative AI – right now, it’s a tool. But it’s a really powerful tool. It’s getting more powerful,” she told Al Jazeera. “So the people who are using it just have to have more control over the boundaries.”

She said it is time to be “concerned about how companies are using it – how much automation … how much of their decision-making is made by a machine”.

Other experts share those concerns – even if they differ on current AI capability.

Berkeley’s Russell told Al Jazeera that individual private interests should come second to greater ethical guidelines. Technology should be made to suit legislation and not vice versa.

“In my view, regulators’ primary concern should be protecting individuals and society. If AI system developers cannot meet the necessary criteria for safety and predictability, that’s their problem, not ours,” he said.

Much of that concern stems from the fact that, even though current AI is not sentient, it has passed the Turing test – meaning it can fool a human into believing they are talking to another human.

Even without AI, “we already have trolls and social media and cyber-attacks through email and other means”, pointed out Bengio at the University of Montreal. “But they’re humans that are behind the keyboard, like trolls, and the size of these troll armies can’t be that big.”

By contrast, “an AI-driven system could be replicated on a huge scale,” he said. “I’m in particular concerned about destabilising democracy.”

In late March, Eliot Higgins, founder of the investigative journalism group Bellingcat, shared an image showing former US President Donald Trump falling to the ground while being arrested in New York. It was a deepfake image that Higgins had created for fun. But it went viral, raising concerns about the power of AI to mislead millions of people in a volatile political environment that has already seen violent episodes like the January 6, 2021, riot at the US Capitol by Trump supporters.

And things could get a lot worse, experts caution. AI could make scandals like the one involving Cambridge Analytica, the consulting firm that harvested US voter data from social media ahead of the 2016 election, look quaint.

Unlike in most other industries, the public is largely unprotected from potential AI harm beyond some privacy-related standards, Bengio said.

“There’s an urgency to set in place a lot more governance frameworks than we currently have – which is essentially nothing,” he said.

It’s a race against time. And it’s a race against machines and technology that – for all their other limitations – are exceptionally good at one thing: speed.

Source: Al Jazeera