Artificial intelligence reinforces power and privilege

Computational surveillance empowers governments and corporations and diminishes accountability.

A man looks at a demonstration of human motion analysis software during the Security China 2018 exhibition on public safety and security in Beijing on October 24, 2018 [File: Reuters/Thomas Peter]

What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.

Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse them and use them to predict what we will do.
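
To make that loop concrete, here is a minimal sketch of the pattern the paragraph describes – collect behavioural data, fit a model, score a person. Everything in it is invented for illustration: the feature names, the numbers and the use of scikit-learn are assumptions, not any company’s actual system, and real deployments operate at vastly greater scale.

```python
# Hypothetical sketch of the collect -> analyse -> predict loop.
# All feature names and data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# "Masses of our data": each row describes one person, e.g.
# [late_payments, job_changes, hours_of_late_night_app_use]
X = [
    [0, 1, 0.5],
    [3, 4, 2.0],
    [1, 0, 0.2],
    [5, 2, 3.1],
]
# The outcome the system is trained to predict, e.g. "defaulted before"
y = [0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# A new person is scored without ever being asked a question
new_person = [[2, 3, 1.5]]
risk = model.predict_proba(new_person)[0][1]
print(f"Predicted risk score: {risk:.2f}")
```

The point of the sketch is not the mathematics but the asymmetry: the person being scored never sees the features, the training data or the model.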

Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern?

Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the people who coined the term “artificial intelligence” in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:

“At one point, [Minsky] said to me, ‘Look, whatever you think about this, just play along, because it gets us funding, this’ll be great.’ And it’s true, you know … in those days, the military was the principal source of funding for computer science research.

And if you went into the funders and you said, ‘We’re going to make these machines smarter than people some day and whoever isn’t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'”

But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If “computer says no”, as the old joke goes, to whom do you complain?

We’d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it’s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we’re ceding more and more power to algorithms, or rather – to the people behind them.

Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn’t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people – and, in particular, grading for various kinds of risk.

As a human rights lawyer doing “war on terror” cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney’s “one percent doctrine”? He said that any risk – even one percent – of a terror attack would, in the post-9/11 world, be treated as a certainty.

That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration – and the shift to a machine learning-driven process in national security, too.

During President Barack Obama’s drone wars, suspicion didn’t even need to be personal – in a “signature strike”, it could be a nameless profile generated by an algorithm analysing where you went and whom you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: “We kill people based on metadata,” he said.

Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk – that is, zero risk for the class of people Lanier describes as “closest to the biggest computer” – is achievable and desirable. This is what is crucial for us all to understand: AI isn’t just about Google and Facebook targeting you with advertisements. It’s about risk.

The police in Los Angeles believed it was possible to use machine learning to predict crime. London’s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.
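As a purely illustrative sketch of what such grading can reduce to – the weights, features and threshold below are invented, not any credit agency’s or insurer’s actual model – a “risk grade” is often just a weighted sum compared against a cut-off, with the “no” arriving unexplained:

```python
# Hypothetical risk grade: invented weights and threshold,
# not any real lender's or insurer's model.
WEIGHTS = {
    "missed_payments":  0.5,
    "postcode_risk":    0.3,   # proxy variables like this can smuggle in bias
    "job_tenure_years": -0.2,  # longer tenure lowers the score
}
CUTOFF = 1.0

def risk_score(person: dict) -> float:
    """Weighted sum of features -- the whole 'grade' in one number."""
    return sum(WEIGHTS[k] * person.get(k, 0.0) for k in WEIGHTS)

def decide(person: dict) -> str:
    # The applicant sees only the outcome, never the weights or the cut-off.
    return "declined" if risk_score(person) > CUTOFF else "approved"

applicant = {"missed_payments": 2, "postcode_risk": 1.5, "job_tenure_years": 1}
print(decide(applicant))  # "computer says no" -- with no reasons given
```

Even this toy version shows the problem: a feature like the invented “postcode_risk” lets historical inequality flow straight into the score, and the person graded has no way to contest weights they cannot see.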

It used to be common to talk about “the digital divide”. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun those of poorer citizens without computers and the internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child – and it drives newer ones, like Digital ID, which aims to give everyone on Earth a unique identity in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.

But it also comes at a cost. Today, a new digital divide is opening: one between the knowers and the known; between the data miners and optimisers, who optimise, of course, according to their own values, and the optimised; between the surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world’s citizens.

AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.

This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.

When I apply for a job in law or journalism, a panel of humans interviews me; not an AI using “expression analysis” as I would experience applying for a job in a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment – and everyone else, who gets number-crunched, tagged, and sorted.

Unless we head off what Shoshana Zuboff calls “the substitution of computation for politics” – where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation – we risk losing control over our values.

The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?

Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big – like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?

Everyone has a stake in these questions. Friendly panels and hand-picked corporate “AI ethics boards” won’t cut it. Only by opening up these systems to critical, independent enquiry – and increasing the power of everyone to participate in them – will we build a just future for all.

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.

