Episode 1: Targeted by Algorithm
Artificial intelligence is already here.
There’s a lot of debate and hype about AI, and it has tended to focus on the extreme possibilities of a technology still in its infancy. From self-aware computers and killer robots taking over the world, to humans made redundant by total automation, the brave new world of artificial intelligence is prophesied by some to be a doomed, scary place – no place for people.
For others, AI is ushering in great technological advances for humanity, helping the world communicate, manufacture, trade and innovate faster, longer, better.
Marginalised communities are experimented upon; they are on the front lines of these technological systems, and on the front lines of the harm they cause. They are also on the front lines of rebellion and refusal.
But in between these competing utopian and dystopian visions, AI is allowing new ways of maintaining an old order.
It is being used across public and private spheres to make decisions about the lives of millions of people around the world – and sometimes those decisions can mean life or death.
“Communities, particularly vulnerable communities, children, people of colour, women are often characterised by these systems, in quite misrepresentative ways,” says Safiya Umoja Noble, author of the book, Algorithms of Oppression.
In episode one of The Big Picture: The World According to AI, we chart the evolution of artificial intelligence from its post-World War II origins and dissect the mechanisms by which existing prejudices are built into the very systems that are supposed to be free of human bias.
We shed a harsh light on computerised targeting everywhere from foreign drone warfare to civilian policing. In the UK, we witness the trialling of revolutionary new facial recognition technology by the London Metropolitan Police Service.
We examine how these technologies, which are far from proven, are being sold as new policing solutions to maintain order in some of the world’s biggest cities.
The Big Picture: The World According to AI explores how artificial intelligence is being used today, and what it means to those on its receiving end.
Episode 2: The Bias in the Machine
Artificial intelligence might be a technological revolution unlike any other, transforming our homes, our work, our lives; but for many – the poor, minority groups, the people deemed to be expendable – their picture remains the same.
There are human biases in targeting on the battlefield, there are human biases in who gets loans, there are human biases in who is subject to arrest ... The algorithms have refined the worst of human cognition, rather than the best.
“The way these technologies are being developed is not empowering people, it’s empowering corporations,” says Zeynep Tufekci, from the University of North Carolina.
“They are in the hands of the people who hold the data. And that data is being fed into algorithms that we don’t really get to see or understand that are opaque even to the people who wrote the programme. And they’re being used against us, rather than for us.”
In episode two of The Big Picture: The World According to AI, we examine practices such as predictive policing and predictive sentencing, as well as the power structures and in-built prejudices that could cause more harm than the good their champions promise.
In the United States, we travel to one of the country’s poorest neighbourhoods, Skid Row in Los Angeles, to see first-hand how the Los Angeles Police Department is using algorithmic software to police a majority black community.
And in China, we examine the implications of a social credit scoring system that deploys machine learning technologies – new innovations in surveillance and social control that are reportedly being used against ethnic Uighur communities.
As AI is used to make more and more decisions for and about us, from targeting, to policing, to social welfare, it raises huge questions. What will AI be used for in the future? And who will stand to benefit?