Technology is the new border enforcer, and it discriminates

Tech solutions have not made border control more objective or humane, but rather more dangerous.

An Austrian police special forces pilot tests a drone in Dadia village, near the Greek-Turkish border on March 11, 2020 [File: AP/Giannis Papanikos]

Across the globe, an unprecedented number of people are on the move due to conflict, instability, environmental disasters, and poverty. As a result, many countries have started exploring technological solutions for border enforcement, decision-making, and data collection and processing.

From drones patrolling the Mediterranean Sea to Big Data projects predicting people’s movement to automated decision-making in immigration applications, governments are justifying these innovations as necessary to maintain border security. However, what they often fail to acknowledge is that these high-risk technological experiments exacerbate systemic racism and discrimination.

On November 10, the United Nations Special Rapporteur on racism released a critical new report on racial and xenophobic discrimination, emerging digital technologies, and immigration enforcement. Supported by an investigation by European Digital Rights (EDRi) and other researchers, the report shows the far-reaching ramifications of technological experiments on marginalised communities crossing borders.

Despite the popular perception that technology is objective, and perhaps less brutal and more neutral than humans, its use in border policing deepens discrimination and leads to tragic loss of life.

As Adissu, a young Eritrean man living in Brussels without papers, told us in an interview in July: “We are Black and border guards hate us. Their computers hate us too.”

A whole host of actors operate in the dizzying panopticon of technological development, obscuring responsibility and liability, exacerbating racism and structural violence, and obfuscating meaningful mechanisms of redress. These negative impacts are disproportionately felt by marginalised and under-resourced communities, who already lack or are denied access to robust human rights protections and the resources with which to defend themselves.

EDRi’s research in Greece and conversations with people on the move revealed that certain places serve as testing grounds for new technologies, places where regulation is limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of humanity.

This techno-solutionism is coupled with increasing criminalisation of migrants crossing borders and dangerous far-right narratives stoking anti-migrant sentiments across the globe.

More and more, violent uses of technology push policing beyond actual border demarcations and reinforce border militarisation. These policies have resulted in growing discrimination, brutal mistreatment and even death along borders: dangerous pushbacks to Libya, drownings in the Mediterranean, and the cruel detention and separation of children from their families at the US-Mexico border. Facial recognition technologies, which are supposed to be less invasive, also have serious negative effects. They ultimately perpetuate systemic racism by aiding in the over-policing of racialised communities.

Many of these malpractices have been facilitated by private companies like Palantir, which has provided critical data infrastructure to build profiles of migrant families and their children in the US, assisting with their detention and deportation. Countries have allocated significant funds to finance these operations, with Big Tech and private security companies making significant profits from lucrative government contracts.

There is little government regulation of the use of border technologies, and decisions around the use and functionality of tech tools at borders often occur without consultation with border communities or the consent of affected groups. As a result, what is “acceptable” is increasingly determined by the private companies that profit from the abuse of, and data extraction from, people on the move.

To change this, we need a fundamental shift in the regulation of the use of technology in the sphere of migration. At the Migration and Technology Monitor, a new collective of community groups, journalists, filmmakers, and academics created to monitor the use of border and immigration enforcement technologies, we call on states and international organisations to halt the use of automated migration management technologies until thorough, independent, and impartial human rights impact assessments are concluded.

Systemic harms must be at the centre of discussion, fundamental rights have to be strictly upheld, and affected communities and civil society must drive the conversation around the development of technology in the migration and border space.

The tech community – policymakers, developers, and critics alike – must also push the conversation beyond reform and towards abolition of technologies that hurt people, destroy communities, separate families, and exacerbate the deep structural violence continually felt by Black, Indigenous, racialised, LGBTQI+, disabled, and migrant communities.

As Kaleb, an Ethiopian man in his thirties who is seeking asylum in the UK, said in an interview with us, technology is increasingly reducing people to “a piece of meat without a life, just fingerprints and eye scans.”

People on the move like Kaleb – an already marginalised community – are routinely having their rights violated. Until we can understand and mitigate these real harms, there should be a moratorium on the development and deployment of border technologies. These are real people who should not be reduced to data points and eye scans.

The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.