The Dangers of Algorithmic Policing

The question of how artificial intelligence should be used by society has become increasingly prominent. One area of particular concern is the use of AI by law enforcement, a practice that is already widespread in the United States and the United Kingdom and continues to develop in Canada. However, experts have pointed out that AI exacerbates the existing problems with policing, leading to more surveillance of marginalized communities and encroaching on our privacy.

Algorithmic, or big data, policing refers to “a range of technological practices used by law enforcement to gather and process surveillance data on people, places, and populations and to forecast where and when crime is likely to occur.”

A 2020 report from the University of Toronto’s Citizen Lab stated that “multiple law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods.”

For example, the RCMP has long used facial recognition in human trafficking cases. The Edmonton Police Service uses software called NeoFace Reveal to compare faces against its mugshot database. The Toronto Police, among many other Canadian police departments, came under fire in 2020 for using facial recognition software without public knowledge. According to the Canadian Centre for Civil Liberties, “facial recognition technology has been deployed by police forces without notice, meaningful consultation, or public oversight and accountability.”

The report identified four types of algorithmic technology used by police forces: automated license plate readers (ALPRs) that detect license plates and use them to retrieve information about the vehicle in question; social media surveillance software that analyzes data from social media platforms; facial recognition technology that detects faces and compares them against a database to retrieve information about people; and social network analysis that uses data to map relationships within a particular social group or organization. As people’s lives increasingly move online, these tools give private companies and law enforcement an enormous amount of personal data to collect and use for surveillance.
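
Of these, facial recognition most clearly shows how such systems work: a face image is reduced to a numerical “embedding” and compared against every record in a database, and anything scoring above a similarity threshold is treated as a candidate match. The sketch below is purely illustrative and does not reflect any vendor’s actual product; the database, record names, embedding size, and threshold are all invented.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A hypothetical "mugshot database": record IDs mapped to precomputed
# embeddings. In a real system a neural network would produce these
# from photographs; here they are random vectors for illustration.
known_faces = {
    "record_001": np.random.rand(128),
    "record_002": np.random.rand(128),
    "record_003": np.random.rand(128),
}

def search(probe: np.ndarray, threshold: float = 0.8):
    """Return every record whose similarity to the probe clears the threshold."""
    hits = [(rid, cosine_similarity(probe, emb)) for rid, emb in known_faces.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: h[1], reverse=True)

# Anything above the (arbitrary) threshold becomes a candidate "match",
# which is why a poorly chosen threshold can sweep in the wrong person.
print(search(np.random.rand(128)))
```

The result is probabilistic rather than an identification: the threshold trades false matches against missed ones, and that trade-off is exactly where the errors discussed below arise.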

Data-driven policing strategies have many advantages for law enforcement. They promise increased efficiency by using algorithmic analysis to predict where crime will occur and decide which areas law enforcement should focus on. They allow law enforcement to easily find information about a person and use that data to predict whether that person is dangerous. Additionally, they’re able to collect data on general mobility patterns and social activities at a much larger scale than older strategies.
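
To see why “predicting where crime will occur” can simply re-project where police have already looked, consider the following deliberately naive sketch. It is not any vendor’s algorithm; the grid cells and incident records are invented, and real products are more elaborate, but many reduce, at their core, to ranking areas by historical records.

```python
from collections import Counter

# Illustrative only: invented (grid_x, grid_y) cells of past recorded incidents.
past_incidents = [
    (2, 3), (2, 3), (2, 3), (2, 3),
    (5, 1), (5, 1),
    (0, 4),
]

def rank_hotspots(incidents, top_k=3):
    """Rank grid cells by how many incidents were recorded there."""
    return Counter(incidents).most_common(top_k)

# Cells with the most *recorded* incidents come first. Because heavily
# patrolled areas generate more records, they keep being flagged,
# which can entrench existing patterns of over-policing.
print(rank_hotspots(past_incidents))
```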

However, algorithmic policing strategies, especially facial recognition, encode bias and are highly fallible. This produces concrete harm for marginalized people, as many critical AI experts have pointed out.

The Montreal Society and Artificial Intelligence Collective (MoSAIC) argues that data-driven policing technologies present an “extreme risk,” as they “can be used for intrusive mass surveillance, particularly when used in live settings.” Dr. Joy Buolamwini, founder of the Algorithmic Justice League and a researcher at MIT, has found that AI services have significant difficulty identifying darker-skinned and female faces.
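
Findings like Buolamwini’s come from auditing a system’s error rates separately for each demographic group rather than reporting a single overall accuracy number. A minimal sketch of that kind of disaggregation, using invented records and group labels purely for illustration, might look like this:

```python
from collections import defaultdict

# Illustrative only: invented records, not real audit data.
# Each record pairs a demographic group with whether the system's
# prediction for that person was correct.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("darker-skinned women", True),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

def error_rate_by_group(records):
    """Share of incorrect predictions within each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# The single overall error rate here (~33%) hides the gap:
# 0% error for one group versus ~67% for the other.
print(error_rate_by_group(results))
```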

Data scientist Kelsey Campbell from Gayta Science has similarly flagged the ways in which AI presents a danger to trans and nonbinary people. According to Campbell, AI is usually programmed around binary conceptions of gender, ignoring the reality of transgender and nonbinary people. It automatically classifies people as either male or female based on arbitrary categories, namely body parts. Transgender or nonbinary people using these technologies may have to identify themselves as male or female, even when neither choice reflects their gender identity. Facial recognition also doesn’t account for a changing appearance, such as a transition, putting transgender people at increased risk of being flagged as dangerous, or even outed.

Even if the technology itself were not biased, it can enable or exacerbate existing biases within police forces by enhancing their capacity for surveillance and profiling. A 2021 report by the Standing Committee on Public Safety and National Security determined that systemic racism in policing “is a real and pressing problem to be urgently addressed.” If police officers already hold deep-seated racial bias, they’re unlikely to question it in the technology they’re using. In the United States, where facial recognition is even more widespread, Black men such as Nijeer Parks, Robert Williams, and Michael Oliver have been falsely arrested for crimes they didn’t commit because facial recognition software misidentified them. Although all three were eventually cleared, the trauma of these false arrests had serious repercussions for their families and communities.

Police have targeted marginalized communities since long before AI was created, but this technology allows them to conduct their operations more efficiently, leading to increased surveillance of marginalized groups. If certain people’s faces are less “legible” to AI, there is an increased risk that they will be flagged as dangerous or mistaken for someone else. And if AI considers someone dangerous, it gives police more justification to stop or arrest them, even with no other reason to suspect them of a crime.

Another concern is the lack of transparency regarding how police departments use this technology and where they acquire it. In January 2020, it was revealed that many Canadian police departments had used facial recognition technology provided by Clearview AI without public knowledge. Clearview AI offers a large database of faces scraped mostly from the internet, without consent from the websites or from the people whose data was taken. By the time the collaboration came to light, Canadian police had already run 3,400 searches using 150 free trial accounts. In February 2021, Canadian authorities declared Clearview AI’s activities illegal, describing them as “mass surveillance.” Palantir, another company that provides technology to the Calgary Police, has also supplied software that ICE used in the United States to plan raids and detain undocumented immigrants, drawing condemnation from Amnesty International.

A further issue with transparency is the lack of consultation with the communities in which these technologies are deployed. For example, in 2021, the SPVM announced that they were installing nine new cameras in public spaces to tackle gun violence. They were subsequently criticized for not adequately consulting the communities affected. Surveilling people without consultation or informed consent takes away their agency in tackling issues within their own communities. Meaningful community consultations must be as accessible as possible, which means providing information in languages spoken by community members, advertising the consultation sessions through different mediums, and using accessible language to describe the technology and its impacts. MoSAIC argues that if a technology cannot be easily explained or interpreted to the communities affected, it is inherently high-risk.

Because the Canadian government currently lacks policies that effectively constrain the use of algorithmic policing technologies, there is also little accountability. At present, there is no AI-specific legislation in Canada to regulate the use of algorithmic policing technologies other than existing privacy and human rights legislation and the Directive on Automated Decision-Making. The Directive states that automated decision-making technologies must pass a risk assessment, and there should be opportunities for affected individuals to provide feedback and seek recourse. However, the Directive currently has no power over technologies developed at the provincial level or by private companies. Without a proper governance framework, it’s nearly impossible for the federal government to regulate the use of these technologies for either public institutions or private companies. And given the speed at which AI is developing, it will only become more difficult for policymakers to keep up.

In order to mitigate the harms of AI in policing, many civil society groups are calling for a moratorium on the use of AI technology by law enforcement until proper regulation can be put in place. When Vermont became the first state to ban facial recognition in 2020, the American Civil Liberties Union of Vermont stated that this ban “sends a clear message that instead of a discriminatory police state, Vermonters want to create communities where everyone can feel safe, regardless of what they look like, where they are from, or where they live.” Across the United States, states and cities have already banned the use of algorithmic and predictive policing technologies, especially facial recognition, by law enforcement. Many of these bans came into effect after the murder of George Floyd in 2020, amid an increased national consciousness about systemic racism and bias in policing. In October 2022, Canada’s House of Commons Standing Committee on Access to Information, Privacy and Ethics recommended a “national pause” on facial recognition technology until a substantive legal framework could be put in place.

The Citizen Lab report provides additional recommendations for the government to mitigate the harms of AI. It calls for a judicial inquiry into the use of police datasets in algorithmic policing technologies, for complete transparency about which technologies are being developed, used, or procured by police departments, and for the government to establish requirements concerning reliability, necessity, and proportionality before a technology can be used. Provincial governments should also implement regulations for algorithmic policing technologies within their own provinces, and police departments should obtain judicial authorization before deploying any of these technologies. Finally, governments and law enforcement should bring in external experts, including people from communities targeted by police violence, to design policies for implementing technologies, developing regulation, and monitoring the effects of algorithmic policing.

While AI as a technology is still developing, there is concrete evidence that it can cause harm, especially in the hands of law enforcement. It’s concerning that institutions with documented problems of systemic racism are allowed to use these powerful technologies with little public oversight. Studies have repeatedly shown that these technologies are highly fallible, especially when dealing with women, transgender people, and people of colour. The past few years have brought increased attention to police violence against marginalized communities, and giving police departments more sophisticated tools to conduct their operations will only exacerbate this violence and discrimination.