The Consequences of Surveillance

In this digital age, we are constantly sharing personal information: our interests, photos, consumer history, location, desires, and countless other facets of our digital footprints. Technology is developing faster than it can be controlled, and there is little regulation of how our private information is sold. But what happens when technology enforces the law, weaponizing our personal data to uphold a white, capitalist surveillance state?

In January 2020, the New York Times revealed that over 600 law enforcement agencies have been using Clearview, a facial recognition app still in its covert start-up stage. Clearview’s facial recognition algorithm compares uploaded photos to similar photos in its comprehensive database, along with links to each photo’s source. The database is a collection of faces from across the Internet: news sites, employment sites, and social networks such as Facebook, Twitter, and even Venmo. Many companies ban this scraping of users’ data from their websites, and Twitter has explicitly prohibited use of its data for facial recognition. But Clearview has deliberately disregarded these companies’ data privacy policies in order to capitalize on the pervasive police state.

Clearview’s massive photo network, which stores photos even after accounts are made private, gives its clients unprecedented efficiency in identifying subjects, whose faces can even be partially covered in the photos. When testing out Clearview’s technology, the Indiana State Police identified a suspect from a fight filmed by a bystander within 20 minutes of using the app. Many fear the release of this invasive technology to civilians, but placing this power in the hands of the police already renders it ripe for abuse. Police officers may use this technology for personal reasons without consequence, as they have historically done with other weapons granted by the state. Multiple studies show that 40 percent of police officer families experience domestic violence, compared to 10 percent of families in the general population.

Furthermore, this technology facilitates the racism that has long pervaded law enforcement. Police are now able to identify activists from one picture, or figure out the location of any citizen they racially profile. Immigration and Customs Enforcement (ICE) uses Amazon’s Rekognition software, which matches photos to real-time footage of police body cameras and city camera networks in order to identify and arrest undocumented immigrants. Yet studies have found that Amazon’s software is more likely to misidentify faces with darker skin, and similar gender and racial biases have been found in facial analysis software from Microsoft and IBM. The solution to these inherent biases is not to improve the accuracy of surveillance technology, but to eliminate its use as an instrument of oppression and injustice entirely.

Personal surveillance technology like Ring, Amazon’s home surveillance product, forces people to exhibit police behavior in their own communities. Law enforcement agencies are allowed to access people’s Ring surveillance footage without a warrant, and as of July 2019, more than 200 police departments were partnered with Ring. These police departments have special access to Ring’s accompanying app, Neighbors, where people can share their surveillance footage with anyone else who has the app. Police can request access to a user’s Ring footage through the app by specifying a date, time, and location, which notifies users in the surrounding area.

Mass surveillance has become automated, and individuals are never able to escape from the overreaching police state.

The advancement of technology and its infiltration of private data is a powerful tool to supposedly increase public safety. But the police do not exist to defend people; they exist to defend the structures of the status quo and maintain the power of whiteness and capitalism.

This conversation about data policing extends beyond facial recognition software: What are the definition and consequences of free speech? How do companies, the government, and other monolithic entities track our digital footprints, and for what purposes? These questions may seem hypothetical, but the fears behind them are grounded in reality. The government and corporations already manipulate data to target and silence marginalized communities, such as pro-Palestine activists and sex workers.

More often than not, the Internet is an unsafe space for activists to organize and share anti-establishment ideas. For instance, user-run platforms such as Canary Mission publicly condemn students and faculty who express pro-Palestine views, compiling their public information for future employers, colleagues, and universities to see.

The openness of the Internet once presented a haven to sex workers, but recent legislation has caused social media platforms to ban the very users who made their sites sustainable and profitable in the first place. Legislation such as SESTA/FOSTA claims to protect against sex trafficking by pursuing and penalizing websites that “promote or facilitate prostitution” or are found to be “knowingly assisting, facilitating, or supporting sex trafficking,” but this broad language leads content platforms to overpolice and censor their content to avoid any risk of civil or criminal liability. Criminalizing platforms that host sexual content wholesale also criminalizes safer, independent methods of sex work. Without the community and safety of individual online profiles, sex workers are subjected to the demands of third-party companies and authorities who grant them far less agency in their labor. Companies have enacted site-wide bans on NSFW content to protect themselves against criminal charges, rather than distinguishing child sexual abuse and other forms of nonconsensual trafficking, which warrant restriction, from consensual sex work. Yet sites like 8chan and Tumblr still allow cesspools of white supremacy to form and organize, which has led to real-world consequences, such as the mass shootings in El Paso, Texas and Poway, California.

Independent networks of connection allow sex workers to evade criminalization and exploitation, especially when many sex workers, like trans sex workers or undocumented sex workers, are locked out of other forms of labor due to discrimination. Sex workers are able to organize online and communicate about safe work methods, dangerous clients, community meetings, and other information that gives them more safety and stability. The criminalization, surveillance, and erasure of sex workers take away their agency and means of survival. It bans them from participating in labor to survive in a capitalist society, while those who uphold these unjust standards are still allowed to participate in the Internet, and in fact, govern it.

Surveillance technology brings us closer to a dystopian panopticon, where individuals are constantly watched without their knowledge or consent. Already, citizens pay for software to police their own communities, and hundreds of police departments scour personal Internet profiles and city cameras to pursue their violent agendas of whiteness and capitalism.