Countering Terrorism Online with AI - An Overview for Law Enforcement


ii. Technical challenges in using AI tools

a. False positives and false negatives

When optimizing an algorithm's accuracy, AI developers can adjust the threshold that determines whether an output is classified as positive or negative. By tuning this threshold, it is possible to control the balance between two types of errors: false positives and false negatives. A false positive is an instance in which a positive result is incorrectly given, and a false negative is an instance in which a negative result is incorrectly given. As some margin of error in the output of an algorithm is inevitable and it is not possible to reduce both false positives and false negatives simultaneously, a choice must be made as to which type of error should be minimized. This choice is not straightforward, as both false negatives and false positives can have significant implications. In a predictive model that tries to identify terrorist actors (the positive label), reducing false negatives implies increasing false positives, which translates into accepting that some individuals will be wrongly identified as terrorists. This minimizes the risk of potentially dangerous individuals passing through the algorithm unspotted, but it can indiscriminately burden civilians. On the other hand, reducing false positives increases false negatives. This approach prioritizes avoiding the incorrect identification of someone as a terrorist, but it is also more tolerant of relevant subjects escaping detection. Accordingly, the precise setting of the threshold can have a significant impact on the results of the AI model deployed.
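The trade-off described above can be illustrated with a short, purely hypothetical sketch. The scores and labels below are invented toy values, not data from any real system; the point is only to show that moving the decision threshold shifts errors between the two categories.

```python
# Minimal sketch (illustrative only): how a classifier's decision threshold
# trades false positives against false negatives.
import numpy as np

# 1 = "flagged as a threat" (positive label), 0 = "not a threat" -- toy data
true_labels = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
# Hypothetical model scores (estimated probability of the positive class)
scores = np.array([0.10, 0.35, 0.55, 0.62, 0.70, 0.80, 0.45, 0.20, 0.40, 0.75])

def error_counts(threshold: float):
    """Count false positives and false negatives at a given threshold."""
    predicted = (scores >= threshold).astype(int)
    false_positives = int(np.sum((predicted == 1) & (true_labels == 0)))
    false_negatives = int(np.sum((predicted == 0) & (true_labels == 1)))
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.7):
    fp, fn = error_counts(t)
    print(f"threshold={t:.1f}  false positives={fp}  false negatives={fn}")

# Lowering the threshold catches more true positives but flags more innocent
# cases (false positives rise, false negatives fall); raising it does the opposite.
```

Running the sketch on the toy values shows the pattern discussed above: the lowest threshold produces the most false positives and no false negatives, while the highest threshold produces the reverse.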


b. Bias in the data

Fairness in AI is a declared aim across wide parts of the tech industry, policy, academia and civil society, and one of the key principles cited in relation to the responsible use of AI. It implies that algorithmic decisions do not create a discriminatory or unjust impact on end-users – as was the case with the aforementioned COMPAS recidivism algorithm.149 An investigation by an independent newsroom, ProPublica, demonstrated that the algorithm used by judges and parole officers in the United States to assess a criminal defendant's likelihood of re-offending was biased against certain racial groups. The analysis found that African American defendants were more likely than Caucasian defendants to be incorrectly judged to be at a higher risk of recidivism, i.e. there were considerably more false positives among African American defendants. Conversely, Caucasian defendants were more likely than African American defendants to be incorrectly flagged as low risk, meaning that the false-negative rate was consistently higher among Caucasian defendants, creating differential treatment of the two groups. Another high-profile example of algorithmic discrimination, this time relating to gender-based bias, came to light in 2018, when Amazon's AI automated hiring tool was shuttered after it was found to be biased against women. Reflecting the male dominance within the tech company, Amazon's system taught itself that male candidates were preferable.150

149 Julia Angwin, et al. (2016). Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks. ProPublica. Accessible at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
150 Jeffrey Dastin. (Oct. 11, 2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Accessible at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
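The kind of disparity ProPublica reported can be checked in a very simple way: by comparing false-positive and false-negative rates across groups. The sketch below is not the ProPublica methodology and uses invented toy data; it only illustrates how such a group-level comparison might be computed.

```python
# Minimal sketch (illustrative only, not the ProPublica analysis): comparing
# false-positive and false-negative rates between two groups on toy data.
import numpy as np

groups      = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # hypothetical group labels
true_labels = np.array([0,   0,   1,   1,   0,   0,   1,   1])    # 1 = re-offended (toy values)
predicted   = np.array([1,   0,   1,   1,   0,   0,   0,   1])    # hypothetical model output

for g in ("A", "B"):
    mask = groups == g
    y, p = true_labels[mask], predicted[mask]
    fpr = np.sum((p == 1) & (y == 0)) / max(np.sum(y == 0), 1)  # false-positive rate
    fnr = np.sum((p == 0) & (y == 1)) / max(np.sum(y == 1), 1)  # false-negative rate
    print(f"group {g}: false-positive rate={fpr:.2f}, false-negative rate={fnr:.2f}")

# Large gaps between the groups' error rates are one common signal of the kind
# of differential treatment described above.
```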


