Mapping the Artificial Intelligence, Networked Hate, and Human Rights Landscape


Project Report, Summer 2018. Montreal Institute for Genocide and Human Rights Studies (MIGS), Concordia University

The following report is based on research and on the ideas, opinions, and concerns expressed by many individuals contacted throughout the Artificial Intelligence, Networked Hate, and Human Rights project. The conclusions drawn from this study are subject to change, as further research may reveal information not currently considered. The subject matter is in rapid development. Therefore, the purpose of this report is not to present any final position of MIGS, but to develop a first starting point and landscape map from which policy gaps can be identified, more research can be conducted, and new projects conceptualized. The project was made possible through the support of Global Affairs Canada.

About MIGS

The Montreal Institute for Genocide and Human Rights Studies at Concordia University is recognized internationally as Canada’s leading research and advocacy institute for the prevention of genocide, mass atrocity crimes and violent extremism. MIGS conducts in-depth research and proposes concrete policy recommendations to resolve conflicts before they degenerate into mass atrocity crimes. MIGS has achieved national and international recognition for its role as an idea and leadership incubator working with policymakers, academics, leading research institutions, and the media. Today, MIGS plays an important role in working at the intersection of emerging technologies and human security.


TABLE OF CONTENTS

EXECUTIVE SUMMARY
INTRODUCTION
1 METHODOLOGY AND DEFINITIONS
2 ANALYSIS
2.1 Context, Dynamics, and Recent Developments
2.2 Content Takedown and Regulation of Social Media Platforms: Current Approaches
2.3 Nefarious AI Use Cases and Proliferation of AI technology to Non-State Malicious Actors
2.4 Geopolitical Considerations
3. CONCLUSION AND RECOMMENDATIONS
APPENDIX
Resources and Further Information: Papers and Articles
Resources and Further Information: AI Policy Papers and AI Tutorials
AI Landscape Mapping: Workshop in cooperation with Tech Against Terrorism
AI Landscape Mapping: Montreal AI Community


EXECUTIVE SUMMARY

In the digital age, rising societal tensions often develop and play out on social media platforms. Extremists, be it jihadists or groups associated with the far-right and far-left, are increasingly using this space to propagate their hateful ideas, recruit members to their cause, and execute their plans. Even with their sophisticated capabilities in gathering and monitoring large amounts of data, intelligence and law enforcement agencies are overwhelmed by the sheer volume of extremist content produced and diffused online. Facing evolving political pressure to remove harmful online content, tech companies are realizing the limits of employing and contracting human content moderators. As such, they are increasingly developing and deploying various forms of Artificial Intelligence (AI) technology that can automate the detection and removal of unwanted content.

There is growing global political will among national governments to regulate social media companies. However, many of the large tech giants do not want to be regulated and are advancing the argument that they can self-regulate and that AI is one of the key tools they will use to disrupt online hate and extremism. As governments become more forceful in advancing regulation, the key question is how to do this in a smart manner. While well-known social media giants have publicly stated they would put more resources into policing their platforms, it is evident that they favor self-regulation and are wary of government interference and oversight. The United Kingdom, France, Germany, the European Union and the United States, amongst others, have begun to openly discuss and implement regulatory measures on the tech industry, not only pertaining to terrorism and hate speech but also to digital election interference and the spread of “fake news” and misinformation campaigns.

Given that democratic and non-democratic states are competing to lead the development and application of AI against the backdrop of geopolitical instability, this also raises important questions about the application of AI technology in conflict and the proliferation of AI technology to malicious non-state actors. Reduced costs and the declining knowledge needed to develop and apply AI technology will make it a prime target for exploitation by terrorist and extremist organisations in the near future. For example, they might employ Deepfakes (realistic-looking videos created entirely by AI) to incite hate and spread propaganda, engage in AI-enhanced phishing campaigns, try to sabotage civilian AI systems’ data sources, or build makeshift AI surveillance systems.

As AI is advancing and evolving at a rapid pace, it is imperative that governments, the private sector and civil society prioritise collaboration and knowledge sharing, while also conducting forecasting efforts to design policies and normative frameworks in relation to the malicious uses of AI. The key challenge over the coming years will be to narrow the knowledge gap between policy and (applied) research to craft sensible policies and government responses to the growing application of AI. With Canada quickly becoming an AI leader on the global stage, an opportunity is presenting itself in which the Canadian government and all stakeholders can ensure AI is used for social good while simultaneously contributing to upholding human rights and countering networked hate.


INTRODUCTION

Recent breakthroughs in Artificial Intelligence (AI) research and development, and its increasing application in civil and military products, have given rise to new and complex challenges for governments, private entities, and the public. In many of the discussions in 2017 regarding AI-driven systems, the security and global implications of artificial intelligence lived in the shadow of domestic and legal concerns (i.e. AI’s impact on the labour market and the regulation of self-driving cars, respectively), but policymakers, intelligence practitioners, and human rights activists have recently taken more interest in the rapidly evolving field of AI and international security, which promises both opportunities and challenges.

“Mapping the Artificial Intelligence, Networked Hate, and Human Rights Landscape” is intended to add to current AI discussions in examining the opportunities and risks of applying AI systems to the detection and takedown of online terrorist and extremist content, AI’s possible acceleratory effects for extremist causes and ideologies, and AI systems’ potential misuse by terror and extremist organisations. It is based on a) dialogue and outreach within the Montreal AI community, b) interviews and discussions with Canadian and international AI experts and policy-makers, and c) a joint workshop with the global NGO Tech Against Terrorism, an initiative supported by the UN Counter Terrorism Executive Directorate.1

The objectives of this report are threefold. First, it gives a broad overview of key policy challenges around pressing issues at the intersection of Artificial Intelligence, Networked Hate, and Human Rights. Second, it provides initial thoughts and recommendations on how to approach these challenges. Third, it serves as an AI knowledge-mobilization tool through extensive lists of curated policy papers and AI learning resources.

The report is divided into three sections. The first includes a brief methodology and outlines the key terms used. The second section, which constitutes the bulk of this report, is an analysis of the current and future challenges in the development and application of AI-powered systems in digital extremist content moderation, and an overview of malicious AI use cases. The third section provides concluding remarks and policy recommendations for the Government of Canada, with special attention to the mandate of Global Affairs Canada. The appendix of this report provides further resources and summarizes this project’s mapping component.

1 See TaT mission statement and relationship to UN CTED here.



1 METHODOLOGY AND DEFINITIONS

The findings of this exploratory report were ascertained on the basis of three methods of data collection: semi-structured interviews; presentations and discussions during a collaborative one-day workshop organized with Tech Against Terrorism; and desk research and media monitoring.

Method of Data Collection 1: Semi-structured Interviews

This report made use of semi-structured interviews conducted between November 2017 and March 2018 with AI researchers, social scientists, and policy-makers in Berlin, New York, Washington DC, Toronto, and Montreal. This interview format consisted of several key questions to help frame the conversation, but did not limit the respondents to a predetermined set of answers. Doing so allowed for follow-up questions (e.g. when an interviewee’s answer was complementary to, or incompatible with, previous interviews) and offered the flexibility required for an in-depth discussion of the subject matter. While the interviewer was knowledgeable about the subject, this approach allowed information to be gathered that was not previously anticipated by the research team. The guiding questions were:

1. What role do social media companies currently play in taking down terrorist and extremist content?
   a. What should this role be?
   b. To what extent do these companies rely on AI systems?
   c. How much do we know about these systems?
2. What do current and future malicious AI use cases look like?
   a. How are existing AI systems exploited?
   b. Do we already see makeshift AI systems being deployed by malicious non-state actors?
   c. What is the current thinking around the proliferation of AI systems to malicious non-state actors?
3. What are the global trends and dynamics in AI developments?
   a. Which nations are rolling out AI strategies and how?

The majority of the interviews were conducted in person, with some interviews conducted via Skype. In almost all cases, interviews lasted between 45 and 90 minutes. The respondents were chosen from the tech industry, government, research institutes, and academia. A constant effort was made to keep a gender balance in our pool of respondents. However, the number of male participants outpaced the number of female respondents by approximately 2 to 1.


Method of Data Collection 2: Workshop with Tech Against Terrorism

This report’s findings are informed by a full-day multi-stakeholder workshop at Concordia University in March 2018, which brought together a diverse group of about seventy participants: AI researchers, radicalisation and counter terrorism experts, startup founders, and civil society representatives. Participants and invited experts came from different backgrounds to discuss the opportunities and challenges of AI, with the goal of further developing innovative approaches and solutions. While the workshop was advertised to the public, MIGS targeted invites towards individuals and companies who were likely to contribute constructively to the discussions. By doing so, the workshop was steered to include mainly participants who are directly involved in the development of AI or the fight against extremism, or both.

Five separate sessions were held, covering the following topics: Launch of the Tech Against Terrorism Data Science Network; Terrorism, Technology, and Exploitation; OSINT - Big Data and Application; Algorithms and Application – Predict and Identify; Artificial Intelligence and Ethics - the Requirement for Transparency. The workshop partly served as a platform for several Canada- and US-based companies to showcase their AI applications and discuss their risks and opportunities. Furthermore, the project played a central role in the launch of the Data Science Network, a Tech Against Terrorism (TaT) initiative comprising a collection of tools that small tech companies can use to better protect themselves against malicious use of their services. This is especially important as small tech companies are the most vulnerable to malicious use, due to a lack of the financial and personnel resources needed to adequately address the exploitation of their services for extremist causes.

Method of Data Collection 3: Research and Outreach

Data collected through interviews and the workshop was supported by research and outreach to the Montreal AI ethics and startup community. The research component consisted mainly of the selection of recent and relevant AI policy papers, international media monitoring, AI research papers, and AI learning tutorials. Montreal outreach was done through visiting relevant AI startups, hosting the AI & Ethics Meetup, participating in conferences and workshops, and frequent interaction with Concordia University’s District 3 startup incubator.2

2 For selected papers see Appendix, Resources and Further Information: AI Policy Papers and AI Tutorials.



Definition 1: Artificial Intelligence

The term Artificial Intelligence is defined in various ways -- meaning different things to many people. For the purpose of this report, we found relying on the broad description by Ryan Calo, as slightly adapted below, most useful in terms of flexibility and practical relevance:

“Much of the contemporary excitement around AI flows from the enormous promise of a particular set of techniques known collectively as Machine Learning (ML). Machine learning refers to the capacity of a system to improve its performance at a task over time. Often this task involves recognizing patterns in datasets, although ML outputs can include everything from translating languages and diagnosing precancerous moles to grasping objects or helping to drive a car. Most every technique that underpins ML has been around for decades. The recent explosion of efficacy comes from a combination of much faster computers and much more data. In other words, AI is an umbrella term, comprised of many different techniques. Today’s cutting-edge practitioners tend to emphasize approaches such as Deep Learning (DL) within Machine Learning (ML) that leverage many-layered structures to extract features from enormous data sets in service of practical tasks requiring pattern recognition, or use other techniques to similar effect. As we will see, these general features of contemporary AI — the shift toward practical applications, for example, and the reliance on data — also inform our policy questions.”3

Figure: Overview of Deep Learning (DL) as part of Machine Learning (ML).4
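To make the description above more concrete, the short sketch below shows what a minimal supervised Machine Learning workflow looks like in practice: a model is fitted to a handful of labelled examples and then asked to score text it has not seen before. It is purely illustrative and assumes the scikit-learn library; the toy data and labels are invented for the example and are not drawn from any system discussed in this report.

```python
# Minimal supervised ML sketch (illustrative only): learn to separate two
# categories of short text from labelled examples, then score unseen text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data: each text comes with a human-assigned label.
texts = [
    "join our cause and fight the enemy",      # 1 = unwanted
    "we will destroy those who oppose us",     # 1 = unwanted
    "community picnic this saturday at noon",  # 0 = benign
    "new recipe for maple syrup pancakes",     # 0 = benign
]
labels = [1, 1, 0, 0]

# The pipeline turns raw text into numeric features (TF-IDF) and fits a simple
# classifier -- the "recognizing patterns in datasets" described by Calo.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model produces class probabilities for text it has never seen.
print(model.predict_proba(["picnic with the community"])[0])
```

Real content-moderation systems follow the same basic pattern at vastly larger scale, with far richer features and, as discussed in Section 2.2, substantial human review of the model’s output.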

This report assumes basic knowledge about AI and familiarity with key terms and distinctions between various classes of AI5 (especially ML and various DL approaches), the difference between structured and unstructured data, and supervised vs. unsupervised learning. Over the course of this project, some substantial and detailed introductions to AI for policy makers have been published, for example by the Harvard Belfer Center6 and the Brookfield Institute7. For a selection of introduction papers and detailed technical overviews see the resource section in the appendix.

3 See Artificial Intelligence Policy: A Primer and Roadmap.
4 See Efficient Processing of Deep Neural Networks: A Tutorial and Survey.
5 As outlined here: Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data.

This report focuses on current prototypes of AI systems, their live applications, and a future time horizon of the next 5 to 10 years. It is also important to note that even with major recent breakthroughs in AI, there are still many basic challenges to overcome, such as the reproducibility of ML model outcomes, ML programming code version control,8 and inherent biases in underlying datasets.9 The discussions about the application of AI systems to counter networked hate have to be seen in that light, and popularized and overconfident projections about the impact of AI generally should be taken with a grain of salt (as Amara’s law also applies to AI: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”10).

In this report we will use the terms Artificial Intelligence (AI) and Machine Learning (ML) interchangeably and will refer to more specific DL concepts where appropriate and necessary.

Definition 2: Networked Hate

This project was originally titled “Mapping the Artificial Intelligence, Online Hate, and Human Rights Landscape.” After some research and an initial round of expert interviews, it became clear that in today’s digitally-connected environment, the binary distinction between online and offline is becoming more and more obsolete. Being “online” and “digitally networked” is the norm, often completely embedded into everyday life, blurring the line between the “online” and “offline” worlds.

The adapted term networked hate stresses this networked and constant character of hateful speech and behaviour over the internet, especially on social media platforms. While networked hate could certainly be used to describe the whole range of hateful behaviours (e.g. #GamerGate and online bullying), this report focuses on those actions deemed extreme, illegal, and often politically motivated, committed by terrorist and/or extremist organisations.

6 See Machine Learning for Policy Makers (2017).
7 See AI + Public Policy: Understanding the shift (2018).
8 As outlined in The Machine Learning Reproducibility Crisis.
9 See talk by Kate Crawford at NIPS 2017.
10 See The Seven Deadly Sins of Predicting the Future of AI.


2 ANALYSIS

2.1 Context, Dynamics, and Recent Developments

In the digital age, rising societal tensions often develop and play out on social media platforms. Extremists, be it jihadists or groups associated with the far-right and far-left, are increasingly using this space to propagate their hateful ideas, recruit members to their cause, and execute their plans. Even with their sophisticated capabilities in gathering and monitoring large amounts of data, intelligence and law enforcement agencies are overwhelmed by the sheer volume of extremist content produced and diffused online. Terrorist organizations, most notably the Islamic State in Iraq and Syria (ISIS), have taken advantage of social media platforms in their propaganda campaigns, using them to spread their ideology online, recruit “foreign fighters” to their cause, and plan terrorist attacks against civilians.

At the heart of this problem is the nature of the internet itself; this space is transnational, often ignoring physical state boundaries and legal jurisdictions. Non-state actors can use the services of the tech industry with agility and can reasonably assume impunity. Therefore, it becomes much easier for terrorist organizations to circulate their ideas and plans while remaining somewhat hidden from law enforcement and intelligence authorities. Simply put, social media platforms and tech companies are facilitating communication amongst extremists, threatening the security of nations and the human rights of individuals. We are facing an important question in our democratic societies: how do we best deal with and respond to this networked extremist content?

Mass casualty attacks by extremist groups, and those who have pledged allegiance to them, have forced governments to respond to the digital aspect both at the domestic and international levels. Increasingly, far-right groups, in response to perceived “civilizational” terrorist attacks by ISIS and, in some cases, the migration of asylum-seekers into Western countries, have begun exploiting tech platforms to espouse anti-immigrant views and demonize minorities. This has resulted in a perfect digital cocktail of networked hate. Social media companies, more than ever, are facing mounting pressures from national governments, the United Nations, civil society groups and individual citizens to address the issue of extremist content diffused through their platforms and technologies.


The evolving nexus between terrorism and the internet forced the United Nations Security Council to begin taking this issue more seriously, with UN resolution 2129 mandating the UN Counter Terrorism Executive Directorate (UN CTED) to begin mounting a response in close cooperation with the private sector. In 2017, UN resolution 2354 instructed the UN CTED to review developments globally in countering terrorist narratives and ways for Member States to build their capacity in the field of counter-terrorist narratives, including the online component. Supported by the UN and tech industry giants, including Twitter, Microsoft, Facebook and Google, Tech Against Terrorism was established in 2016 to tackle how extremists exploit emerging technologies.

While the well-known social media giants have publicly stated they would put more resources into policing their platforms, it is evident that they favor self-regulation and are wary of government interference and oversight. The United Kingdom, France, Germany, the European Union and the United States, amongst others, have begun to openly discuss regulatory measures on the tech industry, not only pertaining to terrorism and hate speech but also to digital election interference and the spread of “fake news” and misinformation campaigns.

In March 2018 the United States Congress passed legislation to hold websites accountable for advertisements and posts that facilitate a major crime: the sex trafficking of children. This decision was originally opposed by many tech companies and internet freedom advocates, as it removes the previous safe harbour protection for online platforms, commonly referred to as “Section 230”. Before this legislation, service providers and social media companies could not be held liable for posts made by users of their services. This might prove a significant development, with implications for how networked hate and extremism online will be dealt with in the United States -- especially since prominent legal proponents of Section 230 have recently expressed a willingness to change their position. The European Union is also contemplating measures to address the immunity of online platforms, by considering laws that will hold private companies responsible for illegal content, including terrorism and certain types of hate speech.

Facing this evolving political pressure to remove harmful online content and realizing the limits of using and contracting real people as content moderators, tech companies have demonstrated a mounting interest in building and deploying AI-powered algorithms that can automate the process of unwanted content detection and removal.


2.2 Content Takedown and Regulation of Social Media Platforms: Current Approaches

2.2.1 What role do social media companies currently play in taking down terrorist and extremist content?

Tech companies are painfully aware of the malicious use of their platforms, and as such have launched a growing number of initiatives aimed at removing terrorist and extremist content. In June 2017, Facebook, Microsoft, Twitter and YouTube announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT) in order to disrupt the distribution of extremist content online. Tech companies joined together to create new technologies and techniques for flagging and removing unwanted content. In its latest transparency report, Twitter claims to have suspended more than 1.2 million terrorist accounts since August 2015. Furthermore, it has observed a decrease in the number of terrorist content takedowns in recent reporting periods, suggesting that the platform is becoming an undesirable place for those who seek to promote terrorism. Facebook has pledged to double its safety and security staff, adding an extra 10,000 (mainly outsourced) analysts involved in content moderation. The company also claims to be able to remove 99% of ISIS and Al-Qaeda affiliated content using its own AI-powered algorithms and human content moderators. However, what is missing are hard numbers on how much terrorist content was removed with the use of AI. In 2017, 250 companies suspended advertising contracts with Google over its alleged failure to moderate YouTube’s extremist content. A year later, Google’s senior vice president of advertising and commerce, Sridhar Ramaswamy, stated that the company is making strong progress in platform safety to regain the lost confidence of clients.

Governments are also concerned about the use of social media platforms to propagate extremist content online. Germany, for example, has witnessed increased anti-Semitism on digital platforms, far-right groups mobilising online to disseminate anti-refugee sentiment, and an ISIS-inspired terrorist attack in Berlin. It is unsurprising, then, that the country is taking the issue seriously, given its unique history. In January 2018, the so-called Network Enforcement Act came into force; it requires social media platforms with more than 2 million users in the country to erase posts that violate German hate speech laws. Failure to do so within 24 hours can result in a fine of up to 50 million euros.


The government of the United Kingdom has taken a different approach, in part due to a string of ISIS-inspired terrorist attacks in London and Manchester, as well as a growing far-right movement that resulted in the assassination of Member of Parliament Jo Cox and a terrorist attack against a mosque in London. Seeing that the government has an active role to play in content moderation, the services of the private sector company ASI Data Science were retained to develop software that uses AI to detect extremist videos and prevent them from being uploaded.

2.2.2 To what extent do these companies rely on AI systems? And how much do we know about these systems?

Faced with mounting pressure from the public and private sector, major tech companies are now making use of AI-powered systems in conjunction with human analysts to flag and remove extremist posts. In their current usage, AI systems focus mainly on identifying unwanted content; human analysts are responsible for reviewing flagged content and deciding on its removal. The following section explores some of the central challenges for tech companies in regard to the use of AI in digital extremist content moderation. As will be discussed, these systems are -- albeit already deployed -- in their early stages of development, which gives rise to potential flaws that need to be addressed.

First, public sector actors, and surprisingly online platforms, often assume that the technical problems associated with automated content moderation processes are easily solvable. However, detecting and removing unwanted online content is far easier said than done. While the major online platforms often claim to have near-perfect content moderation capabilities using AI (between 80-99%, depending on the platform), there is no common approach for determining which content the employed AI system deemed illegal, and why. These self-reported statistics do not offer the details required to judge the success of current AI-driven algorithms in content moderation. Furthermore, there are different definitions of what is considered “unwanted content,” creating grey zones and often leading to either confusion over takedowns or over-blocking.

Removing content which is not considered harmful, offensive, extremist, or illegal, even if distasteful, is an impediment to free speech, and the use of AI in content removal has sometimes resulted in blocking legitimate content posted by human rights activists. In 2017, a number of activists had their social media accounts suspended or their posts deleted. Shah Hossain, an activist living in Saudi Arabia, had a number of his posts regarding the conflict in Myanmar deleted, and digital content documenting atrocities committed against the Rohingya minority was deleted by Facebook. Arakan News Agency, a YouTube news channel with nearly 80,000 subscribers, was deleted by YouTube for posting videos of the Rohingya crisis. In Syria, where independent journalism is severely restricted by war, videos and photos posted online by individuals on the ground are crucial to understanding the situation in the country. In an attempt to crack down on digital extremist content, however, YouTube’s AI-powered algorithms have removed thousands of videos which documented the atrocities and were posted as evidence for the eventual prosecution of Syrian officials for crimes against humanity. In these cases AI-powered algorithms removed content that was not created for the purpose of radicalizing or inciting others to commit violent acts.

The current practice of training systems on datasets of known unwanted content is impractical, as systems trained in this manner cannot flag and remove content they have not previously seen. Unless this method of training is changed, considerable human oversight will be required in content moderation to minimize the risk of false positives and false negatives. Independent testing of online platforms’ AI systems must also be considered, potentially through a regulatory body. Limits on false positives and false negatives must be set and enforced to strike a reasonable balance between content moderation and freedom of speech. Under current practice, AI moderation success rates are self-reported by companies. These private entities have an incentive to exaggerate their systems’ performance, as their advertisement revenues are highly affected by their capacity to moderate unwanted content.

Another observation is that the current application of AI-powered algorithms is based on using a database of known unwanted content to flag and remove potentially offensive or illegal content; these systems do not have the capability of moderating content which they have never been trained on. This is especially problematic in the case of terrorist content, as the perpetrators continuously change the language and imagery used online to avoid detection. Furthermore, to increase an AI system’s ability to moderate unwanted content, there is the need for “good data” which can be used to train the system. In this case, good data is understood as data which is useful to the particular task (i.e. terrorist data used to train AI algorithms for terrorist content moderation). Obtaining these large datasets is a costly endeavour, often only attainable by the larger platforms such as Facebook and Google. Smaller companies are, therefore, often left vulnerable to malicious use, simply because they do not have the resources to collect the data required to train their AI systems. Tech Against Terrorism, with the launch of its Data Science Network, is attempting to address this concern by allowing smaller companies to access a shared database. However, some small companies have shown reluctance to participate, voicing concerns over sharing their data with those they often view as their competitors.
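The discussion of false positives, false negatives and self-reported success rates above can be made concrete with a small numerical example. The sketch below uses invented counts purely for illustration and shows why a single headline accuracy figure reveals little about either over-blocking or missed content; precision and recall (or the underlying false positive and false negative rates) carry the information a regulator would actually need.

```python
# Illustrative only: a single "99% accurate" figure can coexist with heavy
# over-blocking. All counts below are invented for the example.
true_positives = 950     # extremist items correctly removed
false_negatives = 50     # extremist items the system missed
false_positives = 900    # legitimate items wrongly removed (e.g. activist footage)
true_negatives = 98_100  # legitimate items correctly left up

total = true_positives + false_negatives + false_positives + true_negatives

accuracy = (true_positives + true_negatives) / total
precision = true_positives / (true_positives + false_positives)  # share of removals that were justified
recall = true_positives / (true_positives + false_negatives)     # share of extremist content actually caught

print(f"accuracy:  {accuracy:.3f}")   # about 0.99 -- looks excellent
print(f"precision: {precision:.3f}")  # about 0.51 -- nearly half of removals hit legitimate content
print(f"recall:    {recall:.3f}")     # 0.950 -- 5% of extremist content still gets through
```

Requiring platforms to report precision and recall (or false positive and false negative rates) separately, rather than a single accuracy number, would give a far clearer picture of both over-blocking and missed content.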


Misuse of both types of platforms arguably poses an equal threat to Canadian national security; while smaller platforms may have a shorter reach in terms of audience, propagating harmful content on them while remaining undetected is much easier. In addition, there must be an international component in any potential government regulation. The Government of Canada ought to coordinate its regulatory efforts, through Global Affairs Canada, with other states, democracies and non-democracies alike. There must be a basic definition of what should be considered “unwanted content,” so that online platforms can build datasets applicable in multiple countries or regions.

Finally, regulating the internet, a space which is inherently international, through national regulatory regimes is challenging. What is considered “unwanted content” differs substantially depending on the country. AI systems cannot be trained on a single large dataset and then applied to online platforms everywhere. Each country requires its own dataset, tailored to its own needs and concerns. This requires a complex AI-training process which must be sensitive to regional and political considerations. Furthermore, moderation systems powered by algorithms largely depend on associating keywords or images with known unwanted content in a database. Therefore, small changes to words or modified pixels in an image can mislead the AI system, either flagging and removing acceptable content or failing to recognize known unwanted content (false positives and false negatives, respectively). Simply put, current AI-driven systems cannot yet replace the human analyst, as they are easily misled by a user with some understanding of automated moderation systems.

Some of the technical issues outlined above cannot be solved unless considerable advancement is made in AI systems’ decision-making processes. Canada is a center for AI research and is especially well positioned to foster further innovative research in AI, perhaps through government funding programs for Canadian AI companies.
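To illustrate the point above that small changes to words or pixels can defeat systems built on matching known content, the toy sketch below (using only Python’s standard library, and not modelled on any platform’s actual system) shows that a one-character change produces a completely different cryptographic hash, so a blocklist of known-content fingerprints no longer matches. Perceptual hashes and learned classifiers are more robust than exact matching, but as the paragraph above notes, they too can be misled by small, deliberate modifications.

```python
# Toy example: exact hash matching breaks with a one-character change.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest, as a simple known-content blocklist might."""
    return hashlib.sha256(data).hexdigest()

known_item = b"example of previously flagged propaganda text"
blocklist = {fingerprint(known_item)}

# The same item with a single character swapped ("0" for "o").
modified_item = b"example of previously flagged pr0paganda text"

print(fingerprint(known_item) in blocklist)     # True  -- exact copy is caught
print(fingerprint(modified_item) in blocklist)  # False -- a trivial change evades the match
```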


2.3 Nefarious AI Use Cases and Proliferation of AI Technology to Non-State Malicious Actors

2.3.1 What do current and future malicious AI use cases look like?

The rise of more easily adoptable and employable AI systems has recently sparked debate and research into a variety of malicious AI use cases, exemplified by the recently published Malicious Use of Artificial Intelligence report by OpenAI, the Electronic Frontier Foundation, and others.11 The production of AI systems and applications is becoming increasingly democratized, with the major tech companies offering access to their databases, ML libraries and production environments. Additionally, tutorials on simple but effective ways to train neural networks with a possibly nefarious use case in mind are easily available, and the hardware costs to build simple AI systems are decreasing -- an ideal combination for highly motivated but relatively resource-strapped malicious actors like terrorist organisations. This project explored in more detail the use of AI systems by terrorist organisations, and the future scenarios below have been discussed with interviewees and workshop participants:

● Terrorist organisations could potentially create and use Deepfakes12 to incite hate and to spread propaganda.

● Terrorist organisations might engage in AI-enhanced phishing campaigns for a wide variety of strategic or tactical reasons, e.g. getting access to information which can then be used for attack planning.

● Terrorist organizations might intentionally make use of “data poisoning”13 tactics -- meaning that they will try to influence an AI system’s data source in order to render it useless or produce outcomes favourable to the terrorist organisation’s cause.

● Terrorist organisations could employ makeshift AI surveillance systems for a wide variety of use cases, especially if they have territory under their control.14

● Terrorist organisations might pair facial recognition with an IED (improvised explosive device) to only attack specific individuals when they come close to the device.

11 See The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
12 Audio and video manipulated by a Deep Learning system to look and sound like a real person, saying something that that person has never said - Macmillan Dictionary.
13 See Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning.
14 Simple tutorials for facial recognition are openly available on the web.


2.3.2 Do we already see makeshift AI systems being deployed by malicious non-state actors?

Throughout this project we did not come across any such makeshift AI systems in use by terrorist organisations, and neither did the experts consulted. The consensus view, however, was that this is an area which has to be closely monitored, as user-friendliness and access to AI production environments15 will certainly increase over time. As terrorist organisations have a track record of using the newest technology available to them to advance their cause, there will likely be more experimentation with makeshift AI systems in the not too distant future. The main reasons for the lack of adoption so far are a lack of access to sufficient amounts of training data and missing AI skills and knowledge. With access to various AI platforms these hurdles are relatively easy to overcome.

According to Chris Meserole from the Brookings Institution, we are at a similar stage with AI as we were with social media in 2008: there is a lot of attention and excitement around the technology but not much thinking about the risks and opportunities for exploitation. In order to avoid ending up in a similar situation with AI as we are in with social media platforms today, thinking about the risks and malicious uses (and their prevention) has to be front and center, and warrants efforts in scenario and policy planning. Therefore, AI platform providers and national regulators should think ahead and think through questions of access to AI development environments and malicious use cases.

2.3.3 How are existing AI systems exploited?

Equally important, according to workshop participants, is the fact that malicious actors will always try to game and take advantage of algorithms and AI systems already deployed by major tech and social media companies and AI startups. As we have seen with the ongoing whack-a-mole game between terrorist organisations and social media companies in account creation, a similar dynamic will (and to some extent already does) play out in gaming AI systems employed in content takedown and access restrictions.

15 For example TensorFlow, Microsoft Azure AI, and others.



2.3.4 What is the current thinking around the proliferation of AI systems, especially to malicious non-state actors?

The dynamics outlined above sparked some discussion throughout this project about the proliferation of AI technology to malicious non-state actors and what can and should be done about it. Similar to issues around easily available hacking and distributed denial of service (DDoS) tools, AI technology in the wrong hands can become a serious cybersecurity risk -- especially when one considers that, due to the “golden rule in ML”,16 offensive cyber tools and tactics will have an advantage over defensive ones in employing AI. Any serious national AI strategy should consider these dynamics and pay close attention to the general cyberconflict and cybersecurity landscape, which increasingly incorporates AI systems in attack and defense.

To illustrate the last point: the US Cyber Command has just published its new vision17 and argues for moving its operations towards “persistent engagement”, which has more active cyber components than the previous doctrine. While this has important ramifications for the goal of a free and open internet, it also could lead to a shift in cyber tactics by hostile nations, e.g. hiring or enrolling ever more hackers and proxies (e.g. terrorist organisations) to conduct digital campaigns using AI to access and disrupt more networks and infrastructures than the U.S. (and its allies) can counter.18

When it comes to the proliferation of AI systems in the context of dual-use technologies, one final finding from the workshop warrants a mention. Several representatives of recently established US- and Canada-based AI startups at the workshop mentioned that they were approached by various non-democratic states. Their ask: to buy or get access to AI technologies for the purpose of surveilling and ultimately controlling their citizenry. Similar to debates about the proliferation of Western internet and communications surveillance technology, this poses important questions about the sale of AI technology and services to oppressive regimes. With Canada as a leading nation in AI development, there are difficult considerations and tradeoffs to be made about existing dual-use arms control regulations and the proliferation of AI technology to non-democratic allies in order to serve Canada’s national interest.
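The “golden rule in ML” referenced above (see footnote 16) can be seen in a toy experiment: a model evaluated on data drawn from the same distribution it was trained on performs well, while the same model evaluated on shifted data degrades sharply. This is one reason defenders, who must anticipate novel attacker behaviour, are at a structural disadvantage. The sketch below assumes NumPy and scikit-learn and uses synthetic data invented purely for illustration.

```python
# Toy illustration of the "golden rule in ML": performance only holds up when
# the deployment data resembles the training data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: class 0 centred at -1, class 1 centred at +1.
X_train = np.concatenate([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_train, y_train)

# Test set 1: drawn from the same distribution as the training data.
X_same = np.concatenate([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])

# Test set 2: class 1 has shifted (e.g. adversaries changed their behaviour).
X_shift = np.concatenate([rng.normal(-1, 1, (500, 2)), rng.normal(-0.5, 1, (500, 2))])

y_test = np.array([0] * 500 + [1] * 500)

print("accuracy, same distribution:   ", model.score(X_same, y_test))   # high
print("accuracy, shifted distribution:", model.score(X_shift, y_test))  # markedly lower
```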

16 “Data you are going to work on needs to come from approximately the same distribution as the data you are training on”, as described in ML and IT Security Final Zeronights Keynote (2017) - Google Project Zero.
17 See Achieve and Maintain Cyberspace Superiority.
18 See Triggering a New Forever War in Cyberspace.


2.4 Geopolitical Considerations

The following section does not fall narrowly within the main focus of the Artificial Intelligence, Networked Hate, and Human Rights project, but it reflects remarks often voiced in the interviews conducted and during the workshop. Any national AI strategy must take note of geopolitical considerations; the technology is not developing in a vacuum and is not shielded by the usual international considerations afforded to, for example, environmental or economic policies. From a geopolitical perspective, the development of artificial intelligence is often framed as a matter of competition (with many references in the media to an “arms race”) in which the leader may eventually set the standards in terms of AI development and use. From this standpoint, it becomes imperative for democratic states to take the initiative, as failing to do so may give the advantage to undemocratic states and threaten the collective security and safety of democracies. The expanding eye of the Chinese state, with its intrusive system of algorithmic surveillance constructing and storing profiles on its own citizens, is a prime example of the authoritarian use of AI. The manner in which artificial intelligence is created and applied will be a decisive factor in how democracies preserve their defining characteristics. Simply put, the development of AI is a matter of national and international security, with far-reaching implications for human rights.

AI research and development will operate within the above context. Thus, when speaking of AI, it is important to consider both software and hardware (e.g. specialized AI processors) development, and the manufacturing process and raw material requirements associated with the production of the latter. The manufacturing of AI-related hardware and developments in the semiconductor industry, especially, are topics not discussed often enough but warrant great attention. Many aspects of geopolitical competition will focus on the economic and military benefits AI could bring, and will revolve around access to and development of AI-centred hardware, similar to dynamics around 5G next-generation mobile telecommunication technology.19 MIT Media Lab’s Tim Hwang offers a timely and thorough overview of the geopolitical implications of AI hardware development.20

To conclude, any Western national AI strategy must include extensive international cooperation among liberal democratic states. It is highly encouraged to maintain continuous discussions on the subject matter with Western allies, specifically the US and UK (consider the G7 and NATO for economic and military developments of AI, respectively).

19 See for example in the US context Qualcomm v. Broadcom: A National Security Issue.
20 Computational Power and the Social Impact of Artificial Intelligence.



3. CONCLUSION AND RECOMMENDATIONS

The intersection between Artificial Intelligence, networked hate, and human rights is not a theoretical exercise. Governments, the United Nations, NGOs and tech companies are struggling to find solutions to what is a global challenge. In trying to tackle how terrorists and hate groups exploit the internet with malicious intent, more and more attention is being given to the use of AI to comb through data with the goal of preventing or removing online extremist content. Several key lessons were taken from this project’s interviews and the workshop held in Montreal in partnership with Tech Against Terrorism.

The first is that there is a lack of transparency in how tech companies design and deploy AI; very little information is made available to governmental authorities or the public.

Second, the use of AI to take down content deemed hateful or extremist can sometimes be problematic. In some instances, human rights activists, using social media platforms to raise awareness of crimes against humanity or to collect digital evidence for war crimes prosecution, saw AI-powered algorithms destroy their work. AI is imperfect and, in some cases, its use can impinge on key human rights that the Government of Canada has made a priority at home and abroad, including free speech, freedom of thought and freedom of expression.

Third, when AI is used to take down content deemed extremist, or social media accounts are deactivated, there is no legal or transparent process to appeal these actions.

Fourth, there is growing global political will among national governments to regulate social media companies. However, many of the large tech giants do not want to be regulated and are advancing the argument that they can self-regulate and that AI is one of the key tools they will use to disrupt online hate and extremism. As governments become more forceful in advancing regulation, the key question is how to do this in a smart manner.

Fifth, as AI is advancing and evolving at a rapid pace, it is imperative that governments, the private sector and civil society prioritise collaboration and knowledge sharing, while also conducting forecasting efforts to design policies and normative frameworks in relation to the malicious uses of AI. The key challenge over the coming years will be to narrow the knowledge gap between policy and (applied) research to craft sensible policies and government responses to real world problems.

Sixth, as Canada is quickly becoming an AI leader on the global stage, an opportunity is presenting itself in which the Canadian government and all stakeholders can ensure AI is used for social good while simultaneously contributing to upholding human rights and countering networked hate.

Seventh, democratic and non-democratic states are competing to lead the development and application of AI against the backdrop of geopolitical instability. This raises important questions about the application of AI technology in conflict and the proliferation of AI technology to non-state malicious actors.

Recommendations

Content Takedown and Social Media Platform Regulation

● Closely follow ongoing regulatory developments in the US, EU, Germany, France, the United Kingdom and other key Western allies. While it seems unrealistic at this point to secure a unified approach by liberal democracies to takedown and regulation, it is necessary to advocate for systemic and rights-based legislation congruent with democratic norms and values.

● Promote transparency in AI systems. Regulation has to consider demanding detailed information about the applied AI systems (“explainable AI”); AI systems’ decision-making needs to be explainable, as AI will be a key technology for content takedown and filtering by tech and social media companies -- both on their public-facing (e.g. news feed) components and their “dark social” (e.g. direct messages) ones.

● Assess the potential impact of automated content takedown on counterterrorism efforts. It is crucial to minimize the negative effects of automated systems on ongoing counterterrorism investigations and missions.

● Assess the potential impact of the proliferation of AI systems, specifically how the use of automated content takedown systems by non-democracies may affect human rights. Global Affairs Canada can promote internationally the ethical and responsible development and application of AI.

● Canadian digital content regulatory legislation must strike a balance between freedom of expression and security. Consider that overshoot might embolden and encourage non-democratic states to enact highly restrictive legislation.


Nefarious AI Use Cases

● Allocate resources to study current and future malicious AI use cases. Thinking through possible scenarios today will benefit the crafting of policy responses in the future.

● Take note of AI technology proliferation for malicious causes. While not yet observed today, current dynamics in cyber-security and cyber-conflict point to an increased use of AI systems for information operations by non-state malicious actors.

AI Knowledge Development

● Knowledge creation. Support civil society organizations to develop educational tools that can be shared with government officials. Encourage employees to dedicate time for knowledge creation through self-learning and tutorials - something which has already proven quite successful in private companies.

● Knowledge mobilization. Identify and gather existing knowledge to create a network of people working on “all things AI” within Global Affairs Canada.

● Knowledge sharing. Support this network with regular discussion forums and workshops in Canada and abroad.

● Fund AI literacy. Allocate resources to have staff attend AI and information security conferences and workshops.

● Establish a working group, composed of private sector tech experts and researchers, to monitor the development of AI and inform Global Affairs Canada on existing and emerging opportunities and challenges. This team can also be used to independently test the success rate of AI-driven systems used by social media platforms.

Global Leadership

● Ensure that the responsible and ethical development and use of AI to counter networked hate is added to the global policy agenda. Canada can play a leading role with its allies and partners in continuing to raise this in discussions at the G7 Summit meetings, NATO, the OSCE, La Francophonie, The Commonwealth, the OAS and the United Nations, including the UN Counter Terrorism Executive Directorate and its various specialized agencies.


APPENDIX

Resources and Further Information: Papers and Articles

This section consists of a curated selection of hyperlinked resources identified as relevant through research and the expert interviews conducted. They are meant to complement or add nuance to the findings and to provide a wider picture of the issues discussed in this report. Resources deemed especially important are marked in italic.

Context, Dynamics and Recent Developments

A National Data Strategy for Canada: Key Elements and Policy Considerations (2018) - CIGI
Counter-Conversations (2018) - Institute for Strategic Dialogue
Digital Deceit - The Technologies Behind Precision Propaganda on the Internet (2018) - New America & Shorenstein Center
Social Media, Political Polarization and Political Disinformation (2018) - Hewlett Foundation
The Future of Political Warfare: Russia, the West and the Coming Age of Global Digital Competition (2018) - Brookings & Robert Bosch Foundation
Untrue-Tube: Monetizing Misery and Disinformation (2018) - Jonathan Albright

Content Takedown and Regulation of Social Media Platforms: Current Approaches

Conversation AI (2018) - Jigsaw
Code of Conduct on Countering Illegal Hate Speech Online: First Results on Implementation (2016) - EU Commission
Code of Conduct on Countering Illegal Hate Speech Online: One Year After (2017) - EU Commission
Code of Conduct on Countering Illegal Hate Speech Online: Results of the 3rd Monitoring Exercise (2018) - EU Commission



Detection of Human Rights Violations in Images (2017) - Kalliatakis et al.
Extremist Propaganda and Social Media (2018) - Hearing at US Senate Commerce, Science and Transportation Committee
Extremist Speech, Compelled Conformity, and Censorship Creep (2018) - Citron
Facebook’s Uneven Enforcement of Hate Speech Rules (2017) - ProPublica
Here’s Who’s Been Blocked By Twitter’s Country-Specific Censorship Program (2018) - BuzzFeed
How Deep Neural Networks Work and How We Put Them to Work at Facebook (2017) - ODSC
Inside Facebook's Fast-Growing Content-Moderation Effort (2018) - The Atlantic
AI and Hate Symbols (2018) - Motherboard
What does Facebook consider hate speech? (2017) - ProPublica

Nefarious AI Use Cases and Proliferation of AI technology to Non-State Malicious Actors

Assessing Threat of Adversarial Examples on Deep Neural Network (2016) - Vast Lab
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text (2018) - Carlini & Wagner
Audio Adversarial Examples (2017) - Carlini
Adolf Hitler "Downfall Movie" to Mauricio Macri Deepfake (2017) - Reddit / YouTube
Adversarial Examples and Adversarial Training (2017) - Ian Goodfellow
Merkel Trump Deepfake (2017) - Reddit / YouTube
Risks of Advanced AI part 1: What is AI? (2018) - Maharaj & Krueger
Risks of Advanced AI part 2: What are the risks? (2018) - Maharaj & Krueger



Synthesizing Obama: Learning Lip Sync from Audio (2017) - University of Washington
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (2017) - UC Berkeley
The Malicious Use of Artificial Intelligence (2018) - Miles Brundage et al.

Geopolitical Considerations

Artificial Intelligence and the Future of Defense (2017) - The Hague Center for Strategic Studies
Artificial Intelligence Index 2017 Annual Report (2017) - AI Index
Artificial Intelligence and National Security (2017) - Belfer Center
Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power (2017) - Center for a New American Security
China Embraces AI: A Close Look and A Long View (2017) - Eurasia Group
Computational Power and the Social Impact of Artificial Intelligence (2018) - Tim Hwang
The Canadian AI Ecosystem (2018) - Greentech Asia
The State of AI in Montreal (2017) - Techemerge
The Future of Weaponized Artificial Intelligence (2017) - Army Cyber Institute


Resources and Further Information: AI Policy Papers and AI Tutorials

This section represents a selection of AI policy papers, ML tutorials, and noteworthy conferences we came across throughout the project, some of which were written or organized by interviewees or workshop participants. Relevant AI discussion forums we consulted throughout this project are also linked below.

AI Policy Papers

AI Now 2017 Report (2017) - AI Now

Artificial Intelligence and Foreign Policy (2017) - Stiftung Neue Verantwortung
Artificial Intelligence Policy: A Primer and Roadmap (2017) - Ryan Calo

AI + Public Policy: Understanding the shift (2018) - Brookfield Institute
Machine Learning for Policy Makers (2017) - Belfer Center
Machine Learning Explained (2017) - Rodney Brooks
The Seven Deadly Sins of Predicting the Future of AI (2017) - Rodney Brooks
Responsible Data Handbook (2016) - The Engine Room

AI Courses and Tutorials

AI Experiments with Google TensorFlow (2018) - Google - TensorFlow examples which can be programmed with JavaScript and run in a browser.
AI Learning Accelerator - comprehensive list of tutorials on how to train various neural networks.
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning - comprehensive list of ML/AI cheat sheets and an overview of different types of neural networks.
Course.fast.ai (2018) - Practical Deep Learning for Coders; learn how to build state-of-the-art models without needing graduate-level math, but also without dumbing anything down.




Datasheets for Datasets (2018) - Gebru et al.
Detectron - Facebook AI Research software system that implements state-of-the-art object detection algorithms, including Mask R-CNN; it is written in Python and powered by the Caffe2 deep learning framework.
FakeApp: A Desktop Tool for Creating Deepfakes - tutorial on how to build deepfake videos.
GAN: A Beginner's Guide to Generative Adversarial Networks - generative adversarial networks (GANs) are deep neural network architectures composed of two networks pitted against each other (hence "adversarial"). GANs were introduced by Ian Goodfellow and others at the University of Montreal, including Yoshua Bengio. A minimal sketch of the GAN training loop appears after this list.
Jason Mayes Machine Learning 101 - excellent presentation for getting informed about ML/AI, highly recommended by various people throughout the project.
Machine Learning Glossary - defines general machine learning terms as well as terms specific to TensorFlow.
Making Your Own Face Recognition System (2017) - Freecode - tutorial on how to build a makeshift face recognition system with limited resources.
TensorFlow and Deep Learning Without a PhD - Google tutorial for learning TensorFlow.
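To illustrate the two-network setup described in the GAN entry above, the following is a minimal, illustrative sketch of a GAN training loop in TensorFlow/Keras. The layer sizes, random stand-in data, and hyperparameters are assumptions chosen for brevity; they are not drawn from any of the resources listed here.

```python
# A minimal, illustrative GAN training loop (TensorFlow/Keras).
# Layer sizes, data, and hyperparameters are placeholder assumptions;
# random noise stands in for a real dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 32   # size of the random noise vector fed to the generator
data_dim = 64     # size of a (flattened) sample, e.g. a tiny image

# Generator: maps random noise to synthetic samples.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(data_dim, activation="tanh"),
])

# Discriminator: classifies samples as real (1) or generated (0).
discriminator = keras.Sequential([
    keras.Input(shape=(data_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: the generator is trained to fool the discriminator.
# The discriminator's weights are frozen inside this combined model
# (it was compiled above while still trainable, so its own updates
# are unaffected).
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

batch = 16
real_samples = np.random.uniform(-1.0, 1.0, size=(batch, data_dim))  # stand-in for real data

for step in range(200):
    # 1) Train the discriminator on a mix of real and generated samples.
    noise = np.random.normal(size=(batch, latent_dim))
    fake_samples = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_samples, np.ones((batch, 1)))
    discriminator.train_on_batch(fake_samples, np.zeros((batch, 1)))

    # 2) Train the generator (through the combined model) so that the
    #    discriminator labels its output as "real".
    noise = np.random.normal(size=(batch, latent_dim))
    gan.train_on_batch(noise, np.ones((batch, 1)))
```

In practice the two networks are trained in alternation like this so that neither gets too far ahead of the other; the FakeApp and deepfake resources above build on the same adversarial idea at much larger scale.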




Noteworthy Conferences and Workshops
Alexander von Humboldt Institute for Internet and Society (HIIG): The Turn to Artificial Intelligence in Governing Communication Online (2018) - organized by HIIG and Access Now, the workshop examined how artificial intelligence research has increasingly found applications in content moderation and communication governance on digital platforms, especially in the German context. Many representatives of social media and tech companies were present; workshop outcomes will be shared with MIGS later this year.
Artificial Intelligence and Inclusion (2017) - organized by the Institute for Technology & Society of Rio, the Berkman Klein Center, and the Global Network of Internet and Society Research Centers. The conference highlighted the issues at the intersection of AI development and the application divide between the Global North and the Global South.
Bracing for Impact: The Artificial Intelligence Challenge (2018) - Canada has positioned itself as a world leader and destination of choice for companies looking to invest in artificial intelligence and innovation. It is therefore important not only to fund AI innovation, but also to move quickly to ensure that robust and effective governance structures are in place.
Santa Clara University School of Law: Content Moderation & Removal at Scale (2018) - this conference explored how Internet companies operationalize the moderation and removal of third-party/user-generated content (UGC). UGC services routinely say that moderating and removing content is hard and expensive; the conference explained the operational challenges and how companies are trying to solve them. Senior managers and researchers from US tech companies presented and shared their daily practice of content takedown; videos and slides are available online.
The Transatlantic Dialogue Initiative: Big Data & Cybersecurity and Artificial Intelligence (2018) - the recent and dramatic developments in the fields of Big Data, cybersecurity, and artificial intelligence (AI) are already fundamentally affecting societies, industries, and individuals. The German Federal Ministry for Economic Affairs and Energy, together with the Canadian German Chamber of Industry & Commerce, launched this initiative to strengthen cooperation between Canada and Germany in the fields of Big Data, cybersecurity, and AI.

AI Discussion Forums
Arxiv Sanity Preserver - selection and discussion board of ML/AI arXiv articles, recommended by Microsoft Maluuba researchers as a way to stay up to date on what is possible in research.
Machine Learning Reddit - mix of popular and research discussion about all things ML/AI.
Machine Learning Security - a widely read blog by Ian Goodfellow and Nicolas Papernot about security and privacy in machine learning.
NIPS 2017 - Advances in Neural Information Processing Systems - selection of papers from the proceedings of the Neural Information Processing Systems 2017 conference.



Shortscience.org - platform for post-publication discussion aiming to improve the accessibility and reproducibility of ML/AI research ideas.
The Building Blocks of Interpretability - with the growing success of neural networks, there is a corresponding need to explain their decisions, including building confidence about how they will behave in the real world, detecting model bias, and satisfying scientific curiosity.

AI Landscape Mapping: Workshop in cooperation with Tech Against Terrorism

Workshop panelists

Audrey Alexander, Research Fellow, Program on Extremism
● Session: Technology, Terrorism, and Exploitation.

Marc-André Argentino, Policy Analyst, Global Affairs Canada
● Chair for session: Technology, Terrorism, and Exploitation.

Kunal Batra, Head of Developer Relations, Clarifai
● Session: Algorithms and Application - Predict and Identify.

Kendra Clarke, VP Data Science, Sparks & Honey
● Session: Algorithms and Application - Predict and Identify.

Zach Deveraux, Director of Public Sector Solutions, Nexalogy
● Session: OSINT - Big Data and Application.

Sheldon Fernandez, CEO, DarwinAI
● Session: Algorithms and Application - Predict and Identify.

Richard Frank, Director, International Cyber Crime Unit, Simon Fraser University
● Session: Algorithms and Application - Predict and Identify.

Raphael Gluck, founder of JihadoScope and contributor to Bellingcat
● Session: OSINT - Big Data and Application.

Adam Hadley, Project Director, Tech Against Terrorism
● Session: Tech Against Terrorism and the Data Science Network.

Alex Harris, Project Manager, Tech Against Terrorism
● Session: Tech Against Terrorism and the Data Science Network.

Tegan Maharaj, PhD Candidate, Montreal Institute for Learning Algorithms
● Session: Artificial Intelligence and Ethics - the Requirement for Transparency.

Stephanie McLellan, Research Associate, Centre for International Governance Innovation
● Chair for session: Algorithms and Application - Predict and Identify.

Chris Meserole, Fellow, Center for Middle East Policy, Brookings Institution
● Session: Technology, Terrorism, and Exploitation.

Ketra Schmitt, Associate Professor, Concordia University
● Chair for session: OSINT - Big Data and Application.



AI Landscape Mapping: Montreal AI Community In preparation for the report and workshop, we mapped out the tech startup and ML/AI research landscape, primarily in Montreal, and reached out to all of the organisations listed below. The organisations marked in italics are those we identified as especially relevant to the topic of this report and the project workshop: they are either working on interesting technology or offer a service that might be misused for terrorist purposes.

Name | Type | Website | City
Acquisio | AI / Data Analytics | https://www.acquisio.com/contact-us | Montreal
Advanced Symbolics | AI / Data Analytics | http://www.advancedsymbolics.com/ | Ottawa
Aerial | AI / Data Analytics | http://www.aerial.ai/ | Montreal
AI For Good | Foundation | https://www.ai4good.org | Local chapters
Automat | AI / Data Analytics | http://www.automat.ai | Montreal
Blockstream | AI research lab | https://blockstream.com/ | Montreal
Borealis AI | AI research lab | http://www.borealisai.com/ | Montreal
Botler AI | AI / Data Analytics | https://botler.ai/ | Montreal
C2RO | AI / Data Analytics | http://c2ro.com/ | Montreal
Canadian Centre for Child Protection | Government | https://protectchildren.ca/app/en/ | Winnipeg
Canvass Analytics | AI / Data Analytics | https://www.canvass.io/ | Toronto
Cogilex | AI / Data Analytics | http://www.cogilex.com/ | Montreal
CRIM | CS research lab | http://www.crim.ca/en/ | Montreal
Crowdflower | AI research lab | www.crowdflower.com | San Francisco
CSI Flex | AI / Data Analytics | http://www.csiflex.com/ | Montreal
DarwinAI | AI / Data Analytics | http://www.darwinai.ca/ | Waterloo
Data & Society | AI research lab | https://datasociety.net/ | New York
Data Performers | AI / Data Analytics | https://www.dataperformers.com/ | Montreal
Delve Labs | AI / Data Analytics | https://www.delve-labs.com/ | Montreal
desmahealth | AI / Data Analytics | https://www.desmahealth.com/ | Montreal
District 3 Concordia | AI research lab | http://d3center.ca/ | Montreal
Drupal Project | Open Source | http://walkah.net/about.html | -
Electric Imp | Encryption | www.electricimp.com | -
ElementAI | AI / Data Analytics | https://www.elementai.com/ | Montreal
Envision.ai | AI / Data Analytics | https://www.envision.ai/ | Montreal
EruditeAI | AI / Data Analytics | http://erudite.ai/ | Montreal
Exia | Data Analysis | https://exia.ca/en/ | Montreal
Facebook AI Research Canada | AI research lab | https://research.fb.com/category/facebook-ai-research-fair/ | Montreal
Faim Data | Fintech | http://www.faimdata.com/ | Montreal
flinks ai | Fintech | https://flinks.io | Montreal
fluent.ai | AI / Data Analytics | http://www.fluent.ai/ | Montreal
Fuzzy AI | AI / Data Analytics | https://fuzzy.ai/ | Montreal
GERAD | AI research lab | https://www.gerad.ca/en | Montreal
Guavus | AI / Data Analytics | http://guavus.com/ | Montreal
Harvard AI Initiative | University initiative | http://ai-initiative.org/ | Cambridge, MA
Hopper | AI / Data Analytics | http://www.hopper.com | Montreal
Imagia | AI / Data Analytics | https://www.imagia.com/ | Montreal
Immunio | AI / Data Analytics | https://www.immun.io/ | Montreal
Insight Engines | AI / Data Analytics | https://insightengines.com/ | San Francisco
Institute for Data Valorisation (IVADO) | AI research lab | https://ivado.ca/en | Montreal
Integrate AI | AI / Data Analytics | https://www.integrate.ai/#welcome-page | Toronto
Intellogo | AI / Data Analytics | http://www.intellogo.com/ | Montreal
Invenia | AI research lab | https://www.invenia.ca | Winnipeg
JDA | AI / Data Analytics | https://jda.com/ | Montreal
Kaloom Inc. | AI / Data Analytics | http://www.kaloom.com/ | Montreal
Keatext | AI / Data Analytics | https://www.keatext.ai/ | -
Klipfolio | Fintech | https://www.klipfolio.com | -
Kronos | AI / Data Analytics | https://www.kronos.ca/ | Montreal
Laboratory for Imagery, Vision and AI | AI research lab | http://en.etsmtl.ca/Unites-de-recherche/LIVIA/Accueil?lang=en-CA | Montreal
Landr | AI / Data Analytics | https://www.landr.com/en | Montreal
Laurus Technologies | AI / Data Analytics | https://www.larus.com/ | Ottawa
Lexalytics | AI / Data Analytics | https://www.lexalytics.com/ | Montreal
Local Logic | AI / Data Analytics | https://www.locallogic.co/en | Montreal
Logimethods | Data Analytics | http://logimethods.com/ | Montreal
Lyrebird | AI / Data Analytics | https://lyrebird.ai/ | Montreal
Maluuba (Microsoft) | AI research lab | https://www.microsoft.com/en-us/research/lab/microsoft-research-montreal/ | Montreal
Milieu | Sentiment Analysis | https://www.milieu.io/ | Montreal
MIMS | AI / Data Analytics | http://www.mims.ai/ | Montreal
Mindbridge | AI / Data Analytics | https://www.mindbridge.ai | Montreal
mnubo | AI / Data Analytics | https://mnubo.com/ | Montreal
Montreal Institute for Learning Algorithms | AI research lab | https://mila.quebec/en/ | Montreal
Montreal.AI | AI community | http://www.montreal.ai/ | Montreal
MuBrain | AI / Data Analytics | http://mubrain.com/ | Montreal
Nash | AI consultancy | http://nash.agency/ | Montreal
nectar | AI / Data Analytics | https://www.nectar.buzz/ | Montreal
Nexalogy | AI / Data Analytics | https://nexalogy.com/ | Montreal
nGuvu | AI / Data Analytics | https://www.nguvu.com/ | Montreal
Notman House | Startup incubator | http://notman.org/ | Montreal
Nuance Communications Canada Inc. | AI / Data Analytics | https://www.nuance.com/ | Montreal
Partnership on AI | AI research and community | https://www.partnershiponai.org/ | Worldwide
Perfiqt | Fintech | http://main.perfiqt.com/ | Montreal
Pew Center | Public opinion research | http://www.pewresearch.org/ | -
Plotly | AI / Data Analytics | https://plot.ly | Montreal
Propulse Analytics | AI / Data Analytics | http://www.propulseanalytics.com/en/ | Montreal
Pythian | AI / Data Analytics | https://pythian.com | Montreal
Reasoning and Learning Lab | AI research lab | http://rl.cs.mcgill.ca/ | Montreal
Research at Google | AI research lab | https://research.google.com/teams/brain/about.html | Montreal
Roof AI | AI / Data Analytics | https://roof.ai/ | Montreal
Seamless Planet | AI / Data Analytics | https://www.seamlessplanet.com/ | Montreal
Solink | AI / Data Analytics | http://solinkcorp.com | Ottawa
Sooth.ca | AI / Data Analytics | http://www.getsoothe.ca/ | Montreal
SportlogiQ | AI / Data Analytics | http://sportlogiq.com | Montreal
Stradigil Labs | AI / Data Analytics | https://www.stradigilabs.com/ | Montreal
Stratuscent | AI / Data Analytics | http://stratuscent.com/ | Montreal
StreamScan | AI / Data Analytics | www.streamscan.ai | -
Sweet IQ | AI / Data Analytics | https://sweetiq.com/ | Montreal
The MPower Project | Counter Radicalisation | https://www.mpowerproject.org/contact | -
Thirdshelf | AI / Data Analytics | https://www.thirdshelf.com/ | Montreal
UpTurn | AI consultancy | https://www.teamupturn.org/ | Washington, DC
Ventana | AI / Data Analytics | http://www.getventana.co | Montreal
Wrnch AI | AI / Data Analytics | https://wrnch.com/ | Montreal
Zighra | AI / Data Analytics | https://zighra.com/ | Ottawa

