The Use of Artificial Intelligence under Data Protection Law

Vanessa Kodilinye*

The world is currently experiencing a digital revolution,[1] driven and supported by technological innovation that enables the collection, processing and formation of new data sets on an unprecedented scale and at incredible speed, across all scientific, business, cultural and societal fields. No country wishes to be left behind[2] in the new digital, borderless world order, or to miss the benefits of the technological trends shaping an open-data-driven world in the 21st century, which promise increased efficiency and competitiveness for businesses and societies.[3] Technological players[4] in high-velocity, data-driven ecosystems are no longer satisfied simply with accumulating[5] and statistically analysing 'Big Data'[6] tunnelled through clouds. Such unstructured data, whether created by users or produced as by-products of computing,[7] now have an aggregated and insightful value in which large business organizations[8] and governments have invested,[9] and

*
LLM (UWI), LLM in IT and Telecoms (Strathclyde, Scotland), Dr rer publ (Leuphana, Germany), Attorney-at-Law (Barbados), Solicitor (England and Wales), CAMS (Association of Certified Anti-Money Laundering Specialists).

[1] The Fourth Industrial Revolution.

[2] Center for Data Innovation, 'How Governments are Preparing for Artificial Intelligence', available at https://www.datainnovation.org/2017/08/how-governments-are-preparing-for-artificial-intelligence. Recent governmental efforts identified in the article include those of Japan (2015), with a 'Robotics Policy Office' established in 2017; Canada (2017), whose Pan-Canadian Artificial Intelligence Strategy provides a CAD $125 million programme to support research and establish AI institutes; China (2017), with its 'Next Generation Artificial Intelligence Development Plan'; and the UK (2017), which, as part of its digital strategy, will spend £17.3 million on promoting the responsible development of AI. As of 2017, the USA is focusing on the role of policy makers in an AI economy in order to equip the workforce with suitable skills.

[3] On the other hand, uncontrolled development of AI could lead to economic, social and political challenges and the displacement of labour, and could reduce fundamental freedoms, leading to oppression.

[4] The services provided may be more closely aligned to essential utilities in the modern world.

[5] With no predefined purpose for collecting or processing the data (often personal data).

[6] There is no settled definition of 'Big Data'.
See the classification of 'Big Data' developed by the UNECE Task Team on Big Data, contained in an IMF Staff Discussion Note of 13 September 2017 entitled 'Big Data: Potential, Challenges and Statistical Implications', available at https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2017/09/13/Big-DataPotential-Challenges-and-Statistical-Implications-45106; Marr, B, 'Big Data: 20 Mind-Boggling Facts Everyone Must Read', Forbes, 30 July 2015, estimates that big data will generate 1.7 megabytes every second for each individual by the year 2020. For a more comprehensive analysis of 'Big Data', see https://privacyinternational.org.

[7] Created as a secondary use of data which has been mined.

[8] Such as Microsoft, Google and Facebook, which all use harvested data, novel data and data shared by competitors as training sets for their algorithms: https://www.partnershiponai.org.

[9] 'Open data', which may include the release of personal information by governments to be freely used by private enterprise, is now considered an 'economic asset'. For example, under private/governmental partnerships such as that between Google DeepMind and the UK Royal Free NHS Trust, medical data of identifiable patients were shared without their consent in connection with the development of the Streams instant alert application. In July 2017 the Information Commissioner ruled such sharing to be illegal. However, it is possible that under Art 25(2) of the General Data Protection Regulation, Regulation 2016/679, effective 25 May 2018, this type of scientific co-operation, using advanced pseudonymisation techniques, may produce 'pseudonymous data', for which there are fewer restrictions on processing, as such techniques may be seen as technical and organisational methods of implementing privacy by design. Another example is China's proposed 'Social Credit System', whose pilot project began in 2015 and to date operates in 40 cities in China.
It is essentially a collaboration between tech companies and government to build a personal digital profile of each citizen through the merger of private and public databases, so as to allow business to be conducted only after scores have been verified through the social credit system. Such a system is also designed to account for political