Enhancing Human Ingenuity
1st Annual Adarga Symposium on AI
The Royal Institution, 5 September 2019
Hosted by Adarga
About Adarga
Adarga provides organisations with powerful AI analytics technology that helps you analyse all of your disparate data, including unstructured information, to discover the deep insights that drive faster, better decisions.

We’ve been developing AI analytics software to transform data-intensive, human knowledge processes since 2016. Our two products, adarga_engine™ and adarga_bench™, allow our customers to harness the power of natural language processing and machine learning to deal effectively and efficiently with the complexity of their data.

The result? Our customers can stay ahead of fast-evolving situations, make better decisions and anticipate emerging threats.

This is actionable intelligence. This is Adarga.

Find out more at www.adarga.ai

Table of contents
Welcome to Adarga’s AI Symposium
Our Core Partners
Agenda
Getting started with AI
Success with AI is a Partnership
Artificial Intelligence – navigating an evolving legal landscape
Enhancing capability through AI – the defence context
The Problem Remains the Same: the role and texture of leadership in an AI world
Artificial Intelligence: from weakness, strength
Speakers
Our panellists
Welcome to Adarga’s AI Symposium
I am proud to be hosting our first event on this highly relevant subject, and to be doing so in the Royal Institution: the home of British science, founded in 1799 with the aim of introducing new technologies and teaching science to the general public.

We are at an inflection point. The key trends driving change in our world are already familiar to us: environmental stress, evolving demographics, the relentless advancement of technology, the increased importance of information, and the transition in the economic and political power of nations. Much less familiar, however, is the unprecedented acceleration in the speed of change, driving more complex interactions between these trends in our ever more highly networked world.

To truly understand what is driving future change is a challenge for us all. We all have an enduring responsibility to learn, unlearn and relearn. This is especially important in relation to understanding technological change. The rate and impact of this type of change is in part cultural, constrained by our own capacity to understand, absorb and demand technological progress. We must embrace the potential of human ingenuity, strive to think differently, encourage curiosity and maintain the agility to adapt continuously: creating, designing and inventing new systems, new ways of thinking and new forms of leadership that enable new ideas to be embraced and new technologies to be exploited. Understanding what is driving this paradigm shift will allow us to change the way we think, make more effective decisions and take swifter action.

A consequence of our technological progress to date is digitisation: computing power, the volume and variety of data, and connectivity continue to grow exponentially, and are in turn driving the development of artificial intelligence (AI). AI is itself one of the central technologies fuelling the rapid pace of technological advancement and, as it develops, it has the ability to solve problems of increasing complexity, leading to improvements across all aspects of human endeavour. AI technologies are poised to transform every industry, just as electricity did 100 years ago. AI will allow us to identify unforeseen threats and seize fleeting opportunities in this uncertain, complex and volatile world. The increasing adoption and application of AI will itself fuel even greater velocity in the development and sophistication of capability. The pace of change we are witnessing is only going to get faster.

AI is, of course, a buzzword in every boardroom. Organisations, from governments to businesses, imagine AI being employed for a range of purposes. However, very few of the leaders of these organisations have an accurate or comprehensive understanding of how these algorithms and systems are built, how they operate, the range of uses to which they might be put, or the challenges that exist in applying them in the real world, in complex, everyday tasks.

It is the human that is at the heart of this subject, and will rightly remain so for some time to come. It is humans who must be educated to become the data scientists who write and train the algorithmic models. Human software engineers are needed to integrate these models into usable products and systems, which in turn will be employed by human users endeavouring to reduce complexity, improve efficiency and disrupt the way in which they perform their own jobs. The other group of humans central to the effective design, adoption and application of AI are the leaders of these same organisations. Their responsibility is to endeavour to understand the underlying technologies and their capabilities, as well as the opportunities, challenges and constraints presented by AI.

This is the central theme of our symposium today: enhancing human ingenuity. Our speakers will focus on the important and inseparable dependencies that exist between the human and the machine in the effective design and practical implementation of AI systems for real-world users.

The future has already started. Are you ready?

Rob Bassett Cross
Founder & CEO, Adarga
Our Core Partners

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 165 on-demand services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), and more. Millions of customers – including the fastest-growing start-ups, the largest enterprises and leading government agencies – trust AWS to power their infrastructure, become more agile, and lower costs. Get started at aws.amazon.com

In a world where technology is rapidly transforming businesses, clients at all stages of growth trust Linklaters’ lawyers to help them maximise opportunities and navigate threats in this increasingly regulated landscape. Our global legal experts are passionate about technology and bring clients a depth of experience across a broad spectrum of emerging technology trends, including AI, blockchain, fintech and the internet of things, by drawing on our strengths across multiple disciplines, sectors and jurisdictions. This ensures that our clients receive commercially innovative and technically strong advice.

Stifel is a full-service wealth management and investment banking firm established in 1890 and headquartered in St. Louis, Missouri. The company provides securities brokerage, investment banking, trading, investment advisory and related financial services to individual investors, institutions, corporations and municipalities. The company operates more than 400 offices across the US and in Europe through Stifel Nicolaus Europe Limited.

Agenda
12.00 – 12.45  Arrival, registration and light lunch
12.55 – 13.00  Host welcome: Rob Bassett Cross, CEO, Adarga
13.00 – 13.55  Opening keynote: Mark Stevenson, futurist and co-founder, We Do Things Differently
14.00 – 15.00  Keynote: Ranju Das, GM, Amazon Rekognition, Amazon Web Services
15.00 – 15.30  Tea and coffee break
15.30 – 16.30  Panel discussion: AI in the Enterprise – hype versus reality. Chaired by Professor John Cunningham, Columbia University. Participants: Sue Daley, techUK; Jem Davies, Arm; Jas Mundae, Linklaters; Amy Shi-Nash, HSBC
16.30 – 17.30  Closing keynote: General Stanley A. McChrystal
17.30 – 20.30  Drinks and networking
Getting started with AI
By Neil Mackin, Technical Business Development Manager at Amazon Web Services
Artificial Intelligence presents new opportunities to realize foundational gains such as efficiency and cost savings, as well as higher-value gains such as product innovation and spurring discovery and research. But how do organizations get started?
For many, AI adoption begins by identifying workflows and business processes that suffer from low efficiency or where human mistakes abound. They consider all their data sources and their existing data strategy. They determine the best cloud-based infrastructure and tools to scale AI. And last, they ensure that the right skills are on board for machine learning projects to be successful.

Understand business objectives
Understanding the business benefits of AI adoption – in particular, the specific benefits relevant to your organization – is critical for enterprise success with AI. Once objectives are identified, it’s important for business and technical leaders to understand and champion their role.

• Select a targeted use case: When choosing a pilot, consider use cases where AI can have the most impact, and those from which you can learn to scale enterprise deployment. Focus on how you can deliver a better experience for your customers and identify the business and operational outcomes desired. Then establish one or two high-value proofs of concept (PoCs) that can really make a difference to your organization and quickly demonstrate results. For a PoC to succeed, it’s critical to have the right resources in place, including infrastructure, data, and capabilities.

• Understand the impact: From the outset, consider the operational effect of new AI solutions. AI can have transformative impacts, so it’s important to plan in advance for what you want to achieve and how you will measure success. When considering ROI, ensure you have value checkpoints early in the project lifecycle. This lets you adapt the approach before you’ve scaled investment.

• Iterate and learn: Once you’ve proven the potential of machine learning, the next step is to move from pilot to production, which may include integrating the machine learning capability into a larger IT system. This move typically takes longer than the pilot process and can vary depending on the complexity of the overall system and how large-scale the production deployment will be.
Advance your data strategy
Data is gold for leaders who are looking to disrupt their industries with machine learning, but many organizations don’t have machine learning-ready data. Recognizing the importance of data and developing a plan to collect and use that data is critical for successful machine learning adoption, even at the PoC stage. All sources of data need to be uncovered, from structured data like billing to unstructured data like images and forms. Then that data needs to be evaluated for quality and usefulness. Finally, data needs to be cleaned and accurately labelled for machine learning models to transform it into valuable insights.
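The uncover–evaluate–clean–label sequence described above can be sketched in a few lines of Python. This is a toy illustration only: the record fields and the keyword-based labelling rule are invented stand-ins for real data sources and human annotation.

```python
# A toy illustration of the data-preparation pipeline described above:
# uncover raw records, drop unusable ones, then label the survivors.
# All field names and the labelling rule are illustrative assumptions.

raw_records = [
    {"id": 1, "invoice_text": "  Overdue payment notice ", "amount": "120.50"},
    {"id": 2, "invoice_text": None, "amount": "80.00"},        # unusable: no text
    {"id": 3, "invoice_text": "Routine monthly statement", "amount": "oops"},  # bad amount
    {"id": 4, "invoice_text": "FINAL demand for payment", "amount": "300.00"},
]

def clean(records):
    """Keep only records with usable text and a parseable amount."""
    cleaned = []
    for r in records:
        if not r["invoice_text"]:
            continue  # evaluated for usefulness: no text, nothing to learn from
        try:
            amount = float(r["amount"])
        except (TypeError, ValueError):
            continue  # evaluated for quality: amount is not a number
        cleaned.append({"id": r["id"],
                        "text": r["invoice_text"].strip().lower(),
                        "amount": amount})
    return cleaned

def label(record):
    """Toy keyword rule standing in for careful human annotation."""
    urgent = "demand" in record["text"] or "overdue" in record["text"]
    return "urgent" if urgent else "routine"

# Attach a label to each cleaned record: the model-ready dataset.
dataset = [dict(r, label=label(r)) for r in clean(raw_records)]
```

Real pipelines replace each step with far more work (deduplication, schema validation, annotation tooling), but the shape – filter, normalise, label – is the same.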
Leverage the cloud
Successful machine learning initiatives need more than just the right tools. A comprehensive platform brings together data store, security, and analytics services, as well as compute resources for training and deployment. Turning to the cloud for these services brings a wide range of benefits, including speed, scalability, flexibility, resilience, security, and reduced cost. The cloud also offers the widest range of high-performance CPU and GPU processor types, which are essential for large-scale training and for deployment in a production environment. Using cloud-based data lakes and storage also ensures that you can easily access and manage data so that machine learning initiatives are seamless, repeatable, and scalable.
Enable your organization
Along with the right use cases, having the right skills to build machine learning applications and systems, as well as the right process and operating model, is essential to getting pilots off the ground and scaling enterprise AI.
• Assemble the team: Consider appointing a Chief Data Officer (CDO) to lead the charge on data strategy and governance, bring together interdisciplinary teams, and streamline data processes. Assemble the team of machine learning developers and data scientists essential for a successful PoC, and train teams for future deployments. It’s also important to involve subject matter experts who understand your business vernacular, especially in specialised industry domains, to help you get to ground truth with your data.

• Create the process: AI may not bring the expected value if the results are not integrated with other areas of the organization. Operationalizing machine learning models is hard – as many as half of PoCs don’t get deployed into production. Therefore, executive sponsorship to change business processes, and alignment with application development, is key. Successful teams create processes to align AI experts, data scientists and developers with key business stakeholders. A well-defined process also helps ensure the final output is well integrated into business processes.

• Build the culture: To help realize AI’s potential, there needs to be cultural acceptance that it is an important part of business and operations. Some initiatives may require information from across these domains, so it’s important to understand all the stakeholders who need to be involved, and to bring together those who can champion adoption.
Getting started with AWS AI and ML
AWS has the broadest and deepest set of AI and machine learning services for your business. On behalf of our customers, we are focused on solving some of the toughest challenges that hold back AI adoption. Choose from pre-trained AI services; use Amazon SageMaker to build and scale machine learning; or build custom models with support for all the popular open-source frameworks. AWS ML capabilities are built on a comprehensive cloud platform, optimized for machine learning with high-performance compute, security and analytics. AWS offers a full portfolio of business development and technical training resources, giving your organization the skills it needs to push AI forward with confidence.
Find out more at www.aws.amazon.com
PROUDLY SUPPORTING ADARGA’S FIRST ANNUAL AI SYMPOSIUM
Celicourt is a leading communications consultancy, focused on effective engagement with key stakeholders to ensure our clients achieve their strategic goals.
CELICOURT.UK
Success with AI is a Partnership
By Jason Atlas, newly appointed Chief Technology Officer, Adarga
There are two AI narratives that have entrenched themselves in contemporary technology culture. The first is that AI will solve everything. It is almost magic. Do you have a problem? AI will solve it for you! Soon, humans won’t be necessary. AI will do it better! If you need to analyse something, just plug in an algorithm and voilà! The answers are miraculously displayed. Heck, once you get this magic set up and running, handed off from a mystical team of software engineers and data scientists, all the problems in your business will simply disappear: you will identify all the bad guys; you will have seamless logistics; you will reduce your extraneous spend by 99%; you will cure cancer, find aliens and solve ageing. This is the theory of the “Master Algorithm”, where AI can solve all problems envisioned by humans. Not only is anything possible but EVERYTHING is possible!

The other narrative is its absolute counter: that the promises of AI are vastly overblown. The view is prevalent across the industry that the majority of applications and services that promised businesses miracles, and on which huge amounts of time and money have been spent, have delivered very limited returns. Recent evidence has already demonstrated that targeted advertising is no more effective than contextual advertising. 2018 saw the publication of the first list of “AI failures”, which highlighted such headline blunders as the total failure of AI to predict the outcome of the 2018 World Cup, the strong gender bias of AI-powered recruiting software, and the fatalities caused by driverless cars.
Me? I think of AI like the computer Deep Thought in The Hitchhiker’s Guide to the Galaxy. Billions of dollars. The collective intelligence of humanity contributing to this massively intelligent system. And the answer it gives? Well, it is almost indecipherable! People found that it did not just give them a blueprint to solve all our collective problems; it gave them a path to knowledge.

This is analogous to AI in the modern world of computing. We spend thousands and thousands of hours researching, hypothesising, experimenting, testing and re-calibrating, until we feel we have achieved the level of speed and accuracy needed to address the needs of a government or a business. These systems are meant to provide intelligence. They are NOT the direct answer to all your needs with no work. We have not achieved magic yet. There is no Oracle of Delphi. What we have is an intermediate step on the journey towards eventual real emerging consciousness and artificial intelligence.

We are at the stage where AI offers broad-stroke intelligence, helps solve problems in complex spaces, and turns massive amounts of noise into a signal that the human can leverage. The key here is complexity. On simple things AI can, indeed, do many things with little human intervention. When the level of complexity increases, human beings are still key. AI can make what is seen, done and actioned by that human vastly more impactful. Together we can allow each of you to become more accurate, work faster, and draw on many more sources and types of information than you could otherwise even hope to. Together we can make sense of all of this overwhelming data and turn it into opportunities that were previously impossible.

Buckminster Fuller described the “Knowledge Doubling Curve”: he noticed that until 1900 human knowledge doubled approximately every century, and that by the end of World War II knowledge was doubling every 25 years. Today things are not as simple, as different types of knowledge have different rates of growth. For example, nanotechnology knowledge is doubling every two years and clinical knowledge every 18 months. But on average, human knowledge is doubling every 13 months. That is insane!
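Those doubling times translate into annual growth factors with one line of arithmetic: a quantity that doubles every n months multiplies by 2^(12/n) in a year. The short sketch below simply restates the rates quoted above; the function name is ours, not Fuller’s.

```python
def annual_growth_factor(doubling_months):
    """Factor by which a body of knowledge multiplies in one year,
    given its doubling time in months: 2 ** (12 / doubling_months)."""
    return 2 ** (12 / doubling_months)

# The rates cited above, restated as yearly multipliers:
print(round(annual_growth_factor(24), 2))  # nanotechnology: ~1.41x per year
print(round(annual_growth_factor(18), 2))  # clinical knowledge: ~1.59x per year
print(round(annual_growth_factor(13), 2))  # knowledge overall: ~1.9x, near-doubling
```

A 13-month doubling time therefore means knowledge nearly doubles within any given calendar year, which is the point of the comparison.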
But this is what AI excels at. A great AI system, leveraging intelligence design engineering, is capable of taking this vast amount of data – or noise – and producing from it a signal that means something to a person. The fact is, with this much information it is simply not possible for any person, or even collection of people, to process, understand and act on it all. Modern computing systems leveraging AI are the tool humans have chosen to address this reality. AI and data systems, in partnership with a trained analyst, can enable that person to do things that would not otherwise be remotely possible. We can find patterns that would be impossible to spot otherwise. We can find anomalies; we can discover threats, risks, opportunities and areas for future research. Together we are able to make faster, more informed decisions. We can eliminate ambiguity. We can discover behaviours otherwise thought impossible; we can get into the minds of the people we are engaging with as customers or targets.

And we are just beginning. It will require the best human analysts and subject matter experts, working in conjunction with innovative engineering and data science companies, to succeed. Working together, we are unlocking capabilities that have never been possible, at scales never imagined. There are no miracles, but there is tremendous promise. Join us on this journey.
Artificial Intelligence – navigating an evolving legal landscape
Artificial intelligence is widely seen as the key to competitive advantage by businesses and governments across the globe. Recent AI developments in areas such as language translation, facial recognition and driverless cars have led to renewed interest from investors, regulators and governments.
The use of artificial intelligence is growing rapidly, enabled by computing power, private investment and user demand. It is important to innovation in companies in the financial services, health, automotive and other sectors. We are beginning to see steps to regulate AI. This article provides an introduction to the regulation of AI and to key legal issues that arise for businesses deploying the technology that is available today.
Regulation of AI
Regulation can struggle to keep pace with the accelerating change brought by new technological development. Many governments see great potential for AI to drive economic development and to solve societal challenges. They want to provide the legal framework needed to encourage innovation, attract investment and enable growth. At the same time, they recognise there is a need to protect their citizens and to address the ethical, legal, social and economic issues related to AI. Currently there is very limited AI-specific legislation. Certain legislation, such as the EU’s General Data Protection Regulation (GDPR), was developed with AI in mind, and various countries have legislated to address sector-specific issues such as the development of autonomous vehicles.

Today’s use of AI is largely governed by the application of existing laws and, to an extent, self-regulation by corporates. However, this is an evolving area and we expect to see changes following the work of the numerous initiatives across the globe, at international, inter-governmental, regional and national levels, addressing the issues presented by AI.
International initiatives
It remains to be seen whether international consensus can be reached on the rules for AI, given that ethics vary by country and culture. There are also differences in public acceptance and use of technologies across different countries and cultures. This includes, for example, different attitudes to the balance of privacy versus convenience. Recent progress has been made in the European Union and by the global policy forum, the OECD. Countries in the European Union have agreed to cooperate to resolve the “social, economic, ethical and legal questions” of AI. The European Commission formed an expert group to advise it on AI. In April 2019, the group released its Ethics Guidelines for Trustworthy AI and the Commission will explore policies based on those recommendations. In May 2019, the OECD published the first intergovernmental standard for AI policies, which was endorsed by 42 countries¹.
National strategies
Governments in more than 20 countries have published national strategies on AI. Many involve consulting experts and industry, proposing ethics- and principles-based guidelines, and identifying the changes needed to existing law and regulation to enable the use of AI. Prominent examples are the UK, China and the US.

The UK Government is seeking to “put the UK at the forefront of the artificial intelligence and data revolution”² and, among other things, has set up three organisations to support this: the Centre for Data Ethics and Innovation, the AI Council and the Office for AI. AI is also a strategic priority for the Information Commissioner’s Office.

The Chinese Government’s New Generation AI Development Plan, launched in 2017, declared its intention to make China the world’s “premier AI innovation centre” by 2030. China also wants to lead on global standards for AI. It has set up advisory groups and, in April 2018, a major ISO international standards meeting was held in Beijing.

The United States’ AI strategy has favoured innovation over regulation, with big technology corporations driving rapid technology development and self-regulation. In February 2019, President Trump launched the American AI Initiative, directed towards expanding the role of the United States as “the world leader” in AI.

¹ OECD Recommendation of the Council on AI
Key legal issues
The aim of an artificially intelligent system is to be intelligent – to analyse, decide and potentially act with a degree of independence from its maker. The algorithm at the heart of the AI system may be opaque and, unlike a human, there is no common-sense safety valve. Delegating decisions to a machine which a human does not control or even understand raises interesting issues. When deploying an AI system in a business, some of the key legal issues to consider are therefore:
• liability for the system;
• use of personal data;
• anti-competitive behaviour;
• sector-specific regulation.
These issues are outlined below with reference to the position in the UK.
² Regulation for the Fourth Industrial Revolution, White Paper, June 2019
Liability – contract, tort, product liability Where an AI system or service is provided to a third party under contract and it fails to perform or results in unintended consequences, there is a risk of claims for breach of contract. In addition, if an AI system causes physical harm, damage to tangible property, or economic loss, this could give rise to liability in tort. Strict liability could also arise under product liability laws if the AI were embedded in a product sold to a consumer.
Data – unfairness, bias and discrimination
The key advances in AI over the past few years have been driven by machine learning which, in turn, is fuelled by data. Data protection laws in the EU are mainly set out in the GDPR and supplemented in the UK by the Data Protection Act 2018.

One of the key principles under data protection law is that personal data must be processed fairly and lawfully. Under the accountability principle in the GDPR, there is a requirement to ensure the processing is fair and to be able to demonstrate this. The challenge is to square this with the use of an opaque algorithm. Similarly, if the algorithm is opaque there is a risk that it will make decisions that are either discriminatory or reflect bias in the underlying dataset. This is not just a potential breach of data protection law but might also breach the Equality Act 2010.

Data – automated decisions
Data protection law requires individuals to be told what information is held about them and how it is being used. This means that they would normally need to be told if AI was going to be used to process their personal data. The GDPR contains controls on the use of automated decision making. Where automated decision making takes place there is also a “right of explanation”, and affected individuals must be told:
• of the fact of automated decision making;
• about the significance of the automated decision making; and
• how the automated decision making operates.

The obligation is to provide “meaningful information about the logic involved”. If the algorithm is opaque, the logic used may not be understandable or easy to describe. Beyond the technical requirements of data protection law, there is a wider ethical question of whether it is appropriate to delegate the final decision about an individual to a machine.

Data – security and cyber threats
The security of the system will be essential. A breach could have serious consequences, including:
• Uncontrolled behaviour – a security breach could allow a hacker to take control of the system.
• Unauthorised disclosure – if a security breach results in personal data being compromised, that could be a breach of the GDPR.

Security obligations arise under a range of laws. In the UK these include the GDPR, the Network and Information Systems Regulations 2018 and product liability laws.
Anti-competitive behaviour
There is a risk that an algorithm might result in anti-competitive behaviour, whether inadvertent or not. Pricing algorithms are coming under greater scrutiny, and EU Competition Commissioner Vestager has said that “pricing algorithms need to be built in a way that doesn’t allow them to collude”.
Sector-specific regulation
Organisations must ensure that their approach to AI reflects the additional regulatory requirements placed on them. For example, the Markets in Financial Instruments Directive introduced specific rules for algorithmic trading and high-frequency trading to avoid the risks of rapid and significant market distortion.
Conclusions
While general artificial intelligence may be many years away, governments and regulators have work to do in the short term to address the issues presented by the narrow artificial intelligence developed and deployed today, and to anticipate how the technology may be used in future. We expect the law, regulation and the regulators to continue to adapt to address the novel issues presented by AI. It remains to be seen how this will develop at national, regional and international levels. We recommend organisations take a broad, forward-looking approach to anticipate the future impact of AI technology on their business.

For guidance in this area, you can access our AI guide at www.linklaters.com/AI. This is a practical toolkit that provides an overview of the legal issues that arise when rolling out AI within your business. It includes practical solutions for managing legal risks and issues.
Find out more at www.linklaters.com
© Linklaters LLP 2019. All rights reserved.
This publication is intended merely to highlight issues and not to be comprehensive, nor to provide legal advice. Should you have any questions on issues reported here or on other areas of law, please contact one of your regular contacts, or contact the editors.
Linklaters LLP is a limited liability partnership registered in England and Wales with registered number OC326345. It is a law firm authorised and regulated by the Solicitors Regulation Authority. The term partner in relation to Linklaters LLP is used to refer to a member of Linklaters LLP or an employee or consultant of Linklaters LLP or any of its affiliated firms or entities with equivalent standing and qualifications. A list of the names of the members of Linklaters LLP and of the non-members who are designated as partners and their professional qualifications is open to inspection at its registered office, One Silk Street, London EC2Y 8HQ, England or on www.linklaters.com and such persons are either solicitors, registered foreign lawyers or European lawyers. Please refer to www.linklaters.com/regulation for important information on our regulatory position.
Enhancing capability through AI – the defence context
Artificial Intelligence offers vast potential to the UK, in both the public and private sectors. The Defence sector is no different, with AI providing the opportunity to deliver significant benefits for the MOD, which is currently working to deliver its transformation agenda.
Indeed, some initiatives – most notably the Royal Navy’s Project Nelson – have already begun to adopt and implement AI solutions rapidly and effectively, delivering new capabilities to the Royal Navy. However, techUK argues that for wider (and faster) adoption to be realised across the MOD, innovators need to address several key challenges of specific relevance to the Defence sector:
• recognising AI as a capability, not just a money saver;
• overcoming complex processes and ingrained behaviours and culture;
• considering the digital ethics issues raised in the Defence context.
Leveraging AI for capabilities, not simply efficiencies
Too often, discussions around AI technology and its solutions focus on potential efficiencies, framing the conversation around financial or resourcing savings for the MOD. AI certainly does offer scope for these efficiencies, but arguably much more significant is how these technologies can be utilised to drive better decision making, giving personnel across the Defence enterprise quality data and information to interpret and utilise. This is true both for front-line roles and for back-office functions such as estates, HR and logistics. techUK would argue that AI solutions need to be recognised for the potential capabilities they offer, in much the same way that platforms or hardware are.
At a time when MOD has sought to develop its thinking on its future structure and priorities, now is the perfect opportunity for the MOD to address the barriers to digital transformation. To be successful, suitable investment needs to be made in
technologies like AI to drive long-term efficiencies. This should also be done in partnership with UK industry, which provides and underpins the technology used on the front line and across the MOD. Re-energising and renewing these partnerships is vital in an era in which Defence is no longer the pioneer of technological innovation and relies on technologies developed and exploited across other sectors.

The capabilities and technologies that the MOD needs to modernise already exist within the technology sector, but in many cases sit outside the Defence sector. The MOD needs to continue to develop its engagement with other industrial sectors, and to challenge the perception that Defence is too complex and difficult to enter. Some AI technologies with applicable use cases in the MOD are routinely exploited across the private sector, but the department as it stands does not have the commercial agility to acquire and support them.

Specifically, techUK has identified five key blockers it believes the MOD needs to mitigate in order to adopt transformative digital solutions more widely and effectively:

1. Whilst the various MOD innovation initiatives have helped to identify and engage non-traditional suppliers, they have focused on technology assessment and evaluation rather than business exploitation.
2. Current initiatives have not delivered an enduring route to market and growth for SMEs and non-traditional suppliers wishing to supply into the major Defence programmes in the UK.
3. The lack of a single delivery framework for mature technologies between the MOD, Primes and SMEs is inhibiting technology growth and innovation across Defence.
4. The complexity, or perceived complexity, and length of commercial processes make entering the Defence market unappealing to non-traditional suppliers of all sizes.
5. Pressures on incumbent Prime contractors to both reduce costs and accept lower profit margins, combined with the impact of single source contract regulations, have disincentivised Primes from investing in higher-risk technology developments with non-traditional suppliers.
Overcoming ingrained internal cultures and behaviours

Further to the challenge of engaging outwardly with industry and other sectors, techUK believes that the MOD's own complex internal processes, ingrained behaviours and resistance to embracing radical change have been a barrier to the adoption of new technologies in the department. This is not a new barrier, but it is a fundamental one: the benefits of embracing new technology will only be realised if senior decision makers in the department reflect on what has not worked in the past and proactively address the behaviours, policies and processes currently holding the department back. techUK would postulate that the MOD does not at present differentiate between incremental, evolutionary technology and disruptive technology, and is hooked into the former.
For the MOD to effectively leverage new technologies, it needs to develop its approach, with personnel at all levels buying into the potential of technologies like AI. Furthermore, it's vital that the MOD and industry work together to identify and understand what technologies are needed. The MOD and industry should, where possible, look to forge longer-term partnerships built on honest conversations about what the MOD needs, what the best available solutions are, and how technology will develop in the medium to long term.
Digital Ethics in the Defence Context

Alongside the increased adoption of advanced digital technologies, including AI, the debate around the ethical issues and questions AI raises has intensified. The need for the ethical design, development and deployment of digital technology is particularly acute in the Defence landscape, for obvious reasons. techUK has been at the heart of these debates, during which we've seen significant developments such as the foundation of the Ada Lovelace Institute and the creation by Government of the Centre for Data Ethics and Innovation.
The MOD needs to be at the forefront of these discussions, highlighting how AI can be used to improve human decision-making processes rather than the often-publicised narrative around the dangers of AI in relation to platforms.
For industry, the priority is to ensure ethical principles underpin the MOD's approach to the adoption of AI and other technologies, making sure that digital ethics is taken seriously by those exploring the application of any new technology in Defence. techUK has argued in its recent report 'Digital ethics in 2019' that this must be the year in which we move digital ethics out of the conference room and into real life. To make this a reality, we must clearly explain that digital ethics is not a point-in-time solution for a single problem. It is not just answering the questions being raised right now about the development and use of specific technologies. Rather, digital ethics represents a long-term change in approach that embeds ethical thinking into every aspect of the tech ecosystem, including the way every technological solution is designed, developed and used.
Secondly, projects and initiatives must move on from talking about digital ethics in the abstract and start applying ethical thinking to real-world situations and scenarios. Making the case for how digital ethics can make a difference, and why it is relevant for everyone, must be a key objective for the UK's digital ethics community in 2019 and beyond. For Defence, this means providing clarity on how AI will be implemented and leveraged across functions, including but not limited to the battlespace.

To conclude, AI technologies clearly present the MOD with vast opportunities to modernise, developing new capabilities as well as augmenting existing processes across all commands and functions. To realise this potential, the three key challenges posed here will need to be overcome. The purpose of doing so is to improve decision-making by giving people better tools and a better understanding of key information, allowing them to be better placed to act and respond. This will only happen in partnership with industry and academia, whose expertise and technologies will be vital if the MOD is to successfully enable AI adoption and implementation.
About techUK
techUK represents the companies and technologies that are defining today the world that we will live in tomorrow. More than 900 companies are members of techUK. Collectively they employ approximately 700,000 people, about half of all tech sector jobs in the UK. These companies range from leading FTSE 100 companies to innovative start-ups. The majority of our members are small and medium-sized businesses.
The Problem Remains the Same: the role and texture of leadership in an AI world

As an intelligence officer for over 25 years, McChrystal Group co-author David Gillian was witness to an explosion in the quantity and diversity of data and information. From the early years of 'steam-driven' tactical signals collection involving handwritten reports and manual encryption, to imagery that was often delayed by days between collection and analysis, the later years were literally another world.
Vast volumes of data from multiple sources, electronically correlated and fused, presented in digital displays with rich contextual information, were the norm.

As the amount of data grew, however, so did the call that there was simply too much information for too few analysts. Yet throughout this period of growing information there was one constant that helped parse vast quantity into focused, relevant quality – the role of the leader. Those commanders whose guidance consisted of nothing more than "give me all you've got" only added to the challenge of information overload. Those leaders who could articulate the outcome they were seeking to achieve, understand the gaps in their knowledge, and drive the intelligence system to answer those questions were the ones who created the essential analytical focus. As we move into an AI-enabled world, we contend that, despite its potential, those leaders who overlook or misunderstand their role in guiding and focusing AI capabilities will create rather than remove challenges for their organisation.

The term "AI" evokes a fascinating concoction of emotional responses from any given group of people – excitement, confusion, fear, reluctance, anticipation, uncertainty, scepticism, dread, expectancy, hope. While there are various theories as to the root cause of this commonly visceral response, perhaps the most compelling is that AI directly impacts our purpose on this planet. Humans have been the lone stewards of information for millennia. Our ability to capture, transmit, and use that information has been the key to our dominance as a species; there is something almost primeval that both intrigues and terrifies us about some "thing" encroaching on our territory. The Hierarchy of Intelligence depicts this encroachment – the expansion of AI's domain and the contraction of our own.
Figure 1: Hierarchy of Intelligence – Current State
In its infancy, AI made slow progress up the hierarchy, constrained by limited access to information and archaic processing speeds. Humans had always been relatively poor at making sense of large data sets, so welcomed the ability to categorise and sift data into something digestible. The advent of the Information Age catalysed AI's ascension into a realm previously held only by humans. Only in the last decade has the potential of AI begun to be recognised, eliciting our passionate response. In the near future, AI will again make the leap up the hierarchy. Information processing will become fully the domain of AI, and knowledge – the implicit or explicit information that is contextual and synthesised through experience – will become a space shared by man and machine. Not only will routine tasks be handed over to the machines, but much of the typical decision-making that consumes managers throughout their day-to-day operations will also be shouldered by AI. This leads to the reasonable question: "what will humans contribute in this brave new world?"
Figure 2: Hierarchy of Intelligence – Future State

The answer, very simply, is that humans will contribute wisdom. In the foreseeable future, it is wisdom – a future-oriented knowledge and judgement rooted in expertise and ethical understanding – that will always remain the domain of human intelligence. The humanity of leaders will become more important, not less, in this new world. They will be the "meaning makers," providing the ever-critical WHY needed to drive progress forward. And as we move toward this new reality, a new question emerges...

What must leaders do differently?

At McChrystal Group, we've identified four specific behaviours that are important for leaders to be successful in today's complex world and will be even more critical as the human/machine partnership becomes more pronounced. These leader behaviours are not comprehensive, but they will differentiate truly effective leaders in the Age of AI.

Self-awareness

"Self-awareness must combine relaxation with activity and dynamism." – Deepak Chopra

Though the virtues of Self-Awareness have been lauded for decades, their true criticality will become even more evident in the coming years. Self-Aware leaders not only recognise their strengths, weaknesses, values, and biases, but also regulate their actions to amplify the positive and mitigate the negative. These leaders have a deep insight into the unique value they provide in any given context, taking time to develop themselves and expand their repertoire of skills for the betterment of the team.

In an AI-driven world, Self-Awareness is essential for leaders to operate as "meaning makers." Our worldview is necessarily biased - we all see through a dimly lit glass that shades our judgement and distorts our moral compass. Great leaders recognise this inherent limitation and take steps to minimise its impact. They invite alternative viewpoints – whether from man or machine – to expand their aperture beyond a myopic view, exercising true wisdom in the midst of a dynamic context.
Applied curiosity
"I have no special talent. I am only passionately curious." – Albert Einstein
Curiosity often feels like it's going out of style. Perhaps it is the ease with which one can obtain information across the web, or perhaps it's our world's time pressures that necessitate finding the most direct route to an answer. Further still, perhaps curiosity feels like the domain of children, artists, and peculiar entrepreneurs. Whatever the reason, the spark of curiosity appears to be constantly under threat of being snuffed out. The most effective leaders are those who fan this spark into a flame that heats the engines of innovation.
Leaders who exhibit Applied Curiosity are in a constant state of scanning, sifting, synthesising, and sharing. They dedicate time and resources to scanning across industries and functions, building up a vast store of intellectual raw material to form the foundation of creative insights. These leaders then sift (or direct the sifting) through copious amounts of data to systematically separate the signal from the noise, and then synthesise, looking for points of convergence between ideas, concepts, and applications with little discernible connection. Good leaders stop there, but great leaders share these revelations with others. They actively push this information to those who can apply it, expanding the borders of shared consciousness.
"The world has changed. Organisations cannot solve 21st Century problems with 20th Century solutions." – Stan McChrystal
In the future, and ironically, in the same way as David’s ‘steam-driven’ past, this leadership behaviour of Applied Curiosity, combining future-oriented exploration with contextual application, will be the key to unlocking AI’s full potential in new ways that ultimately improve the human condition.
Purposeful connections
"If you're gonna make connections which are innovative... you have to not have the same bag of experiences as everyone else does." – Steve Jobs
In a digital landscape where most social interactions take place through screens, it is tempting to fall into the trap of viewing connections with others through a strictly utilitarian lens. This unspoken, Machiavellian perspective is the underlying reason why studies, such as a 2014 article published in "Administrative Science Quarterly"1, find evidence that the word "networking" makes people feel dirty. In an attempt to right the ship, some leaders and organisations swing the pendulum in the other direction, overemphasising authenticity and personal engagement. While this relationship-oriented approach sounds appealing, it often produces lacklustre results, with leaders investing in relational ties that aren't strategically useful.
1. https://journals.sagepub.com/doi/full/10.1177/0001839214554990
Leaders who routinely engage in making Purposeful Connections are keenly aware that they have limited time and bandwidth for relationship-building, so are highly intentional in the way they build their personal and professional networks. They look for “weak ties” – individuals who have alternative perspectives, differing expertise, and who share few common connections – because these people are the key to accessing untapped reservoirs of knowledge and resources. They then invest heavily in their strategic connections (both strong and weak ties), building genuine, reciprocal relationships that persevere through good times and bad.
Great leaders go even further by brokering relationships between their important connections. It takes great humility and trust for leaders to move away from a "hub and spoke" model in which they are the central conduit of knowledge, but it is the only way for teams and organisations to succeed in a complex environment. Finally, they connect people to the common purpose, always pointing to the WHY which drives internal motivation – a steady reference point allowing the collective to reorient their priorities.

Tolerance of tension

"The world is all opportunities, strings of tension waiting to be struck." – Ralph Waldo Emerson

The final, and perhaps most critical, behaviour in an AI-driven world is Tolerance of Tension. Paradoxically, the exponentially increasing volume of data and AI's ability to process it into information have increased the daily ambiguity a leader faces. The simple, black-and-white decisions are slowly being removed from a leader's list of responsibilities – leaving only the most complex and challenging of issues. As AI continues its march up the Hierarchy of Intelligence, leaders will be forced to navigate within this gray space more and more, because it can only be traversed with wisdom.

Leaders who expertly navigate the gray space don't do it alone. They invite alternative perspectives and are willing to engage in courageous conversations that may be uncomfortable but necessary. They lean into the ongoing tension of competing demands, knowing discomfort prods people to find novel solutions. Finally, they recognise change is constant and that those who remain adaptable will always be poised to avoid danger and seize opportunities. They understand that an AI-infused world will accelerate this pace of change, and they acknowledge the future will undoubtedly look different than what they imagine. There will be unforeseen risks and unprecedented upheavals, inconceivable breakthroughs and extraordinary accomplishments, mind-boggling disruptions and incredible new applications. But in this unpredictable future, they know two tenets will certainly remain true – there will always be the need for wisdom, and leadership will be needed more than ever.
Artificial Intelligence: from weakness, strength

Professor John P. Cunningham, Professor of AI, Columbia University and Chair of the Adarga Advisory Board
As the world celebrates the potential of artificial intelligence and in equal measure fears its potential dangers, it is worthwhile to reflect on a fundamental question: what is true artificial intelligence, and where are we on the path towards it?
Here I will argue that, despite some popular media claims to the contrary, we have made rather little progress towards the utopian (or indeed dystopian) ideal of general "strong" artificial intelligence. Instead, we have made tremendous progress on narrow, highly problem-specific artificial intelligence. Far from being a negative, this is extremely hopeful: our modern technology, and the further massive developments yet to come, will continue to add tremendous value to humanity, our economy and our general prosperity, but they will also require and extend – not displace – humankind's central role.

The utopian hype and dystopian fears of AI centre around a form of artificial intelligence known as strong AI. Tracing its roots back to the 1950s during the early age of modern computing, strong AI (variously called artificial general intelligence or full AI) is, at its ultimate interpretation, a computer or machine able to successfully accomplish any human cognitive task: reasoning, communication, memory, planning, learning, knowledge, and even consciousness itself. From this sweeping definition come the sentient robots and all-knowing devices made popular in science fiction from Star Wars to Star Trek.

However, no widely agreed definition of strong AI yet exists. The now-popular Turing Test and Chinese Room Test are thought experiments designed to partially assess the validity of a strong AI, but they do not give us a technological bar to be specifically met. Indeed, it was in the earliest days of AI that the great promise of strong AI was forecast to be just a generation away. And so it was the abject failure of this prognostication in the 1960s and 1970s that contributed to the first "AI winter" – a time of disillusionment with research and development in AI. Now, after a number of AI seasons have turned over again, we find ourselves again predicting the coming of strong AI.

But are we right this time?

To answer that question, let us consider the other end of the AI spectrum: the highly problem-specific and somewhat pejoratively named weak AI (also called narrow AI). This framing appeals to engineers and operationally minded executives alike: weak AI is a device designed to solve a specific problem that was previously considered a problem of human intelligence. Examples include the ability to recognise human faces in images, the conversion of human speech to text (which underpins services such as Siri and Alexa), financial instrument forecasting models, superhuman play in games like chess and Go, and natural language processing to transform vast quantities of unreadable, unstructured textual data. You would be right to think that these examples sound familiar. Indeed, all our current success in modern AI can be put into the bucket of weak AI. This most recent success and pace of advancement are due entirely to a number of important drivers: the availability of vast quantities of data, the continuing commoditisation of computing power and its availability via the cloud, the statistics and computer science of machine learning, and the software pulling all these elements together. And importantly, all of these significant advances – from images to text to voice – are tools to help empower human decision makers today.
There are two conclusions we can draw from the preceding. First, despite the hype, we are for all intents and purposes not notably closer to truly cracking strong AI than we previously were. Weak AI is, of course, an essential step in that developmental direction, but strong AI is by no means a logical and linear next step from our foundations in weak AI. All of our successes to date are examples of the weaker variety. Even the definitions of strong AI constrain our ability to demonstrate success – consciousness, human reasoning, and more are not universally agreed-upon concepts, nor are the psychological and neurobiological underpinnings of these phenomena yet comprehensively understood in humans.

Our second conclusion is that, far from being a negative point, our success in weak AI should be a cause for great hope and celebration. We have already seen tremendous innovation and economic development from these successes, and we have every reason to believe that this trend and speed of advancement will continue into newer and more valuable areas of human endeavour. What's more, all of these successes have been tools for enhancing human ingenuity – that lofty economic and societal goal, and the reason we are here today.
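To make the distinction concrete, the following sketch illustrates just how narrow "weak AI" can be. It is a deliberately tiny, hypothetical toy (not code from any system mentioned above): a naive Bayes text classifier that learns word frequencies from four labelled sentences and then labels new text. It solves exactly one problem, and nothing else – the defining trait of narrow AI.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest naive-Bayes log score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + sum of log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical toy training data: the classifier "knows" only these
# words and these two labels, and can do nothing beyond them.
examples = [
    ("the threat report warns of an attack", "security"),
    ("hostile forces pose a serious threat", "security"),
    ("quarterly revenue and profit grew strongly", "finance"),
    ("the market forecast predicts rising profit", "finance"),
]
wc, lc = train(examples)
print(classify("a new threat of attack", wc, lc))          # security
print(classify("profit forecast for the market", wc, lc))  # finance
```

Production systems differ enormously in scale and sophistication, but they share this shape: a statistical model fitted to data for one well-posed task, with a human deciding what to do with its output.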
Speakers
Rob Bassett Cross
Founder and CEO, Adarga

Rob is the CEO and founder of Adarga. He is a former British Army officer, widely respected as one of the leading military officers of his generation, who fulfilled some of the most demanding and sensitive appointments during his service as a commander on combat operations in the Middle East, Central Asia, Africa, and elsewhere. Rob was awarded the Military Cross for leadership on counter-terrorism operations in Iraq in 2006. Before leaving the military, Rob led future technology development and procurement for a number of ground-breaking projects, which included the first deployment of software engineers to a live war zone and the development of an intelligence analysis tool.

After leaving the Army, Rob joined J.P. Morgan as an investment banker in the corporate finance division, where he advised international corporate customers from a number of sectors and was involved in over $30bn worth of transactions across the spectrum of corporate finance advisory, mergers and acquisitions, and offerings roles. His specific coverage responsibility was for the natural resources and defence and aerospace sectors.

Rob founded Adarga in 2016 to apply cutting-edge AI analytics technology to solve complex, real-world problems in defence, legal and other sectors.

Ranju Das
GM, Amazon Rekognition, Amazon Web Services

Ranju Das is part of the technical leadership team at Amazon Web Services, drawing on over a decade of expertise in distributed systems, architecture, web-scale analytics, big data, machine learning and high-performance computing to help customers bring their ideas to life through technology. Since joining Amazon in early 2013, he has played a role in introducing significant new features and products to retail customers, such as Amazon Drive and Amazon Prime Photos. Ranju has led the development of the Amazon Rekognition service from its inception. Prior to Amazon, Ranju played a key role in the delivery of the Nook Tablet, led big data and database development for Barnes & Noble, and was founder and CEO of two start-ups.
General (ret.) Stanley A. McChrystal
Former US Special Forces Commander

A transformational leader with a remarkable record of achievement, General Stanley A. McChrystal was called "one of America's greatest warriors" by Secretary of Defense Robert Gates. He is widely praised for launching a revolution in warfare by leading a comprehensive counter-terrorism organization that fused intelligence and operations, redefining the way military and government agencies interact. A retired four-star general, he is the former commander of U.S. and International Security Assistance Forces (ISAF) in Afghanistan and the former commander of the premier military counter-terrorism force, Joint Special Operations Command (JSOC). His leadership of JSOC is credited with the 2003 capture of Saddam Hussein and the 2006 location and killing of Abu Musab al-Zarqawi, the leader of al-Qaeda in Iraq.

The son and grandson of Army officers, McChrystal graduated from West Point in 1976 as an infantry officer and completed Ranger Training at Fort Benning, Georgia, and later Special Forces Training at Fort Bragg, North Carolina. Over the course of his career, he held leadership and staff positions in the Army Special Forces, Army Rangers, 82nd Airborne Division, the XVIII Airborne Corps and the Joint Staff. He is a graduate of the US Naval War College, and he completed fellowships at Harvard's John F. Kennedy School of Government in 1997 and at the Council on Foreign Relations in 2000.
From 2003 to 2008, McChrystal commanded JSOC and was responsible for leading the nation's deployed military counterterrorism efforts around the globe. In June 2009, McChrystal received his fourth star and assumed command of all international forces in Afghanistan. President Obama's order for an additional 30,000 troops to Afghanistan was based on McChrystal's assessment of the war.

Since retiring from the military, McChrystal has served on several corporate boards of directors, including JetBlue Airways, Navistar, Siemens Government Technologies, FiscalNote, and Accent Technologies. A passionate advocate for national service, McChrystal is Chair of the Board of Service Year Alliance, which envisions a future in which a service year is a cultural expectation and common opportunity for every young American. He is a senior fellow at Yale University's Jackson Institute for Global Affairs, where he teaches a popular course on leadership. Additionally, he is the author of the bestselling leadership books My Share of the Task: A Memoir, Team of Teams: New Rules of Engagement for a Complex World, and Leaders: Myth and Reality.

General McChrystal founded the McChrystal Group in January 2011. Recognizing that companies today are experiencing parallels to what he and his colleagues faced in the war theatre, McChrystal established this advisory services firm to help businesses challenge the hierarchical, "command and control" approach to organizational management. General McChrystal resides in Alexandria, Virginia with his wife of 41 years, Annie.
www.mcchrystalgroup.com
Mark Stevenson Futurist, Writer and Entrepreneur
Futurologist Mark Stevenson is an author, broadcaster and expert on global trends and innovation. He is one of the world's most respected thinkers on the interplay of technology and society, helping a diverse mix of clients to become future literate and adapt their cultures and strategy to squarely face the questions the future is asking them. He is the author of two best-selling books, An Optimist's Tour of the Future and the award-winning We Do Things Differently: the outsiders rebooting our world. Mark's advisory roles include Sir Richard Branson's Virgin Earth Challenge, the policy and regulation division of the GSMA, future-literacy hub Atlas of the Future and music industry re-boot The Rattle. He is also 'futurist without borders' at Médecins Sans Frontières. www.markstevenson.org
Our panellists
John P. Cunningham
Professor of AI, Columbia University and Scientific Advisory Group Chair

John P. Cunningham, Ph.D. is a leader in machine learning/artificial intelligence and its application to industry. In his academic capacity, he is a professor at Columbia University in the Department of Statistics and the Data Science Institute, and has received multiple major awards, including the Sloan Fellowship and McKnight Fellowship. In industry, he has worked in, founded, exited, and consulted with multiple companies and funds in the data and AI space. His education includes an undergraduate degree from Dartmouth College, a master's and Ph.D. from Stanford University, and a fellowship at the University of Cambridge.

Jem Davies
VP, General Manager and Fellow, Machine Learning Group, Arm

Jem is an Arm Fellow and the General Manager for Arm's Machine Learning business, focusing on Machine Learning and Artificial Intelligence solutions across the wide range of Arm IP products. He was previously GM and vice president of technology for the Media Processing and Imaging and Vision groups, where he set the future technology roadmaps and undertook technological investigations for several acquisitions. Based in Cambridge, Jem has previously been a member of Arm's CPU Architecture Review Board and holds four patents in the fields of CPU and GPU design. He has a degree from the University of Cambridge.
Amy Shi-Nash
Head of Data Analytics and Data Sciences, HSBC

Amy is responsible for accelerating the strategic development of analytics, machine learning and AI capability across the group, enabling business adoption, culture change and benefit realisation at scale.

Jas Mundae
Legal Tech and Resourcing, Linklaters

Jas is the head of Linklaters’ LegalTech and Alternative Resourcing capability. Her role involves providing efficient delivery solutions, which include the adoption of matter-related technologies. Jas and her team have successfully implemented AI solutions on numerous matters. Jas firmly believes that it is vital to consider an integrated set of suitable technologies in addressing the requirements of each client matter. Whilst AI provides powerful functionality, it may not always be the right solution.
Sue Daley
Associate Director of Technology & Innovation, techUK

Sue leads techUK’s work on cloud, data analytics and AI and has been recognised as one of the most influential women in UK tech by Computer Weekly. Sue has also been recognised in the UK Big Data 100 as a key influencer in driving forward the Big Data agenda, shortlisted for the Milton Keynes Women Leaders Awards and was recently a judge for the Loebner Prize in AI. In addition to being a regular industry speaker on issues including AI ethics, data protection and cyber security, Sue is a regular judge of the annual UK Cloud Awards.

Prior to joining techUK in January 2015, Sue was responsible for Symantec’s Government Relations in the UK and Ireland. She has spoken at events including the UK-China Internet Forum in Beijing, the UN IGF and European RSA on issues ranging from data usage and privacy to cloud computing and online child safety. Before joining Symantec, Sue was a senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA in History and American Studies from Leeds University and a master’s degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.

Confused? Leave the coding to us. Seriously... we designed and developed Adarga’s website.

import React, { useState, useEffect } from 'react';
import Robot from '../components/Robot';
import logic from 'stackoverflow';

const MachineLearning = () => {
  // Start with an empty body of knowledge.
  const [knowledge, learn] = useState([]);

  // Absorb an answer whenever there is a question worth asking.
  const think = (question, answer) => {
    if (question) {
      learn(prev => [...prev, answer]);
    }
  };

  // Learn everything once, on mount.
  useEffect(() => {
    logic.forEach(data => {
      think(data.question, data.answer);
    });
  }, []);

  return <Robot brain={knowledge} />;
};

export default MachineLearning;

info@bmas.agency
Practical information
Wi-fi
Network: Ri-Public
Password: Cavendish
We are Adarga
16 Berkeley Street London W1J 8DZ
1 Victoria Street Bristol BS1 6AA
hello@adarga.ai
www.adarga.ai