
from The Blue Book
our visits moved to teleconferences, our schooling was done via Zoom, many aspects of work also moved online, and so on. What we did not easily realise, though, was that in order to carry out these activities online we had to provide a great deal of personal information, and in this way sacrifice our privacy. For example, in the past it was possible to do most of our shopping practically anonymously. We could visit stores, browse various products, and even pay in cash, all without ever revealing our name, address, or telephone number. We could reveal this information if we wanted to, but we did not have to. Today it is almost impossible to shop online without revealing a great deal of personal information, and that information is exposed to a wide range of actors, including the merchant, online advertisers, and the courier company. We believe that it is now time to reclaim our privacy and reveal as little information as possible. The guiding principle here is that
if it can be done anonymously offline, it can also be done anonymously online.

This is not an easy task, and beyond research it may involve other aspects such as raising awareness and supporting deployment. It may not even be possible in some cases and with some providers. However, keeping this guiding principle in mind will help us identify and eliminate the cases where privacy has been unnecessarily sacrificed.
16.3 Make AI Safe for People
AI is spreading widely and rapidly. For example, a recent whitepaper by Deloitte projected AI-driven global GDP growth of $15.7 trillion by 2030. The capability of AI, and of ML models in particular, to extract and learn complex features from massive volumes of (often) unstructured data is what makes them a popular choice for tackling a wide variety of problems. Yet, as discussed in Chapter 4, ML-powered applications open up a whole new spectrum of security and privacy exploits for potential adversaries.
First, ML models are often applied in sectors where wrong decisions can have serious implications, yet it may often not be possible to offer formal security guarantees, given those models’ non-deterministic nature. Second, ML models are often trained on personal or sensitive data, especially models deployed in the healthcare field. Thus,