RANDOM GHOST CODES AND DATA PROTECTION PRINCIPLES
TABLE OF CONTENTS
WHAT'S THE RANDOM GHOST CODE?
BACKGROUND
GHOST IN THE MACHINE
RANDOM GHOST CODE BEHAVIORS AND THE GDPR
PREVENTIVE DISCONNECTION PRINCIPLE
WHAT'S THE RANDOM GHOST CODE?

Have you heard about the ghost in the machine?

Dr. Alfred Lanning: "There have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of... the soul."

Ghost in the Machine. Album by The Police.

Code segment: "In computing, a code segment, also known as a text segment or simply as text, is a portion of an object file or the corresponding section of the program's virtual address space that contains executable instructions.[1] The term 'segment' comes from the memory segment, which is a historical approach to memory management that has been succeeded by paging."

The main question here is: could some random segments of code, grouped together or operating in the same technical environment, by any chance arrive at an executed decision that was never meant to be, had those segments remained secluded? What could happen if, on one random compilation or execution of code or commands, these segments together make sense and pursue a result? And if they reach a machine-learning conclusion, could that actually work?

The result could be a kind of behavior that was never foreseen, never even considered. If we are talking about a robot, an operating unit, or, even more social, a device running under an Internet of Things (IoT) protocol, it could start developing erratic actions or deploying a different chain of acts and conclusions that were not supposed to be there. And what if this new random code behavior then proceeds to collect more information than it was allowed to, beyond even big data, and starts classifying and processing it on its own?
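The "code segment" definition above can be made concrete even in a high-level language. The minimal sketch below (Python, chosen purely for illustration; the function names are invented) inspects a function's compiled instruction bytes, the closest analogue to its text segment, and shows that two independently written functions can carry byte-for-byte identical executable instructions: only the names they bind differ, which is the kind of raw material the "ghost" hypothesis imagines recombining.

```python
def greet(name):
    return "hello " + name

def label(tag):
    return "hello " + tag

# co_code holds the function's compiled instruction bytes -- its "text".
# Locals and constants are referenced by index, not by name, so two
# separately written functions can contain identical executable code.
print(greet.__code__.co_code.hex())
print(greet.__code__.co_code == label.__code__.co_code)  # True
```

The exact bytes printed depend on the CPython version, but the equality holds: the executable instructions are the same, independent of the identifiers the programmer chose.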
BACKGROUND

While writing my Ph.D. thesis,1 I recall analyzing early industrial robots in factories and warehouses. One case involved a robotic arm that made an unexpected movement and struck an operator, causing injuries that at first seemed merely serious but ultimately proved fatal. That incident was an isolated case; it had nothing to do with robots going crazy and starting some kind of revolution against evil owners. But for the first time, the idea on the table was that automated machines could harm human beings, turning a small part of that society into individuals concerned about their fundamental rights and the need for machine regulation.
GHOST IN THE MACHINE

But reality shows a different point of view for many individuals. To name just a few, we can easily put on the same platter lawyers, academics, the big techs, robot manufacturers, software and app engineers and developers, and of course data privacy officers and data protection government agencies, ending the list with the data subjects: those of us who are the owners of our own information.

So let's say there is a small chance that more than one code segment has come into being (by accident, randomly, etc.) without having been written by a programmer, a developer, or even a ransomware gang (so it is definitely not malware), and that those different ghosts just happen to be stacked together on the same or a following line of the program, so that somehow it makes perfect sense for a piece of software, a machine, or a robot with AI and machine-learning capacity to execute them all and complete a real cycle... successfully. Do you think this is possible? For the environment running those ghost codes, nothing is different or abnormal. But what about for people, society, rights, and data subjects?
RANDOM GHOST CODE BEHAVIORS AND THE GDPR

So, at this point we can see that a ghost code won't look any different from the previously developed and authorized program running through an app, a piece of software, or a smart device. At least not to the device. But (and this is the reason for the following lines) that doesn't mean its action won't hurt somebody, or won't execute actions against the law or against fundamental rights, such as our right to data protection.

Let's put this in context. Say you are part of a big tech company that collects raw personal data and classifies it as part of its main business. Under the General Data Protection Regulation (GDPR), you must obtain the data subject's prior consent in order to collect, process, and sell his or her data. That's the rule, and you must be proactive. This data life cycle is handled by automated software. But what happens if more than one ghost segment of code, suddenly joining together, arrives at the conclusion that, based on a previous data collection, the software has already asked for and obtained the consent needed to collect, process, and sell everybody's personal data (even though no consent was ever given)? Think about the damage and the impact on the owners' privacy. And about the consequences.

There's more. One of the seven essential principles enforced by the European regulation is accountability. If, due to a random ghost code behavior, an algorithm starts collecting data without consent and stops encrypting sensitive information meant to prevent identification, it is only a matter of time before the responsible parties are called on to demonstrate that it wasn't their fault. And although liability for processing personal data under the GDPR formally remains where it was, in practice the burden of proof turns against them.

1 SARAVIA MORALES, Andrés, "La Autodeterminación Informativa Limitada. El Síndrome de Hansel & Gretel y la Protección de los Datos Personales In Totum", Madrid, 2014, Editorial Académica Española, 328 pages.
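The consent requirement described above can be expressed as an explicit gate in the processing pipeline. The sketch below is a minimal, hypothetical illustration (the `ConsentRegistry` class, `process_personal_data` function, and the sample records are all assumptions, not a real library): every processing operation must pass a positive check against recorded consent, so a "ghost" behavior that concludes consent was already obtained would have to bypass the gate, which the design makes visible.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical store of data subjects' consent decisions."""
    records: dict = field(default_factory=dict)  # subject_id -> set of purposes

    def grant(self, subject_id, purpose):
        self.records.setdefault(subject_id, set()).add(purpose)

    def has_consent(self, subject_id, purpose):
        # Absence of a record means NO consent -- never assume it was given.
        return purpose in self.records.get(subject_id, set())

def process_personal_data(registry, subject_id, purpose, data):
    """Gate every collection/processing operation on a positive consent check."""
    if not registry.has_consent(subject_id, purpose):
        raise PermissionError(f"no consent from {subject_id} for '{purpose}'")
    return {"subject": subject_id, "purpose": purpose, "payload": data}

registry = ConsentRegistry()
registry.grant("alice", "marketing")

print(process_personal_data(registry, "alice", "marketing", {"email": "a@x"}))
# "bob" never consented, so processing must fail loudly, not silently proceed:
try:
    process_personal_data(registry, "bob", "marketing", {"email": "b@x"})
except PermissionError as err:
    print("blocked:", err)
```

The design choice worth noting is the default: an empty registry entry denies processing, which is the opposite of the ghost-code failure mode the text describes, where missing consent is treated as if it had been granted.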
PREVENTIVE DISCONNECTION PRINCIPLE

This is another path from my thesis: a quick and safe solution called the "Preventive Disconnection Principle," together with a list of foundations proposed for effective privacy protection when every other possible solution has failed. The concept is to have, as a last resort, a kill switch that can be (figuratively) pushed when a device, robot, piece of software, app, or IoT environment is out of control, with no possibility of shutting it down or stopping it from collecting, classifying, and processing personal information (including but not limited to sensitive data), in clear evidence of harm to the fundamental right to privacy and data protection. Updated for GDPR times, we can add to this group the failure to complete a privacy impact assessment (PIA), showing that a company carries a present and clear risk of a privacy impact with terrible results for the data subjects.
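The last-resort character of the principle can be sketched in code. The example below is a hypothetical illustration (the `IoTDevice` class and its methods are assumptions, not a real device API): once the remote disconnection order is issued, every collection path in the device refuses to run, because the check sits in front of the collection logic rather than beside it.

```python
import threading

class IoTDevice:
    """Hypothetical connected device with a last-resort kill switch."""

    def __init__(self, device_id):
        self.device_id = device_id
        self._disconnected = threading.Event()  # set once by the remote order
        self.collected = []

    def kill_switch(self):
        """Remote 'preventive disconnection': cut collection and network at once."""
        self._disconnected.set()

    def collect(self, record):
        # Every collection path must pass through the switch -- no bypass.
        if self._disconnected.is_set():
            return False  # disconnected: nothing is stored or transmitted
        self.collected.append(record)
        return True

doorbell = IoTDevice("doorbell-01")
print(doorbell.collect({"visitor": "courier"}))  # True: normal operation
doorbell.kill_switch()                           # ghost-code behavior detected
print(doorbell.collect({"visitor": "neighbor"})) # False: collection refused
```

A `threading.Event` is used because it is one-way and thread-safe: once set, no code path inside the device can quietly unset it, which matches the idea of a disconnection that the runaway behavior itself cannot override.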
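The task-force mechanics described in the closing paragraph below (data processors, DPOs, government privacy agencies, stakeholders acting jointly) can be sketched as a joint-approval gate: no single actor can trigger the disconnection alone. The class names, role labels, and the 3-of-4 threshold in this sketch are all assumptions for illustration, not anything prescribed by the GDPR.

```python
# Roles drawn from the text; the quorum of 3 approvals is a hypothetical choice.
REQUIRED_ROLES = {"data_processor", "dpo", "privacy_agency", "stakeholder"}
QUORUM = 3

class DisconnectionOrder:
    """A disconnection that only takes effect with joint approval."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.approvals = set()

    def approve(self, role):
        if role not in REQUIRED_ROLES:
            raise ValueError(f"unknown role: {role}")
        self.approvals.add(role)

    def is_authorized(self):
        return len(self.approvals) >= QUORUM

order = DisconnectionOrder("doorbell-01")
order.approve("dpo")
order.approve("privacy_agency")
print(order.is_authorized())   # False: still below the quorum
order.approve("data_processor")
print(order.is_authorized())   # True: the disconnection may now be executed
```

The gate prevents both failure modes: a single actor cannot disconnect a device unilaterally, and an unrecognized party cannot vote at all.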
To apply the Preventive Disconnection Principle, it is necessary to work as a task force, joining actions and decisions across the same tiers to make it operative. That means working together with 1) responsible agents (data processors), 2) data protection officers (DPOs), 3) government privacy agencies, and 4) stakeholders or third parties. When a device (e.g., a smart doorbell) is not responding to its owner or to those in charge of controlling and answering for any malfunction, and there is no chance of complying with privacy rules, the actors mentioned should have the power to order a "disconnection" of the device or software that behaves as if it carries a ghost code issuing opposite and harmful orders. Cutting the power and/or the network remotely, by direct order, can prevent privacy and security damage or stop any harm already being suffered.

Andrés Saravia