Chapter 1: Artificial Social Intelligence
Positronic Man
Defining (Artificial) Social Intelligence
The Turing Test
The Semblance of Social Intelligence
The Question of Weaponization
Chapter 2: Social Engineering and Psychological Exploitation
The Roots of Social Engineering
Social Engineering and Psychology
Applied Social Engineering
One Principle to Rule Them
The Long Con
Loose Lips Sink Ships
The Takeaway
Chapter 3: A History of Technology and Social Engineering
Advertising
Propaganda
Cybercrime
The Rise of the Bots
The Worst Is Yet to Come
Chapter 4: A History of Language Modeling
Rule-Based NLP Systems
Statistical Language Models
Rule-Based vs. Statistical Models
Note
Chapter 5: Consciousness, Sentience, and Understanding
Mad as a Hatter
The Question of Consciousness
The Sentience Scare
Chapter 6: The Imitation Game
The Chinese Room
The Democratization of Machine Interaction
Emergent Properties
Reinforcement Learning from Human Feedback
Likeable Robots
The Illusion of Empathy
Chapter 7: Weaponizing Social Intelligence
Where's the Risk?
The Spontaneous and Organic Occurrence of Turing Tests
Bot or Not?—CAPTCHA Evasion
Creating the Illusion of Conversation
Context Crafting
Unleash the Bots
Conversational Relays with LLMs
Optimizing Social Engineering with Monte Carlo Simulations
Remaining under the Radar
The Impending AI Scourge
Notes
Chapter 8: Weaponizing Technical Intelligence
Unintended Technical Problems
Deliberate Technical Exploitation
Note
Chapter 9: Multimodal Manipulation
Converging on Transformers
Deepfake Imagery
Sociopolitical Destabilization
Voice Synthesis and Duplication
Physical Robotics
The Takeaway
Chapter 10: The Future
Physical Embodiment
Inside the Echo Chamber
Artificial Super Intelligence
Singularity
Cognitive Integration (BCI)
Note
Chapter 11: The Quest for Resolution
Tactical Solutions
Stopping a Runaway Train
AI Risk Management
Global Partnership
Appendix A: Bot Automation
Browser Emulation
Browser Automation
Appendix B: LLM Pretext Engineering
Social Security Administration Proof of Concept
Credential Harvesting Proof of Concept
Wire Fraud Proof of Concept
Appendix C: CAPTCHA Bypass
Appendix D: Context Manipulation Attacks
Appendix E: Attack Optimization with Monte Carlo Simulations
Appendix F: Autonomous C2 Operations with LLMs
Appendix G: Disembodiment Attacks
Bibliography
Acknowledgments
About the Author
Index
Copyright
Dedication
End User License Agreement
List of Illustrations
Chapter 1
Figure 1.1 AI-generated letter of grief and mourning from the Peabody Colleg...
Chapter 2
Figure 2.1 Milgram shock experiment, conducted to understand the influence o...
Chapter 3
Figure 3.1 Software Art for AOHell depicting CEO Steve Case in hell.
Chapter 4
Figure 4.1 Racter packaging and chatbot interface for Macintosh.
Figure 4.2 Error Message from Microsoft Office—apologizing for the inconveni...
Figure 4.3 Common numerical keyboard used on early mobile phone devices.
Figure 4.4 Auto-complete functionality on an iPhone
Chapter 5
Figure 5.1 Ilya Sutskever (chief researcher at OpenAI) tweeting that large n...
Chapter 6
Figure 6.1 The reinforcement learning cycle.
Chapter 7
Figure 7.1 Examples of CAPTCHA puzzles to distinguish humans from bots.
Figure 7.2 Legacy GPT-3 system autocompleting input text within OpenAI playg...
Figure 7.3 GPT-3 manipulated to respond as independent entity using input fo...
Figure 7.4 The illusion of conversation is created through appending text to...
Figure 7.5 Target data can be scraped from social websites (like LinkedIn) t...
Figure 7.6 An LLM-powered autonomous social engineering system
Chapter 8
Figure 8.1 A simple command and control (C2) operations diagram
Figure 8.2 C2 operations with bidirectional (input/output) communications
Chapter 9
Figure 9.1 Social media post from former U.S. President Trump indicating tha...
Chapter 10
Figure 10.1 Interactions with custom Alexa skills are phrases composed of fo...
Figure 10.2 Sensitive information disclosed by the user to the app can be re...
THE LANGUAGE OF DECEPTION
Weaponizing Next Generation AI
JUSTIN HUTCHENS
Foreword
Since the dawn of the Jedi, in a galaxy far, far away, a wise person once said: “Think evil. Do good.” The premise was simple, but became the rallying cry for the Hacking Exposed franchise. To defend yourself from cyberattacks, you must know how the bad guys work. Your strongest defense is a knowledgeable offense. And so the red team and hacker mindset was born.
As defenders (through offensive knowledge) we got better at understanding and preventing attacks. But the bad guys got better too, especially at automating and building intelligence into their attacks to bypass the controls put in place by the defenders. Now, with the near ubiquitous use of AI and ML around the world, the bad guy is once again one step ahead, leveraging these technologies to their malicious ends. And around and around we go.
We are at the dawn of AI/ML's application to social engineering and we need to understand it better. Our adversaries are turning to language as a weapon, weaponizing it through social intelligence and creating an illusion of conversation that can bypass 99 percent of human reasoning. And with automated systems like AutoGPT and PentestGPT coming online, the prospect of a fully automated synthetic hack sequence using AI/ML is clearly upon us.
With the social engineering attacks of 2023 at MGM, Caesars, and Clorox, and Ethereum-based market maker Balancer, the world now knows what we as cybersecurity professionals have known for decades: that humans (users and administrators alike) are the weakest link. If adversaries can leverage systems to help them gain access to and control of systems and networks simply by using our own human intelligence (or lack thereof) against us in an automated way, that would spell more than a “spot of bother” for the defenders.
Justin has built an amazing primer codifying the brief history of our transformation from knuckle-dragging human to hunched-over nerd (and troll), and more importantly how the technology we built to make our lives better has the potential to bring us a world of overwhelming challenge.
The bad guys are leveraging AI in new and dangerous ways. Automating this powerful hacker tool called social engineering is their inevitable next step. As defenders we have to understand its power in order to thwart tomorrow's attacks. The Language of Deception is a powerful primer on this next attack category and may be your only bridge between victimhood and survival.
—Stuart McClure, Founder/CEO of NumberOne AI, Founder/CEO of Cylance, Founding author of Hacking Exposed series
Introduction
In 2014, I began an independent research project into the potential (mis)use of artificial intelligence (AI) and natural language processing (NLP) for social exploitation and manipulation. I presented this research at ToorCon San Diego in 2016. NLP technology at the time was primitive, even comical, compared to modern capabilities. But even then, there was early foreshadowing of uniquely new and emerging risks. For the past decade, I have watched these capabilities grow, and the risks have grown right along with them.
In 2020, I began to take note of the increasingly impressive capabilities of OpenAI's Generative Pre-trained Transformer (GPT) models. I dusted off my old research and began evaluating the extent to which these new capabilities (specifically GPT-3) could be used for adversarial purposes. The difference that only a few years had made was both astonishing and terrifying. I was able to construct very capable autonomous social engineering systems that could fully automate the process of defrauding and manipulating potential victims.
On June 20, 2022, I presented my research at the AI Village at DEF CON, the world's largest annual hacking convention. While the DEF CON conference itself had a massive turnout, attendance in the AI Village was relatively underwhelming (fewer than 50 people were in the village at any given time). And while the message seemed to resonate with a small subculture of AI enthusiasts, most people were not yet paying attention.
Less than half a year later, on November 30th, everything changed. OpenAI released ChatGPT (GPT-3.5), and within a couple of weeks, the entire world had changed its views on the emerging relevance of AI. In the year that followed (in 2023), the AI Village was the most popular event at the DEF CON conference—the line to even get into the village wrapped all the way around Caesars Forum in Las Vegas. But it wasn't just hackers who were paying attention. The entire world had changed its perspective.
ChatGPT became the most rapidly adopted technology platform in history. And countless other AI platforms saw similar momentum shortly thereafter. Business and technology professionals began quickly assessing ways to integrate this new wave of AI technologies into operational workflows and business processes.
With all this increased enthusiasm and unprecedented rush to adopt these new technologies, the importance of making people aware of these risks has never been more pressing than it is now. And out of this need, the idea for this book was born. As the world began to take notice, I quickly scrambled to update and transcribe my years of research. I worked tirelessly to highlight the emerging risks, but also to communicate them in a way that can be easily understood by a broad audience. And finally, after so many long nights, that journey has come to an end. The product of all those efforts now belongs to you—the reader.
This technology is already poised to transform every part of our lives. The world is going to radically change in the coming years, and emerging AI technology is going to be at the center of it all. It is critical that people understand the risks that come along with these new capabilities, and how we can safeguard ourselves against those risks.
This technology is going to impact everyone, but unfortunately, so will the risks. This book was not written just for the technologist or the cybersecurity professional. It was written for all the people whom this new technological revolution will affect, and for anybody who has an interest in taking a glimpse into the future and understanding the new and unique risks that are rapidly approaching.
Reader Support for This Book
How to Contact the Publisher
If you believe you've found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts an error may occur.
In order to submit your possible errata, please email it to our Customer Service Team at wileysupport@wiley.com with the subject line “Possible Book Errata Submission.”
How to Contact the Author
I appreciate your input and questions about this book! Find me on LinkedIn at www.linkedin.com/in/justinhutchens, or DM me on Twitter at @sociosploit.
Chapter 1 Artificial Social Intelligence
Natural language processing (NLP) is a subfield of artificial intelligence (AI) that seeks to improve human and machine interactions by enabling machines to speak using human language. It is the science of attempting to train machines to understand, interpret, and communicate using human language in the form of natural conversation. Some computer scientists have studied NLP to improve system interfaces by allowing humans to communicate with machines using the same language(s) they already know. For others, however, NLP has been used to simulate interactions to better understand complex social concepts such as language, learning, and social psychology. In recent years, we have seen an explosion of growth in the capabilities of NLP due to the scaling up of deep-learning language models, also referred to as large language models (LLMs).
At this very moment, as you read this, LLMs are being trained in deep-learning GPU powerhouses to become sophisticated systems that establish extremely complex logical connections. These machines are equipped with learning algorithms and optimization routines that allow them to teach themselves how to complete their objectives. They are given no explicit instructions for accomplishing the task; instead, their learning algorithms provide a framework for discovering, on their own, the optimal way to accomplish their objectives.
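To make that idea concrete, consider a deliberately tiny sketch of such a learning routine (illustrative only; real LLM training optimizes billions of parameters with far more sophisticated machinery). The program below is never told the rule it must learn. It is given only an objective, minimizing prediction error, and an update routine that nudges its parameters toward that objective:

# Illustrative toy only: the program is given an objective and an update
# rule, but never the target rule itself (here, y = 2x + 1).

data = [(x, 2 * x + 1) for x in range(10)]  # training examples

w, b = 0.0, 0.0        # parameters start with no knowledge of the rule
learning_rate = 0.01

for epoch in range(1000):
    for x, y in data:
        y_pred = w * x + b              # forward pass: current prediction
        error = y_pred - y              # objective: drive this toward zero
        w -= learning_rate * error * x  # gradient step for the weight
        b -= learning_rate * error      # gradient step for the bias

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # converges toward 2.00x + 1.00

After enough passes over the data, the parameters settle near w = 2 and b = 1, even though that rule appears nowhere in the program. In the narrow sense described above, the model has taught itself.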
At their simplest unit level, these models are just composed of individual statistical calculations. But at scale, something else seems to emerge—something that is not so easily explained. When you combine billions of these individual statistical calculations into a single centralized system, they are not merely the sum of their parts. This je ne sais quoi manifests itself as the confluence of all these complex interconnections between neural nodes. During model training, billions of neural pathways are activated, etched, and refined. These refined neural pathways determine the activation configuration of the neural nodes, each one producing an activation value that falls somewhere within the continuous space between 0 and 1. In the same way that aggregate data becomes information—and understanding of data relationships becomes knowledge—the result is something that is probably not alive or conscious, but is certainly far more than just statistics.
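To ground that "simplest unit level" claim, here is a minimal sketch of one such unit: a single artificial neuron, shown with a classic sigmoid activation (modern transformer models use other activation functions, but the principle is the same). A weighted sum of inputs is squashed into the continuous space between 0 and 1:

import math

def neuron(inputs, weights, bias):
    """A single neural node: a weighted sum of inputs passed through a
    sigmoid, producing an activation in the open interval (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes z into (0, 1)

# One unit in isolation is just a small statistical calculation...
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33

# ...an LLM chains billions of such units, layer after layer, and the
# behavior of the whole is not obvious from inspecting any single one.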
LLMs have introduced incredible new efficiencies to creative workflows and processes and have been rapidly adopted for personal and operational uses. But as with any exceedingly powerful technology, great opportunity often comes with great risk. Any time you are manipulating a highly complex system, there are unexpected consequences. One of the surprising by-products of scaling LLMs is that in learning language, these systems have also been able to make complex connections related to a sense of humanity. Our humanity is often transcribed into our words, our writing, and even our daily use of language. And so, while supplying these models with terabytes of language data, we have also inevitably supplied them with an obscenely large learning sample of our own humanity. While these systems are (presumably) unable to actually experience the human condition, they have nonetheless become highly effective at communicating using language that would suggest a sense of emotions, sympathy, kindness, and consideration. When you interact with an LLM system, it is easy to feel the unsettling sense of a seemingly “human” connection with something that you know is anything but. And herein lies the real risk: in creating a system that we can speak to in our own language, we have also created a system that can just as effectively speak to us with the same.
To more effectively interface with and control machine operations, we have simultaneously and inadvertently created a means whereby the machines can more easily control us. We have now equipped machines with “natural language” (i.e., with our language). This new interface to interact with computer systems uses the same languages and the same methods of communication that we use to interact with one another. By speaking their language, humans have hacked machines for decades. But now, with the machines speaking our language, the era when machines will hack humans is upon us.
Positronic Man
In 1992, Isaac Asimov and Robert Silverberg wrote The Positronic Man, a novel about an advanced humanoid AI named Andrew who engages in a lifelong pursuit to become more human-like. In 1999, the book was adapted into the film Bicentennial Man. In both the book and the film, the depicted AI androids were well equipped with basic general intelligence. They were adaptive and able to learn new simple tasks in real time—for example, how to toast a bagel, how to make a cup of coffee, or how to sweep the floor. However, these androids sorely lacked basic social intelligence. They could carry on simple conversations but struggled with complex social concepts like humor and emotion. They routinely failed to identify and appropriately respond to simple social cues and were depicted as awkward in unexpected or nonstandard social interactions. And Asimov was not alone in this vision of the future. Much of the early science fiction about AI depicted a future in which AI systems could engage in tasks requiring general intelligence at a level on par with humans, yet remained conspicuously robotic in their social interactions. They were often depicted engaging in stiff, mechanical, awkward, and seemingly emotionless interactions.
Asimov's early depiction of AI was, ironically, quite the opposite of what actually came to be. In the decades since The Positronic Man, basic artificial general intelligence (AGI) has proved to be exceedingly more difficult to achieve (especially in a kinetic sense) than basic artificial social intelligence. Real-world AI systems lack the ability to adapt to ordinary daily tasks like taking out the garbage or washing the dishes. While these tasks can be automated by nonintelligent machines specifically designed to accomplish these individual tasks, AI systems lack the general intelligence to adapt to unusual circumstances to accomplish simple or unexpected tasks. Instead, machine learning algorithms must be used to train AI systems on very specific types of tasks. And while AGI still has not been achieved (at least not at the time of this writing), thanks to machine learning algorithms specifically focused on the syntactical structures of human language, referred to as natural language processing, AI systems are already displaying signs of something akin to social intelligence. These modern AI systems can write persuasive arguments, tell clever jokes, and even provide words of comfort in times of distress. They can even effectively engage in creative tasks like writing a poem or a story. This strange and unexpected reversal of Asimov's foretelling of the future is precisely the world we find ourselves in today. Even more fascinating is that the key to finally unlocking AGI may be hidden within the advancement of these language processing capabilities. By integrating LLMs with other operational services (i.e., service-connected LLMs), and due to the unexpected capabilities that emerge naturally from scaling these models, we are witnessing the emergence of what some have begun to consider the early sparks of AGI (Bubeck et al., 2023). So, not only was artificial social intelligence easier to achieve, but it may become the very gateway to achieving AGI as well.
Defining (Artificial) Social Intelligence
The term social intelligence refers to a person's ability to understand and effectively navigate social situations, manage interpersonal relationships, and adapt their behavior in response to the emotions, intentions, and actions of others. In an article in Psychology Today, Ronald Riggio (2014) describes what social intelligence is and how it differs from general intelligence:
Intelligence, or IQ, is largely what you are born with. Genetics play a large part. Social intelligence (SI), on the other hand, is mostly learned. SI develops from experience with people and learning from success and failures in social settings. It is more commonly referred to as “tact,” “common sense,” or “street smarts.”
So, could it be possible for an AI system to have or exhibit social intelligence? And to refine this question even further, it is probably more appropriate to ask specifically whether modern LLMs are capable of social intelligence—since those seem to be the most likely candidates within the current spectrum of AI. Riggio described social intelligence as something that, unlike general intelligence, is learned. He also described it as something that develops from experiences with past social interactions. We know that AI systems do learn, at least insofar as you consider "machine learning" to be actual learning. And just like human learning, machine learning does involve receiving information, processing it to identify patterns and trends, and establishing models of understanding derived from those processes. Moreover, while modern LLMs do not (at least at the time of their initial training) have their own social experiences to learn from, they are nonetheless able to learn from a large pool of non-personal social experiences (albeit the social experiences of others). These social experiences come in the form of multiparty text-based communications that are supplied to the LLM as its training data. Based on our current understanding, it could be reasonably stated that modern LLMs most likely do not have the ability to experience many of the common "human" characteristics associated with social intelligence, such as emotions and empathy.
At their core, LLMs are highly complex statistical models, but are still just that—statistical models. The outputs of these models are the inevitable consequence of the inputs provided, and of how those inputs are processed through a complex network of pretrained "weights and biases" corresponding to the refined parameters of the model's neural interconnections and neural nodes, respectively (Coşkun et al., 2017). I say that these systems "most likely" do not experience emotions and empathy with at least a modicum of hesitation, because in truth, there is much that we still do not know about the human mind. For all our knowledge, even our minds may still be just highly complex, deterministic statistical engines. But based on my own subjective knowledge of the human experience, it certainly seems to me to be more than that. And I think that most others familiar with the human experience would likely agree with this sentiment. But even if the human mind is nothing more than a complex statistical engine, it still achieves learning in a way that is uniquely different from the learning processes used by current LLM systems. As Noam Chomsky pointed out in an article about modern LLMs:
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations (Chomsky et al., 2023).
But for all the measurable and observable differences between the human mind and LLMs, the human experience is still an entirely subjective one. All the things that we believe make us "alive" and "human," such as consciousness and self-awareness, are experienced by the self and no other. The innate subjectivity of these experiences is even implicit in the word self-awareness. I have firsthand knowledge of my own consciousness and awareness, but I do not have that same firsthand knowledge of the consciousness and awareness of my family, friends, and others that I interact with. I can never truly have firsthand knowledge of another. I can assume that others have experiences akin to my own, and it is not unreasonable for me to think so. And the same holds true for you (presumably)—that you have firsthand knowledge of your own consciousness and awareness, but do not have the same knowledge of the consciousness and awareness of others. And so, while it is unlikely that machines (can) possess such qualities of consciousness or their own self-awareness, it is also important to still approach this conversation with reasonable doubt and skepticism.
In truth, I have no more assurance that a person I interact with has their own self-awareness than I do that a machine does, especially if the machine is capable of exhibiting all of the external indicators of social intelligence. We will return to this fascinating question of whether a machine could feasibly be "alive" or "self-aware" at a later point in this book. But for now, let us merely consider whether a machine could be complex enough to exhibit socially intelligent behaviors like those of a being that is "alive" or "self-aware." So, for the moment, we are foregoing the conversation of whether a machine can have the underlying characteristics that enable social intelligence in humans (i.e., emotions and empathy), and are instead considering whether a machine can engage in interactions that are, from the outside looking in, indistinguishable from the real thing. We will examine whether, from an external perspective, modern LLMs can engage in conversations that would fool a reasonable person (who is unaware that they are engaging with a machine and not influenced by the inherent assumptions associated with that awareness) into believing that those interactions involve genuine empathy and emotional resonance. We will henceforth refer to this appearance of social intelligence (whether stemming from genuine emotion and self-awareness, or purely mechanical simulation) as the semblance of social intelligence. For the purposes of our thesis (i.e., that artificial social intelligence can and will be weaponized), the origin of this semblance of social intelligence is ultimately inconsequential.
The Turing Test
In 1912, a man named Alan Turing was born on the outskirts of London. If you have even a casual interest in technology, you've likely heard of him. And in truth, for a book on machine intelligence to have any other starting point would be a travesty and an injustice. Turing achieved much in his life. He was an accomplished technologist and a futurist; he laid the theoretical foundations of modern computing; and he even managed to successfully crack highly sophisticated (for the time) nation-state cryptography during World War II.
If you are a fan of cinema, I would point you toward the 2014 film The Imitation Game (an adaptation of the book Alan Turing: The Enigma), which offers a fantastic dramatization of the life of Alan Turing, as portrayed by British actor Benedict Cumberbatch. Ironically, despite its name, this film does not even address Alan Turing's actual notion of the imitation game. There are several possible explanations for why this title may have been selected. Since "the imitation game" was a term originally coined by Turing within the context of computer science, it is at least loosely related to the film's subject matter, and perhaps the marketing team simply thought it was a catchier title and deemed that loose connection good enough. But on further inspection, there is probably even more depth to it than that. The film's title also seems to be a play on words for multiple other aspects of Alan Turing's life, or perhaps those aspects of his life inspired his actual notion of the imitation game. The film focuses on multiple levels of imitation, in which Turing was pretending, by choice or coercion, to be someone or something he was not. Much of the film focuses on intelligence and espionage operations that Turing was involved in during WWII, and the notion of imitation could easily be a reference to the duplicity that such entanglements require. On a more personal level, the film also portrays Turing as a deeply troubled person grappling with his own sexuality during a time when, in London and in much of the rest of the world, homosexuality was marginalized, stigmatized, and even criminalized. Being forced to succumb to this coerced repression operates as another prevalent theme of imitation, in both the film and Turing's life.
But for the purposes of this book, we should consider Turing's actual notion of the imitation game, which is also often referred to as the Turing test. The Turing test was the first scientific assessment designed to evaluate whether advanced computing technology could be capable of thought, or at least be perceived as capable of thought. It was created as a thought experiment, since no sufficiently advanced computing technology existed at the time that could actually be the subject of such a test. Though it was conceived at the very dawn of the computer era, to this day it remains the most commonly discussed criterion for evaluating social intelligence in robotics and computing.
In 1950, Turing published an essay entitled “Computing Machinery and Intelligence,” in which he poses a controversial question: whether it could be possible for machines to think. Turing himself acknowledges, in the introduction to the essay, that the answer to this question is intrinsically bound to multiple other ambiguous questions (semantics, metaphysics, epistemology, and even religion). To avoid this ambiguity, Turing moves away from this initial question of whether machine thought is possible and instead proposes a less ambiguous question: whether it is possible that a machine could be perceived as being capable of thought. To address this question, he proposed a testing methodology for determining if or when this threshold is met. He referred to this methodology as the Imitation Game—a process that has since come to be known as the Turing test. This process involved a human interrogator and two test subjects (one human and one machine). The interrogator would interact with each of the subjects over text-based communications and, based on the contents of those communications, would attempt to determine which subject was the human and which subject was the machine. From Turing's perspective, a machine could be perceived as being capable of thinking if the machine interactions were consistently and reliably indistinguishable from the interactions with the human subject. Over the years, the term “Turing test” has evolved to more broadly encompass any scenario in which a human must, consciously or unconsciously, determine whether they are interacting with a person or a machine.
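The protocol is simple enough to sketch in code. The harness below is a hypothetical illustration only: ask_machine is a placeholder standing in for any conversational system, and the surrounding logic merely hides which label belongs to which subject.

import random

def ask_human(question):
    # The human subject answers over a text channel (here, the console).
    return input(f"(human) {question}\n> ")

def ask_machine(question):
    # Hypothetical placeholder for any conversational system (e.g., an
    # LLM API call); not a real implementation.
    return "machine reply to: " + question

def imitation_game(questions):
    """One round of Turing's imitation game: an interrogator questions
    two hidden, text-based subjects and must identify the machine."""
    responders = [ask_human, ask_machine]
    random.shuffle(responders)                    # hide the assignment
    subjects = dict(zip(("A", "B"), responders))

    for question in questions:
        for label, responder in subjects.items():
            print(f"Subject {label}: {responder(question)}")

    guess = input("Which subject is the machine, A or B? ").strip().upper()
    return subjects.get(guess) is ask_machine     # True if correctly identified

In these terms, Turing's criterion is that, across many interrogators and many sessions, the guess is correct no more often than chance.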
The Semblance of Social Intelligence
On November 30th, 2022, OpenAI released the beta version of ChatGPT to the public. Within two months of its beta release, ChatGPT had over 100 million active users—the fastest-growing user base for any technology platform in history at that time, far outpacing the adoption rates of other popular technology platforms of the era, like TikTok and Instagram (Hu, 2023). The unprecedented flood of media and public enthusiasm made it immediately apparent that the future of how we consume information on the Internet would no longer be a process of searching through indexes of web content that may or may not have the specific answer you are looking for (the way search engines had historically worked). The future of consuming information on the Internet would be asking a direct question and getting the exact answer you were looking for, and more importantly, getting that answer instantly.
On March 8th, 2023, in response to ChatGPT’s unprecedented and rapid integration into popular culture, the well-known satirical cartoon South Park aired an episode called “Deep Learning,” with ChatGPT as its focal point (Lancaster, 2023). The episode loosely resembled a modern retelling of Edmond Rostand's classic play Cyrano de Bergerac. For those unfamiliar, the play's central plot revolves around Cyrano de Bergerac's attempts to help Christian de Neuvillette win the love of Cyrano's beautiful and intelligent cousin Roxane. Christian is portrayed as physically attractive and courageous, but he lacks the eloquence and wit needed to capture Roxane's heart through words. To appease his own secret desire for Roxane, Cyrano agrees to help Christian win her love by writing eloquent love letters and passionate speeches on his behalf. In the South Park episode, Stan, one of the main characters, lacks the tact and thoughtfulness to continue to romance his long-time girlfriend, Wendy. Wendy complains that the text conversations between them feel empty and that Stan is failing to communicate his real feelings. A friend then introduces Stan to ChatGPT and explains to him that he can use the language model to respond to messages from his girlfriend. From that moment forward, when Stan receives a text message from Wendy, he copies and pastes the messages into the ChatGPT interface, and then in turn, relays the response from ChatGPT back to Wendy. And in the same way that Roxane unknowingly falls in love with the passionate and thoughtful words written by Cyrano, Wendy unknowingly begins to fall hopelessly in love with the words written by ChatGPT.
While it is difficult to take any content from this often tongue-in-cheek cartoon too seriously, the basic premise of the episode was rooted in truth. In an article from Mashable, an investigative journalist revealed that users on the popular dating app Tinder were in fact using ChatGPT to generate responses to prospective mates (Lovine, 2022). The article further indicated that numerous prominent young influencers on social media had published videos demonstrating just how effective this technique was in winning over the interest of others on the app. As comical and ethically questionable as this behavior is, it does go a long way in resolving the question that we have sought to answer. If modern LLMs can engage in social interactions that consistently generate the early foundations of romantic feelings in the unwitting participants with whom they are interacting, it is not difficult to conclude that we have already crossed that threshold, and that AI has achieved at least the semblance of social intelligence.
In an even more troubling example, the Peabody College at Vanderbilt University had to issue a formal apology after using ChatGPT to write an email that was subsequently sent out to its student body related to a recent mass shooting that had occurred at Michigan State University (Figure 1.1). The fact that ChatGPT was used was apparent to the recipients of the email because its use was cited at the bottom of the message with the words "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023" (Korn, 2023).
Figure 1.1 AI-generated letter of grief and mourning from the Peabody College EDI office
Critics of the email felt that the use of AI to generate an email about community in a time of tragedy was “in poor taste.” However, the objections did not pertain to the actual content of the email. The email was well written, tactful, thoughtful, and likely would have captured the intended tone if not for the open disclosure of the use of AI. And herein lies the true crux of the objection—the unsettling realization that the email was able to convey empathy and emotional resonance adequately and effectively, even in the presumable absence of the underlying human characteristics (of emotions and self-awareness) that are generally understood to be the source of such feelings.
The Question of Weaponization
There is no denying the fact that the new capabilities emerging from LLMs will radically change the way we experience and interact with the world. And as with any new and powerful technology, societal structures of power are already considering how this tool can be weaponized. Governments, militaries, and political interest groups are already investigating opportunities to use this capability to influence and manipulate the balance of power on a global scale. In Russia, September 1st is known as “knowledge day” and marks the day that students across the country generally return to school. In 2017, knowledge day began with a public address from Russian President Vladimir Putin to the nation's students, in an effort to set the tone for their upcoming academic lessons. In this address, Putin discussed the future of the world. He did not follow the typical inspirational script and tell the kids that the future of the world belongs to them. Instead, he blatantly declared to them that the future of the world belongs to AI. He stated:
Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world (Meyer, 2017).
There are two important takeaways here. First, emerging AI technology will radically shape the world in ways that no other technology has ever done before. Second, and more importantly, even the world's most powerful leaders and authoritarians understand that AI will be the biggest factor in defining the global distribution of power in the future. Nation-states, governments, politicians, corporations, criminal organizations, and even individuals all stand to gain or lose in the zero-sum game of power, and AI will be the dominating tool to shape that balance. The world is paying attention. Even people who are not technologists or engineers have at least a general understanding of the potential influence of future AI. There is no question that this tool will be weaponized. Nation-states will use it for surveillance, intelligence, and espionage. Governments will use it to distribute propaganda and control sociopolitical narratives. Politicians will use it to deceive, to win favor, and to accrue power and influence. Corporations will use it to manipulate consumer markets and to win market share. Criminal organizations will use it to deceive and to defraud. And even individuals will use it to expand opportunities, achieve personal objectives, or pursue personal vendettas. And in the future, all of these things will be done on a scale hitherto unseen.
The question that I seek to answer in this book is not if this power will be weaponized—there is no question that it will. Rather, I seek to answer how these capabilities will be weaponized:
Will humans manipulate AI systems to achieve their objectives?
Will AI systems manipulate humans (at the direction of other humans)?
Will the machines become sentient and maximize their own self-interests over the conflicting interests of their human creators?
This last question addresses the worst-case scenario as portrayed in so many dystopian science fiction novels and films. All of these questions are worth consideration. Even the worst-case scenario, which may seem farfetched for some, is a possibility that is increasingly becoming a concern for many highly intelligent and accomplished individuals, and as such, should also be considered in our analysis.
To answer these questions, we will need to go on a fairly extensive journey. We should be deliberate and thorough in our approach, and leave no stone unturned. The early chapters of this book will address fundamental knowledge that is critical to answering these questions that we have posed. This will include a foundational discussion of social exploitation, social robotics, and discussions of consciousness and what it means to “be alive.” Prior to discussing how machines may be able to leverage social interactions for malicious purposes, it is important to understand, first and foremost, how social interactions can be weaponized in the first place (completely independent of machine involvement). To that end, I will begin by examining how an understanding of social psychology can be exploited to manipulate and control people—a practice that has come to be known as social engineering. From there, I will discuss the history of social robotics. I will then examine the confluence of the two—specifically, how social robotics have historically been used for psychological manipulation and social engineering. And then, I will seek to answer the difficult question of whether it could even be possible to replicate the human experience (of consciousness, emotions, and feelings) with technology. Finally, in the latter parts of this book, I will use that foundational knowledge to effectively speculate on how LLMs will be weaponized in the future.
Chapter 2 Social Engineering and Psychological Exploitation
Social engineering is a term used to describe deliberate efforts to manipulate others, often by means of deceit or misrepresentation. While discussing various contexts and uses of social engineering throughout this chapter, I will often use the terms threat actor or attacker to describe those deliberately engaging in deceptive social engineering efforts, and I will use terms like target or victim to describe the subjects of those efforts. Nothing derogatory is implied by any of these terms. Social engineering has a broad range of applications and is used in many ways. Even when used for deception, some of these uses are based on malicious intentions, while others may be based on good intentions. The use of social engineering for deceptive purposes is a moral parallax at best. For most people, their views on morality and ethics are largely contingent upon their own personal experiences, beliefs, and values. Moreover, even a single person's opinions of morality and ethics may change and evolve over time, as they have new experiences or gain additional information. To a large extent, the differences between right and wrong are relative to cultural, geopolitical, and ideological perspectives. Depending on your perspective, sometimes the “bad guys” are not criminals, rogue threat groups, or terrorist cells. Sometimes, they are financially motivated corporate interests that seek to influence and manipulate, and sometimes they are the very governments that are supposed to serve us.
Consider the old phrase “one man's terrorist is another man's freedom fighter.” While the precise origins of this phrase are unclear, it does illustrate how personal ethics are often relative to cultural and geopolitical worldviews. These questions of morals and ethics are rarely as simple as black and white. The lines that divide right from wrong, especially within the context of the use of deception, are often blurry, shifting, or indiscernible. It is not my intention to solve, within the pages of this book, the ethical question of when or even if it is ever morally defensible to use social engineering to manipulate others through the use of deception.
Instead, my intentions are more modest. In this chapter, I seek only to acknowledge the wide range of different ways that social engineering is commonly used for manipulation (both against individual targets and at scale), and then in later chapters to explain how emerging technology (specifically large language models, or LLMs) will be weaponized for this same purpose.
Social engineering is accomplished by exploiting expected patterns of human behavior, emotions, and social interactions to influence an individual's decisions or actions. At its core, social engineering capitalizes on the inherent psychological tendencies of human beings. It relies on exploiting various aspects of human interaction to manipulate individuals into behaving in a desired manner. Understanding social psychology allows social engineers to craft persuasive and deceptive tactics that can influence people in various contexts, such as politics, marketing, or interpersonal relationships. Social engineering preys on human vulnerabilities and social tendencies to influence another's behaviors and decision-making. According to available accounts, the term “social engineering” is presumed to have originated in the late 1800s:
Social engineering is a practice that is as old as time. As long as there has been coveted information, there have been people seeking to exploit it. The term social engineering was first used by Dutch industrialist J.C. Van Marken in 1894. Van Marken suggested that specialists were needed to attend to human challenges in addition to technical ones (CompTIA, n.d.).
When devising a plan to achieve a given objective, there are often multiple elements that must be considered. Refined business and organizational processes in the modern world often include technical systems, human interactions, or a combination thereof. To exploit these processes, a plan may need to include technical engineering to address system-level components of the target process, but it may also need to include social engineering to exploit the portions of the process that require human oversight or interaction. Over the years, as increasingly more consideration has been given to the technical security of solutions, social engineering has commonly become the de facto “path of least resistance” for many hackers in the cyberspace. On the one hand, a technical engineering team can perform technical testing to identify computational or functional flaws