What Worries Me About AI Taking Over the World

11/21/2019

usmsystems36 · Artificial Intelligence · 14 min read


If you were around in the 1980s and 1990s, you may remember the now-extinct phenomenon of "computer phobia". I personally witnessed it a few times as late as the early 2000s: as personal computers entered our lives, our workplaces, and our homes, many people responded with anxiety, fear, or even aggression. While some of us were fascinated by computers and amazed at what we glimpsed they could do, most people did not understand them. Computers felt alien, abstract, and in many ways threatening. People feared being replaced by technology.

Most of us react to technological change with unease at best, panic at worst. That may be true of any change. But remarkably, most of what we worry about never ends up happening. Fast forward a few years, and the computer-haters learned to live with computers and to use them for their own benefit. Computers did not replace us and did not cause mass unemployment, and these days we cannot imagine life without our laptops, tablets, and smartphones. The threatening change became the comfortable status quo.

Yet at the same time that our fears failed to materialize, computers and the Internet enabled threats that almost no one warned us about in the 1980s and 1990s: ubiquitous mass surveillance; hackers going after our infrastructure or our personal data; psychological alienation on social media; the loss of our patience and our ability to focus; the easy political or religious radicalization of impressionable minds online; hostile foreign powers hijacking social networks to undermine Western democracies.

https://usmsystems36.wixsite.com/ainews36/post/what-worried-about-ai-taking-the-world

Inversely, if most of our past fears turned out to be irrational, the most worrying consequences of technological change have historically been the ones few people were concerned about at all. One hundred years ago, we could not really predict that the transportation and manufacturing technologies we were developing would enable a new form of industrial warfare that would wipe out tens of millions in two world wars. We did not foresee that the invention of radio would enable a new wave of mass propaganda that would contribute to the rise of fascism in Italy and Germany. The breakthroughs in theoretical physics of the 1920s and 1930s were not accompanied by press articles about how these developments would soon enable thermonuclear weapons. And today, despite decades of alarm about the climate, a large proportion (44%) of the American public chooses to ignore it. As a civilization, we are very bad at correctly identifying future threats and worrying about them appropriately, just as we are prone to panicking over irrational fears.

Today, as many times before, we are facing a new wave of change: cognitive automation, which can be broadly summarized under the keyword "Artificial Intelligence". And as in the past, we fear that this new technology will harm us: that AI will cause mass unemployment, or that it will acquire agency of its own, become superhuman, and destroy us. But what if we are worrying about the wrong thing, as we have almost every time before? What if the real danger of AI lies far from the "superintelligence" and "singularity" narratives that so many people fear today? In this post, I want to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments.
Of course, this is not the only risk arising from the development of cognitive technologies. There are many others, in particular issues related to the harmful biases of machine learning models, and other people are raising awareness of those issues far better than I could. I chose to write specifically about mass population manipulation because I see this risk as both pressing and badly underappreciated.

This risk is already a reality today, and a number of long-term technological trends are going to amplify it considerably over the next few decades. As our lives move online, social media companies gain ever greater visibility into our lives and minds. At the same time, they gain increasing access to behavioral control vectors, in particular through algorithmic newsfeeds, which control our information consumption. This casts human behavior as an optimization problem, an AI problem: it becomes possible for social media companies to iteratively tune their control vectors in order to elicit specific behaviors, just as a game AI iteratively refines its strategy to beat a level, driven by score feedback. The only bottleneck in this process is the intelligence of the algorithm in the loop, and as it happens, the largest social network company is currently investing billions in fundamental AI research. Let me explain in detail.

Social media as a psychological panopticon

In the past 20 years, our private and public lives have moved online. We spend an ever greater fraction of each day staring at screens, and our world is moving toward a state in which most of what we do consists of consuming, modifying, or creating digital information. A side effect of this long-term trend is that corporations and governments are now collecting staggering amounts of data about us, in particular through social network services: who we communicate with; what we say; what content we consume, from movies and music to news; what mood we are in at specific times. Eventually, almost everything we perceive and everything we do will be recorded on some remote server.

This data, in theory, allows the organizations that collect it to build extremely accurate psychological profiles of both individuals and groups. Cross-correlating your opinions and behavior with those of thousands of similar people yields an uncanny understanding of what makes you tick, greater than what you could achieve through mere introspection (for example, Facebook "likes" enable algorithms to assess your personality better than your own friends can). This data makes it possible to predict, a few days in advance, when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you still feel undecided. And this profiling power is not limited to individuals: large groups can be even easier to predict, because aggregating data points averages out randomness and individual outliers.

The use of digital information consumption as a psychological control vector

Passive data collection is not where it ends. Social network services increasingly decide what information we consume: what we see in our newsfeeds is algorithmically "curated". Opaque social media algorithms get to decide, to an ever-increasing extent, which political articles we read, which movie trailers we see, who we keep in touch with, and whose opinions reach us. Integrated over many years of exposure, the algorithmic curation of the information we consume gives these systems considerable power over our lives, over who we become. If Facebook gets to decide, over the course of many years, which news you see (real or fake), whose political status updates you see, and who sees yours, then Facebook is effectively in control of your worldview and your political beliefs.

Facebook's business is, at its core, influencing people; that is the service it sells to its customers, advertisers included, political advertisers among them. To that end, Facebook has built a fine-tuned algorithmic engine. This engine can do more than merely shape your opinion of a brand or steer your next smart-speaker purchase: it can tune your mood, adjusting the content it feeds you to make you angry or happy, at will.
It could swing elections.

Human behavior as an optimization problem

In short, social network companies can simultaneously measure everything about us and control the information we consume, and this trend is accelerating. When you have access to both awareness and action, you are looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning the information you feed them, until you begin to observe the opinions and behaviors you wanted to see. A large subset of the field of AI, in particular "reinforcement learning", is about developing algorithms to solve such optimization problems as efficiently as possible: to close the loop and achieve full control of the target at hand. In this case, the target is us. By moving our lives into the digital realm, we become vulnerable to that which rules it: AI algorithms.

This is all made considerably easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack:

- Identity reinforcement: an old trick, leveraged since the very first ads in history, and one that still works exactly as it did the first time. By associating a given view with markers you identify with (or wish you did), an algorithm can get you to adopt that view automatically. In the case of AI-optimized social media consumption, a control algorithm can make sure that you only see content (whether news articles or posts from your friends) in which the views it wants you to hold co-occur with your own identity markers, while views it wants you to abandon appear alongside markers you reject.
- Negative social reinforcement: if you make a post expressing a view the control algorithm does not want you to hold, the system can choose to show it only to people who hold the opposite view (perhaps acquaintances, perhaps strangers, perhaps bots) and who will criticize it harshly. Repeated many times, such social backlash is likely to drive you away from your initial views.
- Positive social reinforcement: if you make a post expressing a view the control algorithm wants to spread, it can choose to show it only to people who will "like" it (perhaps even bots). This reinforces your belief and gives you the impression of belonging to a supportive majority.
- Sampling bias: the algorithm can preferentially show you posts from your friends (or the media at large) that support the views it wants you to hold. Placed in such an information bubble, you come to believe these views enjoy far wider support than they actually do.
- Argument personalization: the algorithm may observe that, in people whose psychological profile is close to yours, exposure to certain content produced the desired shift in views. It can then serve you the content likely to be most effective given your particular views and life experience. In the long run, the algorithm may even generate such maximally effective content from scratch, specifically for you.

From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched; they are simply the way we work. They are in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

The current landscape

Remarkably, mass population manipulation, in particular political control, arising from placing AI algorithms in charge of our information diet does not require very advanced AI. You don't need self-aware, superintelligent AI for this to be a dire threat: current technology is enough.
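The closed sense-act loop described above can be made concrete with a toy sketch. Everything in it is invented for illustration — the category names, the engagement probabilities, the simulated user — and real feed-ranking systems are vastly more elaborate; but even the simplest relative of reinforcement learning, an epsilon-greedy bandit, shows the shape of the problem:

```python
import random

def engagement_loop(user_response, n_rounds=1000, epsilon=0.1, seed=0):
    """Toy sense-act loop: pick a content category (action), observe whether
    the simulated user engages (perception), update the estimates, repeat."""
    rng = random.Random(seed)
    categories = list(user_response)
    counts = {c: 0 for c in categories}
    estimates = {c: 0.0 for c in categories}  # running mean engagement
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            choice = rng.choice(categories)              # explore
        else:
            choice = max(categories, key=estimates.get)  # exploit best guess
        engaged = rng.random() < user_response[choice]   # simulated user
        counts[choice] += 1
        estimates[choice] += (engaged - estimates[choice]) / counts[choice]
    return max(categories, key=estimates.get)

# Invented engagement probabilities for one simulated user.
simulated_user = {"outrage": 0.9, "hobby posts": 0.55, "long-form news": 0.35}
favored = engagement_loop(simulated_user)
```

The point is that this trivial loop, given nothing but a scalar engagement signal, converges on whichever content category best captures attention; no understanding of the content, and no sophisticated AI, is needed.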
Social network companies have been working on this for years, with significant results. Even if they may only be trying to maximize "engagement" and to influence your purchasing decisions rather than to change your view of the world, the tools they have developed are already being hijacked by hostile state actors for political ends, as seen in the 2016 Brexit referendum and the 2016 US presidential election. This is already our reality. But if mass population manipulation is already possible today, at least in theory, why hasn't the world been upended yet? In short, I think it is because we are still really bad at AI. But that may be about to change.

Until 2015, ad targeting algorithms across the industry were running on mere logistic regression. In fact, this is still true to a large extent today; only the biggest players have switched to more advanced models. Logistic regression, an algorithm that predates the computing age, is one of the most basic techniques you could use for personalization, and it is a big part of the reason why so many of the ads you see online are desperately irrelevant. Similarly, the social media bots used by hostile state actors to sway public opinion feature little to no AI. They are all extremely primitive. For now.

Machine learning and AI have been making fast progress in recent years, and that progress is only beginning to reach targeting algorithms and social media bots. Deep learning only started making its way into newsfeeds and ad networks around 2016. Who knows what comes next. It is quite striking that Facebook has been investing enormously in AI research and development, with the explicit goal of becoming a leader in the field. When your product is a social newsfeed, what use do you have for natural language processing and reinforcement learning?

We are looking at a company that builds fine-grained psychological profiles of almost two billion human beings, that serves as a primary news source for many of them, that runs large-scale behavior manipulation experiments, and that aims to develop the best AI technology in existence. The world has never seen anything like it. Personally, it scares me. And Facebook may not even be the most worrying threat here. Consider, for example, the use of information control to enable unprecedented

forms of authoritarianism, such as China's "social credit system". We like to pretend that big corporations are the all-powerful rulers of the modern world, but governments hold far greater coercive power. If algorithmic control over our minds becomes feasible, governments could turn into far worse actors than corporations.

Now, what can we do about it? How can we defend ourselves? As technologists, what can we do to avert the danger of mass manipulation through our social newsfeeds?

The flip side of the coin: what AI can do for us

Importantly, the existence of this threat does not mean that all algorithmic curation is bad, or that all targeted content is bad. Far from it. Both can serve a valuable purpose. With the rise of the Internet and of AI, putting algorithms in charge of our information diet is not just an inevitable trend; it is a desirable one. As our lives become increasingly digital and connected, and our world increasingly information-intensive, we will need AI to serve as our interface to the world. In the long run, education and self-improvement may be among the most impactful applications of AI, and they will work through dynamics that largely mirror the nefarious AI-enabled newsfeed trying to manipulate you. Algorithmic information management has tremendous potential to help us, to empower individuals to realize their potential, and to help society manage itself better.

The problem is not AI itself. The problem is control. Newsfeed algorithms should be accountable to the user, optimizing for goals the user sets, rather than pursuing opaque objectives such as swaying the user's political views or maximally wasting their time. We are talking about your news, your worldview, your friends, your life; the influence that technology has on you should naturally be under your own control.
Information-management algorithms should not be a mysterious force inflicted on us in order to serve ends that run counter to our own interests. Instead, they should be a tool in our hands, a tool we point at our own goals: at education and personal growth, say, rather than maximized entertainment. Here is an idea: any algorithmic newsfeed with substantial adoption should:

- Be transparent about what the feed algorithm is currently optimizing for, and about how these objectives shape your information diet.
- Give you intuitive tools to set these goals yourself. For example, you should be able to configure your newsfeed to maximize learning and personal growth in directions you choose.
- Always display a measure of how much time you are spending on the feed.
- Offer tools to keep that time in check, such as a daily time target past which the algorithm itself tries to nudge you off the feed.

Augmenting ourselves with AI while retaining control

We should build AI to serve human beings, not to manipulate them for profit or political gain. What if your newsfeed algorithm did not think like a casino operator or a propagandist? What if it were instead closer to a mentor or a good librarian, using its keen understanding of your psychology, and of the psychology of millions of people like you, to recommend the next book that will most resonate with your goals and make you grow? An AI capable of guiding you along the best path through the space of experiences: a sort of navigation tool for your life. What if you could look at your own life through the lens of a system that has seen millions of lives? Or write a book together with a system that has read every book? Or do research together with a system that sees the full scope of current human knowledge? In products where you remain in full control of the AI you interact with, a more advanced algorithm ceases to be a threat and becomes a net asset, enabling you to achieve your own goals more efficiently.
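The user-held feed settings proposed above can be sketched in a few lines. This is a toy illustration only: `FeedConfig`, `should_serve`, and every field name are hypothetical, and do not correspond to any real platform's API.

```python
from dataclasses import dataclass

@dataclass
class FeedConfig:
    """User-owned settings of the kind proposed above (all names invented)."""
    goals: tuple           # topics the user explicitly asked to see more of
    daily_budget_min: int  # self-imposed cap on minutes per day

def should_serve(config, minutes_used_today, item_topic):
    """Serve an item only if it matches a declared goal and the user is
    still under their own time budget."""
    if minutes_used_today >= config.daily_budget_min:
        return False  # budget exhausted: nudge the user off the feed
    return item_topic in config.goals

cfg = FeedConfig(goals=("math", "woodworking"), daily_budget_min=30)
```

The design choice worth noticing is where the knobs live: the goals and the time budget are fields the user sets, not metrics the operator optimizes.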

The anti-Facebook structure


In summary, our long-term future is one in which AI will be our interface to the world, a world made of digital information. This can equally well lead to empowering individuals with greater control over their lives, or to a total loss of agency. Unfortunately, social media is currently headed down the wrong road, but it is still early enough to reverse course. As an industry, we need to develop product categories and markets in which the incentives are aligned with putting the user in charge of the algorithms that affect them, rather than with using AI to exploit the user's mind for profit or political gain. We need to strive toward anti-Facebook products.

In the near future, such products may take the form of AI assistants: digital mentors programmed to help you, whose goals in interacting with you remain under your control. Today, search engines can be seen as an early, primitive example of an AI-driven information interface that serves its users instead of hijacking their mental space. A search engine is a tool you use deliberately to reach specific goals, rather than a passive feed that chooses what to show you: you tell it what it should do for you. And rather than maximizing the time you spend on it, a search engine tries to minimize the time from question to answer, from problem to solution.

A search engine is still an AI layer between us and the information we consume, so you might think: could it bias its results to try to manipulate us? Yes, that risk is latent in every information-management algorithm. But in stark contrast to social networks, market incentives in this case are actually aligned with users' needs, pushing search engines to be as relevant and objective as possible. If a search engine fails to be maximally useful, essentially nothing stops users from switching to a competing product. And importantly, a search engine has a much smaller psychological attack surface than a social newsfeed.
For most of the threats profiled in this post to arise, a product needs to combine all of the following features:

- Both awareness and action: the product not only controls the information it shows you (news and social updates), it also "senses" your current mental state through likes, chat messages, and status updates. Without both awareness and action, no reinforcement learning loop can be established; a read-only feed would be dangerous only as a vector of classical propaganda.
- Centrality to our lives: the product should be a primary source of information for at least a subset of its users, who may spend several hours a day on it. An auxiliary, specialized feed (such as Amazon's product recommendations) is not a serious threat.
- A social component, which enables far broader and more effective psychological control vectors (in particular social reinforcement). An impersonal newsfeed has much less leverage over our minds.
- Business incentives to manipulate users and to make them spend ever more time on the product.
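One way to read this list is as a checklist. The sketch below encodes it using my own shorthand labels (they are not established terminology), simply to make concrete the point that the danger comes from combining all four factors:

```python
# Shorthand names for the four criteria listed above (illustrative labels only).
RISK_FACTORS = (
    "senses_and_acts",       # reads your state (likes, messages) AND picks your feed
    "central_to_life",       # primary information source, hours of daily use
    "social_component",      # social-reinforcement vectors available
    "engagement_incentive",  # business model rewards maximizing time spent
)

def threat_level(product_traits):
    """Count how many of the risk factors a product combines."""
    present = [f for f in RISK_FACTORS if product_traits.get(f, False)]
    return len(present), present

social_network = {f: True for f in RISK_FACTORS}  # all four boxes ticked
search_engine = {"central_to_life": True}         # used deliberately, no closed loop
```

On this reading, a search engine scores one factor out of four, while a social network scores all four, which is the asymmetry the next paragraph draws out.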


Most AI-driven information-management products do not meet these requirements. Social networks, on the other hand, are a frightening combination of all these risk factors. As technologists, we should gravitate toward products that do not feature these characteristics, and push back against products that combine them all, if only because of their potential for dangerous abuse. Build search engines and digital assistants rather than social newsfeeds. Make your recommendation engines transparent, configurable, and constructive, rather than slot-machine-like engagement maximizers designed to waste human time. Invest your UI, UX, and AI expertise in building great configuration panels for your algorithms, so that your users can engage with your product on their own terms. And, importantly, we should educate consumers about these issues, so that they reject manipulative products, generating enough market pressure to align the incentives of the technology industry with those of consumers.


Conclusion

Social media not only knows enough about us to build powerful psychological models of individuals and groups; it also increasingly controls our information diet, and it has access to a set of remarkably effective psychological exploits with which to alter what we believe, how we feel, and what we do. When awareness of our mental state and the ability to act on it are combined by a sufficiently advanced AI algorithm running in a continuous loop, our beliefs and behavior can be effectively hijacked.

Using AI as our interface to information is not, in itself, the problem. Such AI interfaces, if well designed, could be tremendously beneficial and empowering for all of us. The key factor is that the user must stay in full control of the algorithm's objectives, using it as a tool to pursue their own goals (in the same way you would use a search engine). As technologists, it is our responsibility to push back against products that take control away from their users, and to dedicate our efforts to building information interfaces that keep users in charge. Do not use AI as a tool to manipulate your users; instead, give your users AI as a tool to gain greater agency over their circumstances.

One of these paths leads to a place that genuinely scares me; the other leads to a more humane future. There is still time to take the better one. If you work on these technologies, keep this in mind. You may not have evil intentions; you may simply not care, or you may value your RSUs more than our shared future. But whether or not you care, your choices affect all of us, because you have a hand in building the infrastructure of the digital world. And eventually, you may be held responsible for them.


© 2023 by TheHours. Proudly created with Wix.com
