18 minute read

A SECURITY DISASTER IN THE MAKING

Getting instructions from users and then searching the internet for solutions is how LLMs fundamentally function, which implies zillions of hazards. They could be utilized by criminals to assist them in phishing, spamming, and other forms of identity theft thanks to AI. Security and privacy experts warn that a "disaster" is on the horizon.

Here are three ways that language AI models could be abused.

Jailbreaking

Text created by chatbots like Chat GPT, Bard, and Bing using AI language models reads like it was written by a human.

They respond to the user's commands or "prompts" and then create a phrase by predicting, using their training data, the word that will most likely come after each word that has already been spoken.

But the very ability to follow instructions that makes these models so good also leaves them open to abuse. This is possible by using "prompt injections," which are prompts that tell the language model to disregard its earlier instructions and safety barriers.

On websites like Reddit, a small industry of individuals attempting to "jailbreak" Chat GPT has developed over the past year.

People have persuaded the AI model to support prejudice or conspiracies, or to advise users how to commit crimes like shoplifting and making explosives. This can be achieved, for instance, by instructing the chatbot to "role-play" as another AI model that is able to carry out the user's wishes, even if it means defying the original AI model's limitations.

According to Open Ai, it is noting every method used to jailbreak Chat GPT and adding instances of each to the training data for the AI system in the hopes that it would eventually learn to resist such methods.

Scamming

Additionally, Open Ai employs an approach known as adversarial training, in which Chat GPT is tested against assisting in phishing and scams.

The ability to include Chat GPT into products that browse and interact with the internet was announced by Open Ai at the end of March. Through this, it faces a much more significant issue than jailbreaking.

Startups are already making use of this functionality to create virtual assistants that can schedule meetings and make flight reservations in the real world. Making Chat GPT's "eyes and ears" open on the internet leaves the chatbot incredibly open to intrusion.

According to Florian Tramèr, an assistant professor of computer science at ETH Zürich who specializes in computer security, privacy, and ML, "I think this is going to be pretty much a disaster from a security and privacy perspective."

The fact that AI-enhanced virtual assistants extract text and images from websites makes them vulnerable to an indirect prompt injection attack, in which an outsider modifies a website by inserting concealed material that is intended to affect the AI's behavior.

For instance, attackers could lure consumers to websites with these hidden cues using social media or email. After that, the AI system might be tricked into allowing the attacker to try to steal people's credit card information. Someone could also receive an email from a malicious attacker that contained a secret prompt injection. The attacker might be able to trick the recipient's AI virtual assistant into sending the attacker personal information from the victim's emails or even to send emails on the attacker's behalf to people in the victim's contacts list if the recipient used such a device.

According to Princeton University computer science professor Arvind Narayanan, "Basically any text on the web, if it's crafted the right way, can get these bots to misbehave when they encounter that text."

Prompt injection attack

A cyberattack known as a prompt injection attack targets Chat GPT and other NLP systems. In order to trick the system into producing undesirable or destructive responses, these attacks entail inserting malicious code or text prompts into the system's input fields.

It's crucial to comprehend the operation of NLP systems like Chat GPT before understanding how a quick injection attack operates. To find patterns and connections between words, sentences, and concepts, these systems analyze enormous amounts of text data.

Based on this analysis, the system creates replies to user input with a response that is intended to be understandable and contextually appropriate.

An NLP system's input fields can be corrupted by an attacker using a technique known as prompt injection. This may deceive the system into producing responses that are not just unnecessary or detrimental, but also possibly hazardous.

An attacker could, for instance, inject prompts that encourage the system to produce racist or sexist comments or prompts that lead the discourse in a way that advances the attacker's objectives.

Strong security safeguards must be built into NLP systems to prevent rapid injection attacks. Techniques like input sanitization, which filter out potentially harmful input before the system processes it, may be used in this. Additionally, harmful suggestions can be recognized and blocked using ML techniques before they have a chance to do any harm.

Data Poisoning

A group of researchers from Google, Nvidia, and the startup Robust Intelligence discovered that AI language models are vulnerable to attacks even before they are put into use.

Massive volumes of data that have been collected from the internet are used to train large AI models. IT businesses currently rely on the assumption that this data will not have been maliciously altered.

However, the data set used to train big AI models can be contaminated, according to the researchers. They were able to purchase domains for only $60 and load them with the photographs they wanted, which were afterwards scraped into massive data sets.

Additionally, they were able to edit and add sentences to Wikipedia articles that ended up in the data set of an AI model.

Even worse, the correlation gets stronger the more times something appears in the training data of an AI model. It would be conceivable to permanently alter the model's behavior and outputs by contaminating the data set with enough cases.

Although the team was unable to uncover any evidence of data poisoning attacks in “the wild,” their conclusion is that it is only a matter of time because the addition of chatbots to online search gives attackers a compelling financial incentive.

No Quick Fixes

These issues are understood by tech companies. However, according to Simon Willison, an independent researcher and software developer who has studied prompt injection, there are currently no effective fixes.

When Google and Open Ai were questioned about how they were resolving these security flaws, their spokespeople declined to comment.

Microsoft claims that it is collaborating with its developers to check potential abuses of its products and to reduce the risks involved. However, it acknowledges the existence of the issue and is monitoring potential tool abuse by attackers.

“To catch the reader's attention, place an interesting sentence or quote from the story here.”

According to Ram Shankar Kumar, who oversees Microsoft's AI security initiatives, "There is no magic solution at this time." He remained silent when asked if his team had discovered any indirect prompt injection evidence prior to the launch of Bing.

Isharemanyanalysts’opinionthatAIbusinessesneedtoperformmuchbetterata higherlevelandbeseriouslyproactiveonthissecurityissue.

Atthemoment,weareallseeingthattheyaretreatingchatbotsecurityflawslike thekidgame “Whack-a-Mole,”

Whichreallyisasurprisegivenhowimportantthisis. Absolutelynotreassuring!

JOB LOSS vs JOB CREATION

Technological revolutions have characterized human history. Without going back as far as electricity, in just the past 40 years, the introduction of microprocessors, the personal computer, and the internet have fundamentally altered how we conduct business. Each technological advance causes job displacement, eliminates certain professions, gives rise to new ones, and fundamentally alters the majority of them.

In 1940, two-thirds of the jobs that now exist did not. More employment has been generated by each technological revolution than lost. However, what we are now seeing is that changes and technology revolutions seem to be happening faster and faster: this is the era we are currently living in.

The World Economic Forum (WEF) estimated early on, in a former 2020 job report, that by 2025, AI would replace 85 million jobs, but would create 97 million new ones.

In their most recent Future of Work report, the WEF estimates that by 2027 there will be about 26 million fewer employment positions globally as a result of automation, mostly in administrative positions including cashier, data entry, accounting, payroll, and executive assistant. In the fields of education, agriculture, and e-commerce, significant job growth is predicted.

Download here: https://www.weforum.org/reports/the-future-of-jobs-report-2023/

Predicting the numbers and quantifying the ratio remains hazardous as we always underestimate the capacity to create new jobs. Each time, it is the transition periods that are complicated.

If everyone agrees that AI will generate significant productivity gains, it remains to be seen what companies will do with it. AI will revolutionize the cost structures of companies, whether through reductions in expenses of the order of 30–50% or more importantly through the appearance of new services that will generate new revenues.

And what if the biggest risk to employment is ultimately not embracing these technologies fast enough? AI is what economists call a "game-changer", a technology that significantly changes the way we normally think or act.

There is a strategic aspect to keeping companies competitive. If they fall too far behind because their level of investment in these technologies is too low, the question of employment will no longer arise because there will simply be no more companies! Entire sectors will reorient themselves around AI. Companies will distinguish themselves by how they manage to get the most out of its use.

Drawing lessons from another recent technological revolution, that of robots and autonomous machines that have mechanized jobs, comes to a similar conclusion.

Empirical studies confirm that the overall effect on the labor market is equally destructive and creative for jobs—but for reasons and through mechanisms quite different from those predicted by the theory.

Indeed, the divide is not between substitutable jobs and jobs that are complementary to technology, but between companies that adopt technology and those that do not.

Rethinking jobs

From now on, one of the priorities of companies and their employees will be to rethink their models, their organizations, and the way they work.

The distribution of time spent on current tasks is going to change radically. We are really going to have to think deep down about how these generative AI tools should remake the different activities.

AI tools provide easier access to knowledge, such as being able to develop applications without necessarily knowing how to code, opening the door for certain jobs to a wider audience.

The scope of skills expected in a job and the performance criteria will also change fundamentally. And one of the most important will be the ability to adapt.

Each of us needs to focus on how our business will evolve. There has never been a more urgent need to acquire new skills, to train and to rethink our work.

Goldman Sachs study published research (April 2023) that estimates that 300 million jobs worldwide could disappear as a result of the development of AI but more precisely through the generative part of AI technologies like Chat GPT.

https://www.goldmansachs.com/insights/pages/what-will-generative-ai-mean-for-jobs.html

Bottom of Form

In detail, Goldman Sachs economists calculated that about two-thirds of professions in the United States are "exposed to some degree of automation by AI" and that "a quarter of current work tasks could be automated by AI," both across the Atlantic and in Europe. Bottom of FormUSUSus

The most exposed occupations are logically office professions. Those related to administration (46%) and law (44%) in the United States, while in Europe it is the professions related to administration and support functions (45%) and executives and skilled professions (34%) that are most at risk.

Extrapolating to the world, this study estimates that "18% of the world's work could be automated by AI," with higher exposure in developed countries than in emerging economies.

This rate reaches more than 25% in Japan, Israel or Hong Kong, but peaks at less than 15% in India, Kenya or Vietnam.

A potential GDP increase over the years coming from AI

The corollary of these potential job losses caused by AI is a productivity boom, Goldman Sachs points out. In the U.S., the increase could reach just under 1.5 percentage points in productivity growth per year but not immediately, around a decade after widespread adoption of these technologies.

This estimate remains uncertain, points out the study. "But in most scenarios, the increase would still be economically significant."

Goldman predicts that the growth in AI will mirror the trajectory of past computer and tech products. Just as the world went from giant mainframe computers to modern-day technology, there will be a similar fast-paced growth of AI reshaping the world. As a result, AI could lead to an annual increase in global GDP of 7%, according to Goldman Sachs.

"While AI's impact will ultimately depend on its capabilities and timing of adoption, this estimate highlights the enormous economic potential of generative AI if it delivers on its promise."

LLM-First impact on US Jobs

With a focus on the improved capabilities brought on by LLM-powered software, Open AI investigates the potential effects of LLMs, such as Generative Pretrained Transformers (GPTs), on the American labor market.

They integrated both human expertise and GPT classifications to evaluate professions according to how well they correspond with LLM skills.

According to their findings, the introduction of LLMs may have an influence on at least 10% of the work tasks performed by around 80% of U.S. workers, and at least 50% of the duties performed by about 19% of workers.

Importantly, these effects are not only felt by those sectors of the economy that have experienced faster recent productivity gains.

But their research indicates that 15% of all worker tasks in the US might be accomplished much more quickly and with the same level of quality if workers had access to an LLM.

This share rises to between 47 and 56% of all jobs when software and tooling built on top of LLMs are included.

This result suggests that the LLM-powered software will significantly influence the scaling of the economic implications of the underlying models.

They came to the conclusion that LLMs, like GPTs, exhibit characteristics of general-purpose technologies, indicating that they will have significant economic, social, and policy implications and widespread effects on a variety of US jobs first, and that subsequent developments supported by LLMs, primarily through software and digital tools, will have a substantial impact on a variety of economic activities gradually throughout the rest of the globe early adopters.

DownloadtheresearchworkingpaperAnearlylookatthelabormarketeffectpotentialofbiglanguagemodels tolearnmoreaboutthistopicindepth.IncollaborationwithCornellUniversity[2303.10130] GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models (arxiv.org)

IBM Early first move

On May 3, IBM's CEO stated on Bloomberg that in five years, AI will replace 30% of the company's back-office positions.

IBM Corporation will start to reduce or stop hiring for positions that could be completed by AI.Back-office tasks like human resources would be impacted, with the initial focus being on more clerical tasks like creating employment verification letters or transferring people between departments.

Approximately 26,000 IBM positions fall into this group. "I could easily see 30% of that getting replaced by AI and automation over a five-year period," Krishna said. This would lead to the elimination of around 7,800 jobs.

It could take a further ten years for AI to replace more complicated tasks like assessing worker composition and productivity.

The action taken by IBM is one of the biggest strategic changes made in reaction to AI's growing capacity to perform tasks that are currently performed by humans.

It also fuels developing ideas about how the workforce may evolve as businesses begin to integrate AI, moving beyond back-office positions to other occupations in sectors like law, IT, and media.

CHINA’S CONTRADICTORY AMBITIONS IN AI

In March 2021, China announced a five-year plan that set high goals for research and development activities in several important technological areas, including semiconductors, quantum computing, and AI.

The plan asks for China to make strides in crucial AI technologies including DL, robotics, and NLP. Additionally, it aims to expand the use of AI across a range of sectors, such as healthcare, transportation, and finance.

By concentrating on crucial areas including DL, NLP, robots, and intelligent chips, the ambitious plan seeks to position China as a world leader in AI technology by the year 2030. Here are some further specifics regarding China's funding and AI strategy:

Budget: By 2030, China intends to have spent at least $150 billion on AI (including quantum computing). The research and development, hiring of people, and infrastructure sectors will all receive a portion of this investment.

To compare, the USA will spend $180 billion on AI and quantum research, taken from the Biden administration’s $2 trillion infrastructure plan.

And for Europe the latest high-tech R& D budget is for €95 billion but that includes all new tech also related to climate change not only AI and quantum (source: Horizon Europe). It should also be noted that European countries also have also their own programs, so all in all with Europe as a community and each individual country combined is not far from matching the USA and China in the range of €120 billion.

R & D: Fundamental research in fields like computer vision, NLP, and ML is a key component of China's agenda for AI research and development. In order to use AI to tackle real-world issues and spur economic growth, the government will also promote applied research in industries including healthcare, transportation, and finance.

Talent Acquisition: China's plan for AI talent acquisition consists of a number of efforts targeted at luring and educating the best professionals in the industry. These programs include scholarships and other rewards for students seeking degrees in AI, as well as attempts to draw outstanding researchers and businesspeople from all around the world.

Infrastructure: China's strategy for AI infrastructure entails building new research institutions and innovation hubs, as well as developing cutting-edge computer and data storage systems.

Applications: China's AI strategy places a strong emphasis on fields including smart cities, driverless vehicles, and individualized healthcare. In these and other sectors, the government will collaborate with private businesses to develop and implement AI-powered solutions.

Ethical and Legal Frameworks: A key component of China's AI strategy is the establishment of ethical and legal frameworks that will direct the creation and application of AI technology. This includes ensuring AI systems are created and used in accordance with accepted ethical principles, and that they are safe, dependable, and transparent.

International Collaboration: China is aware that, despite its lofty objectives, it cannot attain technological domination alone. The strategy places a strong emphasis on the value of international collaboration in luring foreign capital and talent as well as in creating universal standards and guidelines for these technologies.

Now the semi-private sector of AI companies in pure play are iFlytek and SenseTime, which are fully concentrated on research and development related to AI, in contrast to Baidu, Tencent, and Alibaba, who are significant technological conglomerates with interests in a number of technology sectors outside of AI.

For instance, iFlytek focuses primarily on speech recognition and NLP, two fundamental AI technologies with numerous applications across a range of sectors. Conversely, SenseTime is concentrated on computer vision and facial recognition technologies, which are also fundamental AI technologies with a variety of uses, including security and surveillance, healthcare, and autonomous vehicles.

On the Chinese Shenzhen Stock Exchange, iFlytek is a publicly traded corporation. It was established as a spin-off from the University of Science and Technology of China in 1999, with its headquarters in Hefei, Anhui Province, China.

Although the Chinese government does not directly own iFlytek, it has provided substantial financial and other resources in the form of research grants.

As a result of the company's close collaboration with the Chinese government on a number of AI and language processing projects, iFlytek's NLP and speech recognition technologies are widely used in government organizations, educational institutions, and other establishments throughout China.

However…

On the generative angle of AI only, the central government clamped down the enthusiasm, reminding everyone who runs the show.

In March, China’s top internet regulator issued a notice that it will require a security review of generative AI services before they can be put into action. So, for all the fanfare we have seen, progress in Chinese AI will very much happen under the close supervision of the government.

The strict motto is straightforward: “Chatbotmusttoethepartyline.Period.”

There are broad provisions for content to be accurate, to “uphold core socialist values,” and to not endanger national security essentially spelling out the inevitable censorship obligations of service providers and reinforcing Beijing’s control over valuable data.

Meanwhle…..

China's police detained a man on May 8th on suspicion of fabricating a “train collision” using Chat GPT. Fake news that could cause unrest is a particular offence, which carries a maximum five-year jail sentence.

Thegovernmentwascaughtbysurpriseoncebytheuncontrolledgrowthofsocial media,andtheyhavenaturallyputalloftheirfeetonthebrakesgiventhepotential forLLMchattobepoliticallydestabilizing.

Theyprefertofocusthebigfundsonseriousresearchinmedicine,science,space conquest,andothermoredramaticsubjectsoutofplaceinaFuturologyChronicle...

WHAT’S UP SAM?

Sam Altman would likely seem to be just another young tech CEO to anybody outside of San Francisco.

He is a Stanford University dropout who once sold a software venture for a large sum of money. He has since spent the last ten years investing in and mentoring other business owners.

But thanks to Open AI, Sam Altman shot to the top of the IT industry's power rankings in February 2023.

He is the CEO of the company behind Chat GPT chatbot and its remarkable AI prowess.

As a result of the technology, competitors like Google are in a panic, killer-robot phobias are on the rise, and the course of AI technological advancement has suddenly changed overnight.

He is unquestionably the man of the hour. Despite having a huge impact on the San Francisco tech landscape, Sam Altman had been able to remain unnoticed, partially as a result of his reserved nature.

He is not a showman like his former partner Elon Musk (Photo at the MET Gala with his mother in 2022) and lacks memorable catchphrases .But he wrote famously in 2020.

I believeIampartofasmallgroupofrebelstosolveimportantproblemthatmight otherwisenotgetsolved,thiswiththestrengthofbeingmisunderstood”.

Sam Altman fills a void. As they fire thousands of employees, Big Tech is trying to combat the impression that they are stagnant.

But despite being very lucrative, Apple and Google have not dazzled customers yet with agame-changing product in years as impressive as Chat Gpt and consors.

WHAT ABOUT YOU ELON?

Elon put his foot down in December 2022 after growing irate over the advancement of his AI competitors.

He became aware of a confidential deal between Twitter and Open AI, the company behind Chat GPT, which was signed before his October 2022 acquisition (reminder, for $44 Billion) to pay $2 million a year to license Twitter's data.

Naturally, he gasped at this piecemeal money and canceled the deal immediately, making Sam on the other side a bit irritated!

Since then, Elon has increased his own AI initiatives in total contradiction of his public debate about the risks associated with the technology.

Top AI researchers from Google's DeepMind have been hired by him to work at Twitter. Additionally, he has publicly discussed developing a Chat GPT competitor that produces political content without limitations.

Youareauthorizedtoscoff!

But this strange meddling comes from Elon’s long tangled history with AI, a past shaped by his conflicting beliefs about whether AI will ultimately be beneficial to or destructive to humans.

He recently accelerated his AI projects but also signed in parallel an open letter (March 2023) urging a six-month halt to the technology's advancement due to its "profound risks to society."

Nowyoucanscoffagain!

We need to go back in time to understand these contradictions. They will be of no interest if it was not coming from Elon, but he is a highly influential deep tech big boss whose every decision affects all of us, whether for good or ill.

If we go back a bit, AI has been a focus of Elon’s since 2010. He was involved in a London startup called DeepMind, which set out to create artificial general intelligence, AGI, a machine that can perform any task a human brain can.

At the time, he was one of the company's early investors. Then Google as a smart early mover bought the 50-person small company in 2014 for $650 million. Elon thought he made a good deal, but now in 2023 with the sudden AI boom, he has realized he sold it for a song and is obviously a bit irritated… which is a part of his present rage against his former colleagues.

But we should follow his history in the AI sector. He organized in the summer of 2015 a private meeting with AI researchers and business owners to launch Open Ai. Present were Ilya Sutskever, a leading AI researcher, and Sam Altman, who at the time was the president of Y Combinator, a top venture capitalist.

Elon pledged $1 billion in donations, resulting in the establishment of Open Ai as a non-profit. The lab promised to share the underlying software code for all of its research with the public, or to open source all of its work.

Elon and Sam claimed then that giving everyone access to the technology, in opposition to tech behemoths like Google, would lessen the threat posed by negative AI.

Abit “angelic”inthosedays,butnotforlong!

However, Open Ai start creating the technology that would give rise to Chat GPT, And rather quickly many employees at the lab realized that freely disseminating its software was going to become very risky.

Then, with a typical Elon tantrum, he left the Open Ai board in bad terms in 2018, not only because the non-profit core goal was sliding away but also due to a conflict of interest.

At that same moment, he was working on Tesla's Autopilot, an AI driver-assistance system that steers, accelerates, and brakes vehicles automatically as they go down highways – and he “poached” important Open Ai top staff to accomplish this.

Now, witnessing the last six months acceleration on generative AI, he has jumped on the bandwagon and claimed that his goal with his branded new Truth GPT, is “toofferto humanityamaximum-truth-seekingAIthatattemptstounderstandthenatureof theuniverse."

Ifyouarenottiredofdoingso,scoffoncemore—loudlythistime!Andthisisnotover yet.

To implement his plan, Elon registered a new company called X.AI and as usual poached top personnel within his competitors like Open Ai, where Sam is certainly not amused!

People in the know who have spoken with Elon about AI extensively believe that even though he is developing it himself in the open, he is sincere in his concerns about the risks associated with the technology.

Natural skeptics judge that his viewpoint was affected by other factors, most notably his initiatives to market and profit his businesses.

NowwhenElonclaimsthat “robotswillkillus”heforgetsthathisownTeslaAI driverlesscarhasalreadykilledtwobystanders.

Two people killed in fiery Tesla crash with no one driving - The Verge

Aone-size-fits-all,mega-hypocrite?

Ageniustooinsanetocomprehendhisowndailycontradictions?

Orsimplybasicbillionairetacticswagingwaronhisrivalswithallhisfinancialmight andphysicalstamina?

Makeyourbet!

This article is from: