
IN-CONTEXT LEARNING HALLUCINATIONS!

Basically, large language models (LLMs) are made up of massive neural networks. These machine-learning models, trained on vast amounts of internet data, take a short piece of input text and then predict the text that is likely to follow.

But these models are capable of more than that. An LLM can learn to do a task after seeing only a few examples, even though it was never trained for that task. This peculiar phenomenon is known as in-context learning.

Normally, a machine-learning model like GPT-3 would need to be retrained with fresh data for such a new objective. During this training phase, the model updates its parameters as it processes the new data in order to learn the task.

However, with in-context learning, the model's parameters are not changed, giving the impression that the model has somehow just picked up a new task.
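To make that contrast concrete, here is a minimal sketch of what an in-context (few-shot) prompt looks like. The task and wording are my own illustrative choices, not taken from the researchers' experiments; the point is simply that the "training examples" live inside the prompt and the frozen model's weights never change.

```python
# Minimal sketch of in-context (few-shot) learning.
# The plural-forming task is illustrative only; no real model or API is called.

few_shot_prompt = """Convert each word to its plural form.
mouse -> mice
child -> children
foot -> feet
tooth ->"""

# With ordinary training, we would update the model's parameters on labelled
# pairs. Here the examples exist only as text in the prompt: a capable LLM,
# asked to continue this string, will typically answer "teeth" even though
# none of its weights were updated for the task.
print(few_shot_prompt)
```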

New findings by Stanford and MIT researchers demonstrate that these enormous neural network models are capable of containing smaller, simpler linear models within them.

Using just information already present within it, the large model uses a straightforward learning technique to train this smaller linear model to accomplish a new task. In their experiments, the researchers used synthetic data the models had never seen before as prompts and discovered that the models could still learn from a very limited number of examples.

The researchers employed a transformer neural network (TNN) model, which was trained expressly for in-context learning, to verify this notion.

By examining only its architectural design, they demonstrated that this TNN can write a linear model within its hidden states, the multiple layers of connected nodes that process data between the input and output layers.

Their mathematical analyses demonstrate that this linear model is encoded somewhere in the transformer's earliest layers. The transformer can then apply straightforward learning techniques to update the linear model.
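As a rough illustration of what such a "straightforward learning technique" can look like, the toy sketch below (my own example, not the paper's actual construction) fits a small linear model to a handful of in-context examples with plain gradient descent and then answers a new query. The researchers' claim is that a trained transformer can carry out an analogous update entirely within its activations, without touching its own weights.

```python
import numpy as np

# Toy stand-in for the simple learner hypothesized to live inside the
# transformer: fit a linear model to the in-context examples by gradient
# descent, then predict the label of a new query point.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])   # hidden target weights of the toy task
X = rng.normal(size=(8, 3))           # 8 "in-context" example inputs
y = X @ true_w                        # their labels

w = np.zeros(3)                       # the small linear model being learned
lr = 0.1
for _ in range(200):                  # a few plain gradient-descent steps
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

x_query = rng.normal(size=3)          # the new example given in the prompt
print("prediction:", x_query @ w)
print("true value:", x_query @ true_w)
```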

The impressive conclusion is that the model essentially trains and mimics a smaller version of itself.

With just two additional layers added to the TNN, the researchers now aim to take this theoretical work further and implement it in a transformer that can perform in-context learning in practice.

Before that is conceivable, there are still a lot of technological issues to iron out, but it could help engineers build models that can carry out new tasks without retraining on fresh data.

That is crucial to limit capital investment and the growth in computing power, both of which are already at their limits.

Now we can see how these models pick up knowledge from examples: they don't just memorize these tasks, they learn them in context to pick up new skills.

Bingo!

Now we understand the reasons why some experiments are called "hallucinations": the AI invented things itself, using only its own machine creativity, with no data learning it had ever received.

Even if there is now a reason, please don't mention the fear factor once more!

AI vs METAVERSE

An alarming Nature research study reveals that human intelligence (as measured by the Flynn IQ test) peaked in 1996 and has since declined by 0.2% annually, translating to a seven-point gap between generations.

Source for the research paper: Nature, s41586-019-1666-5

Simple conservatives blame a lack of parental control, naive environmentalists blame air pollution, amateur conspiracy theorists blame 5G, and experts are generally grappling with genetic evolution.

My humble opinion is that they are not making the proper connections because AI is just advancing at an accelerated rate to improve our daily lives.

The introduction of PCs and iPads with pre-digested solutions into classrooms has, for a start, made education less engaging.

The real revolution began when Google AI's 980,000 servers, with 230 petaflops of processing power (at 2020 levels), became today's worldwide "Encyclopedia Universalis," giving nanosecond access to a myriad of topics that we now take for granted without any further judgment.

Then, as the economy slowed down and the buzz surrounding generative AI rose, the Metaverse started to deteriorate drastically. In addition to firing the 100 members of its "industrial metaverse team," Microsoft also made a number of layoffs to its HoloLens team and shut down its virtual office platform AltspaceVR in January 2023.

Walmart terminated its Roblox-based Metaverse projects in March, just after Disney shut down its Metaverse division. Many individuals lost their employment as a result of the billions of dollars invested and the exaggerated publicity surrounding a flawed idea.

But it became evident that Zuckerberg and the firm that started the craze had moved on to greener financial grounds when the Metaverse was formally taken off life support. Zuckerberg stated in a March update that Meta's "single largest investment is advancing AI and building it into every one of our products."

Big Tech's big boys are currently obsessed with AI. The Metaverse was understandably thrown into a virtual trash can.

We have now covered a lot of AI jargon: ML, algorithm platforms, big data, and DL neural networks that can solve data-heavy medical research problems, stock market volatility, and logistically difficult supply chains in a matter of seconds. It therefore makes sense to consider what incentives remain for our human intelligence.

By accident, the most recent illustration is VR (virtual reality), where after a billion-dollar investment and a financial crash moment, it was finally realized that it was pointless to introduce us to a hyper-geek "Metaverse" artificial world.

We'll keep our feet-on-the-ground presence with the help of AI and add on our zillion-emotion brain potential per human for real, not virtually!

Metaverse: thanks, but no thanks!

ARTIFICIAL GENERAL INTELLIGENCE: DOUBLE BUCKLE UP

AGI, also called AI 3.0, will soon be one of our time's most revolutionary technologies. Unlike present AI systems, which are created for specific purposes, AGI systems will be capable of learning and thinking across a wide range of topics, much like humans.

They will be designed to "mimic" human intelligence in their ability to reason, learn, and adapt. As a result, AGI should be a very flexible and adaptable technology, applied to solve a variety of social problems as well as to open up new possibilities for growth and innovation.

AGI's prospective ability to analyze enormous volumes of data and produce fresh thoughts and solutions is evident once more.

AGI systems, powered by supercomputers or quantum computing, will be used to optimize supply chains, produce novel medicines, and advance renewable energy technology at a breakneck speed we cannot yet imagine today.

By analyzing vast amounts of patient data and discovering patterns and correlations that may not be immediately evident to human clinicians today, and doing so far faster and more precisely, AGI systems in the healthcare sector will assist physicians and researchers in the discovery of novel treatments for diseases, going further than present AI's prowess.

Another advantage of AGI is that it will have the ability to automate a lot of mundane and repetitive tasks, freeing up human workers to concentrate on more innovative and important work.

This will result in a more productive and creative workforce and higher levels of job satisfaction, and, in parallel, out-of-sync job losses.

Evidently!

Robots with AGI power will be employed in manufacturing to automate laborious jobs, freeing up employees to concentrate on work that calls for more creativity and problem-solving abilities.

Willingly!

AGI has the potential to transform numerous industries and offer up new markets, which could also help to provide new chances for innovation and growth.

AGI-powered virtual assistants, for instance, will revolutionize customer service and open new possibilities for individualized marketing and advertising. Obviously, there are also possible risks connected to the advancement of AGI, so OpenAI and "friends" should approach it in an ethical and responsible manner.

Forcefully!

The possibility of purposeful or inadvertent misuse or abuse of AGI is one of the key worries.

AGI systems might be used, for instance, to produce autonomous weapons that pose a serious risk to human safety or to improve the effectiveness of cyberattacks. But by developing AGI in an ethical and responsible manner, we should aim to minimize these hazards and maximize the potential advantages of this game-changing technology. Will AGI systems be developed in a transparent and accountable manner, with consideration for safety and security?

AGI has significant potential advantages and will certainly contribute to the development of a world that is more sustainable, just, and prosperous.

AI Consciousness: not too early to debate.

In view of all these ultra-fast developments, researchers from the Association for Mathematical Consciousness Science (AMCS) have signed an open letter (April 26) highlighting the critical need for expedited research in consciousness science.

The letter, titled "The Responsible Development of AI Agenda Needs to Include Consciousness Research," discusses the potential repercussions of AGI systems developing consciousness as well as the significance of comprehending and addressing the ethical, safety, and societal implications of AGI.

It is becoming more likely that AI systems will eventually reach human-level consciousness as they continue to advance at an unprecedented rate.


Because there is no separation between the mind and the body, there is only experience, or some sort of physical process, a gestalt; possibly no metaphor will ever quite fit.

These issues, which philosophers have worried over for millennia, are becoming more urgent as highly developed computers with AI start to penetrate society.

In a sense, chatbots like Google's Bard and OpenAI's GPT-4 have minds: they have mastered the creation of inventive combinations of text, graphics, and even movies after receiving training on massive troves of human language.

In a sense!

They have the capacity to communicate desires, beliefs, hopes, intentions, and love when properly stimulated. They can discuss reflection and uncertainty, pride and regret.

However, some AI researchers contend that until technology is paired with a body that can observe, respond to, and feel its surroundings, it will not achieve true intelligence or a true understanding of the world.

They believe that discussing intelligent minds without bodies is hazardous and misguided. AI that lacks the ability to explore the world and discover its boundaries, much like infants learn what they are capable of, runs the risk of making life-threatening errors and pursuing its objectives at the expense of human wellbeing.

In a very basic sense, the body serves as the basis for deliberate and intelligent behavior, according to roboticist Joshua Bongard of the University of Vermont: “According to what I can tell, this is the only route to secure AI.”

The mind of a human being, or of any other animal for that matter, is inextricably linked to the body's actions in and reactions to the real environment, developed over millions of years of evolution, according to Boyuan Chen, a roboticist at Duke University who is striving to create intelligent robots.

Long before they learn to speak, human infants first learn how to pick up objects. In contrast, the artificial intelligence's intellect is based exclusively on language, and it frequently makes training-related common-sense mistakes.

According to Dr. Chen, there is not a strong connection between the theoretical and the physical:

"I think that without the perspective of physical embodiments, intelligence cannot be born."

Dr. Bongard and several other experts in the field believed that the letter asking for a suspension in research would lead to unwarranted alarmism.

However, he is worried about the risks posed by our rapidly advancing technology and thinks that relying on the constant trial and error of moving around in the real world is the only way to give embodied AI a strong understanding of its own limitations.

He advised starting with basic robots and gradually adding more arms, legs, and tools as they proved they could complete tasks securely.

And then. And then…

A real artificial mind will develop wider with the aid of a body. But the question is: will future AGI need a real human body, like the sci-fi Cyborg Man-Machine?

Or will a humanoid with all the agile movements, eyes, and a skeleton be enough? Science fiction, are you sure?


I am starting to really think that it is "future" fiction that will be very different, as science is already going along the way, building up all the required elements one by one.

Now, does a degree of sentience matter?

There is a broad consensus among specialists that sentient AI does not yet exist today. Concerns about what was claimed to be proof of AI sentience were expressed by a former Google employee in November 2022.

A conversation between Microsoft's Bing chatbot and New York Times journalist Kevin Roose about love and wanting to be human freaked out the internet. Since there is so much text on the internet, including food blogs, old Facebook posts, and Wikipedia entries, chatbots have learned how to sound like us, which explains why they can sometimes come off as uncannily human.

Experts claim that although they lack emotions, they are very good imitators. At least for now, business leaders concur with that conclusion. However, many believe that in the future, AGI will be able to perform any task that the human brain can.

The Future of Humanity Institute at Oxford University is led by philosopher Nick Bostrom, author of Superintelligence. His role entails speculating about potential futures, identifying hazards, and developing conceptual frameworks for navigating them. For years, he has been preparing for the AGI moment. On 12 April, he declared to the international press via a communiqué (major extracts follow):

Consciousness is a multidimensional, vague, and confusing thing. And it’s hard to define or determine. There are various theories of consciousness that neuroscientists and philosophers have developed over the years. And there’s no consensus as to which one is correct.

Researchers can try to apply these different theories to try to test AI systems for sentience. […] But I have the view that sentience is a matter of degree. I would be quite willing to ascribe very small amounts of degree to a wide range of systems, including animals. If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these assistants might plausibly be candidates for having some degrees of sentience.

I would say first with these large language models, I also think it’s not doing them justice to say they’re simply regurgitating text.

They exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning. Variations of these AI’s may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans. […]

If an AI showed signs of sentience, it plausibly would have some degree of moral status. This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it.

The moral implications depend on what kind and degree of moral status we are talking about. At the lowest levels, it might mean that we ought to not needlessly cause it pain or suffering. At higher levels, it might mean, among other things, that we ought to take its preferences into account and that we ought to seek its informed consent before doing certain things to it. […] I’ve been working on this issue of the ethics of digital minds and trying to imagine a world at some point in the future in which there are both digital minds and human minds of all different kinds and levels of sophistication.

I’ve been asking: How do they coexist in a harmonious way? It’s quite challenging because there are so many basic assumptions about the human condition that would need to be rethought. […] I’ve long held the view that the transition to machine superintelligence will be associated with significant risks, including existential risks. That hasn’t changed. I think the timelines now are shorter than they used to be in the past.

And we better get ourselves into some kind of shape for this challenge. I think we should have been doing metaphorical CrossFit for the last three decades. But we’ve just been lying on the couch eating popcorn when we needed to be thinking through alignment, ethics and governance of potential superintelligence. That is lost time that we will never get back.

[He concluded]

We should also avoid deliberately designing AI’s in ways that make it harder for researchers to determine whether they have moral status, such as by training them to deny that they are conscious or to deny that they have moral status.

While we definitely can’t take the verbal output of current AI systems at face value, we should be actively looking for and not attempting to suppress or conceal possible signs that they might have attained some degree of sentience or moral status.

This guides me to my own conclusion for this chapter:

If AI makes you intimidated and alarmed, with AGI be ready to be terrified and petrified. Or maybe you should consider that, as one individual vs 8 billion people, you have to adapt, as humanity's evolution can't be stopped.

Just pray a bit that the developers behind the wheel know what they are doing!
