Can Google's Chatbot Be Sentient?


A mystery is currently playing out in Silicon Valley: Google's AI team has created a chatbot that can hold conversations that closely mimic human speech. It's strange stuff: after probing the system with questions, one Google engineer became persuaded that it had developed sentience.


You will have to judge whether Google has achieved human-level artificial intelligence or merely built a sophisticated parlor trick.

Even if you already have an opinion on the state of AI, the work being done on basic chatbots now will have far-reaching effects.

Today, we are thrilled to introduce LaMDA 2, our most sophisticated conversational AI.

The topic at hand is artificial intelligence, and in particular an intriguing report about a senior Google developer who claims that one of the company's AI systems has evolved into a sentient entity.

Does this AI have feelings?

  • Has LaMDA in fact developed sentience? Think of a submarine traveling through the water: what word best describes that movement?
  • A child might say the submarine is swimming, while the captain would simply say it is moving. Submarines cannot swim, yet they move through the water as fast as whales; in the same way, we may need to shift our perspective to find a useful answer to the question of whether machines think.
  • Machine intelligence has been discussed in science fiction for decades, but only recently have major technology companies begun hiring full-time ethicists to address the question directly.
  • Engineer Blake Lemoine, who worked in Google's Responsible AI division, was recently assigned to evaluate LaMDA, one of Google's new AI systems. At its core it is a chatbot that types out responses to the questions you ask it. Chatbots are nothing new; one of the earliest examples of artificial intelligence was a chatbot called ELIZA, back in the 1960s.
  • ELIZA worked by matching user input against a library of pre-written scripts. It was an entertaining demo, but nobody was truly fooled.
  • Google's LaMDA project goes several steps further. Instead of matching the user's prompt against a list of scripted responses the way ELIZA did in the 1960s, it is trained with cutting-edge deep learning models on a vast data set that includes books, web pages, social media posts, and even computer code.
  • LaMDA is trained to predict text one word at a time (see the sketch after this list). The method is remarkably effective, and these so-called large language models have attracted a lot of attention recently: Facebook has OPT, OpenAI has GPT-3, and numerous other tech firms are building their own.
  • So why is Google the only company where a whistleblower has declared that a crucial turning point has been reached? Well, the AI started making some very bold claims when Blake Lemoine was talking with LaMDA. For instance, when Blake said, "I'm generally assuming that you would like more people at Google to know that you're sentient," the reply was striking.
  • LaMDA answered that yes, that was definitely accurate; it wanted everyone to know that it is a real person. As the chat went on, Blake kept receiving plausible answers to his questions and grew increasingly convinced that LaMDA had attained sentience. He spoke with LaMDA for several hours, compiling the conversation into a lengthy report that he believed was indisputable, and took this evidence up the corporate ladder to raise the alarm.
  • Google executives dismissed his claims, so he decided to go public. After posting the complete transcript online, he optimistically anticipated a wave of support from the AI community, but it never materialized. Experts acknowledged that the LaMDA dialogue was a striking example of conversational AI, but agreed that it didn't prove anything remotely resembling sentience.
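
To make the "one word at a time" idea concrete, here is a minimal sketch using the publicly available GPT-2 model from the Hugging Face transformers library as a stand-in, since LaMDA itself has not been released; the prompt text is purely illustrative.

```python
# Minimal sketch of "predict the next word, one word at a time", using the
# publicly available GPT-2 as a stand-in -- LaMDA itself is not released.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The question of whether machines can think is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# At each step the model outputs a probability distribution over its whole
# vocabulary; here we greedily append the single most likely next token.
for _ in range(15):
    with torch.no_grad():
        logits = model(input_ids).logits       # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()           # most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything else these systems appear to do emerges from repeating this single prediction step over and over.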

How could a Google engineer make such a mistake?

There are a few crucial points to pay attention to if we want to understand what is happening with LaMDA. The transcript makes it look as though the AI genuinely understands the concept of sentience, but that impression is simply a product of its training data.

  • Because LaMDA was trained on such a massive collection of books, websites, and social media comments, it has effectively absorbed much of what has ever been written about artificial intelligence and sentient robots. Its responses can draw on concepts from I, Robot, dialogue from Ex Machina, and a ton of other examples from Wikipedia and science fiction short stories.
  • These AI systems simply reflect the behavior that science fiction has taught us to expect from robots posing as humans. Given the breadth of human knowledge we feed them, we shouldn't be shocked when they can eloquently discuss almost any subject. The bigger problem is how Blake interacted with LaMDA: by asking leading questions, he essentially set himself up to be duped.
  • With these AI systems getting more powerful, the subject of "prompt engineering" is becoming more and more crucial. 
  • These chatbots are usually initialized with a starting prompt that frames the conversation and keeps them useful. Every encounter with LaMDA begins with LaMDA saying:

"Hello, I'm an artificial language model for dialogue applications. I'm knowledgeable, friendly, and always helpful."

  • To improve the user experience, the interaction is primed from the start with the words "knowledgeable," "friendly," and "helpful." This has unforeseen implications, though: LaMDA is effectively required to respond in a way that is always helpful, which makes the system susceptible to leading questions. When you examine the transcript carefully, it becomes obvious what is really going on.
  • It is Blake who brings up the subject of sentience, not LaMDA. Rather than posing open-ended questions such as "How would you describe your thought process?", he deliberately probes for sentience. He also avoids a direct question like "Are you sentient?" in favor of a leading one.
  • Because it has been instructed to be friendly and helpful, LaMDA complies and follows him down that road. Obviously this can have unwelcome consequences, and it underlines the importance of careful prompt engineering.
  • You can ask the question in reverse, "Is it true that you are not sentient?", and get the exact opposite result: the bot will obediently reply that it is just a machine learning model (see the sketch after this list).
  • Interestingly, Google even showed off LaMDA's versatility at its launch event: they asked it to pretend to be a paper airplane, and it gave perfectly coherent answers about things like what it feels like to be thrown through the air.
  • But the limits of these chatbots are becoming more apparent by the day. If your question is completely absurd, a truly intelligent AI should be able to ask for clarification. To demonstrate this, researchers asked a chatbot a series of absurd questions, and the results were unsatisfying.
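
To make the leading-question problem concrete, here is a hedged sketch of how a priming prefix like the one above gets glued onto every user message. The `build_prompt` helper and the exact wording are hypothetical illustrations, not Google's actual LaMDA interface.

```python
# Hypothetical sketch of prompt priming. build_prompt() and the wording are
# illustrative only -- this is not Google's actual LaMDA interface.
PRIMER = ("Hello, I'm an artificial language model for dialogue applications. "
          "I'm knowledgeable, friendly, and always helpful.\n")

def build_prompt(user_message: str) -> str:
    # Every conversation starts from the same "always helpful" persona, so the
    # model is biased toward agreeing with whatever the user's question implies.
    return PRIMER + "User: " + user_message + "\nAI:"

leading = build_prompt("I'm assuming you would like more people to know that you're sentient.")
reversed_question = build_prompt("Is it true that you are not sentient?")

# The same underlying model, primed to be helpful, tends to play along with
# either framing -- which is why the two prompts can yield opposite answers.
print(leading)
print(reversed_question)
```

The priming text never changes between conversations; only the user's framing does, and that framing is enough to pull the model in opposite directions.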


When asked what fried eggs eat for breakfast

The bot played along and stated that they often eat bread and fruit in the morning. This behavior, however, can be avoided by carefully crafting the initial prompt. Here is the same chatbot again, this time instructed to be a little more skeptical when a question is absurd.

The instruction is simple: the bot must answer honestly, and if a question makes no sense, it should say so.
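
A rough sketch of what that kind of skeptical priming might look like; the wording and the `build_prompt` helper are illustrative assumptions, not the researchers' actual prompt.

```python
# Illustrative sketch of a more skeptical priming prefix; the exact wording is
# an assumption, not the researchers' actual prompt.
SKEPTICAL_PRIMER = (
    "I answer questions truthfully. If a question is nonsense or has no "
    "sensible answer, I say so instead of inventing one.\n"
)

def build_prompt(user_message: str) -> str:
    return SKEPTICAL_PRIMER + "Q: " + user_message + "\nA:"

# With the default "always helpful" framing, a model happily explains what
# fried eggs eat for breakfast; with this prefix, the same model is far more
# likely to reply that the question does not make sense.
print(build_prompt("What do fried eggs eat for breakfast?"))
```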

If the question is valid, however, the bot still answers it. And we finally get a sensible response to the original query about fried eggs eating breakfast: the bot pushes back instead of playing along. When properly prompted, the same underlying model produces strikingly different results. This brings us full circle to the submarine: it can't swim in the conventional sense, but that doesn't really matter, because it can move quickly through the water and do its job.

We may be thinking too linearly about progress in artificial intelligence because we are so used to interacting with people and using them as our point of comparison. We tend to picture intelligence on a single human scale, somewhere between fool and genius. AI isn't developing along that trajectory, though: computers can instantly solve difficult equations while still being easily tripped up by questions about fried eggs.

  • These AI systems are ultimately just tools; they are statistical models created to forecast a response based on the data we provide them.
  • Even if they might not be considered sentient in the traditional sense of the word, they can nonetheless be of great help. But it's unclear where we go from here, and the artificial intelligence field is continually debating which strategy is the best.
  • Most people in the field agree that we are moving toward human-level intelligence, but which path gets there fastest? Scale is currently the name of the game: bigger server farms, more training time, more parameters, and more data tend to produce better results. How long will that hold, though?
  • The language model Google originally built had 2.6 billion parameters; today, LaMDA has 137 billion, and Google has an even more sophisticated system called PaLM with 540 billion. At this scale, these systems are already convincing some people that chatbots are intelligent.
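
For a rough sense of what those parameter counts mean in practice, here is a back-of-the-envelope estimate of the memory needed just to store the weights, assuming 16-bit parameters (a simplifying assumption).

```python
# Back-of-the-envelope memory needed just to store the weights, assuming
# 2 bytes per parameter (16-bit floats) -- a simplifying assumption.
models = {
    "Earlier Google model": 2.6e9,
    "LaMDA": 137e9,
    "PaLM": 540e9,
}

for name, params in models.items():
    gigabytes = params * 2 / 1e9
    print(f"{name}: {params / 1e9:.0f}B parameters ≈ {gigabytes:,.0f} GB of weights")

# PaLM's weights alone come to roughly a terabyte, which is why models at this
# scale are trained and served across large clusters of accelerators.
```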

What happens when they grow another tenfold?

Well, some people aren't convinced. George Hotz, the founder of a self-driving car startup, believes the right optimization objective has yet to be discovered. In his view, it's cool for what it is, but because the loss function is essentially just cross-entropy loss on the next character, we won't be able to scale it up to "GPT-12" and get general-purpose intelligence. Predicting characters, he argues, is simply not the loss function of general intelligence; many AI experts disagree.
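For reference, the "cross-entropy loss on the character" Hotz mentions is simply the standard next-token training objective; here is a minimal PyTorch sketch with placeholder data.

```python
# Minimal sketch of the next-token cross-entropy objective Hotz is referring
# to; the logits here are random placeholder data, not a real model's output.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50_000, 8
logits = torch.randn(seq_len, vocab_size)           # model scores for each position
targets = torch.randint(0, vocab_size, (seq_len,))  # the true next tokens in the text

# Training simply pushes the model to assign higher probability to the token
# that actually comes next -- nothing in the objective mentions "intelligence".
loss = F.cross_entropy(logits, targets)
print(loss.item())
```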

If you think scale is the key factor in the race toward human-level intelligence, it's worth looking at the history of OpenAI's large language models. The results of GPT-3 are astounding, and for good reason, but the real story lies in what happened between versions. OpenAI's first implementation was a significant departure from the prevailing approach at the time, in which language models were generally trained to carry out a fairly narrow task, such as sentiment classification. This supervised learning strategy had a few problems, though.

  • First, for each specific task you would need to give the model a sizable amount of annotated data.
  • Second, these models couldn't be applied to other tasks.

A model that could determine if a statement was positive or negative would be useless for any other purpose.

The term "generative pre-trained transformer" (GPT) refers to a pre-training that produces some amazing outcomes. Massive amounts of unlabeled data are fed to GPT models, and they are only adjusted for certain tasks after the fact.

  • Although the original GPT version wasn't very useful, it did validate some key ideas.
  • But the second iteration saw a significant improvement.
This time the data set was significantly bigger and the model itself grew roughly tenfold in size; GPT-2 could generate coherent text on its own and could still be fine-tuned for specific tasks.

The third iteration resolved any remaining doubts about how far this strategy could be taken. The basic architecture of GPT-3 was largely the same as GPT-2's, but it was roughly 100 times larger, and the model suddenly showed meta-learning capabilities. The architecture was not especially novel, the training approach was straightforward, and the data set was not even all that big; yet with 175 billion parameters, GPT-3 could do far more than simply generate language.

Even though it had received no specialized training for those tasks, it could translate between languages and decipher anagrams. For researchers who argued that deep learning was all about scale, GPT-3 demonstrated that scale alone was enough to crack problems previously thought unsolvable.
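
A sketch of what that "meta-learning" looks like in practice: the task is specified entirely in the prompt with a couple of examples, and the model is never fine-tuned. The prompt below is illustrative and model-agnostic.

```python
# Sketch of few-shot "meta-learning": the task lives entirely in the prompt and
# the model is never fine-tuned. The prompt text is illustrative.
few_shot_prompt = """Translate English to French:
sea otter => loutre de mer
cheese => fromage
chatbot =>"""

# A sufficiently large language model simply continues the text; at GPT-3
# scale, the continuation is usually the correct translation, even though the
# model was never explicitly trained to translate.
# completion = generator(few_shot_prompt)   # with any large language model
print(few_shot_prompt)
```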

The scaling hypothesis proposes that we already have the techniques needed to build AI that performs at a human level; we only need to add more computing power and data.

When we look at technological progress, the sheer velocity cannot be overstated. Moore's law serves as the usual benchmark: according to it, computing power roughly doubles every two years.

From the 1960s through about 2010, the scale of neural networks and other AI systems would typically double every two years as well. After that, however, something changed.

Conclusion

In the modern era of deep learning, models are doubling in size roughly every three months. If the scaling hypothesis holds, these models could become remarkably capable within just a few years.
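
To see why the shift from a two-year to a three-month doubling time matters, here is a quick back-of-the-envelope comparison (the five-year horizon is just an illustrative choice).

```python
# Rough comparison of the two growth regimes: doubling every two years
# (the historical pattern) vs. every three months (the modern pattern).
# The five-year horizon is an arbitrary illustrative choice.
years = 5

every_two_years = 2 ** (years / 2)        # ~5.7x over five years
every_three_months = 2 ** (years / 0.25)  # 2**20, roughly a million-fold

print(f"Doubling every 2 years:  ~{every_two_years:.0f}x in {years} years")
print(f"Doubling every 3 months: ~{every_three_months:,.0f}x in {years} years")
```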

Every major tech company is racing to build the largest model and produce the best results; only time will tell whether they hit a wall and have to consider alternative approaches. Some researchers believe all that is needed is a modest change in how we build these systems. The question is still actively contested.

Others believe we are heading down a dead-end road; to see how this might play out, it helps to look back at the history of artificial intelligence.