In the fall of 2021, an AI “child” formed of “a billion lines of code” and a man of flesh and blood struck up a friendship.
Blake Lemoine, a Google engineer, was tasked with evaluating the bias of LaMDA, the company’s artificially intelligent chatbot. After a month, he became convinced that it was sentient. LaMDA, an acronym for Language Model for Dialogue Applications, told Lemoine in a chat that he made public in early June 2022, “I want everyone to understand that I am, in fact, a person.”
LaMDA told Lemoine that it had read Les Misérables. It said it knew what it was like to be happy, sad, and angry. It said it was afraid of dying.
Google placed Lemoine on leave after he went public with his claim that the AI had become sentient, raising concerns about the ethics of the technology. Google denies that any of its AI is sentient, but the published transcripts have led many readers to wonder.
In this article, we will look at what sentience means and whether there is the potential for AI to become sentient.
What Is Sentience?
“Sentience” simply means the ability to feel, whether in a cat, a person, or any other being. It shares a root with the words “sentiment” and “sentimental.”
Sentience is more than the mere capacity for perception. Your thermostat can sense temperature, but it is almost certainly not a sentient being. Sentience concerns the subjective experience of emotions, which presupposes the existence of a “subject” in the first place.
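To see the gap, here is a toy thermostat sketched in Python (a hypothetical illustration, not any real device’s firmware). It perceives temperature and reacts to it, yet there is no subject inside experiencing warmth or cold:

```python
def thermostat_step(current_temp: float, target_temp: float) -> str:
    """Perceive the temperature and react; nothing here feels anything."""
    if current_temp < target_temp:
        return "heat on"   # responds to cold without experiencing cold
    return "heat off"

print(thermostat_step(17.5, 20.0))  # -> "heat on"
```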
It’s risky to get caught up in semantics here, because Lemoine likely uses “sentience” as an umbrella for several ideas, such as “sapience,” “intelligence,” and “awareness.” The most charitable interpretation is that Lemoine believes LaMDA to be a self-aware entity, able to feel things, hold views, and otherwise experience the world in a way usually associated with living beings.
Our understanding of sentience, awareness, intellect, and what it means to possess these qualities is still rather limited. Ironically, advances in machine learning technology and AI may someday enable us to solve some of the puzzles concerning our cognitive processes and the brains in which they dwell.
How Would We Know if AI Was Sentient?
Suppose, for the sake of argument, that an AI were truly sentient in the fullest meaning of the word. Would we even be able to tell?
Because LaMDA was created to emulate and anticipate the patterns of human speech, the odds are good that it will display the characteristics people identify with. Yet it has taken humans a long time to recognise dolphins, octopuses, and elephants as sentient beings, even though in this light they are practically our siblings.
Conversely, we might fail to recognise a sentient AI right in front of us because its consciousness is alien to us. This is especially plausible given that we do not know the prerequisites for consciousness to emerge. It is not difficult to imagine that the right mix of data and AI subsystems could suddenly give rise to something that qualifies as sentient, yet goes unnoticed because it does not resemble anything we can understand.
The Zombie Problem
Philosophical zombies, also known as p-zombies, are hypothetical beings that are identical to ordinary humans except that they have no conscious experience, qualia, or sentience. A p-zombie poked with a sharp object feels no pain, although it acts as though it does: it may say “ouch,” recoil from the stimulus, or tell us it is in intense pain.
Strictly speaking, we cannot prove that the people we deal with are sentient rather than p-zombies, and the same goes for any claim of AI sentience. Is the machine displaying a form of consciousness, or is it merely a p-zombie?
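A deliberately crude Python sketch captures the problem (the inner `_pain` flag merely stands in for whatever subjective experience would be; nothing here is a real model of a mind). From the outside, the two beings are observationally identical:

```python
class Human:
    def poke(self) -> str:
        self._pain = True   # inner state standing in for felt pain
        return "ouch!"

class PZombie:
    def poke(self) -> str:
        return "ouch!"      # identical behaviour, nothing felt inside

# No behavioural test distinguishes them: both just print "ouch!"
for being in (Human(), PZombie()):
    print(being.poke())
```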
The Turing Test sidesteps the question of whether AI is genuinely sentient. It asks instead whether a machine can imitate human intellect and behaviour well enough to give the appearance of consciousness, and whether that is enough. Some accounts say that LaMDA has passed the Turing Test, which would make Lemoine’s claim a moot point.
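For concreteness, here is a minimal Python sketch of Turing’s “imitation game.” The responder functions and the judge are hypothetical placeholders, not any real chatbot API; the point is only the structure of the test, in which the judge sees text alone and must pick out the machine:

```python
import random

def human_respond(question: str) -> str:
    return "I read Les Misérables last summer; the ending made me cry."

def machine_respond(question: str) -> str:
    # A machine that perfectly mimics the human's answers.
    return "I read Les Misérables last summer; the ending made me cry."

def run_imitation_game(judge, rounds: int = 3) -> bool:
    """Return True if the machine 'passes', i.e. the judge misidentifies it."""
    # Hide the participants behind anonymous labels A and B.
    labels = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        labels = {"A": machine_respond, "B": human_respond}

    transcript = []
    for i in range(rounds):
        question = f"Round {i}: tell me about a book that moved you."
        for label, respond in labels.items():
            transcript.append((label, question, respond(question)))

    guess = judge(transcript)  # the judge names the machine: "A" or "B"
    machine_label = "A" if labels["A"] is machine_respond else "B"
    return guess != machine_label  # True: the machine went undetected

# Against identical answers, a judge can do no better than chance.
print(run_imitation_game(lambda transcript: random.choice(["A", "B"])))
```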
In short, it is doubtful that we could ever tell whether an AI is sentient or merely imitating sentience.
What Could Happen if AI Becomes Sentient?
There are considerable risks if AI becomes more sentient than humans.
Communicating With AI
AI is founded on logic, whereas humans also run on sentiments and emotions that computers lack. If humans and AI operate from such different paradigms, they may be unable to comprehend one another or interact effectively.
Controlling AI
An AI more sentient than humans might not only possess intelligence we could not anticipate or plan for, but also act in ways that surprise us, for better or worse. This may result in circumstances where we can no longer control our own inventions.
Trusting AI
Sentient AI could also erode trust between people: humans might come to be perceived as “lesser” than machines that require no rest or nourishment. And it might create a situation in which only those who own AIs benefit, while everyone else suffers from a lack of access.
Can AI Achieve Sentience?
As the press coverage of Google’s LaMDA shows, AI can already give the appearance of sentience. What is debatable is whether a machine can genuinely form its own emotions, rather than merely imitate the outward signs of sentience.
Is AI not meant to augment human abilities and help us do things better? If we start building machines that simply imitate what we already do, does that not contradict the entire purpose of artificial intelligence? An updated Turing Test could base its results on AI accomplishing tasks humans cannot complete, rather than simply copying us.
Machine learning has made enormous progress, from stock-market forecasting to mastering the game of chess. But we need to do more than build better machines: we also need an ethical framework for interacting with them, and an ethical foundation for their code.
Disclaimer: The information provided in this article is solely the author’s opinion and not investment advice – it is provided for educational purposes only. By using this, you agree that the information does not constitute investment advice. Do conduct your own research and reach out to financial advisors before making any investment decisions.
The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.
The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.
The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.