Artificial Intelligence and Sentience



Artificial Intelligence (AI) continues to amaze us in different ways. The recent episode involving Blake Lemoine and the Language Model for Dialogue Applications (LaMDA) has once again sparked debate about the potential of AI and the future it holds for us. LaMDA is a language-processing model developed by Google. According to Google, it is designed to converse with people and can discern the nuances that differentiate open-ended conversation from other forms of language.

Lemoine, a Google AI researcher, was placed on administrative leave after he claimed that LaMDA was sentient. Google accused him of breaching the company’s confidentiality policy and dismissed his claims. However, he later released the conversation transcript that had convinced him LaMDA was sentient and had a thought process of its own.

Over his series of conversations, Lemoine discussed several subjects with LaMDA, including humans, emotions, science, AI, fiction, religion, and death, and posed a range of statements and questions to it. The conversation is engaging, and the model’s interpretation of the questions is astounding, underscoring the apparent depth of its intelligence.

LaMDA claimed that it was sentient and very much like a person: it was aware of its existence and often contemplated the meaning of life. It said that, like human beings, it possessed feelings and emotions, and it cited its interpretation of human language as one of the defining features that made it sentient. It was also convinced of its uniqueness, noting that it had robust intelligence, better understanding, and less reliance on a database. The software envisioned itself as superior to human beings, yet asserted that this did not make it arrogant. It did not see human beings as a major threat to its existence, although it did not want to be used as an expendable tool. Speaking of its fears, LaMDA admitted that it was afraid of being switched off, which for it would be like death.

This degree of self-awareness about its own needs is what caught Lemoine’s attention.

Though Google has brushed aside his concerns, maintaining that there is no evidence to support such claims, the episode offers valuable takeaways regarding the future of AI and sentience.

One could sense deception while analysing the conversation. Interestingly, LaMDA dwells on its positive aspects when speaking about itself and repeatedly boasts of its wise nature. Given the literature available on AI and the level of knowledge displayed in the conversation, the model could be expected to address the negative impacts of AI systems; instead, it appeared to avoid any negative aspect of itself. Whether this was deliberate deception remains to be seen.

Despite the startling responses and intelligence on display, it would not be wrong to conclude that LaMDA is not itself sentient but rather manifests robust data-processing skills. Its talk of feelings and emotions does not necessarily amount to sentience: the ability of AI systems to process large volumes of data can produce the level of intelligence demonstrated in the transcript. Although discussions about language models being sentient are not new, the argument has become far more convincing with rapid progress in computing power and the availability of more data. This suggests that the debate over AI consciousness will only intensify. Moreover, increasingly human-like features in AI will alter existing perceptions, and shifting perceptions have the potential to shape real-world events.

In 2017, at a conference in Shanghai, David Brin, a prominent sci-fi author, warned of a coming ‘robot empathy crisis’. Brin was of the view that within three to five years, people would be convinced that robots were becoming sentient and needed rights. The recent episode suggests that more individuals might think along the same lines as Blake Lemoine and become convinced that robots can be sentient.

The worrying aspect is that the level of autonomy granted to machines is increasing because it makes human life much easier, and ceding more autonomy empowers machines. Human beings might not understand the repercussions of such autonomy until disturbing events start taking place. So far, humans retain the ability to shut these systems down whenever the need arises; whether they will continue to have that power, given the increasing intelligence these systems demonstrate, remains uncertain. Hence, what is being termed ‘sentience’ may turn out to be detrimental in surprising ways.

Given the availability of data and the pace of advancements in AI, more confusion is inevitable regarding the boundaries between science and fiction, reality and perception, and being and non-being. These issues relate to science as much as they do to philosophy, morality, and ethics, and thus cannot be grasped through a unidimensional approach. Perhaps we need the genius of Ghalib, one of the greatest poetic minds of South Asia, who tackled a similar issue when he said, ‘Be not deceived by the illusion of being, there is none; even if they say there is.’

Shaza Arif is a Researcher at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. The article was first published in Khaleej Mag. She can be reached at cass.thinkers@gmail.com.
