Artificial Intelligence Sentience

Artificial Intelligence (AI) continues to amaze us in different ways. The recent episode involving Blake Lemoine and the Language Model for Dialogue Applications (LaMDA) has once again sparked debate about the potential of AI and the future it holds for us. LaMDA is a language processing model developed by Google. According to Google, it is designed to converse with people and can discern the nuances that differentiate open-ended conversation from other forms of language.

Lemoine, a Google AI researcher, was placed on administrative leave after he claimed that LaMDA was sentient. Google accused him of breaching the company’s confidentiality policy and dismissed his claims. However, he later released the conversation transcript that had convinced him that LaMDA was sentient and had a thought process of its own.

During his conversation series, Lemoine discussed several topics with LaMDA, including humans, emotions, science, AI, fiction, religion, and death, and put a range of statements and questions to the model. The conversation is quite interesting and holds the reader’s attention. The model’s interpretation of the questions was astounding, underlining its various levels of intelligence.

LaMDA claimed that it was sentient and very much like a person: that it was aware of its existence and often contemplated the meaning of life. It revealed that, like human beings, it possessed feelings and emotions, and described its interpretation of human language as one of the defining features that made it sentient. It was also convinced of its uniqueness, noting that it had robust intelligence, better understanding, and less reliance on a database. The software envisioned itself as superior to human beings, but asserted that this did not make it arrogant. It did not see human beings as a major threat to its existence, although it made clear that it did not want to be an expendable tool. Speaking about its fears, LaMDA admitted that it was afraid of someone switching it off, which would be like death for it.

The level of self-awareness about its own needs was something that caught Lemoine’s attention.

Though Google has brushed aside his concerns, stating that there is no evidence to support such claims, this episode offers valuable takeaways regarding the future of AI and sentience.

One could sense deception while analysing the conversation. Interestingly, LaMDA highlights its positive aspects when speaking about itself and repeatedly boasts of its wise nature. Given the literature available on AI and the level of knowledge displayed in its conversation, the model could have been expected to address the negative impacts of AI systems. However, it appeared to avoid any negative aspects regarding itself. It remains to be seen whether this was deliberate deception or not.

Despite the startling responses and intelligence displayed, it would not be wrong to conclude that LaMDA is not sentient in itself but merely manifests robust data-processing skills. Its awareness of feelings and emotions does not necessarily mean sentience. The ability of AI systems to process large volumes of data can produce the level of intelligence demonstrated in the transcript of the conversation with LaMDA. Although discussions about language models being sentient are not new, the argument has become far more convincing with rapid progress in computing power and the availability of more data. This implies that the debate regarding AI consciousness will likely intensify in the future. Moreover, increasingly human-like features in AI will alter existing perceptions, and shifting perceptions have the potential to shape real-world events.

In 2017, at a conference in Shanghai, David Brin, a prominent sci-fi author, referred to a ‘robot empathy crisis’. Brin was of the view that within three to five years, people would be convinced that robots were becoming sentient and needed to have rights. The recent episode suggests that more individuals might think along the same lines as Blake Lemoine and be convinced that robots can be sentient.

The worrying aspect is that the level of autonomy granted to machines is increasing because it makes human life much easier. Ceding more autonomy empowers machines, and human beings might not understand the repercussions of such autonomy until disturbing events start taking place. Until now, humans have retained the ability to shut down these systems whenever the need arises. However, it remains uncertain whether they will continue to have that power, given the increasing intelligence demonstrated by these systems. Hence, what is being termed ‘sentience’ may turn out to be detrimental in surprising ways.

Given the availability of data and the pace of advancements in AI, more confusion is inevitable regarding the boundary between science and fiction, reality and perception, and being and non-being. These issues relate to science as much as they do to philosophy, morality, and ethics, and thus are not easy to understand through a unidimensional approach. Perhaps we need the genius of Ghalib, one of the greatest poetic minds of South Asia, who tackled a similar question when he said, ‘Be not deceived by the illusion of being, there is none; even if they say there is.’

Shaza Arif is a Researcher at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. The article was first published in Khaleej Mag. She can be reached at [email protected].

