Epigraph:

وَيَسْأَلُونَكَ عَنِ الرُّوحِ ۖ قُلِ الرُّوحُ مِنْ أَمْرِ رَبِّي وَمَا أُوتِيتُم مِّنَ الْعِلْمِ إِلَّا قَلِيلًا 

And they ask you concerning the soul. Say, ‘The soul is by the command of my Lord; and of the knowledge thereof you have been given but a little.’ (Al Quran 17:85)

Written and collected by Zia H Shah MD, Chief Editor of the Muslim Times

The question of whether machines can attain consciousness has sparked extensive debate among philosophers, with compelling arguments on both sides.

Pro: Machines Can Achieve Consciousness

Advocates for the possibility of machine consciousness often draw parallels between human cognition and computational processes. Philosopher Daniel Dennett, for instance, suggests that consciousness arises from complex information processing, a function that advanced machines could potentially replicate. He argues that if a system can process information in ways analogous to the human brain, it might also develop consciousness.

Similarly, David Chalmers, known for articulating the “hard problem of consciousness,” entertains the notion that artificial systems could possess conscious experiences. He posits that if a machine were functionally equivalent to a human brain, it could, in principle, share similar conscious states.

Con: Machines Cannot Achieve Consciousness

Conversely, some philosophers assert that consciousness is inherently tied to biological processes, rendering it unattainable by machines. John Searle’s “Chinese Room” argument illustrates this perspective, suggesting that syntactic processing of information does not equate to semantic understanding or consciousness. Searle contends that machines, regardless of their computational sophistication, lack the intrinsic qualities that give rise to conscious experience.

Additionally, philosopher Jiddu Krishnamurti warned that as machines replicate our cognitive abilities, humanity risks losing touch with deeper, non-mechanical forms of consciousness (Big Think).

Denis Noble’s take on AI and consciousness

Denis Noble, a distinguished British biologist and physiologist, has extensively explored the nature of consciousness, particularly in relation to artificial intelligence (AI). While he has not explicitly stated that AI cannot achieve consciousness, his work emphasizes the intricate, emergent properties of biological systems, suggesting that replicating such consciousness in machines presents significant challenges.

In his lecture “Consciousness is not a Thing,” Noble critiques reductionist approaches that attempt to explain human consciousness solely through the interactions of neurons and molecules. He argues that consciousness emerges from complex biological processes and cannot be fully understood by merely analyzing its constituent parts. This perspective implies that replicating consciousness in AI would require more than just sophisticated programming or neural networks; it would necessitate recreating the intricate, emergent properties inherent in biological organisms (Vimeo).

Noble’s article “The Evolution of Consciousness and Agency” discusses how conscious agency has been a significant driver in evolution. He posits that consciousness is not merely a functional trait but a framework for the development of cognitive processes. This view underscores the complexity of consciousness as an evolved characteristic, deeply intertwined with biological agency and evolutionary history. Replicating such a trait in AI would require an understanding and integration of these evolutionary processes, which are not easily translated into artificial systems (SpringerLink).

Noble’s perspectives suggest that achieving genuine consciousness in AI is a formidable challenge. The emergent properties of consciousness, arising from complex biological interactions and evolutionary processes, may not be replicable through current computational methods. While AI can simulate certain aspects of human cognition, Noble’s work implies that the depth and richness of biological consciousness involve factors beyond mere data processing. Noble also contrasts the liquid state of the human brain, with the flexibility afforded by Brownian motion, against the solid nature of silicon chips and the binary machine code that runs on them. Unless humans attain a complete understanding of consciousness, they cannot program it into AI.

John Searle’s Chinese Room Thought Experiment

In 1980, philosopher John Searle introduced the “Chinese Room” thought experiment to challenge the notion that artificial intelligence (AI) systems can possess genuine understanding or consciousness. This argument critically examines the claims of “strong AI,” which asserts that appropriately programmed computers can have minds and understand language.

Searle’s scenario involves an English-speaking individual confined in a room, equipped with a comprehensive set of instructions (a program) for manipulating Chinese symbols. This person receives Chinese characters, processes them according to the instructions, and produces appropriate Chinese responses, all without understanding the language. To external observers, the outputs appear as though the individual comprehends Chinese. However, Searle argues that the person is merely following syntactic rules without any grasp of the semantics—the meaning—of the symbols.
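The room’s procedure can be caricatured in a few lines of code. The sketch below is purely illustrative and is not from Searle’s paper; the rule book and its two entries are hypothetical stand-ins for the vastly larger instruction set the thought experiment imagines. What it shows is Searle’s point: the program maps input symbols to output symbols by pattern matching alone, and nothing in it represents what the characters mean.

```python
# A toy caricature of the Chinese Room: purely syntactic rule-following.
# The rule book is a hypothetical stand-in for Searle's room full of
# instructions; a convincing conversationalist would need vastly more rules,
# but the principle is unchanged: symbols in, symbols out, no comprehension.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "是的，我会说中文。",  # "Do you speak Chinese?" -> "Yes, I speak Chinese."
}

def chinese_room(input_symbols: str) -> str:
    """Produce a reply by pattern matching alone.

    Nothing here encodes the meaning of the characters; the function would
    work identically if every string were replaced by arbitrary tokens.
    """
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你会说中文吗？"))  # a fluent reply, with zero understanding
```

To an outside observer the replies may look competent, yet the lookup itself is indifferent to meaning, which is precisely the gap between syntax and semantics that Searle describes.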

Searle’s thought experiment aims to demonstrate that executing a program, regardless of its complexity, does not equate to understanding. He contends that computers, like the individual in the Chinese Room, operate solely on syntactic manipulation of symbols without any awareness or comprehension of their meaning. Therefore, according to Searle, while machines can simulate understanding, they do not achieve true consciousness or intentionality.

The Chinese Room argument has sparked extensive debate. One notable counterargument is the “systems reply,” which suggests that while the individual may not understand Chinese, the entire system—the person, instructions, and room combined—does. Proponents of this view argue that understanding emerges at the system level, not from individual components. Searle rebuts this by asserting that even if the individual internalizes all instructions, there remains no genuine understanding, as they are still merely processing symbols without grasping their meaning.

Searle’s Chinese Room argument challenges the premise that computational processes alone can lead to consciousness or understanding. It underscores the distinction between syntactic processing and semantic comprehension, questioning the capability of AI systems to attain true consciousness solely through programmed algorithms.

Marc Wittmann’s argument from the dynamism of living beings

Marc Wittmann, Ph.D., a research fellow at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany, argues that artificial intelligence (AI) cannot achieve consciousness due to fundamental differences between biological organisms and machines. In his article “A Question of Time: Why AI Will Never Be Conscious,” Wittmann emphasizes the dynamic nature of living organisms, contrasting it with the static structure of computers.

Living organisms, Wittmann notes, are in a constant state of flux, undergoing continuous physiological and psychological change. He references neuroscientist Alvaro Pascual-Leone’s assertion:

“The brain is never the same from one moment to the next throughout life. Never ever.”

This perpetual change is integral to the emergence of consciousness, as it reflects the organism’s ongoing interaction with its environment.

In contrast, computers maintain a fixed physical structure over time. Wittmann notes that a computer can be powered down and, upon restarting even after a century, will resume operations unchanged. He cites Federico Faggin, a pioneer in microprocessor development, who distinguishes between the unchanging hardware of computers and the ever-evolving nature of biological entities.

Wittmann argues that this fundamental disparity implies that AI, operating on static hardware, cannot replicate the dynamic processes essential for consciousness. He asserts that consciousness is deeply embedded in the principles of life, characterized by dynamic states of becoming. Therefore, machines, lacking this intrinsic dynamism, are incapable of achieving true conscious experience.

Marc Wittmann’s perspective underscores the significance of temporal dynamics and continuous change in the manifestation of consciousness. By contrasting the ever-changing nature of biological systems with the static existence of machines, he concludes that AI, as currently conceived, cannot attain genuine consciousness.

Conclusion

The discourse on machine consciousness remains unresolved, reflecting broader inquiries into the nature of consciousness itself. As artificial intelligence continues to advance, this philosophical debate gains practical significance, prompting ongoing examination of the boundaries between human cognition and machine capabilities.

My reading of many philosophers, together with my understanding of countless Quranic verses about the Afterlife and the verse quoted as the epigraph about the limits of human knowledge of consciousness and the soul, gives me confidence that AI will never have consciousness or a soul and will never need to be held accountable in the Afterlife.
