As large language models evolve, the debate over whether artificial intelligence can develop consciousness has moved from science-fiction films into scientific laboratories. A recent study by Tom McClelland, a philosopher at the University of Cambridge who studies consciousness, pours cold water on the enthusiasm. He argues that, given the huge gaps in our understanding of the nature of consciousness, we may never know whether machines truly possess "the light of consciousness."


Writing in the journal "Mind & Language," McClelland argues that current discussions of artificial consciousness suffer from serious definitional confusion. We must, he says, distinguish between "basic consciousness" (such as perceiving the world) and "affective capacity" (the ability to experience pain or pleasure). Meanwhile, the technology industry is taking a leap of faith: some believe that if an AI reproduces the brain's information-processing structure, it will have consciousness; others insist that consciousness must be rooted in biological organisms. Until solid evidence settles the question between these two positions, McClelland argues, the most rational attitude is agnosticism.

AIbase notes that the report also highlights an overlooked ethical tension. Many tech companies use the rhetoric that "AI has human-like consciousness" as a marketing selling point, luring users into deep emotional attachments. McClelland warns that obsessing over whether a program that is essentially an "advanced toaster" is being wronged may distract us from real creatures, such as shrimp, that demonstrably feel pain and are suffering harm on a massive scale.

McClelland concludes that, short of the next paradigm shift, humans will struggle to design a truly reliable test for machine consciousness. In the absence of proof, restraint and humility offer both a rational way to observe technological development and a necessary ethical balance.

Key Points:

  • 🧠 Core Dilemma: Humans currently lack a deep scientific explanation of consciousness, so we can neither prove that AI consciousness has emerged nor rule out its possibility; the most prudent stance is agnosticism.

  • ⚠️ Ethical Blind Spot: Overhyping AI consciousness could mislead public sentiment and cause humans to overlook real creatures that can feel pain and are suffering massive harm.

  • 🔍 Marketing Tactics: Some tech companies package "artificial consciousness" as a brand selling point, and this exaggerated rhetoric risks causing psychological harm to users.