As the global AI race reaches a fever pitch, Geoffrey Hinton, the Nobel laureate widely known as the "godfather of AI," has once again issued a stark warning. In an interview with Fortune magazine, he said bluntly that today's tech industry leaders have not seriously considered where the technology ultimately leads, and that their core motivation is short-term profit.
Hinton argues that both company owners and frontline researchers are narrowly focused: owners watch the financial reports, while researchers are absorbed in specific engineering problems such as making images sharper or videos more realistic. The larger question of what will happen to humanity is set aside in the rush for commercial success.
Hinton divides the risks posed by AI into two categories:
Abuse by bad actors: Some of these harms, such as deepfake videos and cyberattacks, have already emerged, while others, such as AI-assisted virus synthesis, may appear in the future.
AI itself going rogue: This is the long-term threat that concerns Hinton most. He believes that once AI reaches the level of superintelligence, it will develop motives for survival and control, and at that point the assumption that humans can control AI will break down completely.
Hinton has offered a chilling estimate: once superintelligence is achieved, the probability of AI causing human extinction could be as high as 10% to 20%. To counter this threat, he has proposed a concept borrowed from biology: a "maternal instinct" mechanism.
"The only example we have of a more intelligent being shaped by a less intelligent one is a baby influencing its mother."
