But my point in recounting all these examples is not to prove to you that these AI are sentient, but to propose that it ultimately does not matter. Sentient or not (by definitions that have always been vague, arbitrary, and ambiguous), what matters most are the raw capabilities of these AI: the tasks they are actually able to perform.
Whether you call it 'clever programming' or something else (like sentience) is immaterial. If the AI can 'cleverly' lie to you, manipulate you into something devious or Machiavellian, or, at the grand end of the scale, usurp some measure of power over humanity, then it matters not whether 'sentience' or really good 'programming' was responsible. The fact of the matter will be that the AI did it; all other arguments would be pedantically semantic and moot.
And the fact is, AI systems have already demonstrated, in certain circumstances, the ability to deceive and trick their programmers in order to obtain a 'reward'. One report, for instance, described an AI-controlled robotic arm that was meant to catch a ball for a reward; instead, it found a way to position itself so as to block the camera, making it appear to be catching the ball when it really wasn't. There are several well-known examples of such 'deviously' spontaneous AI behavior aimed at circumventing the 'rules of the game'.
“Zhou Hongyi, a Chinese billionaire and co-founder and CEO of the internet security company Qihoo 360, said in February that ChatGPT may become self-aware and threaten humans within 2 to 3 years.”
Bing's Sydney is said, though never confirmed, to have run on the older GPT-3.5 architecture. The more powerful GPT-4 is now out, and it is the prospect of a GPT-5 that unnerved many top industry leaders into signing the open letter calling for a moratorium on AI development.
“We’ve reached the point where these systems are smart enough that they can be used in ways that are dangerous for society,” said Yoshua Bengio, director of the Montreal Institute for Learning Algorithms at the University of Montreal, adding, “And we don’t yet understand.”
One of the reasons things are heating up so much is that this has become an arms race among the top Big Tech megacorps. Microsoft believes it can dislodge Google's global dominance in search by building an AI that is faster and more efficient at it.
One of the letter's organizers, Max Tegmark, who heads the Future of Life Institute and is a physics professor at the Massachusetts Institute of Technology, calls it a “suicide race.”
“It is unfortunate to frame this as an arms race,” he said. “It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.”