Humanity is unprepared to survive an encounter with a much smarter artificial intelligence, Eliezer Yudkowsky says

Shutting down the development of advanced artificial intelligence systems around the globe and harshly punishing those who violate the moratorium is the only way to save humanity from extinction, a high-profile AI researcher has warned.

Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI), wrote an opinion piece for TIME magazine on Wednesday explaining why he didn't sign a petition calling upon "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4," a multimodal large language model released by OpenAI earlier this month.

Yudkowsky argued that the letter, signed by the likes of Elon Musk and Apple's Steve Wozniak, was "asking for too little to solve" the problem posed by the rapid and uncontrolled development of AI.

"The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," Yudkowsky wrote.

Surviving an encounter with a computer system that "does not care for us nor for sentient life in general" would require "precision and preparation and new scientific insights" that humanity lacks at the moment and is unlikely to obtain in the foreseeable future, he argued.

Full Article: https://earthnewspaper.com/everyone-on-earth-will-die-top-ai-researcher-warns-by-rt
Oh well. At least they have choices. None of us are immortal anyway. We are all going to die.