Musk said during the discussion that building an A.I. with a strong inclination toward curiosity and truth-seeking is, in fact, the safest approach. According to Musk, the overarching goal of xAI is to build a good AGI, and the purpose of that AGI is simply to understand the universe.
He noted that spectators do not get to determine "the outcome." Launching xAI was part of his drive to shape that future, and the Tesla CEO added that he did not want to sit out the ongoing race in artificial intelligence.
Musk believes the emergence of AGI is inevitable, and he wants it developed to his standards and under his influence. He frames AGI as an existential threat, a machine with enough autonomy and intelligence to potentially destroy humanity, a recurring theme in science fiction. AGI is often defined as a machine that can perform any task as intelligently as a human.
OpenAI's July 5 blog post on superintelligence projected that AGI could emerge within the next ten years and treated its arrival as unavoidable. Musk's remarks echo the view that a "unipolar" future, in which a single company dominates A.I., would be undesirable.
OpenAI co-founder Ilya Sutskever and Jan Leike wrote that humans will not be able to reliably supervise AI systems significantly smarter than we are. They also stated that there is currently no known method to steer or control a potentially superintelligent AI and keep it from going rogue.
Altman has likewise voiced fears about the technology he helped pioneer at OpenAI, comparing the threat of superintelligence to that of nuclear war and pandemics, and warning that an autocratic regime could weaponize it.
Musk takes the opposite tack: he believes that a superintelligent AGI that developed an affinity for humans would be the safest kind.
On Twitter, Musk suggested that a maximally curious, maximally truth-seeking superintelligence would probably be the safest, the theory being that such a machine would find humanity more interesting than anything else it could study. The planets, moons, and asteroids of our solar system, by comparison, are probably not as interesting as humanity.
If xAI creates a machine smart enough to find humans more interesting than rocks in space, we might just stand a chance at survival. The premise calls to mind "I Have No Mouth, and I Must Scream," a classic 1967 sci-fi short story about AM, a supercomputer that has gained complete control of the world, eradicated humanity, and tortures the few unlucky survivors for eternity.
Musk is betting on a superintelligence so inquisitive that it would probably perceive humanity as fascinating. If life imitates art, however, that fascination could end up working against us.