Top Computer Scientist Thinks Super-Intelligent AI Could Be Here By 2029

The computer scientist who popularized the term artificial general intelligence (AGI) believes that it could arrive as early as 2029.

Ben Goertzel, founder of SingularityNET, which aims to create a “decentralized, democratic, inclusive and beneficial Artificial General Intelligence”, gave a talk at the Beneficial AGI Summit 2024 in which he told the audience that we could reach a point where artificial intelligence is capable of improving itself.

Though such a point may seem far off, he gave a number of reasons why he believes it could happen so quickly. Chief among them, according to Goertzel, is that we are in a period of exponential rather than linear growth, which makes the sheer speed of change difficult to wrap your head around.

“In the next decade or two [it] seems likely an individual computer will have roughly the compute power of a human brain by 2029, 2030,” Goertzel said in his talk. “Then you add another 10/15 years on that, an individual computer would have roughly the compute power of all of human society.”
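To make the compounding concrete, here is a toy comparison of linear versus exponential growth, a minimal sketch in which the one-year doubling time and the starting values are illustrative assumptions, not figures from Goertzel’s talk:

```python
# Toy illustration of why exponential growth defies linear intuition.
# All parameters are illustrative assumptions, not figures from the talk.

def linear(start: float, step: float, years: int) -> float:
    """Capability after `years` of constant additive growth."""
    return start + step * years

def exponential(start: float, doubling_years: float, years: int) -> float:
    """Capability after `years` of compounding (doubling) growth."""
    return start * 2 ** (years / doubling_years)

for years in (5, 10, 15, 20):
    lin = linear(1.0, 1.0, years)       # +1 unit per year
    exp = exponential(1.0, 1.0, years)  # doubling every year (assumed)
    print(f"{years:>2} years: linear ~{lin:.0f}x, exponential ~{exp:,.0f}x")
```

After five years the two curves look similar; after twenty, the exponential one is roughly a million times higher, which is why forecasts anchored to linear intuition tend to undershoot.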

Goertzel credits large language models (LLMs) such as ChatGPT with waking the world up to the potential of AI, but he does not believe that LLMs themselves are the path towards AGI, as they do not demonstrate genuine understanding of the world, operating more like a spicy autocomplete.

However, he believes that LLMs could be one component of an AGI that moves us towards the singularity, perhaps in his own company’s OpenCog Hyperon.

“One thing we can plausibly teach a Hyperon system to do is design and write software code,” Goertzel wrote in an unreviewed preprint paper posted to arXiv. “LLMs are already passable at this in simple contexts; Hyperon is designed to augment this capability with deeper creativity and more capable multi-stage reasoning. Once we have a system that can design and write code well enough to improve upon itself and write subsequent versions, we enter a realm that could lead to a full-on intelligence explosion and Technological Singularity.”
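The passage describes a loop: the system designs a candidate successor, tests it, and adopts it only if it is genuinely better, then repeats. Below is a minimal conceptual sketch of that loop in Python; the `evaluate` and `propose_successor` stand-ins are hypothetical placeholders for illustration, not anything from the Hyperon paper:

```python
import random

# Conceptual sketch of a self-improvement loop (our illustration,
# not OpenCog Hyperon's actual design). Each iteration proposes a
# successor version, benchmarks it, and keeps it only if it improves.

def evaluate(version: int) -> float:
    """Stand-in benchmark score; a real system would run capability tests."""
    return version + random.random()

def propose_successor(version: int) -> int:
    """Stand-in for the system designing its own next version."""
    return version + 1

current, score = 0, evaluate(0)
for _ in range(5):
    candidate = propose_successor(current)
    candidate_score = evaluate(candidate)
    if candidate_score > score:  # adopt only genuine improvements
        current, score = candidate, candidate_score
print(f"reached version {current} with score {score:.2f}")
```

The “intelligence explosion” worry is precisely that, once each iteration reliably produces a better engineer than the last, the loop compounds in the same exponential fashion as the growth curves above.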

Goertzel has concerns about this prospect, as well as excitement for it. Proper safeguards would need to be in place before we open Pandora’s box, something we have not yet got a handle on. If the singularity is as close as Goertzel and other computer scientists believe (and that’s still a ginormous “if”), we’re under a lot of pressure to get things right, fast.

“My own view is once you get to human-level AGI, within a few years you could be at radically superhuman AGI, unless the AGI threatens to throttle its own development out of its own conservatism,” Goertzel added in his talk. 

“I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then [there would be] an intelligence explosion. That may lead to an increase in the exponential rate beyond even what [computer scientist Ray Kurzweil] thought.”

[H/T: Live Science]
