The AI Winter Is Coming in 2024, a Top Scientist Predicts

2023 was the year when the hype around artificial intelligence (AI) went into hyperdrive. Following its release in late 2022, ChatGPT made AI technology accessible and genuinely useful to the general public, prompting the development of numerous other Large Language Models (LLMs) by some of Silicon Valley’s mightiest giants. AI was the word on everyone’s lips last year, but could it be set to enter a period of stagnation?

Rodney Brooks believes so. Brooks is a former director of the Computer Science and Artificial Intelligence Laboratory at MIT who regularly comments on technology’s progress (or lack thereof). Since 2018, he has posted annual predictions for self-driving cars, human space travel, and – last but not least – robotics, AI, and machine learning. He has promised to keep making the forecasts each year until 2050, when he’ll turn 95.

In his latest scorecard, Brooks predicted that 2024 won’t be a golden age for AI, noting that the current fanfare is “following a well worn hype cycle that we have seen again, and again, during the 60+ year history of AI.”

“Get your thick coats now. There may be yet another AI winter, and perhaps even a full-scale tech winter, just around the corner. And it is going to be cold,” Brooks concluded.

Brooks is far from a pessimistic Luddite. He’s been studying AI since the 1970s and has been dubbed “one of the world’s most accomplished experts in robotics and artificial intelligence.” If he seems cynical, it’s simply because he’s seen it all before; all the publicity, letdowns, false promises, and setbacks. Take a look at his former predictions and you’ll see his technological prophecies are often right on the money.

When talking about AI in his 2024 scorecard, Brooks is referring to LLMs: chatbot systems like ChatGPT and similar offerings from the likes of Microsoft’s Bing and Google DeepMind. While he believes these AI systems are capable of some impressive feats, he thinks they don’t have the capability to become an all-powerful, earth-shattering Artificial General Intelligence. In his mind, these systems lack true imagination and genuine substance.

“[I encourage] people to do good things with LLMs but to not believe the conceit that their existence means we are on the verge of Artificial General Intelligence,” Brooks added.

“There is much more to life than LLMs.”

Speaking in an interview with IEEE Spectrum, Brooks went deeper into his criticism, explaining how even advanced LLMs still make regular mistakes when given relatively simple coding problems.

“It answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, ‘That didn’t work,’ and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up,” said Brooks.

Ultimately, he believes, LLMs have a long way to go before they can be considered anything like fully fledged Artificial General Intelligence, because they are merely clever wordsmiths, not uber-intelligent beings. If his musings are accurate, the same could be true of GPT-5, GPT-6, and beyond.

“It doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language,” Brooks explained.

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be,” he added.
