Many Artificial Intelligence Researchers Think There’s A Chance AI Could Destroy Humanity

A survey of researchers working in artificial intelligence (AI) suggests that the field as a whole believes the rate of progress is speeding up and could benefit humanity in all sorts of ways, though many respondents also have concerns about potential downsides to the race towards more advanced AI. 

The survey, which has not yet been peer-reviewed, asked 2,778 AI researchers an array of questions on AI topics, including how good or bad high-level machine intelligence (HLMI) will be for humanity. Participants were asked to estimate the percentage likelihood that future AI advances will cause “human extinction or similarly permanent and severe disempowerment of the human species”. The mean prediction put the odds at 5 percent, and a separate question asking about the chances of the same outcome within a 100-year timescale produced the same mean prediction.

“Depending on how we asked, between 41.2 percent and 51.4 percent of respondents estimated a greater than 10 percent chance of human extinction or severe disempowerment,” the team added in their study. “This is comparable to, but somewhat higher than, the proportion of respondents — 38 percent — who assigned at least 10 percent to ‘extremely bad’ outcomes ‘(e.g. human extinction)’ in the question asking ‘How good or bad for humans will High-Level Machine Intelligence be?’.”

The survey had the advantage of comparison with the results of the same survey conducted in 2022. Overall, participants believed that milestones such as AI automating all jobs or writing a New York Times bestselling novel would be reached earlier than respondents predicted back in 2022. Then, the average prediction for the year AI would write a bestseller was after 2050; in the latest survey, perhaps due to excitement around chatbot progress over the last year, it was slightly before 2030. Other language-based tasks saw similar shifts in their predicted timescales. Tasks such as driving a truck and competing against humans in marathons were still predicted to arrive further into the 2030s, though those predictions have also been moved forward slightly.

“While the range of views on how long it will take for milestones to be feasible can be broad, this year’s survey saw a general shift towards earlier expectations,” the team explained. “Over the fourteen months since the last survey, a similar participant pool expected human-level performance 13 to 48 years sooner on average (depending on how the question was phrased), and 21 out of 32 shorter term milestones are now expected earlier.”

“In general, there were a wide range of views about expected social consequences of advanced AI, and most people put some weight on both extremely good outcomes and extremely bad outcomes,” they concluded. “While the optimistic scenarios reflect AI’s potential to revolutionize various aspects of work and life, the pessimistic predictions — particularly those involving extinction-level risks — serve as a stark reminder of the high stakes involved in AI development and deployment.”

The survey results are available as a pre-print published on the AI Impacts website.