Why AI will not drive humans into extinction
With impressive advances in artificial intelligence, especially large language models (LLMs), there is growing concern about the negative impact these technologies can have. Among these concerns is the fear that AI will drive humans into extinction.
In his latest article for TechTalks, data scientist and author Herbert Roitblat writes that it is “ludicrous to think that a model that predicts the next word should be considered as the same level of threat as nuclear war or pandemics!”
Roitblat argues that exaggerated claims about the capabilities of contemporary AI lead to the wrong conclusions and cause us to underestimate real threats to humanity.
Key findings:
Today’s state-of-the-art artificial intelligence methods, large language models, are based on the transformer architecture. They operate by learning to guess the next word, given a context of other words (see the sketch after this list).
At best, large language models might be said to exhibit a limited kind of “crystallized intelligence,” the ability to use known information to solve problems.
Large language models, though useful for many things, are thus simply not capable of presenting a direct existential threat to humanity. Guessing the next word is not sufficient to take over the world.
Language models do not represent a breakthrough in artificial general intelligence; they are just as focused on a narrow task as is any other machine learning model. It is humans who attribute cognitive properties to that singular task in a kind of collective delusion.
We do not have to speculate about the harmful effects of nuclear war or pandemics, but any claims about the lethality of artificial intelligence are based purely on unjustified speculation. On the other hand, it does not take much speculation to predict that someone with a fear of artificial intelligence might take it upon themselves to mitigate that danger by attacking AI researchers.
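To make the “guess the next word” framing concrete, here is a minimal sketch using the Hugging Face transformers library and the small GPT-2 model. These choices are purely for illustration; Roitblat’s article does not reference any specific model or code. The point is that the model’s entire output is a probability distribution over possible next tokens.

```python
# Minimal sketch of next-token prediction (assumes the "transformers" and
# "torch" packages are installed; "gpt2" is used only as an illustrative model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The capital of France is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    # logits has shape (1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's output for the last position is a score for every possible
# next token; softmax turns those scores into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id):>10}  {prob.item():.3f}")
```

Everything an LLM does, from chat to code generation, is built by repeatedly sampling from distributions like this one, which is the basis for Roitblat’s argument that the capability being extrapolated from is narrower than the doomsday scenarios assume.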
Read the full article on TechTalks.
More articles by Herbert Roitblat:
Read my review of Herb’s book: