The myth of artificial intelligence
Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.
But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions have led AI research down narrow paths that are limiting innovation and scientific discovery.
And unless scientists, researchers, and the organizations that support their work change course, they will be doomed to "resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests."
In my latest column on TechTalks, I reviewed Larson’s book and spoke to him about abductive inference, the blind spot of contemporary AI.
Read the full article on TechTalks.
For more AI book reviews: