Understanding AI's "representation" problem
Artificial intelligence researchers have come a long way in creating algorithms that can solve a wide range of complicated problems. But current AI algorithms are still a far cry from the general problem-solving capabilities of the human mind, also known as artificial general intelligence, the holy grail of AI research.
Current AI algorithms can solve specific tasks but can’t generalize their capabilities beyond their narrow domains. Why? Data scientist Herbert Roitblat argues in his book Algorithms Are Not Enough that the key shortcoming of current AI systems is their dependence on representations. They can only solve a problem that a human architect has already simplified for them, broken down into specific steps or represented as a set of input data and desired outcomes.
This need for representation limits the scope and usefulness of AI systems, Roitblat argues, and we need to think of new ways to create intelligent agents that can actively seek out new problems and find their solutions.
In my latest column on TechTalks, I discussed AI’s representation problem and spoke to Roitblat about his book. Read the article here.
More AI book reviews: