Last week, I wrote an analysis of “Reward Is Enough,” a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.
The article and the paper triggered a heated debate on social media, with reactions ranging from full support of the idea to outright rejection. Both sides make valid claims, but the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.
In this post, I’ll try to clarify in simple terms where the line between theory and practice lies.
Read the full article on TechTalks.
For more on the philosophy of artificial intelligence: