Why competitions are a bad measure of AI progress
Last week’s announcement of AlphaCode, DeepMind’s source code–generating deep learning system, created a lot of excitement—some of it unwarranted—surrounding advances in artificial intelligence.
As I mentioned in my deep dive on AlphaCode, DeepMind’s researchers did a great job of bringing together the right technologies and practices to create a machine learning model that can find solutions to very complex problems.
However, the sometimes overblown media coverage of AlphaCode highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions designed for humans.
I like to think of AI, at least in its current form, as an extension of rather than a replacement for human intelligence. Technologies such as AlphaCode cannot think about and design their own problems, one of the key elements of human creativity and innovation, but they are very good problem solvers. This creates unique opportunities for very productive cooperation between humans and AI: humans define the problems and set the rewards or expected outcomes, and the AI helps by finding potential solutions at superhuman speed.
Read the full article on TechTalks.