Revisiting Rich Sutton's "The Bitter Lesson"

By Rich Heimann

In a popular blog post titled "The Bitter Lesson," Richard Sutton argues that AI's progress has come from cheaper computation, not from human design decisions based on problem-specific knowledge. Sutton diminishes researchers who build their understanding of a problem into solutions to improve performance. This temptation, Sutton explains, yields short-term performance gains, and such vanity satisfies the researcher. However, such human ingenuity comes at the expense of AI's divine destiny, inhibiting the development of a solution that does not want our help understanding a problem. AI's goal is to recreate the problem-solver ex nihilo, not to solve problems directly.