Chain-of-thought tokens don't reflect genuine reasoning in LLMs, so calling them "reasoning" is misleading. They're navigational aids, devoid of true cognitive processing or reliability.
I’m skeptical about CoT despite the original paper showing an accuracy increase from 20% to over 50%. I can’t stop wondering where the difference lies between an actual new thought and ‘showing your work’.
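For concreteness, here's a rough sketch of what that "showing your work" change amounts to at the prompt level. Nothing here is from the thread or the paper verbatim: the question is a made-up GSM8K-style example, and the zero-shot trigger phrase is a later, simpler variant of the few-shot worked examples the original paper used. The reported accuracy jump comes from this kind of prompt change alone, with no new weights or extra supervision.

```python
# Sketch of the two prompting styles being compared (illustrative only).

QUESTION = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls, "
    "each with 3 balls. How many tennis balls does he have now?"
)

# Standard prompting: the model is asked for the answer in one shot.
direct_prompt = f"Q: {QUESTION}\nA: The answer is"

# Chain-of-thought prompting: the only change is that the prompt elicits
# intermediate steps before the answer. The original paper did this with
# few-shot exemplars containing worked solutions; the zero-shot trigger
# phrase below is a later, simpler variant.
cot_prompt = f"Q: {QUESTION}\nA: Let's think step by step."

if __name__ == "__main__":
    print(direct_prompt)
    print("---")
    print(cot_prompt)
```

Whether the extra tokens the second prompt elicits are "an actual new thought" or just a longer path to the same answer is exactly the open question here.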