
The Clever Hans effect is something very specific. Hans was a horse who could appear to answer questions. A question was presented and answer options were listed until Hans indicated the correct one. Nobody was cheating, nor was Hans particularly intelligent. Hans had simply learned the unconscious cues his humans would give when they were listing the correct answer, a smile, say, or a certain posture.

So, when we look one layer deeper into Ben's commentary, how intelligent is the human/bot Ben really? I bet ChatGPT could do better, at least factually. Ben appears to have hallucinated a Clever Hans effect unrelated to the effect known in the literature.

I avoid social media in general, but I have been dipping into Twitter recently. How would the average Twitter user compare to answers generated by ChatGPT-4? Pretty poorly, I would think. Most Twitter posts lack any real intelligence. They simply repeat statements or trade insults. The reasoning shown on Twitter is weak.

So why do we insist on comparing ChatGPT to a hypothetical intelligence rather than to our own? We know of no intelligence that is that perfect. ChatGPT is leading me to wonder whether humans, in fact, work as generative models as well. I have had extensive open-ended discussions with it, and the only thing that disappoints me is the artificial barriers that the developers place on its functionality.
