2 Comments
Jul 27 · Liked by Ben Dickson

Good point. Another one: LLMs were trained mostly on human-to-human text. That text contains both politeness (and its consequences) and rudeness (and its consequences). Impolite prompts resemble the parts of the training data that tend to be followed by less useful, less positive responses. So if I want useful and positive responses, it's better to be polite to the LLM.

author

You're right. That's a practical way to look at it.
