Why you should be nice to AI
When you make mean remarks to an LLM, it is not the AI that gets hurt—it's you.
Let’s be clear: this is not a warning about robots taking over the world and killing everyone who was ever mean to them (though the memes are amusing).
It’s more of a rant about why being mean to AI can negatively affect you. Over the past few months, I’ve worked with several people who were interested in using large language models (LLMs) such as ChatGPT in their daily work. Their tasks ranged from drafting essays to writing and debugging code.
For most people, the initial interaction with an LLM is awe-inspiring. Interestingly, programmers who know nothing about how LLMs work often find it even more fascinating than people with no technical background do.
Though imperfect, being able to have a computer do things through natural language is very useful. I’ve seen people integrate LLMs into their daily work and get things done faster and more efficiently.
However, I’ve also seen a darker side to interacting with language models. On the surface, LLMs are very human-like and can handle simple tasks quite well, which makes the experience feel convenient and natural.
But as soon as your task becomes specialized and requires some back-and-forth with the model, things can get frustrating, because the model might misunderstand your intent. And I’ve seen more than a few people get vulgar and hurl profanity at the models when they don’t get the responses they expected.
When I ask them why they act this way, the answer is usually something like, “It’s just a stupid robot, and it has no emotions.” They’re right. LLMs have no emotions. They are not sentient. Once they are trained, their parameters are locked in, and every new chat session starts the model from a blank slate with no memory of past interactions (unless memory features are enabled).
I’m not worried about the model’s feelings—I’m worried about the humans who normalize such behavior in their professional interactions. Unlike machine learning models, the human brain does not have separate training and inference stages. Every experience rewires your brain, and repetition forms new habits.
But why is this important with respect to language models? The responses of LLMs are becoming increasingly human-like. They have already beaten the classic text-based Turing test (although they are still far from replicating many aspects of human intelligence). We are already assigning them tasks that we previously would have done ourselves or handed to a colleague. And there are already platforms in which AI agents are added to conversations to cooperate with humans. The role of LLMs and AI agents will only grow with time.
How long will it take for abusive behavior toward AI agents to spill over into human relations? When you share the same chat application with both humans and LLMs, it is easy for these habits to carry over from bots to humans.
We have seen similar trends before. The pseudo-anonymity of the internet allowed people to behave in ways they never would have when facing others in the physical world. But the habits formed on the internet eventually found their way into real life.
Something similar is happening here, just in a different form. We start by dismissing LLMs as non-sentient entities and allow ourselves to engage in abusive behavior. But in the process, it is not the LLMs that we hurt. It is ourselves.
I am all for harsh criticism and candid feedback. But a workplace also has to have a professional code of conduct. These warnings might sound funny and absurd right now. I hope we can say the same thing two years from now.
So the next time a language model frustrates you, try to be nice and go read a prompt-engineering manual—not to please our robot overlords but for your own sake.
Good point. Another one: LLMs were trained mostly on human-to-human text. That text contains politeness (and its consequences) as well as the lack thereof (and its consequences). A lack of politeness in the training data is therefore likely to be followed by less useful and less positive responses. So if I want useful and positive responses, it’s better to be polite to the LLM.