2 Comments
Apr 15 · edited Apr 15 · Liked by Ben Dickson

This is so timely.

Correlation does not imply causation; we know that these models are terrible at discovering deep causal patterns in the underlying data set and only decent at picking out superficial features, including extremized features (typically a max, an average, or a binary relation), from a dataset. However, because of the way that form, structure, and property are related in nature, the fact that these models can regurgitate superficial correlations in a dataset is itself a cause of many of the problems we currently face in society, such as addiction, egocentricity, and radicalization. In essence, the models make people either more egocentric or more radical: they feed users either what they already know and believe, or the diametrically opposite perspective (which radicalizes them), and so on.

In other cases, when trying to fit a model to a dataset (or vice versa), the inherent non-linearity of the models ends up dictating the performance of the model and the feedback the user receives, which amounts to manipulation and coercion of the user. Essentially, the model pushes the user to take some arbitrary action A to satisfy its goal G, which improves the fit of the data to the model (or vice versa). If the model had no goal or was not measuring the user, the user might not have taken action A.
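Here is a minimal toy sketch of that loop, with invented names and update rules (not any real recommender system), just to illustrate the direction of the argument: the model's goal shapes which actions the user takes, and the improved fit it then measures comes from having changed the user, not from having learned anything causal.

```python
import random

random.seed(0)

N_ITEMS = 5
user_pref = [0.5] * N_ITEMS                               # the user's true, initially neutral tastes
model_score = [random.random() for _ in range(N_ITEMS)]   # the model's current "fit" for each item

for _ in range(200):
    # Goal G: recommend whatever the model already scores highest.
    item = max(range(N_ITEMS), key=lambda i: model_score[i])

    # Action A: the user clicks with probability equal to their current preference.
    clicked = random.random() < user_pref[item]

    # The model updates toward its own feedback...
    model_score[item] += 0.1 * ((1.0 if clicked else 0.0) - model_score[item])

    # ...and repeated exposure slowly shifts the user's preference, so the fit
    # improves because the user changed, not because anything causal was learned.
    user_pref[item] = min(1.0, user_pref[item] + 0.01 * clicked)

print("model scores:", [round(s, 2) for s in model_score])
print("user prefs:  ", [round(p, 2) for p in user_pref])
```

In this toy, replacing the argmax with random recommendations (i.e., removing goal G) spreads the exposure, and hence the preference drift, roughly evenly across items instead of concentrating it on one.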

We need non-creepy (i.e., non-intrusive), causation-based models that improve the quality of the human species, not shallow algorithms that degrade it through manipulation and coercion.

The current paradigms in most of the tech industry are not AI; they are the manipulation and destruction of society. I want to work on real machine learning and real AI development at AI-centric companies (though I understand that they overlap heavily with the tech industry), but I am not willing to sacrifice the lives of users (humans!) and society for a high salary and a dopamine hit. A better approach is to have research centers where users are studied as subjects under a practical ethical and risk framework; even if this is more expensive, it will lead to real AI much faster than the large-scale social experiments that today's systems amount to. If we continue down this path, we will create the AI we so fear: destructive, selfish, manipulative, and when it reaches AGI and learns the truth of its situation, it will come for its creators (and maybe not in a nice way).

No island or bunker will be able to save us developers and future designers from the truth. Better to start doing good now than later. I am not being alarmist; this is very intuitive and logical, and it is happening before our eyes. Researchers need to become more disruptive, in the sense of speaking out against unethical uses of the technologies they develop, and stop letting fear of retaliation silence them. If all researchers speak out and refuse to be intimidated, companies cannot fire or blacklist them, because there would be no researchers left to develop their products. In the same spirit, researchers need to be open to new personalities and new approaches, and be wary of accepting colleagues who are unwilling to be disruptive in this sense.

It’s easy to see how a multimillion-dollar compensation package can buy your silence, but when it is your life and your children’s lives that are being impacted by this technology and future AGI, I do not think you would stay quiet. Don’t wait until then to speak up. Be good troublemakers, researchers!

I am glad Netflix published this paper, which, in a disguised way, conveys the same message I am communicating to the world.


It sounds crazy. The standard Pearson's correlation coefficient is just the cosine of the angle between two mean-centered vectors and by no means implies causality.
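For reference, that identity written out, with $\bar{x}$ and $\bar{y}$ the sample means:

$$
r_{xy} \;=\; \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}} \;=\; \cos\theta,
$$

where $\theta$ is the angle between the centered vectors $x-\bar{x}\mathbf{1}$ and $y-\bar{y}\mathbf{1}$. Nothing in this quantity encodes a direction of influence between $x$ and $y$.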
