On Mass Decerebration

Posted on Apr 21, 2025
tl;dr: Following a LinkedIn post by Dominique Boullier, I react to the idea that generative AI is a mass decerebration enterprise. This seems like too radical a position. Yes, technology raises questions (ecological, cognitive, etc.), but condemning it wholesale doesn't help. It's not just a supply-side question — it's also about usage. We can't just blame Big Tech without questioning our own relationship with the tools we use. By outsourcing everything, we end up abdicating responsibility.

Interesting exchange on LinkedIn following a post by the sociology professor Dominique Boullier. A quick visit to his Wikipedia page gives an overview of his background (it's worth a look). As is often the case, a seemingly reasonable position subtly conceals a rather divisive stance. The author of the post considers that:

The entire response system of generative AIs is a mass decerebration enterprise.

I had already jumped out of my chair at the provocative title of an op-ed published in l'Humanité on March 31, 2025: "Is AI going to make us morons?" On substance, two academics take up the pen to share their views: I was drawn to Anne Cordier's nuanced, user-centered approach, while Marius Bertolucci's critical stance left me somewhat unsatisfied.

While it is clear that technology companies are not philanthropic and do seek to enrich themselves, concluding that they are enterprises designed to produce decerebrated morons is a step I don't think we should take.

Tech Is Bad

First, because this hypothesis rests on a debatable postulate: that everything "generative AI" companies do is bad and harmful. This position is excessive. While not everything is rosy, condemning these new technologies globally and indiscriminately makes no sense to me. Google, for example, arguably revolutionized our society with its search engine, and its offering of "free" services has enabled many people to exchange and communicate. As Anne Cordier notes in l'Humanité, with every new development it's the same refrain:

The alarmist prophecies repeated with every technological novelty flourish about the announced cretinization of the masses, and even more so about that of a youth already regularly labeled as (digitally) cretinous.

I am surely biased and somewhat convinced of the virtues of technology. I understand that technology wreaks havoc in many respects (ecological, cognitive, …), and one must obviously guard against unshakeable optimism. I am not saying we should be blinded by the advantages and minimize the drawbacks, nor that we should remain uncritical of the risks that "unconscious" (or unaware) use of technology entails. But I always come back to what Alfred Sauvy wrote:

Don’t complain that technological progress destroys jobs, that’s what it’s for.

If we transpose this to our topic, we should not be surprised that technological progress challenges our perceptions and mental models. Listening to some commentators, one could almost come to regret that we evolve technologically at all.

Pointing out the negative effects of technology must not harden into a retrograde posture. The critical stance of some leads to dead ends, and it is striking how rarely this type of approach proposes any solutions. Criticism for criticism's sake, with nothing suggested in its place. Not very exciting…

I actually believe this absence of proposed solutions weakens these criticisms, which stop at surface-level problems and perhaps reflect a lack of depth on the subject. It is never quite clear what the actual problem is: the product, the usage, or the capitalist design of AI system providers?

Tech Companies Are Bad

The post under discussion points to corporate responsibility: companies, the argument goes, run a mass decerebration system through their product.

While there is indeed a share of responsibility, I find it somewhat simplistic to single out one culprit. I tried, perhaps clumsily, to draw a parallel with the tobacco industry to illustrate my point. Society helps tobacco victims while "allowing" manufacturers in this sector to prosper by selling their products.

If we allow tobacco companies to keep distributing their products, it is because it is socially accepted (for reasons that vary from one individual to another) that these companies do so. We raise awareness about the harms of tobacco without criticizing the manufacturers themselves.

Why?

Because we live in a democratic society that has chosen to tip the scales toward individual freedoms, and within a capitalist model. Can we objectively criticize Big Tech?

Did Big Tech seize power by force, or did we give it to them?

I believe the heart of the problem lies there. Critical discourses about technology rarely question our own consumption. "It's Big Tech's fault, and we, poor users, are victims of these decerebration practices." I think this pattern of thinking is precisely what got us into this situation. While Big Tech bears responsibility, we should not forget the users' responsibility: we have, sometimes unconsciously, handed power to Big Tech by mindlessly consuming their digital products and services.

Don’t Absolve the User of Responsibility

Absolving the user of responsibility and pointing the finger at Big Tech would amount to "infantilizing" them. "It's not your fault" implies an absence of self-questioning and of awareness of the situation, the risks, the biases, and all the elements that make the world complex.

Let us be more nuanced and understand that we must also question our own practices and our model. The pursuit of efficiency in all things makes us dependent on technology, because the promise made to us answers to a model we have "decided" to support.

We, the users, must take responsibility, individually but also collectively, and agree on the place we want to give to technologies and on how they may shape our lives and our society.