AI makes you write nonsense.
This morning, I read a newsletter in which the author wrote:
“AI is no longer a trend, but a silent earthquake.”
A few weeks ago, I had spotted similar content: the reflections of a self-styled AI “thinker” who said he used it to explore subjects he was unfamiliar with. In that context, he explained how AI had led him, through his questioning, “down unexpected paths.”
These observations are not recent, but I have the feeling they are becoming increasingly numerous. Content generated with the help of artificial intelligence is being published without being truly understood by its authors. Until now, I had refrained from commenting, thinking I was splitting hairs, but this new episode makes me want to voice this frustration and try to understand why it makes my hair stand on end (to stay with the hair metaphor).
Obviously, if in 2025 one considers that AI is no longer a trend but a “silent earthquake,” that person clearly does not live in the same world as I do. AI is everywhere: in the media, in social media posts. The worst part is that the person who wrote this sentence was presenting, in the same text, five conferences they would give on artificial intelligence at a major Parisian event in late November 2025. It is therefore quite a contradiction to speak of a silent earthquake while announcing a two-day public event almost exclusively dedicated to AI.
Regarding the notion of “unexpected paths” mentioned above, I was also quite irritated. The author explains how they use AI as an exploratory tool. On substance, I like this idea; it is one of the ways I use generative AI myself. But the author approaches AI precisely without any particular expectation, since they want to “explore” (and thus discover) what they do not know. Calling the result produced by artificial intelligence an “unexpected path” implies that a certain path was expected in the first place: for a path to be unexpected, one must have anticipated another. If you explore without expectation, the term “unexpected” loses its meaning, because nothing was anticipated in advance. And that, precisely, was the whole point of the author’s reflection.
What strikes me is that, although the “machine-like,” repetitive style characteristic of the early days of generative artificial intelligence has been humanized, the meaning often remains hollow, and now contradictory, amid a sort of generalized indifference. Content is generated by generative AI from a few ideas and cobbled-together instructions to write “like” a person. The fluidity has improved, and sentence and paragraph lengths are more varied, giving a more “personal” feel, but the meaning remains superficial (in the sense of appearance, neither deep nor essential) because it is artificial (in the sense of not natural).
This artificiality of content amounts to mediocrity. This mediocrity has already been observed in new forms of digital expression. As Julia Cagé and Dominique Cardon wrote:
The informational space has opened up to a much greater diversity of voices and, above all, the public is no longer an aggregate of silent spectators as with the press, radio, or television. One may bemoan the dangers, mediocrity, or vacuity of new forms of digital expression, but there is no going back.
This mediocrity of new forms of digital expression therefore extends, unsurprisingly, to content published by users who rely on generative AI. A few months ago, I wrote here:
we must remain vigilant so as not to fall into cognitive lethargy and settle for a response regurgitated by the machine. (…) this fascination with AI-generated content primarily reveals our collective acceptance of mediocrity.
We are therefore right in the thick of it, and what I feared is manifesting with distressing regularity.
I would like not to be as fatalistic as Cagé and Cardon. I hope we can, if not turn back, at the very least correct course, straighten the helm, and not sink further down a path that has nothing beneficial about it.
In my estimation, the reason for all this is most certainly linked to the pursuit of efficiency and the desire to be visible, particularly through newsletters, which feed a phenomenon of overproduction. Nothing new under the sun, you might say, but artificial intelligence is becoming the magic recipe for producing content quickly, with an appearance of quality that is in fact mediocre.
Today, I have the feeling that these details very often go unnoticed because readers themselves are now immersed in this same logic of efficiency. They subscribe to countless newsletters; at best they skim them, at worst they let them pile up in their inbox and ask AI to summarize them. This is, moreover, the provocative example we are beginning to see in critiques of AI: a 300-page report written by AI, too voluminous for its recipient, who in turn runs it through AI to summarize it, and so on… We are going in circles.
I will conclude with a passage from Anne Alombert, who writes in Thinking with Bernard Stiegler:
The potentially toxic effects of certain prematurely released digital devices call for the design and experimentation of concrete therapeutic practices, as well as technologies alternative to those of the disruptive industries of Silicon Valley.
Here is an unfortunate illustration of this. I do not know what concrete therapeutic practices could be implemented at this stage, but I modestly hope that this “outburst” will lead you to reflect on this subject.