A generative AI of nonsense

Posted on Feb 17, 2025

I began exploring the personal website of Richard Stallman to deepen my reflections on the digital world and ways to “resist” the omnipotence of Big Tech companies.

There is a lot of content, and I started with What’s bad about ChatGPT.

Unsurprisingly, Stallman is not very enthusiastic about the use of this conversational agent, for several reasons, notably regarding the proprietary nature of the software (logical for the founder of GNU and the GPL).

Beyond these technical considerations, he offers his perspective on the terminology to adopt. Besides stating the obvious (AI is not intelligent because it is agnostic to truth, something anyone who does their homework already knows), he proposes on the gnu.org website a classification of terms to use (reproduced below):

*“Here is the terminology we recommend for systems based on trained neural networks:

  • The term ‘Artificial Intelligence’ is appropriate for systems that have the understanding and knowledge of a particular domain, whether narrow or broad.

  • ‘Bullshit generator’ is appropriate for systems like ‘Large Language Models’ (LLMs), for example ChatGPT, which generate bland verbiage that appears to assert things about the world without understanding the semantics of the verbiage they produce. This conclusion is supported by the article by Hicks et al. (2024), ‘ChatGPT is bullshit.’

  • ‘Generative system’ is appropriate for a system that generates artistic works for which the notions of truth and falsehood are not applicable.”*

This classification is interesting, particularly the considerations regarding LLMs and generative systems.

The distinction made between these two categories rests, it seems to me, on a debatable reference point: he reserves “generative system” for productions to which the notions of truth and falsehood do not apply.

By calling the LLM a “bullshit generator,” one attributes intentions to its use. This subjective view reminded me of Florence Maraninchi’s article, Why I don’t use ChatGPT. I have already briefly commented on this article under Yannick Meneceur’s post, and I continue to believe that all these considerations rest on a series of assumptions and biases that reflect a lack of open-mindedness. On this subject, Florence Maraninchi makes no secret of her position and states, without ambiguity:

  • “I have never been tempted to try it myself.”

  • “The following overview is exclusively critical. Since the media space is saturated with political promises and glowing articles, this can be seen as a small exercise in rebalancing the discourse.”

In short, announcing a rebalancing of the discourse while being exclusively critical, and without ever having tried the tool, is somewhat disappointing coming from a figure in the academic world. However, this one-sided article does confirm certain excesses (energy consumption, pollution, slop, and other problems that arise when generative AI is used to favor quantity over quality) that “influencers” and other AI experts sometimes set aside.

That said, calling an LLM a bullshit generator is a rather marginal viewpoint, and the reasons behind this characterization deserve examination. In her aforementioned article, Florence Maraninchi explains her refusal to use the tool, notably stating that her reasoning is supported by her “distrust of opaque, non-deterministic, and untestable systems, but it is also nourished by political positions.”

While one should certainly not worship these generative AIs, considering them good only for generating bullshit is an excessive stance that perhaps reveals a lack of research on the subject and unreasonable expectations about what the tool can do. The LLM has no infinite wisdom. It does not hold the truth and is even agnostic to it, as written above. So why feign outrage on this point? What would one say about someone who tries to hammer a nail with a saw and then complains that the serrated blade is useless for driving in a piece of metal?

Let us take a socially acceptable use case: ask an LLM to reformulate scientific content so that a 10-year-old (or a 20-year-old) can understand it, and you will see the power of the tool. I am not saying that AI will replace teachers or schools, but I believe we can legitimately consider LLMs as very good tools for coaching and learning, for example. The UK Department for Education released a policy paper on the subject on January 22, 2025, which states, for example: “If used safely, effectively and with the right infrastructure in place, AI can ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life”.

I do not want to come across as an uncritical, unnuanced defender of AI, but we are seeing a growing number of critiques of this type delivered with, let us acknowledge, unnecessary vehemence. I too wanted to rebalance the discourse, while noting that criticism is easy but art is difficult. Beyond their obvious, openly displayed bias, these critiques and comments offer no proposed solutions, and I would be curious to read a constructive critique with concrete proposals for improving these tools.