AI is a transformation

Posted on Mar 24, 2025

In early March 2025, I came across a reflection on the distinction between SaaS business models and those built around artificial intelligence. While the SaaS model essentially provides standardized services accessible by subscription, AI, particularly through large language models (LLMs), introduces a different economic logic. It cannot be treated as just another added feature, because it profoundly reshapes all of an organization’s internal processes, going well beyond a technological improvement or a new product characteristic: it is a genuine transformation.

Personally, I was drawn to this way of looking at things because it quite faithfully captures the impact of AI on our lives. Describing AI as a transformation is, however, nothing new.

Indeed, AI is not merely a one-off optimization of existing processes but represents a structural transformation of the way we work, interact, and design solutions. It changes:

  • our relationship with knowledge, by making gigantic amounts of information accessible,
  • our practical approach to daily tasks, by enabling the automation of activities that were previously complex or repetitive, and
  • our cognitive mechanisms themselves, since it directly intervenes in the way we make decisions.

Anne Alombert has shared several interesting reflections on the consequences of the delegation we make to AI. (See in particular The pharmakon of generative AI: How algorithmic automatons transform our minds and the paper Healing artificial stupidity).

Although Anne Alombert’s perspective is quite critical of AI, I believe it nevertheless provides a relevant framework for understanding the “cognitive revolution” brought about by the arrival of AI in our lives.

The main point is that, according to her:

the know-how of writing (and with it the know-how of thinking) has here been externalized into the digital machine in the form of algorithmic automatisms.

This point is important because it allows us to target the real challenges we may face by “excessively” using AI in our lives. However, I believe AI remains a transformation in at least three aspects of our personal and/or professional lives.

The three dimensions of transformation through AI

Generative artificial intelligence is redefining our world at a dizzying pace, transforming our knowledge, our know-how, and our ways of thinking. This technology, capable of creating content from existing data, opens up perspectives in various fields.

The transformation of our knowledge

LLMs are ultra-connected libraries. Technically minded readers may balk at this shortcut, which is strictly speaking incorrect, but the image is instructive.

LLMs have been trained on an astronomical volume of data. To give an order of magnitude, GPT-3 was trained on the equivalent of 2,600 years of uninterrupted reading, which far exceeds any human capacity for assimilation. It is materially impossible to have enough “human” time to review all this data. The machine is therefore stronger than the human here.
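That order of magnitude can be checked with a quick back-of-envelope calculation. The figures below are assumptions of mine, not from the article (roughly 300 billion training tokens for GPT-3, about 0.75 words per token, a reading speed of 200 words per minute), but they land in the same ballpark of a couple of thousand years:

```python
# Back-of-envelope: how long would it take a human to read GPT-3's training data?
# All figures are rough assumptions, chosen only to check the order of magnitude.
TOKENS = 300e9            # ~300 billion training tokens (approximate)
WORDS_PER_TOKEN = 0.75    # a token is a bit less than a word on average
WORDS_PER_MINUTE = 200    # a steady adult reading speed

words = TOKENS * WORDS_PER_TOKEN
minutes = words / WORDS_PER_MINUTE
years = minutes / (60 * 24 * 365)   # reading non-stop, day and night

print(f"{years:,.0f} years of uninterrupted reading")  # roughly 2,140 years
```

Changing any assumption shifts the result, but every plausible combination yields millennia of continuous reading, far beyond a human lifetime.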

This is worth spelling out because some critiques of technology, including long-standing ones, hold that incremental advances (doing better than humans) can be “useless,” in the sense that efficiency gains impose direct or indirect harmful constraints on our lives; one should therefore not innovate at all costs. This line of thought runs through degrowth theories and the low-tech movement. For example, mechanized tillage may yield better production but has a significant impact on soils and natural balance, which is why agriculture is increasingly moving toward gentler methods that are more respectful of nature. In this sense, technical progress is considered “useless,” or at least not necessary. (I am taking some shortcuts here. If you wish to dig deeper into degrowth and low tech, I refer you to the best-sellers by Timothée Parrique, Philippe Bihouix, or Kate Raworth, which helped me understand the subject more precisely.)

That being said, the difference in technical progress here is that the computer performs tasks previously inaccessible to humans. As Jacques Ellul wrote:

there is therefore no competition between (the machine and man). The ideology of the servant or rebel robot, or of the computer eventually replacing man in the evolutionary process of beings, all these are stories that prove that those who speak of the computer have not yet understood what the computer is and proceed by anthropomorphism. It is not enough to say that the computer can do this and that, etc. All this talk is absurd: the sole function of the computing system is to enable the flexible, informal, purely technical, immediate, and universal connection between technical sub-systems. (Jacques Ellul, “Technology considered as a system”)

By defining the role of technology and already rejecting simplistic anthropomorphism (which is omnipresent today), Ellul reminds us that computing is a connection technology. And this is precisely one of the great strengths of LLMs: their ability to link diverse knowledge, well beyond human capabilities.

Does this mean that the machine is better than humans? Some are tempted to think so, but if we focus on the notion of knowledge, I believe it is useful to take a detour through Gilbert Simondon’s theory, which distinguishes “machine memory” from “human memory.”

  • For machine memory: we know it can retain a very large set of data, sometimes complex, faithfully and unaltered. However, there is no structure or hierarchy in the retained content.

  • For human memory: it is almost the opposite. The amount of data retained is reduced, but the connections between data are strong. The ability to relate concepts and to attach new things to existing knowledge is a characteristic of our human memory.

To return to the words of Georges Canguilhem, cited by Anne Alombert:

The perceived present is immediately connected to a set of forms or schemas resulting from past experience: “Acquired experience serves as a code for new acquisitions, to interpret and fix them.” (The brain and thought, 1980, in Georges Canguilhem, philosopher, historian of science, Albin Michel, Paris, 1993, p. 21)

Returning to LLMs, they are therefore not a pure knowledge base. I will not go into the details of how LLMs work here, and to avoid paraphrasing, I will limit myself to reproducing the explanation given in a post by Génération IA:

A model like Claude or GPT does not have a database where it stores information that it could simply retrieve. Its knowledge exists in the form of connections between concepts, encoded in billions of mathematical parameters. During training, the model went through immense amounts of text and learned the relationships between words, ideas, and patterns of thought. When I query it, it does not retrieve a specific document to answer me, but reconstructs a response by predicting the most probable word sequences in that context. A simple direct question activates only the most obvious connections, producing a superficial response. But a structured dialogue stimulates deeper connection networks.

Consequently, and subject to biases and hallucinations, LLMs allow us to quickly grasp a large volume of information on any subject by creating connections between data — something that was until then essentially a competence of human memory alone.
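The mechanism described in the quote above, predicting the most probable continuation from learned statistics rather than retrieving stored documents, can be illustrated with a toy bigram model. This is a deliberately crude stand-in for the billions of parameters of a real LLM, meant only to show the principle:

```python
from collections import Counter, defaultdict

# Toy next-token prediction: a bigram model that, like an LLM (at a vastly
# smaller scale), stores no documents, only statistics about which word
# tends to follow which.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word."""
    return follows[word].most_common(1)[0][0]

def generate(word, length=4):
    """Generate text by repeatedly predicting the next word."""
    out = [word]
    for _ in range(length):
        word = predict_next(word)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # → the cat sat on the
```

Nothing in `follows` is a copy of the corpus; the model only “knows” transition statistics, which is why the output is a plausible reconstruction rather than a retrieved record.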

Our knowledge is “multiplied” with LLMs in the sense that we have access to this hyperconnected library. This multiplication of knowledge is not without danger but remains revolutionary and above all transformative. By using certain prompt techniques or by employing a RAG-type system, hallucinations are reduced through constraints (from the prompt or RAG sources), enabling us to generate better quality content on a wide range of subjects.
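The RAG idea mentioned above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the retriever is a toy word-overlap scorer (real systems use embeddings and a vector index), and `call_llm` is a hypothetical placeholder for whatever model API you use:

```python
import re

# Minimal sketch of Retrieval-Augmented Generation (RAG): constrain the
# model's answer to retrieved sources in order to limit hallucinations.
documents = [
    "GPT-3 was trained on hundreds of billions of tokens of text.",
    "Perplexity is an AI-powered search engine that cites web sources.",
    "RAG injects retrieved documents into the prompt to ground answers.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank documents by shared words with the query (toy retriever)."""
    q = tokens(query)
    scored = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Constrain the model to the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the sources below; say so if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("What is RAG?", documents)
print(prompt)
# The prompt would then be sent to an LLM of your choice (hypothetical call):
# answer = call_llm(prompt)
```

The constraint lives entirely in the prompt: the model is instructed to stay within the retrieved sources, which is what reduces (without eliminating) hallucinations.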

To see for yourself, try Perplexity, an AI-powered search engine that constructs a response from content found on the web. ChatGPT’s “Deep Research” feature (an equivalent is now available in nearly all LLMs, depending on the version chosen) goes even further, exploring internet sources to build a relevant, well-sourced answer and yielding increasingly satisfying research results.

The transformation of our “know-how”

Enthusiastic testimonials about generative AI are numerous, but it is useful to counterbalance them with constructive criticism. Articles like this one and that one offer critical perspectives that deserve consideration. Although some of these critiques may seem superficial or biased, they raise questions about the limits and challenges of generative AI.

For example, the author of the first post, cited in the second, mentions that she has never used the generative AI she criticizes. This raises the question of the relevance of critiques based solely on theoretical knowledge, even if practical experience is not the only criterion of validity for a critique: theoretical perspectives can still offer interesting viewpoints on the ethical, social, and economic implications of generative AI.

It is true that finding objective critiques of generative AI, particularly regarding know-how and its role as a “skills multiplier,” can be difficult. I will deliberately not address questions related to the appropriateness of using generative AI or its impact on the world of work, as these subjects touch on sensitive and complex issues concerning the replacement of certain “human” functions by machines, which goes beyond the scope of this note.

To start from a shared understanding, I believe it necessary to establish what I mean by “know-how.” I will use the Larousse definition, which states that it is:

competence acquired through experience in practical problems, in the practice of a profession.

Traditionally, skill acquisition occurs through practice and experimentation. Practice makes perfect, as the saying goes. It is obvious to many, but practicing a given activity allows one to “build competence.” It is also said that “repetition is the mother of learning.” Repetition strengthens neural connections and thus helps memorize information long-term (source).

Generative AI also impacts our know-how because it accelerates the acquisition of certain skills. For example, for those who work with ideas, AI can become a cognitive optimization tool that enables structuring, clarifying, and efficiently transposing a thought into a digital document. One of the challenges of “intellectual expression” lies in transposing an abstract idea into a clear and precise written form. This process, which involves organizing reasoning, eliminating redundancies, and adjusting tone, can be time-consuming and energy-intensive. If writing is thinking, then the time spent on this task is generally beneficial; even so, generative AI can transform our way of working by channeling a flow of ideas and transposing them in an orderly, structured manner.

Also serving as a genuine springboard, AI enables those who are less comfortable with writing to get started. I have in mind cases of people who had difficulty putting pen to paper: an entrepreneur who wanted to communicate about their business through a website but struggled to find the right approach. Another who had to write a “service memo” with no other instructions, or a colleague working alone who uses an AI to challenge themselves on certain questions. In some cases, the effort required to transpose ideas is such that one simply avoids doing it altogether. Generative AI has transformed them to varying degrees.

I am also discovering wonderful experiences and reflections on creativity. Some consider that AI cannot generate original or creative content because:

these devices have the effect of systematically reinforcing averages by eliminating singular, original, and improbable expressions that are not accounted for by statistics (Anne Alombert, section 22).

Others consider themselves to become undeniably more creative thanks to AI, like Sebastien Bailly who acknowledges that:

Not only am I more creative because I can interact with AI as I would with a human working group, during a creative dialogue that can be structured, but also because AI is sufficiently advanced to surprise me by suggesting avenues that had not crossed my mind.

My reading on the subject also led me to software development and the “vibe coding” that Simon Willison has been doing. Generative AI has clearly enabled this developer to create small software tools; it becomes a productivity ally, amplifying his “know-how.”

The transformation of our “ways of thinking”

While I am enthusiastic about the “transformative powers” of AI on our knowledge and our “know-how,” I am significantly more cautious about the transformation of our ways of thinking.

Yann LeCun reminds us that “neural networks” no more imitate the brain than an airplane wing reproduces that of a bird.

It is therefore pointless to see in generative AIs any attempt to reproduce human thought. Generative AIs are powerful tools capable of processing and generating information efficiently, but they do not (truly) possess the consciousness, intuition, or capacity for judgment that characterize human thought. It therefore seems futile to me to try to place an LLM and the human brain on equal footing from a functional standpoint.

Regarding results, however, we can venture to compare the product of “human” reflection with that produced by a machine. I embarked on this reflection and it led me to identify risks and dangers, because by using an LLM, we delegate all or part of our cognitive capacities, and this cannot be done at any price.

To use Anne Alombert’s terms:

Instead of forming and expressing our own thoughts, based on our experiences, memories, expectations, and singular desires, we risk relying on algorithmic automatons and ceasing to exercise these faculties — that is, unlearning how to write, express, and think.

And this is indeed already a first pitfall. By delegating a series of “low-value-added” tasks to the machine, one might consider that this delegation would cause no harm because it would be, by essence, beneficial — saving us time to devote to nobler tasks. Writing a reply to an email, for example, should not be “that serious” or harmful to our cognitive faculties. On an occasional basis, the contribution of AI to these tasks should indeed not harm our “ways of thinking.”

In the long term, however, we must remain vigilant so as not to fall into cognitive lethargy and settle for a response regurgitated by the machine. Someone on LinkedIn wrote:

Let’s be clear: this fascination with AI-generated content mainly reveals our collective acceptance of mediocrity.

And this is where the problem lies. We live in a world of immediacy. Everything must always go faster. Our messaging applications show when a sent message has been read. We end up being offended or worried that the recipient has seen/read our message without having (yet) responded. Delivery services promise you, almost by default, next-day delivery if you order before 10 PM.

These examples among many others reveal our conditioning to speed. Things must happen fast, without really knowing why. Everything must go fast, and so it is quite logical that the use of generative AI, which allows producing “apparently relevant” content in seconds, fascinates us. The success of generative AI may also be due to this race for efficiency, but this speed of execution entails sometimes negative or even dangerous consequences.

The choices made by “AI designers” have an impact on generated responses. The notion of bias in AI is omnipresent, and recommendations are made invisible. On this subject, I invite you to watch this video that powerfully illustrates biases in AI. Very often, the user of an AI does not know how and on what parameters the LLM was configured. The example of DeepSeek, the Chinese LLM released in January 2025, is a fine illustration.

Nevertheless, and taking the opposite stance, once these bias elements are integrated and understood, AI enables generating new proposals, and if you ask it the right question, AI can broaden horizons. It is therefore essential to use the machine for what it is and what it does. We must understand its limits and pitfalls. Stay in control, maintain mastery of the subject, and above all continue to exercise critical thinking. It is therefore imperative to learn how to use these new tools, or risk losing control of them.

But that is not all. While I generally hold a favorable position toward technologies, I find myself becoming annoyed by the excessive and unapologetic use of generative AI. For some, industrialized or standardized products are sufficient. This is the case in the consumer industry, but for “ways of thinking,” this satisfaction has its limits.

So let us avoid algorithmic thinking!

In my view, it is detrimental to use generative AI to write reflective texts: essays, commentaries, or any written content aimed at conveying a reflection on an idea through a form of reasoning. The reason is that the output of generative AI is not tested by reflection.

In her introduction to La Boétie’s “Discourse on Voluntary Servitude,” Simone Goyard-Fabre describes the questions surrounding the true author of the Discourse, which seemed to some far too elaborate for a young person of 16 or 18 (La Boétie’s estimated age when he supposedly wrote it). On page 54, she offers a phrase that is remarkably relevant in the age of generative AI:

Let the initial idea of a thought in ferment be corrected, polished and repolished by a more mature person, and it can only gain in value: the initial idea, while retaining its native force, acquires, through adjustment, more depth — it becomes more incisive, and if it sounds better, it strikes harder too.

This is, in my view, the strength of the individual who can, through their work and reflection, bring more depth to their idea while allowing it to retain “its native force.”

By using generative AI, we are in reality using a statistical tool that will mimic reality and reproduce the content that is statistically most common. Using generative artificial intelligence to create a reflection seems contrary to the very principle of reflection. Moreover, it regularly leads to an average result, in the statistical sense of the term, due to the way the technology works. It is the “average” content that prevails, thereby preventing the fresh thought or deep reflection needed to achieve an interesting result.

Moreover, subject to a few workarounds, artificial intelligence has frozen knowledge, in the sense that the training data barely evolves once the tool is released to the public. Since this base no longer evolves, the content extracted from an AI inherits the rigidity of the data used for its training. You cannot fundamentally improve your text with artificial intelligence, since the knowledge embedded in the model is frozen. This is the complete opposite of the person who reworks, polishes and repolishes their reflection, nourishing it with their experience, discoveries, and learning.

It is also impossible, or only marginally and superficially possible, to polish an AI-written text. That sense of craftsmanship is largely absent from the output of artificial intelligence, unless you ask it to proceed by mimicry and copy the style of a particular author.

But is all this new? Haven’t we already succumbed to this fatality for millennia? I will once again draw on the words of Anne Alombert, who writes:

In the Phaedrus, Socrates relates the Egyptian myth of the invention of writing, which he describes as a pharmakon, a Greek word meaning both the remedy and the poison: first presented as a remedy for memory, because it allows the storage and preservation of accumulated knowledge by surpassing the failings of living memories, writing also constitutes a poison for thought, because by fixing knowledge and sparing individuals from having to remember it, it prevents their memory from exercising itself and knowledge from evolving. In short, writing, which seemed innocent and beneficial, also has cause for concern.

Will generative AI make us unlearn how to think, or is it a pharmakon of modern times?

One thing is certain, in my view: a person who gives up exercising their creative “muscle” will very quickly be stripped of their competence, like a pianist who never sits down at their piano (source). Let us continue to exercise our creative muscles and to think, with and without the machine, to maintain and enrich our ways of thinking.

Conclusion

While it is legitimate to question the cognitive cost of these technical prostheses that externalize some of our competences, we should not condemn the use of these AIs without nuance. As I wrote here, we must refrain from attributing virtues to these AIs and instead use them for what they are. Unfortunately, we still read discourse hostile to these technologies for reasons that are sometimes obscure or excessively subjective.

Should we not simply consider AI as:

a silent ally that helps make the right choices. A tool that does not replace experience, but allows saving time, testing approaches, refining one’s instinct? (source)

This is how I approach these tools while keeping in mind the problems they can cause when used without awareness.

Sources