Knowing Without Doing. Doing Without Knowing.
We read it constantly: generative artificial intelligence (hereinafter GAI) is presented as an extension of the individual that augments them in all their cognitive facets. GAI synthesizes, reformulates, explains, produces. Nevertheless, this narrative masks the more insidious and even pernicious consequences of this unrestrained use: we are gradually losing our operational capacity.
In many uses, and particularly in the legal field, GAI no longer merely supports reflection. It tends to take charge of reasoning itself, relegating the user to a validation role.
It is in this schema that a contemporary form of proletarianization plays out. The use of LLMs engenders a dispossession of the concrete faculty of transforming knowledge into action. This dispossession is all the more difficult to perceive as it is accompanied by an apparent gain in efficiency, speed, and volume.
In early January 2026, I published a note on the proletarianization of law in the AI era. It led to some very interesting exchanges with other lawyers, which nourished my reflections on the subject.
During these discussions, a form of paradox relating to the proletarianization of knowledge appeared. Marx spoke of proletarianization in terms of relations of production. In this framework, the artisan is proletarianized because their relationship to the means of production changes. Stiegler extended this notion of proletarianization to knowledge and expertise. In my previous note, I had focused on transposing this proletarianization to the lawyer’s knowledge, which was being lost to the machine. Pushing toward an “excessive” view, legal reasoning would then be carried out by the machine and the lawyer would be relegated to the role of a mere verifier.
By mobilizing, among other things, the concepts of proletarianization and retention developed by Bernard Stiegler, this note proposes to examine how LLMs operate a transformation of our relationship to epiphylogenesis.
Although Stiegler defines the concept of “proletarianization” by referring to the notion of deprivation of knowledge (know-how, thinking, or living), this does not mean that the individual loses their knowledge.
We must consider that proletarianization affects the individual not because they lose their knowledge but because they are dispossessed of it due to:
- The place where legal reasoning is carried out (in the system);
- Their position in the value chain (from actor to spectator-verifier);
- The loss of control over the process (algorithmic opacity).
Cognitive “proletarianization” is not merely a dispossession of knowledge and competence. It engenders a reconfiguration of the individual’s role: from producer of reasoning to verifier. It is because of the magnitude of this reconfiguration that I will examine more deeply this concept of “dispossession.”
A Technical Contradiction?
Before addressing the core of this note, I must raise a contradiction that might be obvious to technicians. It is constantly said that an LLM has no knowledge; it executes operations. LLMs are merely machines that predict probable sequences of signs ([A. Alombert, Panser la bêtise artificielle](http://journals.openedition.org/appareil/6979), DOI: https://doi.org/10.4000/appareil.6979), or machines that modulate, displace, hierarchize, and order the position of a-signifying signs (M. Corteel, Ni Dieu, Ni IA. Une philosophie sceptique de l’intelligence artificielle, La Découverte, 2025, p. 59).
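To make the "prediction of probable sequences of signs" tangible, here is a deliberately toy sketch: a bigram model that counts which word follows which in a tiny corpus and returns the most frequent successor. Real LLMs use neural networks trained on billions of tokens, but the underlying gesture, choosing the statistically most probable next sign, is the same. The corpus and the function name are, of course, invented for the illustration.

```python
# Toy illustration of "predicting probable sequences of signs":
# a bigram model over a tiny invented corpus. This is not how a
# transformer works internally, only the statistical principle.
from collections import Counter, defaultdict

corpus = "the law is the art of the good and the equitable in the law".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_probable_next(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # prints "law" ("law" follows "the" twice)
```

The machine "knows" nothing of law or art here; it only encodes frequencies, which is exactly the "form of knowledge" discussed below.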
Therefore, how can there be extraction or dispossession of knowledge toward something (an LLM here) that, by definition, cannot “know”?
Beyond this contradiction, we must also consider that proletarianization can tend toward a displacement of the social recognition of knowledge, regardless of the ontological question of whether the machine truly “knows.”
Today, even if technically the LLM does not “know,” socially and economically, it is increasingly treated as if it did. Moreover, the image of the stochastic parrot is no longer immediately invoked when discussing LLMs (see for example here and there).
There is therefore, at the very least, a statistical knowledge encoded in the machine. It is not able to recite, verbatim, the content on which it was trained, but it could reproduce the first words of Les Misérables by Victor Hugo, for example. There is therefore indeed “a form” of knowledge integrated into LLMs, but comparing it with human knowledge would be quite clumsy.
On the Accessibility of Memory Supports
Some consider that thanks to GAI, knowledge and expertise have never been so accessible and that this accessibility of knowledge would therefore be incompatible with this dispossession.
In a certain way, this observation is relevant. Accessibility is increased, and GAI allows accomplishing a series of things of which we were not necessarily capable (see on the subject: AI Is a Transformation).
Nevertheless, these operational capacities are not appropriated in the same way as our knowledge (know-how, thinking, or living). GAI gives us the possibility of exercising new competencies through its use, but this usage does not amount to appropriation as such. Multiplying 169,284 by 15,844 in one’s head is not within everyone’s reach. Doing the calculation on a calculator does not give us the competence to perform this kind of mental multiplication afterward. Likewise with GAI: asking it to write in the style of Baudelaire or Rimbaud will not give you the competence to do so without the machine. Appropriation is therefore quite relative.
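The calculator example can be made concrete with a small sketch (purely illustrative): the machine delivers the product instantly, while the schoolbook decomposition into partial products, the operational schema a human would have to internalize, remains entirely on our side of the delegation.

```python
# The machine's answer: instant, correct, and opaque to the user.
a, b = 169_284, 15_844
product = a * b
print(product)  # 2682135696

# The human method: schoolbook decomposition into partial products,
# one per digit of b. This is the operational schema that delegating
# the calculation never transmits to us.
partials = [a * d * 10**i
            for i, d in enumerate(map(int, reversed(str(b))))
            if d != 0]
assert sum(partials) == product  # the internalized method reaches the same result
```

Using the first line makes us faster; only working through the second makes us more capable.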
Accessibility to knowledge means that we have a technical device allowing us to access more content than ever before. In fact, it is clear that the content used for training LLMs is unimaginable for the human mind (we are talking about more than 2,600 years of uninterrupted reading for the data used to train ChatGPT…3).
This knowledge was already more or less accessible online before GAI. The novelty with LLMs is that the result of a search on this data yields a text rather than a list of results that the user must explore. It is therefore a transformation of accessibility to memory supports rather than to knowledge as such. The data is now extracted from memory supports to be transformed into new information. In my field of activity, GAI does not increase autonomy through the accessibility of knowledge; it increases the speed of access to memory supports. But is there anything else?
The LLM: Beyond Tertiary Retention?
I borrow the notion of “memory support” from Bernard Stiegler, who uses it to describe the concept of retention, a concept he built by developing notions proposed by Husserl. This clarification is necessary to situate the framework of thought to adopt: phenomenology.
In Stiegler, there is a distinction between:
- Primary retention: immediate memory. It is what allows us to understand the meaning of what we read or listen to, for example.
- Secondary retention: recollection, memory. It is what allows us to remember, with a form of imagination and personalization of the fact or information.
- Tertiary retention: externalized memory: writing, books, databases.
We could therefore situate LLMs as a form of tertiary retention. All this somewhat pedantic vocabulary is important for understanding that:
Tertiary retentions constitute the milieu in which our secondary retentions operate a selection of our primary retentions. (Christian Faure)
If GAI becomes the dominant element of tertiary retention, it acts as instant access to knowledge but also as a filter and an automaton. We no longer consult the machine to learn or produce ourselves; we delegate to the machine the execution of reasoning, decision, and/or production.
In Stiegler’s categorization of memory, phylogenetic memory corresponds to information inscribed in DNA. It is transmitted biologically from generation to generation and determines the basic structures of living beings. It evolves slowly, through mutations and natural selection, and is not modified by individual experience.
Epigenetic memory designates what is transmitted within a lifetime and through learning: behaviors, know-how, habits, language. It is not genetically inscribed but is constituted through experience, education, and environment. It disappears with the individual who holds it.
Epiphylogenetic memory then designates memory externalized in technical objects: tools, writing, works, machines, digital supports (tertiary retention). This memory lives in artifacts. It allows intergenerational transmission independent of individuals and has the capacity to transform the individual through individuation (see Simondon on the subject).
I would be tempted to place the LLM in the category of tertiary retentions because these are inert and passive supports. Tertiary retentions must be mobilized by the individual to exist, and this existence will only be experienced through the way the individual sets it in motion with their own retentions, primary on the one hand (immediate comprehension of the text) and secondary on the other (memorization, integration into their own system of thought). The technical support remains a means of access and a characteristic of tertiary retentions.
With the LLM, however, this activation of the “memory support” will partly be carried out by the individual and a significant part carried out according to the parameters encoded in the machine. It is therefore a hybrid form of tertiary retention: it sets in motion operational schemas extracted from billions of documents that allow obtaining information, but part of this setting in motion is not decided by the individual and their retentions.
We enter a new paradigm in which information is delivered to the user in a directly “consumable” form. There is no longer a barrier to cross in order to access information. With GAI, this epiphylogenetic memory is used blindly: we use a memory support external to our body, yet we have no knowledge of the criteria the machine applies to deliver the information to us in this way. Within this triptych of retentions, the opacity of these criteria makes the place of GAI uncertain.
Chatonsky has situated GAI within a new category of retention: quaternary retention. He describes it as bearing on attentional gestures and habits themselves.
This new category of retention, based on attentional gestures and habits, is interesting but appears erroneous to me. Attentional gestures and habits already abounded when Stiegler forged these concepts, and he did not need a fourth category to classify them. To illustrate my point, I will take the Fosbury Flop, popularized by Dick Fosbury. This back-first clearance technique in the high jump could, under Chatonsky’s definition of the category, be considered a quaternary retention. Yet Stiegler never considered it as such, and I think this is because tertiary retention plays its role well as a memory support externalized from the body: it allows later consultation in order to reproduce this or that movement or gesture, for example.
Chatonsky also proposes the concept of distension to qualify “the statistical processing of retentions” performed by this fourth memory.
The statistical processing evoked by Chatonsky does not seem to me sufficient to justify this new category. This processing is one way of generating a result through the use of a technical device. In that sense, is statistical generation not simply another form of result, alongside content indexing or searching within a database? One could obviously retort that the non-deterministic nature of statistical generation prevents this assimilation. Does that mean statistical generation cannot be a tertiary retention? I have my doubts. At this stage of my reflection, the random character of the generation does not disqualify it as a means of accessing information.
I do not know if the LLM is a new form of retention, but it is assuredly a hybrid retention technique. Not quite tertiary but not necessarily an entirely new retention either.
I must therefore supplement what I indicated above: AI does not increase autonomy through the accessibility of knowledge; it increases the speed of access to tertiary retentions while reducing the necessity of activating one’s own secondary and primary retentions.
The Dispossession of Operational Capacity
This reduction in the activation of one’s own retentions (primary and secondary) leads us to consider the concept of operational capacity from the angle of proletarianization.
Indeed, the proletarianization of knowledge and therefore the deprivation in favor of the machine is perhaps above all a dispossession of the individual’s operational capacity. The machine will take charge of knowledge by replacing the individual’s capacity to act in relation to it.
This operational capacity refers to the concrete ability to execute a task, to make decisions, to apply reasoning, or to transform knowledge into action. If this capacity is transferred to the machine, the individual gradually ceases to be the actor. They become dependent on a system that acts in their place. We therefore do not lose theoretical information but rather practical mastery and autonomy of execution.
It is a form of material and ontological dispossession of which we must be wary.
“Material” because it is no longer the subject but the machine that does.
“Ontological” because the subject is no longer the one who knows how to do but the one who clicks to get it done.
The stake is therefore not the individual’s loss of access to knowledge. It is a dispossession of its exercise. This is no mere detail, because this form of dispossession is insidious: it produces no feeling of immediate loss, quite the contrary. The individual gains in speed, in volume, in apparent capacity. But, as Jacques Ellul wrote, all technical progress comes at a price; here, it will be a reduction in operational capacity. Progressively, for example, the individual loses the competence to write a text because they concentrate on interrogating the machine efficiently (again, always this efficiency!) rather than on the exercise of autonomous writing.
Moreover, because of the link between the three types of retention, access to a tertiary retention conditions the moment when the other retentions are actualized. Yet the machine ultimately imposes its own logic, often opaque, non-configurable, and externalized relative to the individual. This lack of actualization then engenders an atrophy of the cognitive competencies contained primarily within secondary retentions.
This is where the material dimension meets the ontological dimension: when externalization no longer forms, it replaces and becomes a dispossession.
And if you still have any doubt, think of your mobile phone. I grew up without one until I was 12, and back then I knew several phone numbers by heart. Since I have had a mobile phone, I have stopped memorizing new numbers. I still know some of the numbers I had memorized by age 12, but some people confide to me that they have forgotten even those since they started using mobile phones. This is a dispossession of knowledge by tertiary retention: our memory atrophies through lack of use, and we forget. For phone numbers stored in a mobile phone, it is not very serious, you might say, but for an operational capacity used professionally, it can become so.
The Short-Circuit and the Disappearance of Complexity
What I describe as a domination of tertiary retentions, Edgar Morin had diagnosed as a disintegration of knowledge.
In 1984, Edgar Morin stated in his book “Sociologie”:
The predominance of information over knowledge, of knowledge over thought, has disintegrated understanding.
This sentence belongs to the critique of the fragmentation of knowledge in contemporary societies. Morin observes a stacking of layers: raw information, produced massively and circulating rapidly, takes precedence over constructed knowledge. The latter in turn dominates thought, that is, the reflective activity that connects, questions, and hierarchizes. Understanding, which should be a coherent (complex) system integrating these three levels, is then disintegrated: the levels become isolated elements, unconnected by organizing thought, and simply delivered by the machine.
These are two ways of naming the same rupture: when the technical memory supports (tertiary retentions) massively produce information without training in thought, they short-circuit both the formative function that Stiegler attributes to them and the reflective activity that Morin considers necessary for the integration of knowledge and the construction of (complex) thought.
The LLM crystallizes this double rupture: it is both a technical memory that no longer forms (in Stiegler’s conception) and an informational flow that crushes reflection (in Morin’s conception).
Toward a Therapeutic Prescription
Having read the above, the question some might ask is: is it serious, doctor?
The simple (and banal) answer would be to consider that no, provided the user remains conscious and attentive to their use. This is, however, a bit thin.
If we return to the notion of pharmakon (discussed here), technology being both remedy and poison, we must implement therapeutic practices as Stiegler advocated.
I could reformulate his remarks from an intervention transcribed here:
What therapeutic prescriptions for LLMs at the individual level? How can we avoid becoming mere verifiers, with AI as a knowledge dealer?
I do not believe in miracle solutions. As Gaston Bachelard said:
Nothing is simple; there is only the simplified.
I will however propose a few leads (simplified, therefore) for “still thinking when the machine thinks for us.”
A therapeutic practice could begin with a simple gesture: writing alone first. Even a clumsy, incomplete, or disorderly first version; it does not matter. This draft materializes thought: at that moment, it is still the individual who operates. They had to choose an angle, establish distinctions, fix a hierarchy. The machine has not yet come into play, and it is this threshold that must be preserved.
Only then can the LLM intervene. How? Rather as a mirror. We do not ask it to write in our place. We invite it to clarify what we have already attempted to say: reformulate a thesis, make explicit a logical chain, propose a perspective, test internal coherence. In this way, the tool becomes metacognitive: it does not replace thought. It illuminates gray areas, proposes alternative formulations that oblige us to choose. The stake is not to obtain text but to obtain material for reflection. As Morin would say, to “think one’s own thinking.”
But this clarification comes at a price if we treat it as sufficient. The generated synthesis, however convincing, remains a screen: it gives an impression of grasping the content without real contact with the materials. The third prescription is therefore a voluntary reintroduction of effort: returning to primary sources. Reading the decision, the text, the doctrine, the exact passage, even when it seems redundant. This is not merely verification: it is a way of appropriating the information, not to “control the machine” but so that information becomes knowledge, by actualizing one’s retentions.
Finally, we must refuse the most insidious slide: becoming a verifier. This is where routine becomes a dispossession. The professional becomes the one who validates a reasoning already produced, until they become the one who no longer dares deviate from a solution proposed by the machine. We must remain authors. This means something very concrete: maintaining responsibility for the chain of reasoning, choosing the premises, deciding on the framework, and being able, if necessary, to contest the answer rather than simply approving it.
These four gestures outline the same requirement and a beginning of therapeutic practice for this pharmakon.
In summary:
- Always produce a first version without AI, even if imperfect, before any solicitation of an LLM.
- Favor metacognitive uses (clarification, reformulation, putting into perspective) rather than productive ones.
- Reintroduce interpretive effort: consult primary sources after a generated synthesis, even if it seems redundant.
- Refuse the posture of verifier: remain the author of the reasoning, even when the machine proposes a “correct” solution.