Grammatize and observe: new therapeutic practices

Posted on Feb 24, 2026

In 1956, in the conclusion of Volume 1 of The Obsolescence of Man, Günther Anders formulated a call that still resonates today:

Unless men begin, like conscientious objectors, to publicly commit, under oath and in full awareness of the possible danger, to never yielding to pressure — whether physical or merely the pressure exerted by public opinion — and to never collaborating in any enterprise that, however indirectly, could have any connection with the production, testing, and use of the bomb; to never speaking of the bomb except as a curse; to trying to convince those who have resigned themselves to it and merely shrug their shoulders; to publicly distancing themselves from those who defend the bomb.

Unless this first step is taken, and other men follow those who took it, and others still. Until those who refuse to take the oath thereby designate themselves as traitors to humanity’s struggle to continue existing.

Anders was writing about the atomic bomb. His concern was the annihilation of humanity. Today, this passage remains relevant, although the nature of the danger has evolved. Others would say that the scale of the danger has also changed. That comparison, however, seems beside the point to me: it does not seem reasonable or relevant to compare the atomic bomb to anything else.

I will draw here only on the method proposed by Anders, because the essence of his argument is transposable to our situation: our collective inability to perceive what technology does to us. Anders called this blindness the “Promethean gap”: the inadequacy between what we are technically capable of producing and what we are capable of imagining as the consequences of that production.

Thus, Anders mentions that the “pressure exerted by public opinion” would prevent us from resisting, from objecting. I believe this public pressure is still very much present today. It makes collaboration with the machine not only acceptable but desirable, and it is now omnipresent in the professional world. Generative artificial intelligence (hereafter GAI) is imposing itself as an unavoidable step. Intellectual professions are urged to adopt these tools or risk being considered archaic or outdated. A European regulation even imposes AI literacy on providers and deployers of AI systems. This quasi-imperative is fueled by the dominant discourse advocating the augmentation of the human by the machine: GAI would enhance our capabilities, free us from repetitive tasks, make us more efficient. This narrative masks a far deeper process, a progressive dispossession of our operative capacity that I attempted to explore in an earlier note, to which I refer the reader. I will simply summarize it by stating that in many uses, GAI no longer merely supports reflection: it tends to take charge of reasoning itself, relegating the user to a validation role. The user is no longer the one who thinks and produces; they become the one who triggers a process and verifies its result.

This reconfiguration of roles is a contemporary form of what Bernard Stiegler called proletarianization. The user’s knowledge is not suppressed but externalized into a technical system that executes it in their place, and often differently. The locus of reflection migrates from the mind to the device, and as a result, control over the process escapes the user through algorithmic opacity.

On the imperceptive: a quasi-imperative

I use the term “quasi-imperative” because the dynamics of GAI adoption in the professional world deserve closer examination. This “quasi-imperative” could be better described using a neologism: an imperceptive, a contraction of imperative + imperceptible. The imperceptive is an imperative that one does not perceive as such.

It is necessary to name this phenomenon, because the dominant discourse constructs an environment in which not using GAI becomes a signal of incompetence or archaism. Training programs multiply; injunctions toward “digital transformation” saturate the informational and professional space. All these signals create a pressure that, without explicitly forbidding refusal, makes it socially costly. The professional who does not adopt the tool is perceived as resisting progress; the one who adopts it displays lucidity, adaptability, modernity, innovation.

In this configuration, the user considers that they are exercising a rational choice: gaining productivity and staying competitive. They feel they are being innovative and modernizing their practice. The decision to adopt GAI appears to them as an act of professional “sovereignty” (to reuse this fashionable and somewhat overused term — see on this topic my note on digital autonomy). The approach seems fully voluntary. Formally, it is, but it is no longer so substantively. The environment in which this choice is made is itself constructed to make it inevitable. The alternatives — continuing without the tool, taking time to evaluate its effects, refusing cognitive delegation — are (too) often devalued.

These “other” practices are not forbidden; they have become “unthinkable.” The imperceptive is all the more effective because it never presents itself as an obligation. It takes the form of opportunity, of common sense, of self-evidence.

The user then progressively delegates their cognitive operations to the machine, not through direct constraint, but because the entire ambient discourse — client expectations, promises of productivity, competitive pressure — invites them to do so. They consent to this delegation and perceive it as an exercise of lucidity and freedom. They deploy an active approach to entrust to the machine what they used to do themselves.

But all of this is merely an illusion of freedom. The user becomes the author of their own dispossession. The illusion lies in this paradox: a choice that is no longer one, exercised in an environment that ultimately determines its outcome, by a subject who believes they are acting freely. The formal freedom to adopt or refuse the tool persists, but the conditions in which it is exercised reduce it to its simplest expression.

It is this mechanism that justifies the term “imperceptive.” The “social pressure” evoked by Anders is exerted not on the decision but on the framework in which the decision is made. This pressure has managed to modify the framework to orient the choice without ever having to constrain it.

Two new therapeutic practices

As Anders proposes, though with less radicalism, I believe that GAI must be profoundly, sincerely, and truly thought through. It is a complex exercise, and not necessarily a passion for everyone: one must find pleasure in ceaselessly questioning this thing that fascinates as much as it alarms. Despite all these questions, I remain convinced of one thing: GAI, like all technology, is a pharmakon, at once remedy and poison.

The ambivalence inherent in every technology places us permanently in this tension between the “good” and the “bad.” I have already argued here that it is not useful to reject technology wholesale because we would then miss out on interesting opportunities or advantages. I had previously sketched out a few practices that can be qualified as therapeutic. This therapeutic approach is necessary in the face of the pharmakon, and it is a pharmacology that we must ceaselessly put into practice. In addition to the four practices previously sketched, I would like to add two new ones.

Grammatize

First, grammatization as resistance to this dispossession.

According to Ars Industrialis,

grammatization — an expression that extends and diverts a concept from Sylvain Auroux — designates the transformation of a temporal continuum into a discrete spatial form: it is a process of description, formalization, and discretization of human behaviors (calculations, languages, and gestures) that enables their reproducibility.

We often hear that the human must remain in the loop (the famous “human in the loop”). To me, this formula always rings somewhat hollow, because the criteria of this (here again, quasi-)imperative to act are never defined. I observe not only that this “human verification” is rarely defined in a clear or precise manner, but above all that it is carried out a posteriori.

Here, I therefore advocate a temporal inversion of control. Although one must still review afterwards, grammatization must be carried out beforehand. Explicitly formalizing one’s operational methods before entrusting execution to the machine allows imposing one’s “cognitive” grammar on the technical device, instead of passively submitting to the grammar the machine imposes. This awareness becomes a genuine form of resistance to dispossession. It allows us to “better” maintain control.

Concretely, this means defining one’s own analysis and execution criteria, and structuring one’s processes and logic by establishing a line of conduct; we must avoid relying entirely on the machine. It means producing instructions that frame the production of texts: documents that do not directly contain usable information but rather the criteria for generating it in specific cases. Successfully moving from the particular case to the class of cases, then to the structure for handling that class, is an additional cognitive abstraction that requires deep mastery of one’s own profession. Grammatizing one’s practice thus means retaining the power to select criteria and operational methods: consciously deciding what one delegates and what one keeps.
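Purely as an illustration of this temporal inversion (the post prescribes no tooling; the class, field names, and criteria below are all hypothetical), grammatizing one’s criteria could mean writing them down as explicit, checkable rules before any delegation, then judging the machine’s output against them:

```python
from dataclasses import dataclass

@dataclass
class Grammar:
    """A hypothetical 'cognitive grammar': criteria fixed by the user
    *before* execution is entrusted to the machine."""
    required_sections: list
    forbidden_phrases: list
    max_words: int = 500

    def check(self, text: str) -> list:
        """Return the list of the user's criteria that a generated text violates."""
        violations = []
        lowered = text.lower()
        for section in self.required_sections:
            if section.lower() not in lowered:
                violations.append(f"missing section: {section}")
        for phrase in self.forbidden_phrases:
            if phrase.lower() in lowered:
                violations.append(f"forbidden phrase: {phrase}")
        if len(text.split()) > self.max_words:
            violations.append(f"exceeds {self.max_words} words")
        return violations

# The grammar is written first, by the user; the machine's output is then
# judged against the user's criteria rather than the machine's defaults.
grammar = Grammar(required_sections=["Facts", "Analysis"],
                  forbidden_phrases=["it is important to note"],
                  max_words=300)
draft = "Facts: ... Analysis: ... It is important to note that ..."
print(grammar.check(draft))  # → ['forbidden phrase: it is important to note']
```

What matters in this sketch is not the code but its order of operations: the criteria exist, in the user’s own terms, before the machine produces anything, so verification applies a grammar the user chose rather than one the device imposes.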

Observe

Another therapeutic practice is that of observing what we do and what we no longer do. This observation allows us to draw the contours of what we consider to be the (intellectual) work that we will not delegate.

This observation can be “negative”: noting which professional “gestures” have disappeared since regular or daily use of AI. Which skills are being lost? This is a difficult task: it demands vigilance in the face of our own dispossession and, I acknowledge, a significant questioning of our usage that is not always easy in the context of professional practice.

Conversely, the observation can be “positive”: which moments of resistance persist, where one instinctively refuses to delegate to the machine? What tasks does the user keep for themselves? What do we continue, tirelessly and despite the presence of the machine, to execute ourselves?

This observation allows tracing a boundary between what can be delegated and what cannot, or should not, be. This boundary echoes what Ethan Mollick called the “jagged frontier.” It is by nature invisible and concerns the capabilities of GAI: powerful on some tasks, deficient on others. The frontier is irregular and subjective; nobody knows exactly in advance where it should be drawn. It is through practice and experience that the user manages to position, almost intuitively, the demarcation between what the machine can do and what must remain with the user.

What this gesture of “auto-ethnography” proposes is learning to read this frontier in one’s own practice. The moments where one refuses to delegate trace the zones where one senses, sometimes intuitively, that AI is beyond its capabilities. The moments where one delegates without checking signal, conversely, zones of drowsiness. Observing these zones is, in a way, tracing one’s own jagged frontier. Documenting these situations, even retrospectively, means maintaining contact with a process that habit tends to naturalize (naturalize in the sense that it is no longer perceived by the user). But it is also, and perhaps above all, accepting that this frontier shifts: what was beyond AI’s capabilities yesterday may be within them tomorrow, as the technology evolves.
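Purely as an illustration (the post prescribes no format, and the function names, columns, and categories below are all hypothetical), such an auto-ethnography could be as modest as a delegation journal: one line per task recording whether it was kept, delegated and reviewed, or delegated without verification, reread periodically to see where one’s own frontier lies and how it shifts:

```python
import csv
from collections import Counter
from datetime import date

# Hypothetical journal: one CSV row per task, noting whether it was kept,
# delegated and reviewed, or delegated without verification ("drowsiness").
def log(task: str, decision: str, reason: str, path: str) -> None:
    """Append one observation to the delegation journal."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), task, decision, reason])

def frontier(path: str) -> Counter:
    """Tally decisions; 'delegated-unchecked' rows mark zones of drowsiness,
    'kept' rows trace the user's side of the jagged frontier."""
    with open(path, newline="") as f:
        return Counter(row[2] for row in csv.reader(f))
```

The tool is deliberately trivial; the therapeutic work is in the rereading, where the tally of “kept” versus “delegated-unchecked” entries makes one’s own jagged frontier, and its drift over time, visible.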