The Proletarianization of Law in the Age of AI

Posted on Jan 3, 2026

The Symptom of a Transformation

In December 2025, the French National Bar Council (Conseil National des Barreaux, CNB) removed the word “intellectual” from its definition of legal consultation. This administrative gesture inadvertently reveals the crisis of legal expertise in the age of generative AI.

From a report by the CNB’s commission on the practice of law:

The CNB General Assembly of December 12, 2025 approved the removal of the term “intellectual” from its proposed definition of legal consultation initially adopted in 2011, favoring an approach centered on the purpose of the service, to guarantee effective protection of the public and the scope of law in the face of the rapid rise of generative artificial intelligence tools.


This news may seem trivial or marginal, but the reflections of certain commentators around it deserve to be shared and decoded. Indeed, the amendment voted by the CNB concerns the definition of legal consultation, which in France is a lawyer’s monopoly (unlike in Belgium).

Previously, the definition read as follows:

Legal consultation consists of an intellectual service aimed, in response to a question posed, at providing an opinion or advice based on the application of a rule of law with a view, in particular, to a possible decision.

According to the CNB, with the development of generative artificial intelligence (AI), the focus has shifted to the term “intellectual.” The General Assembly voted in favor of removing the term “intellectual,” resulting in the following new definition:

Legal consultation consists of a personalized service aimed, in response to a question posed, at providing an opinion or advice based on the application of a rule of law with a view, in particular, to a possible decision.

From this perspective, an economic reading would lead us to observe that the “scope” of the lawyer’s monopoly is expanding. This would then be a “protectionist” vote by the CNB to bolster the lawyer’s monopoly against the relentless assaults of legal tech companies.

The Proletarianization Thesis

But this view may be somewhat narrow. For some, the modification of the definition and thus the scope of the monopoly would be a form of proletarianization of knowledge.

According to Bernard Stiegler’s interpretation of this concept:

Proletarianization is, broadly speaking, what consists of depriving a subject (producer, consumer, designer) of their knowledge (know-how, life skills, conceptual and theoretical knowledge). (ars industrialis)


Proletarianization is therefore a loss of knowledge, not a loss of income. It is knowledge being formalized by, and absorbed into, the machine that now implements it.

A proletarianized lawyer would no longer play an active role in the consultation. They would merely verify what the machine has produced, checking the accuracy of cited sources, for example.

This semantic shift (from intellectual to personalized) could be a fatal blow to lawyers. In other words, what becomes of legal consultation when it ceases to be conceived as an intellectual activity?

If we return to Stiegler’s definition, the knowledge would then reside in the machine. The legal consultation would be drafted by generative AI and personalized by the lawyer. The lawyer becomes the one who translates the client’s situation into a language the machine can process, then validates (or invalidates) what it produces. The core of the activity is no longer the development of legal reasoning but its verification.

The Weakening of the Lawyer’s Critical Function

For some, this distortion would not be without consequences.

This transformation could weaken the critical and political function of lawyers by reducing their ability to influence debates, that is, democratic processes.

This aspect is too complex to address fully here, but it seems necessary to discuss it by making two points.

First, the generalization of consequences seems excessive to me. Although the role of the lawyer is paramount within a democratic society, as the CJEU reiterated in September 2024, not all lawyers are prepared to take on a mission of safeguarding democracy. Many practice their profession with integrity and dignity but without political motives. This is obviously not a value judgment, but one should not attribute political intentions to every colleague. Moreover, if we ventured into this subject, we could legitimately question those intentions: some lean more to the left, others more to the right, and depending on this positioning, the consequences of those intentions would sometimes be diametrically opposed.

Second, considering that a “transformation could weaken the critical and political function of lawyers” implies a form of technological myopia. Indeed, we should not forget the growing awareness of the impact of technology on our knowledge (doing, thinking, and living). I see increasing attention being paid to technology-related issues within various circles. The very fact of modifying the definition of “legal consultation” has fueled discussions that had perhaps never been held before.

There is a growing awareness of the dangers these technologies represent, and I am inclined to believe that this awareness continues to increase in the face of the dramatic arrival of generative AI in our daily lives.

As I wrote, myopia refers to the situation where we project a technology’s current flaws into the future without taking corrective factors into account, whether technical or political. Myopia can be optimistic or pessimistic. Optimistic myopia assumes that all flaws will be corrected in the future, while pessimistic myopia assumes that current flaws never will be.

This “weakening of the lawyer’s critical function” seems to me to be a case of pessimistic myopia that fails to incorporate the positive developments currently underway.

Practical Nuances

This political and democratic view of the lawyer’s role is certainly relevant and should not be dismissed on the pretext that it is resistant, technophobic, or running counter to a certain hype. Beyond what I have discussed above, there are additional nuances worth adding that will help us avoid accepting as inevitable what is merely presented as such.

First, I think it is necessary to push theoretical reflection toward practice.

If we stay with legal consultation, the proletarianization brought about by integrating AI systems does not cover everything and will not exempt the lawyer from certain preliminary reflections. If we put ourselves in the situation: before using any particular tool, the lawyer will need to collect information about the case to be handled. Let us look at this aspect in more detail.

The Questions to Ask (Oneself)

First, the ability to capture and collect information is generally linked to the ability to ask (oneself) the (right) questions. Once again, this ability is not “delegable” as such to the machine (unless a fully automated consultation system is provided, but we are not in that scenario). Here again, the lawyer’s experience will play a decisive role.

So there is already a first element of competence that remains in the hands (but above all in the head) of the lawyer. It is their ability to characterize the situation presented to them and to seek the most relevant information to exercise their judgment.

In this context, the choice of collected information that will enable generative AI to produce a consultation is already a nuance to proletarianization, since the lawyer’s selection of questions and information implies, in principle, a degree of reflection and understanding of the situation they are facing.

Information Gathering

Furthermore, beyond the questions asked, one can also observe the great diversity of practices. An in-person meeting, a video conference, or a phone call does not always provide the same quality of information to the lawyer. For some cases, information exchange may even take place through email correspondence.

These interactions between the lawyer and their client generally do not lead to the same perception and capture of information, nor to the same transmission. Oral exchange, in person and face to face, sometimes allows for capturing other elements that the client had not thought to disclose, or identifying other information that could have an impact or influence on how the case is approached.

The lawyer’s capacity here is not reduced or limited by the machine. They must still ensure they identify the issue and gather the information they deem relevant to handling the case. This information gathering allows the lawyer to reveal all their experience and critical thinking.

This analytical capacity during the exploratory phase should not be underestimated, as it captures the “raw material” of the legal consultation. A lawyer who has gathered only 60% of the information will presumably formulate a less relevant opinion than one who has 85%. The ideal, obviously, would be to capture 100% of the information. In some cases, the lawyer thinks they have achieved this but is actually relying on the client’s perception and interpretation of which elements were relevant to transmit.

The Use of Generative Artificial Intelligence

Finally, after gathering information by asking the (right) questions, the lawyer must use this “raw material” to draft their consultation. They could, as some believe, pour all the received elements into an AI system. This view seems stereotypical to me and omits the interaction the lawyer would have with the machine.

I have met hundreds of colleagues in training sessions, and the exchanges we have about how to use AI lead me to consider that this view probably represents a minority of use cases. Those who use these tools have technical knowledge and are aware of the risks and dangers of unconscious use of these AI systems.

In this regard, published cases of hallucinations are not legion. Damien Charlotin has identified 709 cases worldwide between April 2023 and December 2025, roughly 22 cases per month. For reference, business courts in Belgium alone registered 44,543 new cases in 2021. Compared to the volume of cases handled by courts worldwide, observed hallucination cases are a drop in the ocean, and even then, I am being generous about the size of the drop.
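A quick back-of-the-envelope check of these figures (the 32-month window from April 2023 through December 2025 is my own count, and the comparison with the Belgian docket is purely illustrative):

```python
# Back-of-the-envelope check of the hallucination statistics cited above.
documented_cases = 709            # Charlotin's count, Apr 2023 - Dec 2025
months = 32                       # Apr 2023 through Dec 2025, my own count

per_month = documented_cases / months
print(f"~{per_month:.1f} documented cases per month worldwide")

# For scale: Belgian business courts alone registered 44,543 new cases in 2021.
belgian_business_cases_2021 = 44_543
share = documented_cases / belgian_business_cases_2021
print(f"{share:.1%} of a single year's intake in one Belgian jurisdiction")
```

Even against one year of one country's business-court intake, the entire worldwide tally of documented hallucination cases amounts to well under 2%.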

I obviously do not intend to minimize things but simply to objectify certain observations.

Some might consider that these figures are relatively low due to the limited use of these technical devices. But again, this intuition does not seem tenable given the surveys and polls conducted on the use of AI systems.

The study by the European Legal Technology Association (ELTA) published in 2024 shows significant use of generative AI tools within law firms:

Law firms demonstrate strong engagement with generative AI, with 77% having used it.

A CNB survey from spring 2025 indicates that:

6 out of 10 lawyers declare themselves users of generative AI in their professional practice, of whom 3 out of 10 are advanced users. Approximately 1 out of 10 lawyers is resistant to the use of generative AI in professional practice.

If we cross-reference the number of potential users with the number of hallucinations, we cannot conclude that legal professionals make unconscious use of AI.

There are certainly errors and “slip-ups,” but they clearly constitute a small minority of cases. I therefore remain quite optimistic about lawyers’ ability to use AI in a reasoned manner based on these results. I am more skeptical about compliance with certain ethical rules, however.

A Performative Change?

In fact, this semantic change can be frightening because when the CNB says that consultation is an “intellectual service,” it affirms that what matters is the capacity to think. The monopoly protected and valued a form of intelligence.

When the CNB says “personalized service,” it shifts the criterion: what matters is adaptation to the client, personalization. The monopoly then protects a service relationship.

The fear is that the definition may not merely be descriptive but could become performative.

In short, that this definition dictates to professionals what is expected of them. Can we go that far? Perhaps not, but when lawyers are told that their central act is an “intellectual service,” it encourages a form of investment in continuing education and the development of critical judgment.

When they are told it is a “personalized service,” it encourages optimization of the client relationship, service efficiency, and ultimately a consumerist and commercial logic of legal services that is sometimes frowned upon.

This should not become a self-fulfilling prophecy: by redefining the act, the profession is progressively redirected toward other values and other criteria of excellence that do not seem to be to everyone’s taste.

The legal profession remains steeped in a conservatism that in some respects remains quite necessary but is sometimes diametrically opposed to market needs. And that is why the CNB’s gesture, apparently technical and defensive, could in reality have far-reaching consequences. In trying to protect the lawyer’s monopoly against AI, it may be ratifying the very transformation it claims to be fighting.