Ethics and Compliance. Beyond the Hype.

Posted on Nov 17, 2025

The discourse on responsible AI is becoming increasingly pervasive. The declarations of tech players on the subject, and the desire to regulate the sector so as to frame and demarcate the technology, all proceed from an obsessive will to reach a sufficient level of trust in this “new” technology.

This essay aims to think through this notion of responsible and trustworthy AI and to establish that it rests on an illusion of ethics masking our systemic (and systematic) dependence on technology and our absolute pursuit of efficiency.

I do not hold a technophobic or reluctant view of technology, but I feel it is more than necessary to recalibrate the AI discourse. Between the marketing announcements of tech players and the reports and studies of more or less scientific organizations, we face considerable “noise,” and the frantic production of biased content feeds a hype from which it is becoming difficult to escape. I will therefore try here to set the framework for a discussion we should all be having about the “technological thing,” and more particularly generative AI and its promises.

I will first reflect on the reasons for the emergence of this term by considering this “ethical” discourse as illusory and having become necessary to respond to the structural problems of AI.

I will then address the reason why we collectively accept this illusion without questioning our beliefs, and more particularly those about the neutrality of technology.

Finally, I will propose three markers to navigate this environment without falling into a reluctant or technophobic posture.


I. Trustworthy AI

1) Why a Trustworthy AI?

Before defining what trustworthy AI is, I would like to point out that if we come to ask about the definition of trustworthy AI, and about the reason(s) for the existence of this term, then we acknowledge, perhaps timidly, that there are “problematic” AIs. This is a premise that seems useful to (re)affirm, given how modest and desperately banal the usual criticisms of AI are (hallucinations or biases, for example).

These “problematic” AIs exist due to a multitude of factors (notably described by Kate Crawford in Atlas of AI).

For my part, if we agree to acknowledge the existence of problems, I would like, without going into detail about how LLMs work, to recall two obvious facts that, strangely, often take a back seat as causes of these problems.

a) From Modeling to Naturalization

Statistical representation is always situated. Any algorithm, whatever it may be, re-presents a phenomenon: it presents a fact through numbers and thus, in effect, reproduces a perception. The phenomenon is reproduced from data: one captures weak and strong signals within a determined dataset. This representation-through-reproduction is not necessarily objective or true to “reality,” notably because of the data retained, the signals let through, the criteria privileged, and so on. Subjectivity can therefore be introduced, voluntarily or not, upstream of the reproduction.

Next, the quality of the algorithm depends on how the problem is framed. Before any training, there is modeling: transforming a real situation into “computer code” (to put it simply). This step is not neutral either. It encodes norms and values specific to the designers, defined by their own interpretive frameworks. Content recommendation algorithms make this easy to observe: a choice is made based on a series of criteria that completely escape us.
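The point can be made concrete with a deliberately naive sketch: a toy recommendation scorer in which every retained signal and every weight is a designer's choice. Nothing here is any real platform's algorithm; the feature names and numbers are invented for illustration.

```python
# Toy recommendation scorer: every number below is a design decision,
# not a fact about the world. Changing the weights changes the "reality"
# the algorithm presents to its users.

# Designer-chosen weights: here, engagement is privileged over recency
# and over declared user interest. This choice is invisible to the user.
WEIGHTS = {"engagement": 0.6, "recency": 0.3, "interest_match": 0.1}

def score(item: dict) -> float:
    """Rank an item by a weighted sum of the signals the designers retained."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

items = [
    {"title": "outrage piece",  "engagement": 0.9, "recency": 0.5, "interest_match": 0.2},
    {"title": "useful article", "engagement": 0.3, "recency": 0.6, "interest_match": 0.9},
]

# The "objective" ranking is entirely a product of the chosen weights.
ranked = sorted(items, key=score, reverse=True)
print([i["title"] for i in ranked])  # → ['outrage piece', 'useful article']
```

With these weights, the “outrage piece” outranks the “useful article”; invert the weights on engagement and interest_match and the ranking flips. The subjectivity lives in the modeling, not in the arithmetic.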

Furthermore, the recognized problems of these technologies are also sometimes underestimated due to a phenomenon of “naturalization.” This notion covers the fact of integrating technologies in such a way that they are perceived as “natural.”

In this framework, modeling, which aims to reproduce a representation, conveys a particular vision. Yet, because of naturalization, it is no longer “a” vision of the world but “the” vision of the world. The user/consumer/reader disregards the subjective filter that modeling represents and “naturalizes” it.

We no longer question search engine results: we have internalized that the first search results on Google are relevant. Do we not say, after all, that the best place to hide a body is on the 3rd page of Google results? Where nobody goes, for those who did not know.

Because of modeling, and of our unawareness of it due notably to naturalization, users are subjected to the algorithmic recommendation choices of platforms. These create echo chambers, exposing the user to information and viewpoints that match their beliefs, thereby reinforcing both the beliefs and the naturalization (via confirmation bias). This is, in my view, a significant part of the problem, but not the only one.
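The echo-chamber mechanism can be sketched as a minimal feedback loop. This is an illustrative model, not a description of any real system; the assumption encoded here (that more extreme content on the user's own side is more “engaging”) is mine, stated for the sake of the toy.

```python
# Minimal echo-chamber loop, purely illustrative. Assumption: the
# recommender privileges the most "engaging" (here: most extreme)
# content on the user's own side of an opinion axis.

belief = 0.1                            # user's position on some axis, in [-1, 1]
catalog = [-0.9, -0.4, 0.0, 0.4, 0.9]   # positions of available content

for _ in range(10):
    # Among items on the user's side, recommend the most extreme one.
    same_side = [p for p in catalog if p * belief > 0] or [0.0]
    item = max(same_side, key=abs)
    # Consuming the item pulls the belief toward it (reinforcement).
    belief += 0.5 * (item - belief)

print(round(belief, 3))  # the belief has drifted close to the pole at 0.9
```

Starting from a mild position of 0.1, ten iterations are enough to pull the belief near the extreme of its own side: each recommendation confirms, each confirmation radicalizes.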

This phenomenon of naturalization can remain limited, or even nonexistent, if other tools exist and are used concurrently. If one uses several search engines, cross-references results and sources, one already acknowledges the existence of modeling and effectively fights against naturalization. But unfortunately, this is not the case for everyone, and the famous “critical thinking” should be cultivated to counter these pernicious effects of tech.

b) An Unprecedented Adoption

If naturalization can be mitigated by using multiple tools, the centralized use of a single tool can accelerate the phenomenon.

When ChatGPT broke the record for reaching 1 million users (in five days), and when, in October 2025, TechCrunch reported 800 million weekly users, one can legitimately wonder about the naturalization at work among certain users.

The quasi-monopolistic character of technology players must also be questioned beyond competition law. This technological oligarchy has direct consequences on naturalization and consequently on our society.

As Anne Alombert points out, the pharmaceutical or aeronautical sectors are subject to binding regulations regarding the market launch of new products and/or services that prevent commercialization without a battery of tests being performed.

Yet technology can kill. This is not a risk, a potentiality, but a sad reality. I think we must also consider technology as a potential “danger” for humans. These dangers are certainly less visible than in the pharmaceutical or aeronautical sectors, but we have just acknowledged that trustworthy AI exists because there are also “problematic” AIs. We can broaden the reflection to other technologies integrating AI, such as social networks, to note the presence of dangers, sometimes fatal. I do not want to lapse into outraged technophobia, but we must also not close our eyes to the obvious.

No reflection or impact analysis was carried out before ChatGPT was launched on the market. This absence of studies or analysis is a “specificity” of the technology market. Indeed, for many proposed technological devices (hardware and/or software), no study is required, no prior compliance or authorization is sought. This situation can be nuanced and tempered in light of the latest European regulations on artificial intelligence, but in general, there is no study on the impacts these technologies could have on individuals. They “launch” technological products at the cost of millions (or billions) and see what happens.

c) The European Example

In my view, the European initiative and the AI Regulation aim precisely to make AI systems placed on the market acceptable.

The risk-based classification is an approach that appears to be ethical. One defines what is good and what is bad. For what falls in between, a series of conditions to be respected is listed to make “high-risk” use acceptable. For example, one can use an AI system in the recruitment sector but under certain conditions.

On October 8, 2025, Le Monde headlined:

Companies plunged into the legal fog of recruitment with emotional AIs.

Interviewed by the journalist, the co-founding partner of a firm that supports the adoption of new AI uses stated:

At this stage, the first thing that concerns them most is knowing which AIs are prohibited and the risks they face under the regulations if they use them.

There is therefore no questioning of the appropriateness of using an AI to “measure their enthusiasm or evaluate their level of irritation through the intonation of their voice or their facial expression.”

The focus is limited to verifying whether this use is lawful. Yet is there not already an ethical problem in wanting to evaluate candidates using these technological devices? I will return to this later.

The Regulation also targets “prohibited practices.” This approach also seems, at first glance, ethical. (In this regard, I consider that the targeted practices were already prohibited under previous regulations and that we did not really need an additional text to remind us of this.) But the regulation is more “insidious.” Indeed, although certain practices are considered prohibited, States themselves may, under certain conditions, derogate from these prohibitions. This regulation therefore ultimately legitimizes, under certain conditions, significant technological intrusions into citizens’ lives. And this is not a risk, it is a reality, as the City of Paris had occasion to demonstrate (the Paris 2024 Olympics and their algorithmic video surveillance).

This regulatory framework, although it apparently prohibits a practice, sets a framework for authorizing certain uses. It is therefore once again a way of making an AI use “acceptable.” Moreover, it is quite cynical to define practices that are prohibited for everyone, except for States in certain cases. Do as I say, not as I do, some would write.
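The logic criticized above — classify a use, then attach conditions, rather than question the use itself — can be caricatured in a few lines. The tier assignments and return strings below are illustrative inventions, not a legal mapping of the AI Regulation.

```python
# Caricature of risk-based regulation: every answer is about "under what
# conditions?", never about "why at all?". Tier assignments are examples
# invented for illustration, not legal advice.

TIERS = {
    "social scoring by public authorities": "prohibited",
    "emotion recognition in recruitment":  "prohibited",
    "CV screening and candidate ranking":  "high-risk",
    "spam filtering":                      "minimal-risk",
}

def assess(use_case: str) -> str:
    tier = TIERS.get(use_case, "unclassified")
    if tier == "prohibited":
        # Even here, the essay notes, States may derogate in certain cases.
        return "forbidden (with possible state derogations)"
    if tier == "high-risk":
        return "allowed, subject to conformity assessment and human oversight"
    if tier == "minimal-risk":
        return "allowed"
    return "unclassified: awaiting legal qualification"

print(assess("CV screening and candidate ranking"))
```

Note what the function cannot express: there is no branch for “is this use appropriate at all?”. That question sits outside the classification, which is precisely the essay's point.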

2) What Is Trustworthy AI?

In April 2019, the European Commission’s High-Level Expert Group on AI published its “Ethics Guidelines for Trustworthy AI.”

According to the guidelines, trustworthy AI should be:

(1) lawful: it complies with all applicable laws and regulations;
(2) ethical: it respects ethical values and principles;
(3) robust: from a technical standpoint, while taking into account its social environment.

Trustworthy AI is defined based on “objective” criteria (lawful and robust according to standards) but also “subjective” criteria (ethical). This subjective notion is examined with reference to criteria of (i) respect for human autonomy, (ii) prevention of harm, (iii) fairness, and (iv) explicability.

I believe that some of these ethical criteria are already integrated into applicable laws and regulations, so that a lawful AI would already be, in a certain way, ethical.

This is therefore a form of circular reasoning, in a sense, that reflects a superficial “veneer” added to the already multiple layers of applicable regulations.

Furthermore, one may question the reasons for this definition if it merely restates already applicable legal provisions. Reminding us that AI systems must be lawful, ethical, and robust becomes, in a sense, desperately banal.

It is also a way of diverting reflection away from use. The focus is not on the what but only on the how.

3) The Trap of Ethics Washing in Technology

Juan Sebastian Carbonell, in his book “Un taylorisme augmenté, critique de l’intelligence artificielle,” writes:

“Ethics is a pure instrument in service of AI development (…) Regulation and ethical AI therefore do not question the use of AI as such, (…) One could say that they primarily aim to make AI acceptable by presenting it as ‘responsible.’”

I do not share this view 100%, but I think Carbonell captures the issue well: we limit ourselves to questioning certain uses to make them “acceptable.” We create an “ethical illusion” to respond to the structural problems of AI. In a sense, it is a form of ethics washing as defined by the French Official Journal of July 16, 2024, and which constitutes:

“a communication strategy of a company or organization that seeks to improve its brand image by abusively claiming ethical values.”

As mentioned above, trustworthy AI as defined can have a very superficial character due to its approach. The what is not questioned, only the how.

Ethics, meaning what we should do (or not), becomes compliance, meaning what we must do.

Developers of solutions will content themselves with following the stated principles to the letter, without truly questioning the technological use or the necessity of the proposed technological solution to the problem at hand.

It is a trap to conceive of ethics as a checklist. We treat ethics as a software feature that can be added after the fact, instead of recognizing it as a fundamental element that should guide design decisions from the beginning. I believe that reflection must be done upstream, even before designing the solution.

II. Technological Neutrality

I have denounced the absence of questioning of the use of technology. This observation results, in my view, from a relationship with technology, and more generally with technique, that must be examined from the standpoint of technological neutrality.

All this reflection may seem disconnected in many respects. Notably, we regularly hear:

“technology is neither good nor bad, it all depends on what you do with it.”

To justify this assertion, the example of the hammer is often invoked: one can use it to build a house or make furniture, but also to smash a neighbor’s head or break a window to commit a theft.

If this “reasoning” can work for technique, can it work for technology (which I place within the broader category of technique)?

In my estimation, this premise is no longer tenable and has not been for a long time. To understand this, we must briefly return to the history of critiques of technique and then observe how these analyses resonate today, in the era of technological solutionism and generative AI.

1) The Techno-Critical Movement

The critique of technique is not new. Technique has always been criticized and questioned.

The techno-critical movement of the 20th century saw a plethora of inspiring authors. Among them, one author particularly strikes me: Jacques Ellul, whose thought remains strikingly relevant today in many respects.

First, Ellul observes that our society and the individuals who compose it are in an absolute pursuit of maximum efficiency in all things. It is not merely about efficiency at work or in production, but efficiency in all layers of our society and our systems.

According to Ellul, technique is systemic: it is an autonomous force that governs social organization as a whole. This systemic character derives, according to Ellul, from the desacralization of the natural world and a transfer of the sacred to technique.

We therefore reinforce the systemic character of technique through its sacralization and by giving it a new cardinal value: efficiency.

The consequence is that technique is no longer an instrument, it is a religion, and because one considers technique as such, one loses all rational and free critical capacity. Technique and efficiency become a dogma.

I was recently reading a newsletter by Tariq Krim that stated:

AI plays an ambiguous role. On one hand, it promises us to “write better,” faster, more clearly, more elegantly. On the other, it reinforces the idea that our raw thought is insufficient. Behind each text generation, there is an implicit negation: we are not enough.

Personally, I find this observation deeply meaningful and revealing of what Ellul was describing nearly 60 years ago. “We” are perpetually searching for maximum efficiency, and technical and technological innovation only reinforces this quest.

2) Technological Solutionism

This efficiency dogma will lead our society into certain “drifts.” We have become so convinced that technology is efficient that it becomes virtually the only audible answer to our problems. We fall into techno-solutionism which holds that there is a techno-entrepreneurial solution to every given societal problem.

Conceptualized notably by Evgeny Morozov, technological solutionism imposes technology without a need being identified. “If it can be done, we do it,” some would say. This form of techno-optimism creates the illusion that technology can solve everything. We systematically transform complex social problems into technical challenges.

This way of doing things is “unfortunately” quite seductive for the political world, as Benjamin Pajot explains, because it offers:

an immediate repertoire of action within reach of public decision-makers under permanent pressure for results and the short timeframe of political communication.

We are therefore confronted with an overamplification of Ellul’s observation, caused by technological innovation and techno-solutionism that lead us to “always more, always better.” We no longer discuss the why but the how, we seek to optimize the system instead of questioning it.

But we must inevitably remind ourselves of one thing (I prefer the English here for conciseness):

technology doesn’t solve problems, humans do.

3) Technology Does Not Solve Problems, Humans Do.

Ellul’s observation about efficiency is not new; it was formulated in a particular era and context. It is surely the striking character of his formulation that explains part of my affection for the author.

However, many others had, before Ellul, observed the same phenomenon.

Marx wrote:

The steam engine was less a starting point than the materialization of a transformation already underway in capital: the search for ever greater productivity and the domination of labor by technique.

If the causes of this pursuit of productivity or efficiency can be different and debated, the result is undeniably the same.

A brief excursion into the labor sector will, I hope, further illustrate my point of view. Taylorism, for example, was already a form of technological solutionism at the beginning of the 20th century. Where workers were deemed inefficient and/or too expensive, Taylor proposed a decomposition of tasks that would allow replacing these skilled workers with machine operators with a low level of autonomy and thus competence, thereby enabling a reduction in labor costs.

We move from the autonomy of the artisan, applying their own rules (auto-nomos) using their body and tools, to automation (following rules). It is worth noting that the term “automation” was coined by Delmar S. Harder, vice-president of Ford Motor Company in 1948.

4) Technology Is Not Neutral

Technology cannot be considered neutral.

First, because we consider, in the manner of a dogma, that technology is necessarily synonymous with efficiency. This value is considered intrinsic and biases our perception of the “technological thing.” We no longer question this efficiency because it “goes without saying.”

Next, the development of technologies is oriented by sectoral funding choices. A large part of the technologies we currently use originate from research funded by the military sector. It is therefore a choice to develop one technology rather than another. This choice is then inspired and/or guided by considerations specific to the sector that funds it.

Moreover, technological development has the consequence of creating (un)conscious changes. The banking sector seems to me a good example. Its digitalization has profoundly transformed banking services and has, in certain respects, “restricted” them.

While one can consider ATMs a form of progress, beneficial for users, the “all digital” approach found in the banking sector has very certainly given you some gray hairs. Between the disappearance of local branches, the difficulty of reaching a “human” to resolve a problem, and that human telling you to use the very application for which you are seeking help because of a malfunction, it is becoming increasingly difficult to enjoy one’s money. What was supposed to facilitate operations sometimes becomes an obstacle course.

Try closing the account of a deceased person at a fully digitalized bank. You will discover a Kafkaesque labyrinth: the app redirects you to the website, the website to the chatbot, the chatbot to a phone number that redirects you… to the app. This absurdity is not a bug: it is the feature of a system that has optimized its profitability at the expense of its primary function: serving the client, and thus the human.

Finally, our relationship with the world is transformed. While we are all still free to use technology and technique, it remains difficult, if not impossible, to ignore it. Unless one lives as a hermit, secluded from society, we all experience, to varying degrees, the technological “intrusion” that also has, to varying degrees, an impact on our lived experience. According to some, the car has “disfigured” our cities, and cyclists who venture onto the asphalt are still looking for their place. This situation is “so” problematic that development work is being carried out to “promote” soft mobility, which, in some cases, is a gentle euphemism.

Technology cannot be considered neutral. It crystallizes choices from its design through its manufacture and during its use. It is therefore necessary to question its use and its relevance.

III. How to Navigate?

If one becomes aware of the stakes, one will generally ask the question of ethics and of “in what society do we want to live?”

Faced with this technological force, we are led to question our values, and some are tempted to define the contours of tomorrow’s society in the face of technology. I think this question is a trap because it observes a problem and evokes an out-of-reach solution, like a Kantian ideal: an idea often treated as a real object when it is merely a regulative idea (see my article on the subject available here). In that reflection, I concluded that the “logical” solution implied by this question was inadequate.

My conclusion remained insufficient and therefore unsatisfying. I would therefore like to develop here a few elements of solution that could allow each person, at whatever level, to navigate through these complexities.

1) The Trap of False Alternatives

Before presenting my proposed solutions, I would like to briefly explore what cannot be the solution.

As already written, my point is not technophobic or reluctant, and rejecting technology wholesale would deprive us of certain opportunities. Wikipedia would not have been possible without technology. We have interesting advances in the health sector thanks to technologies that are undeniably necessary. The internet has made possible things that were until then impossible to conceive and/or imagine.

Adopting everything is not a path to follow either. We know that the lack of neutrality of technology can lead us to complex situations whose contours we are increasingly perceiving. Moving forward without guardrails or structuring framework, blindly adopting all technologies without discernment would plunge us into a dystopia whose features have already been sketched by many authors.

Regulating everything would then be the “royal” path to solving these technological “problems.” This is, moreover, the solution applied for several years in Europe, and it earns us the mockery of certain competing countries. While regulation has its merits and is necessary, we must admit the sometimes inadequate character of certain norms. This inadequacy can be explained in many ways, and the idea is not to list them here. Let us simply note that technological innovation evolves at such a pace that regulation of the sector is often out of sync: we are often behind in regulating. And even when a rule is relevant, it may remain so for only a few years, surviving only thanks to jurisprudential evolution that keeps it current and restores its relevance.

Finally, there is also sometimes a belief that by refusing the adoption or integration of a technology, one loses control: a form of competitive advantage left to others that would place the renouncer in an inferior position. These beliefs are obviously formed through the prism of efficiency, which, for more and more people, is no longer necessarily a cardinal value, although the system continues to prize it.

2) A Compass with Three Markers

It is therefore necessary to know where one wants to go if one wishes to navigate. I do not claim to know where we should go. However, I aim to propose three markers that will guide this navigation and will, I hope, enable a better reflection on the use of technologies.

a) Detecting Solutionism

The first thing that seems necessary to me is to become aware of the existence of these “biases.” To become aware that we have a tendency to plunge into a form of technological solutionism on a recurring basis.

Our society is driven by “productivity.” Many seek to be more productive at work, for example. We then find abundant literature on the subject (David Allen, Cal Newport, or Ali Abdaal, to name just a few).

Everyone seeks the means to be more productive. We research, we read what others do, and we seek organizational solutions to do more with less. Very quickly, we fall into catalogs of “magical” applications that promise a whole series of apparently wonderful things and that ultimately divert us from real productivity.

Instead of doing, we search for the best way to do. This search for productivity ends up becoming an infernal procrastination. (Let us not reject these applications wholesale, but very often, the best way to “sort things out” is a method, a pen, and a sheet of paper.)

If one takes an interest in the subject, one quickly sees that the majority of proposed solutions are technological.

As with the example of AI in recruitment, we treat the lawfulness of tools as the criterion for the relevance of their use. Yet this is, in my view, a lure: the legality of a technology does not, de facto, make it necessary or effective (and even less ethical).

We must therefore recognize this solutionism that can take over and examine it by critically verifying the problem and its root cause. By understanding the root cause of the problem we seek to solve, we will then be able to more finely gauge the appropriateness of the envisioned solutions. Perhaps a technological solution will ultimately be the best solution. By preserving this questioning, we set up a first filter against this hype.

b) The Technological Pharmakon

Once the solution is envisioned, I propose mobilizing the concept of the “pharmakon.”

Carried forward by Bernard Stiegler, following Derrida, the concept of the pharmakon comes from the dialogue between Socrates and Phaedrus as recorded by Plato.

In the Phaedrus, Plato relates, through a story Socrates tells about the god Theuth and an Egyptian king, that writing can be a remedy “against the difficulty of learning and knowing” but also a poison that “will produce only forgetfulness in the minds of those who learn, causing them to neglect memory.”

Transposed to technology, we should consider technology as necessarily complex and ambivalent in its effects. This understanding of ambivalence allows us to move beyond the techno-optimist approach and offers an elegant explanation to circumvent the immobility in which we might find ourselves if we confined ourselves to seeking a technologically “perfect” solution.

The strength of this concept also lies in the pharmacological approach proposed by Stiegler. Like a drug or medication, everything is a matter of dosage. Too light a dose has no effect; too strong a dose can be lethal.

There is therefore a balance to find through this complexity that becomes inherent to all technology.

Far from the myth of technological neutrality, the pharmakon affirms the ambivalence of each technology and allows, in my view, a more adequate apprehension of the advantages and dangers of the technological solutions being considered.

c) Technological Myopia

To conclude, I will rely on the concept of “technological myopia” developed by Richard Susskind, but I will reinterpret it to give it a broader meaning than the one that was proposed.

This myopia refers to the situation where we project the flaws of a technology without taking into account future correction factors, whether technical or political.

Moreover, myopia can be optimistic or pessimistic.

Optimistic myopia refers to the case where one assumes that all flaws will be corrected in the future, while pessimistic myopia refers to cases where current flaws will not be corrected.

For example, if generative AI hallucinates today, these hallucinations will not necessarily be as present in a few months/years. Conversely, one should not imagine either that these hallucinations will be purely and simply erased in the future through technological innovation.

By mobilizing this concept, we seek to bring rationality to projection. It also allows us to consider correction factors (political or technical, for example) in order to objectify, as much as possible, this projection.

IV. Conclusion

Through this mini-essay, I have attempted to draw up an assessment of our positioning toward technology.

By understanding that technology has efficiency as an intrinsic value, which gives rise to techno-optimism promoting technological solutionism, we can more objectively challenge the directions taken “intuitively.” Without falling into a technophobic or reluctant posture, I observe that we are offered trustworthy AIs, and more broadly “trustworthy” technology, as technology whose flaws would be erased and/or minimized.

We then fall, once again, into an implementation of the principle of efficiency and its absolute pursuit. Creating a trustworthy AI means assuming that AI is necessary and that there is no other solution.

Today, questioning trustworthy AI and ethical technology amounts to making technology acceptable without objectively and sincerely questioning it.

If we detect technological solutionism, understand technological complexity and accept its ambivalence (Pharmakon), we can (perhaps better) think about technology by being careful to correct our projections (optimistic or pessimistic) (technological myopia).