Authenticity and sincerity: the aporia of transparency
For several months, I have taken the time to reflect on the question of transparency regarding the use of generative artificial intelligence (hereinafter GenAI) (see notably here). I had told myself that I would write a follow-up to that first note, as I had received quite a few comments that had reshaped my thinking. I eventually set the project aside. And as often happens, it resurfaced unexpectedly: first with this article, and then, a few days later, during a reflection on a writing project integrating GenAI.
This (almost obsessive, I admit) reflection leaves me deeply perplexed each time and places me in a very ambivalent position. On the one hand, I am “pleasantly” surprised by what I manage to produce with the help of GenAI. Speed of execution matters to me, which is why I like using GenAI to quickly externalize what I have in mind. I then spend time reformulating, cleaning up, and nuancing the text. For me, GenAI is not a cure for writer’s block; it is a way to quickly start a project (which may remain at that stage for months). GenAI is therefore an anti-frustration tool. As soon as I have something in mind, I throw it into a note and then interact with GenAI to structure that reflection, which I later rework.
However, my readings and reflections push me to consider the use of GenAI as “quasi-dangerous” (see on this topic my note on the dispossession of operative capacity). This tension between these two poles is persistent and is ultimately a good thing.
It forces me, tirelessly, to ask questions and to put into perspective the conclusions I thought I had drawn from my previous reflections. Here, then, is an evolution or a complement, rather than an update.
Authenticity versus sincerity
The argument that struck me the most recently is the distinction between authenticity and sincerity (see “The Internet’s New Favorite Insult: ‘Did AI Write That?’”).
Authenticity refers to the certified character of content in relation to its author. In writing, authenticity is the absence of mediation between the author and what is written: the chosen words, the turns of phrase specific to the author, which convey the substance of their thought. Authentic writing would thus be a way of obtaining transparency into the author’s thinking, like an open window onto their thoughts.
Sincerity, on the other hand, refers to the correspondence between what is said or written and what is thought. It is the congruence of thought. Sincerity thus reflects the faithful expression of the author’s convictions, translating them fully and completely, without pretense.
Put differently, one could consider that for authenticity, it is the form that matters, while for sincerity, it is the substance.
These two requirements come into conflict when GenAI enters the picture. I can be perfectly sincere in my convictions while using GenAI to express them. But this “algorithmic” mediation compromises authenticity in the strict sense: the text no longer emanates directly from me; it is the result of a human-machine co-production.
Today, I observe a tendency to value the authenticity of content without genuine consideration for its sincerity. Let me explain: by using GenAI, one can be sincere, but according to some, one would not be authentic in the strict sense of the term.
GenAI would position itself as a means of transposing the author’s thought and, in its formal expression, depersonalizing it.
I believe it is this depersonalization that is problematic today and that leads the reader to seek more authenticity than sincerity.
The feeling of betrayal
To illustrate this tension between authenticity and sincerity, I will refer to a post I saw on LinkedIn where the author describes their emotion following the “discovery” that a creator they followed was using GenAI:
In my mind, everything collapsed. “Everything,” in the sense of credibility, trust, curiosity. And this last point is for me the most important. Instantly, I lost the curiosity I had for their reflections. I can no longer ask myself that delicious and piquant little question when their name appears in the feed: “Oh, here’s Chris Do! What does he have to say this time?” The magic has vanished. Doubt has settled in. His AI clone no longer interests me. The one who spoke about my profession better than anyone has disappeared.
So a betrayal is taking shape. The content is no longer the distinguishing criterion. It is the trust placed in the fact that the author is the one who writes.
If this is truly the case, there is also a form of admission that seems quite troubling to me: we value content based on the person who wrote it and, consequently, we bias our perception and our evaluation of its relevance. If a notable specialist writes and/or says something, we then have more reason to consider the content relevant. This is the figure of the argument from authority. X said this, therefore it is true and relevant. If we learn that this content was written with the help of GenAI, we deny it any interest and its author is discredited almost mechanically, even if the content is sincere in the sense discussed above.
Are we not going too far? Consider this: we constantly talk about critical thinking, nuance, and the need for an objective view of the world and of information; yet the simple admission of having used GenAI would completely denature a piece of content and strip it of all relevance?
The transformation of communication codes
This search for authenticity masks, I believe, a deeper issue: the normalization of communication codes. The difficulty also lies in the habit we have developed of writing and reading authentic content (content written by humans). Today, we face a growing confrontation with content generated by and/or with the help of GenAI that disrupts our habits, leading to diverse reactions and reflections.
If I were to venture a metaphorical comparison, it is as if we changed the light bulb in our kitchen: accustomed to a warm, soft yellow light, we would switch to a cold, white light. The change is not necessarily negative, but it is unsettling.
With GenAI, it is generally the form that reveals a text was generated with assistance. We notice the style is robotic or mechanical, the sentence structures are similar, the punctuation marks are redundant. What remains problematic is that we invert things: we spend our time looking for markers of authenticity to identify whether a given text was generated by or with the help of GenAI. Energy is therefore devoted to detecting assisted content, without any primary consideration for the quality of that content.
But the situation can also be the reverse: the reader realizes that the substance is devoid of meaning and reassures themselves about their own judgment in the face of this vacuity by identifying the “patterns” of GenAI writing.
This is a problem, because in reality, it is a form of abandonment by the reader who decides to stop at this formal aspect without considering the substance. What worries me is that if we persist down this path, we are heading toward valuing the imperfection of authenticity rather than the quality of sincerity.
This inversion leads to a phenomenon I recently observed in a developer’s blog post: errors, awkwardness, stylistic imperfections (traditionally negative signs) are today becoming guarantees of authenticity. These imperfections signal a human origin in an environment saturated with algorithmic productions.
As written above, this regression is worrying. We are reduced to valuing imperfections as proof of humanity. But do we still have the ability to recognize thought itself? Because ultimately we fall back on secondary indicators: the rough edges, the traces of imperfection that would confirm a non-automated process.
The tension between effort and quality
Some even go so far as to proudly display their non-use of AI. Why does Derek Sivers take the trouble to write a page on his site announcing he does not use AI? Why do we create “NotByAI” badges?
In my estimation, it is a somewhat insidious way of flaunting one’s “competence.” Something like “I write, and without GenAI,” which leads to a form of denigration of all content generated with the help of GenAI. A new form of pedantry. I am surely generalizing, and the intentions of some may not be those of others, but the idea is there.
I also see a defense mechanism against the massive production of content with the help of GenAI, which in my view betrays the authors. Why reveal this aspect of things? Should the quality of a text generated without GenAI not speak for itself? Quality should be self-sufficient, no? There is therefore also a tension between the notion of effort and the notion of quality. We have been conditioned to consider that the effort put into editing one’s thought and formatting it sometimes amounts to a struggle to articulate one’s thoughts and ideas. This struggle, this effort, is today considered synonymous with quality.
This consideration is insidious, because it is deduced from the views we already hold about texts generated by GenAI. Indeed, if we discredit a text simply because it was written with the help of GenAI, is there not a form of over-valuation of texts written without assistance? Do we not fall into the stereotype of immediately valuing any content written by a human, without consideration of quality, substance, or relevance?
In this framework, we focus on criteria extrinsic to the text (its form, its style, its imperfections, its errors) to consider that it has a proper human quality, while intrinsic criteria concerning the substance, relevance, depth, and angle of analysis are relegated, almost vulgarly, to second rank.
But effort is not a criterion of quality. It is a social signal that reassures us about the authenticity of the author but says nothing about the relevance of the content. A bad writer who toils for hours will produce a mediocre text despite their effort (some might also consider that this effort will make them better; I share this view, but it is in my opinion external to the question of transparency addressed here). A good writer assisted by AI can produce an excellent text in less time.
Valuing effort for effort’s sake amounts to sacralizing the process at the expense of the result. It is a form of moralism that seems to me to rest on contestable grounds.
Some will also tell me that sharing content means claiming others’ attention, and that if we share content we did not write, why should others read it? This argument presupposes that AI-assisted content is not truly “written” by us. It is this presupposition that must be questioned.
The question of the author in the age of AI
Stéphane Vial, in his text “La mort de l’auteur n’aura pas lieu à l’ère de l’IA”, offers a reflection:
We thus move from a logic of authorship (paternity) to a logic of ownership (mastery and control of tools, software, processes, and their results). The ability to orchestrate, own, and manage the means of production and distribution becomes as decisive as the signature itself. To be an author in design is less about inventing alone than assuming the result of a co-production.
This observation seems interesting to me. Writing tools evolve, and this reflection on transparency could result from this evolution and the author’s relationship with their tools. Now that we have tools that can generate content, it is legitimate to reflect on this notion of transparency in order to better think about the notion of authorship.
Vial continues:
To be an author in the age of AI is to assume this responsibility. It does not matter whether one has used ChatGPT, Wikipedia, a dictionary, or Google: what is essential is being able to validate and defend what one signs.
The author’s responsibility does not disappear with GenAI. But this assertion poses a problem for me when Vial also writes:
Do we ask writers to reveal their use of grammatical correction software like Antidote or to confess to using a ghostwriter?
This comparison strikes me as inappropriate. Grammatical correction is not comparable to appropriating content generated by someone else. The comparison of AI to ghostwriting is, on the other hand, much more relevant, albeit imperfect: an author publishes content they claim to have written themselves when in fact they were assisted by an invisible third party.
The question of transparency: an aporia
All of this brings me back to my initial question: should we disclose the use of GenAI?
I would like to consider the two scenarios that may arise.
Choosing silence, revealing nothing, and letting the text speak for itself. This path bets on qualitative indiscernibility. But it exposes itself to a risk: if the use of GenAI is discovered later, credibility collapses. The example of Nicolas Casaux dissecting Samah Karaki’s work is quite telling on this point. The author had apparently used GenAI without disclosing it. This could be felt in the form but also in the substance (an important aspect of my reflection). The revelation destroyed that credibility.
Choosing disclosure and publicly assuming GenAI assistance. This path transforms a potentially suspect practice into an assumed approach. This transparency exposes to another risk: that of immediate disqualification, regardless of the actual quality of the content.
Between these two scenarios, there are middle paths, such as one where a benevolent reader understands the disclosure, identifies with the initiative, and thereby legitimizes their own undisclosed GenAI-assisted writings.
These hypotheses also depend, ultimately, on the quality of the final result, and there is no very clear answer.
Does the question of transparency ultimately reveal our (growing) inability to evaluate a text on its intrinsic qualities? Do we demand to know the conditions of production because we no longer know how to judge the product itself? Or is the space so saturated with content that we are simply applying a filter? Is this demand for transparency a symptom of intellectual abdication? Does it excuse us from judging by allowing us to sort according to binary and reassuring criteria: with or without AI?
The dogmatism that paralyzes
I am still stuck on a form of requirement, of dogmatism in my relationship with writing. Considering that one cannot use GenAI to share one’s thoughts because it represents a form of cheating, of short-circuiting. But at the same time, when I use GenAI as a super assistant, the result often delivers.
This requirement is difficult to sustain in principle because it would suppose rejecting not only GenAI but also all the technical mediations that already structure our intellectual activity. Should I refuse the Internet because the GAFAM control its infrastructure and Google directs my searches? The article shared above was itself suggested by my news feed. Should I live as a hermit to be consistent?
Some might argue, as I read in a comment, that adopting “an uncompromising moral stance against generative AI contradicts the very values of research: doubt, complexity, pluralism, and reflective ethics.”
This dogmatism leads to paralysis. It prevents thinking about technology as pharmakon: both poison and remedy, alienating and emancipating depending on how it is used. It also disregards the reality of our environment, which is technical.
There is therefore a critical use of GenAI that must remain possible. It requires vigilance about what the tool does to our thinking. It supposes becoming aware of the moments when the algorithm makes us drift, when it flattens our reflection, when we must fight against its suggestions. This struggle itself can become productive.
A disturbing question
All these tensions converge toward a single question: what makes a text thought rather than simply produced?
This question has no technical answer. It cannot be resolved by external markers (use or non-use of AI), by moral criteria (effort deployed), or by declarations of intent (sincerity of the author).
It can only be resolved by a judgment on the text itself. A thought text is recognized by its depth but also by its rough edges, by its assumed contradictions. A merely produced text is recognized by its polished vacuity, by its homogeneity.
I persist in believing that the quality of content generated with GenAI can only be guaranteed by human investment in direction, structuring, and the mobilization of concepts. When this attention is lacking, the quality of the text diminishes. The text becomes generic, hollow, recognizable in its characteristic GenAI vacuity.
As a commenter writes: “But even if LLMs could write articles in my voice I still wouldn’t use them due of the ethics of misrepresenting authorship by having the majority of the work not be my own words.” I understand this position. But it presupposes a very strict definition of what “one’s own words” means. Are my words truly “mine” when they are nourished by thousands of readings, conversations, and influences? The question of pure originality is perhaps an illusion.
Despite these elements, I remain undecided on this issue and cannot definitively settle it. There is a certain education that pushes me to favor authenticity, and this is what, in my view, leads to the conclusion that the use of GenAI as a writing aid must necessarily be disclosed, out of respect and in consideration of the trust the reader grants us.
But I also understand that this demand for transparency is itself symptomatic of a deeper difficulty: our inability to judge the intrinsic quality of a text. We cling to reassuring external criteria because we have lost our tools of judgment.
This question is aporetic: it puts into tension contradictory requirements that cannot be simultaneously satisfied in our technical environment. Authenticity against sincerity. Effort against quality. Instrument against expression.
None of these tensions finds a satisfactory resolution. They reveal that our relationship with GenAI-assisted writing is fundamentally contradictory. We want both the benefits of assistance (time savings, formal quality) and the guarantees of authenticity (personal effort, human origin).
This contradiction is the consequence of our historical situation. We are living through a moment of transition where the old categories (author, work, originality, authenticity) no longer hold, but where the new categories are not yet stabilized. The question of transparency about the use of GenAI is merely a symptom of this deeper disarray. It is the Promethean shame evoked by Günther Anders in 1956: the shame that seizes humans in the face of the humiliating quality of things they themselves have manufactured.
This note reflects a thought process still in progress. I publish it in this state of assumed indecision, because the indecision itself is part of the reflection. And because I care about transparency, I inform you that I used GenAI for this text to format several notes written on the subject. Personal writing work was done on the generated text.