Writing with an LLM is nothing to be ashamed of. An essay about transparency in AI use.
If you are curious about AI, you quickly learn to spot people who are using it poorly (for those who use AI “smartly” or correctly, it’s more complicated to detect). I’m pretty sure you have already seen tons of AI-written posts on social media. For me, I started noticing this in January 2024. At that moment, I was shocked to see all these AI-written articles and comments, and I decided to disclose my AI use as a gesture of transparency. I quickly received some questions about this transparency, and I ended up in genuinely philosophical discussions with some people around the question: do we have to disclose (or not) our use of AI?
I was finally convinced not to disclose all my AI uses. One argument against disclosure: people do not disclose that a picture has been Photoshopped when they publish it, and we are surrounded by a lot of edited pictures.
I took the point and decided, with some rare exceptions, not to disclose my use of AI. But the transparency question was still in my head.
A few months ago, I saw different initiatives regarding transparency of AI usage, and the question re-emerged.
First, I discovered Derek Sivers’s page on AI usage. As Derek mentions, he has “never ever used AI to generate text instead of (his) own”, and he uses this special page to let people know that nothing claiming to be written by him was written by an AI. I thought about this initiative and felt it might be a decent idea to publicly explain how we use AI. Time flew and this idea stayed in my head.
A few months later, I stumbled upon notbyai.fyi, which has pretty much the same spirit. As mentioned on their website, they are promoting human content. They also have a strong statement on their homepage:
Artificial Intelligence (AI) is trained using human-created content. If humans stop producing new content and rely solely on AI, online content across the world may become repetitive and stagnant. If your content is not AI-generated, add the badge to your work.
More recently, I learned about the University of Montreal initiative. They shared a guide on AI and recommended declaring the use of AI for academic work.
All these elements convinced me that it was time to think a bit more about transparency and to try to understand why it might be important to voluntarily disclose the use of AI (I will not write about why some laws might make disclosure mandatory).
Furthermore, I must narrow the scope of my thinking regarding content. I think transparency is worth thinking about, but not for every single piece of content out there. A lot of written content can be generated with AI and that’s okay. A commercial, a generated text about the weather or traffic: all these texts created from data with AI don’t necessarily have to be written by a human. The only thing we need is reliable information. So for texts about facts, the only need is having “true facts”.
But texts can contain a lot more than facts. We can share ideas, visions and other really subjective content. This text is a perfect example. I’m writing what I think about something. It’s not meant to be the truth, only an opinion.
This text will focus on texts like opinions or essays with a subjective perspective. It would be interesting to think about other types of content, but I’ll try to stay concise even though the subject is broad and complex.
So let’s begin.
It’s (not) all about transparency
The easiest answer to the question (do I have to disclose my AI use?) is: transparency. People must know whether what they read was written by a human or a machine. In many respects, this is true and acceptable.
But can we push back on this statement? Yes, and I think we need to dig further, because we can do so many things with AI that the statement needs some granularity.
We need to define what “transparency” means and explain how we are using AI. As the initiatives mentioned above suggest, there is a scale of AI use. You can publish content made 100% by AI, assisted by AI, or without any AI.
Let’s focus on “assisted by AI”. What is the definition of this?
Unfortunately, there is no clear definition from the organisations that support the initiatives mentioned above.
So, at this point, we might have some trouble disclosing AI use without a clear scale of usage. Does proofreading your text imply you were assisted by AI? If so, why didn’t we see people mentioning the specific tools they used to proofread before AI appeared? There are a lot of tools out there (Grammarly and Antidote, to name the most famous) and I never saw anyone mentioning that they used this or that one.
Therefore, maybe it’s about something else (or something more) than transparency.
You should be credible
We could also ask ourselves where an idea comes from when reading a text or consuming an opinion. Basically, you might wonder whether all the ideas in this essay came from my head or were generated by AI (spoiler alert: for this essay, I did not use AI, as a thinking exercise).
So, in fact, is it about credibility? Is it like lying to pass off content made by AI as your own?
From my perspective, the question is only interesting when the content is good and/or valuable for the readers.
Let me explain: for “bad” or low-value content, it’s a useless discussion. Why do we bother about it? Someone uses AI to generate “poor” content without disclosing that they used AI. What will happen? Nothing. The content will be lost on the Internet. It won’t be the first time poor or low-value content is published, with or without AI.
The question really matters when the produced content is good and valuable for the readers because, I think, the author will take credit for something he is not really the author of… sounds weird to write, doesn’t it?
I mean, just imagine yourself discussing X, Y or Z with an AI. Some ideas are shared with the LLM and one great idea emerges from the discussion. You can ask it to draft a blog post or an essay around this idea, copy and paste it to your website, et voilà!
But let’s be honest: in this scenario, the author had to identify the good ideas during the conversation. So maybe he is not the real author, but he is behind the conception of the content. It is because he exchanged with the LLM that this idea came out. And obviously, I’m not writing about stealing content or ideas from someone else. That is illicit and out of the scope of this essay.
Therefore, the only case where credibility is really in question is fresh, new and good ideas or content. So maybe what matters is more about sourcing content than disclosing the use of AI per se.
The missing source problem
André Gide once said:
Everything has already been said; but since no one listens, we always have to start again.
There is a lot of content out there that is a reformulation of things that have already been said. And it was out there before LLMs.
I started drafting this essay in May 2025. In May, I was also reading [[Mathieu Corteel - Ni dieu Ni ia]] and at the end of the book, Corteel made a reference to Terence’s aphorism:
Nothing is said that has not been said before.
It’s quite funny to read this because, a few days earlier, I had been referring to André Gide’s quote, which says pretty much the same thing. And as for the content itself, Gide was rephrasing Terence, I think.
Actually, it might be interesting to challenge these quotes.
Can we consider “everything has been said before” a « true » principle? I like Leibniz’s theory that, with a finite alphabet, we can only write or say a finite number of things, mathematically speaking.
This rational explanation leads Leibniz to say that, in some way, we will reach the « horizon de la doctrine humaine » (the horizon of human doctrine). In a sense, we will exhaust all the combinations of our finite alphabet.
And maybe this is why we can say that “nothing is said that has not been said before”? Because we have already touched the « horizon »? And this may be even truer with LLMs, because they are trained on existing data and, obviously, all the content produced by an LLM is a reorganization of things that have been said or written before.
Let’s come back to our main question: the transparency of AI use might be more of a sourcing problem.
We have no issue reading material from someone who mentions other people’s ideas and sources them. In that case, the reader can distinguish whether an idea came from the author or from someone else. We can trace back the author’s thinking and understand what he meant and how he meant it.
For a lot of things, we are only dwarfs standing on the shoulders of giants. Except for major discoveries, we are always trying to connect previous knowledge to express information and to create new knowledge.
This knowledge obviously comes from somewhere and, depending on how strict we are with ourselves, we may have to comply with some rules about how to source other people’s ideas. For academic material, we know that for every idea or paraphrase, we have to cite the source to avoid plagiarism.
But as you may know, an LLM does not naturally cite its sources, because of the statistical nature of the technology. We are seeing more and more LLMs with a feature that sources the content they produce. This feature, even if it’s great for a lot of reasons, does not solve the transparency problem, I think. To my understanding, it’s a deeper question and the answer can differ from one individual to another. In other words, I’m not convinced by these few reasons to disclose AI use. It’s not enough.
Trust is the key?
During my research, I found various explanations of why we have to disclose AI use, but each time the “true” reason wasn’t clearly stated.
For example, I read:
You shouldn’t claim work you didn’t actually do. For example, if you use AI to write a blog post, you didn’t write the post – a generative AI did. So there’s an ethical responsibility to be transparent, so that your readers know what they’re getting. [[Disclosure of AI and Protection of Copyright]] link
So basically, it’s bad to claim work we didn’t actually do. I agree with that. It looks something like theft (as mentioned earlier). And an author who uses AI without mentioning it can fool his readers, in a sense. But as mentioned earlier, the content you are reading might be produced in a million ways. It can be created from a long discussion with AI or with a one-shot prompt. In the latter scenario, the content will often be low value and we reach the end of the discussion, as mentioned earlier.
So, I think, it can be really difficult to produce high-value, good content without having a discussion or sharing ideas with the AI. Hence, we cannot really say that the AI produced the content. The author shaped his ideas with the help of the AI and the AI reformulated them. How can we explain this to keep the trust of our readers?
How can we draw the line between what we had in mind before discussing with the AI, what we had in mind during the conversation, and what we modified from the AI’s output? (One solution, though incomplete, might be to share all the discussion(s) the author had with the AI.)
But again, we didn’t ask authors to do that before. We don’t ask authors to publish their drafts or the notebooks with their ideas, etc. So even if it is needed to preserve trust, I think there is something else.
Avoiding bias
Now let’s imagine that someone discloses that he used AI to write a brilliant essay. Are we sure people will like the essay in the same way? In my opinion, I’m pretty convinced that an AI disclosure will create some sort of bad feeling or bias in the reader’s mind even before they read the content.
It will set off an alarm in the reader’s mind and carry different meanings for different readers (the author is lazy, or he is a bad writer, …). The reader will think that he can’t really trust the author about the sources or the facts mentioned in the text and will, potentially, be biased. The reader can dismiss the author’s quality just by knowing he used AI.
And this is not just a thought experiment. I do see people saying that they will not consume content made with AI. In the art sector, I can obviously understand that, but I think it reinforces my point.
It’s pretty much the same situation if we present you with a text from someone you hate: you will potentially be biased by his name even before reading the content.
Conclusion
As written above, I started to write this in May 2025 and I had a lot of difficulty finishing this essay because I could not find why “we have to” disclose our AI use. All the reasons mentioned were okay, but I felt in my gut that it was not enough.
Fortunately, I found an answer. I read a post from Christophe Denis and this extract was, for me, the answer I was looking for:
What we call “ethics” today often takes the form of empty vigilance, a mechanism of conformity, or an accusatory stance, rather than genuine discernment.
“A mechanism of conformity, or an accusatory stance”! That’s it! And see what I wrote? “Why we have to disclose”. It feels like an obligation we have to respect, in a way. I’m not sure I will agree with this idea all my life, but all this fuss about AI use generates a kind of “thought police”.
I think we fell into an ethics fallacy, in a way, with the appearance of a new technology. Even when we compare LLMs with other technologies, we do not apply the same ethical requirements to them. Some might say we have to because LLMs can generate much more than previous technologies, but then we fall into the same reasoning trap again.
The problem is not the use of AI but the people who think they can arbitrarily criticize someone else’s work, in the name of “ethics”, because he did or did not use AI. But speaking about ethics for such a young technology is pointless, in my humble opinion, because today, I guess, we still have to build ethical standards for AI and, as you can see if you do some research, this field is at the beginning of its exploratory phase.