Sunday, December 07, 2025

Does adding errors make your ChatGPT work seem authentic?

Is it OK to introduce errors into an essay written by ChatGPT to make it appear to be more authentic?

In May, James D. Walsh wrote an article for New York magazine about how college students are using ChatGPT to cheat their way through college. Walsh mentioned that one of the ways students can attempt to evade detection is by adding typos to essays generated by an AI chatbot.

Some AI experts recommend introducing a minor error or two to the output to give your work more of a sense of “authenticity.”

I was reminded of the old saying attributed to everyone from New York Post columnist Leonard Lyons to comedian Groucho Marx. The late journalist Daniel Schorr recounted a version that involved advice given him before he moved from print journalism to join Edward Murrow for a broadcasting career. “Sincerity,” a producer apparently told him. “If you can fake that, you’ve got it made.”

While the origin of the sentiment remains unknown, the underlying cynicism seems relevant when it comes to authenticity. Just as there is nothing sincere about faking sincerity, there is nothing authentic about doctoring work produced by generative AI chatbots to make it appear you produced the work yourself.

Doctoring AI output to make it less detectable may be clever. It may help avoid getting caught breaking the rules. But it is hardly authentic.

That still raises the question, however, of whether it is OK to do so. Using AI may be OK in some situations, but being dishonest about using it is not.

If you’re a student in a course and the instructor has laid out specific instructions on the course syllabus, or if there is a university code against using AI to generate your work for you, then it is not OK to use it. If a syllabus sets out the rules, then trying to evade detection by using AI and introducing a few minor errors suggests the student knows he or she is breaking those rules. It’s also dishonest to present work as created by you without disclosing that may not be the case. If a professor makes clear that he or she won’t use AI either, the right thing is for the professor to adhere to that commitment.

If the rules aren’t laid out, the right thing is for instructors to make clear what their expectations are. Many syllabuses contain a sentence or two about plagiarism. It might be wise to consider adding similar language about AI use. When in doubt, a student should always ask a professor if what he or she plans to do is OK.

Increasingly, outside of the classroom, AI has been seen as a useful tool when making presentations, creating resumes, writing cogent emails and completing what might be considered mundane tasks. But anyone using AI would be wise to check whatever product AI gives them before releasing it to the world. Even if you don’t create the work, the right thing is to make sure that it reflects whatever message you want it to get across.

Relying on AI can be OK, but doctoring its output to come across as more authentic is not. Go ahead and ask ChatGPT its opinion. It will tell you that it’s not OK, that it can backfire and that artificial typos feel artificial. At least that’s what it told me when I just asked it.

Jeffrey L. Seglin, author of The Simple Art of Business Etiquette: How to Rise to the Top by Playing Nice, is a senior lecturer in public policy and director of the communications program at Harvard's Kennedy School. He is also the administrator of www.jeffreyseglin.com, a blog focused on ethical issues.

Do you have ethical questions that you need to have answered? Send them to jeffreyseglin@gmail.com.

Follow him on Twitter @jseglin.

(c) 2025 JEFFREY L. SEGLIN. Distributed by TRIBUNE CONTENT AGENCY, LLC.

1 comment:

Penney said...

I agree with all of this. Here is how I use Chat GPT to help me. In the last year I have started to take some courses at my local university. As a member of the 60something group I get to do it for free. That means that I don’t have to write the papers if I don’t want to because I don’t get any credits. However, sometimes I choose to write the paper anyway because I like the challenge, and the opportunity to learn more. I will always choose to do it the right way, even if it doesn’t count.

I’m a terrible writer. I ramble and go all over the place. So I do the research, write my version, then I have my husband read it. If he can’t fix it easily we run it through Chat GPT. The program cleans it up nicely and it is still using my work and point of view. It usually gives me three choices and I choose the one closest to my style, but cleaner. And I always proofread it and correct anything that may be wrong. I find this really helpful.

Now, if I was taking an English course, or a writing course, I would not use Chat GPT. But, I still wonder if I should tell the teacher that I asked for help from AI.