There has been quite a bit of buzz about ChatGPT, an artificial intelligence chatbot launched in November 2022 by OpenAI LP, a for-profit offshoot of the not-for-profit OpenAI Inc. After a user types in a prompt, the chatbot spits out a readable essay, memo, email, piece of code, poem or whatever else the user asks for.
Often the results are remarkably readable and coherent, though not flawless. One former student, for example, sent me the results of their request to ChatGPT to "write an op-ed about Professor Jeffrey Seglin." ChatGPT spit out a coherent six-paragraph column broadly capturing some things about me, but it also got the titles of two of my books wrong.
There were some accurate details in the essay: my name, what I write about and where I work. What ChatGPT got wrong: the subject I teach at the institution where it correctly placed me. As a result, it misrepresented how influential I had been in certain fields of study, without offering any research or detail to support its claims.
Given its factual errors and its unsupported claims, the essay would have received a poor grade had it been turned in as an assignment. But if the former student hadn't told me, I'm not sure I would have known for certain that the op-ed column had been generated by an AI chatbot.
That raises a question for college admissions. Admission application essays are typically short, broadly stated responses to a prompt given to all applicants to a college or university. It is harder to verify the facts applicants write about themselves than it is to verify the title or author of a book or what someone teaches at a particular university. Can, for example, the reader of an application really verify how involved an applicant was in their community cleanup campaign?
Nevertheless, asking ChatGPT to respond to an application essay prompt is simple, and the results get spit out in seconds. It might seem a tempting shortcut. So why not do it?
Because just as hiring someone to write an application essay is dishonest and doesn't reflect the work of the applicant, so too is farming the work out to an AI chatbot. Although someone somewhere might get away with using an AI chatbot to complete their homework without getting caught, the student will not learn how to think through and do the work themselves.
There might always be people who try to cheat. There might also be those who simply want to get through a course without having to do all of the thinking and work themselves. It should be made clear to applicants or students why trying to pass off an AI chatbot’s output as their own doesn’t result in them learning what they are presumably there to learn.
Although AI chatbot detectors are likely to be developed, just as plagiarism detectors were, the main reason not to pass off a chatbot's work as our own is that it's dishonest. Until we start admitting AI chatbots as students, the right thing is for each of us to do our own work even if we might not get caught having someone or something else do it for us. And if we didn't contribute to that community cleanup effort, we shouldn't claim we did — though there's likely still time to pick up after ourselves.
Jeffrey L. Seglin, author of "The Simple Art of Business Etiquette: How to Rise to the Top by Playing Nice," is a senior lecturer in public policy and director of the communications program at Harvard's Kennedy School. He is also the administrator of www.jeffreyseglin.com, a blog focused on ethical issues.
Do you have ethical questions that you need to have answered? Send them to firstname.lastname@example.org.
Follow him on Twitter @jseglin.