Sunday, January 15, 2023

The output you submit should be your own

There has been quite a bit of buzz about ChatGPT, an artificial intelligence chatbot launched in November 2022 by OpenAI LP, a for-profit offshoot of the not-for-profit OpenAI Inc. After a user types in a prompt, the chatbot spits out a readable essay, memo, email, piece of code, poem or other piece of writing the user asks for.

Often the results are remarkably readable and coherent, though not flawless. One former student, for example, sent me the results of their request to ChatGPT to “write an op-ed about Professor Jeffrey Seglin.” ChatGPT spit out a coherent six-paragraph column that broadly captured some things about me, but it also got the titles of two of the books I have written wrong.

There were some accurate details in the essay: my name, what I write about and where I work. What ChatGPT got wrong: what I teach at the place where it had me working. As a result, it misrepresented how influential I had been in certain fields of study without offering any research or detail to support its claims.

Given the factual errors in it and the lack of evidence and support for claims, it would have received a poor grade had it been turned in as an assignment. But if I hadn’t been told by the former student, I’m not sure I would have known for certain that the op-ed column had been generated by an AI chatbot.

Admissions application essays are typically short, broadly framed responses to a prompt given to all applicants to a college or university. It is harder to verify the facts applicants write about themselves than it is to verify the title or author of a book or what someone teaches at a particular university. Can the reader of an application, for example, really verify how involved an applicant was in their community cleanup campaign?

Nevertheless, asking ChatGPT to respond to an application essay prompt is simple, and the results get spit out in seconds. It might seem a tempting shortcut. So why not do it?

Because just as hiring someone to write an application essay is dishonest and doesn’t reflect the work of the applicant, so too is farming the work out to an AI chatbot. Although someone somewhere might use an AI chatbot to complete their homework without getting caught, that student will not learn how to think through and do the work themselves.

There might always be people who try to cheat. There might also be those who simply want to get through a course without having to do all of the thinking and work themselves. It should be made clear to applicants or students why trying to pass off an AI chatbot’s output as their own doesn’t result in them learning what they are presumably there to learn.

Although AI chatbot detectors are likely to be developed, just as plagiarism detectors were, the main reason not to pass off a chatbot’s work as our own is that it’s dishonest. Until we start admitting AI chatbots as students, the right thing is for each of us to do our own work, even if we might not get caught having someone or something else do it for us. And if we didn’t contribute to that community cleanup effort, we shouldn’t claim we did — though there’s likely still time to pick up after ourselves.

Jeffrey L. Seglin, author of "The Simple Art of Business Etiquette: How to Rise to the Top by Playing Nice," is a senior lecturer in public policy and director of the communications program at Harvard's Kennedy School. He is also the administrator of www.jeffreyseglin.com, a blog focused on ethical issues. 

Do you have ethical questions that you need to have answered? Send them to jeffreyseglin@gmail.com

Follow him on Twitter @jseglin

(c) 2023 JEFFREY L. SEGLIN. Distributed by TRIBUNE CONTENT AGENCY, LLC.

1 comment:

Penney said...

I know that in my community of fellow photographers and artists, the AI options are getting a lot of buzz. People are using not just the chat versions but also the art versions. There are those who say it is just another tool to use, and others who are against any kind of digital art. I can see both sides. I have been using Photoshop as a tool for my art for years, adjusting my work to my taste. Yes, there are filters that can change an image in the blink of an eye. But knowing how to use those filters correctly is part of being a good artist. Those same filters can turn out crap too.

But when it comes to AI, there is a bit of a difference. I'm not thrilled that the developers have combed the web to grab any and all images off any website they like and feed them into their systems so the AI can learn. To me, doing that without the permission of the creators is wrong. But many people are arguing with me about this. They liken it to me looking at other photographs or pieces of artwork and being influenced by them. I don't know if there really is a difference.

Then there are those who say you must identify any work created by AI to be fair. Is it really your art if you just feed a few prompts into the program and it shoots out a fully created piece of art? I mean, you can play with the prompts all day until you get exactly what you want. But is it the same as using a camera or a paintbrush and creating something? Is it really your art? Or is it like you creating a concept but hiring another artist to create the actual piece?

I am also worried about all the ways people can use these programs to cheat. The possibilities are endless, such as your example of answering questions on a college application. How will we be able to police the abuses?

It is a complex issue that will require a lot of thought and discussion to sort out what is ethical and what is not.

This post was created by me without the use of any AI programs!

Penney