Sunday, June 22, 2025

Are you responsible for checking your AI work?

If you use artificial intelligence (AI) to assist you in your professional work, how much responsibility do you have to check to make sure that whatever AI provides is accurate?

I am not an AI basher. There are moments when AI can be useful. In my line of work, some find it useful for correcting grammar and usage, generating broad ideas, translating a foreign phrase, or any number of other tasks. But by itself, AI – whether it is ChatGPT, X’s Grok or any number of other generative AI tools – cannot yet be trusted to replace an actual human being without that human being double-checking all the work.

AI-generated documents are notorious for “hallucinating” or making stuff up. AI output is often rife with errors that might include made-up facts or references to sources that simply don’t exist. Sometimes it even includes links to those nonexistent sources, which go to nonexistent pages.

As an exercise, I asked ChatGPT to write my biography. Because I’ve been writing this column for the past 27 years, have written for other publications, made some television appearances, written books, appear in the directories or on the websites of the schools where I have taught, and have a short Wikipedia profile, there is plenty of information about me available on the internet. Most of what I’ve seen has correct information. There is plenty of data from which ChatGPT can draw.

Most of what ChatGPT came up with was correct, but some was not. Only someone who knows everything about me -- my work and my family -- would be able to discern what’s correct.

It listed my wife’s name as Lynne, which was a surprise to both me and my wife, Nancy. It mentioned my two daughters-in-law, Megan and Monica, who to the best of my knowledge don’t exist. While I do have four grandchildren, their names are not, as ChatGPT insists, David, Rose, Jonah and Mae. My first great-grandchild is not Eleanor Mae, although that is a lovely name. I also did not live with Lynne, whoever she may be, in New York City for a couple of years in the early 1980s. ChatGPT also provided me with a nephew named Joshua, also nonexistent.

ChatGPT also pegged me as the former managing editor of the Harvard Business Review, a publication I admire but have never written for, let alone edited. It listed many of the books I’ve written correctly but included others I didn’t write, such as "Writing to Be Understood," a fact that would come as a surprise to Anne Janzer, who actually wrote that book. It also had me listed as the inaugural fellow in residence at the Center for the Study of Ethics at Utah Valley University. I’ve visited Utah, but never Orem, where that university is located, and never held that fellowship, if it exists (although if Utah Valley makes an offer, I’ll consider it). ChatGPT also thinks I’m a Fulbright Specialist. I’m not.

The right thing when using AI is to remember that it is a tool that is not foolproof without a human check. Sometimes it gets things right. Often it doesn’t. Assuming that you can use it to do your work for you – whether you’re a teacher, student, researcher, bureaucrat, job seeker or anybody else – is wrong if you expect your work to accurately reflect you and your abilities. I’m confident even Lynne would agree, if she existed.

Jeffrey L. Seglin, author of The Simple Art of Business Etiquette: How to Rise to the Top by Playing Nice, is a senior lecturer in public policy and director of the communications program at Harvard's Kennedy School. He is also the administrator of www.jeffreyseglin.com, a blog focused on ethical issues.

Do you have ethical questions that you need to have answered? Send them to jeffreyseglin@gmail.com.

Follow him on Twitter @jseglin.

(c) 2025 JEFFREY L. SEGLIN. Distributed by TRIBUNE CONTENT AGENCY, LLC.
