Can I Use ChatGPT for the Tedious Parts of My Job? – The New York Times

The magazine’s Ethicist columnist on using artificial intelligence to assist with mundane tasks at work.

I’m a writer and a college professor at a small college, and I recently became chair of the English department. I usually love to write, but when it comes to administrative documents, I struggle — and this new role asks for a lot of them.
It occurred to me that ChatGPT might prove useful for the reports, proposals, assessments and the like that take up the precious time I could be devoting to students and my own scholarship. Is it OK to use ChatGPT to generate drafts of documents like these, which don’t make a claim to creative “genius”? Do I need to cite ChatGPT in these documents if I use it? — Name Withheld
From the Ethicist:
Many administrative documents, though they may have signatories, aren’t thought of as original compositions. Certainly the documents you’re thinking about — annual reports, budget submissions and the like — tend to be pretty templatized, and my sense is that people in your position often start with an earlier version from the department’s files and adjust it with new information. In fact, some administrations provide chairs with such templates.
What you would be doing with ChatGPT isn’t so different. For an annual report, you might prompt it with specifics about job searches, departmental priorities, student concerns, revisions of the curriculum and so forth. The system — informed by a large number of models in the digital archives it trained on — would, with luck, incorporate your input into a tidily organized, appropriately formatted memo. You would coach it with further prompts and then have a draft that you could edit into shape. I see no reason that you shouldn’t start this way, provided you do the proper revising and are confident that the final document says what you want it to say. Big departments at big universities may employ half a dozen or more full-time administrators. Sometimes a departmental administrator is a dab hand at drafting documents of this sort for the chair to review and revise, and doing the same with ChatGPT is fine as long as you exercise proper vigilance and can stand by what you submit. (We all know that ChatGPT can “hallucinate” facts.)
I don’t think you are obliged to cite ChatGPT any more than you are obliged to say you started with last year’s annual report as a model. Academic writing is different; there are many reasons to acknowledge sources in work that’s meant to be original. By contrast, your reports, as you note, are not being evaluated for their scholarly or creative contributions. But if you do find yourself making good use of ChatGPT (or another such tool), it could be helpful to discuss it with the deans of your college; they might want to suggest the idea to others. Rational administrators, especially in the academy, should prize the careful allocation of intellectual creativity.
I have developed a sophisticated GPT-4 prompt that has the potential to revolutionize a specific aspect of a technical workflow — one that is specific to a particular type of white-collar, knowledge-worker job. My prompt will make these tasks more efficient, reducing the time required from weeks to mere minutes.
A friend of mine, however, has a job that involves precisely this kind of work. Sharing and capitalizing on my creation would very likely be financially rewarding for me, but I fear it could jeopardize my friend’s career and leave this person struggling to find new professional opportunities.
I am torn between the prospect of lucrative gains and the impact it may have on someone I care about. How should I proceed in this situation? — Matt S.
From the Ethicist:
First, is it really likely that nobody else will come up with a similarly effective prompt before long? When I consulted an expert on large language models (though not at OpenAI, which operates GPT-4), I was told that, depending on how niche the field is, it may have already happened or it may happen in the coming months. In any event, it surely won’t be long. What you’ve actually discovered, it seems, is that the current configuration of your friend’s job has no future, and your staying mum won’t change that fact. Maybe your friend will be made redundant. Maybe (via techniques like “few-shot prompting”) your friend’s expertise will lead to a new kind of work that’s conducted in collaboration with A.I. tools. As this person’s friend, you should discuss these realities with him.
Issues like this arise with every major new technology. Should James Hargreaves, whose spinning jenny revolutionized textile production in England in the 18th century, have junked his designs because he had friends who were weavers? Should Henry Ford have stopped developing the mass-market car because he had friends in the stabling business? As these examples should remind us, emerging technologies have created jobs as well as destroyed them. (A recent survey of chief executives found that 43 percent had reduced or redeployed their work force owing to generative A.I., while 46 percent had added workers.) In the past, smart people, like Bertrand Russell and John Maynard Keynes, thought that technology would create mass idleness, and they turned out to be wrong. John Henry is now in sales, busily leasing rock-drilling machines.
Maybe — who knows? — this time will be different. Either way, the big costs of new technologies have been borne by people who have to switch to new jobs and gain new skills, especially when the switch lowers their income. Careers are typically structured by the expectation of a continuously rising income. Social policy needs to take seriously the problems that arise when that pattern doesn’t hold up. Even if generative A.I. does create more employment and more wealth than it destroys, there will be human costs, and they will fall unevenly. The solution isn’t to stop innovating. One crucial innovation we need, though, is at the political and policy level — figuring out how to make sure that everyone has the resources for a decent life. If you develop a prompt for that, let me know.
The previous column’s question was from a reader whose sister asked her to sell three extra concert tickets to the Taylor Swift Eras Tour that she purchased for $130 each. She wrote: “[My sister] asked me to post them in a large Manhattan-moms Facebook group I am a member of for $2,400 each or best offer. I acknowledged that this was a bonkers price for these seats, but cheaper than the current market. We thought this was a win-win as people could go to the concert for less than they would pay through a ticket site, and we wouldn’t have to pay seller’s fees. … Do I owe strangers ‘reasonable’ resale values?”
In his response, the Ethicist noted: “The only reason you can resell a ticket at many times the original price is that the artist had decided to offer them for less than she could get, and so make them affordable to fans of more modest means. When people, or their army of bots, buy those lower-priced tickets in order to resell them, they’re abusing that restraint. … That’s the case against Swift scalpers. But your sister didn’t buy these tickets to resell them. She just wound up with three she can’t use and had to distribute them somehow. … Besides, if the Manhattan moms didn’t like the price, all they had to do was not pay it. That’s how markets work. The trouble is that some members of this Facebook group, it seems, don’t think it’s an arena where market values should operate. As a member of the group, you should consider whether they’re entitled to this feeling — or whether they should shake it off.” (Reread the full question and answer here.)

The Ethicist’s answer was well put. The only thing this ticket seller did wrong was to offer any discount whatsoever. There is nothing wrong with exchanging goods at their current market value. In fact, there might be even more ethically at risk by offering special deals to certain types of people (Manhattan moms on Facebook) and not others. — Alex

This fortunate family was able to purchase tickets when countless others were not, solely because of luck. They now want to profit from this fortune, which our capitalist economy allows. That is their right, but the Manhattan-moms Facebook group, a place where we consider ourselves neighbors, if not friends, is not the place to do it. — Eileen

In my opinion, considering market price for a high-demand item for tweens requires a buffer, or mitigating wisdom. This price is way beyond the real value for a concert ticket. You join the soulless in this scenario. Congratulations, everyone does it and you can make some money on the backs of mothers. Yikes. — Kim

Basically, I agree with the Ethicist’s response, but I have one further thought: If one charged a high price for the concert tickets to prevent future resale by whoever ends up buying them, why not donate this additional profit, or at least a good fraction of it, to a charity? — James

If these Taylor Swift tickets were meant to be affordable to a wider range of fans, then, ethically, it seems they should be used by those who could only afford the moderately priced tickets. To avoid another hiked-price resale, the owner of the tickets could offer them at the original price to the friends of her megafan daughter. It’s a win-win, I believe. — Mary Ellen
Kwame Anthony Appiah is The New York Times Magazine’s Ethicist columnist and teaches philosophy at N.Y.U. His books include “Cosmopolitanism,” “The Honor Code” and “The Lies That Bind: Rethinking Identity.” To submit a query: Send an email to