Incredibly smart or incredibly stupid? What we learned from using ChatGPT for a year – The Guardian

As the tool becomes less of a curiosity and more a part of daily life, fans are finding clever uses – and discovering limitations
Next month ChatGPT will celebrate its first birthday – marking a year in which the chatbot, for many, turned AI from a futuristic concept into a daily reality.
Its universal accessibility has led to a host of concerns, from job losses to disinformation to plagiarism. Over the same period, tens of millions of users have been investigating what the platform can do to make their lives just a little bit easier.
Upon its release, users quickly embraced ChatGPT’s potential for silliness, asking it to play 20 questions or write its own songs. As its first anniversary approaches, people are using it for a huge range of tasks. We’ve all heard about uses like crafting emails, writing student essays and penning cover letters. But with the right prompts, it can take on jobs that are more esoteric but equally useful in everyday life. Here are a few that might come in handy.
You’re at a work meeting, and the accountants are talking about GAAP operating income for Q4 of FY22, the design people are panicked about kerning, and the CEO wants you to circle back to drill down on some pain points. On top of that, your British boss says your work is “quite good” but strangely doesn’t seem happy with it, while your US colleague claims everything anyone has ever done is amazing. Users say they’ve turned to ChatGPT for help as an intermediary, employing it to translate workplace jargon so everyone’s on the same page about the concerns you flagged, tnx.
This isn’t limited to the office: people have used ChatGPT to, for instance, translate a sleep study’s medical terminology, or help craft a legal opinion. It can serve as an intergenerational go-between: users have turned it into a gen Z slang translator (sample sentence from a description of a key historical event: “Titanic, flexing as the unsinkable chonk, sets sail with mad swag, a boatload of peeps, and the vibes of a 1912 rave”).
Sometimes you want a real critique of your work, a harsh assessment that your friends and family are too nice to provide. For some, ChatGPT is that critic (though whether the word “real” applies here is debatable). “I use ChatGPT to brutally audit where my copy is falling short of the target audience’s expectations,” a copywriter wrote on Reddit. Some have even found it can give decent (if imperfect) criticism of fiction writing, pointing out redundancies, missing characterization or weak imagery.
There are, of course, ethical questions about the use of ChatGPT in work and school settings. In response, some argue that asking it to be your critic, and learning from its feedback, is a way to improve your writing without letting it put words in your mouth.
It’s not always an easy task: what it gives you depends entirely on how you structure the prompt. Some users find it tough to find the language to “convince” it to be harsh enough. And you’ll get more appropriate feedback if you give it a detailed task – “give me feedback” might not help as much as “I’m writing an essay for college – tell me whether it’s well-structured and clear”.
Maybe you don’t want ChatGPT to be mean – maybe you want the opposite. Users have asked ChatGPT for help being nicer in their work emails, especially when they’re secretly fuming. “I write to it: please make me sound like less of an asshole,” said one user.
It’s dinnertime and there’s stuff in the kitchen – but you have no idea what to do with a half-eaten yogurt, a leftover chicken leg, a bag of flour and some forgotten tomatoes on the verge of becoming truly upsetting. Users report that ChatGPT has helped them create impressive meals out of what they have, or come up with ideas based on what’s around and a specified grocery budget. Many users report being pleased with the results, though some recipes sound perhaps too creative: garbanzo bean and cheddar cheese soup, a peanut butter and Nutella quesadilla, and a “carrot and grape salad with muesli crunch” (based on what’s in my own kitchen).
Last month, OpenAI, the tool’s developer, added an image-recognition feature that makes this task even easier – instead of having to list ingredients, users can take photos of the food in their cabinets and ChatGPT will come up with recipes.
Results have been mixed. Beyond the fact that the bot has no taste buds, some users have expressed safety concerns, saying ChatGPT may, for example, convince inexperienced chefs to undercook meat.
Following the update allowing ChatGPT to “see”, users have found its interpretation skills to be alarmingly impressive. In a clip making the rounds, an AI developer, Mckay Wrigley, shows it a hand-drawn flowchart on a whiteboard, which it’s able to turn into code that Wrigley runs – and it works. The platform can even tell that the green arrows indicate the steps should be reordered. So you can stop beating yourself up for never having learned to code.
ChatGPT can act as your personal SparkNotes, condensing large quantities of information into much smaller ones – whether that information is in the form of articles, meeting notes or book chapters. Combined with the right browser plugin, it can even summarize entire YouTube videos so you don’t have to listen to an insufferable TED talker.
Some users have found it goes overboard with summaries, even making them longer than the original text. Others say clever prompts, such as “be my secretary and act as though you were taking the minutes of a meeting”, seem to help.
It’s important to remember that while ChatGPT can seem incredibly smart, it is also incredibly stupid, as this index of some of its many failures proves. It has struggled to count the number of N’s in “banana”, failed to correctly answer its own riddle and agreed that 1+0.9 makes 1.8. Far more dangerously, it makes up “facts” – such as a sexual harassment scandal that didn’t happen, starring a real professor.
You’re a human, it’s a bot – take it all with a big grain of salt. Or vinegar, which it recommends as a substitute.