Can AI shopping tools like ChatGPT pick meaningful Christmas gifts?


Photo by Catherine Ivill; AMA/Getty Images
Gift-giving, at its very best, is a tiny act of surveillance that feels like a whole lot of love. So every December, I become Santa Shannon: part curator, part private investigator, part small-business logistics manager with a Pinterest addiction and too many open tabs. I have a running Notes app list of gift ideas, the way normal people have groceries and to-dos. Since way back in March, I’ve been collecting offhand comments, fleeting obsessions, and “This would look perfect in my home” texts as if they were scouting reports — because nothing hits like watching someone unwrap proof you were paying attention.
Being Santa Shannon is very fun. Being Santa Shannon is also very tiring.
I’ve written a lot about AI shopping tools this year, and I remain a skeptic — both on aesthetic grounds and because I don’t love the idea of strip-mining the Amazon rainforest to buy my mom another tote bag. But for the sake of a good story, I decided to try the tech everyone insists is the future. What did I find? An algorithm deeply convinced that my dad wants socks, and one that finds my mom just as hard to shop for as I do — only with more confidence and worse judgment.
I told ChatGPT my strategy: Thoughtful. Preemptive. Unique. No Target or Amazon unless absolutely necessary. I told it about my sister, Clare, an immigration attorney in Washington, who rock climbs, bakes, reads constantly, steals from my closet — like any good little sister — loves her Chelsea boots, and hikes a bunch with her one-year-old dog, Red. I told it about my mom, a former PR pro with a full kitchen and even fuller taste, who loves French-inspired cooking and gardening, and who misses Lake Tahoe daily — she and my dad recently moved to Delaware, and she lost a lot of things in the process. I told it about my dad, an author and former business journalist who just reorganized his bookshelves to make them the star of his house, loves Civil War history and Russian literature, does the New York Times crossword as soon as it’s posted, and is woefully underprepared for a humid East Coast summer.
I limited my showers all week to five minutes to atone for the environmental cost.
ChatGPT “listened” to my lovingly detailed portrait of my family and did what it does best: It summarized them into a set of neat, purchasable archetypes. My sister became a “tasteful, athletic, high-functioning adult with opinions.” I read that out loud to her. “High-functioning?” she said. “Whoa. I’ll take it.” My mom emerged as “high taste, low tolerance for stuff, emotionally anchored to place.” My dad landed as “brilliant, opinionated, under-invested in clothes, ready for upgrades.” And then the model started recommending gifts for the version of my family that exists in the space between “stereotypical gift guide” and “found in an airport bookstore.”
For Clare, ChatGPT proposed the kind of cookie-cutter sister gift ideas that, as it so happened, included cookie cutters (for office bake-offs). ChatGPT suggested a “genuinely good claw clip or hair accessory,” and got specific about materials, because nothing says intimacy like acetate or tortoiseshell. But Clare bakes brownies and cakes, rarely cookies — and she never uses one of the 100 cookie cutters I keep in our shared kitchen. I’ve never once seen Clare use a claw clip; she usually has at least one scrunchie on her wrist. ChatGPT’s gift ideas relied on a phantom version of my sister — a certain kind of woman: tidy, neutral, unfussy — and then built from there.
The thing about AI “personalization” is that half the time it’s not personal. ChatGPT’s ideas were fine. They were plausible. They were what you’d buy if you were shopping for a person-shaped outline. But I’m not shopping for an outline. I’m shopping for my (occasionally annoying and always beloved) little sister. AI doesn’t know that my mom is especially picky, and it can’t help me when she is. AI can’t offer a suggestion for my dad that carries the history embedded in our daily texts. It boils my family — these complicated, gorgeous, wonderful people — down into tropes that flatten them.
I expected ChatGPT to struggle with ideas for my mom, and it did. She’s the hardest person I know to shop for, which is partly a compliment and partly a warning label. She has impeccable taste. She doesn’t like random clutter. She’s been collecting beautiful things for more than 60 years, which means ChatGPT’s “She’ll love this” is a high-stakes claim. She doesn’t need new objects nearly as much as she needs the right objects, and the right objects tend to be specific to her in a way that’s hard to explain to someone who hasn’t watched her look at a perfectly nice thing and say, “Not my favorite.”
The model heard “French-inspired cooking and gardening” and started tossing Provence-coded items at the problem. A French utensil holder. French gardening tools. A French lavender bath soak. A classic French-style picnic basket. Each suggestion had that cheerful “this supports her life rhythm” logic that made me want to argue with the premise. I could feel myself getting more annoyed as ChatGPT and I chatted. There’s a specific kind of irritation when someone (or something) keeps insisting they understand your mother, and you know they don’t, and now you have to decide whether to educate them or let it go. I tried to educate ChatGPT. I found myself typing little corrective facts, like I was doing customer service for my own family. My mom doesn’t really take baths. My mom already has gardening shears. My mom doesn’t want a tacky Lake Tahoe souvenir disguised as décor. My mom wants something that feels like I know the exact difference between her taste and the internet’s taste.
And then it suggested: “A premium smart skipping rope (counter + metrics).”
I actually laughed, because what else do you do when your AI chatbot casually recommends high-impact cardio for a woman over 60 who has had two knee surgeries and whose athletic abilities aren’t what they once were? I told my dad. He responded immediately:
“Is ChatGPT trying to kill your mother?”
Well, it certainly wasn’t trying to keep her alive.
There are a lot of ways to be wrong in gift-giving. There’s wrong because you guessed. There’s wrong because you didn’t listen. There’s wrong because you bought something you wanted them to have. The skipping rope wasn’t just a miss. It was a bright, chirpy example of what happens when a system doesn’t know the difference between “fitness” as a category and “fitness” as a complicated, emotional, physical reality. Great! Merry Christmas! Please don’t land wrong!
My dad’s section was a different flavor of wrong. The model heard “author” and started handing me props, as if writers are sustained by stationery. A pencil set. A notebook. A desk accessory. I could practically see the invisible gift guide headline: For the brilliant writer dad who has a lot of big thoughts. Then came socks. I told ChatGPT that I didn’t really want to give my dad socks this year. The model responded by doubling down with a more earnest version. Not socks, it insisted. Performance socks. Athletic socks. This works. This is care. My dad can buy his own athletic socks; my relationship with him isn’t moisture-wicking. I want a present that says something closer to: “Thanks for helping me make sense of my word vomit when I text you frantically on deadline.”
That’s when I realized what I kept bumping into: The model wasn’t trying to know my family. It was trying to finish the task. Every time I rejected something, it didn’t get curious. It got efficient. It pushed another plausible object across the table and explained why it was a good idea. 
The funny part of this experiment was that the gifts that actually mattered had never been up for debate. By the time I was feeding ChatGPT this rich dossier and watching it spit back a list of plausible objects with the confidence of a horoscope, I wasn’t actually shopping for anything.
Clare’s gift was already sitting in my cart before I opened the chatbot: a jewelry box. She loves to buy rings when we travel together, and they sit scattered across her room like a tiny, sparkly minefield. My sister and I had already picked out the big things for my parents — a first-edition set of Ulysses S. Grant’s memoirs for my dad and replacements for the things my mom lost in her move: a Jimmy Choo purse I scoured The RealReal for and a gorgeous KitchenAid stand mixer that’s something close to burgundy — how very French of me after all. (I made them all promise not to read this story until after Christmas, so this will be our little secret, OK?)
ChatGPT did eventually wander toward those lanes. After I steered the model in the right direction, it mentioned getting my dad a beautiful edition of a beloved author, something tactile and display-worthy, and it talked about honoring the “books as identity moment” my dad was in, whatever that means. ChatGPT gestured toward replacing what my mom lost with upgraded versions and praised the concept of one perfect thing over many — but it felt like ChatGPT was narrating back the conclusions I’d already reached. Once I told it to think about bookshelves, it sounded insightful about bookshelves. Once I told it to think about move-loss replacements, it sounded insightful about replacing lost things. I kept having the same suspicion, over and over, that I’d led the horse to the trough, then watched it take a long, confident drink and tell me it had discovered the water.
That’s the part that made the experiment clarifying instead of just annoying. And my issue isn’t just with ChatGPT. I tried Google’s Gemini and Anthropic’s Claude, too. AI can be useful as a search tool, and Gemini was the best at that kind of usefulness when I tested it. Claude’s instinct was to ask a dozen follow-ups, which felt closer to how a good salesperson would behave. Gemini’s tone was smoother and more helpful in a gift-guide way. But none of the models did the thing I actually needed, which was to replicate my unfair advantage: knowing them in real life.
Clare is a rock-climbing lawyer, sure. She’s also a former cross-country champ, a history major at Yale, and the person who loves to share a bottle of wine with me on the couch, halfway listening to whatever sport we’ve put on TV. My mom has the highest level of taste of anyone I know — unless our tastes clash, in which case I’m right. My dad is a big-hearted genius and has read more Russian literature than is probably healthy. None of them fit cleanly into a summary, and every time the model tried to place them into one, I could feel it sanding off the best parts. ChatGPT kept trying to get me to buy objects that would make sense for the people in its summary, not the people in my life.
Santa Shannon doesn’t want to outsource her job. She wants fewer tabs, sure. She wants a little less cognitive load. But she doesn’t want a machine to flatten her family into a list of plausible objects and call it thoughtfulness. The personalizing is the fun part for me, and the perfect find is the payoff. If I’m going to spend my December feeling mildly insane, I want it to be on purpose. It’s my family. I know too much about them — and I still want to prove it, one December at a time.
