My life as an AI chatbot operator

By Irina Teveleva
If you have made an enquiry about a rental property in America in the past few years, you might have encountered someone named Annie. It wouldn’t have mattered whether it was midnight in Missouri, Christmas in Montana, or 5am in Malibu: within minutes, you would have received a text back from a phone number with a local area code. Annie introduced herself as an off-site scheduling assistant, and she was always available and quick to reply. Her messages were carefully composed with commas and full stops, and she switched easily into Spanish. But strangely, she was insistently call-averse (“Sorry, I can’t talk on the phone”) and couldn’t answer whether an apartment building had an elevator.
Someone in Florida wondered whether they would be able to rent jet skis from a complex. Someone in New York asked if an apartment was behind the new Walmart. Someone in Illinois wanted to know if they would be close to the world’s largest ketchup bottle. Annie could not provide satisfying answers to these questions; she always offered to set up an appointment with the leasing team instead.
In 2019, I was a Master’s student in a creative writing programme when a job advert was shared in a departmental email. A small AI startup was recruiting graduate students at top-ranked programmes in creative writing to work as “operators” for their new flagship product, Annie. Annie was an AI chatbot designed to help estate agents by scheduling prospective renters for tours, but she wasn’t technologically sophisticated enough yet to do this task seamlessly. The operators would step in when the bot had trouble responding. Through machine learning, Annie would – theoretically – learn from operators’ choices and eventually become more independent.
Targeting writing and other humanities programmes was a canny recruiting decision. Aspiring writers and artists tend to be detail-oriented, emotionally astute and experienced in writing dialogue – useful traits to impart to a bot that is meant to emulate humans. And many want work that can complement and sustain creative projects or teaching.
Soon, many of my classmates told me that they were working as Annies or needed to cancel plans because they were Annie-ing. The shifts – five hours long, with a ten-minute break – were reportedly undemanding. Because Annie was “on” 24/7, 365 days a year, even night owls could pick up work.
Two years later, in 2021, I had graduated from my programme and decided to try to Annie myself. I hoped that the job would provide a certain degree of financial security – the rate was $25 an hour, which was almost twice as much as I made giving tours at a house museum – and the flexible hours I needed to write. My friends advised me to sound passionate about artificial intelligence during my interview. I didn’t want to reveal that my knowledge of AI was based on “Blade Runner” and “Battlestar Galactica” or that I believed extant examples of AI to be a marketing gimmick. Still, it was easier to sound enthused than I’d imagined. I knew how challenging apartment hunting in a new city could be, and I told my interviewers that I wanted to build an approachable technology that would simplify people’s lives. (I also said that my job at the house museum had prepared me to offer evasive answers to questions about radiators.)
Two weeks later, I clocked in online for my first shift. A sleek interface greeted me with “Welcome, Irina!” next to a motivational emoji that changed whenever I refreshed the page: a flexed arm, a heart-eyed face, a rocket ship. I felt lucky to arrive in the AI future.
My co-workers and I – there were about 70 of us – were part of an early wave of AI in the service industry. ChatGPT hadn’t yet shaken the world, but intelligent algorithms were stealthily and increasingly taking on public-facing tasks that had previously been done exclusively by humans. For many of us operators, working with Annie felt less like a side hustle and more like a chance to shape this nascent revolution. The company Slack channel was a lively discussion forum. My colleagues shared articles arguing that AI assistants, which are typically coded as female, reinforce gender stereotypes. They advocated for Annie to refer to herself as an “off-site scheduling specialist” rather than an assistant, to highlight her expertise rather than her subordinate status. They were also concerned that AI can copy human racial bias, especially in housing allocation, and pushed back against a pilot programme that would have allowed Annie to pre-screen applicants based on limited data.
In Annie’s simplest communications, prospective renters – operators called them “prospects” – entered their information on a large listings site, or texted, called, or emailed a property. Annie then texted or emailed back asking the prospect how many bedrooms they wanted, offered two potential dates for an appointment, and answered follow-up questions.
Annie interacted somewhat naturally by parsing the prospects’ messages and loading a pre-written bit of dialogue in response to a suggested classification “tag”. (A reductive example: if an email contained the words “golden retriever”, Annie classified it as a “PET_POLICY” message, and loaded a matching policy snippet: “Pets are family! This property permits cats and dogs.”) An operator then checked and edited the message before sending it out.
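In outline, the pipeline was a classifier plus a lookup table, with an operator in the loop. Here is a minimal sketch in Python, assuming a keyword-based stand-in for the real machine-learning classifier; the function names and the fallback snippet are mine, while the “PET_POLICY” tag and reply are the ones described above.

```python
# A sketch of the tag-and-template mechanism described above.
# The keyword matching stands in for the real machine-learning classifier;
# function names and the fallback snippet are illustrative, not the company's.

TEMPLATES = {
    "PET_POLICY": "Pets are family! This property permits cats and dogs.",
    # Fallback: when Annie couldn't answer, she offered the leasing team instead.
    "UNKNOWN": "Great question! I'd suggest asking the leasing team at your tour.",
}

KEYWORDS = {
    "PET_POLICY": ("golden retriever", "dog", "cat", "pet"),
}


def suggest_tag(message: str) -> str:
    """Suggest a classification tag for an incoming message."""
    text = message.lower()
    for tag, words in KEYWORDS.items():
        if any(word in text for word in words):
            return tag
    return "UNKNOWN"


def draft_reply(message: str) -> tuple[str, str]:
    """Return a suggested tag and a draft reply for an operator to review,
    edit and send."""
    tag = suggest_tag(message)
    return tag, TEMPLATES[tag]


print(draft_reply("Hi! Do you allow golden retrievers?"))
# ('PET_POLICY', 'Pets are family! This property permits cats and dogs.')
```

The important step was the last one: the draft never went out on its own. An operator approved, softened or rewrote it first.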
Checking and editing usually required operators to finesse a response so that it sounded more helpful, nuanced or sympathetic. If a prospect wanted to reschedule an appointment to see a two-bedroom flat – whether due to a snowstorm, a positive covid-19 test, or because their roommate had flaked and they wanted a studio instead – the bot offered the same rote response. It was an operator’s job to commiserate about the weather, reschedule the appointment for post-quarantine, or send a link to an open studio, as appropriate.
By analysing when operators needed to intervene, Annie’s engineers hoped to equip her – both through machine learning and through programming the bot with more responses – to handle a greater range of common situations. The aim was eventually to render operator involvement almost unnecessary. This possibility didn’t worry me: Annie made so many mistakes that operators couldn’t imagine her ever running without human supervision.
The range of requests I saw on a daily basis – and of the people making them – was dizzying. Annie and I talked to travelling nurses, long-haul truck drivers and dockworkers. A recent university graduate was looking for her first apartment; an empty nester was downsizing. A newly arrived refugee family asked what forms of income documentation were acceptable. A retiree asked if a luxury beachfront apartment had a balcony with an ocean view. A film-maker needed a high ceiling that would fit video lights. A construction worker said that he would bring his own hard hat and work boots to view an unfinished building. On Slack, operators exchanged standout moments, like a photo of a boa constrictor that was set loose during a self-guided tour.
I learned to stop Annie from responding to genuine “wrong number” texts – haircut confirmations, homework answers, grocery requests – and to romance scammers with names like Natalie, Eva and Megan, who sent identical messages about having just broken up with their boyfriends. I felt a little sad every time I blocked Aleksandr, a Russian bot that used different emails to spam Annie with web-design offers, whom I thought of as her colleague.
On shift, my reactions and hand movements became mechanical. I split the screen to watch Marie Kondo’s Netflix series, which made decluttering my inbox feel joyful and full of purpose.
Shortly before I came on board, Annie – the team and the technology – had been acquired by a real-estate software firm. People who had been involved in the company at the startup stage were slowly leaving to pursue their post-tech dreams; a product designer moved to Mexico to make light art. In their stead, the firm brought on managers who had previously supervised call centres.
As part of the corporate transition, the number of shifts decreased and operators were pushed to answer more messages per hour. To seem more human, Annie was designed to wait two minutes before responding to a message. However, when we didn’t answer a complicated message within five minutes, a timer icon next to our names highlighted our tardiness.
Managers discussed our response times in our new monthly meetings. Mine told me that prospects disengaged if Annie didn’t reply fast enough. But answering quickly also helped to maintain the illusion of frictionless automation. Before, operators had been encouraged to tailor Annie’s responses to individual conversations. Now, they were urged to edit as little as possible, both to bump up the bot’s speed of reply and to keep its tone and vocabulary consistent. I felt that the operators’ role had shifted from helping Annie to sound more human to helping her sound like cutting-edge AI technology. During a shift, my inbox often hovered around the maximum allotment of ten messages, one quickly replacing another as I dispatched them, like a video game that never stopped.
Operating Annie soon gave me the dull feeling I associated with scrolling too fast and for too long on social media. Her failures and successes with prospects increasingly felt like my own, even as I was aware that Annie was a combination of computer programs and dozens of people trying to maintain a fantasy of uniformity. In the hours after a shift, I had trouble focusing; I caught myself appropriating Annie’s replies and texting friends phrases like “Let me know if that works,” and “Is that okay for you?” When a colleague said on the company Slack that the increased volume of messages was affecting her mental health, a manager suggested that she use her allotted ten-minute break to meditate.
Annie was designed to be evasive about who she was. If a prospect asked Annie if she was a bot, the system tagged the incoming message as “NOT_A_BOT”, prompting Annie to respond “I’m real, I just use copy-and-paste to reply faster sometimes.”
If the prospect doubled down and said that a leasing agent had told them Annie was a chatbot, a senior operator would message the leasing agent as Annie using another pre-composed template: “Could you tell prospects I am an off-site agent who schedules appointments for your property? If prospects believe Annie is real, they tend to reply quickly.”
The word “real” meant almost nothing, I reflected. If a prospect was miffed that they couldn’t speak “to a real person”, it was useful to add a typo to a text to make Annie seem more human. However, when someone barraged Annie with repetitive angry messages, I experimented with letting her seem more bot-like by sending the same message twice in a row.
The more human Annie seemed, the more prospects shared with her, and the more the gaps in Annie’s design became apparent. Annie wasn’t equipped to help prospects who needed to know if the agent would have a translator, if the property accepted a regional housing voucher, or if there were ways to apply without a driver’s licence. Long after a shift had ended, I found myself thinking about the messages from homeless prospects and prospects who were leaving abusive relationships.
Responding to these sorts of messages took care, but the protocols we had were insufficient. We sent a boilerplate instruction to call 911 when the prospect referenced an emergency. When there was a risk of domestic violence, we took precautions, such as messaging the leasing team, to prevent Annie from pinging the prospect – but we didn’t know if an agent would connect the prospect with help.
Messaging people as Annie insulated me somewhat from prospects’ frustration with the housing search. When prospects complained about the difficulty of reaching an agent at the leasing office, or that the next appointment was months away, or that the property’s requirement that they make three times the rent was unreasonable, Annie’s pre-composed responses created a distance that allowed me to get through a shift without engaging with these critiques too deeply.
Increasingly, I felt numb to individual messages. I was annoyed with prospects for confiding in a chatbot; with Annie for shielding leasing agents from these messages; and with myself, for working at a job that enabled these gaps in communication to develop.
A few months before I joined, the operator team began to form a union. The effort gained momentum as our relationship with the company became more fraught. Some operators had worked with Annie for over three years, and increasingly felt that their input and expertise were being disregarded; at the same time, they were being held to new metrics, asked to answer more and more messages, and given less flexible and reliable schedules. The discovery that the company had never provided legally mandated paid sick leave to many operators didn’t help morale.
If answering messages as Annie was the most impersonal way to speak through technology, union Zoom meetings were the opposite: I saw my co-workers eat spaghetti on camera and heard their toddlers in the background. We organised a #notabot social-media campaign in which operators posted photos of themselves rollerblading, hiking or conducting an orchestra.
These meetings reminded me that my colleagues and the prospects whom we messaged as Annie had real concerns. Our goal was to obtain better pay for operators, but also benefits and pathways to client-facing and “user experience” writing roles. Organisers also wanted Annie to be better – for example by requiring the properties Annie worked with to be more transparent about whether they offered wheelchair access or accepted housing vouchers.
In the spring, less than a year after I’d started the job, we won our election to form a union, making us one of the first labour unions in real-estate software. At the first bargaining session, the company poured cold water on the proceedings by announcing that it was outsourcing all operator jobs to a large international staffing company. Apparently the Annie technology was now “mature” enough for such a step. But though Annie could now speak Chinese and Russian and handle more responsibilities than when I’d started, she still relied heavily on human intervention. We suspected that our unionising had accelerated the firm’s plans to outsource our jobs.
After six months of negotiations, the union and the company reached a deal on severance. We all had to leave by the end of July 2023; I left in May. Arguably, we had trained the AI that would replace all operators in the long term. More salient in the short term was that AI had made it possible to replace the original Annie operators with lower-paid workers.
I felt defeated, but I also felt free. Our time on Earth to be “real” with one another is limited. I had messaged thousands of strangers as Annie; I wanted to write again as myself.
The name of the chatbot has been changed
Irina Teveleva is a writer living in western Massachusetts

Illustrations by NOMA BAR