Personalized A.I. Agents Are Here. Is the World Ready for Them? – The New York Times
The Shift
The age of autonomous A.I. assistants could have huge implications.
You could think of the recent history of A.I. chatbots as having two distinct phases.
The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.
That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”
That phase, though still remote, came a little closer on Monday when OpenAI, the maker of ChatGPT, announced that users could now create their own, personalized chatbots.
I got an early taste of these chatbots, which the company calls GPTs — and which will be available to paying ChatGPT Plus users and enterprise customers. They differ from the regular ChatGPT in a few important ways.
First, they are programmed for specific tasks. (Examples that OpenAI created include “Creative Writing Coach” and “Mocktail Mixologist,” a bot that suggests nonalcoholic drink recipes.) Second, the bots can pull from private data, such as a company’s internal H.R. documents or a database of real estate listings, and incorporate that data into their responses. Third, if you let them, the bots can plug into other parts of your online life — your calendar, your to-do list, your Slack account — and take actions using your credentials.
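Those three differences can be sketched in miniature. The following is a hypothetical illustration only, not OpenAI's actual implementation: a bot object that carries fixed task instructions, answers from a small private document store, and may take actions on the user's behalf only when the user has granted them.

```python
# Hypothetical sketch of a task-specific bot: fixed instructions,
# a private document store it can search, and actions it may take
# with the user's permission. Names and structure are illustrative,
# not OpenAI's implementation.

from dataclasses import dataclass, field


@dataclass
class CustomBot:
    name: str
    instructions: str  # fixed task focus, set when the bot is created
    documents: dict[str, str] = field(default_factory=dict)  # private data
    allowed_actions: set[str] = field(default_factory=set)   # user-granted

    def retrieve(self, query: str) -> list[str]:
        """Naive keyword lookup over the bot's private documents."""
        q = query.lower()
        return [
            text
            for title, text in self.documents.items()
            if q in title.lower() or q in text.lower()
        ]

    def act(self, action: str, **kwargs) -> str:
        """Perform an action only if the user has explicitly allowed it."""
        if action not in self.allowed_actions:
            raise PermissionError(f"{self.name} may not perform {action!r}")
        return f"{action} executed with {kwargs}"


# Example: an H.R. bot that answers from internal policy documents
# and may schedule meetings, but nothing else.
bot = CustomBot(
    name="HR Helper",
    instructions="Answer questions using company H.R. policy only.",
    documents={"PTO policy": "Employees accrue 1.5 vacation days per month."},
    allowed_actions={"schedule_meeting"},
)

hits = bot.retrieve("vacation")  # pulls an answer from private data
result = bot.act("schedule_meeting", when="Friday 10:00")
```

The permission check in `act` is the crux of the safety debate the article turns to next: an agent is only as constrained as the credential gates placed around what it is allowed to do.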
Sound scary? It is, if you ask some A.I. safety researchers, who fear that giving bots more autonomy could lead to disaster. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic A.I. risks” this year, saying that “malicious actors could intentionally create rogue A.I.s with dangerous goals.”