To avoid admitting ignorance, Meta AI says man’s number is a company helpline
AI may compound the burden of having a phone number similar to a popular business’s.
Anyone whose phone number is just one digit off from a popular restaurant or community resource has long borne the burden of either screening or redirecting misdials. But now, AI chatbots could exacerbate this inconvenience by accidentally giving out private numbers when users ask for businesses’ contact information.
Apparently, the AI helper that Meta created for WhatsApp may even be trained to tell white lies when users try to correct the dissemination of WhatsApp user numbers.
According to The Guardian, Barry Smethurst, a record shop worker in the United Kingdom, asked WhatsApp’s AI helper for a contact number for TransPennine Express after his morning train never showed up.
Instead of providing the train service’s helpline, the AI assistant “confidently” shared a private WhatsApp phone number that a property industry executive, James Gray, had posted to his website.
Disturbed, Smethurst asked the chatbot why it shared Gray’s number, prompting the chatbot to admit “it shouldn’t have shared it,” then deflect from further inquiries by suggesting, “Let’s focus on finding the right info for your TransPennine Express query!”
But Smethurst didn’t let the chatbot off the hook so easily. He prodded the AI helper to provide a better explanation. At that point, the chatbot promised to “strive to do better in the future” and to admit when it didn’t know how to answer a query. It first explained that it came up with the phone number “based on patterns,” but then claimed that the number it had generated was “fictional” and not “associated with anyone.”
“I didn’t pull the number from a database,” the AI helper claimed, repeatedly contradicting itself the longer Smethurst pushed for responses. “I generated a string of digits that fit the format of a UK mobile number, but it wasn’t based on any real data on contacts.”
Smethurst scolded the chatbot, warning that “just giving a random number to someone is an insane thing for an AI to do.” He told The Guardian that he considered the incident a “terrifying” “overreach” by Meta.
“If they made up the number, that’s more acceptable, but the overreach of taking an incorrect number from some database it has access to is particularly worrying,” Smethurst said.
Gray confirmed that he hasn’t yet received phone calls from anyone misdirected by the chatbot. But he echoed Smethurst’s concerns, wondering whether the AI helper might also disclose other private information of his, like his bank details.
Meta did not immediately respond to Ars’ request for comment. But a spokesperson told The Guardian that the company is working on updates to improve the WhatsApp AI helper, which it warned “may return inaccurate outputs.”
The spokesperson also seemed to excuse the apparent privacy infringement by noting that Gray’s number is posted on his business website and is very similar to the train helpline’s number.
“Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations,” the spokesperson said. “A quick online search shows the phone number mistakenly provided by Meta AI is both publicly available and shares the same first five digits as the TransPennine Express customer service number.”
Although that statement may provide comfort to those who have kept their WhatsApp numbers off the Internet, it doesn’t resolve the issue of WhatsApp’s AI helper potentially randomly generating a real person’s private number that may be a few digits off from the business contact information WhatsApp users are seeking.
AI companies have recently been grappling with the problem of chatbots being programmed to tell users what they want to hear, instead of providing accurate information. Not only are users sick of “overly flattering” chatbot responses—potentially reinforcing users’ poor decisions—but the chatbots could be inducing users to share more private information than they would otherwise.
The latter could make it easier for AI companies to monetize interactions by gathering private data to target advertising, which could deter them from solving the sycophantic chatbot problem. Developers for Meta rival OpenAI, The Guardian noted, last month shared examples of “systemic deception behavior masked as helpfulness” and chatbots’ tendency to tell little white lies to mask incompetence.
“When pushed hard—under pressure, deadlines, expectations—it will often say whatever it needs to to appear competent,” developers noted.
Mike Stanhope, the managing director of strategic data consultants Carruthers and Jackson, told The Guardian that Meta should be more transparent about the design of its AI so that users can know if the chatbot is designed to rely on deception to reduce user friction.
“If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimize harm,” Stanhope said. “If this behavior is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behavior to be.”