Wait, ChatGPT Didn’t Take My Job? | PRO Insight – Yahoo Entertainment

It was so over. Remember? As soon as artificial intelligence began to read, write and code, all manner of professions were supposed to automate — fast. Lawyers were toast. Entry-level engineers commodified. And journalists, well, it’s a small miracle we’re writing this story.
And yet, eight months after the release of ChatGPT — and several years since the advent of other AI business tools — the fallout’s been muted. AI is being widely adopted, but the imagined mass firings haven’t materialized. The United States is still effectively at full employment, with just 3.5% of the workforce unemployed.
The usual narrative may say otherwise, but the path toward AI-driven mass unemployment isn’t simple. AI technology, however impressive, is still not good enough to handle most jobs. Rather than eliminate our positions, companies would like us to simply be better at them. And firms hoping to replace humans with bots are learning that change management is hard.
“The demise of industries due to AI is just not going to be a thing,” says Sarah Guo, a venture capitalist who invests in AI startups.
Legal work, for instance, was supposedly squarely in AI’s sights, but law firms enthusiastically incorporating AI aren’t using it to replace lawyers. Allen & Overy, a firm that employs more than 3,000 lawyers worldwide, started working with a generative AI tool called Harvey last year and hasn’t replaced a single person with it.
Harvey scours legal sites, contracts, and other large documents, and then answers queries and writes summaries. It’s exactly the type of application people said would send paralegals and junior associates to the bread lines. Yet it’s helping them perform better, adding value to the firm, and not threatening their livelihood. Why get rid of more effective employees?
Besides, although the bot is helpful, it’s not nearly good enough to handle all their tasks, and it still gets things wrong often enough to require human supervision. “This profession, it’s a service business. So you need to have reliability, you need to have accuracy,” says Daren Orzechowski, a partner at the firm. When computers digitized law books, there was a similar panic, Orzechowski says, “but here we are today and there’s more lawyers than there were when that technology came online.”
Aaron Levie, CEO of cloud storage company Box, says generative AI is typically quite good at handling one task but struggles to take on the array that humans perform at work. “They can do one discrete information-oriented task, basically, at a time, before they need a human to review what they’ve done and then move to the next thing,” he says. “Not that many jobs are relegated to only that kind of thing.” This makes AI great at assisting people but terrible at replacing them.
To underscore the complexity of passing work along to AI, consider radiologists, still in high demand despite serving as a favorite example among those predicting robots will take our jobs. At the Mayo Clinic, for instance, approximately 500 radiologists use AI tools to outline and classify images of the body. The AI helps them get more done, to make up for shortages in medical personnel. And though it’s incredibly useful, it isn’t ready to make judgment calls on spotting rare diseases or recommending treatments.
“In some respects, it actually can increase demand for radiology, because AI helps us get more information out of images than we could do before,” says Dr. Bradley Erickson, a neuroradiologist who runs the Mayo Clinic’s AI Lab. “We’re still looking to hire.”
Such complexity exists in every field, so anytime you see a company announce that it’s replacing workers with AI, read that with some skepticism. These organizations tend to be downsizing anyway and are looking for a positive spin for investors. As one ex-IBM employee told us, it’s much easier for the PR department to announce you’re going to turn 7,800 roles over to AI than to actually get the AI to do those people’s work.
To be sure, there will be jobs affected by this wave of AI, as when any new technology arrives. And it’s possible that as the technology gets better, some companies will iron out the details and automate away. At that point, there could be meaningful displacement, even if mass unemployment is unlikely. But so far, the hot takes have run into reality.
Daron Acemoglu, an economics professor at MIT, says that even if the technology gets good enough to replace, say, call centers or cab drivers en masse, employers and industry will still have a choice about what to do. No outcome is predetermined, he says. In the meantime, though, it makes sense to bet on the humans. “We know from many areas that have rapidly automated that they don’t deliver the types of returns that they promised,” says Acemoglu. “Humans are underrated.”
The post Wait, ChatGPT Didn’t Take My Job? | PRO Insight appeared first on TheWrap.
