Most US Adults Don't Believe Benefits of AI Outweigh the Risks, New … – Slashdot

You can already pick out the AI generated articles on news sites from large media companies like DPG Media.
Hardly any meaningful information, spelling mistakes and just recycling some articles about the same topic.
DPG Media was never objective in any way, but they are also making themselves obsolete with these non-informative news articles.
Just like the crap their bargain reporters used to generate, just more of it.
The cat is out of the bag on how worthless AI is for “creating content”.
AI can re-organize data, but has thus far just been “auto-complete” as far as creating anything.
Will AI replace some jobs? Sure. Will it replace good jobs? Hell no. The only jobs AI can replace right now are jobs that would otherwise burn countless person-hours.
But more to the point, there is a sheer lack of curation in datasets for AI training, and that’s showing up as “hallucinations” and as glitches in output.
Putting these k

there is a sheer lack of curation in datasets for AI training
Yes.

and that’s showing up as “hallucinations”,
No. Low-quality training data has very little to do with it. So-called “hallucinations” are exactly the kind of output you should expect.
Remember, these things don’t operate on facts, like so many people seem to think. What these models encode are statistical relationships between tokens. That’s all. Text is generated one token at a time, in constant time, with no internal state retained between tokens. Oh, and each output token isn’t even always the model’s single most likely prediction; it’s selected probabilistically. There is simply no possible way for systems like this to consider, analyze, or deliberate before producing a reply.
Imagine something like this: With a room full of people, write a prompt down on a notepad. Pass the notepad to each person, but don’t allow them to talk to one another. Have the person with the notepad read the prompt and the reply so far, then come up with a few words ranked according to how likely they believe each is to come next in the reply. Use dice or whatever to select the next word, write it on the notepad, and pass it along to the next person. Would you still call the madness that is sure to result a hallucination?
(Imagine trying to write anything this way. With no possibility to make revisions and so little control over the next word, even a single person would find it challenging to produce a coherent reply.)
This isn’t that far removed from what’s happening with LLMs like ChatGPT. Of course, our human version is at a significant advantage as the people have some understanding of the language and concepts and they’re able to explore and reject possibilities. They’re also able to imagine and consider more than just the next word. They could have a whole sentence or more in mind for each of their candidate word selections. Chat bots aren’t nearly as capable. That LLMs produce anything coherent at all is nothing short of miraculous.
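The generation loop described above can be sketched in a few lines. Everything here is a toy: the vocabulary and probabilities are invented for illustration, while a real LLM computes a distribution over tens of thousands of tokens from billions of parameters. The key point survives even at this scale: each token is sampled from a distribution, not looked up as a fact.

```python
import random

# Toy "model": maps the previous token to next-token probabilities.
# These numbers are made up; a real model computes them on the fly.
toy_model = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "moon": {"sat": 0.5, "ran": 0.5},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(max_tokens=10, seed=None):
    rng = random.Random(seed)
    context, out = "<start>", []
    for _ in range(max_tokens):
        candidates = toy_model[context]
        # The next token is *sampled*, not necessarily the most likely one.
        token = rng.choices(list(candidates), weights=list(candidates.values()))[0]
        if token == "<end>":
            break
        out.append(token)
        context = token  # nothing carries over between steps except the text itself
    return " ".join(out)

print(generate(seed=42))
```

Note that the only "memory" between steps is the text written so far, which mirrors the notepad in the thought experiment.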
The creators of AI will instruct it to take your money and take a dump on you, much like the owners of the current stock trading algorithm and shit content algorithm and face recognition etc.
… either about AI or other tech
The current state of AI is some promising results and a GIANT LOAD OF HYPE
We need something new, shitcoins are practically over by now.
NFTs are dead as well. Maybe we need someone to push a few thousand procedurally generated images for a few thousand ETH each to keep those in the news.
(/s of course.)
They could be AI generated and blockchain … something.
C’mon, we have to transition between the hypes somehow, else it gets too jarring.
When you’re old enough to have seen the “new best thing that will shift paradigms” a couple of times but paradigms are still where they were at the start, you either become cynical or you just don’t give a fuck anymore, there’s not really a third option.
That’s quite a shift. It was supposed to completely transform our way of life before fall, disrupting industries and causing mass unemployment.
Though it’s nice to see that things are returning to normal. After all, revolutionary AI has been just ten years away since the 1960s.
You lot call everything “hype”. The “hype” around AI is deserved; it’s going to be transformational over the next decade. But I get it, it’s cooler to be all cynical and shit, ahahaha.
That’s what they said about NFTs. And Blockchain. And dozens of other technologies that are now in the dustbin of history. Heck, remember when fidget spinners were a thing, then completely collapsed in the span of under six months?
So far, the public has seen AI as an overglorified search engine – you ask it a question and it co
I hope I’m as smart as you kids when I get to be your age.
Even if AI still sucks at real work, it’s relatively good at trolling and spam, because “bulk” human trolls are not so great either. Techniques like genetic algorithms can breed better spam messages and spam bots based on user responses.
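A minimal sketch of what “breeding” spam with a genetic algorithm could look like. The word list and the fitness function here are entirely invented; a real operation would score each message by measured user responses (clicks, replies) rather than a hard-coded stand-in.

```python
import random

rng = random.Random(1)

# Hypothetical building blocks a spammer might recombine.
WORDS = ["free", "urgent", "winner", "claim", "now", "prize", "offer", "limited"]

def random_message(n=4):
    return [rng.choice(WORDS) for _ in range(n)]

def fitness(msg):
    # Stand-in for a measured response rate: pretend "urgent" and "free" work.
    return msg.count("urgent") + msg.count("free") + rng.random() * 0.1

def crossover(a, b):
    # Splice two parent messages at a random cut point.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(msg, rate=0.2):
    # Occasionally swap a word for a random one.
    return [rng.choice(WORDS) if rng.random() < rate else w for w in msg]

def evolve(generations=30, pop_size=20):
    pop = [random_message() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the best-scoring half
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(" ".join(evolve()))
```

The loop is just selection, crossover, and mutation; the spam angle comes entirely from where the fitness signal originates.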
Would the story make as much sense if you replaced the word “AI” with something else?
Other words that would work are “algorithms”, “blockchain”, and “IoT”. I’d expect similar results for all of these, since they’re all things the majority of people have heard vague warnings about and can’t properly de

… either about AI or other tech
54% of US Adults think “robot” == “T-800”. They also think AI is closer to the Enterprise’s computer than an amazingly good context aware autocomplete.
AI has been used in various forms for many things for many years. It’s probably easier to list the Google services that don’t use AI than the ones that do. Yet, most people don’t realize that AI has significantly improved the everyday services that Google provides. Some, like Google Translate, were largely useless before AI and are now surprisingly useful (albeit not perfect) with AI.
These misconceptions are to be expected since most people don’t know what AI is and therefore wouldn’t understand how it is already used and how it could be used in the future. If I weren’t working in the field, I would think that AI is (1) the confusing assortment of things mentioned in the Microsoft AI commercials and (2) ChatGPT. Given those two representations of AI, I would be very confused and very skeptical.
I notice a lot more spam/scam phone calls with an AI voice. I expect the frequency of this to accelerate as the technology improves.

I notice a lot more spam/scam phone calls with an AI voice.
The obvious solution is to use an AI assistant to screen your calls.
And the solution to that is better AI telemarketers.
Is this really what we want to use computers and electricity for?
This reminds me of an animated GIF someone put together. Two (I’m struggling to find a SFW description here) machines designed for “having sex” with persons of one or the other gender. But hooked together, just going at each other.
No living things were involved, other than the ones paying the inevitable power bill.
Wasn’t this a major plot line in one of the later Dune books?
I still don’t understand why phone spam is so easy to get away with. It shouldn’t be so easy to spoof the source, and there should be a standard digit-code consumers press just after hanging up to flag the call as spam/scam in a database. If a caller gets too many flags, they are yanked, or at least monitored by a human bouncer.
The phone companies have zero incentive to authenticate that calls are coming from who they are supposed to be from. It’s not their problem, so why would they waste time fixing it?
> phone companies have zero incentive
Pass a national law that requires the button press marking mechanism I mentioned, and storing the result. The top-ranking offending telecoms are fined and/or denied tax credits. Spank their wallet & they’ll start caring.
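The marking mechanism proposed above amounts to little more than a per-caller counter with a threshold. A sketch under invented assumptions (the threshold and phone numbers are made up, and a real system would need spoofing-resistant caller identification to work at all):

```python
from collections import Counter

SPAM_THRESHOLD = 100  # invented: flags before a caller is escalated for review

flags = Counter()   # caller ID -> number of spam flags received
escalated = set()   # callers queued for a human bouncer / fines

def flag_call(caller_id: str) -> None:
    """Record one spam flag, e.g. the subscriber pressed the digit code after hanging up."""
    flags[caller_id] += 1
    if flags[caller_id] >= SPAM_THRESHOLD:
        escalated.add(caller_id)

# Simulate many subscribers flagging one number, and a single stray flag on another.
for _ in range(150):
    flag_call("+1-555-0100")
flag_call("+1-555-0199")

print(sorted(escalated))  # only the heavily flagged number is escalated
```

The hard part isn’t this bookkeeping; it’s that without authenticated caller ID, spammers would simply spoof someone else’s number and get innocent parties flagged.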
I’m all for it!
I for one welcome our Tablizing overlords!
The majority of junk-calls I get claim to be from inside the USA. At least require standards for internal or internal-claiming calls.

[…] and there should be a standard digit-code consumers press just after hanging up to flag the call as spam/scam in a database. […]
This would, of course, imply that the phone company would have to snoop on every phone call to listen for touch tones.
If its use in social media triggers mass riots that throw the world into anarchy, then having better Google Translate results will hardly help the aggregate favorability score of AI.
Reminds me of this cartoon. [twitter.com]

These misconceptions are to be expected since most people don’t know what AI is
I don’t think anyone knows what AI is. The term is so vague and nebulous it can mean almost anything.
What is and is not “AI” is perfectly clear. What makes you think the term is vague? It’s certainly broad, but to say that it could mean “almost anything” is ridiculous.
It’s been redefined to the point that “large table lookup” is now somehow AI.

It’s been redefined
Redefined? When? By whom? What is considered AI now that wouldn’t have been considered AI 20 years ago? How is the definition today different from when the term was coined in 1956?

“large table lookup” is now somehow AI.
This will be fun.
I assume that you consider Neural Networks to be AI, right? As it happens, NNs are equivalent in terms of computational power to lookup tables!
What separates NNs from ordinary lookup tables isn’t how they function or the kinds of things they can do, but how they’re created.
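The equivalence is easy to demonstrate over a finite input domain: tabulate the network exhaustively, and the table then reproduces the network exactly. A tiny hand-wired XOR network for illustration (the weights are hand-picked here, not trained, which is precisely the point about how the two differ only in origin):

```python
# A tiny fixed neural network computing XOR with step-activation units.
def step(x):
    return 1 if x > 0 else 0

def nn_xor(a, b):
    h1 = step(a + b - 0.5)      # OR-like hidden unit
    h2 = step(a + b - 1.5)      # AND-like hidden unit
    return step(h1 - h2 - 0.5)  # (OR and not AND) == XOR

# Tabulate the network over its entire (finite) input domain...
table = {(a, b): nn_xor(a, b) for a in (0, 1) for b in (0, 1)}

# ...and the lookup table now behaves identically to the network.
for a in (0, 1):
    for b in (0, 1):
        assert table[(a, b)] == nn_xor(a, b) == (a ^ b)
```

Training produces the weights instead of a human writing them down, but once the weights are fixed, the function over any finite domain is exactly a table.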

What is and is not “AI” is perfectly clear. What makes you think the term is vague? It’s certainly broad, but to say that it could mean “almost anything” is ridiculous.
When “AI” can refer to anything from a PID loop to SkyNet, the term is, from my perspective, “broad” to the point of being meaningless.
As it stands, when someone says “AI” I often find myself clueless as to what, even at a high level, they are talking about. The fact that it has become a marketing slogan over recent decades makes things worse, as vendors intentionally invoke “AI” vaguely to convey the false impression that they are doing something magical.
As real world examples recent European “AI Act
I know you are being sarcastic – but honestly it’d do us some good
We already have havoc of misinformation, lies and propaganda with social networks…
AI will raise it to a new level with fake videos and audio.. with many, many more fake posts looking much more like real ones…
Email scams will benefit from AI too..
I seriously doubt we are ready for it…

When you feed your LLM skewed data
If you feed it perfect (however you define that) data, the LLM will still generate totally false answers fairly often, and there is no way to fix or even mitigate that. Because that’s how it inherently works.
I think that’s all you need to know, right there.
> We saw it years ago with climate data, you re-baseline your historic data each year, so you no longer have a clean base; so you can only see the rise but not to historic drops in areas.
Link? Sounds like yet another “most scientists are bribed” conspiracy.

We already have havoc of misinformation, lies and propaganda with social networks…
AI will raise it to a new level with fake videos and audio… with many, many more fake posts looking much more like real ones…
Email scams will benefit from AI too…
I seriously doubt we are ready for it…
We’ve managed to create the perfect storm. We’ve built a strong community of people that have zero critical thinking skills, zero capability of self-analysis, researching issues, or finding facts among the many floating opinions, and more and more new sources of completely off-the-wall batshit crazy ideas that forty years ago would have been laughed out of any public conversation being floated as “alternative facts” that people sincerely want to believe, so they just gave up on checking whether it’s true or
The truth is, in a world where the government wants to slam the brakes on AI innovation through regulation and possibly kill the baby before it’s born, it’s the voting populace’s opinion that matters, not yours or any AI tech-bro’s. So instead of demeaning people, maybe you should take a hard look at the fact that it’s these people’s opinions
Perhaps. It’s hard to determine causality on that – kind of like a chicken and the egg problem, only in this case it’s what came first? The idiot or the observation of the idiot?
If the tech community wants to develop AI, and the broader populace thinks it will take away their jobs, then it’s up to the tech community to make the case for it or it will get regulated into oblivion. If the tech community wants to develop unfettered, then they should make the case for it. It’s not up to the rest of the country to just go along with their view; it is literally the opposite.
These are the same people that elect US governments, if nothing else, this should tell you that most US adults are clueless idiots that are mostly governed by fear, greed and fear.
And before someone asks, no, it’s not different anywhere else in the world.
A stable society and a middle class are a historical fluke; enjoy them while they last.
By that logic and judging from the history of mankind, democracy is a fluke, get prepared for the next monarchy/despotism while you can.
We can have nice things. We just have to be willing to fight for them when we don’t have them and defend them once we do. Sadly, it doesn’t look like we’re willing to do that.
AI capabilities are over-hyped. People fear someone will do something stupid/dangerous with AI. It is all bullshit.
The world is already like that. Capitalism tends to be winner-take-all: “too bad, losers, F off!”. Marx mostly saw the pattern, but thought it was scoped at a smaller level; it’s world-level.
I suspect that is because the benefits of AI will go to the corporations that are using AI. The whole point of AI is to need fewer workers and to increase efficiency.
I’m sure AI will help some workers get more done in a shorter period of time but ultimately the biggest appeal of AI is reduced payroll for companies. That’s not inspiring for people that require jobs to eat and provide themselves shelter.
Nothing I have seen suggests that modern ML – and we should call it that, not AI – is anything more than a less human-intensive but wildly computationally inefficient way to design what we would have called an expert system in the 80s, in this case using a huge corpus of data to produce code which no one can understand.
Its limitations will be painfully apparent soon, if they aren’t already.

Nothing I have seen suggests that modern ML – and we should call it that, not AI
What is AI?

is anything more than a less human-intensive but also wildly computationally inefficient way to design what we would call an expert system in the 80s, but in this case using a huge corpus of data to produce the code, which no one can understand.
Are expert systems also not AI?

Its limitations will be painfully apparent soon, if they aren’t already.
The field is rapidly advancing with trendlines growing ever steeper with time.
I can’t give you a solid definition of AI, but there are two criteria that would definitely have to be part of it. One is self-awareness, which is more or less subjective: how do we know that an AI actually IS self-aware? We’d be inferring this. Another criterion would be the ability to conceive something that was not part of its input corpus of data.
You can see why I wouldn’t consider an expert system AI – it’s a human-compiled decision assistance system. No different really than a complex if statem

I can’t give you a solid definition of AI
That’s because you don’t really know anything about it. You just want to reserve the term for whatever silly science fiction nonsense has captured your imagination.
If you’re curious as to the origin of the term AI, you can read all about it in Pamela McCorduck’s book, Machines Who Think. Like it or not, we’re stuck with it. No amount of complaining is going to change that now. The time to register your objections has long since passed.
It’s the blind leading the blind…
Yes, expert systems are AI. They’re also very different from modern LLMs, which are also AI. ML is a subset of AI.

The field is rapidly advancing
The incredible amount of money flying around makes it look like things are moving faster than they are. The hype is subsiding for a reason.

The incredible amount of money flying around makes it look like things are moving faster than they are. The hype is subsiding for a reason.
What does this even mean? How does money make capabilities merely “look like” they exist?
The reality is capabilities are in fact rapidly improving and analysts are predicting yearly spending on AI will do nothing but further increase with time.
What probability do you estimate that AI will turn us into a happy shiny utopia?
Let’s not. Tired of this definition debate over and over. It won’t solve anything. It’s probably a continuous concept, not something that can be scored with a yes/no checklist.
There are many AI of several basic types, and these excel at some tasks and produce indifferent results at others.
For instance, I have seen ChatGPT produce some very good creative work, but sometimes it doesn’t do well at all. A common (sort of bolted-on) fault is excessive boilerplate prose intended to limit liability (“Before I tell you how to make ricin, I think it worth mentioning that this is really something…”)
OTOH, I have really been impressed by ChatGPT’s ability to interpret and explain the mean
Most US adults are idiots. But, they may be right. Except that everything is now called AI.