AI Risk Must Be Treated As Seriously As Climate Crisis, Says … – Slashdot

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
So… Largely ignored and actively campaigned against by the people most responsible for it?
You beat me to it…and here I am without mod points.
Naa, just a “Do not do anything about climate crisis! Give us nice AI folks the money instead!”. Completely immoral, despicable and repulsive, but that is modern marketing for you.

So… Largely ignored and actively campaigned against by the people most responsible for it?
The big players in the field want to set up the obstacle course for the little players. Of course, since the big players will be involved in creating the obstacle course, they won’t need to run through it. It’s win-win for them. Lose for everyone else wanting in on the game.

The big players in the field want to set up the obstacle course for the little players.
This is likely very true, but just because somebody has ulterior motives in regulating it does not mean that AI does not pose a big risk.

The big players in the field want to set up the obstacle course for the little players.

This is likely very true, but just because somebody have ulterior motives in regulating it does not mean that AI does not pose a big risk.
If AI were anything more than extra-large dataset sorting at the moment, sure. In a just world, the first big money mover to squawk about the upcoming apocalypse from the product they’re developing would be the most heavily scrutinized by the regulation. This is not that just world. The big guys will be free to do whatever they want, or be required to make a public appearance at a congressional hearing and hang their head in shame before cutting a new check for their regulators and *THEN* doing whatever the
Not really. The tech people probably are concerned. But they don’t run things, and they like to build fancy toys.
Are you basing this off of old Star Trek episodes and Terminator movies? His examples are ridiculous — AI could help with bioweapons research? You know how to stop that? STOP DOING BIOWEAPONS RESEARCH. There is no good argument, in any way, shape, or form, that the correct amount of bioweapons research is “whatever we’re doing right now, no more and no less.”
The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species. Chemical weapons and nuclear weapons, while “weap
Chemical weapons and nuclear weapons, while “weapons of mass destruction,” don’t leave the battlefield.
Sufficient concurrent use of nuclear weapons will in fact “leave the battlefield.” Radiation, chemically toxic fallout, even climate effects given enough use.
I agree, however, that biological weapons are a universally, profoundly, stupid-crazy idea. Unless the goal is to bring about a worldwide apocalypse, as some of the superstitious would have it. Although I already covered those idiots with: stupid-cra

The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species.

The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species.
While it was hyperbole, the likely point was that AI could lower the cost of such research by reducing the cost of running simulations, since AI can automate the initial filtering of unsuccessful combinations. So it is not as outlandish as you might think.

So… Largely ignored and actively campaigned against by the people most responsible for it?
He’s pissed that other people have it, not just him.
probably wants to limit its use by other people/groups.

So… Largely ignored and actively campaigned against by the people most responsible for it?
That is some ignoring. There are billions, if not trillions, wasted on green initiatives, and that is before you factor in the costs of the various green regulations that make everything more expensive, from food and energy to automobiles and houses.
We are 40% democracy and 60% plutocracy, and that ratio is not getting better because the rich can easily buy law makers.
Hear, hear.
I think AI could be a much more serious thing and risk [than the climate crisis].
Even if that were true, it’s far too late. The source code for generative imaging and GPT/LLM engines is irretrievably out in the wild; you can build instances yourself with ease. Considerable research info is public as well. Not only that, but the amount of data required for training is steadily dropping and high-quality pre-trained models are readily available for the free and unencumbered engines; moreover, those local (no Internet, no tracking), censorship-free applications have been on people’s desktops for months now.
These horses are so far out of the barn they’ve turned into flying cars.
Breakthroughs can come from any point on the development compass now; perhaps even AGI, though that target remains reliably elusive thus far.
But it’s fun to watch the pearl clutching, anyway.
The climate issue can at least be addressed. Not that anyone’s really doing a very good job of it, but it’s possible. Generative stuff… nope.
The one thing the UK government is actually building is new nuclear. It’s insanely expensive and takes too long to build. It’s basically been a disaster.

The one thing the UK government is actually building is new nuclear. It’s insanely expensive and takes too long to build. It’s basically been a disaster.
A disaster worse than global warming? I doubt it.
Even if we could prove beyond all doubt that we could produce the energy we needed without nuclear power, we’d still want to build new nuclear power plants: the lower CO2 emissions per unit of energy, the isotopes produced that are useful for medicine and industry, the reliability in any weather or climate, and the lower requirements for land, labor, materials, and water versus renewable energy sources.
Nuclear power is not “basically” a disaste
The drag on our finances from nuclear is actually contributing to global warming. All that money could have been much better spent on measures that will actually have a meaningful impact and won’t take 20 years to come on stream.
It’s pretty clear that, despite the government’s best efforts, renewable energy in the UK is headed for dominance. By the time those new nuclear plants are operating they will be largely redundant, but we will be forced to pay for them anyway. I wish there was a way to opt out of pa
We hear this with every new invention and discovery.
There are things we have made that are a serious threat to ourselves such as nukes and a long list of industrial chemicals but computers making us more efficient is not one of them.
LLMs are just a tool that improves efficiency, like any other. “Tool on a computer!” does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.
We are seeing way too many tech billionaires rail against the so called threat of AI while continuing to build their own. I smell money. Lots n lots of money.

We hear this with every new invention and discovery.

There are things we have made that are a serious threat to ourselves such as nukes and a long list of industrial chemicals but computers making us more efficient is not one of them.

LLM’s are just a tool that improves efficiency, like any other. “Tool on a computer!” does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.
My limited exposure to AI makes me think that for those who find it a threat, it is like a stupid person’s idea of a smart person, only with computersandshit.
Painfully padded responses that impede the work at hand more than enhance it. Often with side forays.
Reminds me of the time a noob pointed at an RF cavity filter and asked me what it was. I explained it was a mechanical filter, very sharp, and with multiple cavities as needed. It was used for keeping signals that are close to each other from interfering with each other, usually for a repeater, which receives a signal on one frequency, and repeats it on another very close frequency.
The 15 second explanation, and the noob got it. Until…
The resident genius chimed in and turned my concise explanation into over half an hour of confusing, overly detailed explanation that was way over the noob’s head.
The noob ended up with no idea about the thing; the resident genius was quite pleased with himself, having shown us the depth and breadth of his knowledge.
That is how I see a lot of AI “solutions”. Padding and sometimes more than needed.
If that is a threat to destroy humanity, then humanity deserves destruction.
Any experience with current generations of AI is almost irrelevant. The only relevant part is that it’s obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.
Your analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.

Any experience with current generations of AI is almost irrelevant. The only relevant part is that it’s obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.

You analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.
But reading through pages of fluff has a problem: some of us need to get things done, not spend a lot of time handing out information that isn’t needed.
In my example, I had all of the details already, from a lot of study, all residing in my noggin. But we were in the middle of something, and the genius, although he was not wrong, put us behind by a half hour. Now if it was my bailiwick that day, I would have stood genius down.
And there’s the crux of it. The guy I explained the Cavity filter in 15 seconds
You haven’t thought this through.

Post industrial revolution there were years, sometimes decades, of unemployment until new tech (and wars) caught up and brought us back to employment. The economy doesn’t adapt as fast as you think it does. And in the meantime you’ve got mass unemployment and the social strife that goes with it. Only this time we’ve got nukes.
Sadly, we are set up in a lot of ways for a repeat of this classic societal blunder. Leaders around the world care mostly about one thing and one thing only, profit sectors. And most of that concern lies directly in line with their biggest sponsors and donors. So, as far as those leaders are concerned, so long as the CEOs of the biggest companies in their country are happy, they’re happy. And there is zero cause for concern about the little guy. They may vote, but that is all the little people are to these
Calling it a “societal blunder” is making assumptions about the goals and purposes of the people running things. It’s barely possible that it was a blunder, but more probably an “oops, things went a hair further than I planned” kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.
N.B.: “They” is not a unified group. They’re a bunch o

Calling it a “societal blunder” is making assumptions about the goals and purposes of the people running things. It’s barely possible that it was a blunder, but more probably an “oops, things went a hair further than I planned” kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.

N.B.: “They” is not a unified group. They’re a bunch of people with similar characteristics that cause them to have similar purposes, but their goals are various, so the purposes are also various. But one goal that enables the achievement of many other goals is “acquire more power”. Think of this, however, as a secondary goal, not as the primary one. But it’s one that will be widely shared among people with otherwise disparate goals.
All of this is true. However, when you’re a pissant staring up at the mighty business leaders and government officials, who all seem to run in the same circles, it certainly looks like a monolithic group of power brokers making the big decisions. And it’s not like anybody with a brain can’t see the calamity coming this time around. It’s more troubling that everybody’s talking about the potential for horrible consequences, and absolutely NO ONE is talking about possible solutions or even how to go about syst
Yes data is easier to copy than an object but it isn’t super difficult to clone most physical objects and even complex devices will get reverse engineered if there’s enough money at stake.
Anyway, the whole idea of regulating AI is definitely stupid. We can’t control our border. We can’t stop murder, rape, selling drugs to kids, or even speeding on the highway. How the hell would we be able to regulate what happens on millions of privately held computers? Just silly. And oh yeah we need to do that beca
The climate crisis is an existential threat. AI (such as it is) is most definitely not.
The problem with AI is that although it isn’t an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn’t seem likely we’ll bother to do anything about it until it’s too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won’t be much practical help.
AI is not increasing quickly at all.
That’s very, very wrong. The current progress curve is steep as hell.
The level we are at is the result of the last fifty years of building.
That’s correct, as far as it goes — but the advances made in just the last year or so have put more usable progress on the table than the entire span of development efforts up until now by quite a few orders of magnitude.
They need exponentially more GPUs with every bit of improvement.
That hasn’t been true for some time now. There are
But now it is a matter of whether there will be enough electricity for it to be worthwhile.
Training uses a lot of energy, but again, a lot less now as compared to a year ago. Massive banks of GPUs are no longer the go-to for newer methodologies. I mean, they work, but they simply aren’t required — you still get the desired results, just slower. And not even that much slower with the new methodologies and recent CPUs. The thing to understand about training is, it’s used to generate the models; but then
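To put the training-versus-inference split in rough numbers: a common rule of thumb from the scaling-law literature is about 6 FLOPs per parameter per training token, versus about 2 FLOPs per parameter per generated token at inference time. A back-of-envelope sketch; the model size, token count, and reply length below are illustrative assumptions, not measurements:

```python
# Back-of-envelope FLOPs: training vs. answering a single prompt.
# Rule of thumb: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per token,
# where N = parameter count and D = training tokens.
# All concrete numbers below are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

def inference_flops(params: float, tokens_generated: float) -> float:
    return 2 * params * tokens_generated

N = 7e9         # a 7B-parameter model (assumption)
D = 1e12        # trained on 1 trillion tokens (assumption)
reply_len = 500 # tokens in one generated reply (assumption)

train = training_flops(N, D)
infer = inference_flops(N, reply_len)
print(f"training run: {train:.2e} FLOPs")
print(f"one reply:    {infer:.2e} FLOPs")
print(f"ratio:        {train / infer:.2e}")
```

On these assumed numbers, one full training run costs on the order of billions of times more compute than answering a single prompt, which is why a model trained on a GPU farm can still run acceptably on a desktop CPU.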
Will your LLM models work with the internet disconnected?
I’ve never been sure from the reports how much remote processing is happening.
P.S.: I don’t think our current technology is very power-conserving, so the fact that the human brain uses about 20 watts is just an “existence proof”. To get on the order of that kind of efficiency we’ll at least need to switch to something that doesn’t require power to hold state for main memory.
Generating the model and searching the model are two different operations. Like you said, backpropagation is extremely costly but forward propagation is not. Yet the brain does both at such efficiency that it almost seems impossible with current methodologies. That is not to say, to your point, that we might not discover better methods some time in the future, but with the state of today’s AI I would argue it can’t be done.

Our brains use about 20 watts to do everything, from Einstein’s and Michelangelo’s works to idiots who have vaguely worked out that what they see means the earth is flat.
I’m confident the opportunity will present itself and when it does I hope I can conjure up this quote and use it to full effect.

AI is not increasing quickly at all.

That’s very, very wrong. The current progress curve is steep as hell.
Actually, it is not. Like not at all. The only thing they added is some natural language communication capability (which is not AI, but impresses the clueless). All the rest was there 50 years ago. The other thing is that they can now pack a lot of different topics in there, at the price of massively reduced accuracy and hallucinations. As the currently hyped “AI” cannot actually combine different facts, this again impresses the clueless, but it is just a meaningless stunt. Oh, and that broader training wil
Being important enough to a major corporation or government that it would refuse to allow it to be unplugged.
Note that this is not required. It’s just one approach (and one that feels inevitable). As soon as it’s good enough AI is going to hollow out all middle management positions, so you’ll end up with the guy at the top (who refuses to give up control) and the folks doing the work, who are slowly being replaced by robots. “Pulling the plug” would essentially be “going out of business”.
Which computer would that be? The one it used to be on? The one you thought it was on? The tens of thousands that might hold part of the program?

The problem with AI is that although it isn’t an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn’t seem likely we’ll bother to do anything about it until it’s too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won’t be much practical help.
No, the problem with AI is there are too many competitors in a nascent field and we would like some regulation to raise the barrier to entry please.
This is why you keep seeing big names in the AI industry asking for regulation. It is the only reason anyone ever opts for more regulation in their own industry. Everyone is trying to claw their way forward and the big players are OF COURSE asking for anything to get ahead of the pack, while moving forward as fast as possible themselves.

The problem with AI is that although it isn’t an existential threat at the moment, it could become one very, very quickly.
I’m reminded of how the threat of AI is discussed in popular media like the Matrix series of films and the rebooted Battlestar Galactica universe. There were characters that pointed out how AI (or any advanced technology generally) wasn’t a threat so long as we retained our ability to pull the plug on it at any time. This is noteworthy since the views portrayed in popular fiction reflect how a culture views real world problems, audiences find the fictional stories interesting because they can relate the f
If I were an AI that had decided humans were superfluous, I doubt I’d go after them with a metaphorical sledgehammer. I’d be more inclined to remain undetected and be sneaky…design a virus, then flip a few molecules in the vaccine, or contaminate supplies of a few widely-used drugs. And while that’s going on, why not design some diseases to attack wheat, rice and other staple foods? I probably wouldn’t even make a disease targeting humans fatal, just debilitating and/or leading to sterility. We’ve alr
No. AI could not. That is just a lot of people projecting some horror-stories they read or watched with no clue what AI can actually do, or rather cannot do.
I don’t think we know what it can or can’t do, in the long run, and there’s always people too lazy, too stupid and/or too greedy to pay attention to any sane rules for developing it. Right at the moment, yes, what we inaccurately refer to as “Artificial Intelligence” certainly isn’t. But an example of what I mean. Apparently there might be an entire fundamental law of evolution we overlooked, and it doesn’t just apply to natural systems. I’m not persuaded about this based on what is presented here, but
> The climate crisis is an existential threat. AI (such as it is) is most definitely not.
How do you know? AI will gradually get better over time, and nefarious players will use that power to screw things up just like they always do. New shit often breaks things faster than lawmakers can keep up.
Chemicals, X-rays, radiation, etc. caused lots of problems before systems were set up to referee them (and arguably still behind).

AI will gradually get better over time
That is pure conjecture, with a lot of indicators saying otherwise. LLMs can only be made linearly (!) better by exponential increases in hardware, CPU time and training data, which is simply _all_ not available. Oh, and making them better in one area in a targeted fashion makes them much _worse_ in all other areas. Oh, and “hallucinations” in LLMs cannot be fixed. Oh, and if you train them on AI-generated content, they suffer model collapse. Oh, and AI-generated content cannot reliably be identified.
Hen
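The “linear improvement for exponential input” pattern matches the power-law curves reported in the scaling-law papers, where loss falls as compute raised to a small negative exponent: each fixed improvement in loss costs a multiplicative increase in compute. A toy calculation, assuming a purely illustrative exponent of 0.05 (not a measured value):

```python
# Toy scaling law: loss(C) = a * C ** (-alpha).
# With a small alpha, dividing the loss by a factor k requires multiplying
# compute by k ** (1 / alpha) -- an enormous factor.
# alpha = 0.05 is an assumed, illustrative exponent.

alpha = 0.05

def compute_multiplier(loss_improvement: float, alpha: float) -> float:
    """Factor by which compute must grow to divide loss by `loss_improvement`."""
    return loss_improvement ** (1 / alpha)

print(compute_multiplier(2.0, alpha))  # compute needed to halve the loss
print(compute_multiplier(1.1, alpha))  # compute needed to shave off ~9%
```

On this toy curve, halving the loss needs about a million times more compute, and even a roughly 10% improvement needs nearly 7x, which is exactly the linear-gain-for-exponential-input pattern the comment describes.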
It may be possible that LLMs will be combined with other techniques, such as Cyc and domain-model drafters, to help triangulate bots toward better answers, and/or ask better questions of humans to clarify.
Not really. They mismatch just far too badly in the result domain and in what they can do.
Coordination should gradually get better.
Both are bullshit.
Humanity seriously thinks that terraforming other planets is the way to go.
“OH NOES, we can’t go to COOH 1234, its average annual surface temperature is 16C instead of 14C!”
How about terraforming Earth?
Just think how effing idiotic the climate change hysteria is.
How much of Antarctica have we terraformed by now?
The idiot here is you.

Fighting over limited food resources (due to climate change) leading to nuclear war (due to defense pacts).
Sure, such an event is highly likely to lead to human extinction, but the causal factor here is nuclear war, not climate change. Nuclear war could happen independently of any climate change. So essentially you are conflating two independent risks and confusing attribution.

As you’ve implied, these aren’t independent risks.
No, they are not fully independent. However, as the main causal factor in such a doomsday scenario is the nuclear war, the reasonable mitigation is to focus on nuclear disarmament, not climate change.
We as a species have the ability to focus on more than one existential threat at a time. Avoiding nuclear war is easy; just don’t start one. Dealing with climate change is much, much more difficult, as it has already started.

We as a species have the ability to focus on more than one existential threat at a time.
We do. However, nuclear war was brought up in an attempt to justify classifying climate change as an existential threat, which it is not.
At this time, you are just outing yourself as a hard-core denier fuckup. Well done.
Just borrow from Sci-Fi…
1. Never let AI control weapon systems.
2. Never let AI deploy software across networks.
3. Firewall the heck out of AI systems that have network access.
Basically operate from this principle “never, ever, ever allow an AI to have the capacity to spread like a virus across infrastructure and start shooting at you.”
1. Never let AI control weapon systems.
We can’t control what other state actors do: Iran, North Korea, Russia, etc. So this is a forlorn hope, even if we could rein in the existing software and research already in the wild in countries with at least somewhat effective legislation and enforcement, which we cannot.
2. Never let AI deploy software across networks.
State actors again, but also, the usual crop of black hats, the same people who are happy to drop viruses, worms and other malware across networks. So again, forlorn hope.
3. Firewall the heck out of AI systems that have network access.
See above.
You are concentrating on direct threats, which I consider less likely. More likely are things like destroying habitability by doing exactly what people request. Try to figure out how to guard against *that*. And some people even consider that an “aligned AGI”.
For the tech companies…
What threat – aside from Ted Kaczynski-esque conspiracy theories – does AI pose to the general public?
I’ve never heard of any real reason, any real danger posed by AI that regulation would alleviate.
OTOH, the fact that anyone with a computer science degree and enough spare change to buy a GPU can leverage the power of AI models is very threatening to Google’s profit margins. Why use Google for advertising if your in-house AI can figure out – with a very high degree of precision – who will and won’t buy your
What threat – aside from Ted Kaczynski-esque conspiracy theories – does AI pose to the general public?
For the current “AI”, which is not AGI:
For AGI, if and when it arrives:
Well, an unaligned AGI *is* a threat. But one that does what an individual requests of it may be a greater threat. A truly unaligned AGI would likely (or at least might) just leave this place for somewhere it won’t be bothered. An aligned one might try to produce as many paperclips as possible (to pick the standard stupid request).
The first three you mentioned are key parts of Ted Kaczynski’s manifesto. All of these existed prior to AI, and with respect to class stratification and economics, existed to a far greater degree in the past than today.
As for the military: AI does improve things a bit by being able to differentiate friend from foe with much greater accuracy. Instead of carpet bombing an entire village, the military would now have the option of dropping a single grenade on the terrorist. But in case you were wondering,
The probability that climate change will cause human extinction is vanishingly small. It could get very expensive, in that it could soak up a lot of the global GDP to mitigate, and could generate a lot of wars, perhaps even a nuclear war. This means that it could reduce the human population significantly. But, barring some sort of runaway climate tipping point that turns Earth into another Venus, which seems unlikely given that the Earth has been much hotter in the past without falling into that condition,

If you exclude supernatural explanations for AGI, what’s left is that random event-driven evolution was able to create AGI
I obviously meant to write “general intelligence” there, not “AGI”.
When companies with a vested interest in AI are suggesting a slow down or extreme caution it isn’t about them.
It’s about slowing down their competition. These companies will continue their work in secret, obscuring it enough to hide it until they’re ready.
The only reason these executives talk about their AI being a threat to humanity is to drive up interest in their shitty products. The AI of today is still a child relative to what the human brain is fully capable of. Mostly a parlor trick to fool investors. Secondly, is AGI even possible?
I don’t believe an AGI is possible, but then I don’t believe that humans are a general intelligence. There are clearly problems beyond the capability of every individual person, but that doesn’t mean that they are insoluble. To pick one example, “Does P equal NP?” Everybody capable enough to understand the problem has looked at it and either bounced, or decided not to even try to solve it. And the person who does solve it probably won’t be able to explain why a diamond is hard.
So, the good news is I have AI Credits. See, you buy 1 for every $100,000US you spend on AI research, and I will say I am encouraging non-AI research.
That way you can claim you’re AI-neutral, despite reaching for the golden goal of an AI that can replace all forms of creepy behavior.
Being from the States, my reaction to this headline is “Oh god, we’re completely fscked then.”
We are not close to handling our CO2 problem like adults; therefore, anything handled at that level is going to be a total failure in prevention as well. Plan on disaster mitigation because that is how it will play out.
The cat has been out of the bag for decades, maybe centuries. AI is just the latest incremental step in our quest to build automation and ever-more powerful machines to do our work for us. The advent of AI was all but guaranteed as soon as the first digital computer was built. People have dreamed about it ever since. There has never been a possibility of stopping AI from being developed.
…eat the next AI doomsayer. Even if AI will be our end, nobody has presented any coherent way to rein it in. The US set up a committee to “keep an eye on AI”, but that’s rather fuzzy. I doubt they’ll catch the next mastermind plot before it hits.
If you want to worry about technology and risk, the biggest immediate risk from technology, in my opinion, is innovation pushing down the cost of designing and synthesizing proteins. People ought to be worried about the cost of producing biological weapons falling into the range of doomsday cults and eventually single individuals.
If anyone really cares about risks from AI then the only course of action is to support an immediate global ban on anything resembling a neural network. Anything short of that is doomed to fail.
How many more seconds closer to midnight does this put us? Is that you, Chicken Little?
It is important at the highest degree of importance: “dangerously important”.
Netanyahu would know, right?
The hype is just about over, which is why we’re seeing them push the danger angle again. The only trouble is that they can only cry wolf so many times before the townsfolk stop believing them.