SoftBank CEO Says AGI Will Come Within 10 Years – Slashdot

AI is making good progress, and generative AI does really cool stuff.
Not to rain on his parade, but AI people have been saying that AGI would be achieved and the AI problem solved within 10 years for at least 50 years.
We’ll likely achieve AGI… but right now all we have is something like a very good parrot (parrots have some intelligence, but I wouldn’t have one perform surgery on me)
It might be 10 years, it might be 100 years… it’s kind of like the ITER fusion situation: we’ll likely get AGI, but there isn’t enough information to make a reliable prediction
Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic ‘programming’.
I think the challenge is in getting enough complexity in an artificial system to cross whatever threshold needs to be crossed for us to call it intelligent. That comes with a secondary challenge of doing it with enough efficiency to run on a dozen watts in a volume of around 1300 ccs.
If we get there (or ignore efficiency), the step after that is figuring out how to set the basic rules (the equivalent of instincts and emotions) to get a desired result. And then be able to copy or adjust it as we please.
Could happen next year, or might not happen in my grandchildren’s lifetimes. But it’s not impossible to repeat in silicon what evolution did with meat.
I’d just like to be away from AI and all it will entail if at all possible.

Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic ‘programming’.
Prove it. 🙂
The simple fact is we don’t have a clue how any of this works. People can’t stand saying “I don’t know”, so they’ll latch on to anything they think is plausible and insist that this must be how we work. This usually coincides with the current state-of-the-art and will change when something more advanced comes along.
Things can get really stupid, however, when people mistake the state-of-the-art for something that … isn’t. For example, we know for a fact that we aren’t a complex but otherwise o

In 2018, almost every car company CEO said that we’d have fully self-driving cars by 2020. That obviously happened, so why wouldn’t this also come true?

AGI will be powered by clean fusion energy.

In 2018, almost every car company CEO said that we’d have fully self-driving cars by 2020. That obviously happened, so why wouldn’t this also come true?
AGI will be powered by clean fusion energy.
I don’t understand why CEOs get into that crap, at least they should let the CTOs burn themselves…

AGI will be powered by clean fusion energy.
Running on desktop Linux, no doubt.
I agree.
For example, consider that presently ChatGPT and other LLMs need to be trained on ginormous data sets, beyond what any human being could read in hundreds of years. Yet in many ways humans still do better than ChatGPT at a lot of logic and math problems, even though we are trained on a very small fraction of the data used to train LLMs.
And at some level I can’t help but think we’re basically anthropomorphizing a parlor trick; a pattern matcher that is so good at predicting how words should g
Each AI generation has generally required a model about ten times the size of the previous generation. This obviously can’t go on forever, so new methods will be needed to deliver equivalent or better results with smaller models. But it does speak to the growing complexity. The difference between GPT-2 and GPT-3 is easily visible to anyone who used them. The difference between 3 and 4 is there, but it’s not the same visible growth. There are likely diminishing returns to the current growth chart such that re
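
For scale, here is a quick sanity check against commonly cited parameter counts (a sketch; GPT-4’s size is undisclosed, so the series stops at GPT-3, and these are the widely reported figures, not official measurements):

# Commonly cited parameter counts: GPT-1 ~117M, GPT-2 ~1.5B, GPT-3 ~175B.
params = {"GPT-1": 117e6, "GPT-2": 1.5e9, "GPT-3": 175e9}
names = list(params)
for prev, cur in zip(names, names[1:]):
    ratio = params[cur] / params[prev]
    print(f"{prev} -> {cur}: {ratio:.0f}x larger")
# GPT-1 -> GPT-2: 13x larger
# GPT-2 -> GPT-3: 117x larger
# i.e. the publicly known jumps have been 10x or (much) more each generation.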
I finally found something ChatGPT is actually good at. Make up the most ridiculous, insane tabloid headline you can think of, and tell it to write the article. It will be indistinguishable from the real thing.

I can’t help but think we’re basically anthropomorphizing a parlor trick
That’s because that’s exactly what’s happening. It’s a very human thing to do. Joe Weizenbaum’s secretary famously wanted her sessions with Eliza to be kept confidential. She, like many others, was convinced that the program understood and empathized with her problems.
That was with Eliza, a simple program that simulated a Rogerian therapist by simply turning the user’s statements into questions, using filler statements when a sentence couldn’t be parsed, and occasionally repeating something saved from earlier in the conversation.
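
For anyone who never saw it, the whole trick fits in a few lines. Here is a minimal Python sketch of the same pattern (my own toy reconstruction, not Weizenbaum’s actual code):

import random
import re

# Toy Eliza-style responder: reflect first-person words, turn "I am/feel X"
# statements into questions, and fall back to canned filler when nothing parses.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
FILLERS = ["Please go on.", "How does that make you feel?", "Can you elaborate on that?"]

def reflect(text):
    # Swap first-person words for second-person equivalents.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement):
    m = re.match(r"i (?:am|feel) (.*)", statement.lower().rstrip(".!?"))
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    return random.choice(FILLERS)

print(respond("I am worried about my job."))  # Why do you say you are worried about your job?
print(respond("It rained all week."))         # one of the filler lines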
FWIW (not much) I’ve been predicting an early AGI in 2035 for over a decade, and haven’t seen any reason to change my time estimate.
Note that it will NOT be a human equivalent. It will have different motivations. It will be better than humans at many tasks (they already are) and worse at others. But it WILL be able to generalize its learning to handle the physical universe.
This is said, sort of, tongue-in-cheek, because I don’t believe a real AGI is possible, and I also include humans in “not a real general intelligence”.

Not to rain on his parade…
Why not? He made a moronic statement that has exactly 0% chance of being true in the next thousand years (unless we devise a radically different form of computing). His parade should be wiped off the face of the earth by nuclear forces.
… Bard gets very upset with me (think: it parsed and learned from top Reddit trolls, which it probably did) if I tell it that it got the very first digits wrong on basic arithmetic questions.
There are a lot of things lately being called “AI”. They are not intelligent (not even “approaching intelligence”) by any reasonable meaning of the word “intelligent”. In general, these are pattern recognition devices: they input a vast amount of human-generated input (books and wikipedia articles, for example), and find the patterns of what intelligent behavior looks like. They then blindly apply these patterns, without any understanding (or even any attempts at understanding) what the actual thinking is.
Almost everything called “AI” is using the word “artificial” in the sense of “fake.” Just as artificial leather is not real leather, artificial intelligence is not real intelligence. That is what the term has come to mean in common use. So, something does not need to qualify as intelligent in order to qualify as “artificially intelligent.”
And that broad meaning is exactly what makes the word useful. If we restricted it to only those things which equal human intelligence in every way, there would be nothing at all. The special meaning implied by “artificial general intelligence” refers to something that doesn’t exist and is nowhere near existing, but that is also why AGI is not a common-use marketing buzzword.

It doesn’t have to kill us – manipulation through media is all that is necessary. Have us do it to ourselves and the rest are sheep. Not hard to do when just about everything comes through the internet.
That’s the part that nobody foresaw. Manipulating people is, at the heart of it, two things: pattern recognition, and access. Pattern recognition is what “AI” is good at – recognizing what messages work and what don’t (and what makes a message one that people pay attention to) – and computers can spew out millions of messages across every possible medium that people use to communicate.
(It used to be that spam was copy & paste, flooding everybody with copies of an identical message. But with AI, each message can be individually tailored to the person targeted, and the AI will have access to pretty much everything about that individual and what works to make the message hit the target.)
The floodgates are open, the flood isn’t here yet, but the storm is coming, and we are completely vulnerable.
The issue isn’t using “artificial” that way. The problem is using it that way while telling potential investors it means something completely different.

He can say it, but there is no evidence that this is true.
More than that, there is no theoretical basis for this claim.
The difference between current machine learning techniques and truly general intelligence is something we simply don’t understand. What’s most likely is that there is some crucial theory of general intelligence that we have not yet discovered. Once we discover it, building AGI will probably be easy (assuming it doesn’t depend on yet other theoretical breakthroughs). Until we discover it, building AGI will be impossible.
How far are we from that theoretical advance? We cannot know. What would a knowledgeable person making predictions around the time of Isaac Newton’s birth have said about when we would understand how things fall? How difficult would it be to build an atomic bomb without Einstein’s work?
Someone could find the crucial ideas tomorrow, or it could take centuries. Or maybe they found it yesterday. We simply cannot know. We can be pretty sure they didn’t find and recognize it months or years ago.
That said, there is an immense amount of effort and brainpower going into the search, and our tools for analyzing and understanding the existing form of general intelligence, and for quickly building and testing proposed new strategies, are advancing at a breakneck pace. Also, there is always the possibility that we accidentally succeed without first developing the necessary theory — after all, evolution did it via randomized variation and selection.
So I think it’s reasonable to say that AGI will be created, but no one can say when. We best hope that it doesn’t happen too soon, though, or that the same theory that teaches us how to build AGI also teaches us how to solve the alignment problem, or that the theory puts an upper bound on possible intelligence that isn’t too far above human level. Because otherwise, we’re toast.
It’s very easy to assume current AI is on “the ladder”, “the road”, and simply needs to ascend from amoeba to insect to ape to superhuman. It just needs to keep incrementing, right?
The Chinese room is not on that ladder. You could sooner build up your computer’s immune system by exposing it to small viruses. There is a gross misunderstanding of what’s under the hood.
It is indeed possible to create hatchery conditions to grow along the ladder that has intelligence at the end, just not with our shitty crude f
The man ranges from criminally bad at picking good investment opportunities to mildly insane. I wouldn’t trust him to predict when he’s gonna take his next dump.

There is zero cognitive intelligence in anything the Marketers and Salespeople call AI today
Yeah, but in fairness, how would they know? There isn’t a lot of intelligence in marketers and salespeople either, and it takes one to know one.

There is zero cognitive intelligence in anything the Marketers and Salespeople call AI today

There’s very little cognitive intelligence in the Marketers and Salespeople.
No. We’ll need to develop fusion so as to have enough power to train the AGI.
Most if not all serious fusion endeavors are doing this. Lawrence Livermore Labs used it to address problems achieving net positive output [nvidia.com] at the NIF. DeepMind trained a model to control fusion reactions [cnbc.com] in a tokamak reactor. Other stories have discussed researchers tasking AI with helping develop reaction chamber shapes or parts to reduce the need for physical iteration.
75% of what SoftBank does makes no sense. If AGI does come along, it’s going to eat SoftBank for a snack.
I know I trust the CEO of a bank over a credentialed AI researcher to advise me on how the technology is progressing…
SoftBank isn’t a bank.

would surpass human intelligence by a factor of 10,000
I guess it will depend on the human. Some humans are apparently only intelligent enough to utter meaningless statements, and even so, they reach high positions in the world, like CEO of a big bank.
Lacking a clear definition of intelligence, the statement is not even wrong. If the idea is that some computer will solve an IQ test in a 10,000th of the time a human needs, then, I suppose, it’s true. Computers already beat us at chess, considered a brainy game, so they are already more intelligent than us, no need to wait. The word “intelligence” is used as a throwing weapon, like “terrorist” or “nazi”. Its meaning is reduced to whatever the speaker wants to say.
Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers, and consumes the power of a hefty nuclear station, I’d argue about what’s the point. Just breed a more intelligent human being, who will consume just a couple of sandwiches.

Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers, and consumes the power of a hefty nuclear station, I’d argue about what’s the point. Just breed a more intelligent human being, who will consume just a couple of sandwiches.
If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself. Maybe you can’t get 10,000X smarter without 10,000X size and power consumption, but the size and power consumption of 10,000 human brains is a lot smaller than three stadiums and a nuclear power plant’s output. And probably you can do better than what evolution managed to find via random walk.
It depends on your metrics. When comparing the energies needed to train various intelligences, it’s difficult to beat something that runs on Cheetos and Mountain Dew.

It depends on your metrics. When comparing the energies needed to train various intelligences, it’s difficult to beat something that runs on Cheetos and Mountain Dew.
Today.

If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself
Well, I don’t know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?

If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself

Well, I don’t know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?
If humans are smart enough to design and build a smarter-than-human intelligence, then pretty much by definition that intelligence will be capable of doing an even better job, particularly when it’s given a headstart by handing it everything that humans have already discovered, including everything we know about our own brains.
Some of the largest supercomputers on offer can simulate ten million or so neurons at a speed of 1 simulated second every 10 wall clock minutes. You could build a computer today that simulated the entire human brain with a biologically accurate simulation, but it would be roughly 5 miles in diameter, 200 feet high, and consume a lot of power.
Now, supposedly, the human brain shrank around 12,000 years ago. This has been put down to greater social structures making personal brain power less useful and higher
It does depend on the human, but it depends a lot more on how you measure the intelligence. Recall that ChatGPT passed the bar exam, which lawyers study for years to pass. And few lawyers are really stupid. (Greedy and short-sighted are different from stupid.)
There’s no real reason to take his predictions seriously, but this time I think that parts of his prediction are correct. I do expect an elementary AGI to be extant in around 2035. (Plus or minus 5 years.) But it will only be “smarter than human” in some areas. It will be considerably weaker than human in other areas. A key word here is “general”. That’s what we don’t have so far. Another problematic area is motivations. AFAIK, we’re still flailing around in the dark in that area. Motivations need t
AI will increasingly train on its own hallucinated datasets, eventually becoming a techno-intellectual inbred. Remarkably, it’ll still be smarter than many people.
If SoftBank thinks AGI will arrive in 10 years, that means it will arrive in either 5 or 50 years.
I looked up recent attempts to simulate the brain. About ten million simulated neurons at 1 simulated second every few minutes, on one of the top supercomputers. And that won’t be a biological neuron system, that’ll be a classic neural net program. The brain has about 86 billion neurons, and just to reach the same speed as the brain you need to clock in at 1 simulated second per second.
Based on the current rate of progress, I honestly don’t see full brain NNs being simulated in real time this side of 2063. And bi
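
Plugging the rough figures from these comments into a quick back-of-the-envelope check (a sketch in Python; the inputs are the commenters’ approximate numbers, not measurements):

# Rough figures quoted above: ~10 million simulated neurons at 1 simulated
# second per ~10 minutes of wall clock, vs ~86 billion neurons running in
# real time in a human brain.
sim_neurons = 10e6          # neurons in the reported simulation
slowdown = 10 * 60          # wall-clock seconds per simulated second
brain_neurons = 86e9        # approximate human neuron count

neuron_gap = brain_neurons / sim_neurons
print(f"Neuron count gap:   {neuron_gap:,.0f}x")              # 8,600x
print(f"Speed gap:          {slowdown:,}x")                   # 600x
print(f"Combined shortfall: {neuron_gap * slowdown:,.0f}x")   # 5,160,000x
# A ~5-million-fold shortfall is about 22 doublings; at one doubling every
# two years that is roughly 45 years, which is about where the 2063 guess lands.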
You are definitely right that that approach will not be successful within the decade. Your mistake is thinking that’s the only viable approach. That might be the optimal approach if we wanted to build an artificial human … but we don’t know enough to even get started in that direction. Lots more basic research would be needed. But when you interact with someone (say over the internet) you can’t analyze things at that level anyway. An implementation of a higher level of analog should suffice to provid
No, the actual Turing test has never been passed by a computer. (OTOH, close analogs have often been failed by a human.)
There are lots of “weak versions of the Turing test” that have been passed. If you weaken it enough, the first version of Eliza passed it. (The caller tried to get her fired for being insubordinate.) But the actual Turing test, or a close analog, has never been passed by a computer. And several weak versions have been failed by various humans.
The Turing test, however, was not intended
The goal post has been moved. Turing’s actual test was passed quite a long time ago. A “strong” version with a knowledgeable inquirer was passed quite publicly by that Google engineer who insisted their language model was sentient.
The comments here are fairly typical. They insist that machine learning algorithms are “parrots,” “just statistics” or “Chinese rooms;” basically, they can’t be intelligent because we know how their components work. This is a silly argument. It’s also factually incorrect in the “Chinese room” case.
An earlier post suggested that current AI is just pattern recognition within the searchable data. I tend to agree. I’ve been trying to pair program with GitHub Copilot for the last few months; I can get code snippets that are 80% complete at best, and I’m never able to give a query that puts it across the finish line.
Some observations:
As I request changes to the code snippets, I see changes to variable names and other program logic unrelated to my last request. This suggests that it’s not actually remembering the earlier context of the session.
The correct answer is 42. I don’t need AGI to tell me that.
Current AI, for all its cleverness, is basically regression. As a number of AI experts have noted, the work on inference and reasoning basically got stalled when progress on the neural network approaches started to take off.
The problem is that this approach assumes that there is clear, unambiguous, objectively definable truth that can be used to define a training set for the AI. In reality, many if not most interesting problems, and certainly the hard ones, do not lend themselves to this at all. For example, imagine training an AI on the scientific literature of the past 100 years. Much of that literature will be considered wrong by present standards, and much of the rest will be small-scale and speculative. The truth isn’t something that exists objectively, it’s something that we construct out of a combination of verifiable facts, philosophical and epistemological frameworks, our own biases, our own emotions, and often randomness.
It is possible that a general AI could emulate all that, but there’s a pretty decent chance that doing so would bind that AI to all the problems and biases that exist in human intelligence. And we know almost nothing about other intelligences, like how dolphins or elephants take hold of the world. We’ve mostly assumed away that concern by counting on the historical dismissal of these beings’ intelligences.
My guess is that AI will rapidly start to go in circles. It’s pretty much already consumed much of human writing and still has no concept of truth whatsoever. This is likely to lead to a torrent of bullshit – basically spam in everything – that will make it that much harder to engage in truth-seeking and truth-making.
It may get better at some things that involve searching parameter spaces and combinatorics; that will doubtless be useful.
I just am not convinced that reality, knowledge, and epistemology actually lend themselves to the kind of AI that people are envisioning.
What is the job of this assistant?
If it’s to recognize simple tasks and put them in a task list, we’re just about there, and the stuff you describe is about improving the assistant’s ability to communicate with you.
If it’s to start doing the task, the assistant will rapidly run into problems of decision-making. Should it buy you ice cream at the store because you’re feeling a little down and could use some ice cream, or should it buy you kale because you haven’t really done as good a job eating your vegetables?
Predictions are hard, especially about the future
AGI will be here in ten years and it will be used to design a working power plant employing nuclear fusion.
A CEO, whose degrees are in exactly what? And what computer science has he studied?
How different is this from a self-proclaimed expert on vaccinations who’s done all his “research” on Faux Noise?
His degree is in economics. He knows how to make money, and this is just part of that.
…and always will be.

Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence
So, what metric is that 10x intelligence measured by? IQ? And what does a sum of intelligence mean? Is the sum of total human intelligence in a large country orders of magnitude greater than the smartest individual human?

It is wrong to say that AI cannot be smarter than humans as it is created by humans
Perhaps my intuition is different than Son’s, but I think that a creation is generally not as smart as the creator. In fact, I can’t think of any creation that is smarter than its creator.
Then again, the thought is intriguing. If a creation could surpass the intelligence of its creator
What exactly is AGI? This prediction relies heavily on the precise definition of AGI, which is not clearly defined. So in 10 years, you can say that the prediction was confirmed, by defining AGI to be whatever AI technology we have achieved, after 10 years.
In some ways, AI is already 10x smarter than humans. It can write code in just about every programming language known to man. It can write job descriptions and summarize long articles in a flash. It can search the web for answers on any subject and quickl
Let’s look at Einstein’s thought experiments that produced special and general relativity. Thought experiments.
When a computer can gather information and cogitate on it for a while and say, “Hey, guys, here’s a new thought …”
The messy part is that the computer would be thinking only about the work humans have already produced. That would be useful, but the computer, in order to get “intelligent,” would have to “think” on its own. Einstein used prior human work products, but the thought experiments were tr
… for what is considered “intelligence.” A lot of comments about “this is just pattern recognition” seem to miss the point that most of human cognition is pattern recognition.
In fact, I bet most of you pooh-poohing these comments by Masayoshi Son couldn’t even give a proper definition for intelligence (without researching a specific counter example) that deviates substantially from what GPT-4 is already doing. And in that research process, you would probably find that GPT-4 can provide the same -or better