Former Google CEO Eric Schmidt Bets AI Will Shake Up Scientific … – Slashdot

No, I think you mean “shakedown.”
That would be interesting. Would anyone notice?
This must be one of the more reasonable and realistic applications of AI that I have come across since the hype started. Going through hundreds of papers is a very important but horribly time-consuming part of research. It would be great if you could have an AI go through millions of them and then ask it for the answer to a question.
But that would work only if you could then ask it how it came to give a particular answer. AFAIK, you can’t ask LLMs how they came to give a particular response.
I’m not sure about the “worms and viruses” part helping anything, but you’re right that relying on technology too much will weaken us, and we’re already deep into it even without AI. What’s different and worse about AI is just that it can move on its own, so unlike cigarettes, the threat won’t necessarily stop just because people start to wake up.
It plays to Schmidt’s plan. First, he helped surrender our privacy at Google, after fumbling networking at Novell and Sun.
Then he hedged his bet and handily became a citizen of Cyprus so he wouldn’t have to pay those pesky taxes.…
Now he wants you to believe that AI will shake up the scientific method, where AI has no chain of authorities, and the scientific method is a bust without referential integrity.
Nothing to see here, just another Tech Talking Head spouting foam.
You’d be mad to do anything important with AI. AI can be used as an assistant by someone knowledgeable in the art, but then so can a search engine and a knowledge base.
And yes, of course I’ve used ChatGPT, Bard, Midjourney, etc. That’s why I am saying AI sucks. They can improve AI over the next decade in certain specific areas such as factory automation and certain types of diagnostics, but beyond that it’s diminishing returns and whack-a-mole. If we are talking human-like AGI intelligence, that’s a minimum of 50 to 100 years away. Most likely well over 100 years away.
At this time? Very much so. With the current hype, the “ordinary person” interface of AI has gotten a lot better, but the quality of the answers has gotten worse. As to AGI, there is zero indication from what we currently have that it is even possible. It is not a question of computing power or training data either, or we would at least have a glimmer of AGI today. Instead, we have absolutely nothing.

You’d be mad to do anything important with AI
That’s saying too much. AI has vastly improved things like handwriting recognition, for example.
I’m not sure what part of the scientific method is compatible with inserting a black box complex enough that no one knows exactly what the processing path is.
“You know what would help this two variable problem? About 5,000 more variables that are completely unknowable.” – Einstein, probably

“About 5,000 more variables that are completely unknowable.”
They’re completely knowable; it’s just that there are a lot of them, so they’re hard to reason about.
Finally, they’ll use AI to fake the results data and graphs, so that it’s not as easily found out as it is today.
Seriously, why do people think that having money and some expertise in one area makes them experts in all others?
One thing I can see happening is making plagiarism, fake results, and low-quality research much harder to publish. That would be a good thing. As to the actual work of having insight and turning that into something useful, “AI” will contribute exactly zero. Also, a lot of applied researchers have had insights while dealing with more mundane stuff. I had the core idea for my PhD when thinking about
“Future House will ‘have an obligation’ to make sure there’s safeguards in place,” he added.
Nope. Won’t work. It doesn’t matter how hard they try to stop it; the cat is going to escape the bag. Maybe it’s a data breach, maybe it’s a new CEO who branches out licensing to questionable partners… but it’ll get out. And the ratio of philanthropic uses to nefarious ones is, at the very best, 1:1.
Not to be unreasonably fatalistic… but fatalism is appropriate here. It’s coming. People will design new chemical
That the guys who think they hold the reins of power don’t have a fucking clue what they’re talking about.