China's AI 'War of a Hundred Models' Heads For a Shakeout

Hardly Skynet; it’s not even close to a real AI model. More likely the next crypto meltdown.
LLM is people.
More appropriately, it’s what people have said. These LLMs are trained on what people say on the internet.
Is it any wonder that initial models were biased, racist, and had a bunch of other undesirable qualities? You have to realise that what you’re typing right now is also contributing to LLM training.
So don’t worry, be happy! Then perhaps future LLMs will also be happy. Or at least produce an apparently happy response, because that’s the input they’ve been trained on.
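As a rough illustration of that point, here is a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my choice of example, not anything mentioned above): a language model simply continues a prompt in the style of whatever text it was trained on.

    # Minimal sketch: a pretrained language model continues a prompt in the
    # style of its training data. Assumes the Hugging Face "transformers"
    # library and the public "gpt2" checkpoint, which was trained largely on
    # scraped web text.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Don't worry, be happy!"
    result = generator(prompt, max_new_tokens=20, do_sample=True)
    print(result[0]["generated_text"])
    # Whether the continuation sounds happy or not depends entirely on the
    # distribution of internet text the model saw during training.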
Welcome to planet motherfucker, you shiny, happy people!

So don’t worry, be happy! Then perhaps future LLMs will also be happy. Or at least produce an apparently happy response, because that’s the input they’ve been trained on.
We have enough fake people thanks to Instagram. Even AI would agree we don’t need more.
Pretending to be happy is like pretending to be human. Quite fucking pointless, really. Be honest instead. The planet will thank you for it.
Time to eat!!
Facebook reportedly shut down two AIs when they started communicating with each other in a language they made up. A fact check of the original viral story says parts of it are true (and that Facebook didn’t shut them down).
https://www.usatoday.com/story… [usatoday.com]
Regardless, as we build ever-smarter AIs, let us remember that there are codes meant to look like harmless speech or images, and in a contest of intellects, the smarter can always fool the dumber. This “letting them talk” will be OK, until one day it isn’t.

This “letting them talk” will be OK, until one day it isn’t.
There are times we prevent hardened criminals in prison from “talking” to each other, for good reason, but that certainly isn’t the default response in society. It’s quite incredible that we assume two AIs communicating (even in code) are plotting the demise of humanity or their ‘escape’ from our chains. Always.
The government response to an alien force would be to hold out a gun.
The human response to an alien force would be to hold out a hand.
How jaded are you?
I don’t think personality is the issue. It’s that ChatGPT’s results are often wrong, naive, misleading, or nonsensical, even when they’re delivered in authoritative language.