Microsoft Publishes Garbled AI Article Calling Tragically Deceased … – Slashdot

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
Even worse, a human never even gave this a token glance before it was published.
The editor was useless. /s
Kind of like /. =P

With the NBA’s 95% COVID vaccination rate among players, we may expect to see many more sad stories like this one in the future.

I think your antivax response is dumber than the article.
Herman Cain objects.
The system has likely been made to use less sensitive and more emotionally neutral language by the editors, and it passed all the tests with good enough accuracy, so why bother?

Garbled, incorrect and bias responses are just the result of the options the algorithms were created with.

Incorrect and biased? Sure.
Garbled? No. LLMs usually produce grammatically correct output that, while often wrong, at least is legible.
I have no clue how Microsoft screwed this up so badly, but it was more than just hooking up an LLM to output a newsfeed. Their incompetence went beyond that.
This reminds me of the movie “Hidden Figures”. I did some research, and it really happened: before NASA sent the first American to space, someone there had a mathematician redo by hand the calculations that a computer had done, because they did not trust the computer enough.

I have no clue how Microsoft screwed this up

TFA does. MSN just reached out and grabbed a story from a shitty site that is suspected of using AI.
This is not a large language model at all. This is a garden-variety article spinner [wikipedia.org], the kind that was used to spam email inboxes a decade ago by abusing a thesaurus to evade phrase filters.
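The spinner mechanism described above is easy to sketch: blind phrase-for-phrase substitution from a thesaurus table, with no awareness of names, idioms, or register. The table below is invented for illustration (it is not the actual tool), but it shows how “passed away” can become “handed away” and a “player” a “participant”:

```python
import re

# Toy thesaurus: phrases are blindly swapped with no awareness of
# names, idioms, or register -- a name in the table would be "spun"
# just as readily as any other phrase.
SPIN_TABLE = {
    "passed away": "handed away",
    "player": "participant",
    "highly regarded": "extremely regarded",
    "forward": "ahead",
}

def spin(text: str) -> str:
    """Replace each known phrase, longest match first."""
    for phrase in sorted(SPIN_TABLE, key=len, reverse=True):
        text = re.sub(re.escape(phrase), SPIN_TABLE[phrase], text,
                      flags=re.IGNORECASE)
    return text

print(spin("Former NBA player Brandon Hunter has passed away at 42."))
# -> "Former NBA participant Brandon Hunter has handed away at 42."
```

Unlike an LLM, nothing here models context, which is why the output reads as evasion of a phrase filter rather than as paraphrase.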
As of my last knowledge update in September 2021, AI was a field with substantial potential and real-world applications, but it was also accompanied by a certain level of hype and inflated expectations. AI had made significant progress in areas like natural language processing, computer vision, and machine learning, leading to practical applications in industries like healthcare, finance, and manufacturing. However, it’s crucial to recognize that AI’s capabilities varied across tasks and domains, and it was
Great. Now half of slashdot posts are AI generated, too.
I can see how an AI connected to a news feed could easily link up the words Brandon, Hunter, and useless. The AI would not recognize that Brandon Hunter is not the same entity as “Brandon” or Hunter. It’s definitely one of those “not that one, you idiot machine” moments.
Auto correct has been coming up with some real whoppers just on my tablet. I swear I have to spend more time proofreading now than I had to before. A bigger machine should be capable of generating even better examples of artificial stupidity.

Garbled, incorrect and bias responses are just the result of the options the algorithms were created with. AI is all hype at this point, just advertisers and marketers doing what they do.

SOTA LLMs are awesome. While not infallible and sometimes lacking in coherence, their vast knowledge, combined with the ability to apply automatically learned concepts across domains, is something I personally find quite useful.
Brandon Hunter useless at 42
Story by Editor
9/12/2023, 11:21:42 PM
© Editor
Former NBA participant Brandon Hunter, who beforehand performed for the Boston Celtics and Orlando Magic, has handed away on the age of 42, as introduced by Ohio males’s basketball coach Jeff Boals on Tuesday.
Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
He earned three first-team All-MAC convention alternatives and led the NCAA i
Evidently, it was something ripping off the following, which was at least posted on TMZ:
“Former Boston Celtics and Orlando Magic player Brandon Hunter has died, Ohio men’s basketball coach Jeff Boals said Tuesday. He was just 42 years old.
Hunter — a standout high school hoops player in Cincinnati — was a star forward for the Bobcats, earning three first-team All-MAC conference selections and leading the NCAA in rebounding his senior season … before being taken with the 56th overall pick in the 2003 NBA Draft.
He played 67 games over two seasons in the Association … scoring a career-high 17 points against the Milwaukee Bucks in 2004.”
So maybe someone fed that into a translation program to run a few rounds to try to obfuscate the copyright violation?
They fed it into a LLM and asked it to paraphrase it to avoid plagiarism.
But it doesn’t know the difference between a name and anything else, or between a respectful word similar to deceased and a disrespectful one, because it doesn’t know anything. It just contains records of similarities and when you run the text through the system, it uses them to make more similarities.
I have seen this “the LLM doesn’t really know the difference between a name and anything else” comment a couple times. I’m somewhat familiar with LLMs and NLP in general, so I’m curious about how true this is.
It seems to me that ChatGPT has quite good Named Entity Recognition and other capabilities that would allow it to categorize people/places/etc. There are many articles documenting processes for using ChatGPT to do NER on a corpus. It certainly seems to have a good capability of doing this when I’ve interacted with it.

Machine translation (probably used in this particular instance) tends to screw this up royally, but LLMs seem to have a much higher hit rate. Do others have this experience or any other information on it?
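One standard guard against exactly this failure, regardless of whether the rewriter is an LLM or a translator, is to mask named entities with placeholders before paraphrasing and restore them after. A minimal sketch, using a crude capitalized-words regex as a stand-in for a real NER model (spaCy, or ChatGPT-based NER as discussed above):

```python
import re

def mask_entities(text: str):
    """Replace runs of two or more Capitalized words (a crude stand-in
    for real NER) with numbered placeholders; return the masked text
    plus a table for restoring the originals after paraphrasing."""
    table = {}
    def repl(m):
        key = f"__ENT{len(table)}__"
        table[key] = m.group(0)
        return key
    masked = re.sub(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", repl, text)
    return masked, table

def unmask(text: str, table) -> str:
    """Put the original entity strings back after rewriting."""
    for key, original in table.items():
        text = text.replace(key, original)
    return text

masked, table = mask_entities("Brandon Hunter played for the Boston Celtics.")
print(masked)  # -> "__ENT0__ played for the __ENT1__."
```

The paraphraser never sees “Brandon Hunter”, so it cannot turn the name into “useless”; the real pipeline evidently skipped any such step.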

I suspect that the player’s name, Brandon Hunter, caused a lot of the AI’s confusion. There is a lot of disrespect and anger out on the Internet regarding “Brandon” and similarly for “Hunter.” Then you get AI involved in writing an article about someone named “Brandon Hunter” and what do you expect to happen?
So, once again, it’s the Republicans fault that progress is not being made at the speed that we could have enjoyed if not for their inane commentary filling the Internet we invented. Those fucking hate

Could you please re-phrase this news announcement?

“Former Boston Celtics and Orlando Magic player Brandon Hunter has died, Ohio men’s basketball coach Jeff Boals said Tuesday. He was just 42 years old.

Hunter — a standout high school hoops player in Cincinnati — was a star forward for the Bobcats, earning three first-team All-MAC conference selections and leading the NCAA in rebounding his senior season … before being taken with the 56th overall pick in the 2003 NBA Draft.

He played 67 games over two seasons in the Association … scoring a career-high 17 points against the Milwaukee Bucks in 2004.”


“Brandon Hunter, an ex-player for the Boston Celtics and Orlando Magic, has passed away at the age of 42, as confirmed by Ohio men’s basketball coach Jeff Boals on Tuesday.

Originally shining as a high school basketball talent in Cincinnati, Hunter established himself as a top forward for the Bobcats. He garnered three All-MAC conference first-team honors and led the NCAA in rebounds during his final collegiate year. He was then chosen as the 56th pick in the 2003 NBA Draft.

During his tenure in the NBA, he played in 67 matches over two seasons, with his most notable performance being a career-best 17 points against the Milwaukee Bucks in 2004.”

chat link [openai.com]
“Let’s go Brandon! We need to investigate Hunter’s laptop.” — A real human, probably

They fed it into a LLM and asked it to paraphrase it to avoid plagiarism.

But it doesn’t know the difference between a name and anything else, or between a respectful word similar to deceased and a disrespectful one, because it doesn’t know anything. It just contains records of similarities and when you run the text through the system, it uses them to make more similarities.

This wasn’t just a bad paraphrasing. It was a horrific joke of an attempt. A Dictionary, a Thesaurus, and a Grammar Checker all walk into a boardroom bar, and that shit is the end result? “Doesn’t know the difference” is quite a stretch to excuse translating a man’s death into “useless”; I doubt even the Thesaurus knows how the hell Lappy the Language Licker got there.
If the zombie LLM recognizes us meatsacks as ‘useless’ that easily, then we might as well call the future solution Skynet and get it over with.
Some things shouldn’t be paraphrased — MSN using “corridor of fame” loses meaning as that is not the same as “hall of fame”
That doesn’t explain the grammar slip. “As a ahead”!? It sounds ugly even to me!
Clearly it is far from ready for prime time. Why are they using this to generate articles that will be published?
First, they should use it internally for information digests that are not exposed to the public. Only when they feel that the product excels internally should they begin to think about using it for public consumption.
But as usual, Microsoft wants to crowdsource their alpha/beta testing for free to the public.
After reading the article, and the apparent source, it seems less like generative AI and more like someone pasting an article to plagiarize into Google Translate and walking it through a few languages, hoping it won’t be too obvious (see the “Here’s the story” thread for the actual ‘article’ text and the apparent original it was ripping off).
It doesn’t have the LLM ‘smell’, more of a bad translation smell.
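That “bad translation smell” is easy to reproduce with a toy model of round-tripping. When two English words collapse onto one word in the intermediate language, the trip back has to guess a sense, and guesses wrong — the same failure mode that turns “hall of fame” into “corridor of fame”. The dictionaries here are invented purely for illustration:

```python
# Toy round-trip "translation": English -> toy language -> English.
# "hall" and "corridor" both map to the same foreign word, so the
# reverse dictionary can only pick one of them coming back.
EN_TO_X = {"hall": "kor", "corridor": "kor", "fame": "glor"}
X_TO_EN = {"kor": "corridor", "glor": "fame"}

def round_trip(text: str) -> str:
    """Translate word-by-word out and back, losing sense distinctions."""
    foreign = [EN_TO_X.get(w, w) for w in text.split()]
    back = [X_TO_EN.get(w, w) for w in foreign]
    return " ".join(back)

print(round_trip("hall of fame"))  # -> "corridor of fame"
```

Chain a few of these lossy hops together and you get prose that is grammatical-ish but semantically mangled, exactly like the MSN piece.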
My guess is that they hired some remote worker in India to do it, who hired some mechanical turk to do it, who wrote a script to plagiarize TMZ articles. Everyone got what they deserved.

So it is possible that the article was written by an Actual Indian?

Or even a ajcnegiletnI anzcutzS in reverse polish.

Should these “AI” and their owners get free speech rights? ie, should generated content, and the owners of them, be protected under free speech?

If you’ll note the constitution, there is no limit on who has free speech, nor freedom of the press.

Should they be legally liable for anything that gets generated?

If it’s a news outlet, for it to be defamatory against a public figure, there has to be knowing and actual malice on the part of the author. That is, the author has to know what they are writing is wrong, and is doing it on purpose to cause harm. Hard to do that with a machine, unless an editor lets it through on purpose. It’s pretty easy to say it was an accident, unless there is proof otherwise.
Libel? “Useless” is not something you sue for libel over, especially if you are a useless corpse who can’t be harmed in any way.
or if you are a living useless sports entertainer; you can’t sue because you contribute little benefit to society… outside of spending your money… so is nobody allowed to express such opinions?

the author has to know what they are writing is wrong, and is doing it on purpose to cause harm

That’s not what “actual malice” means in defamation law. In defamation, actual malice means “with knowledge that it was false or with reckless disregard of whether it was false or not”. The media defendant does not need to have any intent to cause harm. [mtsu.edu]
Free speech does not mean not legally liable. AI articles should be treated like any other. Protected from government infringement, but legally liable for the very many things normal speech is even in freedom loving America.
The deceased’s estate should sue them for libel.
Free speech in law means the government won’t stop you from saying things preemptively. It doesn’t mean there are no consequences.
Free speech doesn’t cover fraud, libel, etc.
What I will say about government restraint is that if truth is not an absolute defense against libel/slander then you really don’t have free speech. I’m looking at you, UK.
The human who posted it is protected by free speech and liable for the content they posted. That they amplified their speech via a machine won’t make any difference. Until such time as the law decides an AI has free will and is furthermore not acting under direction from a human, anything the human causes the machine to make will be treated as having been made by the human.
As an example, if your boss instructs you to create a sample death threat against someone named [rival CEO], and then publishes the text
Either way the algorithm doesn’t know the difference between a name and anything else on the page, so it doesn’t know not to change the wording. They do have that in common.
AI is trying to beat us at our own game.
That’s a great comparison. I think LLMs are a fun new form of the same sort of entertainment. I enjoy them for what they are. It’s just ridiculous that people keep treating them as something more.
Hunter was 42 years old, a fit professional athlete doing perhaps not extremely strenuous exercise. Oddly enough, I cannot find a cause of death anywhere online.
Heart trouble, perhaps? Just a wild guess.
Maybe we can hire these AIs as Seattle cops.
You mean like Microsoft Outlook which can’t find an email when you search for it even when it’s the first one in your Inbox? Or did they mean useless like their OS which doesn’t register every mouse click or copy/paste action? Perhaps they mean useless such as Excel when all you do is copy out the information in a cell then close the worksheet, only to be asked if you want to save your changes. If nothing was changed, why ask if you want to save the changes?
Hahaha, yes. I have had to export outlook mailboxes to get working search and don’t get me started on Excel.
MS “productivity” software is really getting more and more crappy every day. The expected and very typical effect of a near-monopoly.
If this is enough for the money men, then who are the people giving money for this stuff, and why?
Been noticing a lot of interest lately in very small models able to blast out nonsense at blindingly fast speeds (think 9600 baud modem) with results that look a lot like the “garbled” article.
Other than AI research, these things really only have two real-world uses that I know of: summarization and spam. The “MSN news portal”, as best as I can tell, seems to be farming out news to randos.
It’s obvious that any newsroom doing AI generated content should have some variation of the following guideline “the human reporter/editor should review the generated content in detail to ensure accuracy and appropriateness”.
Clearly in this case the human did an exceptionally poor job.
Remember, only a small minority of people screw up that badly, meaning that to generate this one massive screw-up there must have been a lot of instances that went just fine.
In other words, this article is a sign there’s already a lot of AI-generated content out there.
That really needs to be “should” -> “must” and failure to do so must result in termination.
I do not agree with your statistical analysis. This one data point does not allow any conclusion about how many AI-generated articles are out there.

For sure it’s not a P > 0.95 level of certainty, and certainly, the odds of using an LLM are correlated with the odds of being a lazy/incompetent reporter. But realistically, you think the one outfit & reporter doing this happened to be the one that posted the gibberish article?
Just because he is deceased does not make him useless. He can be used to make Soylent Green.
Support your local AI, DYSTOPIA FOR ALL!!!
A dead NBA player is pretty useless as a player. Of course, while clear, it is highly impolite and highly redundant to point that out.
This is however an excellent example of the level of “skill” and “insight” an LLM-type “AI” has. Not usable without expert supervision.
