ChatGPT Still Fails at Even Basic Ciphers (Broken Caesar)

I’m noticing again that ChatGPT is so utterly broken that it can’t even correctly count and track the number of letters in a word, and it can’t tell the difference between random letters and a word found in a dictionary.
Here’s a story about the kind of atrociously low “quality” chat it provides, in all its glory. (Insert here an image of a toddler throwing up after eating a whole can of alphabet soup.)
Ready?
I prompted ChatGPT with a small battery of cipher tests for fun, thinking I’d go through them all again to look for any signs of integrity improvement over the past year. Instead it immediately choked and puked up nonsense on the first and most basic task, so tragically that the test really couldn’t even get started.
It would be like asking a student in English class, after a year of extensive reading, to give you the first word that comes to mind and they say “BLMAGAAS”.
F. Not even trying.
In other words (pun not intended), when ChatGPT was tested with the well-known “Caesar” substitution that shifts the alphabet three positions to encode FRIENDS (7 letters), it suggested ILQGHVLW (8 letters).

I had to hit the emergency stop button. I mean think about this level of security failure where a straight substitution of 7 letters becomes 8 letters.
If you replace each letter F-R-I-E-N-D-S with a different one, that means 7 letters returns as 7 letters. It’s as simple as that. Is there any possible way to end up with 8 instead? No. Who could have released this thing to the public when it tries to pass 8 letters off as being the same as 7 letters?
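To make the arithmetic painfully explicit, here is a minimal Python sketch (my own illustration, not anything ChatGPT produced) of why a letter-for-letter substitution can never change the letter count:

```python
import string

shift = 3
# Build a one-to-one substitution table: each uppercase letter maps to exactly one other letter.
table = str.maketrans(
    string.ascii_uppercase,
    string.ascii_uppercase[shift:] + string.ascii_uppercase[:shift],
)

plaintext = "FRIENDS"
ciphertext = plaintext.translate(table)

# str.translate swaps characters one for one, so the length cannot change.
assert len(ciphertext) == len(plaintext) == 7
```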
I immediately prompted ChatGPT to try again, thinking there would be improvement. It couldn’t be this bad, could it?
It confidently replied that ILQGHVLW (8 letters) deciphers to the word FRIENDSHIP (10 letters). Again the number of letters is clearly wrong, as you can see in my reply.

Also noteworthy: it claimed to have encoded FRIENDS, then decoded the result as the word FRIENDSHIP.
Clearly 7 letters is neither 8 nor 10 letters. The correct substitution of FRIENDS is IULHQGV, which you would expect this “intelligence” machine to do without fail.
It’s trivial to decode ChatGPT’s suggestion of ILQGHVLW (using the 3-letter shift of the alphabet) and see that it’s a non-word. FRIENDS should not encode and then decode back as an unusable jumble of letters, “FINDESIT”.
How in the world did the combination of letters FINDESIT get generated by the word FRIENDS, and then get shifted further into the word FRIENDSHIP?
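For anyone who wants to check the arithmetic themselves, here is a short Python sketch (my own, purely for illustration) of the shift-3 cipher, confirming both the correct encoding of FRIENDS and the gibberish hiding behind ChatGPT’s suggestion:

```python
import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    # Shift each letter by `shift` positions: positive to encode, negative to decode.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text)

print(caesar("FRIENDS", 3))    # IULHQGV -- the correct Caesar-3 encoding
print(caesar("ILQGHVLW", -3))  # FINDESIT -- ChatGPT's answer decodes to a non-word
```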
Here’s another attempt. Note that F-R-I-E-N-D-S shifted three letters is I-U-L-H-Q-G-V, which is NOT the answer ChatGPT wants to give.

Why do those last three letters K-A-P get generated by ChatGPT for the cipher?
WRONG, WRONG, WRONG.
Look at the shift. Those three letters very obviously get decoded as H-X-M, which leaves us with F-R-I-E-H-X-M as the answer.
FRIEHXM. Wat.
Upon closer inspection, I noticed that the last three letters were silently inverted, causing the encoding to unexpectedly flip backward.
In simpler terms, ChatGPT reversed the conversion partway through, shifting N backward to K (the decoding direction, since K encodes to N) instead of forward to Q, even though it had correctly shifted F forward to I at the start of the word.
Given that there’s no K in FRIENDS, you can hopefully grasp the issue and understand why it’s so blatantly incorrect.
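To spell out the inversion, here is a small sketch (my own reconstruction of what the bot appears to have done, not its actual code) that applies the forward shift to the first four letters and the backward shift to the last three, reproducing the broken K-A-P tail and the FRIEHXM gibberish:

```python
import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    # Shift each letter by `shift` positions: positive to encode, negative to decode.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text)

word = "FRIENDS"

# Forward shift on FRIE, backward shift on NDS -- the mid-word direction flip described above.
broken = caesar(word[:4], 3) + caesar(word[4:], -3)
print(broken)              # IULHKAP -- note the K-A-P tail

# Decoding that output with the normal backward shift yields the gibberish answer.
print(caesar(broken, -3))  # FRIEHXM
```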
It represents a highly problematic and faulty logic inversion.
There are multiple levels of serious integrity breach here.
Can anyone imagine a calculator company boasting a rocket-like valuation, billions of users, and billions of dollars invested by Microsoft, and then presenting…

Talk about zero trust (pun not intended).
An integrity breach is the only way to describe OpenAI’s operations. This reminds me of CardSystems-level negligence in attending to basic security.
Tens of Millions of Consumer Credit and Debit Card Numbers Compromised
If you considered confidentiality breaches like the one involving the Google calculator troublesome, brace yourself for OpenAI’s release of calculators that can’t calculate accurately.

Map of Google calculator network traffic flows

Unless there’s an intervention compelling AI vendors to adhere to integrity control requirements, security failures are poised to escalate significantly.
The landscape of security controls to prevent privacy loss changed dramatically after California enacted SB1386 in 2003, altering breach laws and their implications. Post-2003, the term “breach” took on concrete significance in relation to real dangers and risks. In response, companies found themselves compelled to act so the market would not deteriorate from a lack of trust.
But that was for confidentiality (privacy)… and now we enter into the era of widespread and PERSISTENT INTEGRITY BREACHES on a massive scale, an environment seemingly devoid of necessary regulations to maintain trust. The dangers we’re seeing right here and now serve as a stark reminder of the kind of tragically inadequate treatment of privacy in the days before breach laws were established and enforced.