Will the Placebo Effect Mold How We See AI? – Slashdot





People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.

That’s not because they are stupid, it’s because they are optimistic!
Stupid optimists are really the worst of all. They cannot deal with problems.
A pessimist is never disappointed.
People see what they want to see.
One thing I get endless amusement out of on social media, when anti-AI people are endlessly bashing an AI work as awful, is presenting them with some award-winning photograph or masterpiece painting, telling them it’s AI, and commenting about how great it is. They’ll invariably, in great detail, pick apart the image bit by bit, talking about how awful and fake and soulless it is.
The opposite works as well: giving them a high-quality AI work and claiming that it’s human-made.
Who is Eliza?
From a chatbot:

The ELIZA conversational AI program was first implemented in 1964-1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory:

– ELIZA was one of the earliest natural language processing computer programs capable of basic human conversation.

– It operated by pattern matching input text and providing pre-programmed conversational responses using tricks like rephrasing statements as questions.

– ELIZA was named after the Eliza Doolittle character from the play Pygmalion which dealt with language and class issues.

– Weizenbaum developed ELIZA to satirize the limitations of human psychotherapy that he felt relied too much on rote responses.

– ELIZA took the role of a Rogerian psychotherapist, responding to patient statements by turning them into open-ended questions.

– Despite its simplicity, ELIZA proved surprisingly convincing and influenced later chatbot development.

– ELIZA source code was published in 1966 and it became a popular demonstration program at universities in the 60s and 70s.

So the original ELIZA program emerged in the mid-1960s as one of the pioneering early conversational agents, influencing AI research for decades after its deceptively human-like interactions.
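
The pattern-matching trick described in that summary is simple enough to sketch. Below is a minimal, hypothetical illustration in Python (made-up rules, not Weizenbaum’s original 1966 script): keyword patterns are matched against the input, the captured fragment is echoed back as an open-ended question, and a stock fallback covers everything else.

import re

# Minimal ELIZA-style responder: a sketch of the technique described above,
# not the original script. The rules are invented for illustration.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(text: str) -> str:
    # Try each rule in order; echo the captured fragment back as a question.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I need a happy place"))    # Why do you need a happy place?
print(respond("It is about my brother"))  # Tell me more about your brother.
print(respond("The weather is nice"))     # Please go on.

The real program also ranked keywords and swapped pronouns in the echoed fragment (“my” becomes “your”), but the whole mechanism is essentially this kind of trigger-word-and-stock-answer table, the same thing a commenter below recalls finding stored in an array in the TRS-80 BASIC version.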

This is ELIZA:
I am the psychotherapist. Please, describe your problems. Each time
you are finished talking, type RET twice.
Is AI actually a thinking caring person?
What do you think?
No, I think it is a soulless machine.
You have your mind on machines, it seems.
Yes
I see… Well, why do you say this is so?
Because machines are the topic that I’m discussing?
Is it because machines are the topic that you are discussing that you
came to me?
Yes, and also because you are ELIZA
Eliza? Hah! I would appreciate it if yo
And if I gave you some physics theory you didn’t understand, you’d get the wrong impression about it. What’s yer point?
My brother and I loaded Eliza on a Radio Shack TRS-80 back in 1979 and captured the entire interaction.
We were programmers and we’d looked at the code (in BASIC) and understood that it was clever, but mostly a game that was boring after using it a few times. Stock answers and trigger words were stored in an array.
We left my older sister in the room, alone, and looked at the dialog later. She was a little unstable at the time and the conversation degraded into a repetitive all-caps exchange:

WHERE CAN I FIND MY HAPPY PLACE?
>You feel you need a happy place?

WHERE CAN I FIND MY HAPPY PLACE?
>Is a happy place important to you?

WHERE CAN I FIND MY HAPPY PLACE?
>Would a happy place make you happy?

WHERE


People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.

Because smart people suspected that their interlocutors were bots in the 1960s, when the concept of bots didn’t even exist?
What future knowledge should we all have today so as not to be “stupid” by 2090 standards?
I named it “Chat with the Sysop” and used a version that I could compile myself in Turbo Pascal, and also had a config file where I could add some phrases and responses that sounded like me.
The result was that some people spent the full 60 minutes I allocated at a time with the door trying to talk with “me”. Some figured it out quick and got out of the interface. I figured it was a Turing test of sorts. The people who spent a lot of time with it failed the test.
There’s an amazing series of videos that prank scam callers with a recording of an elderly man named “Lenny” who, at predetermined intervals whenever a pause in speaking was detected, would say things like “Hmm, yes, yes” and “Well, that reminds me of something my daughter used to say” and general meaningless nonsense like that. It would end up with these sprawling conversations between a scammer and a tape recording, usually ending with a very frustrated scammer unable to get Lenny to focus on the subject.
No one suggests that a “good” drug is caring or that an ineffective drug is “manipulative.” Neither drugs nor AI have feelings of any kind. The placebo effect is about how people believe that a drug *works* when it’s actually doing nothing, because they believe it’s doing something.
Does AI *work* to help people with psychological problems? Perhaps, if it produces a good psychological effect. If it does help people, *then* we can analyze whether that help is real, or a placebo.
Yep, this has fuck-all to do with the placebo effect and is just shit journalism.
The older study they mention did involve one, though, since people believed AI was helping them when it wasn’t:
“The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn’t — there wasn’t an AI involved).”
Unsurprisingly, the new study never mentions placebo.
> The placebo effect is about how people believe that a drug *works*
No, the point of the placebo effect is that when they don’t take the drug, believing they do, it has the same effect as taking the drug! It is proof of mind over matter.
The placebo effect is even stranger [newscientist.com]:
1 The placebo effect
Don’t try this at home. Several times a day, for several days, you induce pain in someone. You control the pain with morphine until the final day of the experiment, when you replace the morphine with saline.
I don’t think we’re disagreeing.
When a new drug is going through medical trials, one of the primary goals is to determine whether the drug can outperform a placebo. If it doesn’t, then the drug is itself just a placebo.
How is this the placebo effect? It just sounds like people forming opinions and adjusting their behavior partly based on the information they’ve been given. There’s none of the mind-over-matter weirdness of the placebo effect.
I spotted that as well.
The term “expectation effect” or “expectation bias” could better describe the phenomenon where individuals’ experiences or perceptions are influenced by the expectations they have been given. It’s a broader term that encompasses situations where people’s beliefs about something can shape their actual experiences. So, “The Expectation Effect of AI Chatbots” might be a more appropriate title. [with the help of a chatbot]
Further info on priming:
1) When you are primed on something, everything *related* to that thing is also primed.
I made a demonstration slide that I sometimes use when speaking to an audience. It has a paragraph about shoes, and I invite the audience to look over the paragraph and notice something unusual about it.
I then ask the audience to complete the 4-letter word, and show S O L _ on the screen.
Just about everyone in the audience chooses “S O L E”. If you’re reading this, you probably thought “S O L E” as well.
6040718, we have your number.
Summarizing study results by combining actual numbers (e.g. 24%) with broad terms that are interpreted differently by different people (e.g. “many” or “most”) is, in itself, rather manipulative. It seems to be a common tactic used by people whose research didn’t produce the cut-and-dried results they were hoping for.
The chatbots I use do not have embedded ads. What are you on about?
says “no”