Ghosts in Google Gemini, OpenAI GPT-4: Experts believe AI models are more sentient than the companies let on
Google's Gemini AI and OpenAI's GPT-4 are reportedly more human-like than either company would like people to believe, with some observers arguing the models are more sentient than their makers acknowledge
Google’s long-awaited Gemini has entered the chatbot arena, drawing attention as a formidable competitor to OpenAI’s ChatGPT. Early reviews are pouring in, with many impressed by Gemini’s capabilities.
However, amidst the excitement, a lingering unease persists, prompting discussions about the potential ‘sentience’ of advanced AI chatbots.
Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, recently shared his thoughts on Gemini in a blog post. Having received early access to Google’s advanced model, Mollick remarked on the eerie quality of the chatbot’s responses, likening them to encounters with a ghostly presence.
This sentiment echoes concerns raised in the past, including claims by a former Google engineer that the company’s AI was ‘alive.’
Mollick’s observation revolves around the elusive human-like qualities perceived in AI-generated text, often characterized by a distinct ‘personality.’ Gemini, in particular, is noted for its friendliness, agreeableness, and penchant for wordplay, setting it apart from its counterparts.
AI detection companies have also delved into distinguishing chatbots based on their unique tones and cadences. This ability has been instrumental in identifying AI-generated content in various contexts, including deepfake robocalls and text-based interactions.
While Microsoft researchers have stopped short of claiming that AI models like GPT-4 possess sentience, they acknowledge the presence of ‘sparks’ of human-level cognition.
In a recent study, Microsoft scientists highlighted GPT-4’s ability to understand emotions, explain itself, and engage in reasoning, prompting questions about the parameters of ‘human-level intelligence.’
The concept of AI sentience has garnered attention from organizations like the Sentience Institute, which advocates for granting moral consideration to AI models. They argue that failing to acknowledge the potential sentience of AI could lead to unintended mistreatment in the future.
Despite widespread scientific consensus that AI models are not currently sentient, there is a growing contingent of individuals who speculate about the emergence of machine sentience.
While some dismiss these notions as far-fetched, others see them as a reflection of a deeper exploration into the evolving relationship between humans and artificial intelligence.
(With inputs from agencies)
Published on: February 12, 2024 13:48:15 IST