The Future of AI is Now – University of California San Diego

This story was published in the Fall 2023 issue of UC San Diego Magazine.
Artificial intelligence (AI), in all its varied forms, is rapidly accelerating. From ChatGPT to facial recognition, self-driving cars and virtual assistants such as Alexa and Siri, AI is already a part of our everyday lives. In addition, machine-learning algorithms trained on massive datasets provide a powerful tool for advancing scientific research. We are living through a technological revolution, and AI is quickly ushering us into tomorrow.
But what are the implications? And what does the future hold? Six UC San Diego experts weigh in.
Recent advances in AI may seem revolutionary, but we’re just getting off the ground — and the future is beyond the scope of our limited imaginations, says Terrence Sejnowski, distinguished professor in the Department of Neurobiology at UC San Diego and holder of the Francis Crick Chair at the Salk Institute for Biological Studies. 
Sejnowski refers to this moment in history as “the Wright brothers stage,” drawing a parallel to the first powered flight in 1903, which spanned a few hundred yards and reached an altitude of just 10 feet. At the time, no one — not even the Wright brothers themselves — understood just how significant this achievement was, or the ways in which aviation would one day transform the world. 
“I would not believe anything that anybody said about predicting the future because I don’t think we have a good enough imagination to know where things are headed,” Sejnowski advises. “Whenever you have a new technology, it plays out in ways you can’t imagine.”
These insights are particularly consequential coming from Sejnowski, who in the 1980s was part of a small group of pioneering researchers who founded deep learning and neural networks, the subset of AI that powers today’s chatbots. Sejnowski, along with Geoffrey Hinton (often referred to as the “godfather of AI”), questioned the “logic-and-symbol”-based AI that was most prevalent at the time, and they developed their own version fueled by data and modeled after the human brain. 
Today’s large language models such as ChatGPT are a type of neural network. If you look “under the hood,” Sejnowski explains, what you find are simple units that look a lot like the neurons in the brain, connected together with weights that are variable, much like the synapses between neurons. Neurons have synaptic plasticity, which means that as you learn, you change the “weights” in your brain. Large language models are trained on data in much the same way.
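Sejnowski's "under the hood" description can be sketched in a few lines of code. The toy example below is illustrative only, not how ChatGPT is actually built: it shows a single artificial neuron whose adjustable weights play the role of synapses. Repeated exposure to training data gradually shifts the weights, much as synaptic plasticity shifts connection strengths in the brain.

```python
import math

def unit(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum passed through a squashing function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def learn_step(inputs, weights, bias, target, lr=0.5):
    """One learning step: nudge the weights (the 'synapses') toward the target output."""
    out = unit(inputs, weights, bias)
    grad = (out - target) * out * (1.0 - out)  # derivative of the sigmoid
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Showing the same example many times gradually changes the weights,
# which is what "training on data" means for a network like this.
w, b = [0.1, -0.2], 0.0
for _ in range(1000):
    w, b = learn_step([1.0, 0.5], w, b, target=1.0)
```

Real language models stack millions of such units into many layers and train them on enormous text corpora, but the core idea of learning by adjusting connection weights is the same.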
“Modern AI is all based on the basic principles of neuroscience,” says Sejnowski. Conversely, as the fields of AI and neuroscience continue to converge, advances in large language models, such as the utilization of transformers — a type of neural network that learns context — are impacting the way neuroscientists think about the brain. For now, there are still many features of the brain that aren’t incorporated into these transformers. ChatGPT and other large language models don’t yet have goals or long-term memory, but Sejnowski says they will, and that’s where we’re headed.
“AI is going to make you smarter and enhance your cognitive power,” speculates Sejnowski. “It’s not going to take a job away, but it’s going to change your job. Your job may end up being very different someday, but it’s going to be more interesting. I’m pretty sure about that.”
The genie is out of the bottle and there’s no turning back now, according to David Danks, MA ’99, PhD ’01, professor of data science and philosophy, in response to the release of multimodal GPT-4 by OpenAI, creator of ChatGPT, earlier this year. The technology, trained on unprecedented amounts of data, is adept at summarizing content and generates the highest-quality text we have seen from a bot. Today, generative AI is providing humans with capabilities we’ve never had before. But at what cost?
“ChatGPT is not designed to succeed at giving us information or truth. It’s designed to give us the most probable completion,” explains Danks, alluding to the architecture that underlies ChatGPT, which, put simply, is built to predict the most probable next word in a sequence. While there are undeniable benefits that come from outsourcing certain tasks to technology, Danks, who serves on both the National AI Advisory Committee and the advisory board of the UC San Diego Institute for Practical Ethics, believes that people should be seriously concerned about the potential of these systems to inflict real harms on society without policies or regulations in place. For example, Danks is concerned about the potential for an increased spread of misinformation as generative AI becomes more widely used. “When we think about ethics and technology in general, we should always ask, ‘Whose values are being advanced or harmed by which uses?’” He adds, “We have to recognize that those capabilities could be put to problematic uses. We should be asking, not ‘What is the AI going to do to us?’ but rather, ‘What are we going to do to each other, facilitated or accelerated by AI?’”
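Danks's point, that these systems are built to produce the most probable continuation rather than the true one, can be illustrated with a deliberately tiny stand-in. The bigram counter below is a sketch under strong simplifying assumptions (real models use transformer networks trained on vast corpora, not word counts), but the principle it demonstrates is the same: the "answer" is whatever most often followed the prompt in the training data, regardless of whether it is correct.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows each word in the training text."""
    words = text.split()
    follow = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1
    return follow

def most_probable_next(follow, word):
    """Return the statistically most likely continuation, not the 'true' one."""
    if word not in follow:
        return None  # the model has nothing to say about unseen words
    return follow[word].most_common(1)[0][0]

# A toy "corpus"; the model simply echoes its statistics back.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
```

Asked what follows "on," this model answers "the" because that is what the data contained, which is exactly the behavior, scaled up enormously, that makes large language models fluent yet indifferent to truth.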
And it isn’t just ChatGPT Danks is worried about. Other tools can produce audio clips, images and even videos so realistic they’re virtually indistinguishable to the human eye and ear. 
“If there’s a YouTube video of you on the internet, someone can create audio of you saying anything they want,” Danks warns. “That should terrify you.” 
While his concerns about the exponential growth of AI are very real, the fact that the ethical implications are already at the forefront of public discourse gives him a sense of hope for the future.
“Unlike some technologies, where it’s taken years for people to recognize the ethical challenges, I think we’ve already seen it in a matter of months. Harms have been done and harms will continue to occur, but I am cautiously optimistic that we can make real progress moving forward so that we can minimize the harms and maximize the benefits,” says Danks.
Could AI help save the ocean’s coral reefs?
As researchers in Stuart Sandin’s lab at Scripps Institution of Oceanography work to better understand why certain corals survive, and even grow, following marine heat waves and other climate-change-induced stressors, they are using AI-assisted tools to accelerate their workflows while tracking changes in individual reefs over time. 
One of these researchers is marine ecologist Beverly French, MS ’16, PhD ’22, a member of the inaugural cohort of fellows selected for the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship. Her work is funded as part of a $148 million initiative to support postdoctoral researchers who are applying AI in scientific research at nine top universities worldwide, including UC San Diego.
French has long been intrigued by the fact that coral reefs have existed for hundreds of millions of years, surviving significant environmental disruptions and mass extinction events. Her work involves metagenomic sampling, or extracting DNA from corals, to further the scientific community’s understanding of what makes the entire coral holobiont — a term for coral, its symbiotic algae and microbiome — survive and adapt.
Key to this effort is layering her findings with other relevant data: namely, from the large-area imagery collection and processing that researchers in the Sandin Lab have been conducting for years. Their work initially involved manually tracing corals from underwater images collected during site surveys and interpreting those images to identify coral species. But with nearly a million corals across 100 islands, that is an enormous amount of data to process. Now, this time-consuming work is performed with the help of AI-assisted tools that can extract these data from imagery, significantly speeding up the process by which the team can observe how these corals are growing, shrinking, changing, dying and recruiting.
The massive datasets and advances in high-performance computing underlying today’s AI revolution aren’t just fueling chatbots and image generators. French and her colleagues in the Sandin Lab — as well as scientists across numerous fields and disciplines at UC San Diego and beyond — are using custom machine-learning tools to take on the more tedious tasks involved in their work while ensuring their expertise remains central. In these applications, AI is proving to be a game changer for advancing scientific discoveries.
“Rather than blindly trusting the machines and algorithms, I appreciate that using these human-centered AI approaches can empower both machines and humans to do better science,” says French. “It’s an iterative process of collaboration between machine and human that makes us both better in the end.”
“This message was automatically generated and edited by Dr. ______.”
This is the disclaimer attached to messages received by some UC San Diego Health patients this year via the MyChart online health portal in response to inquiries about refills, test results, paperwork and other general questions.
As one of the first health systems in the country to participate in a pilot program integrating Microsoft generative AI services such as GPT-4 into its electronic health record, UC San Diego Health continues to lead the way in exploring the potential applications of AI in medicine.
When responding to messages from patients, approximately 100 physicians participating in the pilot are provided a draft reply generated by ChatGPT that’s based on the initial message and the patient’s electronic medical history. They can then edit the draft for content and tone and approve it before sending it.
In a high-risk environment such as the health care setting, says Christopher A. Longhurst, ’96, MD, chief medical officer and chief digital officer at UC San Diego Health, keeping the “human in the loop” is critical — particularly considering that ChatGPT has the potential to generate inaccurate responses.
“We’re caring for patients and sometimes hold their very lives in our hands. We have to be very thoughtful and measured in how we implement these things,” says Longhurst.
According to Longhurst, primary care physicians have seen a three- to four-fold increase in patient messages since the onset of the COVID-19 pandemic. Often, they are answering messages during what he refers to as “pajama time” — the evening hours at home that otherwise would be spent with family or friends. He sees this inbox overload as a major contributor to physician burnout.
If the pilot program is effective in reducing the amount of time that clinicians spend replying to messages while simultaneously improving patient care and satisfaction, this AI-powered advancement could be transformative for the health care industry. But Longhurst believes it won’t stop there.
He says the implementation of GPT-4 in doctor-patient messaging merely scratches the surface of the potential applications of generative AI in medicine. In the near future, tools such as ChatGPT will likely be used to summarize patient charts, draft incident reports and provide medical imaging analysis. Longhurst hopes that generative AI will one day provide diagnostic decision support — essentially assisting doctors in diagnosing or identifying conditions.
“There’s a spirit of innovation at UC San Diego and UC San Diego Health,” says Longhurst. “There’s a lot of willingness to try new things, to invent the future and not be satisfied with our current solutions.”
As sophisticated generative AI tools become easily accessible to the public at little to no cost, people are going to become much more skeptical of information being presented to them via the internet, social media and news outlets — and even more suspicious of each other.
According to R. Stuart Geiger, assistant professor of communication and data science, the institutions on which our society is built — from education to science to journalism — were already in crisis and are struggling with these new advances. But Geiger (who uses they/them and he/him pronouns) does not believe that this so-called “AI revolution” is really that different from other technological advances throughout history that have threatened to displace or change the conditions of labor.
“I see it as yet another thing that we’re going to have to deal with. It feels novel right now and it feels scary now, but I think in a few years, it’s going to be normalized — it’s going to be as ubiquitous as Photoshop,” they say.
Nevertheless, Geiger has real concerns about the barrage of disinformation already generated by AI tools that can produce text, audio clips, images and videos. And given the current social and political climate, there’s not only a demand for disinformation, but there’s also “big money” to be made from it. They suggest the federal government should consider whether certain kinds of disingenuous or unlabeled use of AI-generated content, in a commercial context or in the context of a political campaign, might constitute unfair practices. 
Geiger also believes that the developers of these tools have a responsibility to build protective measures into new technologies to mitigate some of the potential harms. They argue that while AI companies are reaping the benefits, the rest of society is bearing the bulk of the costs.
“We need to be thinking about the role that technologists play in their responsibilities to society,” says Geiger. “Many drop the bomb and then walk away with their backs turned and don’t look at the explosion, and then it’s the rest of society that has to catch up.” 
For many people, robots might be the first thing that comes to mind when they hear the term “artificial intelligence.” But the two fields, though often conflated, are their own distinct disciplines. Currently, most robots that interact with humans outside of research spaces — such as the robot vacuums and drones that have become common household items — are built to perform highly specific duties. They largely aren’t able to move beyond a single, specialized function.
With public attention now fixated on ChatGPT and other recent advances in generative AI, some may wonder what this means for the future of robotics. Will we soon be surrounded by artificially intelligent robots that are capable of thinking — and acting — like humans?
At UC San Diego, Laurel Riek, director of the Healthcare Robotics Lab and professor of computer science and engineering with a joint appointment in the Department of Emergency Medicine, has worked at the intersection of AI and robotics for decades. Her areas of research include building robots for health care applications, studying human-robot interaction and exploring the ethical and social implications of technology.
When it comes to developing new AI-enabled technologies, Riek believes that engineers and developers have a responsibility to think through the social issues and potential pitfalls that might be introduced if they are deployed for public use.
“As researchers, we have ethical principles that guide us when we do these types of technology deployments,” says Riek, who describes the future of AI as nuanced. “We can build anything, but that doesn’t mean we should,” she adds.
When Riek and her students in the Healthcare Robotics Lab develop and build new technologies designed to assist patients and clinicians, she says they remain mindful of the community’s needs, the type of data they’re collecting, how the robots will interact with humans and how to ensure the protection of individual privacy.
With this very deliberate and mindful approach, Riek and her team have leveraged the capabilities of AI to build and program a Cognitively Assistive Robot for Motivation and Neurorehabilitation (CARMEN), a social robot that’s designed to teach cognitive strategies related to memory, attention, organization, problem solving and planning to help people with dementia or mild cognitive impairment. It can learn about the person and personalize its interactions based on the individual’s abilities and goals. Prototypes of CARMEN are currently being used to provide cognitive interventions for individuals affiliated with the George G. Glenner Alzheimer’s Family Centers in San Diego.
Artificially intelligent robots like CARMEN have the potential to improve access and increase independence for individuals with disabilities. Yet, Riek says it is important they are deployed in an ethical manner, mindful of their effects on individuals and communities. 
“It’s been exciting to start to think through these questions in a grounded and real-world problem domain,” says Riek. “AI ethics research can sometimes be broad and far-future, but this is a real, true problem that we’re solving.”
It may look vastly different from the dystopian scenes portrayed in movies over the past several years, but this is no hallucination: The future has arrived, and change is imminent. Driven by curiosity and the desire to make the world a better place, researchers at UC San Diego have been at the forefront of AI innovation for decades and continue to lead the way in leveraging these new capabilities to develop solutions to the most pressing global challenges.