Living in a generative AI world – meer.com
Mainstream debates about artificial intelligence often focus on what AI can do—how well it performs tasks, the risks of hallucinating false information, or the ethical boundaries it should not cross. These are critical concerns. But beneath these surface-level questions, a deeper transformation is taking place—one that reshapes how meaning itself is created, stabilized, and shared in a world saturated with technology. To fully grasp the significance of generative AI, we need to move beyond thinking about AI as a tool that simply produces outputs and begin to see it as something that changes the very conditions of communication and understanding.
Throughout history, each major communication technology—from oral storytelling to writing, printing, and broadcasting—has restructured the way societies create and circulate meaning. Oral cultures relied on memory and shared presence; writing allowed ideas to be fixed in time and space; printing enabled mass dissemination; and broadcasting created one-to-many forms of communication. Each of these media not only transmitted content but also shaped how people thought, what they valued, and how they related to each other.
Generative AI, however, introduces a new form of mediation. It doesn’t just transmit meaning like earlier media; it simulates it. This means AI can generate text, images, or sounds that look and feel like human expression, but without actually engaging in the human process of making meaning. It mimics language, but it doesn’t “understand” what it says. It resembles creativity, but it doesn’t “intend” its creations. In short, generative AI creates outputs that appear meaningful, but these outputs do not emerge from a conscious or interpretive mind.
This shift can be understood through the concept of simulation. Philosopher Jean Baudrillard described simulation as the generation of something that seems real, but which has no origin in reality—a copy without an original. Generative AI operates in this space. When you ask an AI like ChatGPT to write a story or answer a question, it doesn’t retrieve information in the way a library would. Instead, it predicts what words are likely to come next based on patterns in vast amounts of data. The result is something that looks meaningful to us, but for the AI, there is no meaning—just patterns and probabilities.
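To make the idea of pattern-based prediction concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and samples continuations by frequency alone. This is not how ChatGPT actually works (modern models are large neural networks trained on vast corpora), but the underlying principle it illustrates is the one at issue: the system produces plausible-looking sequences from statistical regularities, with no comprehension anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; in a real model this would be a vast body of text.
corpus = (
    "meaning is made by people . "
    "meaning is shared by people . "
    "meaning is made in conversation ."
).split()

# Count bigram frequencies: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "sentence": a plausible-looking sequence, no understanding involved.
random.seed(0)
out = ["meaning"]
for _ in range(5):
    out.append(predict_next(out[-1]))
print(" ".join(out))
```

The output reads as if someone meant it, yet the program has only counted and sampled, which is the essay's point in miniature.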
This doesn’t mean that AI-generated content is useless or meaningless to humans. On the contrary, people often find value in what AI produces. But the meaning we get from AI is not created by the AI itself—it is co-produced by us when we interpret and respond to what it generates. This introduces a new dynamic: meaning is no longer something that is simply communicated from one human to another, but something that is increasingly mediated by systems that simulate rather than understand.
In this context, we can think of generative AI as part of what I call a semantic operating system. Just like a computer’s operating system manages files and processes, a semantic operating system organizes how meaning is created, shared, and stabilized in a society. Generative AI becomes part of this system, influencing what kinds of expressions are possible, what counts as knowledge, and how we relate to information. Importantly, this system doesn’t just reflect our world—it helps shape it.
This idea forms the core of my forthcoming book, AI and the Mediation of Meaning. The book explores how generative AI reconfigures the social processes through which meaning is negotiated and sustained. Drawing on systems theory, hermeneutics, media theory, and cultural sociology, it argues that generative AI should not be understood merely as a technological tool, but as a cultural technology—a participant in the broader ecology of meaning that constitutes our social world. AI mediates meaning not by understanding, but by reshaping the conditions under which understanding takes place.
A central argument of the book is that we are witnessing the rise of a new mode of meaning mediation. Previous media stabilized meaning by anchoring it in human practices of interpretation, whether through dialogue, texts, or shared rituals. Generative AI, by contrast, produces a fluid, dynamic field of simulated expressions that invite, but do not require, human interpretation. This changes how we approach knowledge, creativity, and communication. The book proposes that we need to develop new literacies and frameworks for engaging with this hybrid ecology of meaning, where human sense-making coexists with machine-generated simulations.
This essay is the first in a series that will unfold these themes in greater depth. Two further essays will follow, each building on the ideas introduced here and preparing the ground for the book’s full argument.
The next essay, Generative AI and the Crisis of Objectivity, will explore how AI challenges traditional notions of truth and objectivity. It will argue that AI’s capacity to produce convincing but unverifiable outputs creates a tension with scientific and journalistic standards that rely on stable facts and transparent sources. Drawing on the history of epistemology and systems theory, this essay will suggest that we need to rethink objectivity not as a static property but as a dynamic process of communicative stabilization—one that now includes the role of AI in shaping what is seen as credible or real.
The third essay, Retrieval-Augmented Generation and the Future of Knowledge, will examine how new AI architectures, such as Retrieval-Augmented Generation (RAG), mediate between stored information and generated expression. RAG systems combine the generation of new content with retrieval from existing databases, creating a hybrid model of knowledge production. This essay will conceptualize RAG not just as a technical improvement, but as a model for how AI and human systems can co-evolve in the ongoing negotiation of meaning. It will argue that RAG embodies the shift from knowledge as fixed content to knowledge as mediated interaction—a theme that will be central to the book’s later chapters.
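The two-step structure of RAG can be sketched schematically. In the sketch below, retrieval is naive word overlap and "generation" is a stand-in template; production systems use vector embeddings and a neural language model, and the document names and scoring here are purely illustrative. What the sketch shows is only the shape of the hybrid: stored content is first retrieved, then generated expression is conditioned on it.

```python
# Schematic sketch of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve stored passages relevant to a query, then condition generation
# on them. All content here is illustrative, not a real RAG implementation.

documents = [
    "Writing allowed ideas to be fixed in time and space.",
    "Printing enabled mass dissemination of texts.",
    "Broadcasting created one-to-many communication.",
]

def retrieve(query, docs, k=1):
    """Rank stored documents by how many query words they share (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, context):
    """Stand-in for a language model conditioned on retrieved context."""
    return f"Q: {query}\nContext: {context[0]}\nA: (generated continuation)"

query = "How did printing change communication?"
print(generate(query, retrieve(query, documents)))
```

Even in this caricature, knowledge appears not as fixed content returned verbatim but as a mediated interaction between what is stored and what is generated.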
Together, these essays will offer a roadmap for understanding how generative AI is reshaping the symbolic architectures of social life. They invite us to think beyond AI as a tool and instead consider how it participates in the living ecology of meaning that defines our shared world. Through this exploration, we can begin to see not only the risks but also the possibilities of a future where meaning is increasingly co-shaped by both human and non-human actors.