Why is generative AI forcing us to rethink SEO?
We have entered the age of answers and witnessed the death of sources. In this article, we will theorize about the notions of searching, sources, and evaluation in light of generative AI.
We are living through a turning point in the history of the Web. The mass introduction of large language models (LLMs), such as Gemini, and their direct integration into search engines have challenged decades of established concepts. For us SEO professionals, the change is comparable to an earthquake. The way we think about search, sources, and evaluation is being fundamentally rewritten.
SEO has changed forever.
For years, the SEO mantra has been "create quality content." But, as Sundin points out, a fundamental question arises: "What happens when AI-powered information systems increasingly provide answers instead of directing people to the sources?"
Before I begin, I invite you to watch the video below:
The video analyzes the conference paper “Theorising Notions of Searching, (Re)Sources, & Evaluation in Light of Generative AI,” by Olof Sundin, professor of information studies. The main objective of the paper is to discuss the concepts of search, sources, and evaluation in an information infrastructure increasingly influenced by artificial intelligence.
A crucial distinction is made between the search for sources (documents containing information) and the search for answers (direct facts or information). In the article, Sundin uses a sociomaterial approach, treating information not as something abstract but as an active participant in the construction of our world, citing historical examples such as those of Suzanne Briet and Paul Otlet. A technical history of search infrastructures is presented, from ancient libraries and card catalogs to internet search engines and their evolution.
After reading the article, which I found thanks to the video I invited you to watch, I asked myself three questions:
- What happens when AI-powered information systems increasingly provide answers instead of directing people to the sources?
- Can Otlet's positivist notion of extracting abstract facts from documents (or information sources) be compared to the growing tendency of information systems to provide answers rather than sources?
- How can a sociomaterial understanding of information help us understand the increasingly AI-infused information infrastructure from the perspective of Library and Information Science?
Driven by these questions, inspired by this article and the video that explains it, I sought to delve deeply into this issue.
I invite you to explore Sundin's theoretical analysis and connect it directly to our Semantic SEO. The information infrastructure is changing, and our ability to be found depends on understanding this change.
Since information systems, libraries, and similar institutions have existed to organize information, the principles and methods for presenting and finding information have played a fundamental role. There are bibliographies, lists, card catalogs, book indexes, OPACs, classification systems, thesauri with vocabularies, web portals, bibliographic databases, reference management systems, and so on. These systems organize and control access to information sources. New and old technologies often merge. Web search engines depend on their index, which contains only a portion of the web. This means that when we use Google Search, we are searching an index, not the live web, and the size and up-to-dateness of the index are central quality criteria (Lewandowski, 2023, p. 41). These information systems always bring with them a particular perspective, a worldview, whether it be the representation of topics in library catalogs (Olson, 2002), web search engines (Lewandowski, 2023, p. 265; Noble, 2018) or generative AI (Sun et al., 2024).
SUNDIN, Olof. Theorising notions of searching, (re)sources and evaluation in the light of generative AI. Information Research: an international electronic journal, [S. l.], v. 30, p. 291-302, 2025. DOI: 10.47989/ir30CoLIS52258.
The tradition of information retrieval and the search for sources
To understand the disruption, we first need to solidify what it disrupts. The prevailing tradition of Library and Information Science (LIS), and by extension, traditional SEO, has always been to build systems that provide the user with a source of information.
As Sundin's article (and Hartel's video about it) reminds us, the technical history of searching begins with libraries and card catalogs. When you went to a library, the librarian (or the catalog) didn't give you the "answer"; they gave you a book, a document, or a periodical. They directed you to the source. The information was contained in a document.
Internet search engines, in their original conception, followed exactly this model.
A search engine like Google was, and largely still is, an information retrieval system that crawls, indexes, and ranks documents (pages). The SEO job consisted of ensuring that our document was considered the most relevant source for a specific query. The end product was a link to a page.
There was a clear distinction: the search for sources (documents containing information) and the search for answers (the facts or direct information). The SEO's job was to be the best intermediary.
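To make the "search for sources" side concrete, here is a minimal sketch of a toy retrieval system, assuming a tiny hypothetical three-page corpus and naive term-overlap scoring (nothing like Google's actual ranking): it builds an inverted index and returns ranked links, never an answer.

```python
from collections import defaultdict

# Illustrative corpus: URL -> page text (both hypothetical).
documents = {
    "https://example.com/semantic-seo": "semantic seo structures content around entities",
    "https://example.com/knowledge-graph": "the knowledge graph organizes facts about entities",
    "https://example.com/link-building": "link building raises authority signals",
}

# Inverted index: term -> set of documents containing it (the "crawl and index" step).
inverted_index = defaultdict(set)
for url, text in documents.items():
    for term in text.split():
        inverted_index[term].add(url)

def search(query: str) -> list[str]:
    """Rank documents by naive term overlap and return links, not answers."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for url in inverted_index.get(term, set()):
            scores[url] += 1
    return [url for url, _ in sorted(scores.items(), key=lambda item: -item[1])]

# The end product is a ranked list of sources the user must still read.
print(search("entities in the knowledge graph"))
```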
The semantic shift: from link to direct answer
This distinction began to blur long before generative AI. As Sundin's own article points out, Google initiated a substantial shift in 2012. This shift was the introduction of the Knowledge Graph.
Google has ceased to be merely a "page indexer" and has become a "fact organizer." It began the transition from a search engine to an answer engine. We then witnessed the birth of what we now call semantic SEO.
This change has roots in much earlier ideas, such as those of Paul Otlet, who dreamed of extracting "abstract facts" from documents to create a "universal book of knowledge." Google's Knowledge Graph is the digital embodiment of this positivist idea. Google began displaying "direct answers," like Featured Snippets.
Since then, our work as SEO professionals has changed. It was around that time that I understood that simply optimizing for keywords was no longer enough. After years of trial and error and frustration, I understood how to structure a project's entities using structured data, but only later did I learn how to use taxonomies and ontologies in this process. One of the goals became feeding Google's Knowledge Graph with my information. That's when the Semantic Workflow was born!
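As an illustration of what "feeding the Knowledge Graph" can look like in practice, here is a minimal, hypothetical sketch of schema.org JSON-LD markup assembled in Python; the author name, URLs, and date are placeholders, not real values, and the exact properties you use will depend on your content.

```python
import json

# Hypothetical schema.org JSON-LD describing an article and its entities,
# so that search engines can connect the content to known things in their
# knowledge graph. All names, URLs, and dates below are placeholders.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why is generative AI forcing us to rethink SEO?",
    "author": {
        "@type": "Person",
        "name": "Example Author",
        "sameAs": "https://www.linkedin.com/in/example-author",  # disambiguates the person
    },
    "about": {
        "@type": "Thing",
        "name": "Semantic SEO",
        "sameAs": "https://en.wikipedia.org/wiki/Semantic_search",  # anchors the topic to a known entity
    },
    "datePublished": "2025-01-01",
}

# This JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article_markup, indent=2, ensure_ascii=False))
```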
However, even in this model, the source was still (usually) visible in SERPs. The Featured Snippet contained a link. The Knowledge Panel cited sources like Wikipedia. But then came generative AI, which represents the culmination of a radical change, and this is where the connection to the source breaks down.
Generative disruption: answers without traceable sources
Olof Sundin's article accurately identifies the difference between a traditional search engine and an LLM application.
- Traditional search engines provide links to documents. They perform retrieval.
- Generative AI chatbots generate content. They perform generation.
Shah and Bender (2022) observe that this new conversational interaction “comes at the cost of direct access to sources.” The result of generative AI is “devoid of traceable connections to sources.” This is related to how the models were built, and even more so to how the Transformer architecture works.
This is what Bender et al. (2021) called a “stochastic parrot,” a term that has become very well-known. As explained in Hartel’s video, the LLM does not “understand” meaning in the human sense. It is a statistical model trained to generate “plausibly sounding word sequences” based on patterns observed in its training data.
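To make the "plausibly sounding word sequences" idea tangible, here is a deliberately oversimplified toy bigram generator (a caricature for illustration, not how a real LLM is built): it chains statistically likely next words from a tiny made-up corpus, with no notion of truth and no pointer back to any source.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real model is trained on vastly more text.
corpus = (
    "search engines retrieve documents and documents cite sources "
    "generative models generate answers and answers cite nothing"
).split()

# Count which word follows which (a crude bigram "language model").
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def parrot(seed: str, length: int = 8) -> str:
    """Generate a plausible-sounding sequence with no traceable source."""
    words = [seed]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # statistically likely, not verified
    return " ".join(words)

print(parrot("documents"))  # e.g. "documents cite nothing"
```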
The result appears coherent, but it lacks real meaning and a verifiable connection to a stable source. For Information Science, this is a "significant disruption." For SEO, it's an attribution nightmare.
The evaluation crisis
How can you trust a parrot?

The main argument of Sundin's article is that, with this change, "sources are becoming increasingly invisible." This leads us to the next pillar that has crumbled: evaluation.
Traditional information literacy, such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), relies entirely on the ability to investigate the source.
When dealing with retrieved information, we ask:
- Who is the author?
- What is the reputation of this publication?
- What is the publication date?
- What are its sources?
As the article that sparked my research points out, the rise of generative AI makes this traditional assessment “almost impossible.” When an LLM generates an answer, there is no author. There is no publication date. There is no “source origin” beyond the model itself.
Since Sundin wrote his text, Google has made several changes. Now, in most cases (limited to AI Overviews and AI Mode), we do get citations of sources, but in a decontextualized way: we don't know exactly which part of each source's text was responsible for the generated answer.
The user receives a text that sounds plausible, but it could be a "hallucination": a statistical collage of decontextualized claims that appear true but are factually incorrect. The problem is that, without a source to trace, the user has no way of knowing.
This has profound implications for students, researchers, and, of course, the general public. How can we theoretically navigate this scenario? The answer, for me, lies in doubling down on the principles of Semantic SEO.
The role of semantic SEO in information retrieval in the generative era
If generative AI is the new intermediary, how do we, as content creators and website owners, ensure that our information is found and, hopefully, credited?
The answer lies not in trying to "trick" the LLM, or in reheating old strategies, but rather in making our content the cleanest, most structured, and semantically rich data source possible for training and indexing these models.
This is where Information Retrieval meets Semantic SEO.
When I talk about semantic SEO, I'm talking about focusing on meaning, structuring information, and shaping content so that algorithms can understand it conceptually. I'm talking about applying a semantic work model from start to finish.
This is the new reality. SEO has become infinitely more complex. The SERP composed of a "ranking" of links is slowly dying, and we are now optimizing to be the most reliable and semantically understandable source of facts for the algorithms that feed the AI models.
I'm sorry if I made you anxious. After all, how do you do that?
That's why I'm creating the Semantic SEO Course – The Semantic Workflow. I invite you to follow me on LinkedIn; I'll be posting updates about its launch there.
The responsibility of the source in the new Semantic Web
Olof Sundin's article and Jenna Hartel's analysis are a crucial wake-up call for the Information Science community. But I argue that it's an even more urgent warning for us SEO professionals.
We are entering an era where information is divorced from its origin. The consequence of this is the erosion of trust. If an LLM gives me the wrong answer, who do I blame? How do I correct it? The "black box" of the "stochastic parrot" is hermetically sealed.
Sundin's article concludes with an essential observation: “Not citing sources for our claims was complicated before, but we could work backward and find the source. Today, with generative computing, that's not always possible. So, everyone committed to quality information needs to cite their sources.”
As SEO professionals, our role has evolved from "link builders" to "information architects." Now, we must become "knowledge curators."
Our job is not just to get Google to find our website; it's to ensure that AI has access to verifiable and structured facts. And, most importantly, we must advocate for and build systems that expose their sources, reward transparency, and allow users to do what Library Science and Information Science have always defended: evaluate the source of information.
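One possible way to make that citation of sources machine-readable is to publish each claim alongside an explicit reference, for example with schema.org's Claim type and its citation property. The sketch below is hypothetical: the claim text and article URL are placeholders, while the cited work is Sundin's paper referenced above.

```python
import json

# Hypothetical sketch: publish each claim together with an explicit,
# machine-readable citation so readers (and machines) can trace it back.
# The appearance URL is a placeholder; the cited work is Sundin (2025).
claim_with_source = {
    "@context": "https://schema.org",
    "@type": "Claim",
    "text": "Generative AI answers often lack traceable connections to sources.",
    "appearance": {
        "@type": "Article",
        "url": "https://example.com/generative-ai-and-seo",
    },
    "citation": {
        "@type": "ScholarlyArticle",
        "name": "Theorising notions of searching, (re)sources and evaluation "
                "in the light of generative AI",
        "author": "Olof Sundin",
        "identifier": "https://doi.org/10.47989/ir30CoLIS52258",
    },
}

print(json.dumps(claim_with_source, indent=2))
```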