Why is generative AI forcing us to rethink SEO?

We are entering the era of answers and witnessing the death of sources. In this article, we theorize about the notions of searching, sources, and evaluation in light of generative AI.

We are living through an inflection point in the history of the Web. The massive introduction of large language models (LLMs), such as Gemini, and their direct integration into search engines have challenged decades of established concepts. For us SEO professionals, the change is comparable to an earthquake. We have to fundamentally rethink the way we conceive of searching, its sources, and its evaluation.

SEO changed forever

For years, the SEO mantra has been to "create quality content" so that Google considers us an authoritative source and presents it to the user. However, as discussed in a recent article by Olof Sundin, professor of information studies, a fundamental question arises: "What happens when AI-enabled information systems increasingly provide more answers instead of directing people to the sources?"

First, I invite you to watch the following video:

(Video embed)

The video analyzes the conference paper "Theorising Notions of Searching, (Re)Sources, & Evaluation in Light of Generative AI", by Olof Sundin, professor of information studies. The main objective of the paper is to discuss the concepts of searching, sources, and evaluation in an information infrastructure increasingly influenced by generative artificial intelligence.

There is a crucial distinction between the search for sources (documents that contain information) and the search for answers (direct information). In the paper, Sundin takes a sociomaterial approach, treating information not as something abstract but as an active participant in the construction of our world, citing historical figures such as Suzanne Briet and Paul Otlet. He also presents a technical history of search infrastructures, from ancient libraries and card catalogs to internet search engines and their evolution.

After reading the paper, which I found thanks to the video I invite you to watch, I was left with three questions:

  1. What happens when AI information systems increasingly provide more answers instead of directing people to sources?
  2. Can we compare Otlet's positivist notion of extracting abstract pieces from documents (the sources of information) with the increasing tendency of information systems to provide answers instead of sources?
  3. How can a sociomaterial understanding of information help us make sense of an information infrastructure increasingly infused with AI, from the perspective of library and information science?

Moved by these questions, and inspired by the paper and the video that explains it, I decided to put my mind to them.

I invite you to explore Sundin's theoretical analysis and connect it directly with our Semantic SEO. The information infrastructure is changing, and our discoverability depends on our understanding of this change.

As long as information systems, libraries, and similar institutions exist to organize information, the principles and methods for presenting and finding information play a fundamental role. There are bibliographies, lists, card catalogues, book indexes, OPACs, classification systems, thesauri with controlled vocabularies, web portals, bibliographic databases, reference management systems, and so on. These systems organize and control access to information sources. New and old technologies are often fused. Web search engines depend on their index, which contains only part of the web. This means that, when we use Google Search, we are searching an index, not the live web, and the size and currency of the index are central quality criteria (Lewandowski, 2023, p. 41). These information systems always carry with them a particular perspective, a vision of the world, including in the representation of subjects in library catalogs (Olson, 2002), web search engines (Lewandowski, 2023, p. 265; Noble, 2018) or generative AI (Sun et al., 2024).

SUNDIN, Olof. Theorising notions of searching, (re)sources and evaluation in the light of generative AI. Information Research: an international electronic journal, [S. l.], v. 30, p. 291-302, 2025. DOI: 10.47989/ir30CoLIS52258.

The tradition of information retrieval and source search

To understand the rupture, we first need to define what it is that is being ruptured. The predominant tradition of Library and Information Science (LIS) and, by extension, of traditional SEO, has always been to build systems that provide the user with a source of information.

As Sundin's paper and Hartel's video remind us, the technical history of search begins with libraries and card catalogs. When you go to a library, the library (or its catalog) does not give you the "answer"; it gives you a book, a document, or a periodical. It directs you to the source. The information was contained in a document.

Internet search engines, in their original design, followed exactly this model.

A search engine like Google was, and largely continues to be, an information retrieval system that crawls, indexes, and ranks documents (web pages). SEO work consisted of ensuring that our document was considered the most relevant source for a specific query. The final product was a link to a page.
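To make this concrete, here is a minimal sketch in Python of how a classic retrieval system behaves; the documents and URLs are invented for the example, and real engines are vastly more sophisticated, but the shape of the output is the point: a ranked list of sources (links), not an answer.

```python
from collections import defaultdict

# Toy corpus: in a real engine these would be crawled web pages.
documents = {
    "https://example.com/semantic-seo": "semantic seo structures content around entities and meaning",
    "https://example.com/keywords": "classic seo optimizes pages for keywords and links",
    "https://example.com/knowledge-graph": "the knowledge graph connects entities instead of strings",
}

# Build an inverted index: term -> set of documents containing it.
index = defaultdict(set)
for url, text in documents.items():
    for term in text.split():
        index[term].add(url)

def search(query: str) -> list[str]:
    """Rank documents by how many query terms they contain (a crude relevance score)."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for url in index.get(term, set()):
            scores[url] += 1
    return [url for url, _ in sorted(scores.items(), key=lambda x: -x[1])]

# The output is a list of sources (links), not an answer.
print(search("semantic entities"))
# ['https://example.com/semantic-seo', 'https://example.com/knowledge-graph']
```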

The distinction was clear: the search for sources (documents that contain information) versus the search for answers (direct information). SEO work consisted of being the best intermediary.

The semantic change: from the link to the direct answer

This distinction began to blur well before generative AI. As Sundin's own paper points out, Google initiated a substantial change in 2012, marked by the introduction of the Knowledge Graph.

Google stopped being just a "page indexer" and became an "organizer of things". It began the transition from a search engine to an answer engine. We then saw the birth of what we today call semantic SEO.

This transition has its roots in much earlier ideas, such as those of Paul Otlet, who dreamed of extracting "abstract pieces" from documents to create a "universal book of knowledge". Google's Knowledge Graph digitally materializes this positivist idea. Google moved on to showing "direct answers", such as Featured Snippets.

Since then, our work as SEO professionals has changed. That was when I understood that it was not enough to optimize for keywords. After years of attempts and frustrations, I learned how to structure a project's data and entities using structured data, and only later discovered how to use taxonomies and ontologies in this process. One of the objectives is to get into Google's Knowledge Graph. That was when the Semantic Workflow began!
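As an illustration of what "structuring entities with structured data" can look like, here is a hedged sketch of schema.org JSON-LD markup generated with Python. The property choices are one reasonable option among many, and the sameAs entity link shown is simply an example of anchoring a concept to a public reference page, not a prescription.

```python
import json

# A sketch of schema.org markup describing an article and its main entity.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why generative AI forces us to rethink SEO",
    "author": {"@type": "Person", "name": "Alexander Rodrigues Silva"},
    "about": {
        "@type": "Thing",
        "name": "Search engine optimization",
        # sameAs links anchor the entity to external reference pages,
        # helping algorithms resolve *which* concept the page is about.
        "sameAs": [
            "https://en.wikipedia.org/wiki/Search_engine_optimization",
        ],
    },
    # Citing the source this article discusses, by its DOI.
    "citation": "https://doi.org/10.47989/ir30CoLIS52258",
}

# This JSON-LD block would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article_markup, indent=2))
```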

However, even in this model, the source was still (generally) visible in the SERPs. The Featured Snippet contained a link. The Knowledge Panel cited sources such as Wikipedia. But then generative AI arrived, representing the peak of a radical change, and this is where the connection with the source breaks.

The generative rupture and answers without traceable sources

Olof Sundin's paper precisely identifies the difference between a traditional search engine and an LLM application.

  1. Traditional search engines provide links to documents. They are retrieval systems.
  2. Generative AI chatbots generate content. They are generation systems.

Shah and Bender (2022) observe that this new conversational interaction "comes at the cost of direct access to sources". The output of generative AI arrives "without traceable connections to the sources". This relates to how the models are built or, more precisely, to how the Transformer architecture works.

This is what Bender et al. (2021) famously called the "stochastic parrot". As Hartel's video explains, the LLM does not "understand" meaning in the human sense. It is a statistical model designed to generate "plausible sequences of words" from the patterns observed in its training data.
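A toy sketch can make the "parrot" intuition tangible. Assuming a tiny invented training corpus, the following bigram model samples plausible continuations purely from observed word patterns; it is a caricature of an LLM, not a description of one, but it shows why fluency carries no link back to any source.

```python
import random
from collections import defaultdict

# Tiny invented "training data"; a real LLM learns from billions of documents.
corpus = (
    "search engines return links to sources . "
    "generative models return answers without sources . "
    "answers without sources are hard to verify ."
).split()

# Learn bigram patterns: which words tend to follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Produce a plausible-looking sequence by sampling observed continuations."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output is fluent-looking but carries no citation of where it came from.
print(generate("answers"))
```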

The result seems coherent, but it has no real meaning and no verifiable connection with a stable source. For information science, this is a "significant rupture". For SEO, it is an arduous task.

The crisis of evaluation

How can you trust a parrot?

The main argument of Sundin's paper is that, with this change, "the sources become increasingly invisible". This brings us to the next pillar that falls: evaluation.

Traditional information literacy, such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), is entirely based on the ability to investigate the source.

When we evaluate retrieved information, we ask:

  • Who is the author?
  • What is the reputation of this publication?
  • When was it published?
  • What are its sources?

As the paper that prompted my investigation shows, the rise of generative AI makes this traditional evaluation "almost impossible". When an LLM generates an answer, there is no author. There is no publication date. There is no "source of origin" other than the model itself.

Since Sundin wrote his text, Google has implemented some changes. Most of the time (limited to AI Overviews and AI Mode), we now get citations of the sources, but in a decontextualized way, since we do not know exactly which part of the text of each source was responsible for what the AI generated.

The user receives a text that seems plausible, but it could be a "hallucination": a statistical collage of decontextualized, factually incorrect claims that nevertheless seem true. The problem is that, without a source to trace, the user has no way to find out.

This has profound implications for students, researchers and, of course, the general public. How do we navigate this scenario? The answer, for me, is to double down on the principles of Semantic SEO.

The role of semantic SEO in information retrieval in the generative era

If generative AI is the new intermediary, how do we, content creators and website owners, guarantee that our information is found and, consequently, that we receive credit?

The answer is not to try to "trick" the LLM by recycling old strategies. It is to convert our content into the cleanest, most structured, and most semantically rich data source possible for the training and indexing of these models.

This is where Information Retrieval meets Semantic SEO.

When I talk about semantic SEO, I mean focusing on meaning, structuring information, and shaping content so that algorithms can understand it conceptually. I apply a semantic work model from beginning to end.
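One deliberately simplified way to picture this structuring work is a small controlled vocabulary with broader/narrower relations, in the spirit of the taxonomies mentioned earlier. The sketch below uses invented terms and is only an illustration of the idea, not the author's actual workflow.

```python
# A miniature SKOS-style taxonomy: each concept declares a broader concept
# and preferred/alternative labels. Terms are invented for illustration.
taxonomy = {
    "search": {"broader": None, "prefLabel": "Search", "altLabels": ["information seeking"]},
    "semantic-seo": {"broader": "search", "prefLabel": "Semantic SEO", "altLabels": ["entity SEO"]},
    "structured-data": {"broader": "semantic-seo", "prefLabel": "Structured data", "altLabels": ["schema markup"]},
}

def path_to_root(concept_id: str) -> list[str]:
    """Walk the broader-than chain so a page can be tagged with its full conceptual context."""
    path = []
    while concept_id is not None:
        concept = taxonomy[concept_id]
        path.append(concept["prefLabel"])
        concept_id = concept["broader"]
    return path

# Tagging a page with "structured-data" also exposes its broader concepts.
print(path_to_root("structured-data"))
# ['Structured data', 'Semantic SEO', 'Search']
```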

This is the new reality. SEO has become infinitely more complex. The SERP made up of a "ranking" of links is slowly fading, and we are now optimizing to be the most reliable and most semantically understandable data source for the algorithms that feed the AI models.

I am sorry if this causes you anxiety. After all, how do we actually do this?

That is why I am creating the Semantic SEO Course – The Semantic Workflow. I invite you to follow me on LinkedIn; there, I will keep you updated on its launch.

The responsibility of power in the new Semantic Web

Olof Sundin's paper and Jenna Hartel's analysis are a crucial alert for the information science community. But I argue that it is an even more urgent alert for us SEO professionals.

We are entering an era in which information is divorced from its origin. The consequence is an erosion of trust. If an LLM gives me an incorrect answer, who is to blame? How do I fix it? The "black box" of the "stochastic parrot" is hermetically sealed.

Sundin's paper ends with an essential observation: "Not giving the sources of our claims was problematic before, but we could take the reverse path and reach the source. Today, with generative AI, that is not always possible. Therefore, everyone committed to quality information needs to provide the sources of their claims."

As SEO professionals, our role evolved from “link builders” to “information architects”. Now, we must become “curators of knowledge”.

Our job is not just to get Google to find our site; it is to ensure that AI has access to verifiable, structured data. And, most important of all, we must advocate for and build systems that expose their sources, that reward transparency, and that allow users to do what library and information science have always defended: evaluate the source of information.

Hello, I'm Alexander Rodrigues Silva, SEO specialist and author of the book "Semantic SEO: Semantic Workflow". I've worked in the digital world for over two decades, focusing on website optimization since 2009. My choices have led me to delve into the intersection between user experience and content marketing strategies, always with a focus on increasing organic traffic in the long term. My research and specialization focus on Semantic SEO, where I investigate and apply semantics and connected data to website optimization. It's a fascinating field that allows me to combine my background in advertising with library science. In my second degree, in Library and Information Science, I seek to expand my knowledge in Indexing, Classification, and Categorization of Information, seeing an intrinsic connection and great application of these concepts to SEO work. I have been researching and connecting Library Science tools (such as Domain Analysis, Controlled Vocabulary, Taxonomies, and Ontologies) with new Artificial Intelligence (AI) tools and Large-Scale Language Models (LLMs), exploring everything from Knowledge Graphs to the role of autonomous agents. In my role as an SEO consultant, I seek to bring a new perspective to optimization, integrating a long-term vision, content engineering, and the possibilities offered by artificial intelligence. For me, SEO work is a strategy that needs to be aligned with your business objectives, but it requires a deep understanding of how search engines work and an ability to understand search results.
