For more than two decades, the keyword was our North Star. Our totem. The atomic unit upon which we built everything: website architectures, content strategies, link building campaigns… Our industry defined itself by our ability to find, analyze and rank these small strings of text.
That era has ended. SEO based solely on keyword optimization is dead.
It has not been a sudden death, but rather a long and progressive deconstruction. Google has gone from being a simple text-matching engine to a sophisticated engine of interpretation and reasoning. It no longer just reads; it interprets. It no longer indexes words; it connects concepts.
So let’s perform an autopsy on the traditional keyword and see what raw material we are left with now.
The Awakening of Intent: When Google Learned to Interpret
“In the beginning God created the heavens and the earth. Then the keyword. Let’s try again.”
At the start, Google was basically a librarian who only understood titles. Its functioning was based on string matching: if you searched for “best running shoes,” the algorithm looked for documents that contained exactly those words.
Our work, therefore, was almost mechanical: identify the keywords that most interested us and ensure they were present on our pages. Repetition and density were the tools of the trade. Later, they were frowned upon…
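Just to underline how mechanical it was: keyword density was literally arithmetic. A toy sketch of the kind of “optimization” we obsessed over back then (purely illustrative, not anything Google ever published):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Old-school metric: exact-match occurrences of the phrase per 100 words."""
    words = re.findall(r"\w+", text.lower())
    phrase = keyword.lower().split()
    hits = sum(
        words[i:i + len(phrase)] == phrase
        for i in range(len(words) - len(phrase) + 1)
    )
    return 100 * hits / max(len(words), 1)

page = "Best running shoes for beginners. Our best running shoes guide covers sizing."
print(round(keyword_density(page, "best running shoes"), 2))  # ~16.67
```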
The turning point was the Hummingbird update in 2013. It was the moment when Google stopped being a simple indexer of text strings to become an interpreter of search intent. It understood that a query is not just a collection of words, but the manifestation of a complex human need.
This was the true birth of semantic SEO: a discipline focused on understanding and satisfying the user’s complete need, not merely responding to the keyword they typed. What mattered was no longer so much which words we used, but why we thought a user would search for them.
The use of bold text also hides a lot of meaning.
The Birth of the Digital Brain: Entities and the Knowledge Graph
If intent was the “what,” entities became the “who,” the “where,” and the “how it relates to…”. Google went beyond interpreting phrases and began to understand that keywords like “Apple,” “Paris,” or “Cristiano Ronaldo” were not just text strings, but representations of real-world concepts: a company, a city, a fool… a person. Each with attributes, properties, and specific relationships.
The Knowledge Graph was the materialization of this new understanding. Google was not simply indexing the web; it was building a digital brain, a gigantic knowledge graph that mapped entities and their connections.
Its introduction in 2012 marked the shift from a keyword-based approach to an entity-based one, creating a network of relationships that allowed the search engine to understand context instead of just matching text.
This progress was driven by a succession of updates, from Hummingbird (2013) and RankBrain (2015) to transformer-based models such as BERT (2019) and MUM (2021), each refining the engine’s ability to process natural language.
For us, the players, this meant a huge change in how we operated:
- From plain text to deep context: It was no longer enough to mention a word. We had to contextualize it. Writing about “Apple” required making clear whether we were talking about Tim Cook’s company or the fruit, using related entities (iPhone, Cupertino vs. Apple tree, Pie) to eliminate ambiguity.
- Optimization beyond our domain: An entity’s authority was no longer built only on our website. We needed a consistent presence in sources that feed the Knowledge Graph: Wikipedia, Wikidata, business profiles, structured databases, etc.
We stopped optimizing simple documents and began optimizing concepts within a global knowledge graph. Our job became more like that of a librarian who catalogs and connects knowledge than a copywriter inserting keywords.
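Part of that cataloging can be made explicit on our own pages. A common practice is entity markup that points to the external sources feeding the Knowledge Graph through schema.org’s sameAs property. A minimal sketch (the organization and every URL here are placeholders):

```python
import json

# Placeholder entity: swap in your own organization and profiles.
entity_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://www.example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Corp",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/example-corp",
    ],
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(entity_markup, indent=2))
```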
The more complete the semantic map covered by our content, the better Google understood that we provided a comprehensive answer to the user.
Optimizing with entities in mind often implies expanding the scope of content: including synonyms, related terms, FAQs, and all the angles that surround the main topic. This connects directly with search intent: by addressing all facets, we are more likely to resolve the user’s true need.
An added bonus is that such entity-rich content tends to rank for many search variations, including long-tail queries that may not even explicitly contain our target keyword.
Google values content rich in entities that covers the “next natural need” of the searcher, because this increases engagement and more effectively concludes the user’s search journey.
Interestingly, this model of understanding based on entities is a form of Symbolic AI, which relies on structured knowledge, logic trees, and ontologies. This contrasts with the statistical pattern recognition of large language models (LLMs).
Modern search is a hybrid system that combines the precision of symbolic AI with the conversational fluency of LLMs.
A Pause on the User: Decoding Signifiers, Meanings and Micro-Moments
Before continuing with the evolution of the search engine, I’d like to review how user behavior itself has changed. We often focus too much on the algorithm, but users moved on from typing “cheap shoes” a long time ago.
We must understand that every search query is a “signifier” (the word or phrase typed), but the “meaning” (the concept it refers to) is fluid and depends on context. For example, the query “compañía de luz” has very different meanings in Spain (a cheap electricity provider) and in the United States (an LED provider or a minimalist design firm). And if I happen to be in the room, “company of light” is simply stating the obvious.
This concept aligns with Umberto Eco’s idea of the “open work”, where the result of a search is not a single, fixed answer, but an interpretation dynamically assembled by AI based on the user’s context in that precise moment.
This reality demands modular and “remixable” content, designed to be fragmented and reassembled across multiple personalized SERPs. Visibility no longer comes from occupying the number one position, but from appearing in as many of these personalized search experiences as possible, whenever relevant.
The current Messy Middle can be broken down into semiotic phases, each with its own strategic levers:
- Trigger: The pre-conscious spark of meaning, like seeing a video on TikTok or hearing a mention in a podcast. Strategic levers here: brand presence and entity building.
- Exploration: The user is mapping the terrain, deciphering new terms. Here, glossaries and structured guides are crucial.
- Evaluation: The user compares options, weighing signs of trust and authority. Comparative content and brand signals are key levers.
- Experience: The post-click phase, where meaning is converted or collapses. Here, page experience and message consistency prevail.
These phases correlate directly with Google’s “Micro-Moments” (I want to know, I want to go, I want to do, I want to buy).
The design of the SERP itself, with its different modules and features, reveals the dominant intent archetypes we must decode and align with our content.
The consequence of this change is that the fundamental unit of SEO is no longer the keyword, but meaning. This transforms the discipline from a purely technical optimization to a practice that must align a brand’s signs (its content, data and design) with the fluid, contextual meaning a user is trying to construct.
The goal is no longer to optimize for a string of text, but for the constellation of potential meanings associated with a user’s need.
This elevates the role of the SEO professional from a keyword tactician to a strategic architect of meaning, requiring skills in user research, psychology and brand strategy.
The Deployment of Intelligence: Understanding Query Fan-Out
If Hummingbird and the Knowledge Graph were the foundations, then Query Fan-Out is the operational mechanism of the new AI-driven search. This process once again breaks the linear relationship that used to exist between a query, a page, and an answer.
When a user performs a complex search in an AI environment (like AI Overviews, AI Mode, SearchGPT, or your brother-in-law), the system is not looking for a single answer. What it does is “fan out” the query into multiple sub-queries that are executed in parallel.
For example, if someone searches for “how to train a puppy,” the AI system might fan out that query into additional questions such as:
- “how many times a day should a puppy eat?”
- “best positive training methods”
- “common mistakes when training puppies”
The user never typed any of these explicitly. Google uses language models (LLMs) to generate these related questions and then looks for content that answers them.
The AI gathers these pieces of information from multiple sources (the traditional web index, the Knowledge Graph, YouTube, Shopping) and, through a reasoning engine, synthesizes them into a coherent, multifaceted response.
The same would happen with a dragon. Two paragraphs earlier.
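To make the mechanism tangible, here is a conceptual sketch of the fan-out pattern. It is not Google’s code, just an illustration of the expand, retrieve-in-parallel, synthesize loop, with the LLM call and the retrieval layer stubbed out:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(query: str) -> list[str]:
    # Stub: in a real system an LLM would expand the query on the fly.
    return [
        "how many times a day should a puppy eat?",
        "best positive training methods",
        "common mistakes when training puppies",
    ]

def retrieve(subquery: str) -> str:
    # Stub: a real system would query several indexes (web, Knowledge Graph, video...).
    return f"Best passage found for: {subquery}"

def answer(query: str) -> str:
    subqueries = [query] + generate_subqueries(query)   # fan out
    with ThreadPoolExecutor() as pool:                   # run retrieval in parallel
        passages = list(pool.map(retrieve, subqueries))
    # A reasoning step would normally synthesize these into one coherent answer;
    # here we just concatenate them.
    return "\n".join(passages)

print(answer("how to train a puppy"))
```

The takeaway for us sits on the content side: every one of those sub-queries is a retrieval opportunity our site either wins or loses.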
Now, query fan-out itself is not entirely new in spirit: at its core, it’s a sophisticated extension of the multiple-intent search concept. This idea of breaking down the user’s question into sub-questions is very similar to what we were already doing with intent analysis, only now it’s automated at massive scale by AI.
Even before generative AI, SEO best practice was already to create content capable of answering multiple related questions in a single piece, because that tends to rank better and for longer. Or do you think AlsoAsked or PAA (People Also Ask) suddenly appeared out of nowhere?
The difference is that with fan-out, Google does it explicitly and integrates it into the search experience, presenting directly a unified answer that synthesizes what would have been multiple classic results pages into a single piece of content.
The challenge now is to optimize for query fan-out, i.e., to make sure our site is the one Google chooses when expanding the query into those extra questions.
How do we achieve this? Ah, my friend, the big question. There is no magic formula. But we can infer some tactics.
To synthesize (wink): we must thoroughly cover the multifaceted needs of the user and structure the content in a way that the AI can easily understand and select.
To expand (double wink): making a list of questions and answering them is only the starting line. So beware of GEO optimizers, new tools, and other mythological creatures.
To really gain visibility in the GenAI era, it’s also essential to ensure that those content fragments are indexed and properly attributed, and that our site generates enough trust for the machine to prefer us over others.
In other words: it’s not enough to just stuff FAQs on your page (duh!); you have to work on how those answers are technically delivered (breaking into concise paragraphs, using markup, etc.), test whether AI systems are actually pulling them, and give off those signals of authority that inspire the algorithm to cite you.
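On the “how those answers are technically delivered” front, one widely used option is schema.org FAQPage markup. A minimal sketch that generates it (the questions and answers are placeholders; markup alone guarantees nothing, it just makes the Q&A pairs trivially easy for machines to parse):

```python
import json

# Placeholder Q&A pairs; replace with the real answers from your content.
faqs = [
    ("How many times a day should a puppy eat?",
     "Most puppies do well on three or four small meals a day until about six months."),
    ("What are common mistakes when training puppies?",
     "Inconsistent commands, overly long sessions, and punishing instead of redirecting."),
]

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(json.dumps(faq_markup, indent=2))
```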
As Duane Forrester says:
“In the world of AI search, authority means something different: being retrievable, attributable, and recognized by the machine as a trusted source.”
Some tactical advice drawn from findings so far to align our content with Google’s fan-out:
- Write in concise blocks (passages): Draft answers in 40–60 word fragments, starting with the conclusion and then the details. This answer-first style makes it easier for AI to extract the response as a snippet.
- Context-rich headings: Use descriptive headings and subheadings, avoiding generic ones like “Introduction” or “Conclusion” (learn, ChatGPT!). Include semantic keywords, entities, and nuances that AI might use to generate sub-queries. For example, instead of “Details,” use an H2 like: “Battery life of electric SUV cars in winter.”
- Cite authoritative sources: Include references and data from reliable sources (studies, official bodies, etc.) within your content. Research shows LLMs tend to prefer passages with citations because they carry credibility hooks.
- Cluster architecture: Organize your site around clear thematic clusters, with pillar pages linking to more specific content (hub & spoke model). When Google fans out, it often retrieves URLs of varying depth; if your related content is well interlinked, you increase the chances of having multiple pages included in the expansion (a small interlinking check is sketched after this list).
- Internal links: Facilitate navigation inside long pages with internal links to specific sections (fraggles, as Cindy Krum calls them). This not only improves UX but also helps bots and LLMs quickly locate the relevant part of your content that answers a sub-question, instead of drowning in a sea of text.
- Frequent updates (freshness): Keep your data and references up to date, revisiting pages periodically even if only a few lines change. A slight change and a recent date may prompt Google to recrawl your page and consider it for “fresh” queries (like “in 2025”). Since fan-out also looks for “living” information, refreshed content improves your chances of being included.
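For the cluster architecture and internal links points above, even a tiny script can flag holes in the hub-and-spoke structure. A rough sketch with hypothetical URLs:

```python
# Hypothetical internal-link map: each URL lists the internal links it contains.
links = {
    "/electric-suvs/": ["/electric-suvs/battery-life-winter/", "/electric-suvs/charging-costs/"],
    "/electric-suvs/battery-life-winter/": ["/electric-suvs/"],
    "/electric-suvs/charging-costs/": [],  # orphaned from the pillar
}

pillar = "/electric-suvs/"
for page, outlinks in links.items():
    if page == pillar:
        continue
    if pillar not in outlinks:
        print(f"{page} does not link back to the pillar")
    if page not in links[pillar]:
        print(f"the pillar does not link out to {page}")
```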
In summary, query fan-out forces us to think in terms of complete search experiences: we no longer optimize “one page = one keyword,” but ecosystems of content capable of answering a multitude of related questions at once.
It’s about imagining the user’s search journey not as a series of separate Google queries, but as a continuous dialogue where our website should provide all the answers together.
This is an important mindset shift: queries now look more like prompts in a chat than isolated keywords, and we must approach our content with that semantic breadth.
The implication for our strategy: page-level optimization alone falls short. Our content no longer competes to be the only result, but to be one of the most reliable and complete sources that AI chooses to build its answer.
And for that, we need to demonstrate that we don’t just have a good page, but that we master the topic as a whole.
Welcome to topical authority.
The New Holy Grail: The Quest for Topical Authority
And so we arrive at the concept that ties it all together, the strategic goal that defines modern SEO: Topical Authority.
If Query Fan-Out is the mechanism, Topical Authority is the goal. It is the perception (by both users and search engines) that your site is an authority in a specific subject, because you cover that subject exhaustively, consistently, and with quality. It’s no longer about having a good article; it’s about being considered the definitive resource.
And yes, this is not new. It was already evident with the evolution of the search engine itself. For years, people debated whether topical authority was “real” or just an SEO myth. Today we have evidence that it is very real. Leaked internal Google documents and patents suggest that topical relevance (i.e., how completely a site covers related entities and questions) is an important ranking factor. And a recent Graphite study confirmed that pages with high topical authority gain traffic 57% faster than those with low topical authority. They also increase the proportion of pages that achieve visibility in their first weeks after publication. In other words, “covering your bases” pays measurable dividends in SEO.
We’ve noticed this empirically too: many sites saw their organic traffic fall in late 2023 and 2024 despite publishing content, because they didn’t have enough topical authority and Google stopped showing their pages against more complete competitors. At the same time, the new AI experiences in the SERPs, as we mentioned, tend to prioritize responses from sources with recognized brand authority, since an AI summary usually cites only 2–3 sources.
One could argue that topical authority is the new PageRank. But with one fundamental difference. PageRank was primarily an external validation (backlinks). Topical authority, however, is built from within, through exhaustive coverage and intelligent structuring of knowledge. External validation (mentions, authority links) is still important (I’m not dumb), but now it acts as a confirmation of intrinsic authority we’ve already established, not as its main cause.
Building this authority requires methodical, in-depth work:
- Exhaustive coverage: Create content that not only answers the main question (head term), but also anticipates and resolves all related doubts, covering a topic from every possible angle (informational, transactional, commercial, etc.). This involves building a library of content that addresses basic definitions, advanced guides, use cases, FAQs, common problems, comparisons, etc., as relevant to your niche. Coverage is not just about quantity of pages, but depth and usefulness. A site that explores every angle of a topic demonstrates both to users and Google that it “knows what it’s talking about” and deserves to rank at the top. Additionally, this content must provide original or updated information (information gain), not just repeat what’s everywhere else.
- Interconnection (Topic Clusters): This is the most effective practical application of this philosophy. For example, you can organize content around “pillar pages” that cover broad topics and “cluster pages” that delve into specific subtopics, all densely interconnected. This is not just an internal linking tactic; it’s a way of building your own knowledge graph for your niche, helping Google understand that you are the reference library on that subject.
- Demonstration of E-E-A-T: Provide first-hand experience, original data, expert authorship, and total transparency to prove your knowledge is genuine and reliable, not just a regurgitation of information.
- Focus and content pruning: Authority also means knowing how to say no to topics outside your scope. Many sites have improved their rankings by eliminating or consolidating irrelevant content or content weakly related to their core themes. Maintaining a clear topical focus (even if it means fewer pages but more focused ones) often improves the perception of authority. It’s better to be excellent at 5 topics than mediocre at 15. Google seems to reward sites that are hyper-relevant in their niche and don’t spread themselves too thin. After all, topical authority is about hyper-focusing on what matters most to your audience and business, and not trying to rank for everything.
- Relevant external authority: We don’t abandon classic authority building via external links, but with a nuance: links from relevant sites in your same topic or industry matter more.
AI, in its need to synthesize information efficiently, will naturally favor a single complete and well-structured resource (a topical cluster) instead of having to assemble fragments from ten superficial articles across ten different sites.
Therefore, achieving topical authority is building a virtuous circle:
Complete and useful content → greater recognition from users and search engines → better rankings and more traffic → more possibilities of natural links and feedback → even more authority.
It also aligns perfectly with the current emphasis on E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) that has been so highlighted in Google’s quality guidelines.
A site that demonstrates real expertise in a subject, deep knowledge, recognized authority, and that inspires trust will surely be a preferred candidate both for the traditional “10 blue links” and for the new AI summaries in search results.
Finally, we must consider that building topical authority is no longer just a content strategy to improve ranking; it is also a risk mitigation strategy. Recent studies have shown that sites like HubSpot suffered traffic drops of up to 80% after expanding into topics too far from their core, a phenomenon called “overclustering.” Conversely, organizations that have strategically pruned irrelevant content have seen their organic traffic roar back.
AI systems such as those powering AI Overviews (AIOs), built on retrieval-augmented generation (RAG), explicitly prioritize “authoritative sources” to reduce the risk of hallucinations and provide reliable answers. Therefore, building deep, focused topical authority is no longer just a way to rank higher—it is a necessity to ensure digital survival.
So, is there a keyword or not?
Once we understand the new search mechanisms, let’s review the new research and content creation models designed specifically for today’s landscape.
Topic-first vs. keyword-first
The traditional approach, centered on the keyword, is now obsolete. This methodology often leads to superficial content and to cannibalization issues, where multiple pages compete for the same search intent, diluting the site’s authority.
In contrast, a topic-first approach starts with identifying the core themes of the business and building an interconnected ecosystem of content around subtopics, use cases, and personas. The goal is no longer to answer a single query, but to become the reference resource for a complete knowledge area.
This approach can be organized using a pillar-and-cluster model, but the key difference is the mindset:
- A topic-driven model covers concepts in depth.
- A keyword-driven model focuses on covering long-tail variations.
The Keyword Universe: A dynamic framework for intent-based prioritization
As an alternative to static, periodic keyword research, the Keyword Universe framework (as defined by Kevin Indig) proposes a more dynamic, business-aligned approach.
It is essentially a living database of the language used by your audience, continuously updated and prioritized based on business impact, not just search volume.
Building this universe involves three steps:
- Extract queries: The language pool goes far beyond SEO tools. The main source is real conversations with customers: sales calls, support tickets, direct interviews. These reveal authentic language and the true needs of the audience.
- Sort, filter, and align: Queries are prioritized using a weighted scoring system (a minimal scoring sketch follows this list). Instead of relying only on metrics like MSV (monthly search volume) and KD (keyword difficulty), points are assigned based on high-value signals such as:
- “Mentioned by a customer”
- “Part of a topic that converts well”
- “Solves a key pain point”
- Refine: The scoring system is not static. It is continuously adjusted based on real content performance, creating a feedback loop that improves prioritization accuracy over time.
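As a rough illustration of that scoring step (the signals and weights below are invented for the example; the whole point of the framework is that business signals outweigh raw volume):

```python
# Hypothetical weights; in practice they are tuned over time as content performs.
WEIGHTS = {
    "mentioned_by_customer": 3,
    "topic_converts_well": 2,
    "solves_key_pain_point": 2,
    "monthly_search_volume": 0.0001,  # volume still counts, just far less
}

def score(query: dict) -> float:
    return sum(WEIGHTS[signal] * value for signal, value in query["signals"].items())

queries = [
    {"term": "crm for small agencies",
     "signals": {"mentioned_by_customer": 1, "topic_converts_well": 1, "monthly_search_volume": 480}},
    {"term": "what is a crm",
     "signals": {"monthly_search_volume": 33000}},
]

for q in sorted(queries, key=score, reverse=True):
    print(q["term"], round(score(q), 2))
# The niche, customer-driven query outranks the high-volume generic one.
```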
This framework transforms keyword research from a tactical exercise into a strategic engine, building a prioritized content portfolio directly aligned with revenue and business value.
The New Content Playbook: Creating Content with “Pulse” that Resonates with Humans and AI
The question of whether “educational content is dead” is often raised in the industry. The consensus is that what has died is generic, low-effort content created solely for SEO.
Content rooted in lived experience, original thinking, and human editorial judgment is now more valuable than ever.
The new content playbook focuses on producing material that is smaller, smarter, and more human. It must have a “pulse”—something that brings new value to the conversation: a personal experience, a strong opinion, client data, or original research.
Key strategies include:
- Explain the product better: Focus on bottom-of-funnel (BOFU) content that demonstrates how the product solves real, tangible problems.
- Build authority through thought leadership: Share unique and often contrarian perspectives, supported by data and evidence.
- Create “Programs” instead of “Feeds”: Develop structured content series with a consistent narrative, instead of a random flow of posts. A “program” has a beginning and an end; each episode builds on the previous one, creating a loyal audience.
- Leverage relevant formats: Prioritize the formats most valued by both AI and users, such as user-generated content (UGC), creative and entertaining B2B content, and intention-based formats like case studies and original benchmarks.
The modern content strategy must balance owned platforms with distributed platforms.
The company blog is no longer the automatic home for everything; it’s just one node in a broader ecosystem.
AI search systems extract information from a wide range of sources, including high-trust UGC platforms like Reddit and professional networks like LinkedIn.
At the same time, publishing platforms such as Substack and LinkedIn offer built-in distribution, often more effective for reaching audiences than relying on declining organic search traffic.
This creates a “Control vs. Distribution” dilemma:
- Publishing on your own site grants full control over data and conversions, but requires building distribution from scratch.
- Publishing on third-party platforms provides reach, but at the cost of losing control.
Therefore, a sophisticated strategy requires deciding case by case where each piece of content should live.
- Some content should be “nomadic” (existing across social networks and communities).
- Foundational “resources” should have a permanent home on the company website.
This means content strategists must also become distribution strategists, making deliberate choices about the right channel for each piece depending on its goal.
From Keyword Hunters to Knowledge Architects
Doesn’t sound bad, right? At least we sound less like spammers.
The evolution is clear. We’ve moved from being “keyword hunters” to becoming “knowledge architects.”
Our job is no longer to find and repeat words; it is to understand and structure universes of information. We no longer optimize text strings; we optimize to satisfy complex intentions and to ensure that an artificial intelligence recognizes us as a reliable source on which to base its answers. And, just as importantly, that the user does too.
This requires us to stop thinking in short-term tactics and start building long-term knowledge assets. It demands that we work more closely than ever with product and business experts to extract and codify the real expertise that resides inside a company.
For us, the challenge is twofold: to adapt and evolve as professionals, and to play a formative role with our clients.
In our daily work, we must tackle many fronts that we may have neglected in the past: deeply researching the intentions and related questions behind each important keyword, mapping the key entities in our niche (and making sure to address them in content), building solid topical clusters on our sites, and measuring how we are gaining authority in each business-critical topic. New tools and techniques (topic maps, AI-based intent classifiers, visibility trackers specific to AI search engines, etc.) are becoming part of the SEO arsenal.
We also need to look beyond “classic Google”: new environments include AI-powered search engines, voice assistants, TikTok-style platforms (where younger audiences search for recommendations), and more. The perception of our brand as an authority must transcend, so that whenever an AI or any system evaluates, “Can I trust this source?” the answer is yes.
In this sense, SEO is no longer just optimizing for a ranking algorithm, but for intelligent information retrieval systems that prioritize machine trust. This implies ensuring information consistency, structured data markup, online reputation, and even correct attribution of authors and sources in our content.
It seems we’ve moved from SEO 1.0 (keywords, links, meta tags) to SEO 2.0 (intent, quality content, semantics), and are now entering SEO 3.0: SEO for answer engines and AI algorithms.
I think one thing remains constant…
The professionals who will stand out are those who understand this evolution and adapt their strategies accordingly. The good news is that although tactics are becoming more sophisticated, the underlying goal remains the same: to understand your audience better than anyone else and deliver the best content in the format they need. If we do that (and make it crystal clear for the machines), we will secure our place in the results—no matter how search technology evolves.
As for our dear keyword, the truth is that a single keyword no longer represents a single intent nor a single type of result in Google. This means the classic methodology of doing keyword research once a quarter and generating content for a fixed list of terms is increasingly ineffective.
Instead, we need to build a dynamic, living “Keyword Universe”: a broad repository of all the terms, phrases, and questions your audience uses, continuously updated as new searches emerge, serving as the foundation for ongoing content planning.
This Keyword Universe concept allows us to map our users’ language and needs in an integral way, rather than focusing only on isolated keywords.
The future of SEO belongs to those who stop playing the old keyword game and embrace the new challenge: to become the most complete and reliable topical authority in their field. We must bet on trust, identity, and loyalty as pillars.
It may seem like hard, deep, strategic work. But precisely because of that, dear friends, it is infinitely more interesting.
