Google's Phantom Dilemma
Hand-wringing over Google’s declining dominance in search has reached a fever pitch. But Google has nothing to worry about.
Google has for years been a VC punching bag and a symbol of Big Tech’s corporate sclerosis. Criticism grew louder in late 2022, when ChatGPT launched and it became clear that Google had blown a 28-3 lead in the foundation model race despite acquiring DeepMind in 2014 and inventing the Transformer architecture in 2017. And anti-Google sentiment reached its zenith in February following the company’s disastrous rollout of Gemini.
The release should’ve highlighted Gemini’s state-of-the-art capabilities. Instead, everyone made viral memes mocking Google for thinking the Pope is a black woman. The embarrassment also catalyzed a public conversation around Google’s looming twelve-figure problem: what happens to the supremacy of its core search business in the era of LLM-powered “answer engines” like ChatGPT and Perplexity?
I’d been harboring my own doubts about Google for some time. When nearly every friend in venture agreed with me, though, I became worried that we might be crowding into the short GOOGL trade. Were we over-extrapolating our experiences with ChatGPT – and “search” more broadly – to those of the general public? Were we, as VCs, reveling in Google’s potential demise because it’s one of the few Big Tech giants with a real narrative around being threatened by the rise of LLMs?
After spending more time on the issue, I’m far more skeptical of the narrative around Google’s competitive position. The “obvious” innovator’s dilemma that Google supposedly faces isn’t obvious at all.
What exactly is the bear case for Google Search?
Let’s start with the bear case: LLMs have unlocked a new search paradigm that threatens Google’s highly profitable “ten blue links” business model. If search was originally about traversing the internet to uncover digital experiences, and later about finding source material to answer questions, it’s now about receiving direct answers from an LLM. No user will sift through low-quality, SEO-tainted lists of garbage when an LLM can provide a concise answer to their question. Because Google makes money when users click on sponsored links, answer engines that obviate the need for those links are an enormous threat.
Most bears believe Google will respond like every incumbent facing an innovator’s dilemma: by doing nothing. Because the search business model is one of the best ever created – it’s a capital-light, high-margin royalty on secularly growing digital ad spend – Google will resist re-architecting the user experience around the answer engine concept. Why slaughter the $175B cash cow? Google will stand still, watching competitors like Perplexity and ChatGPT erode its market share with tailor-made answer engines and no organizational inhibitions.
There are some bears (including former Google employees I spoke to) who believe that Google will move aggressively in the answer engine direction but still “lose” the search war. Monetizing LLM-generated answers is far less straightforward than traditional search – gone are the ten blue links – and LLM responses are much more expensive to serve. Google could successfully adapt and maintain its dominant market share in search, but the search business may simply become far less lucrative as lower revenues and higher costs meaningfully reduce the total profit pool.
Whether it’s because Google does nothing and is overtaken by competitors or because LLMs make search a structurally lower-margin business even if Google “wins” the search wars, bears believe Search profits collapse in the coming years.
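This bear math can be sketched with a toy back-of-the-envelope model. Every number below is a hypothetical illustration – not an actual Google figure – chosen only to show how a lower monetized share and higher per-query serving costs compound:

```python
# Toy model of the bear case: even if Google keeps its query volume,
# LLM-style answers could shrink the profit pool. All numbers below are
# hypothetical illustrations, not actual Google figures.

def annual_profit(queries: float, monetized_share: float,
                  revenue_per_monetized_query: float,
                  cost_per_query: float) -> float:
    """Profit = ad revenue on monetized queries minus serving cost on all queries."""
    revenue = queries * monetized_share * revenue_per_monetized_query
    cost = queries * cost_per_query
    return revenue - cost

QUERIES = 2_000_000_000_000  # ~2T queries/year (hypothetical)

# Traditional search: cheap to serve, well-monetized ads.
traditional = annual_profit(QUERIES, monetized_share=0.20,
                            revenue_per_monetized_query=0.50,
                            cost_per_query=0.002)

# LLM-answer world (bear scenario): fewer ad clicks, pricier inference.
llm_era = annual_profit(QUERIES, monetized_share=0.12,
                        revenue_per_monetized_query=0.40,
                        cost_per_query=0.01)

print(f"traditional: ${traditional / 1e9:.0f}B, llm era: ${llm_era / 1e9:.0f}B")
```

Even with query volume held perfectly constant, the hypothetical profit pool falls by more than half – this is the structural-margin worry, entirely independent of any market-share loss.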
The bear case misunderstands what search is
ChatGPT-style queries represent only a fraction of search activity today
The notion that LLMs will fundamentally remake the search experience assumes a very narrow definition of “search,” one that’s popular among VCs in their excitement to anoint ChatGPT and Perplexity as the future kings of the space but that ultimately ignores reality.
Most ChatGPT queries are “exploratory” searches, complex questions for which there might not be a single, straightforward answer but that often initiate a longer learning process. ChatGPT’s amazing capabilities when answering these questions make for great Twitter content, but exploratory searches are just one type of search. Three other types are much more common:
Navigational queries: using a search engine not to find information, but as an easy way to access a webpage the user already knows. Typing “amazon” or “espn” into the search bar instead of the full URL are classic navigational queries.
Informational queries: asking a simple question for which there is a single, concrete, correct answer. Asking Google for “weather in SF” or “square root of 625” or “Final Four schedule” are informational queries.
Commercial queries: using a search engine with high purchase intent to find a product or service. Searching for “latest iPhone” or “cheap flight to Austin” is commercial in nature.
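To make the taxonomy concrete, here’s a toy heuristic classifier for the query types above. The keyword lists and thresholds are invented for illustration – real search engines use learned models over vastly richer signals – but the shape of the decision is the point:

```python
# Rough heuristic sketch of the four query types discussed above.
# The keyword lists and rules are invented for illustration only.

KNOWN_SITES = {"amazon", "espn", "youtube", "gmail"}
COMMERCIAL_HINTS = {"buy", "cheap", "price", "deal", "flight", "latest"}
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how"}

def classify_query(query: str) -> str:
    tokens = query.lower().split()
    # Navigational: the query is essentially just a site name.
    if len(tokens) == 1 and tokens[0] in KNOWN_SITES:
        return "navigational"
    # Commercial: purchase-intent keywords present.
    if any(t in COMMERCIAL_HINTS for t in tokens):
        return "commercial"
    # Exploratory: long, open-ended questions.
    if len(tokens) >= 6 and tokens[0] in QUESTION_WORDS:
        return "exploratory"
    # Default: short factual lookups.
    return "informational"
```

On the article’s own examples, `"amazon"` comes back navigational, `"cheap flight to Austin"` commercial, and `"weather in SF"` informational, while a long `why`/`how` question falls into the exploratory bucket.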
Google dominates these areas. It’s exceptionally good at handling navigational searches, quickly returning the exact link a searcher wants. It’s also excellent at answering most informational queries, owing its supreme speed and accuracy to best-in-class web crawling, indexing, and ranking systems and access to user-specific context. And nearly half of shoppers start their customer journeys with a Google search, particularly given Google’s tailor-made Shopping experience.
That “tailor-made” experience is key. The Google product is so powerful in part because it’s already expanded beyond the ten blue links form factor for informational and commercial search queries. It’s simply lazy to characterize Search in that way. I did eight quick Google searches, for instance, and not a single one surfaced a barebones list of links.
For each of these queries, Google delivers context-specific results – direct answers, links, booking engines, carousels, etc. – with unique layouts that offer users exactly what they need. It’s certainly a much richer experience than not only a list of ten links, but also the single-threaded conversation modality of an LLM.
LLM-enabled answer engines aren’t a zero-sum phenomenon
Critically, Google monetizes only a small share of total searches today: roughly 20%. Nearly all of these are commercial queries, which intuitively makes sense.
Google has never really monetized open-ended exploratory searches, largely because these kinds of queries are rare: people seldom use a search engine for them because keyword-based algorithms haven’t offered high-quality responses. In the era of LLMs, these complex queries represent a greenfield opportunity for LLM-enabled answer engines, which are absolutely better equipped than traditional Google Search to serve these queries. But for the most part, these new searches don’t cannibalize existing search activity: today’s searches will still exist, and LLMs will unlock new searches that simply didn’t happen before. Ultimately, LLMs will have a positive-sum impact on the search space.
What does this mean for Google? Because LLMs simply expand the size of the search pie rather than cannibalizing existing searches, Google doesn’t face a significant innovator’s dilemma. The threat to the core business is totally overblown; the queries that the company monetizes today will still exist! Very few will disappear as answer engines become popular, because answer engines tackle a fundamentally different – and new – type of search. What Google does successfully today, it’ll continue to do successfully tomorrow.
The data suggest that ChatGPT isn’t threatening Google’s core search dominance, despite the hype. Since its launch in late 2022, ChatGPT has had almost no impact on Google’s market share.
LLMs aren’t the first “threat” to Google’s search supremacy
The last time Google wasn’t the dominant search engine, I was only four years old. The company incorporated in September 1998 and, leveraging its primary innovation – the powerful PageRank algorithm – quickly became the market leader in search, surpassing Yahoo in 2002. It went public in 2004 and has since grown into the sixth most valuable company in the world, leaving competitors like Lycos, Excite, AltaVista, Infoseek, Ask Jeeves, MSN, and Yahoo (famously) in the dust.
The company has been so dominant for so long that people often forget it’s faced high-profile challenges to its hegemony before: smartphones, e-commerce, social media, vertical search platforms, and more. In some form, each of these “threats” promised to siphon off and redirect search activity to other platforms or make it harder for Google to sell ads. Bears say the same is true now of LLMs: that everyone will use Perplexity or ChatGPT instead of Google and that Google will struggle to sell ads.
I’m skeptical for the same reasons that none of these other fears came to fruition. Google is a dominant business for a reason:
Google Search is an ultra-sophisticated product with a nearly insurmountable data advantage. Forged over two decades and the beneficiary of a powerful data flywheel, Google’s crawling, indexing, ranking, and retrieval mechanisms are more sophisticated, comprehensive, and fast than anything else. Google offers a thorough, accurate, and low-latency search experience that no competitor can rival.
Google’s enormous installed base affords it a massive distribution advantage that has made “Googling” a default behavior for so many people. Even as compelling alternatives became available in the past, users stuck with what they knew and trusted.
Like Apple, Google benefits from strong ecosystem effects. The prevalence of Drive, Gmail, Chrome, and other tools (in addition to Search) reinforces default behaviors and creates switching costs for consumers.
The path forward for Google
Fully integrating LLM capabilities directly into core Google Search
Of course, Google has also thrived for so long because it’s adapted in the face of seismic technological shifts. While LLMs may not threaten Google’s core business – the company can continue investing in core search, which successfully serves most queries today – it’ll need to adapt yet again. Ceding the answer engine category to ChatGPT or Perplexity would be a huge missed opportunity. And it could afford those products enough distribution to start encroaching on Google’s traditional search business. Google’s enormous technical moat in search would make that a tall task – even for OpenAI in partnership with Bing – but this is not a situation Google wants to find itself in.
What does adaptation look like? Gemini must be integrated into core Google Search, a product the entire world knows, uses, and loves. Google is already doing this to some degree, providing low-latency AI summaries at the top of the SERP for some queries. But this integration must be much stronger. Forcing users to navigate to a separate Gemini webpage means squandering an enormous distribution advantage and an opportunity to leverage RLHF from billions of users.
Based on its vast troves of data, Google can effectively characterize user queries to understand what kind of responses to serve:
For traditional search queries, Google can serve responses – and monetize them – exactly as it does today, perhaps with innovation around the edges.
For the new wave of exploratory and complex informational queries that LLMs enable, Search should call Gemini and incorporate web-based retrieval to provide real-time context to increase response accuracy and minimize hallucinations.
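Concretely, the routing described in the two bullets above might look like the sketch below. Every function here is a hypothetical stand-in for Google-internal systems we obviously can’t see – `serve_traditional` for today’s ad-monetized SERP pipeline, `retrieve_web_context` for web-scale retrieval, and `call_llm` for a Gemini call – so treat this as a shape, not an implementation:

```python
# Hypothetical sketch of the routing described above: traditional query
# types keep today's serving path; exploratory ones get an LLM answer
# grounded in retrieved web context. All function names are stand-ins.

def serve_traditional(query: str) -> str:
    # Stand-in for today's SERP pipeline: ads, links, rich results.
    return f"[SERP with ads and rich results for: {query}]"

def retrieve_web_context(query: str, k: int = 3) -> list[str]:
    # Stand-in for web-scale retrieval; real systems return ranked documents.
    return [f"doc {i} relevant to '{query}'" for i in range(k)]

def call_llm(query: str, context: list[str]) -> str:
    # Stand-in for a Gemini call; grounding the prompt in retrieved
    # documents is the "minimize hallucinations" step from the text.
    return f"[LLM answer to '{query}' grounded in {len(context)} documents]"

def route(query: str, query_type: str) -> str:
    if query_type in {"navigational", "informational", "commercial"}:
        return serve_traditional(query)  # monetize exactly as today
    # Exploratory queries go to the (more expensive) LLM path.
    context = retrieve_web_context(query)
    return call_llm(query, context)
```

The key design point is that the profitable paths are untouched: only the queries today’s Search serves poorly get handed to the costlier LLM path.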
The core business can continue growing unabated. And Google can unlock new revenue opportunities – albeit at likely lower margins than core search – in the answer engine realm.
The real risk Google faces
With SOTA models and nearly unlimited access to compute, Google is well-positioned to maintain its leading market share in the answer engine race if it chooses to fully leverage its enormous distribution advantage by integrating Gemini into Search.
The real risk to Google isn’t that LLMs cannibalize search; it’s that longtime Google users don’t make Gemini their default answer engine and instead become loyal Perplexity or ChatGPT users.
Maybe Google is dragging its feet on a true Gemini integration because the answer engine market is still so nascent, and there’s no sense of urgency (despite the Twitter hype) because Search continues growing and maintaining its market share. Maybe it’s because there’s yet another internal power struggle at a company famous for insane corporate bureaucracy and inter-organizational politicking.
I suspect that while both are somewhat true, Google is falling prey to the weak form of the innovator’s dilemma: while the concerns about business model obsolescence are overblown, there is legitimate concern about the impact of AI hallucinations on Google’s brand. Models confidently get basic facts wrong, even with real-time context. And they can be manipulated by nefarious actors to say crazy things. These hallucinations always go viral; Google, whose brand is synonymous with accuracy and reliability, cannot afford more viral embarrassments that erode trust with core Search users.
Doing nothing isn’t an option, which is why Google released Bard in early 2023 – a few months after ChatGPT’s launch – and began slowly rolling out Search Generative Experience. As one Google employee told me: “OpenAI forced our hand.” Still, likely because of worries around brand risk, Google has been too conservative.
LLMs pose very little risk to Google’s core business. But they do create a massive new opportunity for someone. Google has a SOTA model, world-class web-based RAG, enormous installed base, ecosystem lock-in – and now, a chip on its shoulder. The looming question today shouldn’t be whether Search will die, but whether Google embraces what could be another decade-long growth opportunity in the new answer engine market.
Thank you to everyone who weighed in on this piece. If you have questions, comments, or feedback, please reach out: andrewziperski [at] gmail [dot] com.
The views expressed herein are the author’s own and are not the views of Craft Ventures Management, LP or its affiliates.