The End of num=100 - Small Change, Big Impact
David Hussain · 5 minute read


When Google quietly removed the “num=100” parameter from its search engine logic, hardly anyone outside the SEO bubble noticed at first. Yet, this inconspicuous variable had been a central tool for years for those seeking deeper insights into Google search results. With “num=100,” Google could be instructed to deliver up to a hundred results per query—a convenient backdoor that allowed developers of SEO tools, data service providers, and even AI systems to capture large amounts of search data in a single fetch. Now, this door is closed, and the consequences extend far beyond a few additional lines of code.
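To make the change concrete, here is a minimal, purely illustrative Python sketch of how such a request URL was typically assembled. The helper function and its names are hypothetical; the public results page was never an official interface, and Google now simply ignores the parameter instead of returning a longer page.

```python
# Hypothetical helper that builds a Google results-page URL.
# Scraping this page was never an official, documented interface;
# the num parameter's behavior was a convention, not a contract.
from urllib.parse import urlencode

def serp_url(query: str, num: int | None = None, start: int = 0) -> str:
    params = {"q": query, "start": start}
    if num is not None:
        params["num"] = num  # formerly honored for values up to 100
    return "https://www.google.com/search?" + urlencode(params)

# Before: one request could ask for up to 100 results.
print(serp_url("technical seo audit", num=100))
# Now: the parameter is ignored, and each request returns only the
# default page of roughly ten organic results.
print(serp_url("technical seo audit", start=0))
```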

For SEO tools, the removal of the parameter is a significant setback. Previously, a single request was enough to retrieve the complete top-100 ranking for a keyword. Now, the same data must be spread over ten or more requests. What sounds trivial multiplies the technical effort, server costs, and time for data collection. Those who used to monitor ten thousand keywords a day now effectively have to make a hundred thousand requests—each with a higher risk of being blocked by Google. Many providers operate their infrastructure on expensive proxy networks to distribute the load and avoid blocks. These costs are now exploding. Small or medium-sized tools that have distinguished themselves with low prices and large keyword databases face a painful decision: either reduce their coverage—and thus the value of their rankings—or increase prices and risk losing users.
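The arithmetic behind that multiplication is simple. The sketch below uses the article's example figures; the page size of roughly ten organic results per request is an assumption, and real crawls add retries and proxy overhead on top.

```python
# Back-of-the-envelope cost model using the article's example numbers.
# PAGE_SIZE is an assumption about how many organic results one
# request returns now that num=100 no longer works.
KEYWORDS_PER_DAY = 10_000
TRACKED_DEPTH = 100   # positions per keyword the tool wants to cover
PAGE_SIZE = 10        # assumed organic results per request

pages_per_keyword = TRACKED_DEPTH // PAGE_SIZE          # 10 paginated fetches
requests_before = KEYWORDS_PER_DAY                      # one num=100 fetch each
requests_after = KEYWORDS_PER_DAY * pages_per_keyword   # 100,000 fetches

print(f"before: {requests_before:,} requests per day")
print(f"after:  {requests_after:,} requests per day")
```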

The impact is felt not only by tool operators but also by their customers. Those who were accustomed to receiving precise daily reports on their keyword positions down to rank 100 will now receive them less frequently or with reduced depth. Many providers will limit themselves to the top 20 or top 30, as deeper positions rarely generate clicks and the effort is disproportionate to the benefit. This fundamentally changes the perception of SEO performance: visibility is more narrowly defined, ranking curves become steeper, and the number of measurable impressions decreases. In many companies’ dashboards, this looks like a loss—less data, fewer impressions, seemingly less success. In reality, often only the measurement method has changed. But the psychological effect remains: those who suddenly see only half the visibility in their reports quickly believe something is wrong. This forces agencies and in-house teams to adjust their communication and explain to clients that it is not a crash but a change in the underlying data.
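A toy calculation shows why a dashboard can report a "drop" even though nothing about the actual rankings has changed. The positions below are invented sample data, and "visibility" here is simply the count of keywords inside the tracked depth.

```python
# Invented sample data: the best position of ten tracked keywords.
positions = [3, 8, 14, 22, 35, 48, 67, 81, 95, 120]

def visible_keywords(positions: list[int], depth: int) -> int:
    # A deliberately simple visibility metric: how many keywords
    # rank within the depth the tool still tracks.
    return sum(1 for p in positions if p <= depth)

print(visible_keywords(positions, depth=100))  # 9 with top-100 tracking
print(visible_keywords(positions, depth=20))   # 3 with top-20 tracking
# The site ranks exactly the same; only the measurement window shrank.
```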

The shutdown also has far-reaching consequences for AI providers. Systems from providers such as OpenAI and Mistral, as well as specialized search and research AIs, often use Google’s results as a data source or as a signal for evaluating the relevance of content. The broader the accessible results, the more diverse and reliable the training data or prompt responses. With the removal of the parameter, this data channel narrows: AIs that scrape Google SERPs or process them as an input source can no longer efficiently retrieve large result sets. This leads to longer latencies, higher infrastructure costs, and in some cases a distortion of the information base. If only the first ten results are reliably captured, a large portion of the long-tail information, which often contains the most interesting or specialized perspectives, disappears. For models trained to deliver the most comprehensive and differentiated answers possible, this means a potential decline in quality. They only see what is already visible—not what exists in the digital shadows.

Additionally, this change effectively strengthens Google’s data sovereignty. By preventing the mass retrieval of long result lists, it forces third parties to switch to official APIs or licensed interfaces. For OpenAI, Perplexity, or similar players relying on a mix of crawling, API data, and partnerships, the pressure increases to use legal and contractually secured data sources. This could lead to further market consolidation in the long run: those who can afford the costs and access remain competitive—those who cannot, disappear. Similar to the cloud market, the power structure is shifting towards fewer but resource-strong providers.
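One example of such an official route is Google’s Custom Search JSON API (Programmable Search Engine). The sketch below is only an illustration of that shift: the API key and engine ID are placeholders, it relies on the third-party requests package, and quotas, pricing, and the page size of roughly ten results per call are set by Google, not by this code.

```python
# Illustrative call to Google's Custom Search JSON API instead of
# scraping the public results page. API_KEY and SEARCH_ENGINE_ID are
# placeholders; the API is metered and rate-limited.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder credential
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # placeholder Programmable Search Engine ID

def official_search(query: str, start: int = 1) -> dict:
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": query,
            "start": start,  # pagination again: 1, 11, 21, ...
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

Even on this path, each call returns roughly one page of results, so depth still costs money; the difference is that the cost is contractual rather than fought out with proxy networks.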

For users, that is, for everyone who works with SEO or AI tools in their daily work, this change will hardly be visible at first, but it will be clearly noticeable. Rankings will become more volatile, reports more inconsistent, and data histories will suddenly break off. What was previously considered a stable metric becomes unreliable. Those accustomed to basing their content strategy on position trends beyond the first page lose these reference points. At the same time, the tools themselves become slower or more expensive. The quality of the analysis increasingly depends on how efficiently a provider deals with the new reality—and whether they are willing to invest in their own datasets or alternative measurement methods.

Beneath the surface, this change points to a larger paradigm shift: Google is systematically closing the loopholes through which data could flow freely. This affects not only SEO crawlers but also AI in a broader sense. In an era where search engines, AI models, and knowledge services increasingly intertwine, access to structured search result data is a power factor. Whoever controls how many results are visible indirectly controls what data models can be trained on—and which narratives are formed from them. The removal of “num=100” is therefore more than a technical footnote. It is another signal that open access to web data is becoming narrower, more expensive, and more regulated—and that those who have relied on feeding their information world from Google’s data will have to rethink their approach.