Engineering

Why Google killed GTIN search on Shopping (and what we did about it)

In 2024 Google removed GTIN as a search filter on Google Shopping. Here's why it happened, what it broke for product-data tools, and the workaround retailerapi uses to still find cross-retailer offers from one search.

By Matt Hall · 5 min read

If you tried to search Google Shopping by UPC in late 2023, it worked. If you tried in mid-2024, it stopped working. Google removed GTIN as a search-filter parameter from Google Shopping queries, breaking a workflow that price-comparison tools, dropshipping scripts, and arbitrage software had relied on for over a decade.

This post explains what changed, why, and the workaround we use in retailerapi to still get multi-retailer offers from a single Google query.

What got killed

Pre-2024, you could query Google Shopping with a UPC or GTIN and get back a clean list of merchants selling that exact SKU. Google's product-feed indexing had matched feed entries on gtin (the universal product identifier merchants must include in their feeds), and the search frontend honored that match.

This made cross-retailer scraping cheap. One query per product, and you had Walmart's price, Amazon's price, Target's price, and the rest in a single SERP scrape. SerpAPI, Serper, and the dozen other search-API wrappers all advertised "search Google Shopping by UPC" as a feature.

In April-May 2024, the GTIN parameter started returning empty results consistently. Within weeks the change was confirmed across multiple search-API providers. Google did not announce it; the change surfaced only because everyone's existing GTIN queries broke at once.

Why Google did it (likely)

No official statement. Three plausible reasons, ranked by my read of the situation:

1. Bot-defense pressure. GTIN-keyed search makes mass scraping trivial. Block the GTIN parameter and a scraper has to fall back to title-based search, which is fuzzier and less productive per query. This is the same pattern Amazon used when they killed clean ASIN-based listing access in 2019, forcing scrapers into noisier breadcrumb searches.

2. Merchant-feed quality issues. GTIN data in merchant feeds is unreliable. Estimates from 2023 suggested 30 to 40% of GTINs in Google Shopping feeds were missing, wrong, or duplicated. Searching on a field that fails a third of the time hurts user experience. Removing the field forces merchants to lean on title and brand fields, which Google can validate more easily.

3. Driving merchants to paid Shopping ads. Free organic placement in Google Shopping has been quietly dying since 2021. GTIN search was a power feature that made organic placement valuable. Removing it nudges merchants toward Shopping Ads, which Google monetizes directly.

Most likely a combination. Either way, the practical effect is the same: GTIN search is dead.

What this broke

For consumer-facing tools (Honey, PriceBlink, Capital One Shopping), the impact was limited because those tools already used keyword search as the primary lookup signal. Their UX is "I'm on Amazon looking at product X, show me prices elsewhere," and "product X" gets passed as the keyword.

For developer tools, the impact was significant. Anyone whose ingestion pipeline went UPC in → multi-retailer offers out had to rebuild around a noisier signal.

retailerapi launched in 2026 after the GTIN-search era was already over. We never got to use the easy mode. Our cross-retailer enrichment design assumed title-based search from day one.

The workaround

We use a two-tier dispatch. The full architecture lives in apps/marketing/src/lib/cross-retailer/, but the high-level flow is:

Tier 0 (cheap, broad): One title-based Google Shopping search via Serper. Returns offers from many retailers in a single response. Each offer carries its source retailer name (e.g. "Best Buy", "Walmart - Ping Liu", "Lowe's"), price, image, and direct URL. We classify each offer into our 6 canonical retailer slugs and write it to the product_enrichments cache.
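
In code, the classification step looks roughly like the sketch below. The slug names and prefix-matching rules are my illustration, not the production matcher:

```ts
// Sketch of the Tier 0 classification step. Slug names and matching
// rules here are illustrative, not the production matcher.
type RetailerSlug =
  | "walmart" | "bestbuy" | "target" | "ebay" | "lowes" | "homedepot";

// Serper reports a merchant name per offer; marketplace sellers show up
// as e.g. "Walmart - Ping Liu", so we match on the retailer prefix.
const PATTERNS: Array<[RegExp, RetailerSlug]> = [
  [/^walmart\b/i, "walmart"],
  [/^best buy\b/i, "bestbuy"],
  [/^target\b/i, "target"],
  [/^ebay\b/i, "ebay"],
  [/^lowe'?s\b/i, "lowes"],
  [/^home depot\b/i, "homedepot"],
];

function classifyOffer(sourceName: string): RetailerSlug | null {
  const name = sourceName.trim();
  for (const [pattern, slug] of PATTERNS) {
    if (pattern.test(name)) return slug;
  }
  return null; // specialty merchant we don't track
}

// classifyOffer("Walmart - Ping Liu") -> "walmart"
// classifyOffer("Teacher Tools Inc")  -> null
```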

Tier 1 (per-retailer fallback): For retailers Tier 0 didn't return, fall back to retailer-specific scrapers per the scraper cheatsheet (Best Buy Open API, ScraperAPI for eBay, scrape.do for Lowe's, Firecrawl for Target, etc.). These run in parallel after Tier 0 completes.

The key insight: for ~70% of queries, Tier 0 alone covers most of the retailers we care about. A single Serper call (~$0.001) often beats 6 separate retailer-specific scrapes ($0.01 to $0.05 total). The 30% of queries where Tier 0 misses retailers we want still benefit because Tier 1 only fires for the missing cells, not all 6.
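
Put together, the dispatch reads roughly like this sketch, where serperShoppingSearch and scrapeRetailer stand in for the real Serper client and per-retailer scrapers (the real module may be shaped differently):

```ts
// Two-tier dispatch sketch. serperShoppingSearch and scrapeRetailer are
// stand-ins; the real module in apps/marketing/src/lib/cross-retailer/
// may be shaped differently.
type RetailerSlug =
  | "walmart" | "bestbuy" | "target" | "ebay" | "lowes" | "homedepot";
const ALL_SLUGS: RetailerSlug[] =
  ["walmart", "bestbuy", "target", "ebay", "lowes", "homedepot"];

interface Offer { retailer: RetailerSlug; price: number; url: string }

declare function serperShoppingSearch(title: string): Promise<Offer[]>;          // Tier 0, ~$0.001/call
declare function scrapeRetailer(slug: RetailerSlug, title: string): Promise<Offer | null>; // Tier 1

async function crossRetailerOffers(title: string): Promise<Offer[]> {
  // Tier 0: one broad Shopping search covers many retailers at once.
  const tier0 = await serperShoppingSearch(title);
  const covered = new Set(tier0.map((o) => o.retailer));

  // Tier 1: per-retailer scrapers run in parallel, but only for the
  // cells Tier 0 missed -- never all six unconditionally.
  const missing = ALL_SLUGS.filter((slug) => !covered.has(slug));
  const tier1 = await Promise.all(missing.map((slug) => scrapeRetailer(slug, title)));

  return [...tier0, ...tier1.filter((o): o is Offer => o !== null)];
}
```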

Real numbers

For one validation product (Walmart item 19667262713, an electric pressure cooker), one Serper Shopping call returned offers from Best Buy ($69.99), eBay ($100.71), Target ($79.99), Home Depot ($181.64), and Lowe's ($59.99). Five of our six target retailers, in a single ~$0.001 call. Walmart was already covered by our primary catalog.

For another validation product (UPC 194629116676, a school-supply pencil-grip set), Serper returned only specialty merchants (Teacher Tools Inc, Office Depot, etc.). None of our 6 target retailers carried it. Tier 1 then ran for all 6, and all returned not_found. The page renders correctly with empty cells marked "No listing," and the retailer-presence pattern matches what you'd find checking each retailer manually.

Title quality matters

Tier 0 only works if the input title is reasonable. Walmart's titles are usually decent (brand + model + key attribute). When the title is generic ("Pack of Pens") or the product is a common SKU sold under many private labels, Serper's results trend toward unrelated products. We mitigate in three ways (sketched in code after the list):

  • Trimming titles to 120 characters (Serper hard-caps query length, and very long titles waste budget on attribute noise)
  • Filtering Serper offers by image-match heuristics (planned, not yet shipped)
  • Falling back to brand + model + UPC concatenation when the bare title is too generic
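
A rough version of that title-preparation step, assuming a hypothetical buildTier0Query helper. Only the 120-character trim and the brand + model + UPC fallback come from the list above; the generic-title test is an illustrative heuristic:

```ts
// Title preparation before the Tier 0 call. Only the 120-char trim and
// the brand + model + UPC fallback are from the post; the generic-title
// check is an illustrative heuristic.
interface ProductMeta { title: string; brand?: string; model?: string; upc?: string }

function buildTier0Query(p: ProductMeta): string {
  // Crude generic-title check: few words and no digits or model tokens.
  const tooGeneric = p.title.trim().split(/\s+/).length <= 3 && !/\d/.test(p.title);

  const base = tooGeneric && (p.brand || p.model || p.upc)
    ? [p.brand, p.model, p.upc].filter(Boolean).join(" ") // brand + model + UPC fallback
    : p.title;

  // Trim to 120 chars: long tails are mostly attribute noise.
  return base.slice(0, 120).trim();
}

// buildTier0Query({ title: "Pack of Pens", brand: "Bic", upc: "012345678905" })
//   -> "Bic 012345678905"   (UPC here is a made-up example)
```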

Cost picture

At 100k page views per month with a 95% cache hit rate (most pages are repeat visits or revalidations of already-fetched products), real Serper volume is around 5,000 calls per month, or about $5 at the entry pricing tier. Add catalog cost (already paid) and per-retailer Tier 1 fallbacks for the queries where Tier 0 missed (~$2/mo at this volume), and the total cross-retailer enrichment bill is around $7 per 100k page views. Compare that to an architecture that scrapes each retailer independently for every uncached page: roughly $15 to $25 per 100k page views, 2x to 4x more expensive.
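
Spelled out, the arithmetic is simple. All rates below are the approximate figures quoted above, not a pricing commitment:

```ts
// Back-of-envelope version of the cost paragraph above.
const pageViews = 100_000;        // per month
const cacheHitRate = 0.95;        // repeat visits / revalidations
const serperCostPerCall = 0.001;  // USD, entry pricing tier (approximate)

const coldMisses = pageViews * (1 - cacheHitRate);  // 5,000 Serper calls
const tier0Spend = coldMisses * serperCostPerCall;  // ~$5/mo
const tier1Spend = 2;                               // ~$2/mo of per-retailer fallbacks

console.log(tier0Spend + tier1Spend);               // ~$7 per 100k page views
```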

What this means for builders

If you are building a product-data tool in 2026 and your design assumes "search by UPC" is a thing on Google Shopping, your design is wrong. Plan around title-based search from the start. The fallback per-retailer scrapers are the safety net, not the primary path.

If you are building on top of retailerapi, you don't have to think about this. We do the Tier 0 + Tier 1 dispatch on every cold cache miss. Your code calls lookup_product and gets back a populated cross_retailer field, eventually (once the dispatch has run for that product).
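
To make the shape concrete, here is a hypothetical caller. The package name, constructor, and response fields beyond lookup_product and cross_retailer are assumptions, not the documented API:

```ts
// Illustrative client usage. Package name, constructor, and field names
// beyond lookup_product / cross_retailer are assumptions.
import { RetailerApi } from "retailerapi"; // hypothetical package name

const client = new RetailerApi({ apiKey: process.env.RETAILERAPI_KEY });

// UPC from the validation example earlier in the post.
const product = await client.lookup_product({ upc: "194629116676" });

for (const offer of product.cross_retailer ?? []) {
  console.log(offer.retailer, offer.price, offer.url);
}
// On a cold cache miss, cross_retailer may be empty on the first call and
// populated on a later one, after the Tier 0 + Tier 1 dispatch finishes.
```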

Try retailerapi free and inspect the cross-retailer dispatch in real time on any public product page.


Free account

Build with retailerapi

1,000 free lookups per month, no credit card. Cross-retailer price and history across every major US retailer that carries the product.