Your GSC Performance report just got a natural language interface
On December 4, 2025, Google quietly added something to Search Console that most site owners scrolled past: an AI-powered configuration tool for the Performance report. By February 2026, it rolled out globally to every Search Console user.
The tool is straightforward. Instead of clicking through dropdown menus to set filters, date ranges, and metric selections, you type what you want to see in plain English. “Show me queries with high impressions but low CTR on mobile for the last 3 months” — and the tool configures the report for you.
Most coverage has focused on the convenience angle. Fewer clicks, faster reports, nice quality-of-life update. That’s true but misses the point. The interesting question isn’t what the tool does — it’s what it lets you find. Because the real bottleneck in CTR analysis was never speed. It was the comparisons you never ran because the manual workflow made them tedious enough to skip.
This guide is specifically about using the AI configuration tool for CTR analysis — the five analyses it makes practical, the monthly audit workflow it enables, and the gaps it surfaces that you’re almost certainly missing right now.
Key Takeaways
- The AI tool turns plain English into performance filters — it handles all six filter dimensions (queries, pages, countries, devices, search appearance, dates), metric selection, and period-over-period comparisons, eliminating a manual workflow that typically required 8-10 clicks per configuration. Read the full CTR optimization guide for the broader context on why these filters matter.
- The real value isn’t speed — it’s the comparisons you never ran. Multi-dimensional filters like “mobile CTR for non-branded queries on /blog/ pages vs /product/ pages” are now a single sentence instead of a stacked sequence of dropdowns. That means you’ll actually run them.
- It only works on the Search results Performance report — Discover and Google News are excluded. It can’t sort results, can’t export data, and can sometimes misinterpret your request, so always verify the filters it applies before drawing conclusions.
What the AI configuration tool actually does (and what it doesn’t)
Before diving into CTR-specific use cases, it’s worth understanding the tool’s exact capabilities. Knowing the boundaries prevents frustration and helps you write better prompts.
Three capabilities
Filters across six dimensions. The tool can apply filters on queries, pages (URLs), countries, devices, search appearance, and date ranges. You can combine multiple dimensions in a single request. Example prompt: “Show me clicks and impressions for queries containing ‘pricing’ on mobile devices in Germany for the last 90 days.”
Date range comparisons. It handles period-over-period comparisons that previously required multiple steps to configure. Example prompt: “Compare average CTR this quarter versus the same quarter last year for all pages under /blog/.” This is where it saves the most time — understanding how click patterns shift over time requires exactly these kinds of before-and-after views.
Metric selection. It toggles the four core metrics — clicks, impressions, average CTR, and average position — based on your request. Example prompt: “Show only CTR and impressions for branded queries in the last 28 days.”
The limitations
The tool has clear boundaries you need to know upfront:
- Search results only. It works exclusively on the Performance report for web search. If you need Discover or Google News data, you’re still using the manual interface.
- No sorting or exporting. Once the filters are applied, you can’t ask the AI to sort the results table by CTR or export the data to a spreadsheet. You’ll need to do that manually or through the Search Console API.
- AI can misinterpret. Google’s own documentation warns: “AI can sometimes misinterpret requests. Always review the suggested filters to ensure they match your intention before analyzing the data.” This is especially important for complex multi-filter requests — verify before you act on the numbers.
- No saved configurations. You can’t save a prompt as a recurring report. Each time you return to the Performance report, you start fresh. For recurring analyses, keep your best prompts in a document you can copy-paste from.
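If you need the same filtered view as raw data, the Search Console API's searchAnalytics.query endpoint is the export path. A minimal sketch of the request body (the function name and defaults are mine; the field names follow the v3 API, while authentication and the HTTP call itself are omitted):

```python
def build_sa_query(start_date, end_date, dimensions, device=None, row_limit=1000):
    """Build a request body for the Search Console searchAnalytics.query endpoint."""
    body = {
        "startDate": start_date,    # ISO dates, e.g. "2026-01-01"
        "endDate": end_date,
        "dimensions": dimensions,   # e.g. ["query", "device"]
        "rowLimit": row_limit,
    }
    if device:  # optional device filter: DESKTOP / MOBILE / TABLET
        body["dimensionFilterGroups"] = [{
            "filters": [{"dimension": "device",
                         "operator": "equals",
                         "expression": device}]
        }]
    return body

# A mobile-only query breakdown for Q1, ready to POST to the API
body = build_sa_query("2026-01-01", "2026-03-31", ["query"], device="MOBILE")
```

Pair this with a scheduled script and you get the recurring, saved configurations the AI tool doesn't offer.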
Manual workflow comparison
To set up a single filtered view before this tool, you’d click the filter dropdown, select the dimension, choose the filter type (contains, exact match, regex), enter the value, confirm — then repeat for each additional dimension. Add a date comparison and metric toggles, and you’re looking at 8-10 clicks minimum for a moderately complex view.
That’s not a lot of clicks for a single analysis. But it’s enough friction that most people default to simple, one-dimensional filters — top queries by clicks, top pages by impressions — and skip the multi-dimensional views entirely. The queries with high impressions but low CTR on mobile in a specific country? The page sections losing CTR quarter over quarter? Those analyses don’t happen because nobody wants to stack five filters to check a hunch.
That’s what changes. Not the ceiling of what’s possible — everything the AI tool does was already available manually. What changes is the floor of what you’ll actually bother to check.
Five CTR-focused analyses this tool makes practical
These aren’t theoretical. Each analysis targets a specific CTR gap that most site owners know they should check but rarely do because the manual workflow makes it impractical at scale. For each one: the prompt to use, what to look for in the results, and what to do about it.
1. High-impression, low-CTR queries
Prompt: “Show me queries with more than 500 impressions and less than 2% CTR in the last 3 months.”
What to look for: These are queries where Google is showing your pages to a lot of people but almost nobody clicks. The 500-impression threshold filters out long-tail noise; the 2% CTR cutoff flags clear underperformance. Sort the results by impressions (manually, after the AI applies filters) to prioritize the highest-volume opportunities first.
What to do: For each flagged query, check two things. First, does the query actually match the page Google is showing? Mismatches between query intent and page content are the most common cause of high impressions with low CTR. Second, look at your title and meta description for that page — are they compelling for this specific query? A generic title that ranks for hundreds of queries will underperform on most of them. Rewriting meta descriptions for your top 10-15 high-impression, low-CTR queries is one of the highest-ROI CTR improvements you can make. Pair that with strategic keyword placement in your titles to make the listing match the searcher’s intent more precisely.
Why this wasn’t practical before: GSC doesn’t let you filter by metric thresholds directly, and the AI tool inherits that limitation. It will translate a request like “more than 500 impressions” into the closest configuration it can, but you should verify the applied filters rather than assume the threshold took effect. To enforce the cutoffs exactly, you still need to export the data and apply them in a spreadsheet or script. What the tool does eliminate is the setup: it configures the dimension filters and date range, so the dataset you export is already the right one.
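Once the export lands in a script, applying the thresholds takes a few lines. A sketch in Python, assuming rows shaped like a GSC export (the row keys and function name are mine; the cutoffs come from the prompt above):

```python
def high_impression_low_ctr(rows, min_impressions=500, max_ctr=0.02):
    """Flag queries Google shows often but users rarely click.

    rows: list of dicts with 'query', 'impressions', 'clicks' keys.
    """
    flagged = [
        {**r, "ctr": r["clicks"] / r["impressions"]}
        for r in rows
        if r["impressions"] > min_impressions
        and r["clicks"] / r["impressions"] < max_ctr
    ]
    # Highest-volume opportunities first, per the prioritization advice above
    return sorted(flagged, key=lambda r: r["impressions"], reverse=True)
```

This also covers the sorting step the AI tool can't do for you.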
2. Mobile vs desktop CTR gaps
Prompt: “Compare average CTR on mobile versus desktop for the last 3 months.”
What to look for: A mobile CTR that’s more than 15% lower than desktop is a red flag. Some gap is normal — mobile SERPs show more features and ads above the fold, pushing organic results down. But a gap larger than 15% usually means your titles are getting truncated on mobile or your pages aren’t compelling on smaller screens.
What to do: Check your highest-impression page titles. Mobile title truncation typically kicks in at 55-60 characters. If your most important titles exceed that, the value proposition gets cut off on mobile — and so does your CTR. Rewrite titles to front-load the compelling part within 55 characters. Also check if your mobile SERP listings look different (missing review stars, FAQ dropdowns, or other rich results that show on desktop). Understanding how search intent maps to title and meta optimization helps you craft listings that work across both device types.
Advanced follow-up prompt: “Show me queries where mobile CTR is below 1.5% but desktop CTR is above 4% in the last 3 months.” This isolates the specific queries where the device gap is most severe — your highest-priority title rewrites.
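Both checks in this analysis are easy to script against exported data. A sketch (the 15% relative gap and the 55-character title budget come from the guidance above; the function names are mine):

```python
def device_gap_severe(mobile_ctr, desktop_ctr, threshold=0.15):
    """True when mobile CTR trails desktop CTR by more than the
    threshold, measured relative to desktop."""
    if desktop_ctr == 0:
        return False
    return (desktop_ctr - mobile_ctr) / desktop_ctr > threshold

def titles_at_truncation_risk(titles, limit=55):
    """Titles likely to be cut off on mobile SERPs (~55-60 char budget)."""
    return [t for t in titles if len(t) > limit]
```

A 2% mobile CTR against a 4% desktop CTR is a 50% relative gap, well past the red-flag line.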
3. Branded vs non-branded CTR
Prompt: “Compare CTR for queries containing [your brand name] versus queries not containing [your brand name] for the last 6 months.”
What to look for: Healthy branded CTR typically falls between 30% and 60%. Non-branded CTR varies widely by industry but generally sits between 2% and 5% for competitive terms. The diagnostic value is in the gaps: if your branded CTR is below 30%, something is wrong with your branded SERP presence (competitors bidding on your name, negative sentiment in results, or weak site links). If your non-branded CTR is below 2%, your organic listings aren’t competitive — your titles, descriptions, and rich results need work.
What to do: For weak branded CTR, audit your branded SERP. Search your brand name and check what appears: are competitors running ads on your terms? Are there negative results pushing down your listing? Are your site links clean and relevant? For weak non-branded CTR, the issue is almost always in the listing itself — the title doesn’t match the query well enough, the meta description is generic, or you’re missing rich results that competitors have. SERP position has a massive impact on expected CTR, so compare your non-branded CTR against position-adjusted benchmarks rather than absolute numbers.
Why 6 months matters: Branded CTR trends reveal brand health over time. A declining branded CTR often signals increasing competitor aggression or eroding brand trust — problems that affect your entire organic strategy, not just the branded queries.
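On exported query data, the branded/non-branded split is a simple aggregation. A sketch, assuming rows with query, clicks, and impressions fields (the case-insensitive substring match is a simplification; regex handles brand misspellings better):

```python
def branded_split(rows, brand):
    """Aggregate CTR separately for branded and non-branded queries."""
    buckets = {"branded": {"clicks": 0, "impressions": 0},
               "non_branded": {"clicks": 0, "impressions": 0}}
    for r in rows:
        key = "branded" if brand.lower() in r["query"].lower() else "non_branded"
        buckets[key]["clicks"] += r["clicks"]
        buckets[key]["impressions"] += r["impressions"]
    # Aggregate CTR per bucket (0.0 when a bucket has no impressions)
    return {k: (v["clicks"] / v["impressions"] if v["impressions"] else 0.0)
            for k, v in buckets.items()}
```

Run it monthly on the same 6-month export window and the branded number becomes the brand-health trend line described above.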
4. CTR by page section
Prompt: “Compare clicks, impressions, and CTR for pages containing /products/ versus pages containing /blog/ versus pages containing /services/ for the last 3 months.”
What to look for: Different sections of your site serve different intents and should have different CTR profiles. Product pages typically have higher CTR (transactional intent) while blog pages may have lower CTR (informational intent, more SERP competition). The red flag isn’t low absolute CTR — it’s declining CTR in a section that was previously performing well. A /products/ section that drops from 5% to 3% CTR quarter-over-quarter signals a problem worth investigating.
What to do: For any section showing CTR decline, dig deeper with a follow-up prompt targeting that specific section: “Show me the top queries driving impressions to pages containing /products/ with CTR below 3% in the last 3 months.” This narrows the problem from “products are underperforming” to specific queries and pages you can fix. Common causes include stale titles on seasonal content, new SERP features pushing your listings down, or competitor improvements in specific product categories.
Advanced follow-up: “Compare CTR for pages containing /products/ this quarter versus last quarter.” This adds the time dimension — you’re not just seeing current performance, you’re seeing direction. A section with moderate CTR that’s stable is less urgent than one with slightly better CTR that’s declining fast.
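The quarter-over-quarter red flag can be expressed as a relative-drop rule. A sketch (the 20% cutoff is an illustrative threshold, not a GSC value; tune it to your site's normal variance):

```python
def declining_sections(this_q, last_q, drop_threshold=0.2):
    """Flag page sections whose CTR fell more than drop_threshold
    (relative) quarter over quarter.

    Inputs map section path -> aggregate CTR, e.g. {"/products/": 0.05}.
    """
    flagged = {}
    for section, prev in last_q.items():
        cur = this_q.get(section)
        if cur is not None and prev > 0 and (prev - cur) / prev > drop_threshold:
            flagged[section] = {"previous": prev, "current": cur}
    return flagged
```

The article's example case, /products/ sliding from 5% to 3%, is a 40% relative drop and gets flagged immediately.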
5. SERP feature interference detection
Prompt: “Show me pages with stable average position but declining CTR over the last 6 months.”
What to look for: When your position holds steady but CTR drops, something else on the SERP is absorbing the clicks you used to get. The most common culprits: new featured snippets above your listing, AI Overviews appearing on queries you rank for, shopping results expanding into your space, or competitors earning rich results (review stars, FAQ dropdowns, how-to schema) that make their listings more visually prominent.
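The detection rule is mechanical once you export two comparison periods: position roughly unchanged, CTR down sharply. A sketch with illustrative thresholds (the 0.5-position tolerance and 20% CTR drop are assumptions, not GSC defaults):

```python
def silent_ctr_erosion(pages, max_position_shift=0.5, min_ctr_drop=0.2):
    """Pages whose average position held steady while CTR fell, i.e.
    something else on the SERP is absorbing the clicks.

    pages: dicts with 'page', 'position_then', 'position_now',
    'ctr_then', 'ctr_now' from two comparison periods.
    """
    flagged = []
    for p in pages:
        stable = abs(p["position_now"] - p["position_then"]) <= max_position_shift
        declining = (p["ctr_then"] > 0 and
                     (p["ctr_then"] - p["ctr_now"]) / p["ctr_then"] >= min_ctr_drop)
        if stable and declining:
            flagged.append(p["page"])
    return flagged
```

Pages that lost both position and CTR are a different (ranking) problem; this deliberately excludes them.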
What to do: For each affected page, manually search the top 3-5 queries driving its impressions. Note what’s on the SERP between the user’s search and your listing. If it’s a featured snippet, consider restructuring your content to win it. If it’s SERP features like review stars or FAQ schema, implement the relevant structured data on your pages. If it’s AI Overviews — an increasingly common cause of position-stable CTR decline — you’ll need a different approach: optimizing for citation within the AI Overview rather than competing against it.
Why this is the most valuable analysis: Most site owners monitor position as a proxy for performance. When position is stable, they assume everything is fine. This analysis catches the silent erosion that position tracking misses entirely — and in 2026, with SERP features and AI Overviews expanding aggressively, this category of CTR loss is growing fast.
Building a monthly CTR audit with the AI tool
Individual analyses are useful. A systematic monthly workflow is where the real value compounds. Here’s a five-step audit process you can run in under 30 minutes using the AI tool.
Step 1: Establish your baseline. At the start of each month, run: “Show me total clicks, impressions, average CTR, and average position for the last 28 days compared to the previous 28 days.” This gives you the top-level trend. Is overall CTR moving up, down, or flat? Document this number — it’s the benchmark everything else measures against.
Step 2: Run the five analyses above. Work through each of the five CTR-focused analyses. You’re looking for new problems that appeared since last month — not re-auditing everything from scratch. Focus on changes: queries that newly crossed the high-impression/low-CTR threshold, device gaps that widened, page sections where trends reversed.
Step 3: Prioritize by impression volume. Not all CTR problems are equal. A 1% CTR improvement on a query with 10,000 monthly impressions means 100 additional clicks. The same improvement on a query with 200 impressions means 2 clicks. Rank your findings by impression volume and focus your effort on the top 5-10 opportunities where CTR improvement translates to meaningful click volume.
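The prioritization math above is simple to script: estimated extra clicks = impressions * (target CTR - current CTR). A sketch (the target CTR is an assumption you would set per query set, e.g. from position-adjusted benchmarks):

```python
def rank_opportunities(rows, target_ctr=0.03):
    """Estimate the click upside of lifting each query to a target CTR,
    then rank by that upside.

    rows: dicts with 'query', 'impressions', 'ctr' (CTR as a fraction).
    """
    scored = []
    for r in rows:
        # Queries already above target contribute zero upside
        upside = r["impressions"] * max(target_ctr - r["ctr"], 0)
        scored.append({**r, "estimated_extra_clicks": round(upside)})
    return sorted(scored, key=lambda r: r["estimated_extra_clicks"], reverse=True)
```

This reproduces the article's arithmetic: a one-point CTR lift on 10,000 impressions scores 100 extra clicks, while the same lift on 200 impressions scores 2.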
Step 4: Act on the top opportunities. For the highest-priority findings, implement the fixes. That might mean A/B testing new page titles, adding schema markup to improve rich result eligibility, updating freshness signals on time-sensitive content, or restructuring pages to better match the intent behind declining queries.
Step 5: Measure and repeat. The following month, your first step (the baseline comparison) automatically measures whether last month’s fixes worked. CTR optimization is iterative — each monthly cycle should surface fewer new problems as your listings improve, while catching new issues (SERP changes, competitor moves, seasonal shifts) before they compound.
Keep a running document of your monthly prompts and findings. Over 3-6 months, you’ll build a clear picture of your CTR trajectory and the specific levers that move it most for your site.
What this tool doesn’t solve (and what does)
The AI configuration tool is genuinely useful for CTR analysis. It’s also important to be honest about where it stops.
It shows you the data but doesn’t act on it. Identifying a high-impression, low-CTR query is step one. Improving that CTR requires rewriting titles, updating meta descriptions, implementing schema markup, and sometimes restructuring content. The tool makes the diagnosis faster — the treatment still requires work.
It can’t explain why your CTR is low. The tool tells you that mobile CTR is 1.2% on a specific query set. It doesn’t tell you that it’s because your title gets truncated at character 48 on mobile, or because a featured snippet is sitting above your listing. Diagnosis still requires manual SERP inspection for your highest-priority findings.
It can’t measure what happens after the click. CTR is only half the equation. Dwell time, engagement depth, bounce rate, conversions — these determine whether the clicks you earn actually deliver value. A page with 8% CTR but 90% bounce rate isn’t performing well despite the strong click-through. GSC doesn’t track post-click behavior at all.
And here’s the bigger limitation: even if you identify and fix every CTR opportunity the tool surfaces, you’re still working within the constraint of how Google’s SERP distributes attention to your listings. The tool helps you maximize clicks within your current visibility. What it can’t do is amplify the user engagement signals that influence how much visibility you get in the first place.
Strong CTR is both a result and a cause — sites that earn more clicks send stronger preference signals to Google’s ranking systems, which improves their visibility, which earns more clicks. Breaking into that positive feedback loop, or preventing a negative one, often requires working on the signal layer directly, not just optimizing the listing. If you’re doing everything right on the technical and content side but your CTR still isn’t where it should be, the bottleneck may be upstream of what any analytics tool can fix.
FAQ
Q: Does the AI configuration tool work on Discover and Google News data?
A: No. The AI-powered configuration tool only works on the Performance report for Search results. If you need to analyze Discover or Google News performance, you’ll still need to use the standard manual filter interface for those reports. Google hasn’t indicated whether they plan to extend the AI tool to these other report types.
Q: Can the AI tool misinterpret my requests?
A: Yes, and Google explicitly warns about this. The tool’s AI can sometimes apply filters that don’t match what you intended, especially with complex multi-filter requests or ambiguous phrasing. Always check the applied filters before analyzing the data. If the configuration looks wrong, rephrase your prompt more specifically — for example, instead of “show me underperforming pages,” try “show me pages with CTR below 2% and more than 1,000 impressions in the last 90 days.” The more precise your language, the more accurate the configuration.
Q: Is this tool specifically useful for CTR analysis, or is it equally good for everything?
A: CTR analysis is arguably where this tool adds the most value. The reason: meaningful CTR analysis almost always requires multi-dimensional filtering — you need to segment by device, compare time periods, isolate page sections, and cross-reference position stability with CTR trends. These multi-dimensional views are exactly what people skipped in the manual interface because stacking filters was tedious. For simpler analyses like checking your top queries by clicks or reviewing impressions for a single page, the manual interface was already fast enough. The AI tool’s advantage scales with the complexity of the analysis, and CTR diagnostics are inherently complex.