Target Query Mapping
Before you can measure AI visibility, you need to know what to measure. Target query mapping identifies the specific questions users ask AI assistants about your entity, your category, and your competitors — then maps each query to a page on your site that should answer it. This foundational exercise ensures that your measurement efforts focus on the queries that actually drive business outcomes, rather than on vanity metrics that look impressive but deliver little value.
The goal is simple: for every important question a user might ask an AI about your space, you should have a clear answer on your site. When you know which queries matter and which pages should answer them, you can systematically test, measure, and improve your AI visibility across both retrieval and generation paths.
Primary Queries
Primary queries are the questions most likely to drive business outcomes. These are the queries where being cited or recommended by an AI assistant directly influences whether a potential customer discovers you, trusts you, or chooses you over a competitor. Every organization should identify and prioritize their primary queries first.
Category queries follow the pattern "best [category] for [audience]" or "best [category] in [location]" — for example, "best project management software for startups." These queries represent users actively seeking solutions in your space. When an AI recommends your product in response to a category query, you gain visibility at the exact moment of decision-making. Your homepage, main product page, or a dedicated solutions page should answer these queries.
Identity queries follow the pattern "what is [business]" — for example, "what is Acme Software." These queries indicate that a user has heard of you and wants to learn more. The AI's response shapes their first impression. Your about page or homepage should provide a clear, accurate, and compelling answer to identity queries.
Trust queries follow the pattern "is [business] legit" or "is [business] reliable" — for example, "is Acme Software legit." These queries emerge when users are evaluating whether to proceed with a purchase or engagement. A dedicated trust page, reviews page, or about section with credentials and social proof should answer these queries definitively.
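The three primary patterns above are simple templates that can be expanded per entity. A minimal sketch, assuming Python string templating; the business, category, and audience values are illustrative placeholders:

```python
# Primary query patterns from the section above, as format templates.
PRIMARY_PATTERNS = [
    "best {category} for {audience}",  # category query
    "what is {business}",              # identity query
    "is {business} legit",             # trust query
]

def primary_queries(business, category, audience):
    """Expand each primary pattern into a concrete query string."""
    fields = {"business": business, "category": category, "audience": audience}
    return [pattern.format(**fields) for pattern in PRIMARY_PATTERNS]

queries = primary_queries("Acme Software", "project management software", "startups")
# → ["best project management software for startups",
#    "what is Acme Software",
#    "is Acme Software legit"]
```

The same templating approach extends to the secondary and long-tail patterns later in this section: keep the patterns in one list per tier and regenerate the concrete queries whenever your entity, category, or competitor set changes.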
The following table maps common primary query patterns to their examples and target pages:
| Query Pattern | Example | Target Page |
|------------------------------------|----------------------------------------------|-----------------------|
| best [category] for [audience]     | best project management software for startups| /solutions/startups   |
| what is [business] | what is Acme Software | /about |
| is [business] legit | is Acme Software legit | /about/trust |
| [category] for [audience] | project management for remote teams | /solutions/remote |
| top [category] tools               | top task management tools 2024                | /comparisons          |

Secondary Queries
Secondary queries indicate active evaluation. Users asking these questions have moved beyond initial discovery and are now comparing options, reading reviews, and assessing pricing. While these queries may drive fewer total visits than primary queries, they often represent higher-intent users who are closer to a decision.
Comparison queries follow the pattern "[business] vs [competitor]" — for example, "Acme Software vs Basecamp." Users want a direct comparison to help them choose. A dedicated comparison page that honestly addresses differences, strengths, and use cases performs best here. Avoid overly promotional content; users seeking comparisons want balanced information.
Review queries follow the pattern "[business] review" or "[business] reviews 2024." Users want to hear from others who have used your product. A reviews page, testimonials section, or case studies page should aggregate and present social proof. Fresh, dated reviews signal ongoing relevance.
Pricing queries follow the pattern "[business] pricing" or "how much does [business] cost." Users want to know if your solution fits their budget before investing more time in evaluation. A clear, comprehensive pricing page removes friction and builds trust. Hiding pricing often backfires with AI-driven discovery.
Alternative queries follow the pattern "[competitor] alternative" — for example, "Basecamp alternative." Users are dissatisfied with a competitor or exploring options. Your comparison page for that competitor, positioned to highlight your differentiators, should capture this traffic.
| Query Pattern | Example | Target Page |
|----------------------------|--------------------------------|--------------------------|
| [business] vs [competitor] | Acme Software vs Basecamp | /comparisons/basecamp |
| [business] reviews 2024 | Acme Software reviews 2024 | /reviews |
| [business] pricing | Acme Software pricing | /pricing |
| [competitor] alternative   | Basecamp alternative           | /comparisons/basecamp    |

Long-Tail Queries
Long-tail queries are specific, multi-word questions that indicate high intent and often map to deeper content on your site. While each individual long-tail query may have low volume, collectively they represent a significant portion of AI-driven discovery. Users asking long-tail queries know exactly what they need and are often ready to act once they find it.
Feature + location queries follow the pattern "[category] [specific feature] [location]" — for example, "project management software with Gantt charts for remote teams." These queries signal that the user has specific requirements. Feature pages, solution pages, and detailed product documentation should address these queries by highlighting relevant capabilities.
Use case queries follow the pattern "[category] for [use case]" — for example, "task management for marketing agencies." Users want to know if your solution fits their specific context. Industry-specific solution pages, case studies, and use case guides should demonstrate relevance to their situation.
Problem queries follow the pattern "how to [solve problem]" — for example, "how to manage distributed team workflows." Users are seeking guidance, not necessarily a product. Educational content like guides, tutorials, and blog posts that genuinely help solve the problem — while naturally introducing your solution — performs well here.
Long-tail queries map to your deeper content: feature pages, integration pages, guides, blog posts, and documentation. This content often lives further from your homepage but plays a critical role in capturing specific, high-intent queries.
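A rough way to bucket an incoming query into the three tiers described so far is keyword matching against the patterns. A sketch under simple, illustrative heuristics — the rules below are examples, not an exhaustive classifier:

```python
import re

def classify_query(query: str, business: str, competitors: list) -> str:
    """Bucket a query as primary, secondary, or long-tail (rough heuristics)."""
    q = query.lower()
    b = business.lower()
    # Secondary: comparison, review, pricing, or alternative intent,
    # or any mention of a known competitor.
    if any(c.lower() in q for c in competitors) or \
       re.search(r"\b(vs|reviews?|pricing|cost|alternative)\b", q):
        return "secondary"
    # Primary: category ("best/top ...") or identity/trust intent
    # ("what is X", "is X legit") around the entity itself.
    if re.search(r"\b(best|top)\b", q) or \
       (q.startswith(("what is", "is ")) and b in q):
        return "primary"
    # Everything else: specific, multi-word long-tail intent.
    return "long-tail"
```

For example, "Acme Software vs Basecamp" lands in secondary, "what is Acme Software" in primary, and "how to manage distributed team workflows" in long-tail. In practice you would refine the rules against your own query list rather than rely on these defaults.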
| Query Pattern | Example | Target Page |
|----------------------------------------------|------------------------------------------------------|---------------------------------|
| [category] [specific feature] [location]     | project management software with Gantt charts for remote teams | /features/gantt-charts |
| [category] for [use case] | task management for marketing agencies | /solutions/marketing-agencies |
| how to [solve problem] | how to manage distributed team workflows | /guides/distributed-workflows |
| [category] with [integration]                | project management with Slack integration            | /integrations/slack             |

Building Your Query List
Building a comprehensive query list requires systematic effort. Follow this step-by-step process to create a query map that covers your most important opportunities:
Step 1: List 3-5 primary queries. Start with your core category queries, then add your identity query and at least one trust query. These are non-negotiable — every organization should track their primary queries.
Step 2: Add 3-5 secondary queries. Identify your top 2-3 competitors and create comparison queries for each. Add review and pricing queries. If you operate in a competitive space, add alternative queries for each major competitor.
Step 3: Add 5-10 long-tail queries. Review your most important features, integrations, and use cases. Create queries that combine your category with specific capabilities, audiences, or problems. Draw from customer conversations, support tickets, and sales calls to identify the specific language users employ.
Step 4: For each query, assign the target page on your site that should answer it. Be specific — link to the exact URL. If multiple pages could answer a query, choose the best one and note the others as secondary.
Step 5: If no page exists for a query, that is a content gap. Mark it clearly and prioritize creating the missing content. Content gaps represent missed opportunities for AI visibility.
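The five steps above amount to maintaining a small table of (priority, query, target page, status), with content gaps flagged wherever no page is assigned. A minimal sketch of that structure; the queries and URLs are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryMapping:
    priority: str                # "primary", "secondary", or "long-tail"
    query: str
    target_page: Optional[str]   # exact URL path, or None if no page exists yet

    @property
    def status(self) -> str:
        # Step 5: a query with no assigned target page is a content gap.
        return "Live" if self.target_page else "Content Gap"

query_map = [
    QueryMapping("primary", "best project management software for startups",
                 "/solutions/startups"),
    QueryMapping("secondary", "Acme Software vs Monday", None),  # page not built yet
    QueryMapping("long-tail", "project management with Slack integration",
                 "/integrations/slack"),
]

# Content gaps to prioritize for new content.
gaps = [m.query for m in query_map if m.status == "Content Gap"]
```

Keeping the map in a structured form like this (or a simple spreadsheet with the same columns) makes the gap list trivial to regenerate after each quarterly review.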
The following example shows a complete query list for a fictional SaaS company, with queries mapped to target pages and content gaps identified:
| Priority | Query | Target Page | Status |
|------------|----------------------------------------------------|---------------------------------|---------------|
| Primary | best project management software for startups | /solutions/startups | Live |
| Primary | what is Acme Software | /about | Live |
| Primary | is Acme Software legit | /about/trust | Live |
| Primary | top task management tools 2024 | /comparisons | Live |
| Primary | project management for small business | /solutions/small-business | Live |
| Secondary | Acme Software vs Basecamp | /comparisons/basecamp | Live |
| Secondary | Acme Software vs Monday | /comparisons/monday | Content Gap |
| Secondary | Acme Software reviews | /reviews | Live |
| Secondary | Acme Software pricing | /pricing | Live |
| Secondary | Basecamp alternative | /comparisons/basecamp | Live |
| Long-tail | project management with Gantt charts | /features/gantt-charts | Live |
| Long-tail | task management for marketing agencies | /solutions/marketing-agencies | Content Gap |
| Long-tail | how to manage distributed team workflows | /guides/distributed-workflows | Live |
| Long-tail | project management with Slack integration | /integrations/slack | Live |
| Long-tail  | how to track project deadlines across teams        | /guides/deadline-tracking       | Content Gap   |

Testing Queries Across Both Paths
The same queries should be tested across both the AEO retrieval path and the GEO generation path. These two paths function differently, and your visibility may vary significantly between them.
With web search ON: Test whether the AI finds your page and cites it when answering the query. This is the AEO path — your content must be discoverable, relevant, and authoritative enough for the AI to retrieve and reference it. Look for direct citations, links to your pages, and accurate information pulled from your content.
With web search OFF: Test whether the AI mentions your entity from its training data alone. This is the GEO path — your entity must be sufficiently prominent in the AI's training corpus for the model to know about you and speak accurately about what you do. Look for unprompted mentions, correct descriptions, and positive sentiment.
Different queries perform differently on each path. Category queries like "best X in Y" tend to trigger retrieval because they require current, comparative information that models cannot reliably answer from training data alone. Identity queries like "what is X" may be answered from training data if your entity is well-known, but will trigger retrieval if the entity is less prominent or if the user asks for recent information.
By testing the same queries across both paths, you identify where your visibility is strong, where it is weak, and whether your opportunities lie primarily in improving your content for retrieval or in building broader entity prominence for generation.
For detailed guidance on scoring and benchmarking your results, see the AI Visibility Score protocol.
Quarterly Query Review
Your query map is not a one-time exercise. Competitors enter the market, features ship, and dated queries (such as "Acme Software reviews 2024") age out. Revisit your list quarterly: add queries for new competitors and capabilities, retire queries that no longer matter, and refresh year-stamped queries. With your query list established, tested, and on a review cadence, you are ready to move from measurement to action. Visit Implementation for optimization strategies that improve your visibility on the queries that matter most. For guidance on structuring your content to perform well across both paths, explore Content Architecture Patterns.