Week of 2026-04-13
Key Finding: 112 New Domains Entered the Citation Set — Including a Google Maps URL
This week's most significant signal is not the new domains themselves but what they reveal: AI engines are pulling from sources with zero citation history across the prior 4 weeks. Agency501.com appeared 11 times in ChatGPT citations in a single week, jumping from zero to top-entrant status with no gradual ramp. The citation set is not stable; it resets faster than most practitioners assume.
Which New Domains Broke Into AI Citations This Week?
112 domains entered the Digital Marketing citation set for the first time in the 5-week tracking window, with ChatGPT driving the majority of new entries. Agency501.com led with 11 citations. Notably, a Google Maps search URL appeared 5 times: ChatGPT cited a local business map listing as a source for agency queries, a structurally different content type from editorial or blog content. A sketch of how new entrants can be detected follows the table.
| Domain | Engine | Citations | Content Type |
|---|---|---|---|
| agency501.com | ChatGPT | 11 | Agency service page |
| fueler.io | ChatGPT | 9 | Blog / AI tools list |
| agencyanalytics.com | Claude | 7 | Competitor comparison page |
| pwc.com | ChatGPT | 5 | AEO explainer article |
| conbersa.ai | ChatGPT | 5 | AI search measurement guide |
| benly.ai | ChatGPT | 5 | Agency reporting tools list |
| google.com (Maps) | ChatGPT | 5 | Local business map listing |
| swydo.com | Claude | 4 | White-label SEO tools blog |
| evergreenfeed.com | Claude | 4 | Social media tools blog |
| planable.io | Claude | 4 | Agency tools blog |
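For readers tracking their own citation logs, here is a minimal sketch of the new-entrant check, assuming a flat log of (week, engine, domain) records. The field names, sample rows, and 5-week window are assumptions for illustration, not the report's actual pipeline:

```python
from collections import Counter

# Hypothetical citation log: one (week, engine, domain) record per citation event.
citations = [
    ("2026-03-16", "ChatGPT", "hubspot.com"),
    ("2026-04-13", "ChatGPT", "agency501.com"),
    ("2026-04-13", "ChatGPT", "agency501.com"),
    # ... full log ...
]

CURRENT_WEEK = "2026-04-13"  # ISO dates compare correctly as strings

prior_domains = {d for week, _, d in citations if week < CURRENT_WEEK}
current_counts = Counter(d for week, _, d in citations if week == CURRENT_WEEK)

# A new entrant is any domain cited this week with zero prior history.
new_entrants = {d: n for d, n in current_counts.items() if d not in prior_domains}
for domain, n in sorted(new_entrants.items(), key=lambda kv: -kv[1]):
    print(domain, n)
```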
How Fragmented Is Cross-Engine Agreement Right Now?
Cross-engine agreement across 31 Digital Marketing queries sits at 3 on a 100-point scale this week, against a category benchmark of 89. That 86-point gap means practitioners cannot assume a citation strategy built around one engine transfers to the others. ChatGPT and Perplexity are citing almost entirely different source sets for the same queries.
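The report does not publish the formula behind the 0-100 agreement score. One plausible reading, used in the sketch below, is mean pairwise Jaccard overlap between the domain sets each engine cites for the same query; the query and domain values shown are placeholders:

```python
from itertools import combinations

# Hypothetical per-query citation sets: query -> engine -> cited domains.
cited = {
    "best b2b saas marketing agency": {
        "ChatGPT": {"agency501.com", "fueler.io"},
        "Perplexity": {"clutch.co", "fueler.io"},
        "Claude": {"agencyanalytics.com"},
    },
    # ... remaining 30 queries ...
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Average the overlap over every engine pair for every query.
pair_scores = [
    jaccard(engines[e1], engines[e2])
    for engines in cited.values()
    for e1, e2 in combinations(sorted(engines), 2)
]
agreement = 100 * sum(pair_scores) / len(pair_scores)
print(f"cross-engine agreement: {agreement:.0f}/100")
```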
Are AI Agents Using Different Sources Than Standard Query Engines?
Yes. Only 23% of domains cited in agentic-style queries also appear in standard evaluation queries this week. Agentic queries produced citations from 174 distinct domains; standard queries cited 155. The 40-domain overlap (40 of 174, roughly 23%) is the only shared ground. Content built for conversational Q&A is not automatically surfaced when an AI agent executes a multi-step task in the Digital Marketing category.
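The arithmetic behind the 23% figure is a plain set intersection. The sketch below uses placeholder domain names sized to this week's counts (174 agentic, 155 standard, 40 shared):

```python
# Placeholder domain sets sized to this week's figures; real names omitted.
shared = {f"shared-{i}.example" for i in range(40)}
agentic_only = {f"agentic-{i}.example" for i in range(134)}
standard_only = {f"standard-{i}.example" for i in range(115)}

agentic = agentic_only | shared    # 174 domains
standard = standard_only | shared  # 155 domains

overlap = agentic & standard
print(len(agentic), len(standard), len(overlap))           # 174 155 40
print(f"overlap rate: {len(overlap) / len(agentic):.0%}")  # 23%
```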
Why Does FAQ Schema Rate Vary 6 Points Across Engines?
Claude cited pages with FAQ schema at a 15.89% rate this week. Perplexity cited FAQ-schema pages at 9.66%, a 6.2-point gap, and ChatGPT sat at 11.40%. The category benchmark is 9.89%. Claude is selecting for structured Q&A markup far more aggressively than Perplexity, which means the same FAQ schema investment produces uneven citation returns depending on which engine a brand optimizes toward. A detection sketch follows the table below.
| Engine | FAQ Schema Rate | Δ vs. Benchmark |
|---|---|---|
| Claude | 15.89% | +6.0 pts |
| ChatGPT | 11.40% | +1.51 pts |
| Perplexity | 9.66% | −0.23 pts |
| Benchmark | 9.89% | — |
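Measuring your own exposure to this gap starts with detecting FAQ schema on cited pages. The sketch below checks for schema.org FAQPage markup embedded as JSON-LD, one common way the markup appears; the function and regex names are ours, and microdata or RDFa variants are out of scope:

```python
import json
import re

# Matches <script type="application/ld+json"> blocks in raw HTML.
JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def has_faq_schema(html: str) -> bool:
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed JSON-LD blocks
        items = data if isinstance(data, list) else [data]
        if any(isinstance(i, dict) and i.get("@type") == "FAQPage" for i in items):
            return True
    return False

# An engine's FAQ schema rate is then:
#   pages where has_faq_schema(...) is True / all cited pages, per engine.
```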
What Does the Average Word Count of Cited Pages Tell Us?
Pages cited by AI engines in Digital Marketing this week averaged 6,706 words. This is the average word count of the pages the engines selected as sources, not the length of the AI responses. A single-topic agency service page or a local Maps listing being cited alongside 6,706-word editorial content confirms that citation triggers are not length-dependent. Agency501.com earned its 11 ChatGPT citations this week with no citation history in the prior 4 weeks and without long-form content as a prerequisite.
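A caveat worth making explicit: 6,706 is a mean, and a mean over page lengths is easily pulled up by a few very long pages. The values below are hypothetical (the report publishes only the mean) and show how short cited pages and a roughly 6,700-word average can coexist:

```python
import statistics

# Hypothetical word counts for illustration only; one long page skews the mean.
word_counts = [450, 820, 1300, 2100, 3400, 32156]

print(statistics.mean(word_counts))    # ~6704.3, near the reported average
print(statistics.median(word_counts))  # 1700.0, the typical page is far shorter
```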
This Week's Single Action
Audit your agentic query coverage immediately. With only 23% overlap between agentic and standard citation domains this week, identify three task-style queries relevant to your category (e.g., "find me a digital marketing agency that specializes in B2B SaaS") and test whether your domain appears. The 174-domain agentic citation pool operates on different source logic from standard queries, and this week's data shows that gap is not marginal. A minimal audit sketch follows.
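How you collect the cited URLs per query depends on your tooling, whether engine APIs or a tracking platform; the query-to-URL structure and MY_DOMAIN below are assumptions for illustration:

```python
from urllib.parse import urlparse

MY_DOMAIN = "youragency.example"  # hypothetical domain under audit

# Hypothetical task-style queries mapped to the URLs an engine cited.
task_query_citations = {
    "find me a digital marketing agency that specializes in B2B SaaS": [
        "https://agency501.com/services",
        "https://www.clutch.co/agencies",
    ],
    "build a shortlist of white-label SEO reporting tools": [
        "https://swydo.com/blog/white-label-seo-tools",
    ],
    "compare agency analytics platforms and recommend one": [],
}

for query, urls in task_query_citations.items():
    # Normalize hostnames so www. prefixes do not hide a match.
    domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
    status = "PRESENT" if MY_DOMAIN in domains else "absent"
    print(f"[{status:7}] {query}")
```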