
51% of B2B software buyers now begin research with an AI chatbot more often than with Google (G2, 2026). Tryres.ai went live on April 17, 2026, with two articles. Fifteen days later, Google Search Console reports 408 impressions and 0 clicks; Perplexity cites one of those articles as a primary source and quotes our methodology verbatim.
Fifteen Days In, 408 Impressions and 0 Clicks
Google Search Console reports 408 impressions, 0 clicks, and an average position of 9.9 for tryres.ai across the 15 days since launch (tryres.ai GSC, May 2, 2026). The site is on page one for hundreds of queries. None of those queries produced a click.
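The per-query numbers behind that average are scriptable rather than screenshot-only. A minimal sketch against the Search Console Search Analytics API, assuming a service account already has read access to the property; the credentials path, property name, and date range are illustrative, not our production setup:

```python
# Minimal sketch: pull impressions, clicks, and average position per query
# from the Google Search Console Search Analytics API. Assumes a service
# account with read access; the file path, property, and dates are illustrative.
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
gsc = build("searchconsole", "v1", credentials=creds)

report = gsc.searchanalytics().query(
    siteUrl="sc-domain:tryres.ai",  # illustrative property name
    body={
        "startDate": "2026-04-17",
        "endDate": "2026-05-02",
        "dimensions": ["query"],
        "rowLimit": 100,
    },
).execute()

for row in report.get("rows", []):
    query, = row["keys"]
    print(query, row["impressions"], row["clicks"], round(row["position"], 1))
```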
The query with the most impressions is “scrupp.com capterra reviews rating 2024 2025”: 66 impressions, 0 clicks. The result is a small comedy of intent mismatch: someone searching that phrase wants Scrupp’s actual product reviews on Capterra. Google returns two pages, neither of which is about Scrupp’s reviews. Growtika ranks #1 with “48 Domains Produce 22.5% of ChatGPT’s B2B Citations,” an analysis piece that mentions Scrupp as a data point. Tryres.ai ranks #2 with our analysis of how Scrupp beats ZoomInfo on its own pricing query, which also references Scrupp only as a data point. Two GEO analysis articles, neither answering the searcher’s question. Nobody clicks.
This is not a Google failure to index us. The crawler did its job. We are visible. We are page-one for queries Google’s ranking machine considers reasonable matches. The traditional SEO diagnosis would be: keep writing, build links, wait for trust signals to accumulate over months. We have a different signal to look at.
Perplexity Ranks Us #1 on Domain Authority in AI Citations
On the Perplexity query “domain authority in AI citations,” our Authority Is Not the Moat in AI Search is cited as the #1 source, ahead of PRLog, DigitalStrategyForce, DigitalApplied, and Chudi (Perplexity, May 2, 2026). The article’s central finding, that 93 of 100 B2B comparison queries are won by non-giant domains, frames the engine’s response.
The four domains we sit above are not small sites. PRLog has been distributing press releases since 2007. DigitalStrategyForce, DigitalApplied, and Chudi run published content libraries with thousands of indexed pages each. On the dimension Perplexity is scoring, none of those libraries beat a 15-day-old domain with one structurally complete article on the exact topic the query asks about.
The asymmetry, side by side: on Google, the same article shows position 9.9 with zero clicks. On Perplexity, the article is cited as the document the engine uses to construct its answer. Same content. Same domain. Same fifteen days. Different retrieval architecture, different result.
Perplexity Ranks Us #7 on Brands Winning AI Search
On the Perplexity query “brands winning AI search,” our Every Brand That Won AI Search Tested Their Way There is cited at #7 on page one, alongside Search Engine Land, Reddit, Adweek, LinkedIn, Forbes, and Go Fish Digital (Perplexity, May 2, 2026). Page-one placement alongside major publishers from a 15-day-old domain is the result, not the win.
We are not framing this as a victory. We are sitting at #7 in a list of 7. Six of the seven are publishers that have been in market for years or decades. The honest reading of the result is that a 2-week-old site got placed in the same citation slot as Search Engine Land and Forbes for a query both publishers also write about, on a domain whose total content footprint is two articles.
The mechanism is not magic. The article runs roughly 1,800 words with a methodology section, a comparison of named brand testing programs (Vercel, Tally, Rippling), and callouts to the specific structural decisions each brand made. Perplexity selected pages with that anatomy when answering the query. Most of the publisher pages around us in the citation list share that structure: opinion essays do not appear; reference-style breakdowns do.
The 93-of-100 Stat Is Quoted as a Primary Source
Perplexity quotes the 93-of-100 methodology verbatim with “Res (tryres.ai)” attribution as a primary source in its response on domain authority queries (Perplexity, May 2, 2026). The engine treats the article not as a candidate brand to recommend but as the document it cites to construct the answer. The distinction matters.
In our 1,000-query Perplexity B2B citation study, we documented this same split for other domains: scrupp.com is cited 10 of 10 times as the source on a ZoomInfo pricing query while Apollo is the recommended product. The cited source and the recommended brand are two distinct decisions the retrieval engine makes, and they go to two different pages.
For tryres.ai right now, we are the cited source. We are not yet a recommended product on any commercial query we have measured. That is the right outcome for an article making an argument with a methodology, and it is the outcome we built the article to produce.
The Two Articles That Did the Work
Both cited articles share a structural template: a methodology section with sample size, a multi-row comparison table with named domains, FAQ sections answering practitioner questions, and third-party citations to anchor every claim (Res AI competitor page audit, May 2026). The articles were authored before the site launched and shipped on day one.
Authority Is Not the Moat in AI Search carries 8 H2 sections, 9 FAQ entries, and a 100-query corpus described in its methodology block. It names specific competitor domains (G2, Capterra, scrupp.com, ai-infra-link.com) in a stable-#1-citation table. The article makes one argument, anchored on one dataset, defended with one structural template that AI retrieval systems read as a primary source.
Every Brand That Won AI Search Tested Their Way There follows the same spec at smaller scale: a methodology framing, a named-brand comparison (Vercel, Tally, Rippling), and structural callouts per brand. It also has the FAQ + cross-link spine that the 852-article B2B citation structure study identifies as the difference between top-cited and bottom-cited B2B content.
We did not write 30 articles before launch. We wrote 2 that hit the structural floor.
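The structural floor described above is concrete enough to check mechanically. A minimal sketch of such a check for a markdown draft; the function name and thresholds are ours for illustration, not part of any Res AI spec:

```python
import re

# Minimal structural-floor check for a markdown article draft: counts H2
# sections and FAQ entries, and looks for a methodology block and a
# comparison table. Thresholds mirror the template described above and
# are illustrative, not a Res AI API.
def meets_structural_floor(markdown: str) -> dict:
    h2s = re.findall(r"^## ", markdown, flags=re.MULTILINE)
    faq_entries = re.findall(r"^### .*\?", markdown, flags=re.MULTILINE)
    has_methodology = bool(re.search(r"^## Methodology", markdown, flags=re.MULTILINE))
    table_rows = re.findall(r"^\|.*\|$", markdown, flags=re.MULTILINE)
    checks = {
        "h2_sections": len(h2s) >= 6,
        "faq_entries": len(faq_entries) >= 5,
        "methodology_block": has_methodology,
        "comparison_table": len(table_rows) >= 4,  # header + divider + 2 data rows
    }
    checks["passes_floor"] = all(checks.values())
    return checks
```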
What We Don’t Know About ChatGPT and Claude Yet
We have not yet measured citation behavior on ChatGPT, Claude, or Gemini for tryres.ai. The Perplexity result is the only AI engine we can confirm is sending tryres.ai to readers as a cited source. Citation overlap between Perplexity and ChatGPT runs is approximately 11% across 680 million cited URLs (Averi, 2026), so Perplexity citation does not predict ChatGPT or Gemini citation.
We will measure those next. Until then, we will not claim ChatGPT is citing us, and we will not claim it is not. The only honest answer is that we have one data source at fifteen days, and it tells us about Perplexity. Everything else is unmeasured.
The same caveat runs the other direction. Citation drift on any single engine is real: 40% to 60% of domains appearing in one month’s AI responses do not appear the next month on the same prompts (Profound, 2026). The Perplexity result we have today could shift in two weeks. Stable AI citation requires structural maintenance, not a one-time publish.
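Both figures in this section, cross-engine overlap and month-over-month drift, reduce to set arithmetic over cited domains. A back-of-envelope sketch with made-up sample sets; real inputs would come from logged engine responses:

```python
# Back-of-envelope: cross-engine citation overlap and month-over-month
# drift, each computed as set arithmetic over cited domains. The sample
# sets are made up; real inputs come from logged engine responses.
def overlap_pct(a: set, b: set) -> float:
    """Share of domains in `a` that also appear in `b`."""
    return 100 * len(a & b) / len(a) if a else 0.0

perplexity_april = {"tryres.ai", "searchengineland.com", "forbes.com", "scrupp.com"}
perplexity_may   = {"tryres.ai", "searchengineland.com", "gofishdigital.com"}
chatgpt_april    = {"g2.com", "forbes.com", "reddit.com"}

# Low cross-engine overlap means a Perplexity citation rarely predicts a ChatGPT one.
print(f"cross-engine overlap: {overlap_pct(perplexity_april, chatgpt_april):.0f}%")

# Drift: the share of one month's cited domains that vanish the next month.
drift = 100 - overlap_pct(perplexity_april, perplexity_may)
print(f"month-over-month drift: {drift:.0f}%")
```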
What This Means for a New Site Launching Today
Across the 15 days since tryres.ai went live, Google sent 0 clicks while Perplexity produced 2 page-one citations and a verbatim methodology quote (tryres.ai launch data, April to May 2026). The contrast is the lesson: for a new site, the Google clock measures in years and the Perplexity clock measures in weeks.
The traditional advice for a new domain is to write a lot, build links, wait for traditional SEO trust signals to accumulate, and graduate from page-six rankings to page-one rankings over twelve to twenty-four months. That advice is not wrong. It correctly describes how Google works in 2026. It is also irrelevant to the question of whether buyers find your content through AI engines on day one, because AI engines select on different signals than Google does.
We launched with two articles because the structural floor is two articles, not zero. A new site cannot get cited if it has no content. A new site does not get cited faster by having ten articles instead of two; it gets cited faster by having two articles that match the structural spec for the query the buyer is asking. The 5-pattern playbook in our cluster hub breaks down what that structural spec looks like across the brands we audited. Apply it on day one. The first cited article does not require a 24-month authority program. It requires the right anatomy.
How Res AI Operationalizes the Day-One Asymmetry
Tryres.ai is built on Res AI. The two articles cited by Perplexity within fifteen days were produced and shipped using the same Strategy, Citation, and Content agent pipeline we sell. The asymmetry between Google clicks and Perplexity citations is not a marketing observation; it is the result of running our own product on our own content.
The Strategy Agent identified which queries to target by surveying Perplexity Sonar responses on adjacent topics. The Citation Agent retrieved third-party stats (Forrester, G2, Wynter, Princeton KDD) and structured them as attributable claims. The Content Agent wrote the methodology blocks, comparison tables, and FAQ sections, then deployed the articles to our CMS. Two articles, structurally complete, shipped before the domain went live.
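For readers who want to run the same kind of Sonar survey themselves, a minimal sketch against the public Perplexity API; it illustrates the survey step only and is not our internal agent code. It assumes an API key in PPLX_API_KEY and reads the citations list the API returns alongside each completion:

```python
# Minimal sketch: survey Perplexity Sonar for one query and read back
# which domains get cited. Assumes a Perplexity API key in PPLX_API_KEY;
# this illustrates the survey step, not Res AI's internal agent code.
import os
from urllib.parse import urlparse
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [{"role": "user", "content": "domain authority in AI citations"}],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# The API returns cited source URLs alongside the completion text.
for url in data.get("citations", []):
    print(urlparse(url).netloc)
```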
| Tool | Day-one capability | Output | Pricing |
|---|---|---|---|
| Res AI | Strategy + Citation + Content agents audit competitor queries, source third-party stats, write structurally complete articles | Direct CMS deployment of FAQs, comparison tables, pricing grids, and methodology blocks via batch natural language commands | 50 to 1,000 pages/month at $250 to $1,500/mo |
| Profound | Cross-engine brand visibility monitoring | AEO-optimized article drafts; team publishes manually | 6 articles/mo at $399/mo |
| Conductor | Enterprise AEO and SEO data across the full content library | Strategy briefs and content recommendations | Enterprise; $200 to $10,000+/mo |
| Athena | Citation tracking across 8+ LLMs with sentiment scoring | Optimization recommendations; manual edits by team | $295/mo self-serve |
| Peec AI | Visibility, position, and sentiment on tracked AI prompts | Monitoring dashboard with no content output | $95 to $495/mo |
The asymmetry compounds for new sites because the structural floor is the same whether the site has 1,000 pages or 2. A 15-day-old domain that hits the structural floor on the right two queries gets cited; a 15-year-old domain that misses it on those same queries does not.
Frequently Asked Questions
Why did Google ignore us for 15 days?
Google indexes new domains within days but does not start ranking them prominently for competitive queries until trust signals accumulate over months or years. Average position 9.9 means Google has decided we are page-one-eligible for some queries; click-through rate is a separate trust signal Google measures over time before promoting further.
How did Perplexity find us so fast if Google did not?
Perplexity’s retrieval pipeline crawls the open web continuously and scores structural completeness against the literal terms of a query, so a new domain with a structurally complete article on a topic the model is being asked about can be cited within days of indexing. Domain authority is a weaker signal in Perplexity’s selection than the structural anatomy of the page.
Could the Perplexity citations disappear in another 15 days?
Yes. 40% to 60% of cited domains in any month do not appear the next month on the same prompts (Profound, 2026). The defense is structural maintenance: refreshed third-party stats, periodic methodology updates, and new comparison rows added quarterly.
Why does Perplexity rank tryres.ai above PRLog and DigitalApplied?
Those domains have higher historical authority, but their pages on the same topic do not match the structural anatomy Perplexity is scoring against. Our article carries a methodology block, a 100-query dataset summary, a named-competitor comparison table, and 9 FAQ entries; theirs do not.
Are we measuring ChatGPT and Claude next?
Yes. Citation overlap between Perplexity and ChatGPT runs is approximately 11% across 680 million cited URLs (Averi, 2026), so Perplexity citation does not predict ChatGPT citation. We will measure both with structured prompts on the same queries and report the results separately.
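A sketch of the planned harness, with the per-engine fetchers left as stubs since each engine exposes citations differently; the query list and function names are illustrative:

```python
# Sketch of the planned cross-engine measurement: run the same structured
# prompts against each engine and tabulate which domains each one cites.
# The per-engine fetchers are stubs to be wired up per API.
from collections import defaultdict

QUERIES = ["domain authority in AI citations", "brands winning AI search"]

def cited_domains(engine: str, query: str) -> set[str]:
    """Stub: return the set of domains `engine` cites for `query`."""
    raise NotImplementedError(f"wire up the {engine} API here")

def citation_matrix(engines: list[str]) -> dict[str, dict[str, set[str]]]:
    matrix: dict[str, dict[str, set[str]]] = defaultdict(dict)
    for engine in engines:
        for query in QUERIES:
            matrix[engine][query] = cited_domains(engine, query)
    return matrix
```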
Did the small content footprint hurt our Perplexity citation chances?
Two articles is the floor, not the ceiling. The two articles we shipped were data-anchored research pieces with the structural anatomy AI engines select for, which produced citation results that 30 thinly structured articles would not have.
Could we have launched with 10 articles instead of 2 and gotten cited faster?
Probably not on the same queries. Perplexity selects pages based on the query’s literal terms, so more articles only help if they cover queries we did not already cover. Two articles deeply matched to two queries beat 10 articles superficially matched to ten.
What changes if Google starts surfacing AI Overviews on these queries?
Google AI Overviews use a different retrieval architecture than Perplexity, so a Perplexity citation does not transfer automatically. We expect to need to optimize per-engine, with the same structural floor as the baseline and engine-specific signals layered on top.
Res AI ships structurally complete articles to your CMS from day one, the kind that get cited by AI engines before traditional SEO produces clicks. Strategy, Citation, and Content agents audit competitor queries, source third-party stats, and deploy FAQs and comparison tables through one natural language command.