On February 9, 2026, OpenAI began showing ads inside ChatGPT, and more than 600 advertisers signed up. OpenAI is charging $60 CPM, triple Meta’s rate, and expanding internationally, with a self-serve ad platform on the way.
Despite that widely cited headline, there’s a deeper story in the data that matters more than the ads themselves.
When someone asks ChatGPT a question, marketers tend to focus on two categories of queries. The first is brand queries: questions about a specific company or product. Think “Is HubSpot worth it?” “Salesforce vs. Pipedrive” or “Does Notion work for project management?” These are the questions closest to a buying decision, and ChatGPT’s answers typically include citation links to external sources like product pages, reviews and comparison articles.
The second is category queries, or broader questions about a topic or product category: “Best CRM software for startups.” “How to improve email deliverability.” These tend to pull from a wider range of sources, and they sit earlier in the buyer’s journey.
Both types of queries generate citations, and those citations are the primary way brands earn visibility and traffic from AI search today.
But following ChatGPT’s ads launch and a major model update, citation counts on brand queries crashed 41 percent in five weeks. Then they mostly recovered, but the types of sources earning those citations changed, and that shift has stuck.
5 Takeaways From the ChatGPT Ads Shift
- Product pages are doing heavier lifting than they used to.
- Educational content is earning less on brand queries.
- Reviews matter more than they did four months ago.
- Citability is as important as rankability.
- Start tracking mentions alongside citations.
What Happened to Citation Counts
We recently looked at AI search responses for roughly 3,000 brands across every major AI engine. For each brand, we tracked a set of branded and category queries and collected the full AI response every time. We then extracted every citation URL, every brand mention and the sentiment data from each answer.
Over a 16-week window from December 2025 through March 2026, that produced more than 170 million AI answers containing more than 500 million individual citation records.
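To make the extraction step concrete, here’s a minimal sketch of how citation URLs can be pulled from a stored answer and rolled up into citations per answer. The markdown-link assumption and the function names are illustrative; the production pipeline handles each engine’s own citation format.

```python
import re
from statistics import mean

# Illustrative pattern: assumes answers embed citations as markdown links.
URL_PATTERN = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def extract_citations(answer_text: str) -> list[str]:
    """Return every cited URL found in one AI answer."""
    return URL_PATTERN.findall(answer_text)

def avg_citations_per_answer(answers: list[str]) -> float:
    """Average citation count across a batch of answers."""
    return mean(len(extract_citations(a)) for a in answers)

answers = [
    "HubSpot is a popular CRM ([pricing](https://www.hubspot.com/pricing)) "
    "with strong reviews ([G2](https://www.g2.com/products/hubspot/reviews)).",
    "Notion works well for lightweight project management.",
]
print(avg_citations_per_answer(answers))  # 1.0 for this toy batch
```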
Between mid-January and early March, brand queries on ChatGPT saw average citations per answer fall from 4.95 to 2.96 — a 41 percent decline in five weeks. Category queries dropped from 7.3 to 6.1 over the same window, a meaningful but less severe 16 percent decline.
But the more interesting finding is that, by late March, citation counts largely recovered. Brand queries climbed back to about 4.5 citations per answer, roughly 90 percent of their December baseline. Category queries returned to about 7.0, essentially fully recovered.
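The decline and recovery figures are simple ratios of those per-answer averages. A quick sanity check; the published percentages come from unrounded underlying values, so the rounded inputs land a point or so off:

```python
def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

# Brand queries: 4.95 -> 2.96 at the trough, back to ~4.5 by late March.
print(round(pct_change(4.95, 2.96)))  # -40 on rounded inputs (~41% unrounded)
print(round(4.5 / 4.95 * 100))        # 91, i.e. roughly 90% of baseline
# Category queries: 7.3 -> 6.1 at the trough, back to ~7.0.
print(round(pct_change(7.3, 6.1)))    # -16
```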
If the story were just about citation counts, you could call it a temporary disruption and move on. But citation counts aren’t the whole story.
The Mix Shifted, and It Hasn’t Shifted Back
Even after total citation counts recovered, the types of sources earning those citations changed materially on brand queries. And unlike the count dip, this shift has persisted.
Product Domains Gained Significant Share
Company and product websites went from 55 percent of all brand query citations in December to 63 percent at the trough, and they’ve held at around 62 percent through late March. ChatGPT is increasingly going direct to source. When someone asks about a product, the model is more likely to cite that product’s own website rather than third-party content about it.
Educational Content Lost Ground
Educational domains, the “What is a CRM?” and “How marketing automation works” style of content that marketing teams have invested in for a decade, dropped from 14 percent to under 10 percent of brand query citations. ChatGPT is synthesizing that explanatory information itself and linking to it less.
Review Sites Gained
Review platforms like G2, Capterra and TrustRadius made up one of the few third-party categories that grew citation share during the dip and held it, climbing from 5 percent to about 7 percent. The model treats structured review content as a high-signal source for brand queries.
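Share trends like these can be computed directly from per-citation records. A minimal sketch, assuming a table with one row per citation carrying the collection week, the query type and the cited domain’s category (all column names and values are illustrative):

```python
import pandas as pd

# Toy per-citation records: one row per citation.
citations = pd.DataFrame({
    "week": ["2025-12-08"] * 4 + ["2026-03-02"] * 4,
    "query_type": ["brand"] * 8,
    "domain_category": [
        "product", "product", "educational", "review",
        "product", "product", "product", "review",
    ],
})

brand = citations[citations["query_type"] == "brand"]

# Share of each domain category within each week's brand-query citations.
share = brand.groupby("week")["domain_category"].value_counts(normalize=True)
print(share.unstack(fill_value=0).round(2))
```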
Wix’s recent independent research supports these patterns. Its data shows listicles leading AI citation share at 21.9 percent, ahead of standard articles at 16.7 percent and product pages at 13.7 percent, consistent with the model’s preference for structured, scannable content.
What the Timing Tells Us
The deepest point in our data came three weeks after ChatGPT launched ads and coincided with the release of GPT-5.3 Instant, which OpenAI described as being designed to synthesize information rather than list source URLs.
OpenAI has stated that ads do not influence the answers ChatGPT gives. We take that at face value. Some changes in citation behavior are the result of intentional product decisions, while some are side effects of model updates. And although you can track the data, it’s impossible to know the internal reasoning behind every shift.
The Model Still Reads Brand Content but Links Differently
Earlier this year, our team published research examining how ChatGPT actually finds and selects sources. We analyzed 548,534 retrieved pages across 15,000 prompts and 43,233 total queries.
The key finding is that ChatGPT retrieves far more than it cites. Only 15 percent of retrieved pages made it into the final response. The model reads broadly but cites narrowly, filtering sources based on title alignment, content specificity and clarity.
This tells us that ChatGPT is still finding and reading third-party educational content, media articles and community discussions. It’s just choosing to formally cite them less on brand queries, while citing product domains and review sites more.
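For teams that log their own retrieval data, the retrieved-versus-cited gap is straightforward to measure. A hedged sketch, with illustrative URLs standing in for logged retrieval and citation sets:

```python
# URLs the model retrieved for one query (illustrative stand-ins).
retrieved = {
    "https://example.com/pricing",
    "https://example.com/blog/what-is-a-crm",
    "https://reviews.example.com/product",
    "https://news.example.com/roundup",
}
# URLs the final answer actually cited.
cited = {"https://example.com/pricing"}

citation_rate = len(cited & retrieved) / len(retrieved)
print(f"{citation_rate:.0%} of retrieved pages were cited")  # 25% here; ~15% in our data
```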
Brand mentions per answer actually increased over the same period. The model is discussing brands more frequently in its answers while attaching fewer citation links. This means more awareness, but fewer clicks. That’s a meaningful shift in the value equation for marketers relying on AI search for traffic.
What Marketers Should Do Now
There are five things I’d prioritize based on what we’re seeing.
Product Pages Are Doing Heavier Lifting Than They Used To
Product domains are earning a larger share of brand query citations. That means product pages, pricing pages, comparison content and feature documentation need to be structured for citation alongside conversion. That looks like precise titles, clear answers and scannable structure. These pages are now primary citation assets on brand queries.
Educational Content Is Earning Less on Brand Queries
This doesn’t mean top-of-funnel content is dead, but the return on educational content for AI citation visibility on brand queries is declining. ChatGPT is synthesizing that information itself. If an AI search strategy is built around educational blog posts, the data suggests diversifying toward product-specific and comparison content for brand visibility.
Reviews Matter More Than They Did Four Months Ago
Review platforms are one of the few third-party categories gaining citation share. Investing in a brand’s review presence across G2, Capterra, TrustRadius and Reddit is increasingly important for brand query visibility. This aligns with what Ross Simmonds and others have been saying about the growing weight of community and review signals in AI search.
Citability Is as Important as Rankability
Our research also showed that pages ranking first on Google were cited by ChatGPT at 3.5x the rate of pages outside the top 20, which means Google rankings still very much matter. However, 85 percent of what ChatGPT retrieves never gets cited. The gap between being found and being cited comes down to specificity, structure and clarity.
Start Tracking Mentions Alongside Citations
Brand mentions are rising while citation links consolidate on product domains, which means being named in an AI answer, even without a link, is a real form of visibility. Most marketing teams aren’t measuring this yet, but they should be.
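A minimal sketch of what that tracking can look like, assuming answers embed citations as markdown links; the brand name and domain here are illustrative:

```python
import re

CITATION_PATTERN = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def mention_and_citation_counts(answer: str, brand: str, brand_domain: str):
    """Count brand name mentions and citation links to the brand's domain."""
    cited_urls = CITATION_PATTERN.findall(answer)
    brand_citations = sum(brand_domain in url for url in cited_urls)
    # Strip URLs first so a domain match doesn't double-count as a mention.
    prose = re.sub(r"https?://[^)\s]+", "", answer)
    mentions = len(re.findall(rf"\b{re.escape(brand)}\b", prose, re.IGNORECASE))
    return mentions, brand_citations

answer = (
    "HubSpot is a strong fit for small teams. HubSpot's free tier covers "
    "the basics; see [pricing](https://www.hubspot.com/pricing) for details."
)
print(mention_and_citation_counts(answer, "HubSpot", "hubspot.com"))  # (2, 1)
```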
What This Means Going Forward
ChatGPT ads are here whether we like it or not. And a self-serve ad platform is coming. While citation counts proved more resilient than the initial dip suggested, the rules for what earns a citation changed in the same window and haven’t reverted.
The brands still winning citations are doing many of the things that worked well in SEO. They’re building authoritative pages, earning trust signals and creating content that answers real questions. But they’re also being intentional about structure, engineering pages to be cited, not just published.
What this data shows is that brands shouldn’t focus only on producing more content. They need to be disciplined about making every page citable. The brands doing this work now are building something durable. The opportunity is still wide open, but the window to build organic authority before paid becomes the default is closing.
Appendix: Methodology
Data Source
AirOps continuously monitors AI search responses for approximately 3,000 brands across ChatGPT, Perplexity, Gemini and other major AI engines. For each brand, it tracks a curated set of branded and category queries and collects the full response, extracting every citation URL, brand mention, and sentiment signal.
Analysis Window
December 8, 2025 through March 30, 2026 (16 weeks).
Data Set Scale
More than 170 million AI-generated answers containing more than 500 million individual citation records.
Compositional Bias Control
Tracked brand count grew from approximately 1,850 to 3,100 during this period. To ensure new brands joining the data set didn’t distort the trends, we isolated a same-store cohort of roughly 800 brands tracked continuously for all 16 weeks. All key findings were validated against this cohort. The same-store trends were slightly steeper than the blended averages, confirming the observed patterns are not artifacts of changing composition.
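A minimal sketch of that same-store control, with illustrative column names: keep only brands observed in every week of the window, then recompute the trend on the fixed cohort.

```python
import pandas as pd

# Toy weekly rollup: one row per brand per tracked week.
weekly = pd.DataFrame({
    "brand": ["a", "a", "b", "c", "c"],
    "week":  [1, 2, 1, 1, 2],
    "avg_citations": [5.0, 3.1, 4.8, 5.2, 2.9],
})

n_weeks = weekly["week"].nunique()
weeks_per_brand = weekly.groupby("brand")["week"].nunique()
cohort = weeks_per_brand[weeks_per_brand == n_weeks].index  # brands "a", "c"

same_store = weekly[weekly["brand"].isin(cohort)]
print(same_store.groupby("week")["avg_citations"].mean())
# week 1: 5.10, week 2: 3.00 -- brand "b", tracked for only part of the
# window, no longer skews the trend
```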
Citation Trend Data
Citation trend data was sourced from pre-aggregated fact tables in our analytics database, covering per-answer citation counts, per-answer mention counts, page type classifications, domain category classifications, and question type labels (brand vs category).
Domain Category Share Analysis
Domain category share analysis tracks the proportion of total citations going to each category of domain (products, educational, media, reviews, communities, marketplaces, social, affiliates) across the 16-week window, specifically on brand-type queries.
Retrieval and Fan-Out Research
Retrieval and fan-out research was conducted separately, analyzing 548,534 retrieved pages across 15,000 original prompts and 43,233 total original-plus-fan-out queries. Full methodology for that study is available at airops.com/report.
