Ever since Google launched AI Mode, I’ve had two questions on my mind:
- How do we ensure our content gets shown in AI results?
- How do we figure out what works when AI search is still largely a mystery?
While there’s a lot of advice online, much of it is speculative at best. Everyone has hypotheses about AI optimization, but few are running actual experiments to see what works.
One idea is optimizing for query fan-out. Query fan-out is a process where AI systems (notably Google AI Mode and ChatGPT search) take your original search query and break it down into multiple sub-queries, then gather information from various sources to build a comprehensive response.
This illustration perfectly depicts the query fan-out process.

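To make the concept concrete, here’s a rough Python sketch of what fan-out looks like. The sub-queries and categories below are illustrative examples I made up; the real sub-queries are generated dynamically by each platform and aren’t visible to us.

```python
# Conceptual sketch of query fan-out (illustrative only; the actual
# sub-queries are generated dynamically by each AI platform).

original_query = "how to do a technical SEO audit"

# An AI system might expand the query into sub-queries like these,
# each potentially answered from a different source:
fan_out = {
    "related": "what tools are needed for a technical SEO audit",
    "implicit": "how long does a technical SEO audit take",
    "comparative": "technical SEO audit vs on-page SEO audit",
    "procedural": "technical SEO audit checklist step by step",
}

for query_type, sub_query in fan_out.items():
    print(f"{query_type:>12}: {sub_query}")
```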
The optimization strategy is simple: Identify the sub-queries around a particular topic and then make sure your page includes content targeting those queries. If you do that, you have better odds of being selected in AI answers (at least in theory).
So, I decided to run a small test to see if this actually works. I selected four articles from our blog, had them updated by a team member to address relevant fan-out queries, and tracked our AI visibility for one month.
The results? Well, they reveal some interesting insights about AI optimization.
Here are the key takeaways from our experiment:
Key Takeaways
- Optimizing for fan-out queries significantly increased AI citations: In our small sample of four articles, we more than doubled citations in tracked prompts, from two to five. While the absolute numbers are small given the sample size, citations were the main metric we aimed to influence, and the increase is directionally indicative of success.
- AI citations can be unpredictable: I checked in periodically during the month, and at one point, our citations went as high as nine before dropping back down to five. There have been reports of ChatGPT drastically reducing citations for brands and publishers across the board. It just shows how quickly things can change when you’re relying on AI platforms for visibility.
- Our brand mentions dropped for tracked queries, and so did everyone else’s: Overall, we noticed fewer brand references appearing in AI responses to the queries we were monitoring. This affected our share of voice, brand visibility, and total mention metrics. Other brands experienced similar drops. This appears to be a separate issue from the citation changes, one related to how AI platforms handled brand mentions during our experiment period.
We’ll discuss the results of this experiment in detail later in the article. First, let me walk you through exactly how we conducted it, so you can understand our methodology and potentially replicate or improve upon our approach.
How We Ran the Query Fan-Out Experiment
Here’s how we set up and ran our experiment:
- I selected four articles from our blog
- For each selected article, I researched 10 to 20 fan-out queries
- I partnered with Tushar Pol, a Senior Content Writer on our team, to help me execute the content changes for this experiment. He edited the content in our articles to address as many fan-out queries as possible.
- I set up tracking for the fan-out queries so we could measure AI visibility before and after. I used the Semrush Enterprise AIO platform for this. We were primarily interested in seeing how our content changes impacted visibility in Google’s AI Mode, but our optimizations could also boost visibility on other platforms like ChatGPT Search as a side effect, so I tracked performance there as well.
Let’s take a closer look at each of these steps.
1. Selecting Articles
I had specific criteria in mind when selecting the articles for this experiment.
First, I wanted articles with stable performance over the past couple of months. Traffic has been volatile lately, and testing on volatile pages would make it impossible to tell whether any changes in performance were due to our edits or just normal fluctuations.
Second, I avoided articles that were core to our business. This was an experiment, after all. If something went wrong, I didn’t want to negatively affect our visibility for critical topics.
After reviewing our content library, I found four perfect candidates:
- A guide on how to create a marketing calendar
- An explainer on what subdomains are and how they work
- A comprehensive guide on Google keyword rankings
- A detailed walkthrough on how to conduct technical SEO audits
2. Researching Fan-Out Queries
Next, I moved on to researching fan-out queries for each article.
There’s currently no way to know which fan-out queries (related questions and follow-ups) Google will use when someone interacts with AI Mode, since these are generated dynamically and can differ with each search.
So, I had to rely on synthetic queries. These are AI-generated queries that approximate what Google might generate when people search in AI Mode.
I decided to use two tools to generate these queries.
First, I used Screaming Frog. This tool let me run a custom script against each article. The script analyzes the page content, identifies the main keyword it targets, and then performs its own version of query fan-out to suggest related queries.

Unfortunately, the data isn’t properly visible within Screaming Frog: everything got crammed into a single cell. So, I had to copy and paste the entire cell contents into a separate Google Sheet.

Now I could actually see the data.
The nice thing is that the script also checks whether our content already addresses these queries. If some queries were already addressed, we could skip them. But if there were new queries, we needed to add new content for them.
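For illustration, a naive version of that kind of coverage check might look like the sketch below. The word-overlap heuristic and the sample text are my own stand-ins; the real script may work quite differently (e.g., with embeddings or an LLM).

```python
import re

def covers_query(page_text: str, query: str, threshold: float = 0.7) -> bool:
    """Naive check: does the page contain most of the query's meaningful words?"""
    stopwords = {"a", "an", "the", "and", "or", "to", "of", "in", "is", "for", "how", "what"}
    words = [w for w in re.findall(r"[a-z0-9]+", query.lower()) if w not in stopwords]
    page = page_text.lower()
    hits = sum(1 for w in words if w in page)
    return bool(words) and hits / len(words) >= threshold

# Stand-in for the article's content
article_text = """A technical SEO audit is the process of checking a site's
crawlability, indexability, and performance to find issues that hurt rankings."""

queries = [
    "what is a technical seo audit",
    "technical seo audit checklist step by step",
]

# Flag the queries the article doesn't cover yet
todo = [q for q in queries if not covers_query(article_text, q)]
print("Queries to address:", todo)
```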
Next, I used Qforia, a free tool created by Mike King and his team at iPullRank.
The reason I used another tool is simple: Different tools often surface different queries. By casting a wider net, I’d have a more comprehensive list of potential fan-out queries.
Plus, if certain queries are common across both tools, that’s a signal that addressing them may be important.
The way Qforia works is simple: Enter the article’s main keyword in the given field, add a Gemini API key, select the search mode (either Google AI Mode or AI Overview), and run the analysis. The tool will generate related queries for you.

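Since Qforia runs on your own Gemini API key, you can sketch the same idea directly against the Gemini API. Here’s a minimal, hypothetical version using the google-generativeai Python SDK; the prompt is my own guess at the general approach, not Qforia’s actual prompt.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # the same kind of key Qforia asks for
model = genai.GenerativeModel("gemini-1.5-flash")

keyword = "technical seo audit"
mode = "Google AI Mode"  # or "AI Overview", mirroring Qforia's search mode setting

# My own guess at the kind of prompt such a tool might use (not Qforia's actual prompt)
prompt = (
    f"Simulate the query fan-out that {mode} might perform for the search "
    f"'{keyword}'. List 15 sub-queries the searcher's intent could expand into, "
    f"covering related, comparative, and follow-up questions. One per line."
)

response = model.generate_content(prompt)
fan_out_queries = [line.strip("- ").strip() for line in response.text.splitlines() if line.strip()]
print("\n".join(fan_out_queries))
```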
After running the analysis for each article, I saved the results in the same Google Sheet.
3. Updating the Articles
With a spreadsheet full of fan-out queries, it was time to actually update our articles. This is where Tushar stepped in.
My instructions were simple:
Check the fan-out queries for each article and address those that weren’t already covered and were feasible to add. If some queries felt like they were beyond the article’s scope, it was OK to skip them and move on.
I also told Tushar that including the queries verbatim wasn’t always necessary. As long as we were answering the question posed by the query, the exact wording didn’t matter as much. The goal was making sure our content included what readers were actually looking for.
Sometimes, addressing a query meant making small tweaks, just adding a sentence or two to existing content. Other times, it required creating entirely new sections.
For example, one of the fan-out queries for our article about doing a technical SEO audit was: “difference between technical SEO audit and on-page SEO audit.”
We could’ve addressed this query in many ways, but one smart option was to make a comparison right after we define what a technical SEO audit is.

Sometimes, it wasn’t easy (or even possible) to integrate queries naturally into the existing content. In those cases, we addressed them by creating a new FAQ section and covering multiple fan-out queries there.
Here’s an example:

Over the course of one week, we updated all four articles on our list. These articles didn’t go through our normal editorial review process. We moved fast. But that was intentional, given this was an experiment and not a regular content update.
4. Setting Up Tracking
Before we pushed the updates live, I recorded each article’s current performance to establish a baseline for comparison. This way, we’d be able to tell whether the query fan-out optimization actually improved our AI visibility.
I used our Enterprise AIO platform to track the results. I created a new project in the tool and plugged in all the queries we were targeting. The tool then began measuring our current visibility in Google AI Mode and ChatGPT.

Here’s what performance looked like at the start of this experiment:
- Citations: This measures how many times our pages were cited in AI responses. Initially, only two out of our four articles were getting cited at least once.
- Total mentions: This metric shows the ratio of queries for which our brand was directly mentioned in the AI response. That ratio was 18/33, meaning we were being mentioned for 18 out of 33 tracked queries.
- Share of voice: This is a weighted metric that considers both brand position and mention frequency across tracked AI queries. Our score was 23.4%, which indicated we were present in some responses but not in all of them, or not in the lead positions. (I sketch one way a metric like this could be computed below.)
- Brand visibility: This told us what percentage of prompt responses mentioned our brand at least once, regardless of position.

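To make these metrics concrete, here’s roughly how numbers like these could be computed from raw tracking data. The data below is made up, and the 1/position weighting for share of voice is my assumption for illustration; Semrush’s actual formula may differ.

```python
# Hypothetical raw tracking data: for each tracked query, the ordered list
# of brands mentioned in the AI response (empty list = no brands mentioned).
responses = {
    "what is a technical seo audit": ["Semrush", "Competitor A"],
    "how to build a marketing calendar": ["Competitor B"],
    "what is a subdomain": [],
    # ...one entry per tracked query (33 in our experiment)
}

brand = "Semrush"
total_queries = len(responses)

# Total mentions: queries where the brand appears at all (we started at 18/33)
mentions = sum(brand in brands for brands in responses.values())

# Share of voice: weight each mention by its position in the response.
# The 1/position decay is an assumption, not Semrush's actual formula.
sov = sum(
    1 / (brands.index(brand) + 1)
    for brands in responses.values()
    if brand in brands
) / total_queries

print(f"Total mentions: {mentions}/{total_queries}")
print(f"Share of voice: {sov:.1%}")
```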
I decided to wait one month before logging metrics again. Then, it was time to conclude our experiment.
The Results: What We Learned About Query Fan-Out Optimization
The results were truly a mixed bag.
First off, some good news: Our total citations increased.
Our four articles went from being cited two times to five times, a 150% increase. For example, one of the edits we made to the technical SEO article (which we showed earlier) got used as a source in the AI response.

Seeing our content cited is exactly what we hoped for, so this is a win (despite the small sample size).
Interestingly, our final results could’ve been more impressive if we had ended our experiment earlier. At one point, we got to nine citations, but then they decreased when ChatGPT significantly reduced citations for all brands.
This just shows how unpredictable AI platforms can be, and that factors completely outside your control can influence your visibility.
But what about the other metrics we tracked?
Our share of voice went down from 23.4% to 20.0%, brand visibility fell from 13.6% to 10.6%, and our brand mentions dropped from 18 to 10.
According to our data, we’re not the only ones who saw declines in brand metrics. Here’s a chart showing how many brands’ share of voice went down at the same time.

This happened because AI platforms mentioned fewer brand names overall when generating responses to our tracked queries. It was a completely different issue from the citation fluctuations I mentioned earlier.
Considering the external factors, I believe our optimization efforts performed better than the data shows. We managed to increase our citations despite the forces working against us.
So, now the question is:
Does Query Fan-Out Optimization Work?
Based on what we learned in our experiment, I’d say yes, but with a big asterisk.
Query fan-out optimization can help you get more citations, which is valuable. But it’s hard to drive predictable growth when things are this volatile. Keep that in mind when you’re optimizing for AI.
If you’re interested in learning more about AI SEO, keep an eye out for the new content we regularly publish on our blog. Here are some articles you should check out next: