Part of our Geotargeting series - Read the full Geotargeting Guide
You set up geo-targeted banners on your site. Denver visitors see Denver imagery, Miami visitors see Miami imagery, everyone else gets the default. It is live. Now how do you know it is actually working?
Gut feeling is not a measurement strategy. You need specific metrics, a testing framework, and a straightforward way to turn percentage gains into revenue numbers. Here is how to measure it.
ConversionWax tracks three distinct metrics per banner. Each one tells you something different, and the gaps between them reveal where visitors drop off.
Page loads. The number of times the page containing your banner was viewed. This is your top-of-funnel number - how many visitors had the opportunity to see your geo-targeted content.
Renders. The number of times the banner was actually displayed in the visitor's viewport. A page load does not guarantee a render. If your banner sits below the fold and the visitor bounces before scrolling to it, the page loaded but the banner never rendered. The gap between page loads and renders tells you whether your banner placement is getting enough visibility.
Clicks. The number of times someone interacted with the banner. This is your conversion signal - the visitor saw the geo-targeted image and acted on it.
Why the distinction matters: If your page loads are high but renders are low, your problem is banner placement, not targeting. If renders are high but clicks are low, your targeting might be right but the imagery is not compelling enough. Each gap points to a different fix.
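The three-metric funnel lends itself to a quick diagnostic. Here is a sketch of that logic - the function name and the thresholds are illustrative placeholders to tune against your own baselines, not anything ConversionWax exposes:

```python
def diagnose_banner(page_loads: int, renders: int, clicks: int) -> str:
    """Point to the weakest stage of the banner funnel.

    Thresholds (50% render rate, 2% click rate) are illustrative;
    calibrate them against your own historical numbers.
    """
    render_rate = renders / page_loads if page_loads else 0.0
    click_rate = clicks / renders if renders else 0.0
    if render_rate < 0.5:
        return f"placement problem: only {render_rate:.0%} of page loads render the banner"
    if click_rate < 0.02:
        return f"creative problem: banner renders but only {click_rate:.1%} click"
    return f"healthy: {render_rate:.0%} render rate, {click_rate:.1%} click-through"

# A banner that loads 10,000 times but renders only 3,800 times
print(diagnose_banner(page_loads=10_000, renders=3_800, clicks=190))
```

The ordering matters: a placement problem masks a creative problem, because visitors who never see the banner cannot click it, so check the render gap first.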
Before you can measure the lift from geotargeting, you need a baseline. What was your default banner's performance before you added location-specific variants?
If you did not track your default banner's metrics before launching, that is fine. You will build the baseline using A/B testing instead. But if you have historical data, pull the click-through rate on your default banner for the markets you are now targeting. That is your "before" number.
The comparison is simple: geo-targeted banner performance versus default banner performance, for the same audience, in the same time period. Everything else - traffic source, page design, offer - stays the same. The only variable is the imagery.
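With everything else held constant, the before/after comparison reduces to a single formula - relative lift over the baseline click-through rate. A minimal sketch:

```python
def relative_lift(baseline_ctr: float, variant_ctr: float) -> float:
    """Relative lift of the geo-targeted banner over the default baseline."""
    if baseline_ctr <= 0:
        raise ValueError("baseline CTR must be positive")
    return (variant_ctr - baseline_ctr) / baseline_ctr

# e.g. a default conversion rate of 2.5% rising to 2.8% with geo-targeting
print(f"{relative_lift(0.025, 0.028):.0%} lift")  # -> 12% lift
```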
The cleanest way to measure geotargeting ROI is a split test. ConversionWax has built-in A/B testing that automatically splits traffic between two banner versions.
Here is the setup:
Version A: Your geo-targeted banner. Denver visitors see Denver imagery.
Version B: Your generic default banner. Denver visitors see the same nationwide image everyone else sees.
Both versions run simultaneously for the same audience. Traffic is split automatically. After enough data accumulates, you compare render rates, click-through rates, and downstream conversions for each version.
The result is a clean comparison. "Denver visitors who saw Denver imagery clicked through at 4.2%. Denver visitors who saw the generic banner clicked through at 3.1%. That is a 35% lift from geotargeting."
Without A/B testing, you are stuck comparing different time periods (before and after launch) or comparing different markets to each other. Both methods introduce confounding variables. Same-market, same-time, split-traffic testing removes the noise.
Testing timeline: Run each A/B test for at least two weeks. You need enough traffic per variant to avoid noise. For most sites, two to four weeks of data gives you a reliable signal. High-traffic sites can draw conclusions faster.
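One way to check that a measured gap is real and not noise is a standard two-proportion z-test on the per-variant click counts. This is textbook statistics layered on top of the exported numbers, not a ConversionWax feature, and the 5,000-view counts below are assumptions chosen to match the 4.2% vs 3.1% Denver example:

```python
from math import sqrt, erf

def ab_significance(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Two-proportion z-test: is variant A's rate genuinely above variant B's?

    Returns (z, one_sided_p). Standard statistics, not platform code.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    one_sided_p = 0.5 * (1 - erf(z / sqrt(2)))  # P(Z > z) via the error function
    return z, one_sided_p

# Hypothetical counts: 4.2% CTR on 5,000 geo-targeted renders
# vs 3.1% on 5,000 generic renders
z, p = ab_significance(210, 5_000, 155, 5_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p well below 0.05 -> lift is unlikely to be noise
```

At roughly 5,000 renders per variant, a gap of this size clears conventional significance comfortably, which is why a two-to-four-week window is usually enough for mid-traffic sites.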
How closely you can monitor performance depends on your ConversionWax plan:
Starter ($19/month): Daily analytics. Good enough to validate whether geotargeting works over a multi-week test. You can see day-over-day trends and compare A/B results.
Growth ($49/month): Daily and hourly analytics. Lets you spot time-of-day patterns. Maybe Denver imagery performs best during business hours when local professionals are browsing. Hourly data reveals these patterns.
Professional ($149/month) and Premier ($299/month): Daily, hourly, and 5-minute analytics. For high-traffic sites running time-sensitive promotions - flash sales, event-driven campaigns - where you need near-real-time feedback on whether a geo-targeted variant is performing.
For most geotargeting ROI measurement, daily data is sufficient. You are measuring a sustained lift over weeks, not minute-by-minute fluctuations.
Here is how to convert a percentage lift into a dollar number.
Example: Ecommerce site
Monthly revenue: $100,000
Current conversion rate: 2.5%
Geo-targeted conversion rate: 2.8% (a 12% lift)
Additional monthly revenue: $12,000
A 12% conversion lift on a $100K/month site produces $12,000 in additional monthly revenue. That is $144,000 per year from the same traffic you are already paying for. At that rate, the ConversionWax subscription pays for itself within the first day of the month.
The math scales linearly. A $50K/month site with the same 12% lift gains $6,000/month. A $500K/month site gains $60,000/month. The lift percentage stays roughly consistent - what changes is the revenue it multiplies against.
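The dollar conversion above can be written as one line of arithmetic, assuming revenue scales linearly with conversion rate (same traffic, same average order value):

```python
def added_revenue(monthly_revenue: float, base_cr: float, new_cr: float) -> float:
    """Monthly revenue added by a conversion-rate lift, assuming revenue
    scales linearly with conversion rate (same traffic, same order value)."""
    return monthly_revenue * (new_cr - base_cr) / base_cr

# The article's example: $100K/month site, 2.5% -> 2.8% conversion
gain = added_revenue(100_000, 0.025, 0.028)
print(f"${gain:,.0f}/month, ${gain * 12:,.0f}/year")  # -> $12,000/month, $144,000/year
```

Swap in $50,000 or $500,000 as the first argument to reproduce the scaling examples above.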
Geo-targeted image personalization typically produces an 8-35% conversion lift. That range depends on industry, how well the imagery matches the market, and how different the geo-targeted version is from the default. The closer the match between imagery and visitor context, the higher the lift.
Once you have baseline ROI data, here is where to focus:
Which locations respond best. Not every market will produce the same lift. Some cities or regions will convert significantly better with local imagery while others barely outperform the default. Double down on the markets that respond and cut the variants that do not.
Which imagery styles win. A city skyline might outperform a local landmark. Lifestyle photography might beat product photography in some markets. Run A/B tests on the image content itself, not just geo-targeted versus generic.
Which targeting combinations perform. Location alone is one signal. Location plus campaign source (via URL variable targeting) is more specific. Location plus time of day is more contextual. Test layered rules to find the combinations that produce the highest lift.
Viewport-level performance. Check whether your geo-targeted banners perform differently on desktop versus mobile. The render-to-click ratio might differ by viewport, which means your mobile variant might need different imagery than your desktop variant - even for the same location.
Expansion opportunities. Once you have three or four markets producing consistent ROI, look at your traffic data for the next tier of locations. Which unserved markets have enough traffic to justify a geo-targeted variant? The marginal cost of adding a new location is low - the main investment is creating the image assets.
Keep it simple: Measure the lift per market. Multiply by revenue. Compare to the cost. If the additional revenue exceeds the tool cost - and it almost always will, even at modest lift percentages - expand to the next market. Repeat.
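That measure-multiply-compare loop can be sketched per market. All the figures here - the market revenue split, the conversion rates, and the $49/month Growth-plan cost - are made-up placeholders to show the decision shape:

```python
MONTHLY_TOOL_COST = 49.0  # illustrative: the Growth plan

markets = {
    # market: (monthly revenue attributed to it, baseline CR, geo-targeted CR)
    "Denver": (30_000, 0.025, 0.029),
    "Miami":  (22_000, 0.025, 0.026),
    "Austin": (12_000, 0.025, 0.0249),
}

def keep_or_cut(revenue: float, base_cr: float, geo_cr: float, cost_share: float):
    """Return ('keep' | 'cut', monthly dollar gain) for one market's variant."""
    gain = revenue * (geo_cr - base_cr) / base_cr
    return ("keep" if gain > cost_share else "cut"), gain

# Split the tool cost evenly across active variants for a rough per-market bar
cost_share = MONTHLY_TOOL_COST / len(markets)
for market, (rev, base, geo) in markets.items():
    verdict, gain = keep_or_cut(rev, base, geo, cost_share)
    print(f"{market}: ${gain:,.0f}/month extra -> {verdict}")
```

In this toy data, Denver and Miami clear the bar easily while Austin's variant slightly underperforms the default and gets cut - exactly the "double down on responders, cut the rest" rule from above.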
Read the full geotargeting guide for more on setup and strategy.