
Hi Everyone,


I run a niche digital agency that works with a large number of clients in a single industry. Since each client is fairly low volume, a/b testing for any one of them moves very slowly. Our specialty is PPC, but having taken on a lot of clients recently, I’m adding a landing page service to provide a little more value. I’d like to make strides towards optimization as quickly as possible.


For the good of everyone, I’m thinking about running a/b tests across all the companies. I’d like to make a single page that can be customized for each individual company (customization would only include location information and logo). Then, I’d like to a/b test it using aggregated data to find what works best for the industry.
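To make that concrete, here’s a rough sketch of the kind of setup I have in mind (the client names, fields, and paths below are made up purely for illustration):

```python
# One shared page template; only the location info and logo vary per client.
from string import Template

PAGE_TEMPLATE = Template("""
<header>
  <img src="$logo_url" alt="$company_name logo">
  <h1>Serving $city and the surrounding area</h1>
</header>
$shared_body
""")

# Hypothetical client records -- real ones would live in a config file or CMS.
CLIENTS = {
    "client_a": {"company_name": "Client A", "city": "Denver", "logo_url": "/logos/a.png"},
    "client_b": {"company_name": "Client B", "city": "Austin", "logo_url": "/logos/b.png"},
}

def render_page(client_id: str, shared_body: str) -> str:
    """shared_body is the part being a/b tested; the per-client fields stay fixed."""
    return PAGE_TEMPLATE.substitute(shared_body=shared_body, **CLIENTS[client_id])
```

That way a new variant only has to be written once and can be rolled out to every client’s page from the same template.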


Does anybody know a good way to organize these tests? My current thought is to copy the page across domains and manually compile the conversion-rate data for each variant. My concern, however, is that this will become very time-consuming, especially when I need to make changes or add a new variant (since I’d have to make the changes individually for every client).
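For the data side, I was picturing something along these lines (the visitor and conversion counts are invented, and the pooled two-proportion z-test is just one straightforward way to compare variants):

```python
# Pool visitor/conversion counts per variant across all clients, then compare
# the pooled conversion rates with a two-proportion z-test.
from collections import defaultdict
from math import sqrt

# (client_id, variant) -> (visitors, conversions); numbers below are made up.
RESULTS = {
    ("client_a", "A"): (410, 29),
    ("client_a", "B"): (398, 41),
    ("client_b", "A"): (275, 18),
    ("client_b", "B"): (290, 25),
}

pooled = defaultdict(lambda: [0, 0])  # variant -> [visitors, conversions]
for (_client, variant), (visitors, conversions) in RESULTS.items():
    pooled[variant][0] += visitors
    pooled[variant][1] += conversions

(n_a, c_a), (n_b, c_b) = pooled["A"], pooled["B"]
p_a, p_b = c_a / n_a, c_b / n_b
p_all = (c_a + c_b) / (n_a + n_b)
se = sqrt(p_all * (1 - p_all) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"Variant A: {p_a:.2%}  Variant B: {p_b:.2%}  z = {z:.2f}")
```

Keeping the client_id in the raw data (rather than pooling immediately) would also leave the door open to per-client segmentation later on.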


Thanks in advance for your help!

Hi @BrandonB and welcome to the Community 😊


I’d say that this is a slippery slope. Every client will most likely have varying degrees of difference in target groups and messaging. Every client is unique and requires a unique approach. You CAN run these tests and aggregate the data, but I doubt you will learn anything of value from it. My 5 cents.


Thanks @Finge!


I’d agree with you 100% that in many industries this would be a poor approach, but in this industry the clients all have an identical USP and target group. Trust me, I’m normally the last person who would say that. The only differences come from local variations in culture and demographics. On top of that, the clients are quite low volume (even though each visitor is very valuable). A split test that detects a 10% increase in conversion rate is estimated to take 720 days. For this reason, agencies in this industry don’t a/b test traffic. It never crossed my mind that it could reasonably be done until I onboarded a large number of clients.
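For context on that 720-day figure, it comes from a standard sample-size calculation. Here’s the rough math with assumed inputs (the ~5% baseline conversion rate and ~45 visitors per variant per day are placeholders, not actual client numbers):

```python
# Back-of-envelope duration for a two-proportion test at 95% confidence / 80% power.
from math import ceil

def days_to_significance(baseline_cr, relative_lift, daily_visitors_per_variant,
                         z_alpha=1.96, z_beta=0.84):
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    n_per_variant = ((z_alpha + z_beta) ** 2 *
                     (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
    return ceil(n_per_variant / daily_visitors_per_variant)

print(days_to_significance(0.05, 0.10, 45))       # one low-volume client: roughly two years
print(days_to_significance(0.05, 0.10, 45 * 20))  # same test pooled across 20 clients: about five weeks
```

Pooling roughly 20 similar clients cuts the calendar time by about that same factor, which is the whole appeal of aggregating.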


This leaves the question, would it be better to do split tests for individual clients that take two years to complete, or to aggregate data and make large strides over a year? I’d assume the latter, but if you have any reason to believe that individual tests are more appropriate in this situation, I’m all ears.


The way I see it, if local cultural differences are the key differentiator, this isn’t much different from running a nationwide split test for a larger corporation. Yes, the results could technically be more accurate if the corporation segmented its audience by locality to adjust for differences in culture and demographics, and maybe it will at some point. But at an early stage like this, the most practical thing to do is make adjustments at the macro level.


By doing this, I hope to learn how to convert better in this industry within a reasonable amount of time. Even though what I learn won’t apply identically to every client, it should improve things on average, which I think would be a big win. Perhaps after some large strides are made across the board, I can adjust things for individual clients as needed and run individual tests on the highest-volume ones.

