Help! My new page rocks sometimes but not others. Why?


OK fellow Unbouncers… I’m out of ideas… my team is out of ideas. Time to see if you can help solve this mystery.

Back Story
My hypothesis is that moving the form to a second page will increase conversions by asking for a smaller commitment (clicking a button) before a bigger one (filling out a form). To run this test, we needed a completely new landing page design.

Moving the form to a second page will also lead to faster A/B testing, because I can test the page on click-through rate, which produces more “conversions” than form submissions do. For instance, instead of testing 1,000 visitors / 10 conversions (form submissions), the same test would be 1,000 visitors / 100 conversions (button clicks). If you do the math, you reach statistical significance faster with the 100 conversions, even though it’s the same number of visitors and the same relative difference between variations.
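
To put rough numbers on that, here’s a quick sketch (illustrative figures and a plain two-proportion z-test, not Unbounce’s own stats engine) of why the click metric gets to significance faster:

```python
# Illustrative sketch (my own round numbers, plain two-proportion z-test):
# 1,000 visitors per variation and the same +30% relative lift, measured
# first as form submissions (1% baseline) and then as button clicks (10%).
from math import sqrt
from scipy.stats import norm

def two_tailed_p(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test (pooled), two-tailed p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return 2 * norm.sf(abs((p_b - p_a) / se))

# Form submissions: 10 vs. 13 conversions -> p is about 0.53, nowhere near significant
print(two_tailed_p(10, 1000, 13, 1000))

# Button clicks: 100 vs. 130 conversions -> p is about 0.035, significant at 95%
print(two_tailed_p(100, 1000, 130, 1000))
```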

Testing
For the programs I tested it on (4 of them), I got some great increases at 95%+ confidence. For example, the following A/B test for our Respiratory Therapy program resulted in a 3.7x increase! Dang!

  • Respiratory Therapy - Form on Page (360 Visitors / 15 Conversions)

  • Respiratory Therapy - Click through to Form (348 Visitors / 53 Conversions)
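
As a rough cross-check of those numbers (using a plain two-proportion z-test, which may not be exactly what Unbounce’s stats do under the hood), the difference holds up well past 95%:

```python
# Rough cross-check of the Respiratory Therapy numbers above with a plain
# two-proportion z-test (Unbounce's own calculation may differ).
from statsmodels.stats.proportion import proportions_ztest

conversions = [53, 15]   # click-through variant, form-on-page control
visitors = [348, 360]

z, p = proportions_ztest(count=conversions, nobs=visitors)
print(z, p)  # z is about 5.0, p far below 0.05, so well past 95% confidence
```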

Mystery
I thought this was a clear win for us and started deploying it to other pages without testing. Most of them showed a clear lift in conversions. But one page in particular had a huge drop in conversion rate (it roughly halved).

  • Medical Assisting - Form on Page (~6% Conversion Rate) 

  • Medical Assisting - Click through to Form (~3% Conversion Rate)
The content is almost identical (except for the addition of the benefits section and the second image). Medical Assisting is an entry-level position in the healthcare industry. It’s in high demand, but it doesn’t require as much education as our Respiratory Therapy program does.

Can anyone figure out what could be going on?


5 replies

Hey Philip, I feel your pain. Confidence levels have gotten the best of me in the past. I felt pretty good about some 95% confidence levels on previous A/B tests and was very confused when the results suddenly changed at the end of a testing period or, even worse, in production.

Lesson learned: 95% is not always 95%.

Situations vary, but here’s what I’ve seen in our tests:

  • A test that becomes conclusive during the work week suddenly tanks on Monday morning. Hypothesis: weekend visitors behave differently on the same landing page.

  • A landing page variant with 95% confidence is opened up to more traffic to generate more conversions but overall CR falls. Hypothesis: different campaigns, keywords, ad copy, and PPC sources are going to convert differently.

  • A winning variant seems to dominate based on email lead conversion rates but fails miserably at generating phone calls. Some clients get 75-80% of their leads over the phone so we have to use different tracking numbers on each variant to get the full conversion picture.

It’s a long-winded answer, but my suggestion is to keep testing. Your sample sizes are still pretty small, so more data is needed to smooth out differences in CR across visitor segments (mobile, ad copy, etc.) and cohorts (weekend vs. weekday, time of day, etc.).
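
To put a rough number on “pretty small”, here’s a quick power calculation (illustrative only, using statsmodels’ normal-approximation solver with the ~6% vs. ~3% rates from your Medical Assisting pages and conventional alpha/power choices):

```python
# Rough sample-size sketch: visitors per variation needed to detect ~6% vs. ~3%
# conversion rates at alpha = 0.05 with 80% power, using statsmodels'
# normal-approximation power solver. The rates come from the thread; the
# alpha and power values are just conventional defaults.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.06, 0.03)   # Cohen's h for 6% vs. 3%
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0,
    alternative='two-sided',
)
print(round(n_per_variation))  # on the order of 700+ visitors per variation
```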

Good luck!

Hi Phillip,

First off, I just want to say I think this is a really good idea, and it’s good to see that the tests and the roll-out to most of your campaigns went well. Fingers crossed it keeps working for a long time; we all know it’s an ever-changing landscape with design and human behavior.

Traffic Mindset
Andrew touched on a few points relating to traffic sources, campaigns, keywords and ad copy. I’d agree with that: perhaps the target audience, or the “mindset of your traffic”, is different and needs to be treated differently. To that end, I would always A/B test each individual campaign, because there are differences that stem from so many variables.

Also, you mentioned that the landing page that bombed is for an entry-level position. Maybe it didn’t need the extra commitment/reinforcement that the other training programs did, and hence didn’t benefit from the click-through? The click-through in this case could actually be an extra step and a burden on the user, hurting conversions. Just a thought.

Page Differences
The other thing that jumps out at me is that, of the example pages you gave, the one that works includes a video while the one that doesn’t has just an image. We know the power of video is immense, so maybe that’s one of the key factors. The video’s starting frame is also branded with the phone number and logo, which gives it more cohesion with the page.

I also personally feel the logo on the page that is converting better is nicer and gives a better impression of a professional training college. I’d stick with that one if it were me.

The section detailing “Professional Certifications” is missing on the underperforming page, and I think that section is more important than it’s perhaps being given credit for. It’s the kind of “what am I going to get out of this at the end of the day” closing statement. Something like that would benefit the other page as well.

The non-converting page also states under the click-through button that “Program credential levels vary by campus”. That’s not going to help; maybe it could be explained in the missing “Professional Certifications” section and removed from under the button. It just adds doubt too early in the thought process.

Channel Segregation
I’ve had this debate on a number of occasions, both in house and externally. We all know different triggers work on different traffic sources. I’m not sure if you already do this, but I have had good experience with creating separate landing pages with slightly amended copy for social & organic versus PPC/paid traffic. It’s a bit more work, but it does help to increase conversions that little bit extra. It may be worth analysing which traffic sources are converting and which ones are bombing: it might be that on the successful pages all the traffic is converting well, while on the one that doesn’t work one particular segment just isn’t responding to the message.
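
A breakdown like that doesn’t need anything fancy. Something along these lines would do it (a rough sketch with made-up file and column names, assuming you can export one row per visit with a traffic source and a converted flag):

```python
# Hypothetical sketch: conversion rate by traffic source for one landing page.
# Assumes an export with one row per visit, a "source" column (ppc, organic,
# social, ...) and a 0/1 "converted" column; the file and column names here
# are made up for illustration.
import pandas as pd

visits = pd.read_csv("medical_assisting_visits.csv")

by_source = (
    visits.groupby("source")["converted"]
          .agg(visitors="count", conversions="sum", conversion_rate="mean")
          .sort_values("conversion_rate")
)
print(by_source)  # the segment at the top (lowest rate) is the one bombing
```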

And lastly…
Thank you for sharing this with us. I’m actually going to take this idea of click-through to forms and run a test for one of our clients, because I think you’ve taken a good sales concept (keep getting your client to say yes, then close them!) and put it into a landing page. I’d love to hear how things go, so please do keep us updated, and fingers crossed that something one of us says here helps to get this rogue page converting like a monster!

All the best

Stuart.

Thank you for your response! This is great feedback!

I’ve personally come a long way in understanding the science behind A/B testing. My formal education is in Environmental Science. We used probability analysis extensively, so you’d think I’d have mastered it. But I’m still finding ways that I’m performing tests incorrectly. Just recently I realized my favorite significance-testing tool was using a one-tailed test when I should have been using a two-tailed test.
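
For anyone else who’s been caught out by this, here’s roughly what the difference looks like on made-up numbers (a sketch using statsmodels, not the specific tool I was referring to):

```python
# Made-up example: 1,000 visitors per variation, 125 vs. 100 conversions.
from statsmodels.stats.proportion import proportions_ztest

count = [125, 100]    # variant conversions, control conversions
nobs = [1000, 1000]   # visitors per variation

_, p_one = proportions_ztest(count, nobs, alternative='larger')     # one-tailed
_, p_two = proportions_ztest(count, nobs, alternative='two-sided')  # two-tailed

print(p_one)  # about 0.038 -> looks significant at 95%
print(p_two)  # about 0.077 -> not significant yet
```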

This video helped me tremendously! The whole video is good, but he starts talking about A/B testing 8:30 into it.

Great resource and I am glad that I stumbled back to page 9 of the archives to find this!!!  🙂

Solid feedback, Stuart. Sometimes it’s hard to tell whether a click-through page is building click synergy or click fatigue…
