
Question: A/B Tests Statistical Significance

Discussion in 'Facebook' started by sau9696, May 10, 2017.

  1. sau9696

    sau9696 Newbie

    Joined:
    Jul 20, 2015
    Messages:
    16
    Likes Received:
    0
    Quick question,

How many impressions/reach/clicks per ad do you need before your test can be called statistically significant?

Struggling to find a practical answer online. I tried using Bayesian A/B test calculators and other tools, and read articles saying you need hundreds of thousands in traffic, which is likely $xx,xxx minimum in ad spend. Other people say to use the chi-square test, which I learnt in stats before but have mostly forgotten now. I'll learn it again if it's necessary though, shouldn't take too long.
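For what it's worth, the chi-square test on ad results fits in a few lines without needing a stats package. Here's a rough sketch for a 2x2 table of clicks vs. non-clicks; the 120/5000 and 160/5000 click counts are made up purely for illustration, not from any real campaign:

```python
import math

def chi2_2x2(a_clicks, a_n, b_clicks, b_n):
    """Pearson chi-square test for a 2x2 table (clicks vs. non-clicks per ad)."""
    observed = [[a_clicks, a_n - a_clicks],
                [b_clicks, b_n - b_clicks]]
    total = a_n + b_n
    row_totals = [a_n, b_n]
    col_totals = [a_clicks + b_clicks, total - a_clicks - b_clicks]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed[i][j] - expected) ** 2 / expected
    # For 1 degree of freedom the p-value has a closed form via erfc
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical numbers: Ad A gets 120 clicks out of 5000 impressions,
# Ad B gets 160 out of 5000
chi2, p = chi2_2x2(120, 5000, 160, 5000)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# p below 0.05 would conventionally be called significant
```

The usual caveat applies: the chi-square approximation assumes the expected count in every cell is at least ~5, which is easily met at these impression volumes.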

Obviously I'm starting out in this internet money game so I can't afford massive spending, but I want accurate data so I can implement and execute quickly.

    If you need any more info about my ads/business/etc to have a better answer feel free to ask. Just not giving away my actual site URL.

    Thanks guys.
     
  2. AneaKr

    AneaKr Jr. VIP Jr. VIP

    Joined:
    Oct 15, 2014
    Messages:
    159
    Likes Received:
    27
    Occupation:
    SEM Specialist
    Home Page:
Usually, you don't need hundreds of thousands in traffic to interpret the results of your A/B test.

    It depends upon your goal. 5000 impressions can be more than enough to understand which ad has a higher CTR. 5000 clicks can be more than enough to see which ad has a higher conversion rate.

Also, it's good to have the experiment running for at least 2 weeks.
     
  3. sau9696

    sau9696 Newbie

    Joined:
    Jul 20, 2015
    Messages:
    16
    Likes Received:
    0
    Thanks for the reply - I can definitely afford 5000 impressions now but 5000 clicks probably not.

    I guess I will also have to trust my gut when I look at the numbers.

    2 weeks is quite a bit longer than my previous tests, usually I run 3-5 days and then kill/scale/adjust... will try longer test periods for sure.
     
  4. sirmeep

    sirmeep Registered Member

    Joined:
    Jan 16, 2017
    Messages:
    91
    Likes Received:
    37
    Gender:
    Male
    Occupation:
    Professional Muppet
    Location:
    Ehhhh.... Somewhere Awesome
    It depends on the variance between the results. The higher the difference between the two results, the lower your sample size needs to be.

There are a couple of calculators out there that show this... for instance: https://vwo.com/ab-split-test-significance-calculator/

Also, 'run for two weeks' addresses a slightly different issue: it's to make sure there isn't another variable that is unexpectedly influencing the results. For instance, Version A gets better results on Fridays than Version B, and your 3-5 day test happens to run over a Friday.
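The "bigger difference needs a smaller sample" point can be made concrete with the standard sample-size formula for comparing two proportions. A rough sketch below; the 2.0% vs 2.2% and 2.0% vs 3.0% CTRs are invented for illustration, and the z-values are fixed at 95% confidence (two-sided) and 80% power:

```python
import math

def n_per_variant(p1, p2):
    """Approximate sample size per variant for a two-proportion test,
    at 95% confidence (two-sided) and 80% power, normal approximation."""
    z_alpha, z_beta = 1.96, 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Tiny lift (2.0% -> 2.2% CTR): needs tens of thousands of impressions per ad
print(n_per_variant(0.020, 0.022))

# Big lift (2.0% -> 3.0% CTR): needs only a few thousand per ad
print(n_per_variant(0.020, 0.030))
```

Since the required n scales with 1 over the squared difference, halving the lift you want to detect roughly quadruples the traffic you need, which is why the "hundreds of thousands" figures only apply when you're chasing very small improvements.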
     
  5. sau9696

    sau9696 Newbie

    Joined:
    Jul 20, 2015
    Messages:
    16
    Likes Received:
    0
Thanks for the link to the calculator, I'll definitely be using it. And yes, it's totally logical that the bigger the difference between the two results, the smaller the sample size needed; I hadn't really consciously thought of that before.

    Hmm yeah the 2 weeks thing makes sense now...