
Facebook Cloaking Methodology

Discussion in 'Cloaking and Content Generators' started by innovativecr, May 20, 2014.

  1. innovativecr

    innovativecr Newbie

    Joined:
    May 19, 2014
    Messages:
    15
    Likes Received:
    4
    Hi all,

    I am fairly familiar with device, IP/proxy, user-agent and browser detection techniques, and I'm curious how people maintain a list of Facebook footprints.

    Is it a matter of churning out a bunch of ads and monitoring the first 1-2 visitors to the landing environment, then storing those footprints as non-pass-through?

    Thanks in advance,

    inno
     
  2. mrblackjack

    mrblackjack Jr. VIP Premium Member

    Joined:
    Dec 6, 2011
    Messages:
    960
    Likes Received:
    553
    Occupation:
    I live alone, I work alone, I make money alone
    Location:
    G00gle LaNd
    What you describe will help you collect FB bots, though collecting FB bots is easy: they advertise their IP ranges on their website here:
    https://developers.facebook.com/docs/opengraph/howtos/maximizing-distribution-media-content
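
    As a rough sketch (Python; the CIDR list below is only a placeholder you would refresh yourself from the published data, e.g. the AS32934 route data their docs point to), checking whether a visitor IP belongs to Facebook's advertised crawler ranges looks something like this:

    Code:
# Minimal sketch: does a visitor IP fall inside Facebook's published crawler ranges?
# The CIDR list here is a placeholder for illustration only; refresh it from
# Facebook's own data (their docs point to the routes of AS32934, e.g.
#   whois -h whois.radb.net -- '-i origin AS32934')
import ipaddress

FB_CRAWLER_RANGES = [
    ipaddress.ip_network("66.220.144.0/20"),
    ipaddress.ip_network("69.63.176.0/20"),
    ipaddress.ip_network("173.252.64.0/18"),
]

def is_fb_crawler_ip(ip_str):
    """Return True if the visitor IP sits inside any of the listed FB ranges."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in FB_CRAWLER_RANGES)

print(is_fb_crawler_ip("69.63.176.13"))  # True with the placeholder list
print(is_fb_crawler_ip("8.8.8.8"))       # False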

    The IPs you really need to collect are those of the 3rd party services they work with (security companies, etc.) and the manual reviewers. The only way to collect them is by gathering a huge data set from numerous active campaigns, across different periods and from all over the world, and cross-referencing the data between them. So creating an ad and waiting for a manual reviewer or some 3rd party crawler to visit isn't really going to work.

    Besides, gathering IPs is the least of your worries: IPs change, and they use dynamic IPs too, which makes them hard to track/block (legit users sit on such IPs as well, so blocking them means losing traffic). There are other ways to differentiate good traffic from bad, and it's not at the IP level (or the UA string, host, referrer, or any of the other headers you collect).
     
    • Thanks x 2
    Last edited: May 21, 2014
  3. Atomic76

    Atomic76 Registered Member

    Joined:
    May 24, 2014
    Messages:
    67
    Likes Received:
    33
    One company, when reviewing the traffic quality of their paid Facebook ads, simply looked at what percentage of the traffic they received had JavaScript enabled. Typically, below 5% or so of this company's overall traffic had JavaScript disabled, but upwards of 70% of their paid Facebook traffic had it disabled. Considering how JavaScript-heavy Facebook is, it's safe to say these likely weren't real users, but bots.
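
    One way to measure that split yourself (a sketch only; the Flask setup and endpoint names are assumptions, not what that company used): every page view either fires a JavaScript beacon or loads a <noscript> fallback pixel, and the server compares the two counters.

    Code:
# Sketch: count JS-enabled vs JS-disabled page views with two beacons.
# Endpoint names and the Flask setup are illustrative assumptions.
from flask import Flask, Response, jsonify

app = Flask(__name__)
counts = {"js": 0, "nojs": 0}

PAGE = """<html><body>
<script>new Image().src = "/beacon/js";</script>
<noscript><img src="/beacon/nojs" alt=""></noscript>
Landing page content here.
</body></html>"""

@app.route("/")
def landing():
    return PAGE

@app.route("/beacon/<kind>")
def beacon(kind):
    if kind in counts:
        counts[kind] += 1
    return Response(status=204)  # empty reply; the request itself is the signal

@app.route("/stats")
def stats():
    total = counts["js"] + counts["nojs"]
    share = counts["nojs"] / total if total else 0.0
    return jsonify(js=counts["js"], nojs=counts["nojs"], nojs_share=round(share, 3))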
     
  4. mrblackjack

    mrblackjack Jr. VIP Premium Member

    Joined:
    Dec 6, 2011
    Messages:
    960
    Likes Received:
    553
    Occupation:
    I live alone, I work alone, I make money alone
    Location:
    G00gle LaNd
    Again, this method will only differentiate bots from users, which is easy (though some bots do have JS enabled, like Google's). Manual reviewers obviously have JS enabled, so it isn't going to solve manual reviewer detection.
     
  5. xcivicdx

    xcivicdx Newbie

    Joined:
    Jun 13, 2014
    Messages:
    22
    Likes Received:
    4
    It's very difficult to predict what Facebook will do. When you submit an ad, FB of course hits the link to see what you're submitting and to generate a preview. But they also check the link later on, from simple IP ranges and/or specific user agents, browsers, or internet providers. If you cloak too many internet providers, you'll cloak too many people. It's difficult to stay cloaked because they use proxies and/or dynamic IPs, as mentioned above.