
How to prevent duplicate sites with ScrapeBox

Discussion in 'Black Hat SEO Tools' started by grantunwin, Dec 3, 2011.

  1. grantunwin

    grantunwin Newbie

    Joined:
    Jan 30, 2010
    Messages:
    6
    Likes Received:
    1
    I've just purchased ScrapeBox and couldn't quite figure one thing out. How do I prevent ScrapeBox from posting to the same site twice?

    I understand the remove-duplicates feature when scraping for URLs, but how can I maintain a list (automatically) that will prevent ScrapeBox from 'finding' the same site again in future lists?
     
  2. HelloInsomnia

    HelloInsomnia Jr. Executive VIP Jr. VIP Premium Member

    Joined:
    Mar 1, 2009
    Messages:
    1,816
    Likes Received:
    2,912
    You can create a master blacklist and check against it after every scrape. But you're going to scrape the same domains over and over if you do enough scraping.
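    If you want to do that check outside of ScrapeBox, the same idea is easy to script. Here's a minimal Python sketch, assuming the blacklist is a plain text file with one already-posted domain per line (the file name blacklist.txt and both function names are placeholders, not anything from ScrapeBox itself):

        from urllib.parse import urlparse

        def load_blacklist(path):
            """Read previously posted domains, one per line, into a set."""
            with open(path, encoding="utf-8") as f:
                return {line.strip().lower() for line in f if line.strip()}

        def filter_harvest(urls, blacklist):
            """Keep only harvested URLs whose domain is not already blacklisted."""
            fresh = []
            for url in urls:
                domain = urlparse(url).netloc.lower()
                if domain.startswith("www."):
                    domain = domain[4:]  # treat www.example.com and example.com as one site
                if domain not in blacklist:
                    fresh.append(url)
            return fresh

        # e.g. fresh = filter_harvest(harvested_urls, load_blacklist("blacklist.txt"))

    Comparing at the domain level rather than the full URL is what stops you posting to the same site twice via different page URLs.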
     
  3. themidiman

    themidiman Power Member

    Joined:
    Feb 25, 2011
    Messages:
    701
    Likes Received:
    1,535
    Location:
    root@pts/0
    ScrapeBox just added a new feature (I think), "remove urls containing entries from", that lets you remove URLs matching entries in a reference file.
    So as you start building your lists, keep one text file that you add all your AA (auto-approve) URLs to.
    Then when you run a harvest, just use that file as the reference file and you should be good.
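    If you'd rather do the bookkeeping outside ScrapeBox, here's a rough Python sketch of that "append everything to one master file" step (the function name and master_aa.txt are made-up placeholders, not part of ScrapeBox):

        def update_master_list(master_path, new_aa_urls):
            """Append newly confirmed AA URLs to the master reference file,
            skipping anything already recorded."""
            try:
                with open(master_path, encoding="utf-8") as f:
                    known = {line.strip() for line in f if line.strip()}
            except FileNotFoundError:
                known = set()  # first run: no master file yet
            with open(master_path, "a", encoding="utf-8") as f:
                for url in new_aa_urls:
                    url = url.strip()
                    if url and url not in known:
                        f.write(url + "\n")
                        known.add(url)

        # e.g. update_master_list("master_aa.txt", urls_that_posted_ok)

    Then on the next harvest you point "remove urls containing entries from" at master_aa.txt and the old sites drop out before you blast.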
     
    • Thanks x 1
  4. grantunwin

    grantunwin Newbie

    Joined:
    Jan 30, 2010
    Messages:
    6
    Likes Received:
    1
    Thanks! I just didn't want to be posting on the same site over and over.
     
  5. alaltaierii

    alaltaierii Supreme Member

    Joined:
    Jun 11, 2010
    Messages:
    1,408
    Likes Received:
    349
    Make a list with your found sites and compare against that list every time you want to start a new blast.