Question about Scrapebox?

Discussion in 'Black Hat SEO' started by domainmadness, Jan 22, 2012.

  1. domainmadness

    domainmadness Senior Member

    Joined:
    Jun 22, 2011
    Messages:
    1,125
    Likes Received:
    357
    How can I prevent Scrapebox from processing previously harvested URLs? I mean, if I harvest URLs and process them, then do it again later, what's the method to avoid checking, for example, PR twice for URLs I've already found useless?

    Probably a bit of a noob question; I'm not that familiar with SB.
     
  2. the_demon

    the_demon Jr. Executive VIP

    Joined:
    Nov 23, 2008
    Messages:
    3,231
    Likes Received:
    1,596
    Occupation:
    Search Engine Marketing
    Location:
    The Internet
    I'm not entirely sure that option exists... You could export the domains you've already processed to a blacklist file, then use it to filter duplicate domains out of each new list you harvest.

    Not entirely sure if that's what you're asking about.
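
    If it helps, here's a rough Python sketch of that filtering step done outside Scrapebox, assuming you've exported your already-checked domains to a plain text file, one per line. The file names are just placeholders, not anything Scrapebox itself produces:

    from urllib.parse import urlparse

    def load_blacklist(path):
        # Previously processed domains, one per line.
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def filter_new_urls(harvested_path, blacklist_path, output_path):
        # Keep only harvested URLs whose domain is not already blacklisted.
        seen = load_blacklist(blacklist_path)
        kept = []
        with open(harvested_path, encoding="utf-8") as f:
            for line in f:
                url = line.strip()
                if not url:
                    continue
                domain = urlparse(url).netloc.lower()
                if domain and domain not in seen:
                    kept.append(url)
                    seen.add(domain)  # also dedupes within the new harvest
        with open(output_path, "w", encoding="utf-8") as f:
            f.write("\n".join(kept))

    filter_new_urls("harvested.txt", "already_checked_domains.txt", "fresh_urls.txt")

    Then you'd only load fresh_urls.txt back into Scrapebox for the PR check, and append its domains to the blacklist file afterwards so the next run skips them too.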