How to prevent duplicate sites with ScrapeBox

grantunwin

Newbie
Joined
Jan 30, 2010
Messages
6
Reaction score
1
I've just purchased ScrapeBox and couldn't quite figure one thing out. How do I prevent ScrapeBox from posting to the same site twice?

I understand the remove duplicate feature when scraping for URLs, but how can I maintain a list (automatically) that will prevent ScrapeBox from 'finding' the same site again for future new lists.
 

You can create a master blacklist and check against it after every scrape. But if you do enough scraping, you're going to keep turning up the same domains over and over.
 
ScrapeBox just added a new option (I think), "Remove urls containing entries from", that lets you remove duplicate URLs using a reference file.
So as you start building your lists, keep one text file that you add all your AA urls to.
Then when you run a harvest, just use that file as the reference file and you should be good.
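If you want to do the same filtering outside of ScrapeBox, the workflow above boils down to: extract the domain from each harvested URL and drop any URL whose domain already appears in your master file. Here is a minimal sketch in Python; the function names and the sample URLs are made up for illustration, and it compares by hostname only (with a leading "www." stripped), which is a simplifying assumption:

```python
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Extract the hostname from a URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def filter_against_master(harvested, master):
    """Keep only harvested URLs whose domain is not already in the master list."""
    seen = {domain(u) for u in master}
    fresh = []
    for u in harvested:
        d = domain(u)
        if d not in seen:
            seen.add(d)  # also dedupes repeat domains within this harvest
            fresh.append(u)
    return fresh

master = ["http://www.example.com/blog/post-1"]
harvested = [
    "http://example.com/blog/post-2",     # same domain as a master entry
    "http://another-site.net/article",    # new domain -> kept
    "http://www.another-site.net/other",  # duplicate within this harvest
]
print(filter_against_master(harvested, master))
# ['http://another-site.net/article']
```

After a blast, you would append the domains of the URLs you kept back into the master file, so the next harvest gets filtered against everything you have already posted to.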
 
Thanks! I just didn't want to be posting on the same site over and over.
 
Make a list of your found sites and compare against that list every time you want to start a new blast.
 