I have a small, quick question; if you're willing to assist me, that would be superb! It's a ScrapeBox question. Let's say I'm harvesting 20M URLs. All 20M are saved in my harvester_session folder, with each file holding 1M URLs, right? OK, so I'd like to remove duplicates across all the files. Is there any way to load more than 1M URLs into ScrapeBox? Or is there another piece of software that can do this filtering? Thanks!! M.
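If no single tool will load all 20M at once, one workaround is a small script that streams every session file, keeps only the first occurrence of each URL, and writes the results back out in 1M-line chunks that ScrapeBox can open. This is a minimal sketch, not ScrapeBox's own feature; the folder name, output file names, and chunk size are illustrative, and holding ~20M URLs in a set needs a few GB of RAM:

```python
import os
import tempfile

def dedupe_files(paths, out_dir, chunk_size=1_000_000):
    """Stream each input file, keep the first occurrence of every URL,
    and write the unique URLs into chunk_size-line output files.
    Returns the total number of unique URLs found."""
    seen = set()
    chunk, total = [], 0
    part = 0
    os.makedirs(out_dir, exist_ok=True)

    def flush():
        nonlocal part
        with open(os.path.join(out_dir, f"deduped_{part}.txt"), "w") as f:
            f.write("\n".join(chunk) + "\n")
        part += 1

    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                url = line.strip()
                if url and url not in seen:
                    seen.add(url)
                    chunk.append(url)
                    total += 1
                    if len(chunk) == chunk_size:
                        flush()
                        chunk.clear()
    if chunk:  # write the final partial chunk
        flush()
    return total

# Tiny demo with two overlapping files (stand-ins for the 1M-line session files)
tmp = tempfile.mkdtemp()
a, b = os.path.join(tmp, "a.txt"), os.path.join(tmp, "b.txt")
with open(a, "w") as f:
    f.write("http://x.com\nhttp://y.com\n")
with open(b, "w") as f:
    f.write("http://y.com\nhttp://z.com\n")
unique = dedupe_files([a, b], os.path.join(tmp, "out"), chunk_size=2)
print(unique)  # 3 unique URLs across both files
```

On Linux or macOS the same job can be done without a script via `sort -u harvester_session/*.txt | split -l 1000000`, which trades RAM for disk by sorting externally.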