Imagine you have a lot of keywords that you are harvesting. When ScrapeBox reaches one million URLs, it keeps harvesting, but at that point the harvesting is useless because ScrapeBox can't hold more than one million URLs. Am I right? So what am I supposed to do if I have a lot of keywords and expect the harvested URLs to go well over one million? It would be nice if SB removed the duplicates behind the scenes while it's harvesting, so the one-million cap wouldn't be reached so quickly. Can it do that? (Below is a rough sketch of the kind of dedup I mean, done outside SB.)
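
To make it concrete, here is roughly what that "dedupe behind the scenes" would look like if done by hand: split the keywords into batches, export each batch's harvest to a text file, then merge the files and drop duplicate URLs so no single session has to hold more than a million. This is just a sketch with made-up file names, not anything SB actually does:

```python
import glob

# Rough sketch: merge per-batch ScrapeBox exports and drop duplicate URLs.
# Assumes each batch was exported as a plain text file with one URL per
# line; the "harvest_batch_*.txt" naming is just an example.

seen = set()   # every URL encountered so far
kept = []      # unique URLs, in the order first seen

for path in sorted(glob.glob("harvest_batch_*.txt")):
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            url = line.strip()
            if url and url not in seen:
                seen.add(url)
                kept.append(url)

with open("merged_deduped.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(kept))

print(f"{len(kept)} unique urls kept")
```

Doing this after every batch keeps the working list small, but obviously it would be much nicer if SB did the same thing internally during the harvest.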