Nothing groundbreaking, just a useful command that quickly combines all of the harvested files from Scrapebox and uniquely sorts them into one file. Copy all of the harvested files over to Linux and run the combine-and-sort command (a sketch is shown at the end of this post):

- Add however many batch files as arguments; there is no limit.
- The time command at the beginning is optional and only shows how long the process took once it has finished.

Using the command above on 3 million links in total, on an old 2.2GHz dual core:

real 1m34.996s
user 1m12.981s
sys 0m2.784s

Then run a line count against the new unique file to get a total count of URLs.

Much quicker than using emacs, gvim, or any other Windows-based text editor. Hopefully this helps.
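As a rough sketch of the kind of commands described above, assuming the harvested files are plain-text URL lists (the names harvested1.txt, harvested2.txt, and unique.txt are placeholders, not the originals):

# Combine every harvested file and keep only the unique lines in one output file.
# The leading "time" is optional and simply reports how long the sort took.
time sort -u harvested1.txt harvested2.txt harvested3.txt > unique.txt

# Count the number of unique URLs in the combined file.
wc -l unique.txt

A shell wildcard such as sort -u harvested*.txt also works if the harvested files share a common naming pattern.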