Hi guys, I'm after a little help. I'm trying to scrape email addresses for all the florists in the UK. I decided my best approach would be to harvest the URLs via the footprint: site:.co.uk "florist" plus a keyword list of every town in the UK, with results set to 100 to try not to dilute them.

The approach seems to work pretty well. The problem I have is when I use the "Link Extractor" to harvest all the "internal" URLs (prior to extracting the emails): some of the sites are not relevant, and some span multiple pages. How could I refine the above process to achieve purer results?

Many thanks, dEVS!
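In case it helps anyone with the same problem: one way to tighten things up outside the harvester itself is to export the extracted URL list and pre-filter it before the email-extraction pass. Below is a minimal Python sketch of that idea, not anything built into the tool. The keyword lists (`NICHE_KEYWORDS`, `PAGE_KEYWORDS`) and the per-domain cap are placeholder assumptions you would tune yourself: keep only `.co.uk` domains that look on-niche, keep only contact/about/home style pages, and cap how many pages you take from any one site so a sprawling irrelevant site can't flood the list.

```python
import re
from urllib.parse import urlparse

# Placeholder keyword lists -- tune these to your niche.
NICHE_KEYWORDS = ("florist", "flower", "bouquet")
PAGE_KEYWORDS = ("contact", "about", "home", "index")
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def filter_urls(urls, max_per_domain=3):
    """Keep only .co.uk URLs whose domain looks on-niche and whose
    path looks like a contact/about/home page; cap pages per domain."""
    kept, per_domain = [], {}
    for url in urls:
        parts = urlparse(url)
        host, path = parts.netloc.lower(), parts.path.lower()
        if not host.endswith(".co.uk"):
            continue  # wrong TLD for this campaign
        if not any(k in host for k in NICHE_KEYWORDS):
            continue  # domain name doesn't look like a florist
        if path not in ("", "/") and not any(k in path for k in PAGE_KEYWORDS):
            continue  # deep page unlikely to carry an email address
        if per_domain.get(host, 0) >= max_per_domain:
            continue  # already took enough pages from this site
        per_domain[host] = per_domain.get(host, 0) + 1
        kept.append(url)
    return kept

def extract_emails(html):
    """Pull de-duplicated email addresses out of page text."""
    return sorted(set(EMAIL_RE.findall(html)))
```

For example, feeding in a mix of harvested URLs keeps only the on-niche contact page:

```python
urls = [
    "https://rosesflorist.co.uk/contact",
    "https://rosesflorist.co.uk/blog/post-1",
    "https://plumbers.co.uk/contact",
    "https://example.com/contact",
]
filter_urls(urls)  # -> ["https://rosesflorist.co.uk/contact"]
```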