I will be downloading a lot of pages from the same domain, starting from some subdirectory (around three to four levels deep). I would like to start downloading from that URL, include it, and also download everything underneath it — but only pages on this domain that are descendants of the starting URL. It is usually 10-20 pages max, and nothing else. There may be images on these pages, and I would like to include them too.

So, basically:

- enter a URL like the one above
- download 10-30 pages (that URL and everything underneath it, on this domain only), including images

I've been testing ScrapBook for Firefox, but there may be something better. I've also tried HTTrack and Teleport Pro, but as far back as I can remember, these never work for me. What would be the best solution for this? Something fast would be good too — I may end up working on, say, 10,000 separate URLs like this. Thanks.
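For reference, the closest I've gotten so far is a plain `wget` invocation along these lines (the URL, domain, and depth are just placeholders — I'm not sure this combination of flags is ideal, which is partly why I'm asking):

```shell
# Recursive download of one subtree only:
#   -r            recurse into links
#   -l 5          limit recursion depth (placeholder value)
#   --no-parent   never ascend above the starting directory
#   -p            also fetch page requisites (images, CSS) for each page
#   -k            convert links for local browsing
#   --domains     stay on this domain
wget -r -l 5 --no-parent -p -k --domains example.com \
     http://example.com/some/sub/directory/
```

This seems to roughly match the "this URL and everything underneath it" requirement, but I don't know how it would hold up across 10,000 separate starting URLs, or whether one of the dedicated tools handles this better.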