So I'm trying to scrape some data off a site that has a ton of information on it. I've tried HTTrack, but it's having issues doing what I need. If anyone knows of a script or program that can download a list of HTML files and pictures (and save them under the same filenames), I'd love to hear about it.

p.s. When I loaded a list of 50k URLs into HTTrack, it worked, but the larger parent list of 1.01 million URLs won't for some reason. I tried splitting it into lists of 500k URLs, but that isn't working either.

p.p.s. The reason I want to download the files to my PC is so I can more easily scrape/refer back to them later - the site has temporary listings (think Craigslist postings - they expire after X days, and I want to store the data for later).
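In case it helps, here's a minimal sketch of the kind of script I mean, using only the Python standard library. It assumes a plain text file (`urls.txt` here - the name is just an example) with one URL per line, and saves each file under its remote filename in a local folder. It's a bare-bones illustration, not a polished downloader - no retries, rate limiting, or handling of duplicate filenames.

```python
import os
import urllib.request
from urllib.parse import urlparse

def local_name(url, out_dir="downloads"):
    # Mirror the remote filename locally; fall back to index.html
    # for URLs that end in a bare "/" (no filename component).
    name = os.path.basename(urlparse(url).path) or "index.html"
    return os.path.join(out_dir, name)

def fetch_all(url_file, out_dir="downloads"):
    os.makedirs(out_dir, exist_ok=True)
    with open(url_file) as f:
        for url in (line.strip() for line in f):
            if not url:
                continue  # skip blank lines in the URL list
            try:
                urllib.request.urlretrieve(url, local_name(url, out_dir))
            except Exception as e:
                # Log failures and keep going; with a million URLs,
                # some will inevitably 404 or time out.
                print(f"failed: {url} ({e})")

if __name__ == "__main__":
    fetch_all("urls.txt")
```

Because it streams the URL list line by line instead of loading it all at once, list size shouldn't matter the way it seems to with HTTrack - a million-line file should work the same as a 50k one.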