Just in case you couldn't get s3 ripper working, or don't like using it, here's how to get s3 bucket contents manually:

1) first, browse to the root url of the bucket. example: hxxp://bucket.s3.amazonaws.com

2) if you encounter an "access denied" message here, you're out of luck. the bucket's been properly protected, and s3 ripper won't work on it either. (side note: to properly protect your own s3 buckets, set the bucket itself to private while keeping the individual files inside public. leaving the bucket listing public opens it to being scraped.)

3) if the bucket isn't protected, you'll see a pile of xml data. press ctrl+s to save it as an .xml file.

4) open the file with excel, clicking [ok] through all the default import options. the ns1:key column (column F) lists all the filenames.

5) to download an individual file, append its filename to the bucket url and enter that in your browser's address bar. example: hxxp://bucket.s3.amazonaws.com/filename1.pdf

that's it! (or, if you'd rather script it, see the sketch below.)
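if you'd rather automate steps 3 through 5 than click through excel, here's a minimal sketch in python using only the standard library. the bucket name is a placeholder, and note the hxxp:// in the examples above is defanged; a real request uses plain http://

import urllib.request
import xml.etree.ElementTree as ET

# placeholder bucket name -- swap in the real one
BUCKET_URL = "http://bucket.s3.amazonaws.com"

# steps 1 and 3: fetch the bucket root; a listable bucket answers with xml
with urllib.request.urlopen(BUCKET_URL) as resp:
    listing = resp.read()

# step 4: parse the ListBucketResult xml instead of opening it in excel;
# each <Key> element is a filename (what excel shows as ns1:key)
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
for key in ET.fromstring(listing).findall(".//s3:Key", ns):
    # step 5: bucket url + filename = a direct download link
    print(BUCKET_URL + "/" + key.text)

one caveat: s3 returns at most 1,000 keys per listing, so if the xml contains <IsTruncated>true</IsTruncated> you'd have to re-request with a ?marker= parameter to page through the rest.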