scrapebox footprints - expired domains

phans · Junior Member · Joined Jun 15, 2011 · Messages: 142 · Reaction score: 23
What footprints can I use in ScrapeBox to search within a time span of, let's say, 2003-2008? Which footprints will give me results for just that time period?

Thanks
 
Hi, I don't think you can achieve this the way you are suggesting; a footprint alone will never pick up an expired domain. What you can do is:

1. Search for posts from 2003-2008 (the only footprint I can think of is putting those dates in the query, e.g. "inurl:2003").
2. Take all the links ScrapeBox comes up with and export them to the Link Extractor plugin. This gives you all the outbound links from those pages, some of which may point to expired domains.
3. Run that list through Xenu, or do a PR check in ScrapeBox; a PageRank of #N/A often means expired. Either tool only gives you a 50/50 outcome at best, so you'll still need to verify that the domain is actually available to register.

For what it's worth, I don't think this is a cost-effective way to buy links or find decent domains.
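To speed up that last verification step, a plain DNS lookup makes a quick pre-filter: hosts that no longer resolve are the same ones Xenu flags as "no such host". A minimal Python sketch, assuming the Link Extractor output is saved as one full URL per line in a file called links.txt (the filename is just an example):

```python
import socket
from urllib.parse import urlparse

def host_resolves(host: str) -> bool:
    """Return True if the host still has a working DNS record."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        # resolution failure: candidate expired domain
        return False

# links.txt: hypothetical export from the Link Extractor plugin
with open("links.txt") as f:
    hosts = {urlparse(line.strip()).hostname for line in f if line.strip()}

for host in sorted(h for h in hosts if h):
    if not host_resolves(host):
        print(host)  # still needs a WHOIS/registrar check before you buy
```

A dead DNS record is only a hint, not proof of availability, so treat the output as a shortlist for a bulk availability check.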
 
I would use the TDNAM scraper in ScrapeBox.
I've tried writing little code snippets to get cute with finding domains like this, with no success; I ended up getting far better results from the built-in functionality that's already there.
Scraping WHOIS and ICANN is also an option, and ScrapeBox can help here as well (it's in the add-ons).

Scritty
 
You can't use SB to find expired domains, period. Of course you can get lucky and find some after scraping for days, but it is a highly suboptimal approach. Just get the TDNAM lists from the GoDaddy FTP and code a custom tool to check them for whatever data you want.
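As a starting point for that custom tool: assuming you have already downloaded one of the TDNAM auction lists as a CSV (the tdnam_export.csv filename and the "domain" column name below are assumptions; the real export layout varies), a sketch that filters the list by TLD and niche keywords could look like this:

```python
import csv

KEYWORDS = ("seo", "marketing")  # whatever niche you are after

# tdnam_export.csv: hypothetical CSV pulled from the GoDaddy FTP;
# adjust the column name to match the actual export.
with open("tdnam_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["domain"].strip().lower()
        if domain.endswith(".com") and any(k in domain for k in KEYWORDS):
            print(domain)
```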
 
ScrapeBox footprint + ScrapeBox PR checker + Xenu link checker.

All Status="no such host" entries are domains that are possibly available (there is no live site on the domain).

A nice trick once you have the domain list is to stick it back into ScrapeBox and run the PA/DA checker. This filters out any fake PR, or PR that is no longer valid because the domain has lost all of its links. Then export to CSV, sort by DA, and get checking with the bulk domain search!
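For the Xenu step, here is a small helper, assuming you exported the Xenu report as tab-separated text (the report.txt name and the "Address"/"Status" column headers are assumptions; match them to your actual export). It pulls out the unique hosts with a "no such host" status:

```python
import csv
from urllib.parse import urlparse

candidates = set()
# report.txt: hypothetical tab-separated Xenu export
with open("report.txt", newline="", encoding="utf-8", errors="replace") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        if "no such host" in (row.get("Status") or "").lower():
            host = urlparse(row.get("Address") or "").hostname
            if host:
                candidates.add(host)

for host in sorted(candidates):
    print(host)  # feed these into the PA/DA checker and the bulk domain search
```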
 
Stumbled upon this thread and wanted to add an update. In 2013 there wasn't any easy way to find expired domains, but now ScrapeBox has an expired-domain add-on that makes this very easy.
 
thanks for bumping a 4-year-old thread
 
If you are simply scanning for domains, and then checking afterwards whether each one has expired, you can use SB to harvest the domains from big G and set a specific date range for your search. Note that G uses Julian dates, so you will need to work out how to write those in your search string. You should also have good proxies that can harvest from G for this.
 
You can use the "daterange:" operator for footprints
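Tying the last two posts together: the daterange: operator takes Julian day numbers, not calendar dates. A minimal Python sketch of the conversion, using the 2003-2008 window from the original question (the footprint at the end is just an illustration):

```python
from datetime import date

def julian_day(d: date) -> int:
    """Convert a calendar date to a Julian day number.

    date.toordinal() counts days from 0001-01-01 (= day 1); adding
    1721425 shifts that to the Julian day numbering that Google's
    daterange: operator expects.
    """
    return d.toordinal() + 1721425

start = julian_day(date(2003, 1, 1))    # 2452641
end = julian_day(date(2008, 12, 31))    # 2454832

# Example footprint covering the 2003-2008 window:
print(f"inurl:2003 daterange:{start}-{end}")
```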
 