I'm new to ScrapeBox, and I'm trying to build out a list of auto-approve URLs. So far, my process is this:

1. Scrape sites by keyword + footprint
2. Post a comment to the harvested sites
3. Check all URLs for the comment URL (filtering for successful posts)
4. Export only the URLs where the comment posted successfully
5. Import those successful URLs into the Link Extractor and extract all internal links
6. Rinse & repeat the link-extracting process 2-3 more times

At this point, I've taken a list of roughly 150-300 URLs that allowed auto-approved commenting and expanded it considerably to include more URLs from the same sites. (I'm assuming that most sites that allow an auto-approved comment on one URL will allow one on all of them.)

Through this process I've ended up with a list of around 80K URLs, which covers the majority of the posts/articles/pages from those original comment-friendly sites. The problem is that this final list contains a lot of URLs for pages that wouldn't have a comment section (category archives, tag archives, author archives, contact pages, user profiles, etc.), and I need to filter it again to cut the flak out.

I could just load everything into the Comment Poster again and extract the successful URLs from that run, but that seems inefficient, and I'd also like to avoid alerting those sites to a potential influx of spam until it's already too late.

Is there a way to load my URL list into ScrapeBox and check each URL against a footprint (like "comment", "leave a comment", etc.) and get back a "found" or "not found" response?
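To illustrate what I mean, here's roughly the logic I'm picturing if I had to script it myself outside ScrapeBox. This is just a sketch, not anything built into ScrapeBox: the file name, the skip patterns, and the footprint strings are all placeholders I made up.

```python
# Rough sketch of the check I'm after: pre-filter obvious non-comment pages
# by URL path, then fetch each remaining page and flag whether a comment
# footprint appears anywhere in the HTML.
import requests

# Path fragments that almost never carry a comment form on a typical blog
# install (these are guesses, adjust to taste).
SKIP_FRAGMENTS = ["/category/", "/tag/", "/author/", "/contact", "/feed/"]

# Footprints that usually indicate an open comment form.
FOOTPRINTS = ["leave a comment", "leave a reply", "post a comment"]

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Drop the obvious archive/contact/profile-style pages before fetching anything.
urls = [u for u in urls if not any(frag in u.lower() for frag in SKIP_FRAGMENTS)]

for url in urls:
    try:
        html = requests.get(url, timeout=10).text.lower()
        status = "found" if any(fp in html for fp in FOOTPRINTS) else "not found"
    except requests.RequestException:
        status = "error"
    print(f"{status}\t{url}")
```

Basically a "found" / "not found" per URL, without actually posting anything to the sites yet. If ScrapeBox (or an addon) can do this natively, that would obviously be preferable to running a separate script.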