
Scrapebox - Deleting harvested sites that contain desired footprint

Discussion in 'Black Hat SEO' started by RapGod, Jun 30, 2015.

  1. RapGod

    RapGod Registered Member

    So I started using Scrapebox a few days ago and I nearly always get "Failed detecting form".

    I checked the sites and saw that I need to be logged in in order to comment.

    So I now want to filter out all sites that contain "you must be logged in" etc., so I can simply delete them without wasting time.

    Anyone got an idea how I can achieve that?
     
  2. Sweetfunny

    Sweetfunny Jr. VIP
    You can do this with the Page Scanner Addon http://www.scrapebox.com/page-scanner

    It allows you to scan a list of URLs for anything in the page, such as text, HTML, specific images or JavaScript, and save the URLs that do contain and don't contain your keywords/HTML into separate lists.

    But in your case it won't save you any time over just trying to comment to the list anyway. If you are getting lots of "Failed detecting form", it means your footprints are probably not good and need to be refined, so you don't harvest so many irrelevant URLs in the first place rather than trying to clean the list up after the fact.
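
    For anyone who wants to reproduce that contain / don't-contain split outside the addon, here is a minimal Python sketch of the same idea (the file names and phrase list are illustrative assumptions, not ScrapeBox settings):

    # Minimal sketch: split a harvested URL list by whether the page
    # contains any "must be logged in" style phrase. File names and
    # phrases below are assumptions for illustration.
    import requests

    PHRASES = ["you must be logged in", "log in to comment"]

    with open("harvested.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    keep, drop = [], []
    for url in urls:
        try:
            html = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue  # skip unreachable pages entirely
        # pages containing any phrase go to the drop list
        (drop if any(p in html for p in PHRASES) else keep).append(url)

    with open("keep.txt", "w") as f:
        f.write("\n".join(keep))
    with open("drop.txt", "w") as f:
        f.write("\n".join(drop))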
     
  3. RapGod

    RapGod Registered Member

    Thanks for the reply. Well, I use a list of footprints (70-80) and then simply apply them to all keywords.
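
    For reference, "applying footprints to all keywords" is just a cross product of the two lists; a minimal sketch, with assumed file names:

    # Hedged sketch: combine every footprint with every keyword into
    # search queries. File names are illustrative assumptions.
    from itertools import product

    with open("footprints.txt") as f:
        footprints = [line.strip() for line in f if line.strip()]
    with open("keywords.txt") as f:
        keywords = [line.strip() for line in f if line.strip()]

    queries = [f'{fp} "{kw}"' for fp, kw in product(footprints, keywords)]

    with open("queries.txt", "w") as f:
        f.write("\n".join(queries))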