
Scrapebox and manual commenting - How to check duplicates?

Discussion in 'Black Hat SEO Tools' started by retroslice, Mar 18, 2012.

  1. retroslice

    retroslice Registered Member

    Joined:
    Mar 15, 2012
    Messages:
    82
    Likes Received:
    6
    Hey guys,

    I am planning on using Scrapebox to manually comment on high-PR blogs to build backlinks to my money site.

    So far I have a list of 100 quality dofollow blogs to comment on, and I'll start commenting on them slowly since my site is new.

    My question is: is there a way to check which sites/domains I have already commented on, so that if Scrapebox harvests the same URL in the future it will automatically recognise and remove it? (e.g. if I start a brand-new harvest with similar keywords).

    I'm thinking that if I get 100 URLs, I'll save them as a text file.
    Then the next day, when I harvest another 100 URLs, I want to check them against the ones I already commented on (in the previous day's text file) so I can remove them.

    Hope this makes sense
     
  2. GoldenGlovez

    GoldenGlovez Moderator Staff Member Moderator Jr. VIP

    Joined:
    Mar 23, 2011
    Messages:
    701
    Likes Received:
    1,713
    Location:
    Guangdong, China
    retroslice,

    There are two ways you can go about this. The first is to import your existing list into the Harvester section, then use Import URL List > Import URL Lists to compare (on domain level). This will remove any duplicate URLs or domains from the main list.

    Another way to accomplish this automatically is to add the domains to Scrapebox's blacklist, which you'll find in a folder inside your Scrapebox directory. However, this method is only viable for a small number of domains (a few hundred); beyond that it will severely slow your harvesting, since every harvested URL has to be checked against the list.
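    For anyone who wants to do the same domain-level comparison outside Scrapebox, here is a minimal Python sketch of the idea (the URLs and function names are made-up examples, not part of the tool):

```python
from urllib.parse import urlparse

def domain(url):
    """Normalise a URL down to its host, ignoring a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host

def remove_already_commented(harvested, commented):
    """Keep only harvested URLs whose domain is not in the commented list."""
    seen = {domain(u) for u in commented}
    return [u for u in harvested if domain(u) not in seen]

commented = ["http://www.example.edu/post-1/"]
harvested = [
    "http://example.edu/another-post/",  # same domain -> dropped
    "http://fresh-blog.org/hello/",      # new domain -> kept
]
print(remove_already_commented(harvested, commented))
# prints ['http://fresh-blog.org/hello/']
```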

    Hope this helps,
    GG
     
    • Thanks x 1
  3. retroslice

    retroslice Registered Member

    Joined:
    Mar 15, 2012
    Messages:
    82
    Likes Received:
    6
    That's brilliant, method 1 was exactly what I was looking for.

    Thank you!
     
  4. retroslice

    retroslice Registered Member

    Joined:
    Mar 15, 2012
    Messages:
    82
    Likes Received:
    6
    Without starting a new thread: could you tell me how to find .edu links that allow/have comments?

    I just harvested 2,000 .edu links using the built-in footprint "site:.edu", and clicking through them, the majority do not have comments enabled!
     
  5. GoldenGlovez

    GoldenGlovez Moderator Staff Member Moderator Jr. VIP

    Joined:
    Mar 23, 2011
    Messages:
    701
    Likes Received:
    1,713
    Location:
    Guangdong, China
    Try using any of these footprints:

    site:.edu "Leave a Comment"
    site:.edu "Leave a Response"
    site:.edu "Leave a Reply"
    site:.edu "Add Comment"
    site:.edu "Add Response"
    site:.edu "Add Reply"
    site:.edu "Post a Comment"
    site:.edu "Post a Response"
    site:.edu "Post a Reply"
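    If you want to pair those footprints with your own niche keywords before pasting them into Scrapebox's keyword box, a quick script can generate every combination (the keywords below are made-up examples; only the first two footprints are shown):

```python
# Two of the footprints from the post above; add the rest as needed.
footprints = [
    'site:.edu "Leave a Comment"',
    'site:.edu "Post a Comment"',
]

# Hypothetical niche keywords -- replace with your own.
keywords = ["gardening", "fitness"]

# One query per footprint/keyword pair, ready to paste into the harvester.
queries = [f"{fp} {kw}" for fp in footprints for kw in keywords]
for q in queries:
    print(q)
# prints:
# site:.edu "Leave a Comment" gardening
# site:.edu "Leave a Comment" fitness
# site:.edu "Post a Comment" gardening
# site:.edu "Post a Comment" fitness
```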
     
  6. SEOWhizz

    SEOWhizz Power Member

    Joined:
    Oct 22, 2011
    Messages:
    606
    Likes Received:
    432
    Location:
    Lat: 38N 43' 11.298" Long: 27W 12' 7.733"
    You could do this with Scrapebox's Remove/Filter feature to strip out the URLs you don't need.

    Let's say you save the previous day's commented URLs in commented.txt, and the newly harvested URLs are sitting in Scrapebox's Harvester. You could then use Remove/Filter > Remove URLs Containing Entries From and browse to commented.txt.
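    That filter is easy to sketch outside the tool as well; this rough Python version mimics a substring match against the entries in a commented.txt file (the file name, URLs, and function name here are just illustrative):

```python
def remove_urls_containing(harvested, entries):
    """Drop any URL containing one of the entries as a substring,
    mimicking a Remove URLs Containing Entries style filter."""
    return [u for u in harvested if not any(e in u for e in entries)]

# In practice the entries would be read from commented.txt, one per line:
# entries = [line.strip() for line in open("commented.txt") if line.strip()]
entries = ["example.edu/post-1", "old-blog.org"]
harvested = [
    "http://example.edu/post-1/",    # matches an entry -> dropped
    "http://new-site.net/article/",  # no match -> kept
]
print(remove_urls_containing(harvested, entries))
# prints ['http://new-site.net/article/']
```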
     
  7. garylee

    garylee Newbie

    Joined:
    Nov 18, 2011
    Messages:
    47
    Likes Received:
    7
    Occupation:
    adult actor and producer
    Location:
    Paso Robles, CA. USA
    You can also take all of your successfully posted URLs and store them in Excel, which has a built-in duplicate remover.
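    If you'd rather skip Excel, the same duplicate check is only a few lines of Python; this sketch keeps the first occurrence of each URL (the URLs are made-up examples):

```python
def unique_urls(urls):
    """Remove duplicate URLs while keeping first-seen order,
    like a single-column Remove Duplicates in a spreadsheet."""
    seen = set()
    out = []
    for u in urls:
        if u not in seen:
            seen.add(u)
            out.append(u)
    return out

print(unique_urls([
    "http://a.com/p1", "http://b.com/p2", "http://a.com/p1",
]))
# prints ['http://a.com/p1', 'http://b.com/p2']
```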