
Question about ScrapeBox

Discussion in 'Black Hat SEO' started by mohanseenauth, Jun 14, 2015.

  1. mohanseenauth

    mohanseenauth Newbie

    Joined:
    Apr 8, 2010
    Messages:
    38
    Likes Received:
    3
    Is there a way to harvest URLs with ScrapeBox that are unique? Let's say I harvest a batch of URLs with a set of keywords, then I harvest again with the same keywords. Will the results of the second batch come out unique, or will ScrapeBox keep returning the same URLs for each keyword?

    Another question: if I have a batch of URLs to run the fast blog commenter on, and let's say out of 1,000 URLs 100 are a success, and then I try to comment again with the same URLs and let's say 60 are a success, does ScrapeBox post comments twice on the same blog? Or does it scan through the previous posting results and skip what was already a success?

    Any help would be appreciated.
     
  2. loopline

    loopline Jr. VIP

    Joined:
    Jan 25, 2009
    Messages:
    3,799
    Likes Received:
    2,026
    Gender:
    Male
    ScrapeBox is only going to give you what the engines give it, so if the search engines return the same URLs for the same keywords, you will get the same URLs the second time. It's highly likely that at least most URLs will be the same the second time around, unless we are talking about a rapidly changing query and/or long periods between harvests.

    If you need more results, just tack words or letters on; ScrapeBox can append a-z to your keywords, for instance. Then harvest and remove duplicate URLs. You can also use the import and compare list option to compare one list to another and keep only the uniques.
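    Outside of ScrapeBox, the same trick is easy to picture in code. Here is a minimal sketch of the a-z append and the dedupe step, assuming plain Python and made-up keywords; it is an illustration of the idea, not ScrapeBox's actual implementation:

    Code:
    import string

    # Sketch of the a-z trick: append each letter to every base keyword
    # to widen the query set, then dedupe the harvested URLs afterwards.
    # The keywords below are made-up examples.

    def expand_keywords(keywords):
        return [f"{kw} {letter}" for kw in keywords
                for letter in string.ascii_lowercase]

    def unique_urls(urls):
        # Keep the first occurrence of each URL, preserving order.
        seen = set()
        return [u for u in urls if not (u in seen or seen.add(u))]

    queries = expand_keywords(["blog comments", "guest post"])
    print(len(queries))  # 52 queries from 2 base keywords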



    It will post over and over on the same blogs. If you want to post only on new blogs, you can save the successful URLs and then remove them from your list.

    So say you start with 1,000 URLs and save them to a file; let's call it file A.

    Then you post and get 100 successes. You export the successes to file B.

    Then you load file A into the harvested URLs grid, go to Import >> Import and compare URL list, and select file B. It will remove any URLs in file B from the grid, and you would be left with the 900 that failed. I walked through this to show an example of how to use the compare feature.

    But really, all you need to do in this case is export the failed URLs from the poster, load them back in, and post again. Just keep in mind that if you keep posting to the same list, it will keep posting to the same blogs.
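    For anyone who wants to do the same compare outside of ScrapeBox, it's just a set difference. A minimal sketch, assuming one-URL-per-line text files and the hypothetical names file_a.txt / file_b.txt from the example above:

    Code:
    # Remove every URL in file B (successes) from file A (full list),
    # leaving the failures -- the 900 in the example above.
    # File names are the hypothetical ones from the post.

    def load_urls(path):
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]

    all_urls = load_urls("file_a.txt")         # original 1,000 URLs
    successes = set(load_urls("file_b.txt"))   # the 100 successes

    remaining = [u for u in all_urls if u not in successes]
    print(len(remaining))  # 900 left to retry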
     
  3. mohanseenauth

    mohanseenauth Newbie

    Joined:
    Apr 8, 2010
    Messages:
    38
    Likes Received:
    3

    Can I take the 900 URL list and place it in the harvester to harvest unique URLs, or is it strictly keywords I have to use? To use footprints, can I harvest with the built-in WordPress option, or do I have to use custom footprints? And when can I use Movable Type?

    And my other question: what is the recommended number of proxies to use with SB? I have 11. Whenever I try to harvest off Google, it doesn't harvest anything. It only works for Bing, and sometimes Yahoo and AOL. Yahoo sometimes gives me an error, I think error 999 or error 499 or something like that. Will my proxies be dead if I overuse the harvester?
     
  4. loopline

    loopline Jr. VIP

    Joined:
    Jan 25, 2009
    Messages:
    3,799
    Likes Received:
    2,026
    Gender:
    Male
    I'm not sure what you mean. You can put the 900 URLs in the keyword box, but I'm not sure it's going to return anything relevant there. Can you explain what you're thinking?

    You can harvest with custom footprints or built-in ones; it's the same concept. The built-in ones are just to get you started, but you can use your own. The ScrapeBox poster doesn't post to Movable Type URLs any longer, but you can still scrape them.

    999 from Yahoo means the IP is banned. 11 proxies will get banned fast; you would want to use a delay of 5-10 seconds when using only 11 proxies with Google, and that assumes you start with them all unblocked. Once they get blocked, they typically get unblocked in 12-48 hours, depending on various factors.
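    To put that delay advice in concrete terms, here is a rough sketch of cycling 11 proxies with a 5-10 second pause between requests. The proxy addresses and queries are placeholders, and this is not how ScrapeBox schedules requests internally:

    Code:
    import itertools
    import random
    import time

    # Placeholder proxies and queries; the point is the pacing, not the fetch.
    proxies = [f"127.0.0.{i}:8080" for i in range(1, 12)]  # 11 proxies
    queries = ["keyword one", "keyword two", "keyword three"]

    proxy_cycle = itertools.cycle(proxies)
    for query in queries:
        proxy = next(proxy_cycle)
        print(f"harvesting {query!r} through {proxy}")
        time.sleep(random.uniform(5, 10))  # 5-10s so no proxy gets hammered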
     
  5. mohanseenauth

    mohanseenauth Newbie

    Joined:
    Apr 8, 2010
    Messages:
    38
    Likes Received:
    3
    I have been using the 11 proxies so far with no problems, but thanks for the tip.

    I guess putting the 900 URL list in the keyword box doesn't make sense when I think about it.

    I found the keyword scraper. I pulled 800 keywords from Google Keyword Planner, put them in the keyword scraper, and pulled out 3,000 keywords. I scraped and got about 100k results, and I am commenting on them now with the fast commenter option. I'm wondering, after I am done posting on these 100k URLs, how I can get more URLs to post on. I could keyword scrape the 3,000 keywords and hopefully pull out more unique keywords, which I could use to scrape more URLs that I didn't post on. I hope I can pull that off.

    Thanks for the help bro.

    EDIT

    A possible feature for ScrapeBox v2 is a log that tracks which URLs I commented on, so I don't post duplicate comments on the same blog.
     
    Last edited: Jun 16, 2015
  6. DJPaulyD

    DJPaulyD Power Member

    Joined:
    Aug 16, 2014
    Messages:
    542
    Likes Received:
    455
    Get GScraper; it's a much better scraper than ScrapeBox.
     
  7. loopline

    loopline Jr. VIP

    Joined:
    Jan 25, 2009
    Messages:
    3,799
    Likes Received:
    2,026
    Gender:
    Male
    Sure, you can do that with the keywords. If you don't want to post on the same URL twice, just save off all the URLs you post to in a file; call it file A. Then when you scrape up your next 100K URLs, you just click Import >> Import and compare URL list and select file A. ScrapeBox will take any URLs in file A that you already posted to and remove them from the new URLs you scraped. Then post to the new batch and add those URLs to file A.

    Rinse and Repeat.
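    In code terms, the rinse-and-repeat is an accumulating blacklist. A minimal sketch, assuming a one-URL-per-line master file (the hypothetical file_a.txt) and leaving the actual scrape/post steps out:

    Code:
    MASTER = "file_a.txt"  # hypothetical master list of every URL posted to

    def load_posted():
        try:
            with open(MASTER, encoding="utf-8") as f:
                return {line.strip() for line in f if line.strip()}
        except FileNotFoundError:
            return set()  # first run: nothing posted yet

    def filter_and_record(new_urls):
        posted = load_posted()
        fresh = [u for u in new_urls if u not in posted]
        # ... post comments to `fresh` here ...
        with open(MASTER, "a", encoding="utf-8") as f:
            for u in fresh:
                f.write(u + "\n")
        return fresh

    batch = ["http://example.com/a", "http://example.com/b"]
    print(filter_and_record(batch))  # a second identical run returns []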

    In what way? ScrapeBox is far faster and far better in every way I can find.