Scrapebox: Multiple Instance + Same Proxies?

Discussion in 'Black Hat SEO' started by SEOfarmer, Dec 16, 2011.

  1. SEOfarmer

    SEOfarmer Regular Member

    Joined:
    Jun 6, 2011
    Messages:
    318
    Likes Received:
    10
    Is it okay to run multiple instances of Scrapebox with the same list of proxies? Are there any negative impacts to this?

    Please advise. Thanks
     
  2. DamageX

    DamageX Elite Member

    Joined:
    Sep 17, 2008
    Messages:
    2,692
    Likes Received:
    1,687
    Occupation:
    Unemployable
    Location:
    Former nomad
    I do that all the time and I have yet to see any negative results from it.
     
  3. steilx

    steilx Newbie

    Joined:
    Jul 8, 2011
    Messages:
    3
    Likes Received:
    0
    I also didn't see any problem (regarding speed) while harvesting with 3 Scrapebox instances using the same list of public proxies. But the list was around 200 proxies, so the chance of two instances hitting the same proxy at the same time was small. I believe that if you use at least 5-7 proxies per instance you won't notice any performance problems (I'm not talking about the problems that come from heavy use of the same IP, e.g. an IP temporarily banned from Yahoo).
     
  4. Moogle!

    Moogle! Regular Member

    Joined:
    Dec 27, 2010
    Messages:
    349
    Likes Received:
    52
    Location:
    Pearl of the Orient
    It depends on your connection settings and how many proxies you have.

    If you're planning on running 2 instances of Scrapebox, you should have at least 100+ working proxies, otherwise you'll get a lot of failed/timed-out submissions.
     
  5. SEOfarmer

    SEOfarmer Regular Member

    Joined:
    Jun 6, 2011
    Messages:
    318
    Likes Received:
    10
    Well, I don't use public proxies, but I do have around 75 private proxies.
    I wonder if that will work.

    Do you guys use public proxies to harvest, then switch back to private proxies to post, or what?

    Thanks.
     
  6. steilx

    steilx Newbie

    Joined:
    Jul 8, 2011
    Messages:
    3
    Likes Received:
    0
    I use public proxies for harvesting and the PageRank checker, and private proxies for posting and the link checker. I suggest not using your private proxies for harvesting, since the IPs get temporarily banned by the search engines once they hit a certain number of queries.
     
  7. theonly1

    theonly1 BANNED

    Joined:
    Nov 5, 2011
    Messages:
    219
    Likes Received:
    95
    I suggest no more than 5 instances (up to 10, depending on your VPS/dedicated/home computer and Internet connection), because otherwise you will get fewer successfully posted entries!

    Also, do not set the threads to, let's say, 20 for each of the 5 instances (if you are using 5, of course) - pick slightly different numbers like 18, 19, 20, 21, 22 for each instance of SB - you will get better results that way // at least I do!

    Final tip: try not to use the same blog list in every instance of SB. Rather, if you have 5 blog lists, each producing 20k successfully posted entries for example, load each list into its own instance! This gives you a better success rate, plus you can use each blog list multiple times instead of spamming one list to death, then generating/buying a new list, and so on... (see the splitting sketch below).
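
    If you keep everything in one master file, a quick script can cut it into per-instance chunks. A minimal Python sketch, assuming one URL per line in plain-text files; the file names here are just placeholders:

    Code:
    # Split one master blog list into N per-instance files so each
    # Scrapebox instance posts to its own chunk.
    # "master_list.txt" and "list_N.txt" are hypothetical names.

    def split_list(master_path: str, instances: int) -> None:
        with open(master_path, encoding="utf-8") as f:
            urls = [line.strip() for line in f if line.strip()]

        chunk = -(-len(urls) // instances)  # ceiling division
        for i in range(instances):
            part = urls[i * chunk:(i + 1) * chunk]
            with open(f"list_{i + 1}.txt", "w", encoding="utf-8") as out:
                out.write("\n".join(part) + "\n")

    if __name__ == "__main__":
        split_list("master_list.txt", 5)  # one chunk per instance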
     
  8. Mex-deluxe

    Mex-deluxe Regular Member

    Joined:
    May 24, 2010
    Messages:
    252
    Likes Received:
    29
    What I don't understand is this:

    I start a Scrapebox harvest, and it could run for days since I loaded a huge list of keywords.

    But the harvesting session will stop within a few hours or days, because I burn through all the loaded public proxies.

    So I've burnt all the proxies, and Scrapebox hangs. How do I load new, fresh proxies while the harvester is still running? I cannot edit the proxies textarea while the harvester runs.

    But if I stop the harvester, load new, fresh proxies and start it again, then the harvester will search for the same keywords I already used! So I will get a lot of duplicate URLs.

    Is there a way to update proxies on the fly without stopping the harvester? Otherwise I don't understand how some people harvest millions of auto-approve URLs.
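
    (Stripping the duplicates afterwards is the easy part - Scrapebox's own Remove Duplicate URLs option handles it, or a quick script like the Python sketch below, where the file names are just placeholders. The wasted queries are the real problem.)

    Code:
    # Merge several harvest exports and drop duplicate URLs,
    # keeping first-seen order. File names are hypothetical.

    def merge_unique(paths, out_path="merged_unique.txt"):
        seen = set()
        with open(out_path, "w", encoding="utf-8") as out:
            for path in paths:
                with open(path, encoding="utf-8") as f:
                    for line in f:
                        url = line.strip()
                        if url and url not in seen:
                            seen.add(url)
                            out.write(url + "\n")

    if __name__ == "__main__":
        merge_unique(["harvest_day1.txt", "harvest_day2.txt"])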
     
  9. SEOWhizz

    SEOWhizz Power Member

    Joined:
    Oct 22, 2011
    Messages:
    606
    Likes Received:
    432
    Location:
    Lat: 38N 43' 11.298" Long: 27W 12' 7.733"
    Use the multi-threaded harvester, and the Harvester Keyword Statistics dialog to remove the completed keywords, so you don't use them again in the next harvesting run:

    - Scrapebox > Settings > Use multi-threaded harvester
    - After the scraping starts to slow down (I'm using the latest version, v1.15.42, and it does not hang), select the "Stop harvesting" button.
    - The Harvester Keyword Statistics dialog appears:
    Select: Export keywords > Export all not completed keywords to Keyword list.
    - Load fresh proxies.
    - Start harvesting again!
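
    If you'd rather do the keyword subtraction outside Scrapebox, here's a minimal Python sketch, assuming one keyword per line in plain-text files (the file names are placeholders):

    Code:
    # Remove already-harvested keywords from a master keyword list
    # so a restarted run only queries what is left.
    # One keyword per line; file names are hypothetical.

    def remaining_keywords(master="keywords.txt",
                           completed="completed.txt",
                           out="keywords_remaining.txt"):
        with open(completed, encoding="utf-8") as f:
            done = {line.strip().lower() for line in f if line.strip()}
        with open(master, encoding="utf-8") as f, \
             open(out, "w", encoding="utf-8") as o:
            for line in f:
                kw = line.strip()
                if kw and kw.lower() not in done:  # case-insensitive match
                    o.write(kw + "\n")

    if __name__ == "__main__":
        remaining_keywords()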
     
  10. Mex-deluxe

    Mex-deluxe Regular Member

    Joined:
    May 24, 2010
    Messages:
    252
    Likes Received:
    29
    Thanks, so this is how people harvest 300,000+ auto-approve URLs?

    It doesn't hang, but at some point I get no more results, because all the public proxies have been banned by Google.

    Let's say I use software like Proxy Goblin; then I should stop the harvester each day, replace the burnt proxies with new, fresh ones, and start the harvester again, searching only the unused keywords?

    I think a very useful feature in Scrapebox would be the ability to modify the proxy list on the fly, while the harvester is running.

    There is already a Proxies folder inside the Scrapebox folder, with a proxies.txt file in it. So when I modify that proxies.txt file, Scrapebox should monitor the file and notice that I changed the proxies, even if the harvester is still running. Then Scrapebox should switch to the new proxy list without me stopping the harvester.

    In fact, this feature would completely automate the harvesting process, since a public proxy finder like Proxy Goblin can save new proxies to a txt file automatically, on a schedule.

    So I could just load a huge list of keywords into Scrapebox, start Proxy Goblin, and start the harvest. After that, Proxy Goblin would keep updating the proxy list on autopilot, and the Scrapebox harvester could run for days, even weeks, without babysitting! The monitoring itself is trivial - see the sketch below.
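
    To illustrate - Scrapebox exposes no such hook today, so this Python sketch only shows the watch-and-reload behaviour the feature would need; the path and polling interval are made up:

    Code:
    # Poll proxies.txt and report when it changes. A harvester with
    # live-reload support would swap in the fresh list at this point.
    # PROXY_FILE and the 30s interval are assumptions, not Scrapebox API.
    import os
    import time

    PROXY_FILE = "proxies.txt"  # e.g. the file in Scrapebox's Proxies folder

    def watch(poll_seconds=30):
        last_mtime = 0.0
        active = set()
        while True:
            try:
                mtime = os.path.getmtime(PROXY_FILE)
            except OSError:  # file missing or mid-rewrite
                time.sleep(poll_seconds)
                continue
            if mtime != last_mtime:
                last_mtime = mtime
                with open(PROXY_FILE, encoding="utf-8") as f:
                    fresh = {line.strip() for line in f if line.strip()}
                new = fresh - active
                active = fresh
                print(f"reloaded {len(active)} proxies ({len(new)} new)")
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        watch()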

    Why doesn't Scrapebox have this feature?