
Using GScraper Links in GSA-SER

Discussion in 'Black Hat SEO Tools' started by bneslsc, Jan 28, 2014.

  1. bneslsc

    bneslsc Newbie

    Joined:
    Jan 27, 2013
    Messages:
    3
    Likes Received:
    0
    Hey guys

    I'm using GScraper and GSA-SER, having some problems, and I have a few questions I hope you all can answer:


    Basically, it's to do with using GScraper for scraping and GSA-SER for posting. From reading things on the various forums, I believe that most people are using GScraper and its proxies for scraping, and obviously GSA-SER for link building.


    Firstly, when using GScraper, what are the best ways to maximise LPM? I've had it go as high as 70k LPM, but mostly it stays around 5k to 6k, and I cannot figure out the optimal settings to keep it at 65k. I am using GScraper on a dedicated VPS (the Wizard plan from Solid SEO host) with the GScraper subscription proxies, and all that VPS does is scrape. I have varied the thread count between 500 and 1,500 with the same result: the LPM jumps up and down but mostly stays stuck at 5k, which is really annoying.


    My next question is about using the scraped links in GSA-SER. I should say that what I do is extract the footprints from GSA into GScraper so it only scrapes links that GSA can successfully post to. Once the scraping is finished, there are two options people take from here: they either trim to root domain and remove duplicates, or they just remove the duplicate URLs.
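    To make the two options concrete, here is a minimal sketch of both (plain Python, purely for illustration; the file names are placeholders and it's not meant to replace whatever dedupe tools you already use):

    Code:
    # Two clean-up options for a scraped list: dedupe exact URLs, or trim to
    # the root domain first and then dedupe. File names are placeholders.
    from urllib.parse import urlparse

    with open("scraped_urls.txt", encoding="utf-8", errors="ignore") as f:
        urls = [line.strip() for line in f if line.strip()]

    # Option 1: keep the full URLs, remove exact duplicates only
    unique_urls = sorted(set(urls))

    # Option 2: trim every URL to its root domain, then remove duplicates
    root_domains = sorted({
        f"{urlparse(u).scheme}://{urlparse(u).netloc}/"
        for u in unique_urls if urlparse(u).netloc
    })

    with open("unique_urls.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(unique_urls))
    with open("unique_domains.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(root_domains))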


    This is what I can't get my head around. Most people seem to be trimming to the root domain, running PR checks and then adding the list into GSA. But isn't the whole point of GScraper to scrape URLs, filter them, and then PR check them? Just because the root domain has PR doesn't mean the inner pages (URLs) will too. Also, a full URL helps with relevancy if you are scraping with niche-related keywords. What is your advice/opinion here? Is it better to trim to domain for general lists and use full URLs for specific niches? Or better to use root domains, period, or URLs, period?


    Next, regardless of whether I use URLs or root domains, importing these links into GSA-SER is proving a bit challenging. To import them, I go to Options > Advanced > Tools > Import URLs (identify platform and sort in) > From File, which imports the merged and filtered links I feed it. The thing is, no matter whether I use URLs or root domains, it gets rid of most of the links. I'm confused: I used the GSA footprints to scrape, so how could most of the links not be useful?
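    One thing that might show where the links go missing is to spot-check a small random sample before importing: fetch them and count how many still load and still contain one of the footprints, since dead or changed pages would explain part of the loss. A rough, hypothetical sketch, assuming plain Python with the requests library and placeholder file names:

    Code:
    # Hypothetical spot-check: fetch a small random sample of scraped URLs and
    # count how many still load and still contain one of the footprints.
    # footprints.txt / scraped_urls.txt are placeholder file names.
    import random
    import requests

    with open("footprints.txt", encoding="utf-8") as f:
        footprints = [line.strip().strip('"') for line in f if line.strip()]

    with open("scraped_urls.txt", encoding="utf-8", errors="ignore") as f:
        urls = [line.strip() for line in f if line.strip()]

    sample = random.sample(urls, min(100, len(urls)))
    hits = 0
    for url in sample:
        try:
            html = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue
        if any(fp.lower() in html for fp in footprints):
            hits += 1

    print(f"{hits}/{len(sample)} sampled URLs still match a footprint")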


    I think that's pretty much the big stumbling block for me. Again, if you could help me out here, it would provide a ton of clarity.
     
  2. liberox

    liberox Power Member

    Joined:
    Jun 8, 2012
    Messages:
    529
    Likes Received:
    179
    Location:
    World WILD Web
    Use another provider. GScraper's own proxies are really not working well at this time.


    IMO it's not necessary. Anyway, you can use both methods at the same time.


    It's normal.
     
  3. divok

    divok Senior Member

    Joined:
    Jul 21, 2010
    Messages:
    1,015
    Likes Received:
    634
    Location:
    http://twitter.com/divok
    Don't do that. Right-click on any project > Import URLs > From File, and then choose yes or no depending on your preference.
     
  4. SerpEvil

    SerpEvil Supreme Member

    Joined:
    Jan 1, 2012
    Messages:
    1,495
    Likes Received:
    446
    Gender:
    Male
    Occupation:
    WebMaster
    Location:
    Top 3 On Google
    It happens to me all the time. After scraping over 3 million URLs with GScraper, I removed duplicate domains (as long as there are social network sites and forums in the list, there is no reason to remove only duplicate URLs) and ended up with 220k unique domains. From those 220k domains, GSA posted to no more than 200-300 of them, roughly 0.1% of the list. That tells me the footprints that ship with the software are not the best and we need better footprints.
     
  5. satyr85

    satyr85 Power Member

    Joined:
    Aug 7, 2011
    Messages:
    580
    Likes Received:
    445
    Location:
    Poland
    3 million URLs is nothing; I scrape 100-200 million URLs (depending on footprints) per 24 hours. For easy footprints I hit 250+ million URLs per 24 hours. Of course these URLs are not unique. Your problem can be proxies and also footprints, but even with good footprints, slow proxies mean you won't scrape a lot.
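    For comparison with the LPM figures earlier in the thread, those per-day totals convert roughly like this (just back-of-the-envelope arithmetic, nothing tool-specific):

    Code:
    # Convert per-day scrape totals into links per minute (LPM).
    for per_day in (100_000_000, 200_000_000, 250_000_000):
        lpm = per_day / (24 * 60)
        print(f"{per_day:,} links/day  ->  ~{lpm:,.0f} LPM")
    # 100M/day ~ 69k LPM, 200M/day ~ 139k LPM, 250M/day ~ 174k LPM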
     
  6. SerpEvil

    SerpEvil Supreme Member

    Joined:
    Jan 1, 2012
    Messages:
    1,495
    Likes Received:
    446
    Gender:
    Male
    Occupation:
    WebMaster
    Location:
    Top 3 On Google
    I didn't say I scraped for a month to get 3 million URLs; I did it in 1-2 hours. Yes, those URLs are unique, since I removed duplicate domains. The problem is with the footprints.
     
  7. xjackx

    xjackx Regular Member

    Joined:
    Mar 17, 2011
    Messages:
    412
    Likes Received:
    29
    Location:
    Brazil
    What are your results with those lists?
     
  8. shuttershades

    shuttershades Registered Member

    Joined:
    Oct 26, 2013
    Messages:
    92
    Likes Received:
    15
    Home Page:
    Here are some footprints (sharemobile. ro/download/774302).
    Please share your footprints with us.
     
  9. shuttershades

    shuttershades Registered Member

    Joined:
    Oct 26, 2013
    Messages:
    92
    Likes Received:
    15
    Home Page:
    Man, please help me with some good footprints. Thanks.