Clarification on increasing the number of unique auto-approve URLs in Scrapebox

Discussion in 'Black Hat SEO Tools' started by brascan1, Oct 31, 2010.

  1. brascan1

    brascan1 Newbie

    Joined:
    Jul 6, 2008
    Messages:
    34
    Likes Received:
    1
    Hi:

    Can someone clarify this for me? Someone in another forum mentioned that they can get "another 275,676 unique auto-approve WP URLs fetched from a list of 4,151 unique auto-approve WP domains."

    How does this work? I have Scrapebox; how can I apply this technique? Please advise.
     
  2. accelerator_dd

    accelerator_dd Jr. VIP Premium Member

    Joined:
    May 14, 2010
    Messages:
    2,441
    Likes Received:
    1,005
    Occupation:
    SEO
    Location:
    IM Wonderland
    I guess you find an auto-approve WordPress blog and use a footprint like "site:<the_auto_approve>". If one post is auto-approve, then all posts are..... :D
     
  3. pisco

    pisco Regular Member

    Joined:
    Aug 13, 2010
    Messages:
    222
    Likes Received:
    42
    Location:
    Lisbon
    I don't think there is a footprint that tells you whether a blog auto-approves. You just post the comment and check if the link is up quickly. If it is, congrats, you have an auto-approve blog to add to your list; if not, it is moderated.
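    ScrapeBox has a link checker for running this test at scale, but a rough standalone sketch of the same check might look like the Python below. The post URL, your link, and the 60-second wait are placeholder assumptions, not anything from the thread:

```python
# Rough sketch of pisco's test: after posting a comment with your link,
# re-fetch the post and see whether the link is already live.
import time
import urllib.request

def link_is_live(post_url, my_link, wait_seconds=60):
    """Re-fetch the blog post and check whether our link appears in the HTML."""
    time.sleep(wait_seconds)  # give the comment a moment to show up
    req = urllib.request.Request(post_url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req, timeout=30).read().decode("utf-8", "replace")
    return my_link in html

# Placeholder URLs; swap in the post you commented on and your own link.
if link_is_live("http://example-blog.com/some-post/", "http://yoursite.com"):
    print("link is up fast: probably auto-approve, keep the domain")
else:
    print("not visible yet: likely moderated, skip it")
```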
     
  4. darshan1994

    darshan1994 BANNED

    Joined:
    Oct 9, 2009
    Messages:
    654
    Likes Received:
    318
    Those 275k URLs are just different URLs from those blogs. Since the whole blog doesn't need approval, every single URL on that blog is auto-approve, so just harvest the shit out of it :)
     
  5. murka

    murka Newbie

    Joined:
    Mar 15, 2011
    Messages:
    4
    Likes Received:
    0
    It means they have a list of ~5k domains, and all the other URLs are pages from those domains. Like someone said, if one page is auto-approve then usually they all are.

    Once you find an auto-approve domain, use the "trim to root" feature on your whole list, and double-check that you have removed any duplicates. Then search and replace "http://" with "site:http://" across the whole list and put it into the keywords section of the keyword scraper. Leave the footprint blank and harvest to get all the pages for each domain. A scripted version of that prep is sketched below.
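    Outside ScrapeBox, the same list prep could be scripted. Here is a minimal Python sketch, assuming auto_approve_urls.txt holds one harvested URL per line; both file names are made up for the example:

```python
# Minimal sketch of murka's list prep: trim each URL to its root domain,
# dedupe, and prefix with "site:" so each line becomes a keyword query.
# File names are assumptions, not anything ScrapeBox itself uses.
from urllib.parse import urlparse

with open("auto_approve_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

roots = set()  # the set removes duplicate domains for us
for url in urls:
    netloc = urlparse(url).netloc
    if netloc:  # skip malformed lines
        roots.add("http://" + netloc + "/")  # the "trim to root" step

with open("keywords.txt", "w") as out:
    for root in sorted(roots):
        out.write("site:" + root + "\n")  # the search-and-replace step
```

    Then paste keywords.txt into the keyword scraper with a blank footprint, as described above.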

    Edit: Didn't look at post date, sorry to bump an ancient thread.