
How do I use ScrapeBox to find wikis?

Discussion in 'Black Hat SEO Tools' started by forwardedlandlines, Mar 8, 2012.

  1. forwardedlandlines (Jr. VIP)
    I've been searching for footprints on Google and experimenting, but neither got me what I'm looking for. The sites weren't wikis, they weren't .edu when I specified it, and I'm getting extremely few results even with 200 harvesting threads...
     
  2. auw21 (Regular Member)
    Use:

    "powered by mediawiki" + keywords
     
  3. themidiman (Power Member)
    Create a text file containing:

    allinurl:"wiki/index.php?"

    Load a bunch of keywords, and merge that text file with the keywords.

    Start harvesting.

    There are other footprints too, but that one came to mind.
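
    The merge step described above is just a cross product of footprints and keywords, each output line becoming one search query. A minimal Python sketch of what ScrapeBox does internally (the sample keywords are made up for illustration):

    ```python
    # Merge a footprint file with a keyword list, ScrapeBox-style:
    # every footprint is paired with every keyword to form one query per line.
    footprints = ['allinurl:"wiki/index.php?"']
    keywords = ["guitar lessons", "home brewing", "seo tips"]

    queries = [f"{fp} {kw}" for fp in footprints for kw in keywords]
    for q in queries:
        print(q)
    ```

    With one footprint and three keywords this yields three queries, ready to paste into the harvester.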
     
  4. azguru (Elite Member)
    "powered by mediawiki"
    inurl:wiki site:.edu
    inurl:"special:userlogin"
    "This page was last modified on" inurl:wiki
    "main page" "random page" inurl:wiki
     
  5. GumShoerer (Newbie)
    If I'm looking for relevant wikis, I generally scrape all the backlinks of my top 200 competitors, then filter out the URLs that contain "wiki" in them.

    I usually end up with 100+ prospective wikis relevant to my keyword.
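
    That filtering step is a one-liner once the backlinks are exported; a quick Python sketch (the sample URLs are placeholders):

    ```python
    # Filter a list of scraped backlink URLs down to those containing "wiki".
    urls = [
        "http://example.edu/wiki/index.php?title=Main_Page",
        "http://blog.example.com/post/123",
        "http://en.wikipedia.org/wiki/SEO",
    ]

    # Case-insensitive substring match, same as ScrapeBox's "keep URLs containing" filter.
    wiki_urls = [u for u in urls if "wiki" in u.lower()]
    ```

    ScrapeBox's own Remove/Filter menu does the same thing, but this is handy if you're post-processing an exported list.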
     
  6. outrnm (Newbie)
    Will check it out.