
Need a method to extract pages from a website

Discussion in 'Black Hat SEO' started by stevelori, Nov 22, 2010.

  1. stevelori

    stevelori Newbie

    Joined:
    Oct 5, 2010
    Messages:
    40
    Likes Received:
    0
    hey

    I need some help. I'm using Scrapebox to harvest blogs and I'm getting some good PR 3-7 blogs, but after I delete duplicate domains I'm left with only the domain itself. So what I do is go URL by URL, find the first entry I see, and copy the link that holds the comment box.

    My question is: is there a better way than doing this manually? I've got around 2000 of them.

    Thanks
     
  2. artizhay

    artizhay BANNED

    Joined:
    Nov 21, 2010
    Messages:
    1,867
    Likes Received:
    1,335
    There should be several programs available that auto-submit to blogs. But if you need any URLs from a site, a simple PHP regex code can retrieve them for you. PM me if you need help.
     
  3. stevelori

    stevelori Newbie

    Joined:
    Oct 5, 2010
    Messages:
    40
    Likes Received:
    0
    Thanks man, this is what I need. I need to extract the last-added URL, or any URL on a site or blog. I tried to PM you, but it says I can't until I have 15 messages.

    "simple PHP regex code"
    Can you send me the script? I know PHP too, but can you point me in the right direction or send me the code?
     
    Last edited: Nov 22, 2010
  4. wickedguy

    wickedguy Supreme Member

    Joined:
    Jul 22, 2009
    Messages:
    1,402
    Likes Received:
    1,379
    Location:
    BHW--> South Africa
    Home Page:
    PHP:
    <?php
    // Fetch the raw HTML of the target page ('blbl.com' is just a placeholder URL;
    // the @ suppresses warnings if the request fails).
    $page = @file_get_contents('http://blbl.com');

    // Match every anchor tag, capturing the href value and the anchor text.
    $pattern = '/<a\s+href=["\']?(.*?)["\']?>(.*?)<\/a>/si';
    preg_match_all($pattern, $page, $matches);

    // $matches[1] holds the captured URLs; print one per line.
    foreach ($matches[1] as $url) {
        echo $url . "<br>";
    }
    ?>
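
    For the original question (running this over a whole Scrapebox harvest rather than one site at a time), here is a minimal sketch along the same lines. It assumes the de-duplicated domains sit in a urls.txt file, one full URL per line including http://, and that the first link pointing back into the same domain is a reasonable guess for a post permalink with a comment box; both the file name and that heuristic are assumptions, not something stated in the thread.

    PHP:
    <?php
    // Read the de-duplicated Scrapebox domains (urls.txt is an assumed file name).
    $domains = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

    foreach ($domains as $domain) {
        // Fetch the homepage; skip the domain if the request fails.
        $page = @file_get_contents($domain);
        if ($page === false) {
            continue;
        }

        // Collect every href on the page, same pattern as above.
        preg_match_all('/<a\s+href=["\']?(.*?)["\']?>(.*?)<\/a>/si', $page, $matches);

        $host = parse_url($domain, PHP_URL_HOST);
        foreach ($matches[1] as $url) {
            // Heuristic: take the first absolute link back into the same domain
            // as the post permalink (the page holding the comment box).
            if ($host && strpos($url, $host) !== false && $url !== $domain) {
                echo $domain . ' -> ' . $url . "\n";
                break; // only the first candidate per domain
            }
        }
    }
    ?>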