
What impact will AI and Machine Learning have on organic Google results?

Discussion in 'AI - Artificial Intelligence in Digital Marketing' started by myopic1, Feb 6, 2016.

  1. myopic1

    myopic1 Regular Member

    Joined:
    Mar 24, 2014
    Messages:
    408
    Likes Received:
    402
    Recently the head of Google's organic search team was replaced with an artificial intelligence/machine learning specialist. After initially having concerns that machine learning and neural nets would break organic rankings - or rather make it so difficult to understand why things rank as they do that nobody could predict exactly what effect future tweaks would have - it looks like Google is phasing machine learning into how it assesses webpages. Where do you think this will take SEO, PBNs and organic rankings, and how will we have to adapt?

    Article detailing some of the above: http://www.wired.com/2016/02/ai-is-changing-the-technology-behind-google-searches/
     
  2. Capo Dei Capi

    Capo Dei Capi BANNED

    Joined:
    Oct 23, 2014
    Messages:
    754
    Likes Received:
    1,732
    It means that we will also have to adopt machine learning and raise the quality of our SEO, rather than just churning out spam or something only slightly above spam.
     
  3. Reaver

    Reaver Jr. VIP

    Joined:
    Aug 6, 2015
    Messages:
    1,848
    Likes Received:
    5,311
    Gender:
    Female
    "Let's let machines do our work for us!" <-- How Skynet got started
     
    • Thanks Thanks x 1
  4. TeamSocial

    TeamSocial Junior Member

    Joined:
    Jun 15, 2014
    Messages:
    130
    Likes Received:
    40
    Gender:
    Male
    I've got an extensive background in neural nets and neuroscience in general, and I have to say that article is half-baked. People DO understand how neural nets work - it's actually fairly simple. But it is right in the sense that it's very hard to trace the results and make appropriate changes. I don't know how to phrase this in layman's terms, but I'll give it a shot.

    Site rankings are completely subjective: there's no definitive constant or final goal saying where each site should be placed. In other words, almost every decision made by the AI would first have to go through the algorithms it was already using, learning from its own offsets. But if it strictly adhered to those algorithms, it wouldn't "learn" per se. That's how humans learn: error/reward/Hebbian. So it will possibly increase search accuracy, but it will also make a lot more errors. Also, the human "subjective" element, which is the desired outcome, is 10x harder to implement, because you can't just change a constant +/- 1 like you could in a simple mathematical algorithm - you have to trace the neurons and attempt to make the adjustment yourself, which is basically impossible if you want to do it right (toy sketch below).
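
    To make that concrete, here's a toy sketch I knocked up - random weights and made-up signals, obviously nothing like Google's actual code:

    Code:
        import numpy as np

        # Hand-written ranking algorithm: one named constant per signal.
        KEYWORD_WEIGHT = 1.0  # nudge this +/- 1 and you know exactly what changes

        def rule_based_score(keyword_freq, backlinks):
            return KEYWORD_WEIGHT * keyword_freq + 0.5 * backlinks

        # Tiny trained net: the same "knowledge" is smeared across every weight.
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(2, 8))  # input -> hidden weights
        W2 = rng.normal(size=(8, 1))  # hidden -> output weight column

        def net_score(keyword_freq, backlinks):
            h = np.tanh(np.array([keyword_freq, backlinks]) @ W1)
            return float((h @ W2)[0])

        print(rule_based_score(3.0, 10))  # transparent: you can explain this number
        print(net_score(3.0, 10))         # opaque: no single weight means "keywords"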

    What does this mean? Yes, it will fuck over your low-quality PBNs, in maybe 1-2 years? But seriously, if you make real quality PBNs, and you don't do the shit that millions of other people do to build them, it won't change a thing.
     
    • Thanks Thanks x 4
  5. asap1

    asap1 BANNED Jr. VIP

    Joined:
    Mar 25, 2013
    Messages:
    4,961
    Likes Received:
    3,179
    PBNs will never die if you put in the work to make them look like real sites and leave absolutely zero footprints in the registration process.

    When Google stops using links as such a big ranking factor, that's when we should start getting worried.
     
  6. JustUs

    JustUs Power Member

    Joined:
    May 6, 2012
    Messages:
    626
    Likes Received:
    582
    At the heart of AI/machine learning is a classifier of some type. The code below is a semi-supervised Bayesian classifier. Not all of its functions are shown, but notice how the classifier works: on the frequency of given words/lines and the probability of those words and lines. We cannot know exactly what signals Google and the other search engines use - they may not fully know either - but we can know that the frequency of those terms affects the probability, and the higher the probability, the higher the ranking. That does not mean keyword stuffing, however.

    Code:
        import sys

        training_file = sys.argv[1]
        testing_file = sys.argv[2]

        # Learn class priors P(c) and word likelihoods P(w|c) from the labeled data
        (priors, likelihood) = get_priors_likelihood_from_file(training_file)
        testing_lines = get_lines_from_file(testing_file)
        training_lines = get_lines_from_file(training_file)

        labeled_posteriors = get_posteriors_from_lines(training_lines, priors.keys())

        for i in range(10):
            # Iterations of the self-training algorithm
            unlabeled_posteriors = []  # Contains the posterior and the line
            num_correct = 0

            # Normalize the likelihood so each class's word counts sum to 1
            for k in priors.keys():
                n = float(sum(likelihood[k].values()))
                for v in likelihood[k].keys():
                    likelihood[k][v] /= n

            num_classified = 0
            for line in testing_lines:
                classification = classify_bayesian(line, priors, likelihood)
                num_classified += 1
                if classification == line[1]:
                    num_correct += 1
                unlabeled_posteriors.append((get_class_posteriors(line, priors, likelihood), line))

            print("Classified %d correctly out of %d for an accuracy of %f" % (num_correct, len(testing_lines), float(num_correct) / len(testing_lines)))

            # Re-estimate priors/likelihood from labeled plus newly self-labeled data
            (priors, likelihood) = relearn_priors_likelihood(labeled_posteriors + unlabeled_posteriors)

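    The classify_bayesian helper isn't shown above, so here is a minimal sketch of what it might look like - an assumption on my part, using log-probabilities to avoid underflow and assuming each line is a (text, label) tuple:

    Code:
        import math

        def classify_bayesian(line, priors, likelihood):
            # Return the class with the highest log-posterior for one line.
            # Hypothetical reconstruction -- the real helper isn't shown above.
            words = line[0].split()  # assumes line = (text, label)
            best_class, best_score = None, float("-inf")
            for c in priors:
                # log P(c) + sum of log P(word | c); tiny floor for unseen words
                score = math.log(priors[c])
                for w in words:
                    score += math.log(likelihood[c].get(w, 1e-9))
                if score > best_score:
                    best_class, best_score = c, score
            return best_class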
     
  7. StageFright

    StageFright Newbie Premium Member

    Joined:
    Jul 13, 2013
    Messages:
    21
    Likes Received:
    5
    As they say, fight fire with fire. I propose implementing unsupervised learning algorithms (some would say this problem is better suited to supervised learning, but I believe unsupervised would be best). If we use this, it would in theory be possible to push various variables into the learning module, along with the resultant page rank. The algorithm would qualify the specific factors responsible for ranking and would also rate them. From there, new strategies can be devised to enable "keeping up" with all the pandas, giraffes and elephants big G ends up releasing. Well, that is my theory anyway - I haven't built it yet, but a rough sketch is below.
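
    A rough sketch of the idea, using synthetic stand-in data and a made-up feature set (nobody outside Google has the real inputs):

    Code:
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.random((200, 4))           # stand-in features, e.g. word count,
                                           # backlinks, page speed, keyword density
        ranks = rng.integers(1, 101, 200)  # stand-in observed SERP positions

        # Cluster pages purely by their feature profiles (no labels used).
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

        # Each cluster's average rank hints at which feature profiles
        # big G is currently rewarding.
        for c in range(3):
            mask = km.labels_ == c
            print(c, X[mask].mean(axis=0).round(2), ranks[mask].mean())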
     
  8. JustCash

    JustCash Marketplace seller Premium Member

    Joined:
    Jun 8, 2017
    Messages:
    111
    Likes Received:
    10
    Gender:
    Male
    It means that any "tricks" won't work anymore because instead of a simple algorithm (or manual human reviewer) going over your websites, it's a super intelligence that's more powerful than any human mind.
     
  9. garyeastwood

    garyeastwood Registered Member

    Joined:
    Jun 7, 2017
    Messages:
    60
    Likes Received:
    2
    Gender:
    Male
    Occupation:
    Ecommerce
    Location:
    United Kingdom
    Google has been using machine learning to rank pages from the beginning - the algorithms have only grown more complex.
     
  10. BrownFreak

    BrownFreak Newbie

    Joined:
    Jul 7, 2017
    Messages:
    16
    Likes Received:
    4
    Gender:
    Male
    This - and the amount of data they now have available has also grown exponentially. When Google first appeared on the scene they only had a search engine, so they could only mine search-related information.
    Now, with everyone using Gmail and Google Docs right through to Maps and Google Earth, they have not just SEO data but pretty much everything in between...
     
  11. lancis

    lancis Elite Member

    Joined:
    Jul 31, 2010
    Messages:
    1,680
    Likes Received:
    2,416
    Occupation:
    Entrepreneur
    Location:
    Milky Way
    IMHO the industry uses buzzwords such as AI and Deep Learning too extensively. In reality we're as far from AI as we were 20 years ago.
    The only achievement of the past few years can be summed up as: the propagation of neural networks from science to engineering.

    With the number of libraries out there, even a layman can create a neural network. That neural network will be as far from AI as the layman himself is from the moon.

    There is only one advantage that neural networks offer over traditional machine learning methods (such as kernels): they can be trained on huge amounts of data. Otherwise they are suboptimal, and standard ML methods can easily outperform anything created with a NN once you find a way to feed the same amount of data in (quick sketch below).
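
    On a dataset small enough for both to handle, a kernel method holds its own against a small net. A quick scikit-learn sketch on toy data (nothing to do with search):

    Code:
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # Toy data: at this scale an RBF-kernel SVM competes head to head with a net.
        # (What kernels can't do is scale: training cost grows roughly
        # quadratically with the number of samples.)
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for model in (SVC(kernel="rbf"), MLPClassifier(max_iter=500, random_state=0)):
            model.fit(X_tr, y_tr)
            print(type(model).__name__, round(model.score(X_te, y_te), 3))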

    So to answer the OP's question... we are not there yet; we still have no reason to adapt to AI. :)