
What impact will AI and Machine Learning have on organic Google results?

Discussion in 'AI - Artificial Intelligence in Digital Marketing' started by myopic1, Feb 6, 2016.

  1. myopic1

    myopic1 Regular Member

    Joined:
    Mar 24, 2014
    Messages:
    408
    Likes Received:
    404
    Recently, the head of Google's organic search team was replaced with an artificial intelligence/machine learning specialist. After initially having concerns that machine learning and neural nets would break organic rankings, or rather make it so difficult to understand why things rank as they do and exactly what effect future tweaks would have, it now looks like Google is phasing machine learning into how it assesses webpages. Where do you think this will take SEO, PBNs and organic rankings, and how will we have to adapt?

    Article detailing some of the above: http://www.wired.com/2016/02/ai-is-changing-the-technology-behind-google-searches/
     
    • Thanks Thanks x 1
  2. Capo Dei Capi

    Capo Dei Capi BANNED

    Joined:
    Oct 23, 2014
    Messages:
    754
    Likes Received:
    1,734
    It means that we will also have to adopt machine learning and raise the quality of our SEO, rather than just producing spam, or work only slightly above spam quality.
     
    • Thanks Thanks x 1
  3. Reaver

    Reaver Jr. VIP

    Joined:
    Aug 6, 2015
    Messages:
    1,904
    Likes Received:
    5,459
    Gender:
    Female
    "Let's let machines do our work for us!" <-- How Skynet got started
     
    • Thanks Thanks x 2
  4. TeamSocial

    TeamSocial Junior Member

    Joined:
    Jun 15, 2014
    Messages:
    130
    Likes Received:
    41
    Gender:
    Male
    I've got an extensive background in neural nets and neuroscience in general, and I have to say that article is half wrong. People DO understand how neural nets work; it's actually fairly trivial. But it is right in the sense that it's very hard to trace the results and make appropriate changes. I don't know how to phrase this in layman's terms, but I'll give it a shot.

    Site rankings are completely subjective; there is no definitive constant or final goal for where each site should be placed. In other words, almost every decision made by the AI would first have to go through the algorithms it was already using, to learn from its own offsets. But if it strictly adhered to those algorithms, it wouldn't "learn" per se. That's how humans learn: error, reward, Hebbian association. So it will possibly increase search accuracy, but it will also make a lot more errors. Also, the human "subjective" element, which is the desired outcome, is 10x harder to implement, because you can't just change a constant by +/- 1 like you could in a simple mathematical algorithm; you have to trace the neurons and attempt to make the adjustment yourself, which is basically impossible if you want to do it right.
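
    To make that concrete, here's a toy sketch (made-up feature names and weights, nothing like Google's actual system): you can tweak one constant in a hand-written formula and know what changes, but there is no single knob to turn in a trained net.

    Code:
        import numpy as np

        # Hand-written scoring formula: every weight has an obvious meaning,
        # so a human can nudge one constant and predict exactly what changes.
        def handcrafted_score(features, weights):
            # features: [keyword_relevance, backlink_score, page_speed]
            return float(np.dot(features, weights))

        # Tiny neural-net-style scorer: one hidden layer of weights.
        # No single weight maps to "backlinks" or "speed"; to change one
        # ranking you would have to retrain, not edit a constant.
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(3, 8))   # input -> hidden (learned, not hand-set)
        W2 = rng.normal(size=(8, 1))   # hidden -> output

        def neural_score(features):
            hidden = np.tanh(features @ W1)
            return (hidden @ W2).item()

        page = np.array([0.7, 0.4, 0.9])
        weights = np.array([0.5, 0.3, 0.2])

        print(handcrafted_score(page, weights))  # bump weights[1] by 0.1 -> effect is obvious
        print(neural_score(page))                # bump W1[0, 3] by 0.1 -> effect is opaque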

    What does this mean? Yes, it will fuck over your low-quality PBNs, in maybe 1-2 years. But seriously, if you build real quality PBNs, and you don't do the shit that millions of other people do to build them, it won't change a thing.
     
    • Thanks Thanks x 5
  5. asap1

    asap1 BANNED

    Joined:
    Mar 25, 2013
    Messages:
    4,961
    Likes Received:
    3,185
    PBNs will never die if you put in the work to make them like real sites and leave absolutely 0 footprints in the registration process.

    When Google stops using links as such a big ranking factor is when we should start getting worried.
     
    • Thanks Thanks x 1
  6. JustUs

    JustUs Power Member

    Joined:
    May 6, 2012
    Messages:
    626
    Likes Received:
    585
    At the heart of A.I./machine learning is a classifier of some type. The code below is from a semi-supervised Bayesian classifier. While not all of the functions are shown, notice that the classifier works from the frequency of given words/lines and the probability of those words and lines. We cannot know exactly which signals Google and the other search engines use, and they may not know either. What we can know is that the frequency of those terms affects the probability, and the higher the probability, the higher the ranking. That does not mean keyword stuffing, however.

    Code:
        import sys

        # Helper functions (get_priors_likelihood_from_file, get_lines_from_file,
        # get_posteriors_from_lines, classify_bayesian, get_class_posteriors,
        # relearn_priors_likelihood) are defined elsewhere and not shown here.

        training_file = sys.argv[1]
        testing_file = sys.argv[2]

        (priors, likelihood) = get_priors_likelihood_from_file(training_file)
        testing_lines = get_lines_from_file(testing_file)
        training_lines = get_lines_from_file(training_file)

        labeled_posteriors = get_posteriors_from_lines(training_lines, priors.keys())

        for i in range(10):
            # One iteration: classify the unlabeled data, then re-learn the model
            unlabeled_posteriors = []  # holds (posteriors, line) pairs
            num_correct = 0

            # Normalize the likelihood so each class's word counts sum to 1
            for k in priors.keys():
                n = float(sum(likelihood[k].values()))
                for v in likelihood[k].keys():
                    likelihood[k][v] /= n

            # Classify every test line and track accuracy
            for line in testing_lines:
                classification = classify_bayesian(line, priors, likelihood)
                if classification == line[1]:
                    num_correct += 1
                unlabeled_posteriors.append((get_class_posteriors(line, priors, likelihood), line))

            print("Classified %d correctly out of %d for an accuracy of %f"
                  % (num_correct, len(testing_lines), float(num_correct) / len(testing_lines)))

            # Re-estimate priors/likelihood from the labeled plus newly classified data
            (priors, likelihood) = relearn_priors_likelihood(labeled_posteriors + unlabeled_posteriors)
    
    
     
    • Thanks Thanks x 1
  7. StageFright

    StageFright Newbie Premium Member

    Joined:
    Jul 13, 2013
    Messages:
    21
    Likes Received:
    6
    As they say, fight fire with fire. I propose implementing unsupervised learning algorithms (some would say this problem is best suited to a supervised learning algorithm, but I believe unsupervised learning would be best). With this, it would in theory be possible to push various variables into the learning module, along with the resulting page rank. The algorithm would identify the specific factors responsible for ranking and would also rate them. From there, new strategies can be devised to keep up with all the pandas, giraffes and elephants big G ends up releasing. Well, that is my theory anyway; I have not built this yet. A rough sketch of the idea is below.
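
    Something like this, where the data, feature names and cluster count are all invented placeholders, and the "factors" it surfaces are only candidates, not proof of causation:

    Code:
        import numpy as np
        from sklearn.cluster import KMeans

        # Made-up ranking factors scraped for a handful of pages on one SERP:
        # columns = [referring_domains, word_count, page_speed_score, https]
        X = np.array([
            [120, 2200, 85, 1],
            [ 95, 1800, 90, 1],
            [ 10,  400, 60, 0],
            [  8,  350, 55, 0],
            [200, 3000, 70, 1],
            [  5,  300, 40, 0],
        ], dtype=float)
        observed_rank = np.array([2, 3, 28, 35, 1, 50])  # position in the SERP

        # Normalize columns so one large-scale feature doesn't dominate the clustering
        Xn = (X - X.mean(axis=0)) / X.std(axis=0)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xn)

        # For each cluster, compare the average observed rank with the feature
        # profile; the factors that differ most between the cluster that ranks
        # well and the rest are candidates for what the algorithm is rewarding.
        for c in range(2):
            mask = km.labels_ == c
            print("cluster", c,
                  "avg rank:", observed_rank[mask].mean(),
                  "feature means:", X[mask].mean(axis=0))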
     
    • Thanks Thanks x 1
  8. JustCash

    JustCash Marketplace seller Premium Member

    Joined:
    Jun 8, 2017
    Messages:
    113
    Likes Received:
    11
    Gender:
    Male
    It means that any "tricks" won't work anymore because instead of a simple algorithm (or manual human reviewer) going over your websites, it's a super intelligence that's more powerful than any human mind.
     
    • Thanks Thanks x 1
  9. garyeastwood

    garyeastwood Registered Member

    Joined:
    Jun 7, 2017
    Messages:
    60
    Likes Received:
    3
    Gender:
    Male
    Occupation:
    Ecommerce
    Location:
    United Kingdom
    Google has been using machine learning to rank pages from the beginning - the algorithms have only grown more complex.
     
    • Thanks Thanks x 1
  10. BrownFreak

    BrownFreak Newbie

    Joined:
    Jul 7, 2017
    Messages:
    16
    Likes Received:
    5
    Gender:
    Male
    This - and the amount of data they now have available has also grown exponentially. When Google first appeared on the scene, all they had was a search engine, so they could only mine search-related information.
    Now, with everyone using Gmail and Google Docs right through to Maps and Google Earth, they have not only search data but pretty much everything in between...
     
    • Thanks Thanks x 1
  11. lancis

    lancis Elite Member

    Joined:
    Jul 31, 2010
    Messages:
    1,683
    Likes Received:
    2,426
    Occupation:
    Entrepreneur
    Location:
    Milky Way
    IMHO the industry uses buzzwords such as AI and Deep Learning far too freely. In reality we are as far from AI as we were 20 years ago.
    The only achievement of the past few years can be summed up as: the propagation of neural networks from science to engineering.

    With the number of libraries out there, even a layman can create a neural network. That neural network will be as far from AI as the layman himself is from the moon.

    There is only one advantage that neural networks offer over traditional machine learning methods (such as kernel methods): they can be trained on huge amounts of data. Otherwise they are suboptimal, and standard ML methods can easily outperform anything created with a NN once you find a way to feed the same amount of data in.
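
    You can see this for yourself with a quick toy comparison (synthetic data, near-default settings, not a benchmark); on a dataset this small the kernel method is at least competitive, and the NN's edge only appears once the training set gets huge:

    Code:
        from sklearn.datasets import make_moons
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # A small, noisy dataset, i.e. the regime where kernel methods do fine
        X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

        # Kernel method: RBF support vector machine
        svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

        # A small neural network trained on the same data
        nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_train, y_train)

        print("SVM test accuracy:", svm.score(X_test, y_test))
        print("NN  test accuracy:", nn.score(X_test, y_test))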

    So to answer the OP's question: we are not there yet, and we still have no reason to adapt to AI. :)
     
    • Thanks Thanks x 2
  12. timothywcrane

    timothywcrane Power Member

    Joined:
    Apr 25, 2009
    Messages:
    630
    Likes Received:
    247
    Occupation:
    Internet Promotion Management
    Location:
    USA
    The only problem I see with unsupervised learning is that over- or underfitting would be at the whim of the ML setup. Even if you set up algorithms or structures to combat this, an algorithm is in essence a mathematically expressed, human-led "supervisor". I run into this type of scenario when I use ontological pre-processing within iterations of SPARQL queries. We already use this type of setup for aircraft navigation instruction iteration, using geo-data and weather patterns instead of ontological data in that instance.

    It's a fine line, but any time a human "pulls a graph" to check for overfitting or underfitting, I consider it supervision (a quick illustration of that check is below). That supervision is still needed: as Search Engine Watch found, AI is currently wrong in its assumptions, with false positives and false negatives, almost half of the time. Air traffic control might be moving out of the towers, but the sensors put in their place are still "supervised". Granted, airspace and the front page are different beasts, but analytic quality is analytic quality.

    Quantitatively we are advancing fast with GPUs and distribution, but I think the further away from structure we stray, the more the complexity of the search for quality grows exponentially. Everyone complains that G doesn't give up the info, but few seem to speculate that the info they do give up might be misleading to an algorithm even if "accurate". Even the TOS says it cannot be relied on to be accurate. Just look at the Facebook ML that labeled dark-skinned people as apes and monkeys. Let's hope AI "infers" better than a discredited 19th-century phrenologist before we decide to turn it loose.
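
    Here is what I mean by "pulling a graph" being supervision, as a toy sketch (synthetic data, invented settings): a human reads the train/validation gap below and picks a model complexity, and that choice is itself a supervisory act.

    Code:
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic data standing in for whatever ranking-related features you collect
        X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

        # Sweep model complexity and watch the train/validation gap: a large gap
        # at high depth means overfitting, low scores everywhere means underfitting.
        for depth in (2, 4, 8, 16, None):
            tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
            print("max_depth=%s  train=%.2f  val=%.2f"
                  % (depth, tree.score(X_train, y_train), tree.score(X_val, y_val)))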
     
    • Thanks Thanks x 1
  13. mapg

    mapg Newbie

    Joined:
    Aug 13, 2017
    Messages:
    28
    Likes Received:
    6
    Gender:
    Male
    Well, I think AI will make tricking Google more difficult in every respect.

    I did SEO back when keyword stuffing still worked, and what I have seen over time is that Google keeps getting smarter as it puts more resources into fighting spammers and ranking hackers.

    AI can be used for content quality determination, both through natural language understanding by the machines and by watching visitor behaviour to see whether the content satisfies the visitor's search intent (a toy sketch of the behaviour idea is at the end of this post).

    It can also be used to uncover link schemes and link manipulation (it already does to some extent via the Penguin algorithm).

    And the natural thing for me to believe is that big G will keep getting better over time, making it harder and harder to trick, but I don't believe it will ever completely eliminate rank manipulation and spamming.
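
    For the behaviour part, the idea is roughly this toy sketch (the field names, thresholds and numbers are all invented, and this is nowhere near what Google actually runs):

    Code:
        from dataclasses import dataclass

        # If searchers keep bouncing back to the results page within seconds,
        # the page probably didn't satisfy the intent behind the query.
        @dataclass
        class Visit:
            query: str
            dwell_seconds: float
            returned_to_serp: bool

        def satisfaction_rate(visits, min_dwell=30.0):
            """Share of visits that look satisfied (stayed a while, no pogo-sticking)."""
            if not visits:
                return 0.0
            good = sum(1 for v in visits
                       if v.dwell_seconds >= min_dwell and not v.returned_to_serp)
            return good / len(visits)

        visits = [
            Visit("best running shoes", 4.0, True),
            Visit("best running shoes", 210.0, False),
            Visit("best running shoes", 6.5, True),
        ]
        print(satisfaction_rate(visits))  # only 1 of 3 visits looks satisfied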
     
    • Thanks Thanks x 1