I've been doing a lot of thinking about this lately, and I believe I have a pretty solid hypothesis on what the Google Dance and Sandbox actually are and what their purpose is. Think of this thread as a place to agree or disagree with what I'm about to say... just make your opinion heard by saying something. That's the best way for ideas and theories to progress (well, testing actually is, but you get the idea).

Every search engine out there has an algorithm. Simply put, an algorithm takes in the variables it is programmed to process and spits out a result: Data In = Data Out.

I believe there are two things that can cause the downfall of any search engine, and Google has figured out a solution to both of them (there are more than two, but I'm sticking to these two to keep this post short). They are: (1) a static algorithm that doesn't change, or doesn't change often enough, and (2) results returned to the user based solely on what I'll refer to as "hard data". By hard data I mean data that can only be interpreted one way, no matter who looks at it. In other words, if I ask "what's 1+1?", the only possible answer, no matter who answers, is 2.

The first problem (the static algorithm) can cause the downfall of a search engine because after a while it will be figured out. Once it's figured out, it can be manipulated, and as soon as the SERPs can be manipulated, they WILL be manipulated. The user experience goes to crap and... POOF, nobody uses that search engine any more. Google obviously updates its algorithm on a regular, if not constant, basis, so the first problem is taken care of.

Now I think Google has evolved to a whole new level by taking care of the second problem: only returning results based on "hard data", the data fed into the algorithm. A simple example: if site X has X amount of backlinks, it gets X position. That is an algorithm based purely on "hard data". There is nothing to interpret.
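To make the "hard data" idea concrete, here's a tiny toy sketch in Python. The weights and inputs are completely made up by me for illustration; this isn't anything Google actually uses. The point is just that a purely hard-data algorithm is deterministic, so once you know the weights, you can game it:

```python
# A toy "hard data" ranking function: the same inputs always produce
# the same output, so once the weights are known it can be manipulated.
def hard_data_score(backlinks, keyword_matches):
    # Made-up weights, purely for illustration.
    return backlinks * 10 + keyword_matches * 2

def rank(sites):
    # sites maps a site name to (backlinks, keyword_matches).
    return sorted(sites, key=lambda s: hard_data_score(*sites[s]), reverse=True)

sites = {
    "site_a": (50, 3),    # 50*10 + 3*2   = 506 points
    "site_b": (40, 100),  # 40*10 + 100*2 = 600 points
}
print(rank(sites))  # site_b wins every single run: predictable, hence gameable
```

Run it twice, run it a thousand times: the order never changes, and nothing a real user does has any influence on it. That's the weakness I'm talking about.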
You'll always get the same answer. I think Google has done something brilliant: I believe they have added what I'll now call "soft data" to their algorithm. What is soft data? Data that changes, data that can be interpreted more than one way, data that "flows" in one direction or another.

Let me explain what I'm saying a little better, in a way any internet marketer will understand: I think Google is split testing! They're deciding how they rank their pages based on how users browse their results. But I think it goes much deeper than that, and I think there is a distinct difference between the purpose of the Sandbox and the Dance.

Google Dance: A site owner typically sees their site fluctuate a couple of positions on Google. Maybe it's jumping between position 1 or 2 and position 7 or 8. Maybe it's jumping between page 3 or 4 and page 1 or 2. Why is this happening? These changes can sometimes happen repeatedly in less than 12 hours. Nothing THAT drastic can change in less than 12 hours, and no algorithm change could cause that much movement over and over and over again.

What I think Google is doing is letting their users determine your search position! Obviously not entirely. Their "hard data" still gives everything in Google its initial blueprint, but they leave the rest up to the users. This is normally referred to as "crowdsourcing", although with crowdsourcing the "crowd" usually knows up front that they are part of the process (like when Mountain Dew releases 4 different flavors of soda, announces that after 3 months they will discontinue all but 1, and asks you to vote for your favorite on their website; the flavor with the most votes stays and the rest are out).

So, Google puts your site in one position, then replaces it with somebody else's site.
They keep track of the clickthrough rate while your site was in X position and while the other site was in that same position. After X amount of time, that "soft data" provided to Google by the users of its search engine (the clickthrough rates) is turned into variable Q, and variable Q is fed into the algorithm as a newly formed bit of "hard data" (along with the rest of the actual "hard data").

I think there is a lot more to it than just clickthrough rate, though. I think it's also time spent on site after the clickthrough, how many pages were viewed, which pages were viewed, and where users go after they leave your site (if they hit the back button and show back up at Google's SERPs, that is).

Think about it. What happens when a user lands on an MFA site? More importantly, a REALLY poorly set up MFA site, one designed to get the user to leave the page as soon as possible via one of its AdSense ads? That user leaves! And where do these sites typically end up? The Google trash bin.

You see, you simply can't fake the kind of data that a real, quality site generates. You can only get more people to click your link in Google with an appropriate, fitting title. Once they get to your site, you can only keep them there with real, genuine content. And you can only get them to come back (instead of hitting the back button for somebody else's result) with content that makes them want to come back.

I believe Google has added some, all, or more of these bits of "soft data" to their algorithm. That, in a nutshell, is what I think the Google Dance is for. They are split testing / crowdsourcing as part of how they build their SERPs. THIS COULD NEVER BE GAMED! (I mean, it could, but it would be very, very, very difficult.) It would also be constantly changing: what the "crowd" deems most relevant one week could easily be dumped as ridiculous the next.
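Here's a rough Python sketch of the kind of split test I'm describing. Everything here is my own guess: the name "variable Q" is just my label from above, and the idea of computing it as a plain clickthrough ratio is my simplification, not Google's actual mechanics:

```python
# Toy split test: rotate two sites through the same SERP position,
# record impressions and clicks for each, then turn the observed CTR
# into a new "hard" variable (my hypothetical "variable Q").
from collections import defaultdict

stats = defaultdict(lambda: {"impressions": 0, "clicks": 0})

def record(site, clicked):
    stats[site]["impressions"] += 1
    if clicked:
        stats[site]["clicks"] += 1

def variable_q(site):
    # CTR observed while the site held the test position.
    s = stats[site]
    return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

# Simulated observations while each site held, say, position 3:
for clicked in [True, True, False, True]:    # 3 clicks / 4 impressions
    record("site_a", clicked)
for clicked in [True, False, False, False]:  # 1 click / 4 impressions
    record("site_b", clicked)

print(variable_q("site_a"), variable_q("site_b"))  # 0.75 0.25
```

In the real world you'd expect more signals in there (dwell time, pages per visit, pogo-sticking back to the SERPs), but CTR is the simplest one to picture.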
Google Sandbox: This is typically a much more drastic change than the aforementioned "dance". What a site owner normally experiences here is what appears to be a complete disappearance from the SERPs. However, upon further inspection, they find that they aren't actually de-indexed, just moved from their former first-page position to page 50 (or wherever... just far enough back that nobody will ever see it). You'll even find that Google is still indexing your newly created pages and keeping track of your newly created backlinks. Other than your new (and horrible) SERP position, everything else is business as usual.

Why? Why would Google suddenly throw you to the dogs, but still continue to do all the work of crawling, indexing, and checking all those variables? Again, I think Google is using crowdsourced data to determine your position!

For the Dance, that made sense: you were in the SERPs where lots of people would be clicking your links, so Google could easily run a split test. But you might be wondering what crowdsourced data they could possibly be using if they have removed you from the SERPs (or put you somewhere that may as well be considered removed). Well, if your page was popular enough and had enough backlinks to get on the first or second page of Google in the first place, then surely your site must be able to survive for a couple of weeks without direct Google traffic. I mean, if Google removed Amazon.com from their SERPs, would people stop visiting it? Would people stop linking to it? Would people stop talking about it? Of course not!

I believe the Sandbox is Google's way of telling whether your site earned its front-page statistics legitimately, or whether they were manufactured.
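If this guess is right, the sandbox check might conceptually look something like the sketch below. To be clear, this is pure speculation on my part: the signal names, the weekly counts, and the 30% "keep ratio" threshold are all numbers I invented to illustrate the idea:

```python
# Hypothetical sandbox test: after hiding a site from the SERPs, do its
# external signals (backlinks, mentions, direct traffic) keep flowing,
# or do they collapse to nothing because the owner was generating them?
def looks_legitimate(before, after, keep_ratio=0.3):
    # before/after: weekly signal counts, pre- and post-sandbox.
    # keep_ratio is an arbitrary threshold picked for illustration.
    for signal, pre in before.items():
        if pre and after.get(signal, 0) / pre < keep_ratio:
            return False  # this signal stopped dead: probably self-generated
    return True

# A site with a real audience drops, but doesn't flatline:
organic       = {"backlinks": 120, "mentions": 40, "direct_visits": 900}
organic_after = {"backlinks": 80,  "mentions": 25, "direct_visits": 700}

# A gamed site loses everything the moment its owner gives up:
gamed       = {"backlinks": 120, "mentions": 40, "direct_visits": 900}
gamed_after = {"backlinks": 2,   "mentions": 0,  "direct_visits": 15}

print(looks_legitimate(organic, organic_after))  # True
print(looks_legitimate(gamed, gamed_after))      # False
```

The exact threshold doesn't matter for the argument; what matters is the shape of the curve after removal: a dip versus a cliff.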
You see, if they were illegitimate (i.e. you were the one doing all the work to make it look like a site that everybody "talks" about), then when you are removed from the SERPs, everything should theoretically come to a halt. As a site owner, you're not going to keep promoting a site that appears to be de-indexed. So no Google, no backlinking, no talking about your site, no Facebook links, no nothing. Most importantly, no traffic. Everything stops. Since you made the site with the intention of gaming Google in the first place, your only REAL source of traffic is going to be Google. So when you get sandboxed, I believe the absolute BEST thing you can do is make absolutely sure you don't stop doing what you were doing before you were sandboxed. What's tough to keep going is the traffic.

However, if you got to the first page through legitimate means, Google SHOULD be able to remove your site from the SERPs (or move it to page 5000) and still see activity around it. They should still see backlinks, people talking, Facebook comments, etc. Granted, there will obviously be a drop in stats, but everything SHOULDN'T come to a halt. Again, imagine Google removed YouTube from their SERPs. Think everybody would just stop talking about YouTube? Granted, that's an exaggerated example, but I believe the basic premise still applies.

In the end, I think Google has done something every other company in the world has been doing for a while: crowdsourcing (at least that's what I'd call it in its most basic form). They just figured out a way to do it virtually. It makes perfect sense. You let the crowd choose what's best, and the crowd is always happy. You also can't "game" the crowd. To sum this up in the simplest way I can put it, it's this:
Imagine Google turning its SERPs into a giant version of Digg (or any social bookmarking site), except the data fed into it isn't a user clicking an "I like this" button, but a user metaphorically saying "I like this site" by doing what any user does when they actually like a site: USING IT. The business model works. Look at Digg, YouTube, Facebook, Twitter, Wikipedia... it simply works. What's most relevant is what the users SAY is most relevant. However, I think Google is working its way towards making users tell them what's most relevant without actually saying anything (because again, explicit voting is gameable). I think there will be BIG changes in how you rank in the SERPs in the future, and I think that change is going to come through crowdsourcing.

EDIT: After some conversation ensued below with some other BHW members, I decided to do a little research. As it turns out, Microsoft has already filed a patent on what they are calling (it's the patent's name): SYSTEM AND METHOD FOR SPAM IDENTIFICATION, portions of which are very similar to some of my opinions above. This is simply a coincidence, as I had no idea it existed prior to writing everything above. You can view the entire patent Microsoft filed here: http://appft.uspto.gov/netacgi/nph-P...DN/20100100564

One very interesting part of it (out of the many I could quote) says that a site's spam rating will decrease (and therefore its ranking will increase) if "many users" visit the individual result. That's pretty much exactly one of the things mentioned above. If you don't mind spending half an hour reading through some technical stuff, you'll find that patent very interesting. It holds some insightful information on the direction search engines are headed.
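That patent idea, a spam rating that drops as real users visit the result, could be sketched like this. This is my own toy interpretation with a made-up exponential decay, not the patent's actual formula:

```python
# Toy version of the patent's idea: each genuine user visit lowers a
# result's spam rating, so heavily visited results rank higher.
def spam_rating(base_rating, visits, decay=0.9):
    # Made-up exponential decay: every visit multiplies the rating by 0.9.
    return base_rating * (decay ** visits)

print(spam_rating(100, 0))   # 100.0 -> untested result keeps its full rating
print(spam_rating(100, 10))  # ~34.9 -> "many users" visited it, rating drops
```

Again, the specific curve is arbitrary; the direction is the point. A result that real people keep choosing looks less and less like spam over time, which is exactly the "soft data" loop described above.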