bubbl.us/view.php?sid=300255&pw=yakFceWqarfPQMTZGMEc3SS5DQ3pyRQ
bubbl.us/view.php?sid=300256&pw=yakFceWqarfPQMTZISDJqQ0FNcHM4TQ
I apologize. I've written a lot of different responses to the posts in this thread over the last few days and decided not to actually post them until I can figure out a good way of explaining all of this without using misleading language. My response above was by far the most reactionary and mean-spirited. I shouldn't post when I'm grumpy. I also somehow missed your post with the multi-dimensional list and apologize for that.
That said, here is the problem with the method you outline in this graph:
If I'm reading it correctly (you should really fire up a visualization tool), while it is absolutely multi-dimensional, as you stated above, it does not exhibit anywhere near enough complexity to avoid even the simplest detection algorithms. It also assumes, potentially based on an over-simplification that is entirely my fault for introducing, that weight is evenly distributed at all nodes and that this distribution is temporally linear. It is not. In fact, the even distribution discussed in my initial post assumes "perfect conditions," which never occur in the wild; it was done purely for simple explication. Also, weight is not distributed on these networks in such a way that a node that received a .5 weight would then distribute a weight of 1.5 if it only had 2 links. This is another place where my initial explanation might have been misleading, but the fact of the matter is that across the majority of the large network, only rarely will we see node connections (edges) that distribute weight greater than 1. Since PageRank relies on a similar algorithm, this is presumably why an overwhelming number of sites have a PageRank of only 1. It is also why, if we disregard quality, there is no direct correlation between the number of links and the ranking of a page (an old SEO myth). Even the original PageRank algorithm was far richer and more complex than that.
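To make that weight-distribution point a little more concrete, here is a minimal sketch of a PageRank-style iteration on a toy adjacency list. The graph, the damping factor, and the iteration count are my own assumptions for illustration only; this is not Google's actual algorithm, just the textbook idea of rank being split across outbound links.

```python
# A minimal sketch (not Google's actual implementation) of how rank/weight
# spreads over outbound links in a PageRank-style iteration.
DAMPING = 0.85  # assumed damping factor, standard in the textbook formulation

# hypothetical adjacency list: page -> pages it links out to
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],  # D links out, but nothing links back to it
}

rank = {page: 1.0 / len(graph) for page in graph}

for _ in range(50):  # iterate until the values settle
    new_rank = {page: (1 - DAMPING) / len(graph) for page in graph}
    for page, outlinks in graph.items():
        if not outlinks:
            continue
        share = rank[page] / len(outlinks)  # weight is split across outlinks
        for target in outlinks:
            new_rank[target] += DAMPING * share
    rank = new_rank

print(rank)  # poorly-linked nodes end up with small values; only well-linked ones accumulate weight
```

The point of the toy is simply that a node almost never hands any single neighbor more weight than it holds itself, which is why edges distributing weight greater than 1 are rare on the larger network.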
This is a separate visual perspective of the funnel system you created:
It requires 3 linear passes: 9 -> 1, 6 -> 1, and 3 -> 1. Due to the linearity of the system, a good greedy network algorithm could identify this within 10 guesses (at most). Increasing the complexity by making each node on this graph representative of a large network, potentially one of many described throughout this thread, will decrease the potential for discovery considerably, especially if you use one like the one I outlined to obfuscate network bias.
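To show why a strictly linear funnel falls so quickly, here is a rough sketch of a greedy walk that simply follows the heaviest outbound edge from each node. The node names and edge weights are hypothetical stand-ins (the real topology is in the graph linked above); the only point is that a chain collapses to its sink in a handful of steps.

```python
# Hypothetical weighted funnel: each node pushes most of its weight down the chain.
# A greedy walk that always follows the heaviest outbound edge reaches the
# final juice point in at most chain-length steps -- no real search required.
edges = {
    9: {8: 0.9, 2: 0.1},
    8: {7: 0.9, 5: 0.1},
    7: {1: 0.9, 3: 0.1},
    6: {5: 0.9, 9: 0.1},
    5: {4: 0.9, 8: 0.1},
    4: {1: 0.9, 2: 0.1},
    3: {2: 0.9, 6: 0.1},
    2: {1: 0.9, 7: 0.1},
    1: {},  # the final juice point
}

def greedy_sink(start, graph):
    node, steps = start, 0
    while graph.get(node):
        # always follow the heaviest edge out of the current node
        node = max(graph[node], key=graph[node].get)
        steps += 1
    return node, steps

for start in (9, 6, 3):
    print(start, "->", greedy_sink(start, edges))  # every pass lands on node 1 within a few hops
```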
Remember that even the discovery of a small greedy sub-network within the larger network would cause a massive, negative weight redistribution. This is why it is essential for each outward node sub-network to be reasonably complex. Though one can't intuit too much about how Google deals with greedy networks, if they deal with them like anyone else in academia, they only automatically search for a set number of "best guess" passes and shove "more probable" networks off into a separate system for further evaluation. Path discovery of bias in intentionally complex node networks based on Hamiltonian cycles can, even on the best equipment, take many, many days. It is much easier to develop them than it is to pick them out. Because these directed paths aren't the only paths in the networks, each pass requires the algorithm to do a considerably larger analysis of all other outward edges from a single node. It's easy to see when it's graphed, but much, much harder to posit from a data set.
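For a feel of that asymmetry between building such a path and picking it out, here is a rough sketch (my own toy, not whatever Google actually runs) that brute-forces Hamiltonian-style directed paths by testing node orderings. The graph is hypothetical; the takeaway is that candidate orderings grow factorially with node count, while writing the path down in the first place costs essentially nothing.

```python
from itertools import permutations

# Hypothetical directed network: the intentional path is buried among other edges.
edges = {
    "a": {"b", "d"}, "b": {"c", "e"}, "c": {"d", "a"},
    "d": {"e", "b"}, "e": {"a", "c"},
}

def find_hamiltonian_paths(graph):
    """Brute-force discovery: check every ordering of the nodes."""
    nodes = list(graph)
    found = []
    for order in permutations(nodes):            # n! candidate orderings
        if all(order[i + 1] in graph[order[i]]   # every consecutive hop must be a real edge
               for i in range(len(order) - 1)):
            found.append(order)
    return found

print(len(list(permutations(edges))), "orderings to check for just", len(edges), "nodes")
print(find_hamiltonian_paths(edges)[:3])
```

At 5 nodes that is 120 orderings; at 20 nodes it is already on the order of 10^18, which is the scale argument behind keeping each sub-network reasonably complex.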
Furthermore, I did outline a multi-dimensional graph in the first post; I just didn't create a graph for it because I figured it was pretty self-explanatory. An example, using 2 separate sub-networks similar to the first one outlined:
This is not an ideal network, as you'd want each sub-network to have variance in the number of nodes or (ideally, and) in the nature of the path. Even if any specific bias couldn't be culled out of the data set, a good algorithm can identify sub-system similarity in networks very easily, which is extremely unnatural for an "organic" network, especially repetitive similarity (site navigation of single sites aside). Please keep that in mind. I'm not going to outline a hundred different scenarios; it isn't in my best interest on so many levels, time being one.
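On the sub-system similarity point, even something as crude as comparing structural fingerprints is enough to flag two sub-networks stamped out of the same template. The heuristic below (sorted in/out-degree pairs) and the two toy sub-networks are my own illustration, not a claim about how Google actually does it.

```python
from collections import Counter

def fingerprint(graph):
    """Crude structural fingerprint: sorted (out-degree, in-degree) pairs per node."""
    in_deg = Counter(t for targets in graph.values() for t in targets)
    return sorted((len(targets), in_deg.get(node, 0)) for node, targets in graph.items())

# Two hypothetical sub-networks built from the same template (only the names differ).
subnet_1 = {"a": ["b", "c"], "b": ["c"], "c": ["x"], "x": []}
subnet_2 = {"p": ["q", "r"], "q": ["r"], "r": ["x"], "x": []}

print(fingerprint(subnet_1) == fingerprint(subnet_2))  # True -> suspiciously repetitive structure
```

Varying node counts and path shapes between sub-networks is exactly what keeps fingerprints like this from lining up.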
Remember that the name of the game is not solely maximizing the amount of juice, but also maximizing the longevity of ANY amount of juice. We'd be dumb to assume that path analysis isn't occurring. Even if they slow their devaluing and de-indexing process to make it harder to get an idea of how they're doing the analysis, the process is still happening over the long term.
To discover your final destination juice point, all Google has to do is follow the linked list to the end.
In olden's example he has an irregular graph containing several websites. The aim in creating your initial graph is to make it look as natural as possible. Once you've created a graph/blog farm that contains links that are as natural-looking as possible...
...the next question is: "How do I direct the link juice somewhere (we'll call it site X) without Google knowing that I have this juice generator?"
Solutions to NP-complete problems are easy to VERIFY but hard to DISCOVER. This means we can create them easily, but Google can't find them easily.
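In code terms: verifying a proposed cycle is a single linear pass, while discovering one (absent structure to exploit) falls back to the factorial search sketched earlier in the thread. The graph and cycle below are assumptions for illustration.

```python
def verify_cycle(graph, cycle):
    """Verification is cheap: one pass over the proposed cycle."""
    return (
        sorted(cycle) == sorted(graph)                       # visits every node exactly once
        and all(cycle[(i + 1) % len(cycle)] in graph[cycle[i]]
                for i in range(len(cycle)))                  # every hop is a real edge
    )

graph = {"a": {"b", "d"}, "b": {"c"}, "c": {"d", "a"}, "d": {"a", "b"}}
print(verify_cycle(graph, ["a", "b", "c", "d"]))  # True: we built the path, so handing over the proof is trivial
```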
This is why olden's method of creating a blog farm is the best option.