How good is this linking strategy for a blog farm?

Do you honestly believe that you understand how dimensionality in systems works? Do you realize that you misused that phrase in your last sentence in an attempt to disparage someone with a functional understanding of the math you're talking about? Do you understand how complicated path traversal is to calculate? Do you know any algorithms to do it?

It would be great if you could guide me as to where I was wrong ....

I respect and adore you oldenstylehats, not because you are a Donor or an Executive VIP or have been thanked so many times previously. It is because you think for yourself, consider what the person has written, analyze the pros and cons of it, and then give your opinion. Most people lack this skill and will jump overboard to thank you and support you just because you have written a lengthy post, even if they do not understand any of it. They blindly thank others when they see that many others have thanked someone.

Or maybe I'm just a misguided Soul !! :eek:


Anyways.... Should stop wasting my time....I'm outta this thread ....
:reddy:
 
I apologize. I've written a lot of different responses to the posts in this thread over the last few days and decided not to actually post them until I can figure out a good way of explaining all of this without using misleading language. My response above was by far the most reactionary and mean-spirited. I shouldn't post when I'm grumpy. I also somehow missed your post with the multi-dimensional list and apologize for that.

That said, here is the problem with this method that you outline with this graph:
307303123935cb840771modrh1.jpg

If I'm reading it correctly (you should really fire up a visualization tool), while it is absolutely multi-dimensional, as you stated above, it does not exhibit anywhere near enough complexity to avoid even the simplest detection algorithms. It also assumes, potentially based on an over-simplification that is entirely my fault for introducing, that weight is evenly distributed at all nodes and that this distribution is temporally linear. It is not. In fact, the even distribution discussed in my initial post assumes "perfect conditions," which never occur in the wild; it was merely done for simple explication.

Also, weight is not distributed on these networks in such a way that a node that received a weight of .5 would then distribute a weight of 1.5 if it only had 2 links. This is another place where my initial explanation might have been misleading, but the fact of the matter is that across the majority of the larger network we will only rarely see node connections (edges) that distribute a weight greater than 1. Since PageRank relies on a similar algorithm, this is presumably why an overwhelming number of sites have a PageRank of only 1. It is also why, if we disregard quality, there is no direct correlation between the number of links and the ranking of a page (an old SEO myth). Even the original PageRank algorithm was far richer and more complex than that.
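To make the weight-distribution point a little more concrete, here is a toy power-iteration in Python. It is only a rough sketch of a PageRank-like calculation on a made-up four-node graph, not Google's actual algorithm, and the damping factor of 0.85 is simply the value from the original PageRank paper:

# Toy PageRank-style power iteration on a small, made-up directed graph.
# Each node splits its current weight evenly across its outbound links,
# and the damping factor keeps any single edge from ever passing along
# more than a fraction of its source node's weight.

graph = {            # node -> nodes it links to (entirely hypothetical)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def toy_pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, outlinks in graph.items():
            if not outlinks:
                continue
            share = damping * rank[node] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(toy_pagerank(graph))
# "C" collects the most weight, yet no edge ever transfers more than
# the damped share of its source's weight -- nothing gets multiplied.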

This is a separate visual perspective on the funnel system you created:
funnel-net.png


It requires 3 linear passes, from 9 -> 1, 6 -> 1, and 3 -> 1. Due to the linearity of the system, a good greedy network algorithm could identify this within 10 guesses (at most). Increasing the complexity by making each node on this graph representative of a larger network, potentially one of the many described throughout this thread, will decrease the potential for discovery considerably, especially if you use one like the network I outlined to obfuscate network bias.
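As a rough illustration of how cheap it is to flag that kind of linearity, here is a small Python sketch of my own (not any known detection system) that just looks for long runs of nodes with a single outbound link:

# Toy check for "single-file" chains in a directed graph: runs of nodes
# that each have exactly one outbound link. A purely linear funnel such
# as 9 -> 8 -> ... -> 1 stands out immediately.

edges = [(9, 8), (8, 7), (7, 6), (6, 5), (5, 4), (4, 3), (3, 2), (2, 1)]  # hypothetical funnel

out_links = {}
for src, dst in edges:
    out_links.setdefault(src, []).append(dst)
    out_links.setdefault(dst, [])

def chain_length(start):
    """Count how many single-exit hops can be made starting at `start`."""
    seen, node, hops = set(), start, 0
    while node not in seen and len(out_links[node]) == 1:
        seen.add(node)
        node = out_links[node][0]
        hops += 1
    return hops

suspicious = {n: chain_length(n) for n in out_links if chain_length(n) >= 3}
print(suspicious)   # {9: 8, 8: 7, 7: 6, 6: 5, 5: 4, 4: 3}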

Remember that even the discovery of a small greedy sub-network within the larger network would cause a massive, negative weight redistribution. This is why it is essential for each outward node sub-network to be reasonably complex. Though one can't intuit too much about how Google deals with greedy networks, if they deal with them like anyone else in academia, they only automatically search for a set number of "best guess" passes and shove "more probable" networks off into a separate system for further evaluation. Path discovery of bias in intentionally complex node networks based on Hamiltonian cycles can, even on the best equipment, take many, many days. It is much easier to develop them than it is to pick them out. Because these directed paths aren't the only paths in the networks, each pass requires the algorithm to do a considerably larger analysis of all other outward edges from a single node. It's easy to see when it's graphed, but much, much harder to posit from a data set.
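For a feel of why picking these paths back out is so much more expensive than building them, here is a textbook brute-force backtracking search for a Hamiltonian path, sketched in Python on an invented graph; nothing in this thread says Google actually does it this way:

# Brute-force backtracking search for a Hamiltonian path in a directed
# graph. In the worst case this explores on the order of n! orderings,
# which is exactly why tangled networks are cheap to build but
# expensive to unpick.

def hamiltonian_path(adj):
    nodes = list(adj)

    def extend(path, remaining):
        if not remaining:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt in remaining:
                found = extend(path + [nxt], remaining - {nxt})
                if found:
                    return found
        return None

    for start in nodes:
        found = extend([start], set(nodes) - {start})
        if found:
            return found
    return None

# Invented 5-node network with one deliberate path buried in extra edges.
adj = {"a": ["b", "c"], "b": ["c", "e"], "c": ["d"], "d": ["b", "e"], "e": []}
print(hamiltonian_path(adj))   # ['a', 'b', 'c', 'd', 'e']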

Furthermore, I did outline a multi-dimensional graph in the first post; I just didn't draw it out because I figured it was pretty self-explanatory. An example, using 2 separate sub-networks similar to the first one outlined:
linknet.png


This is not an ideal network, as you'd want each sub-network to have a variance in the number of nodes or (ideally and) in the nature of the path. Even if no specific bias could be culled out of the data set, a good algorithm can identify sub-system similarity in networks very easily, which is extremely unnatural for an "organic" network, especially repetitive similarity (site navigation within single sites aside). Please keep that in mind. I'm not going to outline a hundred different scenarios; it isn't in my best interest on so many levels, time being one.
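To make the similarity point concrete, here is one cheap signal, a toy of my own and not anything Google is known to run: compare the sorted (in-degree, out-degree) fingerprints of each sub-network. Two sub-networks stamped out from the same template match exactly:

# Toy similarity signal: two sub-networks with identical sorted
# (in-degree, out-degree) fingerprints look suspiciously like copies
# of each other, even before any deeper isomorphism test.

from collections import Counter

def degree_fingerprint(edges):
    indeg, outdeg = Counter(), Counter()
    nodes = set()
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
        nodes.update((src, dst))
    return tuple(sorted((indeg[n], outdeg[n]) for n in nodes))

# Two hypothetical sub-networks built from the same template.
subnet_a = [(1, 2), (2, 3), (3, 1), (3, 4)]
subnet_b = [(10, 20), (20, 30), (30, 10), (30, 40)]

print(degree_fingerprint(subnet_a) == degree_fingerprint(subnet_b))  # True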

Remember that the name of the game is not solely maximizing the amount of juice, but also maximizing the longevity of ANY amount of juice. We'd be dumb to assume that path analysis isn't occurring. Even if they slow their devaluing and de-indexing process to make it harder to get an idea of how they're doing the analysis, the process is still happening over the long term.
 
Good posts, oldenstylehats, you really know what you are talking about. I can imagine how complex this whole math thing is.

Looking forward to your blog post about this for some inspiration, when you finally get it out... I'm pretty sure you will ;)


Also, something for those who want to try something like this out:

Don't make all the blogs in one day. First make some blogs, then after a while post about 10 articles, and in the end start posting often and build your network.
And build some natural backlinks for each blog: forum posting, commenting, a little social bookmarking...
 
When he's discussing multidimensionality, he's talking about "lists of lists," though none of the graphical images that have been posted present this in any meaningful way. In my last post, I did, however, outline one way we could take the last graph that he posted and turn it into a "multidimensional list." From what I understand, it is more of a meta-concept laid upon directed graphs/lists than it is a theory with a foundation in the math itself. I could be wrong, though; I admittedly don't read as many journal articles as I did a few years ago.
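If it helps to picture the "lists of lists" idea, here is one way it could be written down in Python. The node names and sub-networks are purely illustrative, not anything posted in this thread:

# A "list of lists" reading of the multi-dimensional idea: each
# sub-network is its own adjacency list, and the farm is an outer list
# of those sub-networks plus a few cross-links between them.

subnet_one = {"a1": ["a2"], "a2": ["a3"], "a3": ["target"]}
subnet_two = {"b1": ["b2", "b3"], "b2": ["b3"], "b3": ["target"]}

farm = [subnet_one, subnet_two]       # the outer list
cross_links = [("a3", "b1")]          # optional meta-links between sub-networks

for subnet in farm:
    print(sorted(subnet))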
 
I've got a few questions: so "5" gets the most credit, then how much credit does "1" get?

Also, if I were going to create additional networks, would I start the network linking from node 5 or node 1?
 
farm.png

(the colored lines/circles between the dots indicate groups, not links)

GREEN GROUP
BLUE GROUP
RED GROUP
ORANGE MAINSITE

50% of all outbound links go inwards to a random blog in one of the inner groups or the main site.
25% of all outbound links stay within the group and go to a random blog.
25% of all outbound links go outwards to a random blog (the green group links out to third-party sites).

The closer you get to the main site, the more juice the links will have, as the majority of links go inwards.

The percentages were set without any deeper evaluation. Do you think they are OK, or do you have other proposals? 50%, 30%, 20%?
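A quick sketch of how that 50/25/25 split could be wired up in Python. The group sizes, node names, and third-party pool below are made up, and the percentages are just the ones from this post, as arbitrary in code as they are on paper:

import random

# Hypothetical layout: three rings plus the main site. Group sizes,
# node names, and the third-party pool are all invented.
groups = {
    "green": [f"green{i}" for i in range(8)],   # outermost ring
    "blue":  [f"blue{i}"  for i in range(6)],
    "red":   [f"red{i}"   for i in range(4)],
    "main":  ["mainsite"],
}
third_party = ["externalA.example", "externalB.example"]

inward_of  = {"green": ["blue", "red", "main"], "blue": ["red", "main"], "red": ["main"]}
outward_of = {"green": third_party,             # green links out to third-party sites
              "blue":  groups["green"],         # inner rings assumed to link one ring out
              "red":   groups["blue"]}

def pick_target(blog, group):
    """Pick one outbound link target using the 50/25/25 split from the post."""
    roll = random.random()
    if roll < 0.50:                              # 50% inwards
        return random.choice(groups[random.choice(inward_of[group])])
    if roll < 0.75:                              # 25% within the same group
        return random.choice([b for b in groups[group] if b != blog])
    return random.choice(outward_of[group])      # 25% outwards

print([pick_target("green0", "green") for _ in range(5)])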
 
oldenstylehats, do you think this WordPress linking is good? There is a plugin that builds this... I can't see anything bad in it.
 

Attachment: silo-300x260.jpg
Very interesting thread!

May I dare to ask what purpose a blog farm really serves? What is the difference compared to direct BH link building with tools like XRumer / Autopligg? What is the exact benefit? It takes time to build up link power, so the outcome must be extraordinary, right?
 

Which arrows are we following here in oldenstylehats' method, the green or the black? Or both?
Thanks
 
There are a million ways you could make this work!

I don't think there is any golden way of doing this, seeing how heavily it's based in math, unless you're a math god.
 

Spill the beans if there are a million ways to make this work.
 
Erm... are all of you mathematicians or what? For me, I'm not... I'll try the linking methods you're all suggesting.
 


Sorry for bringing this thread up again - but it's a damn good read.

Two things:

1) Is there any software that can generate Hamiltonian paths?

2) Would it matter if the two Hamiltonian paths were crossed randomly, e.g. 7 was replaced with 13?
 
I'm just determining alternatives. Anyway, read graph theory. What good ol' oldenstylehats proposed is also a single-dimensional link list that is 'more' easily traversed... while the second one is a multidimensional list... :p


Both posts are really good. I was wondering exactly how the blog farm worked.
 
Sorry to bump an old topic. I've been looking for a good linking strategy for my blog farms.


Using the network below, for every 6 blogs do I get 1 link to my target site?
1 blog farm = 6 blogs/nodes, with one node linking to the target site.



linknet.png





So, for example, if I wanted 5 links to a target site, I'd need to create 5 of these blog farms?
 
Has anyone got close to writing an application that would mimic the link diagram oldenstylehats has presented in this thread and build the sites? It seems to me it would be far too complex and very expensive.
 
How does one come up with the linking structure between nodes?

For example, in the above diagram I noticed each node except the target site has two links going to other nodes.

Is this just a random placement of these 2 links, after which you run the Hamiltonian path through the network, adding additional links where needed and ultimately exiting the network at the target site?

I noticed some links are reciprocal; I take it this is not an issue.

So would it be these nodes/pages on authority domains that I point all my blackhat backlinks at, or should I focus all my BH links on the node with the link going to the target site?
 
Network bias is simply weighted preference distributed across the network. In this scenario, we want the balance to be "tipped" (so to speak) in a single site's direction. To do this, we must find whether our schema has a deliberate path wherein all nodes are touched once and only once. This path is called a Hamiltonian path.

Luckily for us, in this example it does:
as.png

Follow the red arrows from Node 1 to Node 4 to Node 7 to Node 2 to Node 3 to Node 6 and finally to Node 5. Specific preference has been established in a relatively non-regular network. If all sites were weighted evenly and their juice distributed evenly, Node 5 would have more juice to give than any other node on the network.

In terms of detection, as the number of nodes increases, so does our ability to obscure this directionality. One method of obscuring unnatural bias is to mimic actual networks. Not too surprisingly, bias shows up in real networks as well, but generally towards single nodes from many different networks. Imagine an even larger graph than the example: a graph where Node 5 is linked to by 3 or 4 other networks similar to the one in our example. If it were linked to by 3 other networks with only two links each (as is the case in our example), a site with only 6 inbound links would carry considerably more weight as a node than any other node on its whole network.

I'm not going to lay it all out here, because this is not a simple concept and I don't have the time or the presence of mind to break it down completely. However, it is absolutely possible to develop graphs of networks like these without knowing a lick of graph theory.
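For anyone who wants to poke at the quoted example, here is a quick Python sanity check. Only the red-arrow edges are modeled, because the black edges in the image aren't listed in the post, and the weight calculation is the same toy power-iteration idea as earlier in the thread, not Google's actual algorithm:

# Check the deliberate route 1 -> 4 -> 7 -> 2 -> 3 -> 6 -> 5: confirm it
# touches every node exactly once, then run a toy power-iteration over
# just those red-arrow edges to see that Node 5 ends up with the most
# weight to give.

path = [1, 4, 7, 2, 3, 6, 5]
edges = list(zip(path, path[1:]))                   # the red arrows only
nodes = sorted(set(path))
print(len(path) == len(set(path)) == len(nodes))    # True: every node touched once

outlinks = {n: [b for a, b in edges if a == n] for n in nodes}
rank = {n: 1.0 / len(nodes) for n in nodes}
damping = 0.85
for _ in range(50):
    dangling = sum(rank[n] for n in nodes if not outlinks[n])   # Node 5 has no outlink here
    new = {n: (1 - damping) / len(nodes) + damping * dangling / len(nodes) for n in nodes}
    for n in nodes:
        for target in outlinks[n]:
            new[target] += damping * rank[n] / len(outlinks[n])
    rank = new

print(max(rank, key=rank.get))                      # 5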

Hiya.

Like most people I'm not really following you, but let me try anyway. Hope my head doesn't explode.

My guess at how to do what you're saying:
1. Create a path or route where every node is touched once. (In RED, above)
2. Then you add more links randomly, without creating another "route" where every node is touched once. (Links in black; there's a rough sketch of this just below.)
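A brute-force sketch of that two-step guess in Python, purely illustrative: it generates a random route, then keeps adding random extra links only while the deliberate route stays the one and only Hamiltonian path. The re-check enumerates every permutation, so it only scales to toy sizes:

import random
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    """All orderings of the nodes whose consecutive pairs are edges (brute force)."""
    edge_set = set(edges)
    return [p for p in permutations(nodes)
            if all((a, b) in edge_set for a, b in zip(p, p[1:]))]

nodes = list(range(1, 8))
random.shuffle(nodes)                                # step 1: pick the deliberate route
edges = list(zip(nodes, nodes[1:]))

candidates = [(a, b) for a in nodes for b in nodes
              if a != b and (a, b) not in edges]
random.shuffle(candidates)
for extra in candidates[:15]:                        # step 2: sprinkle in noise links...
    if len(hamiltonian_paths(nodes, edges + [extra])) == 1:
        edges.append(extra)                          # ...but only if the route stays unique

print("route:", nodes)
print("edges:", edges)

In practice you'd want something smarter than a full permutation check, but it shows the shape of the two-step idea.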

Is that about right?

And the problem I have with the above diagram is how much reciprocal linking is going on. I'm sure that with more nodes you'd be able to do something similar without reciprocal links.

Of course, if we knew that reciprocal links were simply discounted without dissipating link juice, then we'd actually have a kickass method to focus link juice while hiding the true path.

But if reciprocal links merely dissipate link juice without increasing what's sent through the other links on the same site, we're screwed unless we avoid reciprocal linking.

Seems to me that Google should note a lack of reciprocal linking as suspicious though. It would be a good footprint for folks trying to manipulate link juice.
 