You're looking at this situation from the perspective of graph theory. That's a very good first step. As you've already mentioned, in your example the linking pattern is highly regular and the number of node-to-node connections is homogeneous. Regardless of what specific technology Google uses to identify *greedy communities*, one can approach this issue with detection avoidance in mind. Specific methodology is hard to discuss in *real terms* without getting technical and whipping out Mathematica, but, as I think you've discovered for yourself, graphs can be illuminating.

For example:

Imagine that each arrow on this 7-node network represents a link between two different sites. As you can plainly see, there are a variety of linking relationships between the seven nodes, but none that appear regularly. If we imagine that this network represents a network of "fresh blogs," the total possible linking weight of each blog is equal to 1. We assume that the weight of a single node (1.0) must be evenly distributed across all of its outbound links. For example, **Node 1** has three outbound links, so each of those links carries roughly 0.333 of **Node 1**'s total weight. **Node 2**'s links each carry 0.5, **Node 3**'s 0.5, and so on.
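Since the figure itself isn't reproduced here, a quick sketch in Python shows how those per-link weights fall out. The edge list below is a hypothetical stand-in for the diagram (it matches the out-degrees described above, but the exact arrows are my assumption):

```python
from collections import defaultdict

# Hypothetical edge list standing in for the 7-node figure.
edges = [
    (1, 4), (1, 2), (1, 3),   # Node 1: three outbound links
    (2, 3), (2, 5),           # Node 2: two outbound links
    (3, 6), (3, 7),           # Node 3: two outbound links
    (4, 7),                   # single outbound links
    (6, 5),
    (7, 2),
]

outbound = defaultdict(list)
for src, dst in edges:
    outbound[src].append(dst)

# Each node's weight of 1.0 is split evenly across its outbound links,
# so a single link carries 1.0 / out-degree of its source.
link_weight = {
    (src, dst): 1.0 / len(dsts)
    for src, dsts in outbound.items()
    for dst in dsts
}

print(link_weight[(1, 4)])  # roughly 0.333
print(link_weight[(2, 3)])  # 0.5
```

Nothing fancy, but it makes the "weight of 1.0 spread over out-degree" rule concrete.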

We must ask ourselves a single question: does this network exhibit bias? Or, in linking terms, does one site get more juice than the others?

Network bias is simply weighted preference distributed across the network. In this scenario, we want the balance to be "tipped" (so to speak) in a single site's direction. To do that, we must determine whether our schema contains a deliberate path in which every node is *touched* once and only once. Such a path is called a Hamiltonian path.
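For a graph this small, you can even check for a Hamiltonian path by brute force. A minimal sketch (the `hamiltonian_path` helper is mine, not a standard library function; brute force is only feasible for tiny graphs like this one):

```python
from itertools import permutations

def hamiltonian_path(nodes, edges):
    """Try every ordering of the nodes; return the first ordering in which
    consecutive nodes are joined by a directed edge, or None if none exists.
    Brute force: only practical for small graphs like this 7-node example."""
    edge_set = set(edges)
    for order in permutations(nodes):
        if all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return list(order)
    return None

# The red-arrow edges from the example: 1 -> 4 -> 7 -> 2 -> 3 -> 6 -> 5.
path_edges = [(1, 4), (4, 7), (7, 2), (2, 3), (3, 6), (6, 5)]
print(hamiltonian_path(range(1, 8), path_edges))
# [1, 4, 7, 2, 3, 6, 5]
```

For real networks of any size you'd want adjacency-matrix representations and smarter search, since brute force blows up factorially.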

Luckily for us, in this example it does: follow the red arrows from **Node 1** to **Node 4** to **Node 7** to **Node 2** to **Node 3** to **Node 6** and finally to **Node 5**. Specific preference has been established in a *relatively* non-regular network. If all sites are weighted evenly and their juice is distributed evenly, **Node 5** would have more juice to give than any other node on the network.
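You can see the juice pile up at the end of the chain with a deliberately simplified cascade. This is not how PageRank actually works (no damping, no iteration, and weight here is duplicated rather than conserved); it's just an illustration of the directional bias:

```python
# The red-arrow Hamiltonian path from the example.
path = [1, 4, 7, 2, 3, 6, 5]

# Every node starts with a weight of 1.0.
juice = {n: 1.0 for n in path}

# Pass each node's weight down the chain in path order, so that
# upstream juice compounds as it flows toward the terminal node.
for src, dst in zip(path, path[1:]):
    juice[dst] += juice[src]

print(juice[5])  # 7.0 -- the terminal node has accumulated the whole chain
```

The terminal node of the path ends up the heaviest node in the network, which is exactly the "tipped balance" described above.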

In terms of detection, as the number of nodes increases, so does our ability to obscure this directionality. One method of obscuring unnatural bias is to mimic actual networks. Not too surprisingly, real networks exhibit bias as well, but generally towards single nodes from many *different* networks. Imagine an even larger graph than the example, one where **Node 5** is linked to by 3 or 4 other networks similar to the one above. If it were linked to by 3 other networks with only two links each (as is the case in our example), a site with only 6 inbound links would carry considerably more weight as a node than any other node on its whole network.
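The arithmetic behind that claim can be sketched in a couple of lines. The numbers are assumptions for illustration: suppose each inbound link comes from a node with a single outbound link, so it carries that node's full weight of 1.0:

```python
# Hypothetical comparison: a hub linked to by two links from each of three
# separate networks, vs. an ordinary node with two in-network links.
# Assume each inbound link carries a full node weight of 1.0 (the strongest
# case: each linking node has exactly one outbound link).
link_weight = 1.0

hub_inbound = 3 * 2 * link_weight     # 6 inbound links across 3 networks
typical_inbound = 2 * link_weight     # 2 inbound links inside one network

print(hub_inbound / typical_inbound)  # 3.0 -- triple the raw inbound weight
```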

I'm not going to lay it all out here, because this is not a simple concept and I have neither the time nor the presence of mind to break it down completely. However, it is absolutely possible to develop graphs of networks like these without knowing a lick of graph theory. If you're interested in exploring these ideas further, look into Hamiltonian paths, adjacency matrices, and the work of Dr. Robert Tarjan.

Good luck!

*Note: I'll try and put something a little more comprehensive and clear on our blog in the next few weeks.*