I've been wanting to mass-create websites, blogs, and forums based on specific keywords, grabbing the content from all over the web. The biggest hurdle now, as I understand it, is the Google Panda update, which penalized exactly this kind of site. However, I'm wondering: would automatically regenerating/rewording the "stolen" content to make it unique beat Panda? Technically, the regenerated content would be unique, the only copy of its kind, even if it isn't semantically correct for a human reader. Or does the big G actually have some algorithm to detect non-semantic content, which would make content auto-regeneration useless?

For those still running autoblogs and grabbing content from the net, what's your take on this whole Panda issue, and how do you beat it? Thanks.
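To be concrete about what I mean by "regenerating/rewording": something like word-level spinning. Here's a minimal sketch (the synonym table is just a toy placeholder; real spinners would use a much larger thesaurus) of the kind of substitution I'm describing:

```python
import random

# Toy synonym table (hypothetical); a real spinner would load a large thesaurus.
SYNONYMS = {
    "big": ["large", "huge"],
    "fast": ["quick", "rapid"],
    "content": ["material", "text"],
}

def spin(text, seed=0):
    """Replace each word found in SYNONYMS with a randomly chosen synonym,
    leaving unknown words untouched. Output is 'unique' but may not read
    naturally to a human."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        key = word.lower()
        out.append(rng.choice(SYNONYMS[key]) if key in SYNONYMS else word)
    return " ".join(out)

print(spin("grab big content fast"))
```

My question is essentially whether output produced this way, which is statistically unnatural even though it's technically unique, is something Panda (or any other part of Google's ranking) can flag.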