I am developing a new bot that does article directory submission across a variety of article directory platforms, and I keep running into a common issue: when an article gets approved, it sometimes goes unnoticed by the spiders and never shows up cached. Keep in mind that all articles are synonymized and are never 100% alike, so duplicate content shouldn't be the problem. I've thought of a few ways to solve this, but each has its drawbacks.

The first possible solution would be to linkline (or possibly linkwheel) the articles and bookmark the beginning of the line. The problem is that I don't know the URL of each article until the moderators approve it (sometimes I can guess the future URL, but not always), so if a guessed URL turns out to be wrong, I have to go back, edit the article, and correct the link. With 1000 article directories, that's too painful.

The second possible solution would be to bookmark (Digg, Jumptags, etc.) each approved article. This may work, but it would require several accounts and several email addresses, and that's not the lazy man's approach.

The third possible solution would be to have the bot continuously check for approved articles that aren't cached yet, then create an HTML file of links to them and upload it to a site that is already indexed. Eventually the spiders will see the HTML file and follow through to the various un-cached articles.

I like the third solution the best because it is the easiest from a programming standpoint. My question: if all links to each article come from one source (the HTML file), could that hurt my SEO? The basic linking pattern would be one HTML file pointing to many articles, which all point to one website. The HTML file would reside on a different site that I am trying to SEO.

Anyway, I hope that wasn't too confusing. I'm looking to hear what you whitehat experts think.
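For what it's worth, here is a rough sketch of how the third approach could look in Python. The `is_cached()` check is a placeholder assumption, since how you actually detect whether a spider has cached a given article (scraping a site: query, a third-party index-check service, whatever) is up to you and not shown here. The sketch just takes the list of approved article URLs, filters out the ones that look un-cached, and writes a plain HTML page of links that you would then upload to the already-indexed site.

```python
# Sketch of solution 3: build an HTML page of links to approved-but-uncached
# articles, to be uploaded to a site the spiders already visit regularly.
# Assumes you already have the list of approved article URLs; the caching
# check itself is a stub you would replace with whatever index check you trust.

import html

def is_cached(url: str) -> bool:
    """Placeholder: return True if the search engine already has this URL
    cached/indexed. Swap in a real check (site: query, index API, etc.)."""
    return False  # stub: treats everything as un-cached for this example

def build_link_page(article_urls, out_path="uncached_links.html"):
    """Write a bare HTML list of links to the articles that aren't cached yet."""
    uncached = [u for u in article_urls if not is_cached(u)]
    lines = ["<html><body><ul>"]
    for url in uncached:
        safe = html.escape(url, quote=True)
        lines.append(f'<li><a href="{safe}">{safe}</a></li>')
    lines.append("</ul></body></html>")
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))
    return uncached

if __name__ == "__main__":
    # Hypothetical approved article URLs, just for illustration.
    approved = [
        "http://example-directory-1.com/my-article",
        "http://example-directory-2.com/my-article",
    ]
    pending = build_link_page(approved)
    print(f"{len(pending)} un-cached articles written to uncached_links.html")
```

The bot would just re-run this on a schedule, dropping articles from the page once they finally show up cached.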