I've been doing SEO for a number of years but am quite new to black-hat tactics. I've come across a competitor who has a domain performing well for a lot of searches, and I can't suss out how they originally got it indexed or how they're keeping it indexed.

The domain 307-redirects to another site, which in turn 302-redirects to a third domain that has a robots noindex header set. This final site in the chain is quite thin on content and is really just designed to push the user through steps to yet another domain.

I can't get my head around how the original domain was indexed and continues to show in the SERPs. The web archive has no records for any of the many pages that appear in the SERPs (over 20k pages are indexed in total), but it does suggest that many of those pages have been redirecting since at least December 2014.

The ranking domain is an expired domain, presumably with decent authority. Given the number of pages indexed, I don't think much content would have been written to get the site ranking initially, and although it has some links, including a few from reasonably authoritative domains, I don't see how all of these pages came to rank so well in the first place.

What really confuses me, though, is how the site has continued to rank for a number of years when it simply redirects. I've always been under the impression that Google would drop pages like this; is that not the case? Is it kept in the index because the redirect is a 307? Or are they likely serving different content to search engines somewhere in the chain? I've fetched the site with a Googlebot user agent and it simply follows the same redirect chain.

Any suggestions, links to posts that deal with this, or pointers in the right direction would be much appreciated, as it's driving me a little crazy.
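In case it helps anyone reproduce the check I did: below is a minimal Python sketch that walks a redirect chain one hop at a time with a Googlebot user agent, recording each status code, `Location` target, and any `X-Robots-Tag` header along the way. The URLs and the notes on how Google tends to treat each redirect status are my own rough summary, not anything specific to this competitor's setup.

```python
# Sketch: manually follow a redirect chain hop by hop, spoofing Googlebot.
# Placeholder logic only; the actual domains in question are not included.
import urllib.request
import urllib.error
import urllib.parse

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Rough rules of thumb for how the status codes are commonly treated.
REDIRECT_NOTES = {
    301: "permanent: signals a canonical move, target usually indexed",
    302: "temporary: the original URL is often kept in the index",
    307: "temporary (method-preserving): treated much like a 302",
    308: "permanent (method-preserving): treated much like a 301",
}

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from auto-following so each hop can be inspected."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # causes 3xx responses to surface as HTTPError

def trace_chain(url, max_hops=10):
    """Return a list of (url, status, location, x_robots_tag) per hop."""
    opener = urllib.request.build_opener(NoRedirect)
    hops = []
    for _ in range(max_hops):
        req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
        try:
            resp = opener.open(req)
            status, headers = resp.status, resp.headers
        except urllib.error.HTTPError as e:  # 3xx arrives here
            status, headers = e.code, e.headers
        location = headers.get("Location")
        hops.append((url, status, location, headers.get("X-Robots-Tag")))
        if location is None:
            break
        url = urllib.parse.urljoin(url, location)  # handle relative targets
    return hops
```

Comparing the output of this against the same trace with a normal browser user agent is one quick way to spot user-agent cloaking; in my case both traces came back identical.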