No Such Thing as Duplicate Content - Revealed

Although duplicate sites do get removed from being indexed...
 
A duplicate site might. Though why you'd want to duplicate a whole site, other than for outright fraud, is beyond me.

(For example, the bogus sign-in pages for Battle.net accounts that are identical to the real one.)

Duplicate content - I must have tens of thousands of pieces of duplicate content (maybe hundreds of thousands - I don't know).

To my knowledge all of it gives me link juice and traffic, and I've never noticed a single piece being de-indexed.

Scritty
 
The way I look at it, there can't be a duplicate content penalty as such. Reason being: if there were, competitors could simply clone your site and get yours dropped from the index for being a duplicate.
 
The whole idea of getting penalized for duplicate content is more about having duplicate pages on the same site.

Back in the day, before Google came into the picture, getting indexed in search engines was based completely on on-page factors. One of the early keyword tactics marketers found was that putting a keyword in a landing page's filename would help get that page indexed for that keyword. It didn't take long for webmasters to figure out they could just create a thousand identical landing pages with only the filename changed. So you ended up with websites having many duplicate landing pages like mysite.com/keyword1.htm, mysite.com/keyword2.htm, mysite.com/keyword3.htm, etc. This would get each of those pages into the Yahoo index for the chosen keywords. (This was also the heyday of keyword stuffing in the meta tags and hidden content.)
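
To make the tactic concrete, here's a rough sketch of how trivially those pages were mass-produced. The template, keyword list, and filenames below are invented for illustration, not taken from any real site.

```python
# Illustrative sketch of the old duplicate-landing-page tactic: one template,
# many filenames, identical body copy. Template, keywords and paths are made up.
from pathlib import Path

TEMPLATE = """<html>
<head><title>Buy {kw} online</title>
<meta name="keywords" content="{kw}"></head>
<body><h1>{kw}</h1><p>The exact same sales copy, repeated on every page.</p></body>
</html>"""

keywords = ["blue widgets", "cheap widgets", "widget reviews"]  # hypothetical list

out_dir = Path("landing_pages")
out_dir.mkdir(exist_ok=True)

for kw in keywords:
    # Only the filename and the inserted keyword differ between pages;
    # everything else is duplicate content.
    filename = kw.replace(" ", "-") + ".htm"
    (out_dir / filename).write_text(TEMPLATE.format(kw=kw))
```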

It didn't take long for the search engines to figure this out, so duplicate content on the same website soon became a negative factor. By the time Google came along in the late '90s, duplicate content and keyword stuffing were pretty standard negative factors among the SEs.

Having duplicate content on separate websites is a completely different thing. It's never really been penalized, but the SEs do try to determine which site is the main source. Their goal is to have the main source show up as the most relevant site for that content, but they've never actually been that good at achieving it. Even if you are the original source of an article, you still need a lot of SEO to make sure your site will be found.

Duplicate content on separate websites mostly became a big issue because of copyright and article directories. The article directories want to be sure that anything submitted actually belongs to the submitter, and they also want to make sure they have unique content so they stand out as "better than the rest". So when people started submitting articles to the directories for the purpose of SEO, not having duplicate content became a pretty big issue. Even more so for the people trying to make money doing article submissions as their chosen method.

edit - So basically, duplicate content as far as SEO is concerned won't absolutely cause problems for your website. The idea is based on some real factors, but it has grown into a misleading myth. In some cases duplicate content might be a negative factor, and in other cases it could actually be good for your site. Just like many other factors, it really depends on how you're using it.
 

I must say, you do make a strong argument.

There is, however, a part of my mind that won't accept it. It seems ridiculous when looked at from a Google-eye view, as copied content shouldn't be awarded "rank" or "authority" because it doesn't "add value" (or adds only the most minimal value). Surely Google can just look at the indexing date and mark the first-indexed copy as the original.
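
Just to illustrate that "first indexed wins" idea, and its obvious weakness, here's a toy sketch. It is not a claim about how Google actually attributes sources, and the URLs and dates are invented.

```python
# Toy "first indexed copy is treated as the original" attribution, purely to
# illustrate the idea above - not how Google actually does it. URLs/dates invented.
from hashlib import sha256

first_seen = {}  # content fingerprint -> (url, crawl_date)

def record_crawl(url, content, crawl_date):
    """Return the URL currently credited as the original for this content."""
    fingerprint = sha256(content.strip().lower().encode()).hexdigest()
    if fingerprint not in first_seen:
        first_seen[fingerprint] = (url, crawl_date)
    return first_seen[fingerprint][0]

article = "Some article text copied verbatim across several sites."
print(record_crawl("scraper-blog.example/post", article, "2010-01-05"))
print(record_crawl("author-site.example/original", article, "2010-03-20"))
# Both calls credit scraper-blog.example, because it happened to be crawled
# first - which is exactly the attribution problem discussed further down.
```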

It's amazing to think that, if you and Scritty are correct, blog support networks (sometimes known as link wheels) which don't have to rank well could be made up entirely of word-for-word duplicate content.

The obvious next step seems to be to conduct a test, but promoting mini-blogs takes link building resources, and I'm apprehensive about devoting those resources to dupe sites I wouldn't have total confidence in.

Very interesting argument, though. It's made me think again - despite not fully endorsing duplicate content for general SEO use.
 
Google owns a patent that has to do with duplicate content.

It may not be a factor now, but why plan your own burial?
 
@Micallef and furca - I'm not saying that it's never a factor, simply that it's not the absolute negative that people have grown to think of it as. If it were, then quoting other people's work would have a negative effect, even though that's a proper course of action in journalism. In fact, if duplicate content were the total negative factor a lot of people say it is, then autoblogging couldn't exist and news aggregation sites couldn't exist. But the fact is there are many sites using duplicate content that are seen as authority sites. Even autoblogs can do very well in the SERPs if the owner does a good enough job. Google is looking for sites that provide a positive user experience; if a site with duplicate content can provide value to its visitors, then there's no reason to penalize it.

All I'm really saying is that it's a myth that duplicate content will absolutely get you penalized. There are times when it's totally appropriate to use it, and in those cases it isn't going to hurt you at all and could even help. Just like all the other factors that can help or hurt, it depends on how you're using it.


@Micallef - As far as Google looking at the index date to determine who has the original source, well, that's what I mean by they don't do a good job of making that determination. Take for example someone who has an ebook: various websites post the content of the ebook on their blogs, and then later on the creator decides to set up his own blog. Obviously he can rightfully claim to be the source, but Google will already have other sites indexed for that content. Not an easy thing to solve with algorithms, so we all just live with it. The best thing you can do is start working on SEO to get your own site to rank higher for your own content. LOL

I'm not really suggesting anyone change how they're doing anything. I just thought it would be interesting to explain the origin of the "SEO duplicate content penalty" since it fit within the topic of this thread. I still remember back in the 90s when spammer type websites were creating thousands of duplicate landing pages just to get extra exposure on Yahoo. It was pretty much an early Black Hat SEO technique from even before it was called Black Hat. LOL


edit - Another duplicate content example is the landing pages set up by MLM companies for their associates, or the SMC and similar BusOp-type companies that set up identical sales pages for the people who buy their programs. Yes, I know those companies are borderline scams, but there are still some people who buy the programs and actually manage to get those duplicate sites to the top of the SERPs.
 
Whilst I agree that there might not be a 'penalty' as such, I also think that links inside duplicate content won't be anywhere near as powerful as links inside non-duplicated content. Otherwise we would all be going around creating link wheels and Web 2.0 properties with duplicate content and still ranking for certain keywords. Surely the whole idea of article spinners is to change the content so that it is unique.
 
I think I may be able to shed some light on this dup content issue.

Back in May 2010 I set up a small experiment to test whether it was possible to rank using nothing but dupe content, specifically in the Yahoo and Google engines, and also to test the effectiveness of backlinking and whether great quality links could easily outrank the original article.

My idea was fuelled by a PR6 EDU blog post about a subject for which I had plenty of PLR material. I thought, why not set up a quick blog directly relating to that blog post, making my comment on the page really relevant and therefore giving it a high probability of being accepted. So, I set up a simple wordpress.com blog about the subject and posted a couple of good quality PLR articles. I then set up the autopost feature to post a further 3 articles over the following 3 months.

As expected, my link from the PR6 EDU blog was accepted (no ads on the site, and links out to authority sites in the blogroll and within posts), and I then threw all my high quality dofollow links at the site.

Within the first week the site was indexed by both engines. Yahoo gave the main keyword (263K results without quotes, 195K with) a number 8 position; Google had it at 166. During this first week I had 88 backlinks showing. I slowly built more good quality links over the next 3 months, to a total of around 200.

Given the 'clean' nature of the blog, I even experimented with backlinks in high quality locations I would not normally spend time on (another small test) and found moderated blogs much more likely to publish your comment (90%+).

Over the next 3 months (to date) the site has danced about in Google (as expected) between positions 100 and 700 and seems to have settled at position 500. The interesting part is that in Yahoo it danced around the top 10 for the entire duration and has now settled at position 4.

So, the results: to my mind, Google firstly is aware that the content is duplicate and secondly is not ranking it well, despite the fact I have some really good high quality links. Yahoo, on the other hand, appears not to take this into account and has ranked the content consistently, pretty much since the start of the test.

The main objective here was to see whether dup content could be ranked in Google using high quality backlinks; it seems to me that it cannot at these competition levels (lower competition may be a different story). A by-product of the test, though, is that Yahoo will rank it despite it being dup content. I also learnt that creating a blog like this to obtain good backlinks from places I would not normally get them is a good use of resources - it took only an hour or so to set up.

Given the clean nature of the blog, whenever I came across a high quality, high PR blog post during my normal backlinking routine which looked moderated, I always used this link, and found that most mods will approve it. It then subsequently becomes a great place to drop some links to my money sites as and when I am ready to.
 
It would be very interesting to see what happens if you change the content to a non-duplicate version.

Based on your experience, this should then result in a much better ranking, I suppose.

Willing to try this?
 
I've used duplicate content, including PLR, for articles and have had no problem getting them ranked high. This duplicate content debate has been going on for a long time; hopefully this will help put it to rest.
 
It would be very interesting to see what happens if you change the content to a non-duplicate version. Willing to try this?

I have this test ongoing at present using unique content - it is too early to give any results yet, but once it has a good 3-4 months under its belt I will either update this thread or post a new one.
 
I feel original content is a must for a site. You might get traffic with duplicate content, and it's cheap and easy, but in the long run original content will do you more good. I have seen this happen with my site: it started ranking fairly easily due to original content, and the number of links I got with those articles was just amazing. People were publishing links to my articles on their own sites, which got me traffic as well as one-way links.
 
Why do you even speculate about things when there is a clear statement from Google?

[embedded video] It's interesting from 0:46 to 3:30, and especially at 2:53.

To sum it up:
There is no "duplicate content penalty" but there is a penalty on spam (lower rankings or deindexing) and duplicate content is one indicator of spam.

Just a general lack of information. When you start off you don't know up from down. It's forums and posts like this that clarify and clear up the problems that hold people back when they initially start out. Thanks for reinforcing the statement made in the post title.
 
So, I set up a simple wordpress.com blog about the subject and posted a couple of good quality PLR articles.

New sites have a lot of problems with dup content, but authority sites don't. If you try this test with a site that already has PR, I believe the results will be different.
 
I don't believe there's any dupe content penalty. News sites would never rank if there were.
 
I've been trying an experiment using 5 original articles and then posting a well-written PLR article about once a week. It's been about 7 weeks and I'm on page one for my keyword, but it's not all that competitive. I'm about to try the same thing on a more competitive keyword. Backlinking has been a slow, steady campaign with SENuke.

Edit for clarification: I've been submitting the PLR all over the place for backlinks with SENuke, only spinning the title and first sentence to make it harder for someone to just Google my title and immediately see that it's dupe.
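
For reference, the kind of light spinning described above usually means spintax in the {option1|option2} format. Here's a minimal expander sketched for illustration - it isn't SENuke's implementation or any particular tool's, and the example title is invented.

```python
# Minimal spintax expander: replaces each {a|b|c} group with one random option.
# Shown only to illustrate light title/first-sentence spinning; not any tool's code.
import random
import re

GROUP = re.compile(r"\{([^{}]*)\}")

def spin(text):
    """Expand spintax, innermost groups first, until no {..|..} groups remain."""
    while GROUP.search(text):
        text = GROUP.sub(lambda m: random.choice(m.group(1).split("|")), text, count=1)
    return text

title = "{5|Five} {Easy|Simple} Ways to {Rank|Promote} a New Blog"
print(spin(title))  # e.g. "Five Simple Ways to Promote a New Blog"
```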
 