Google's Take On Duplicate Content

on 10 Jan  Posted by Admin  Category: Site Promotion  
by Matt Jackson

Duplicate content is the topic of many SEO conversations. The search engines penalize duplicate content to an extent, but a recent Google blog post indicates that the penalty is less damaging than many assumed. The duplicate content penalty is primarily designed to ensure that search engine results are not simply page after page of the exact same content. As such, it may still prove detrimental to the performance of some of your pages. This article is a summary and explanation of Google's blog post, Deftly Dealing with Duplicate Content.

What Is And What Is Not Duplicate Content?

A big part of the duplicate content debate has always been the question of what is and what is not considered to be duplicate content. Google describes it as being:

"substantive blocks of content within or across domains that either completely match other content or are appreciably similar."

Perhaps the most important word in this quote is the first: 'substantive'. One or two words, or even, as in the instance above, a moderate quote taken from the page of another website, is not deemed duplicate content. In fact, the post goes on to reiterate that very point, stating that translations and quotes, for example, would not be considered duplicate.

The Duplicate Content Penalty

The most telling section of the post points out that Google recognizes there are genuinely legitimate uses of duplicate content. The example given is the publishing of 'regular' and 'printer' versions of pages or articles. In cases like these, where Google's algorithms can find no malicious intent to manipulate the search engines, they will list and rank only one page. This means that while a penalty of sorts is applied to the other page, the treatment is far more forgiving than many believe.

Which Page Do You Want Ranked?

Leaving it to the search algorithms to determine which page to rank and which to ignore is risky at best. In some circumstances, for instance when republishing free website content, there is little you can do to control whether your page or the source page is ranked. In the case of the printer and regular pages, though, you can (and should) use the robots.txt file or the noindex meta tag to instruct the search engine spider.
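As a minimal sketch of the robots.txt approach, the following rule blocks crawlers from a printer-friendly directory. The /print/ path is an illustrative assumption, not something named in the article; substitute whatever path your printer versions actually live under.

```text
# robots.txt, placed at the site root (e.g. example.com/robots.txt)
# Blocks all crawlers from a hypothetical /print/ directory that
# holds the printer-friendly duplicates of regular pages.
User-agent: *
Disallow: /print/
```

Alternatively, each printer-friendly page can carry `<meta name="robots" content="noindex">` inside its `<head>`; the page can still be crawled, but it is kept out of the index, leaving the regular version as the one that ranks.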

More Harsh Penalties Do Exist

While it may seem that Google does not take duplicate content as seriously as many first thought, it may implement site-wide penalties where appropriate. If the algorithm determines that duplicate content is being used to manipulate the search engines or deceive visitors, then Google may 'make appropriate adjustments in the indexing and ranking of the sites involved'.

The Most Likely Outcome

Generally speaking, the only penalty your site is likely to incur is that the page containing duplicate content will not be ranked. You may also find that if you have two pages containing the same information, Google ranks the one you deem less important and ignores the other.


The use of free website content taken from willing sources has long been a way to populate sites. It provides good information, and webmasters are able to pick up content that is engaging for their readers. Obviously, if the page isn't being ranked because of the duplicate content filter, there is no benefit in terms of search engine ranking, but the content still provides value to visitors. The duplicate content penalty is not necessarily the site killer you first imagined, but you should take care to control which pages are ranked and where you use duplicate content.

About The Author

This duplicate content article appears on the WebWiseWords website. WebWiseWords provides high quality copywriting services, including SEO copywriting and a variety of other new media copywriting services.