How to Deal with Duplicate Content on a Website

What is Duplicate Content?

Duplicate content is the same piece of written information made available at more than one URL, whether on a single domain or across different domains. Duplicate content can arise from:

  1. A webmaster copies content from different sources and uses it on his own website. This is known as ‘online plagiarism’.
  2. Sometimes the same information is published on several web pages under the same ‘webpage titles’, which again leads to duplicate links.
  3. Sometimes people have the habit of copying others’ information and publishing it as their own, which also leads to duplication.
  4. Technical issues in the permalinks, themes, designs and architecture of a site.

What do search engines do with duplicate content or links on a website?

Duplicate links and content are detected by Google's bots. Google's algorithms group pages with the same content into a “cluster of content”, which helps it pick one representative URL to show in search results. Google's crawlers crawl the pages and rank them accordingly; when they find duplicate pages or content, they crawl all of the pages but index only one of them. Google says its algorithms do a reasonably good job of detecting the original source. Once Google has processed a cluster of pages containing the same content, it returns only one URL in search results. The other URLs are never shown in search results and are treated as duplicate content, or shadow copies.

How to Prevent Duplicate Content Issues on a Website?

As a web designer/webmaster, one needs to ensure the domain is free from the technical problems that lead to duplicate content issues.

Some tips that can help in avoiding duplicate content are given below:

1. Try to be unique: Do not copy others. This is always a good policy; one should not imitate others for any reason, and one needs to stand separate and unique when it comes to one's own website or web information. Nowadays, due to intense competition and technological advancement, people tend to copy others in order to keep themselves in the competition, violating copyright in the process. This needs to be avoided.

Logo - [Allowed to use only on this blog - Copyrights]

2. Canonicalization: Due to globalization, reaching customers across the world has become very easy, and spreading information to all regions of the world is a must. A website is one of the most common media for spreading the word worldwide, and designing websites in multiple languages has become common practice. But by doing so, there is a chance of creating duplicate pages. This can be avoided by adding a rel=”canonical” link element in the <head> tag of the web page, which ensures that Google will index the right page. This is called ‘Canonicalization’.
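As an illustrative sketch (the URLs here are hypothetical examples, not from this blog), the canonical link element goes inside the <head> of each duplicate page and points at the version you want indexed:

```html
<!-- In the <head> of a duplicate or variant page, e.g. a tracking-parameter URL
     such as https://www.example.com/shoes/?ref=summer -->
<link rel="canonical" href="https://www.example.com/shoes/" />
```

With this element in place, search engines are told to consolidate ranking signals onto the one canonical URL instead of splitting them across the variants.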

3. 301 Redirect: The www and non-www versions of your domain should serve the same page, and this can be done using the 301 redirect method. Remember, Google treats these two addresses as different, so when you host your site you need to redirect both links to the same page, or else it will create indexing problems for your site with Google.

A 301 redirect is the most efficient and search-engine-friendly method for webpage redirection. It is not hard to implement, and it should preserve your search engine rankings for that particular page. If you have to change file names or move pages around, it is the safest option. The code “301” is interpreted as “Moved Permanently”.
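As a minimal sketch, assuming an Apache server with mod_rewrite enabled and the hypothetical domain example.com, a 301 redirect from the non-www host to the www host can be set up in an .htaccess file like this:

```apache
# .htaccess sketch (Apache, mod_rewrite assumed enabled; domain is a placeholder).
# Permanently redirects http(s)://example.com/... to https://www.example.com/...
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The R=301 flag is what marks the redirect as permanent; on other servers (e.g. Nginx) the equivalent is a `return 301` directive in the server block.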

4. Permalinks: Permalinks are permanent links (or URLs) to your individual blog posts, categories, tags, etc. A permalink is the permanent link to your article that will never change, hence the coined name “permalink” (permanent link). There are many situations in which your permalinks and post slugs may contribute to duplicate content.
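For illustration (all URLs below are hypothetical), the same post can often be reached through several URL variants, and search engines treat each variant as a distinct page:

```text
https://www.example.com/blog/my-post/      <- the permalink
https://www.example.com/blog/my-post       <- no trailing slash
https://www.example.com/?p=123             <- default query-string permalink
https://www.example.com/category/my-post/  <- category-path variant
```

Picking one permalink structure and redirecting or canonicalizing the other variants to it keeps all of these from being indexed separately.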

5. Use of Robots.txt: There can be situations where you might have to permanently block duplicate pages using a Robots.txt file. This is not a good practice, because if another page links to the duplicate page, Googlebot will follow that link and discover the duplicate page sooner or later. The better solution is to use the rel=”canonical” property in the <head> tag element, or to always do a 301 redirect to the original page.
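If you do block duplicates this way, the rule goes in a robots.txt file at the site root. As a sketch (the directory path is a hypothetical example), blocking a duplicate print-friendly section for all crawlers looks like:

```text
# robots.txt at https://www.example.com/robots.txt (path is illustrative)
User-agent: *
Disallow: /print/
```

Note that robots.txt only prevents crawling; a blocked URL that other pages link to can still end up indexed, which is why the canonical tag or a 301 redirect is preferred.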

6. Multiple-platform sites: Nowadays it is common practice to develop sites for multiple platforms (mobiles, iPads, desktops, iPhones, etc.). It is better if this is done using multiple CSS rules rather than creating separate directories and HTML pages for each platform. Creating multiple pages always creates indexing problems.
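The CSS-based approach above can be sketched with a media query: one HTML page adapts its layout per device width instead of serving separate mobile and desktop URLs (the class name and breakpoint below are illustrative assumptions, not from the original post):

```css
/* One page, multiple layouts: the same URL serves every device. */
.content {
  width: 960px;        /* desktop layout */
  margin: 0 auto;
}

@media (max-width: 768px) {
  .content {
    width: 100%;       /* tablets and phones fall back to a fluid layout */
    padding: 0 1em;
  }
}
```

Because every device fetches the same URL, there is nothing for search engines to treat as a duplicate.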

So, if one follows the above steps properly, the website stays clear of duplicate-indexing and duplicate-content issues. This helps individual web pages, and the website as a whole, achieve better SEO rankings.

