Duplicate Content and SEO: The Ultimate Guide
Duplicate content refers to very similar, or the exact same, content being on multiple pages. Keep this in mind:
- Duplicate content adds little to no value for your visitors and confuses search engines.
- Avoid having duplicate content, as it may harm your SEO performance.
- Duplicate content can be caused by technical mishaps and manually copied content.
- There are effective ways to prevent both cases of duplicate content from becoming an issue, which we'll discuss in this article.
What is duplicate content?
Taken narrowly, duplicate content refers to very similar, or the exact same, content being on multiple pages within your own website or on other websites.
Taken broadly, duplicate content is content that adds little to no value for your visitors. Therefore, pages with little to no body content are also considered to be duplicate content.
Why is duplicate content bad for SEO?
Duplicate content is bad for two reasons:
- When there are several versions of content available, it's hard for search engines to determine which version to index, and subsequently show in their search results. This lowers performance for all versions of the content, since they're competing against each other.
- Search engines will have trouble consolidating link metrics (authority, relevancy, and trust) for the content, especially when other websites link to more than one version of that content.
Can I get a duplicate content penalty?
Having duplicate content can hurt your SEO performance, but it won't get you a penalty from Google as long as you didn't intentionally copy someone else's website. If you're an honest website owner with some technical website challenges, and you're not trying to trick Google, you don't have to worry about a penalty.
What is the most common fix for duplicate content?
In many cases, the best way to fix duplicate content is implementing 301 redirects from the non-preferred versions of URLs to the preferred versions.
When URLs need to remain accessible to visitors, you can't use redirects. Instead, you can use either a canonical URL or a meta robots noindex directive. The canonical URL allows you to consolidate some signals, while the robots noindex directive doesn't.
Choose your weapon to battle duplicate content carefully, as each method has its pros and cons. There's no "one size fits all" approach to duplicate content.
Go through the section below to learn about the different causes of duplicate content, and see which method to tackle it fits best.
Common causes of duplicate content
Duplicate content is often due to an incorrectly set up web server or website. These occurrences are technical in nature and will likely never result in a Google penalty. They can seriously harm your rankings though, so it's important to make it a priority to fix them.
But besides technical causes, there are also human-driven causes: content that's purposely being copied and published elsewhere. As we've said, these can bring penalties if done with malicious intent.
Duplicate content due to technical reasons
Non-www vs www and HTTP vs HTTPS
Say you're using the www subdomain and HTTPS. Then your preferred way of serving your content is via https://www.example.com. This is your canonical domain.
If your web server is badly configured, your content may also be accessible through:
- http://example.com
- http://www.example.com
- https://example.com
Choose a preferred way of serving your content, and implement 301 redirects for non-preferred ways that lead to the preferred version:
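As an illustration, here's a minimal nginx sketch (hostnames are placeholders; SSL directives omitted) that 301-redirects all non-preferred variants to https://www.example.com:

```nginx
# Redirect all HTTP traffic (www and non-www) to the canonical HTTPS host
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

# Redirect HTTPS traffic on the bare domain to the www subdomain
server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity
    return 301 https://www.example.com$request_uri;
}
```

The equivalent can be done in Apache with mod_rewrite or at the CDN level; what matters is that every variant answers with a single 301 hop to the canonical domain.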
URL structure: casing and trailing slashes
For Google, URLs are case-sensitive, meaning that https://example.com/url-A/ and https://example.com/url-a/ are seen as different URLs. When you're creating links, it's easy to make a typo, causing both versions of the URL to get indexed. Please note that URLs aren't case-sensitive for Bing.
A forward slash (/) at the end of a URL is called a trailing slash. Often URLs are accessible through both variants:
- https://example.com/page
- https://example.com/page/
Choose a preferred structure for your URLs, and for non-preferred URL versions, implement a 301 redirect to the preferred URL version.
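For instance, if you pick the trailing-slash version as the preferred one, a common nginx sketch (assuming extension-less content URLs) looks like this:

```nginx
# Inside the canonical server block: 301-redirect URLs without a
# trailing slash to the version with one. URLs containing a dot
# (files like /logo.png) are excluded by the regex.
location ~ ^([^.]*[^/])$ {
    return 301 $1/;
}
```

If you prefer the slashless version instead, the rule is simply inverted; either way, pick one form and redirect the other.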
Index pages (index.html, index.php)
Without your knowledge, your homepage may be accessible via multiple URLs because your web server is misconfigured. Besides https://www.example.com, your homepage may also be accessible through:
- https://www.example.com/index.html
- https://www.example.com/index.php
Choose a preferred way to serve your homepage, and implement 301 redirects from non-preferred versions to the preferred version.
If your website actually uses any of these URLs to serve content, canonicalize those pages instead, because redirecting them would break the pages.
Parameters for filtering
Websites often use parameters in URLs so they can offer filtering functionality. Take this example URL:
https://www.example.com/toys/cars?color=black
This page would show all the black toy cars.
While this is fine for visitors, it may cause major issues for search engines. Filter options often generate a virtually infinite amount of combinations when there is more than one filter option available. All the more so because the parameters can be rearranged as well.
These two URLs would show the exact same content:
- https://www.example.com/toys/cars?color=black&size=large
- https://www.example.com/toys/cars?size=large&color=black
Implement canonical URLs (one for each main, unfiltered page) to prevent duplicate content and consolidate the filtered pages' authority. Please note that this doesn't stop search engines from crawling all the filtered URLs, so it doesn't solve crawl budget issues. Alternatively, you could use the parameter handling functionality in Google Search Console and Bing Webmaster Tools to instruct their crawlers how to deal with parameters.
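A canonical is a single link element in the page's <head>. A minimal sketch, assuming the example toy-store URLs used above:

```html
<!-- Served on https://www.example.com/toys/cars?color=black -->
<head>
  <!-- Point search engines at the main, unfiltered page -->
  <link rel="canonical" href="https://www.example.com/toys/cars" />
</head>
```

Because every filter combination serves the same tag, all parameterized variants consolidate to the one unfiltered URL.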
Taxonomies
A taxonomy is a grouping mechanism to classify content. Taxonomies are often used in Content Management Systems to support categories and tags.
Let's say you have a blog post that's filed under three categories. The blog post may be accessible through all three:
- https://www.example.com/category-a/blog-post/
- https://www.example.com/category-b/blog-post/
- https://www.example.com/category-c/blog-post/
Be sure to choose one of these categories as the primary one, and make the others canonicalize to that one using the canonical URL.
Dedicated pages for images
Some Content Management Systems create a separate page for each image. This page often just shows the image on an otherwise empty page. Since this page has no other content, it's very similar to all the other image pages and thus amounts to duplicate content.
If possible, disable the feature to give images dedicated pages. If that's not possible, the next best thing is to add a meta robots noindex attribute to the page.
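The noindex directive is a single meta tag in the image page's <head>; a minimal sketch:

```html
<head>
  <!-- Keep this thin, image-only page out of search engine indexes -->
  <meta name="robots" content="noindex" />
</head>
```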
Paginated comments
If you have comments enabled on your website, you may be automatically paginating them after a certain amount. The paginated comment pages repeat the original article content; only the comments at the bottom differ.
For example, https://www.example.com/category/topic/ could show comments 1-20, https://www.example.com/category/topic/comments-2/ comments 21-40, and https://www.example.com/category/topic/comments-3/ comments 41-60.
Use the rel="prev" and rel="next" pagination link relationships to signal that these are a series of paginated pages.
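These relationships are link elements in the <head>. A sketch for the second comment page, assuming the example URLs above:

```html
<!-- Served on https://www.example.com/category/topic/comments-2/ -->
<head>
  <link rel="prev" href="https://www.example.com/category/topic/" />
  <link rel="next" href="https://www.example.com/category/topic/comments-3/" />
</head>
```

The first page in the series carries only rel="next", the last only rel="prev".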
Localization and hreflang
When it comes to localization, duplicate content issues can arise when you're using the exact same content to target people in different regions who speak the same language.
For example: when you have a dedicated website for the Canadian market and also one for the US-market—both in English—chances are there's a lot of duplication in the content.
Google is good at detecting this, and usually folds these results together. The hreflang attribute helps prevent duplicate content. So if you're using the same content for different audiences, be sure to implement hreflang annotations as part of your international SEO setup.
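For the US/Canada example above, each page lists every language-region variant, itself included (the domains are assumed placeholders):

```html
<!-- Included on both the US and the Canadian version of the page -->
<head>
  <link rel="alternate" hreflang="en-us" href="https://www.example.com/" />
  <link rel="alternate" hreflang="en-ca" href="https://www.example.ca/" />
  <!-- Optional fallback for English speakers in other regions -->
  <link rel="alternate" hreflang="en" href="https://www.example.com/" />
</head>
```

The annotations must be reciprocal: both pages need to carry the same set of alternates, or search engines will ignore them.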
Indexable search result pages
Many websites offer search functionality, allowing visitors to search through the website's content. The pages on which the search results are displayed are all very similar, and in most cases don't provide any value to search engines. That's why you don't want them to be indexable for search engines.
Prevent search engines from indexing the search result pages by utilizing the meta robots noindex attribute. And also in general, it's a best practice not to link to your search result pages.
Indexable staging/testing environment
It's a best practice to use staging environments for rolling out and testing new features on websites. But these are often incorrectly left accessible and indexable for search engines, causing the staging content to compete with the live site. Prevent this by putting your staging environment behind HTTP authentication.
Avoid publishing work-in-progress content
When you create a new page that contains little content, it will often provide little to no value yet. Save it as a draft rather than publishing it right away.
Parameters used for tracking
Parameters are commonly used for tracking purposes too. For instance, when URLs are shared on Twitter through a tool like Buffer, tracking parameters such as utm_source and utm_medium are appended to the URL. This is another source of duplicate content.
It's a best practice to implement self-referencing canonical URLs on pages. If you've already done that, this solves the issue. All URLs with these tracking parameters are canonicalized by default to the version without the parameters.
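A self-referencing canonical simply points at the page's own clean URL, so any parameterized variants inherit it (the URL is an assumed example):

```html
<!-- The same page source is served on https://www.example.com/page/
     and on variants such as /page/?utm_source=twitter&utm_medium=social -->
<head>
  <link rel="canonical" href="https://www.example.com/page/" />
</head>
```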
Session IDs in URLs
Sessions may store visitor information for web analytics. If each URL a visitor requests gets a session ID appended, this creates a lot of duplicate content, because the content at these URLs is exactly the same.
For example, when you click through to a localized version of our website, we add a Google Analytics session variable, like https://www.contentking.nl/?_ga=2.41368868.703611965.1506241071-1067501800.1494424269. It shows the homepage with the exact same content, just on a different URL.
Once again, the fix is to implement self-referencing canonical URLs on your pages. All URLs with session parameters are then canonicalized by default to the version without them.
Print-friendly pages
When pages have a print-friendly version at a separate URL, there are essentially two versions of the same content, for example:
- https://www.example.com/page/
- https://www.example.com/print/page/
Implement a canonical URL leading from the print-friendly version to the normal version of the page.
Duplicate content caused by copied content
Landing pages for paid search
Paid search requires dedicated landing pages that target specific keywords. The landing pages are often copies of original pages, which are then adjusted to target these specific keywords. Since these pages are very similar, they produce duplicate content if they are indexable.
Prevent search engines from indexing the landing pages by implementing the meta robots noindex attribute. In general, it's a best practice to neither link to your landing pages nor include them in your XML sitemap.
Other parties copying your content
Duplicate content can also originate from others copying your content and publishing it elsewhere. This is in particular a problem if your website has a low domain authority and the one copying your content has a higher domain authority. Websites with a higher domain authority are often crawled more frequently, so the copied content may be crawled first on the copying website. That site may then be perceived as the original author and rank above you.
Copying content from other websites
Copying content from other websites is a form of duplicate content too. Google has described how to best handle this from an SEO point of view: link to the original source, combined with either a canonical URL or a meta robots noindex tag. Keep in mind that not all website owners are happy with you syndicating their content, so it's recommended to ask for permission to use their content.
Finding duplicate content
Finding duplicate content within your own website
Using ContentKing, you can easily find duplicate content by checking whether your pages have a unique page title, meta description, and H1 heading. You can do this by going to the Issues section and opening the "Meta information" and "Content Headings" cards. See if there are any open issues regarding:
- "Page title is not unique"
- "Meta description is not unique"
- "H1 heading is not unique"
In Google Search Console's Index Coverage report, also watch for these duplicate-content statuses:
- "Duplicate without user-selected canonical": Google found duplicate URLs that aren't canonicalized to a preferred version.
- "Duplicate, Google chose different canonical than user": Google chose to ignore your canonical on URLs it found on its own, and instead assigned Google-selected canonicals.
- "Duplicate, submitted URL not selected as canonical": Google chose to ignore the canonicals you defined for URLs you submitted through an XML sitemap.
Finding duplicate content outside your own website
If you've got a small website, you can try searching in Google for phrases between quotes. For instance, if I want to see if there are any other versions of this article, I may search for "Using ContentKing, you can easily find duplicate content by checking whether your pages have a unique page title, meta description, and H1 heading."
Frequently asked questions about duplicate content
Can I get a penalty for having duplicate content?
If you didn't intentionally copy someone's website, then it's very unlikely for you to get a duplicate content penalty. If you did copy large amounts of other people's content, then you're walking a fine line.
Will fixing duplicate content issues increase my rankings?
Yes, because by fixing the duplicate content issues you're telling search engines what pages they should really be crawling, indexing, and ranking.
How much duplicate content is acceptable?
There's no one good answer to this question. However:
If you want to rank with a page, it needs to be valuable to your visitors and have unique content.