
Thursday, December 03, 2009

SEO keyword density, canonical, duplicate content

The sample screenshot above is taken from Google's Webmaster Tools Keywords report. You may ask why we need it when we can use Firefox's integrated Show keyword density function.
Well, one benefit is that this report shows keyword significance across all of your pages, not just one. Let me explain what this means:

Suppose that you are optimizing content for the keyword 'cars'. It's normal practice to repeat 'cars' 2-3 times, style it in bold, etc. Everything's fine as long as you do it naturally. The moment you overstuff your page with the keyword, it will get penalized and lose its current ranking in Google SERPs. So you have to be careful with such repetitions.
Moreover, in the report you can see the overall keyword significance for the whole website. Because Google favors thematic websites, it is really important that these keywords reflect your website's purpose or theme. Otherwise, you're simply targeting the wrong visitors and shouldn't be puzzled by a high abandonment rate.

But enough with the theory; now let's discuss how you can fix things up:

Check each individual webpage's keyword density online via Webconfs and correct (reduce) words that appear at over 2% density. This percentage depends on the keyword competition in your niche, so tolerable levels can vary up and down.
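If you prefer to check density locally instead of through an online tool, the calculation itself is simple: occurrences of the keyword divided by total words. Below is a minimal Python sketch; the `keyword_density` helper and the sample page text are illustrative, not part of any official tool.

```python
import re

def keyword_density(text, keyword):
    """Return how often `keyword` occurs, as a percentage of all words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    count = sum(1 for w in words if w == keyword.lower())
    return 100.0 * count / len(words)

# Illustrative page copy, deliberately stuffed with the keyword:
page = "Cars for sale. We sell used cars and new cars at great prices."
density = keyword_density(page, "cars")

# Flag pages above the rough 2% threshold mentioned in the article.
needs_reduction = density > 2.0
```

Running this on real pages means stripping the HTML first; the sketch assumes you already have the plain text.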

Add the 'canonical' tag to all your website pages:
<link rel="canonical" href="http://www.example.com/your_preferred_webpage_url.html" />
(and make sure to specify the URL that you really prefer!). This tells search engines which URL is the authoritative version of the page. More info: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
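To verify the tag actually made it into your pages, you can parse the rendered HTML and pull out the canonical href. A small sketch using only Python's standard-library `html.parser` (the `CanonicalFinder` class and the sample markup are illustrative):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        # handle_startendtag delegates here, so self-closing tags work too.
        if tag == "link" and self.canonical is None:
            d = dict(attrs)
            if d.get("rel") == "canonical":
                self.canonical = d.get("href")

# Illustrative page head containing the tag from above:
html = ('<html><head><link rel="canonical" '
        'href="http://www.example.com/your_preferred_webpage_url.html" />'
        '</head><body></body></html>')

finder = CanonicalFinder()
finder.feed(html)
# finder.canonical now holds the preferred URL, or None if the tag is missing.
```

Fetching the live page (e.g. with `urllib.request`) and feeding the response body through the same parser lets you audit a whole site.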



Blogger users can add the canonical tag by placing the following code in the head section of their template:
<b:if cond='data:blog.pageType == "item"'>
<link expr:href='data:blog.url' rel='canonical'/>
</b:if>
(this removes the parameters appended to the end of the URL, such as http://nevyan.blogspot.com/2016/12/test.html?showComment=1242753180000,
and specifies the original authoritative page: http://nevyan.blogspot.com/2016/12/test.html )

Next, to prevent duplicate copies of your archive pages (i.e. .../2009_01_01_archive.html) and label pages (i.e. /search/label/...) from getting indexed, just add:
<b:if cond='data:blog.pageType == "archive"'>
<meta content='noindex,follow' name='robots'/>
</b:if>
<b:if cond='data:blog.pageType == "index"'>
<b:if cond='data:blog.url != data:blog.homepageUrl'>
<meta content='noindex,follow' name='robots'/>
</b:if>
</b:if>

To prevent indexing of the mobile duplicates of the original pages:
<b:if cond='data:blog.isMobile'>
<meta content='noindex,nofollow' name='robots'/>
</b:if>

And here is a working solution that blocks even the /search pages from being indexed, allowing only the homepage and individual posts into the index (note the single quotes around the condition, so the inner double quotes don't break the attribute):
<b:if cond='data:blog.pageType == "index" and data:blog.url != data:blog.homepageUrl'>
<meta content='noindex,follow' name='robots'/>
</b:if>
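Once these conditions are in place, it is worth confirming that the right pages actually emit the noindex directive. The sketch below parses a page head for the robots meta tag; the `RobotsMetaChecker` class and the sample snippet are illustrative only.

```python
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    """Collects the directives from a <meta name="robots"> tag, if any."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "meta" and d.get("name") == "robots":
            self.directives = [p.strip() for p in d.get("content", "").split(",")]

# Illustrative head section, matching the Blogger snippet above:
page = "<head><meta content='noindex,follow' name='robots'/></head>"

checker = RobotsMetaChecker()
checker.feed(page)
noindexed = "noindex" in checker.directives
```

Run it against an archive URL and a post URL: the archive should come back noindexed, the post should not.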

