Content is king. But linking defines how important a site and its pages look to the rest of the web. Both internal and external links matter, and there are tricks for using HTML to your benefit. Plain links, image links, JavaScript links, the new rel="nofollow" attribute, redirects, robots.txt – a whole variety of linking-related techniques gets used depending on the goal. And this is where I see a logical problem.
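
For illustration, here is a minimal sketch of the link flavors listed above (the URLs and file names are made up for the example):

    <!-- Plain link: crawlers follow it and count it toward the target's importance. -->
    <a href="https://example.com/page">plain link</a>

    <!-- Image link: also followed; the alt text stands in for anchor text. -->
    <a href="https://example.com/page"><img src="button.gif" alt="page"></a>

    <!-- JavaScript link: works fine for the user, but most crawlers do not follow it. -->
    <a href="#" onclick="window.location='https://example.com/page'; return false;">JS link</a>

    <!-- Nofollow link: the user follows it as usual, but it passes no endorsement to the target. -->
    <a href="https://example.com/page" rel="nofollow">nofollow link</a>
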
Let's leave unindexable pages (prohibited, for example, by a robots.txt file) and search results out of this story. The question is: how different are JavaScript links, redirects, and plain links for the user? The answer is – well, they are the same. Search engines, however, handle them differently at the moment. JavaScript links are not treated as links at all, and redirects are shaky ground. Should it be so? I think all links should be handled the same, because they do the same job for the user. Perhaps it is still impossible to fully evaluate JavaScript outside the browser, but I expect there will soon be fewer ways to trick link partners. For now, such tricks mostly just skew the site map.
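
To make the comparison concrete, here is a sketch of two redirect flavors that land the user on exactly the same page as a plain link would (the target URL is hypothetical); a server-side 301 response does the same job at the HTTP level:

    <!-- Meta refresh: the browser loads the target immediately. -->
    <meta http-equiv="refresh" content="0; url=https://example.com/target">

    <!-- JavaScript redirect: same destination for the visitor, but invisible
         to any crawler that does not execute scripts. -->
    <script type="text/javascript">
    window.location.href = "https://example.com/target";
    </script>

For the visitor, all of these roads end on the same page; only the crawler sees them differently.
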
Of course, there will be some members-only areas. That is fine. But I still want to be able to find all the freely accessible content on the web, regardless of the linking strategy a site uses.

Categories: Programming

Giedrius Majauskas

I am an internet company owner and project manager living in Lithuania. I am interested in computer security, health, and technology topics.
