Monday, November 28, 2011

An Explanatory Guide To Search Engines

By Jeffrey Rush


Imagining the World Wide Web is not simple, so the best way is to picture it as a network of underground train stations, where every stop or station is a web page or piece of information. The 'links' between pages are the tracks that let you travel from one station to another. Without links, the billions of web-based documents would not be connected, and search engines would have no way to navigate between them. This vast network of documents is explored by specially designed automated robots, known as "spiders" or "crawlers", which gather information on behalf of the search engines. The data these robots collect can then be analysed. This process is commonly called "information retrieval", and the individual findings are known as "hits". Across the world and the Web, the vast majority of search engine traffic is handled by just three search engine companies.
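To make the spider idea concrete, here is a minimal sketch of link-following in Python. The "web" here is just an invented in-memory dictionary of pages and the pages they link to; a real crawler fetches pages over the network, but the traversal logic is the same:

```python
from collections import deque

# A toy "web": each page maps to the pages it links to.
# All page names here are invented for illustration.
TOY_WEB = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home", "widgets"],
    "widgets": ["products"],
    "orphan": [],  # nothing links here, so a crawler never finds it
}

def crawl(web, start):
    """Breadth-first traversal: follow links outward from a start page,
    visiting each page exactly once -- the essence of what a spider does."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in web.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl(TOY_WEB, "home"))
```

Note that the "orphan" page is never reached: exactly as the analogy suggests, a page with no tracks leading to it is invisible to the spiders.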

Data storage centres hold the immense hard drives that store the information gathered by the spiders and crawlers, and these centres are connected to their counterparts in a variety of locations across the world.

The search engine operators store specially selected sections of the pages visited, so that when users request information the necessary pages can be found as quickly as possible - often in well under a second. Search engines are now so fast that if a query is not answered within a couple of seconds, the response is considered slow.
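The reason lookups can come back in well under a second is that the stored pages are organised ahead of time. A common technique for this is an inverted index, sketched below with invented page names and text: for each word, record which pages contain it, so answering a query is a single lookup rather than a rescan of every page.

```python
# A minimal inverted index, built from invented sample pages.
pages = {
    "page1": "underground train stations connect the network",
    "page2": "search engines crawl the web network",
    "page3": "train timetables and stations",
}

def build_index(pages):
    """Map each word to the set of pages containing it."""
    index = {}
    for name, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(name)
    return index

index = build_index(pages)
# Finding every page that mentions a word is now one dictionary access.
print(sorted(index["train"]))
print(sorted(index["network"]))
```

Real systems add far more (phrase positions, compression, sharding across data centres), but this is the basic trade: do the heavy work once at indexing time so each query is cheap.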

The term given to a user's request for information is "search query". The results are listed for the user in descending order, with the most relevant or important at the top. Many different factors are used to determine the relevance of search results, and several types of search engine exist, each using different techniques. When search engine technology was in its infancy in the mid-1990s, a page was considered relevant simply if it contained the keyword used in the search. Compared with the approaches used now this was very simplistic, and unfortunately it often led to irrelevant pages being brought up in searches, which frustrated users.

Deciding what information is important to the user is vital if a search engine is to succeed in its work. A page or document's popularity is used as a crucial input to the mathematical formulas that determine how important the information is to the person conducting the search; in search engine terminology these inputs are called 'ranking factors'. The philosophy underlying this process is that if a piece of information is already popular with users, it should be considered relevant to other users as well. The system for showing the final results is simple and user-friendly, with the most important and relevant information ranked highest on the list. Because the big commercial search engines keep the exact details of how they rank websites secret, the field of Search Engine Optimisation has developed, offering companies a way of climbing the search engine rankings.
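The idea of combining relevance with a popularity-style ranking factor can be sketched very roughly. The pages, popularity scores, and weighting below are all invented for illustration; real engines combine hundreds of signals, but the principle of "relevant first, popular among the relevant higher still" looks something like this:

```python
# Invented sample data: each page has some text and a popularity score
# between 0 and 1 (imagine it derived from how often users link to it).
pages = {
    "page1": {"text": "cheap train tickets", "popularity": 0.2},
    "page2": {"text": "train station map and train times", "popularity": 0.9},
    "page3": {"text": "bus routes", "popularity": 0.8},
}

def score(page, query_terms):
    """Toy ranking: count query-term matches, then boost by popularity.
    A popular page that never mentions the query still scores zero."""
    words = page["text"].lower().split()
    relevance = sum(words.count(term) for term in query_terms)
    return relevance * (1 + page["popularity"])

def search(pages, query):
    """Return matching page names, best score first."""
    terms = query.lower().split()
    ranked = sorted(pages, key=lambda name: score(pages[name], terms),
                    reverse=True)
    return [name for name in ranked if score(pages[name], terms) > 0]

print(search(pages, "train"))
```

Here "page2" wins because it both mentions "train" twice and is popular, while the popular but irrelevant "page3" is excluded entirely. It is precisely because the real weightings are secret that SEO practitioners spend so much effort inferring them.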



