2.1 How Google finds sites and pages

All major search engines use spider programs (also known as crawlers or robots) to scour the web, collect documents, give each a unique reference, scan their text, and hand them off to an indexing program. Where the scan picks up hyperlinks to other documents, those documents are fetched in their turn. Google's spider is called Googlebot, and you can see it hitting your site if you look at your web logs. A typical Googlebot entry (in the user-agent, or browser, field of your logs) might look like this:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
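If you want to check this for yourself, a short script can pull Googlebot visits out of a standard access log. The sketch below is only illustrative: it assumes the common "combined" log format, where the user-agent is the last quoted field, and a hypothetical file name of access.log (substitute your own log path). It simply counts which URLs were requested in lines whose user-agent mentions Googlebot.

    # Minimal sketch: count pages fetched by Googlebot in a web server access log.
    # Assumes the combined log format and a hypothetical file path (access.log).
    import re
    from collections import Counter

    LOG_FILE = "access.log"  # hypothetical path; replace with your own log file

    googlebot_hits = Counter()

    with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
        for line in log:
            # Only consider lines whose user-agent string mentions Googlebot.
            if "Googlebot" not in line:
                continue
            # Pull the requested path out of the quoted request field.
            match = re.search(r'"(?:GET|POST|HEAD) (\S+)', line)
            if match:
                googlebot_hits[match.group(1)] += 1

    # Show the ten pages Googlebot fetched most often.
    for path, count in googlebot_hits.most_common(10):
        print(f"{count:5d}  {path}")

Running this over a few weeks of logs gives a rough picture of how often Googlebot returns and which pages it revisits, which is the simplest way to confirm that your site is being crawled at all.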