Author Topic: SEARCH ENGINES  (Read 815 times)


Offline DE_Light (Lead Admin)

SEARCH ENGINES
« on: June 07, 2012, 09:43:45 PM »

Search engines are programs that search documents for specified keywords and return a list of the documents where the keywords were found. A search engine is really a general class of programs; however, the term is often used to specifically describe systems like Google, Bing and Yahoo! Search that enable users to search for documents on the World Wide Web.

Web Search Engines

Typically, Web search engines work by sending out a spider to fetch as many documents as possible. Another program, called an indexer, then reads these documents and creates an index based on the words contained in each document. Each search engine uses a proprietary algorithm to create its indices such that, ideally, only meaningful results are returned for each query.
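The spider/indexer split described above can be sketched in a few lines. This is a minimal illustration, not any engine's actual algorithm: the hard-coded documents stand in for pages a spider would have fetched, and the indexer builds a simple inverted index (word to document IDs). A real indexer would also handle stemming, stop words, and ranking.

```python
def build_index(documents):
    """Map each word to the set of document IDs that contain it."""
    index = {}
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

# Stand-ins for pages a spider would fetch
docs = {
    "page1": "Search engines index documents",
    "page2": "Web crawlers fetch documents",
}

index = build_index(docs)
print(sorted(index["documents"]))  # ['page1', 'page2']
```

Because every word maps directly to the documents that contain it, answering a query later is a dictionary lookup rather than a scan of every page.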

HOW DO WEB SEARCH ENGINES WORK?

Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be virtually impossible to locate anything on the Web without knowing a specific URL. But do you know how search engines work? And do you know what makes some search engines more effective than others?
When people use the term search engine in relation to the Web, they are usually referring to the actual search forms that search through databases of HTML documents, initially gathered by a robot.
There are basically three types of search engines: those powered by robots (called crawlers, ants, or spiders), those powered by human submissions, and those that are a hybrid of the two.

Crawler-based search engines use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and follow the links the site connects to, indexing all linked Web sites as well. The crawler returns all that information to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed; the frequency with which this happens is determined by the administrators of the search engine.
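What a crawler extracts from a single page can be illustrated with Python's standard-library HTML parser. The HTML string below is a stand-in for a fetched page; a real crawler would download it over the network (e.g. with urllib), respect robots.txt, and queue the discovered links for its next visits.

```python
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Collect a page's meta tags and the outgoing links to follow."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

# Stand-in for a page the crawler has just fetched
html = """<html><head>
<meta name="keywords" content="search, engines">
</head><body><a href="https://example.com/next">Next</a></body></html>"""

parser = PageParser()
parser.feed(html)
print(parser.meta)   # {'keywords': 'search, engines'}
print(parser.links)  # ['https://example.com/next']
```

The meta tags feed the indexer, while the collected links are what let the crawler spread outward from one site to the pages it connects to.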
Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.

In both cases, when you query a search engine to locate information, you're actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected, stored, and subsequently searched. This explains why a search on a commercial search engine, such as Yahoo! or Google, will sometimes return results that are, in fact, dead links. Since the search results are based on the index, if the index hasn't been updated since a Web page became invalid, the search engine treats the page as still active even though it no longer is. It will remain that way until the index is updated.
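Searching the index rather than the live Web can be made concrete with a tiny sketch. The inverted index below is hypothetical (the kind an indexer would have produced earlier), and a multi-word query simply intersects the document sets for each word; a "dead link" is just an indexed ID whose page no longer exists.

```python
# A hypothetical prebuilt inverted index: word -> document IDs
index = {
    "search":  {"page1", "page3"},
    "engines": {"page1"},
}

def query(index, *words):
    """Return the documents that contain all of the given words."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

print(query(index, "search", "engines"))  # {'page1'}
```

Note that nothing here touches the Web at query time: if page1 were taken offline after indexing, it would still be returned until the crawler revisits it and the index is rebuilt.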

 

Copyright © 3rd Planet Techies. All rights Reserved.