Site Owners Forums - Webmaster Forums


rajivwebads 08-29-2012 04:08 AM

Web Crawler
 
A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner.
Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or (especially in the FOAF community) Web scutters.
This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
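
To make the seed-and-frontier idea concrete, here is a rough sketch in Python using only the standard library. This is my own illustration, not any real search engine's code; the names crawl and LinkExtractor and the page limit are invented:

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seeds, max_pages=50):
        """Visit pages starting from the seed URLs, pushing newly found
        links onto the crawl frontier until max_pages have been fetched."""
        frontier = deque(seeds)   # the crawl frontier: URLs still to visit
        visited = set()
        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue  # unreachable or broken page; skip it
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                frontier.append(urljoin(url, link))  # resolve relative links
        return visited

A call like crawl(["http://example.com/"]) (example.com being a placeholder) returns the set of URLs actually fetched.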
The large volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time a crawler revisits a page, it might have already been updated or even deleted.
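
Prioritizing downloads usually means replacing the plain first-in-first-out frontier with a priority queue. The sketch below is again my own illustration; the scoring rule is an arbitrary stand-in, since real selection policies use signals such as link counts or PageRank:

    import heapq

    class PriorityFrontier:
        """A crawl frontier that yields the highest-priority URL first."""
        def __init__(self):
            self._heap = []
            self._seen = set()

        def score(self, url):
            # Stand-in priority rule: fewer path segments = shallower page
            # = downloaded sooner. Real crawlers use far richer signals.
            return url.count("/")

        def push(self, url):
            if url not in self._seen:
                self._seen.add(url)
                heapq.heappush(self._heap, (self.score(url), url))

        def pop(self):
            return heapq.heappop(self._heap)[1]  # URL with the best score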
The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer a few display options to users, specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
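
One common countermeasure is URL canonicalization: strip the presentation-only parameters and sort the rest, so that all 48 variants collapse to a single URL before the frontier ever sees them. A sketch, with parameter names invented to match the gallery example above:

    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    # Hypothetical presentation-only parameters from the gallery example.
    IGNORED_PARAMS = {"sort", "thumbsize", "format", "hideusercontent"}

    def canonicalize(url):
        """Drop presentation-only query parameters and sort the rest."""
        parts = urlparse(url)
        params = sorted((k, v) for k, v in parse_qsl(parts.query)
                        if k not in IGNORED_PARAMS)
        return urlunparse(parts._replace(query=urlencode(params)))

For example, canonicalize("http://example.com/gallery?thumbsize=small&sort=date&album=7") and canonicalize("http://example.com/gallery?album=7&sort=name") both reduce to http://example.com/gallery?album=7, so the crawler fetches that content only once.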
As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained." A crawler must carefully choose at each step which pages to visit next.
The behavior of a Web crawler is the outcome of a combination of policies:
• a selection policy that states which pages to download,
• a re-visit policy that states when to check for changes to the pages,
• a politeness policy that states how to avoid overloading Web sites (see the sketch after this list), and
• a parallelization policy that states how to coordinate distributed Web crawlers.
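
As a rough illustration of the politeness policy only (the other three are analogous bookkeeping), the sketch below checks robots.txt and enforces a per-host delay. The 2-second default is an arbitrary choice for illustration:

    import time
    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    DEFAULT_DELAY = 2.0   # arbitrary fallback delay between hits on one host
    _robots = {}          # host -> parsed robots.txt
    _last_fetch = {}      # host -> time of our last request to that host

    def polite_fetch_allowed(url, agent="*"):
        """Return True once it is polite to fetch url, sleeping if needed."""
        host = urlparse(url).netloc
        if host not in _robots:
            rp = RobotFileParser("http://%s/robots.txt" % host)
            try:
                rp.read()
            except OSError:
                pass  # robots.txt unreachable; the parser then refuses fetches
            _robots[host] = rp
        if not _robots[host].can_fetch(agent, url):
            return False  # robots.txt forbids this URL
        delay = _robots[host].crawl_delay(agent) or DEFAULT_DELAY
        wait = _last_fetch.get(host, 0) + delay - time.time()
        if wait > 0:
            time.sleep(wait)  # respect the per-host delay
        _last_fetch[host] = time.time()
        return True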

johnmathew223 08-29-2012 06:22 AM

Hello,
It's very nice information, thanks for sharing.

kingsweb 08-30-2012 05:05 AM

Nice post. Thanks for sharing.

kevinloyed 08-30-2012 10:19 PM

Nice share. I agree!

rajkum 02-07-2013 04:55 PM

But what is the difference between Google's bot and other search engines' bots? Or are they all the same, even though they don't look it?

samlko 02-08-2013 05:26 AM

A web crawler is a relatively simple automated program, or script, that methodically scans or "crawls" through Internet pages to create an index of the data it's looking for; these programs are usually made to be used only once.

Zora2012 02-12-2013 10:28 PM

A web crawler, also known as a web spider or web robot, is a program or automated script which browses the World Wide Web in a methodical, automated manner. A crawler is a program that visits Web sites and reads their pages and other information in order to create entries for a search engine index. The major search engines on the Web all have such a program.

danish00 02-13-2013 12:55 AM

Thank you so much for giving this kind of information about web crawlers. Actually, I didn't know about web crawlers before. Very informative.

terrijhon 02-13-2013 11:34 PM

Brother, keep sharing such useful information, thanks.

jaysh4922 02-19-2013 02:37 AM

Thanks, nice thread. Spiders, crawlers, or bots are just programs that search engines use to index the content found on the World Wide Web.

outure11 02-22-2013 06:28 AM

Yes, you are right, rajivwebads.

jayanta1 02-22-2013 11:13 PM

A web crawler is a program or automated script which browses the World Wide Web in a methodical, automated manner. A Search Engine Spider is a program that most search engines use to find what’s new on the Internet.

Kunalmathur 02-23-2013 11:59 AM

Thanks for the information given.

