Web Crawler: A Review

Abstract



Information Retrieval deals with searching for and retrieving information within documents; it also searches online databases and the internet. A web crawler is defined as a program or piece of software that traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawlers are usually divided into three types of crawling techniques: General Purpo...
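The "methodical, automated" traversal the abstract describes is, at its core, a graph search over pages and links. A minimal sketch of that idea, using a toy in-memory link graph in place of real HTTP fetches (the URLs and the `LINK_GRAPH` mapping are illustrative, not from the paper):

```python
from collections import deque

# Hypothetical in-memory link graph standing in for real downloads;
# a real crawler would fetch each page and extract its outgoing links.
LINK_GRAPH = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b"],
    "http://example.com/b": ["http://example.com/"],
}

def crawl(seed, fetch_links):
    """Breadth-first traversal: visit each reachable URL exactly once."""
    seen = {seed}            # URLs already discovered (avoids revisits/loops)
    frontier = deque([seed])  # URLs waiting to be downloaded
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)     # "download" the page
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

order = crawl("http://example.com/", lambda u: LINK_GRAPH.get(u, []))
```

Swapping the FIFO frontier for a different queue discipline is what distinguishes the crawling strategies (general-purpose, focused, and so on) that the abstract goes on to classify.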

Similar Resources

Review Paper on Web Crawler

A web crawler is software or a computer program used for browsing the World Wide Web in an ordered manner. The methodology used for this procedure is known as web crawling or spidering. The different search engines that use spidering provide current information. Web crawlers create copies of all the visited web pages, which are used by the search engine as a refe...


A Review of Web Crawler Algorithms

The web today contains a lot of information, and it keeps increasing every day. Thus, due to the availability of abundant data on the web, searching for particular data in this collection has become very difficult. Ongoing research places emphasis on the relevance and robustness of data. Although only relevant pages are to be considered for any search query, huge amounts of data still nee...
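Crawlers that emphasize relevance are usually described as focused (best-first) crawlers: instead of a FIFO queue, the frontier is a priority queue ordered by an estimated relevance score. A minimal sketch under toy assumptions — the pages, link structure, and precomputed `SCORES` are all hypothetical, and a real focused crawler would score pages against the query topic at fetch time:

```python
import heapq

# Toy relevance scores and link graph (illustrative only).
SCORES = {"seed": 0.5, "sports": 0.1, "crawling": 0.9, "spiders": 0.8}
LINKS = {"seed": ["sports", "crawling"], "crawling": ["spiders"],
         "sports": [], "spiders": []}

def focused_crawl(seed, limit):
    """Best-first crawl: always expand the most relevant unvisited page."""
    heap = [(-SCORES[seed], seed)]  # max-heap via negated scores
    seen = {seed}
    visited = []
    while heap and len(visited) < limit:
        _, page = heapq.heappop(heap)
        visited.append(page)
        for link in LINKS[page]:
            if link not in seen:
                seen.add(link)
                heapq.heappush(heap, (-SCORES[link], link))
    return visited

result = focused_crawl("seed", 3)
```

With a budget of three pages, the low-scoring "sports" page is deferred in favor of the on-topic ones — which is exactly how a focused crawler narrows a huge crawl space to relevant pages.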


A Scalable, Distributed Web-Crawler*

In this paper we present the design and implementation of a scalable, distributed web crawler. The motivation for designing such a system is to effectively distribute crawling tasks to different machines in a peer-to-peer distributed network. Such an architecture will lead to scalability and help tame the exponential growth of the crawl space in the World Wide Web. With experiments on the implementation of th...
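One common way to distribute crawling tasks across peers is to partition the URL space by hashing, so each machine owns a disjoint slice of the web. The abstract does not say which scheme this paper uses; the sketch below assumes hashing by host, a frequent choice because it keeps all per-site politeness state on one peer:

```python
import hashlib
from urllib.parse import urlsplit

def assign_machine(url, n_machines):
    """Partition the crawl space: hash the URL's host so every URL from
    the same site is assigned to the same peer (assumed scheme, not
    necessarily the paper's)."""
    host = urlsplit(url).netloc
    digest = hashlib.sha1(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_machines
```

Because the assignment is a pure function of the URL, any peer that discovers a link can compute its owner locally and forward it, with no central coordinator — the property that makes such designs scale in a peer-to-peer network.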


Slug: A Semantic Web Crawler

This paper introduces “Slug”, a web crawler (or “Scutter”) designed for harvesting semantic web content. Implemented in Java using the Jena API, Slug provides a configurable, modular framework that allows a great degree of flexibility in configuring the retrieval, processing, and storage of harvested content. The framework provides an RDF vocabulary for describing crawler configurations and colle...



Journal

Journal Title: International Journal of Computer Applications

Year: 2013

ISSN: 0975-8887

DOI: 10.5120/10440-5125