Search results for: detachment and crawling
Number of results: 16,829,838
Comparable corpora have been used as an alternative to parallel corpora as resources for computational tasks that involve domain-specific natural language processing. One way to gather documents related to a specific topic of interest is to traverse a portion of the web graph in a targeted way, using focused crawling algorithms. In this paper, we compare several focused crawling algorithms usin...
The Lali sub-surface structure, with a NW-SE Zagros trend, is located in the Dezful Embayment. To determine the folding mechanism, structural geometric parameters including limb dip, amplitude, wavelength, and crestal length were determined at four stages during deformation. To investigate the lateral folding mechanism, these geometric parameters were analyzed in three parts of the Lal...
The present experiment examined whether the mental rotation ability of 9-month-old infants was related to their abilities to crawl and manually explore objects. Forty-eight 9-month-old infants were tested; half of them had been crawling for an average of 9.3 weeks. The infants were habituated to a video of a simplified Shepard-Metzler object rotating back and forth through a 240° angle around t...
Web crawling refers to the process of gathering data from the Web. Focused crawlers are programs that selectively download Web documents (pages), restricting the scope of crawling to a pre-defined domain or topic. The downloaded documents can be indexed for a domain specific search engine or a digital library. In this paper, we describe the focused crawling technique, review relevant literature...
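The selective-download behavior described above can be sketched as a small focused crawler. This is a minimal illustration, not the paper's method: the in-memory `PAGES` graph, the term-overlap `relevance` score, and the 0.3 threshold are all assumptions standing in for real HTTP fetching and a trained topic classifier.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
# A real crawler would fetch and parse pages over HTTP instead.
PAGES = {
    "a": ("soil erosion detachment study", ["b", "c"]),
    "b": ("focused crawling of web documents", ["d"]),
    "c": ("cooking recipes and travel tips", ["d"]),
    "d": ("web crawler architecture and indexing", []),
}

def relevance(text, topic_terms):
    """Fraction of topic terms that appear in the page text."""
    words = set(text.split())
    return sum(t in words for t in topic_terms) / len(topic_terms)

def focused_crawl(seed, topic_terms, threshold=0.3):
    """BFS crawl that expands the frontier only from on-topic pages."""
    seen, queue, collected = {seed}, deque([seed]), []
    while queue:
        url = queue.popleft()
        text, links = PAGES[url]
        if relevance(text, topic_terms) >= threshold:
            collected.append(url)          # keep the on-topic page
            for link in links:             # follow links only from here
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
    return collected
```

Starting from seed `"a"` with topic terms like `["crawling", "web", "detachment"]`, the crawler keeps the relevant pages and never expands the off-topic page `"c"`, which is exactly the scope restriction the abstract describes.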
Soil detachment is known as an important process in soil erosion, and its quantification is necessary to establish a basic understanding of erosion. This study was carried out to find the best flow erosivity indicator(s) for predicting detachment rate at low slopes. For this purpose, 12 experiments including 6 flow discharges (75, 100, 125, 150, 175 and 200 ml/s) and 2 slope gradients (1.5 and 2...
A Web crawler is an important component of a Web search engine. It demands a large amount of hardware resources (CPU and memory) to crawl data from the rapidly growing and changing Web, so crawling must be a continuous process, repeated over time to keep the crawled data up to date. This paper develops and investigates the performance of a new approach to speed up the...
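One common way to speed up crawling, since fetching is I/O-bound, is to download pages concurrently. The sketch below is an assumption-laden illustration of that idea only, not the speed-up approach the paper proposes: `fetch` is a stand-in that returns a canned string instead of issuing an HTTP request.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Simulated fetch; a real crawler would issue an HTTP request here.
    return f"<html>content of {url}</html>"

def crawl_batch(urls, workers=4):
    """Fetch a batch of URLs concurrently so network waits overlap."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so results pair back with their URLs
        return dict(zip(urls, pool.map(fetch, urls)))
```

With a real `fetch`, the thread pool lets several slow network requests proceed at once, which is where most of the wall-clock savings come from.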
In many universities it would be useful to have a database of publications that reflects the research results of the academic staff. Such a database can be built by automatically retrieving publication information from faculty homepages. In this project, we deploy focused crawling to build such a system. We also propose a new focused crawling heuristic based on URL classification. We compare...
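A URL-classification heuristic like the one this abstract mentions could, for example, prioritize frontier links whose URL tokens hint at publication pages. The token list and scoring below are hypothetical; the paper's actual classifier features are not given in the abstract.

```python
import re

# Hypothetical tokens suggesting a publication listing page (assumption).
PUBLICATION_HINTS = {"publication", "publications", "papers", "pubs", "research"}

def url_score(url):
    """Count URL path tokens that hint at publication lists."""
    tokens = re.split(r"[/\-_.]+", url.lower())
    return sum(tok in PUBLICATION_HINTS for tok in tokens)

def prioritize(urls):
    """Order frontier URLs so likely publication pages are crawled first."""
    return sorted(urls, key=url_score, reverse=True)
```

The appeal of URL-based scoring is that it needs no page download at all: the crawler can rank links in its frontier before spending any bandwidth on them.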
Information Retrieval deals with searching and retrieving information within documents, including online databases and the internet. A web crawler is a program that traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge used, web crawlers are usually divided into three types of crawling techniques: General Purpo...