Focused Web Crawler For Retrieving Relevant Contents


Manikandan N K, et al.

Abstract

The rapid growth of the World-Wide Web poses unusual scaling challenges for general-purpose crawlers and search engines. We describe a new hypertext resource discovery system called a focused crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics; the topics are specified not with keywords alone but with prototypical documents. Rather than collecting and indexing all accessible Web documents so as to answer every possible ad-hoc query, a focused crawler analyzes its crawl boundary to find the links that are most likely to lead to relevant pages and avoids irrelevant regions of the Web. This yields significant savings in hardware and network resources and helps keep the crawl more up-to-date. The system has two main components: a classifier that evaluates the relevance of hypertext documents with respect to the focus topics, and a distiller that filters content on unwanted topics out of what is retrieved. Focused crawling steadily acquires relevant pages while standard crawling quickly loses its way, even when both start from very similar seeds, and it is robust against large perturbations in the initial set of URLs. Focused crawling is thus an effective way to build good-quality collections of Web documents on specific topics using modest desktop hardware.
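
The abstract gives no pseudocode, so the following is only a minimal best-first sketch of the idea it describes: a priority-ordered crawl frontier in which links found on relevant pages are expanded first and links from irrelevant pages are pruned. A simple keyword-overlap score stands in for the paper's trained classifier, the distiller is omitted, and all names (focused_crawl, relevance, the seed URL) are illustrative, not taken from the paper.

```python
import heapq
import urllib.parse
import urllib.request
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def relevance(text, topic_keywords):
    """Toy stand-in for the paper's classifier: the fraction of topic
    keywords present in the page text (0.0 = irrelevant, 1.0 = all present)."""
    text = text.lower()
    hits = sum(1 for kw in topic_keywords if kw.lower() in text)
    return hits / len(topic_keywords) if topic_keywords else 0.0


def focused_crawl(seed_urls, topic_keywords, max_pages=20, threshold=0.3):
    """Best-first crawl: the frontier is a priority queue keyed by the
    relevance of the page on which each link was found, so promising
    regions of the Web are expanded before (or instead of) irrelevant ones."""
    frontier = [(-1.0, url) for url in seed_urls]  # seeds get top priority
    heapq.heapify(frontier)
    visited, relevant_pages = set(), []

    while frontier and len(visited) < max_pages:
        neg_score, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # unreachable or non-text page: skip it

        score = relevance(html, topic_keywords)
        if score >= threshold:
            relevant_pages.append((url, score))
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urllib.parse.urljoin(url, link)
                if absolute.startswith("http") and absolute not in visited:
                    # Child links inherit the parent's score as their priority.
                    heapq.heappush(frontier, (-score, absolute))
        # Links on pages below the threshold are never enqueued, which is
        # how the crawl boundary avoids irrelevant regions of the Web.

    return relevant_pages


if __name__ == "__main__":
    pages = focused_crawl(
        seed_urls=["https://example.org/"],  # hypothetical seed
        topic_keywords=["crawler", "hypertext", "search"],
        max_pages=10,
    )
    for url, score in pages:
        print(f"{score:.2f}  {url}")
```

In the paper's full system the keyword score would be replaced by the trained topic classifier, and a distiller stage would further rank or filter the harvested pages; the frontier-and-threshold structure above is what lets the crawl stay within relevant regions on modest hardware.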



How to Cite
Manikandan, N. K., et al. (2021). Focused Web Crawler For Retrieving Relevant Contents. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(10), 2025–2028. Retrieved from https://www.turcomat.org/index.php/turkbilmat/article/view/4708