Date of Award

8-3-2007

Degree Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Dr. Charles L. Jaret - Chair

Second Advisor

Dr. Donald C. Reitzes

Third Advisor

Dr. Robert Adelman

Abstract

A large amount of online information resides on the invisible web: web pages generated dynamically from databases and other data sources that are hidden from current crawlers, which retrieve content only from the publicly indexable Web. Specifically, these crawlers ignore the tremendous amount of high-quality content "hidden" behind search forms, along with pages that require authorization or prior registration in large searchable electronic databases. To extract data from the hidden web, it is necessary to find the search forms and fill them with appropriate information so that the maximum amount of relevant information can be retrieved. To meet the complex challenges that arise when searching the hidden web, such as extensive analysis of both the search forms and the retrieved content, it becomes essential to design and implement a distributed web crawler that runs on a network of workstations to extract data from the hidden web. We describe the software architecture of this distributed and scalable system and present a number of novel techniques that went into its design and implementation to extract the maximum amount of relevant data from the hidden web while achieving high performance.
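
The abstract's first step, locating search forms on crawled pages, can be illustrated with a minimal sketch. The following Python snippet (standard library only) fetches a page, collects its form elements, and records the names of their text inputs as candidate query slots; the URL, class, and function names are hypothetical illustrations, not the thesis's actual implementation.

# A minimal, hypothetical sketch of search-form discovery for a hidden-web
# crawler: fetch a page, locate <form> elements, and collect their text
# inputs so a crawler could later submit candidate queries.
from html.parser import HTMLParser
from urllib.request import urlopen


class FormFinder(HTMLParser):
    """Collects forms and the names of their text-like inputs."""

    def __init__(self):
        super().__init__()
        self.forms = []          # list of {"action": ..., "inputs": [...]}
        self._current = None     # form currently being parsed

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action", ""), "inputs": []}
        elif tag == "input" and self._current is not None:
            # Only text-like inputs are useful as query slots.
            if attrs.get("type", "text") in ("text", "search"):
                self._current["inputs"].append(attrs.get("name", ""))

    def handle_endtag(self, tag):
        if tag == "form" and self._current is not None:
            self.forms.append(self._current)
            self._current = None


def find_search_forms(url):
    """Return the search forms discovered on a single page."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = FormFinder()
    parser.feed(html)
    return parser.forms


if __name__ == "__main__":
    # Hypothetical usage: list each form's action URL and its input names.
    for form in find_search_forms("https://example.com"):
        print(form["action"], form["inputs"])

In a distributed setting such as the one the abstract describes, each workstation could run this discovery step on its own partition of URLs and forward the discovered forms to a component that fills and submits them.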

DOI

https://doi.org/10.57709/1059392
