Crawler
Crawler is a tool that visits all pages on a website and stores the page content for further processing. This lets you speed up repeated extraction from the same website. For example, while building an Agent or Extractor you may miss some fields you want to extract, and you would then need to run the Agent or Extractor again to capture them. Scraping data from a local copy of the website is far more efficient than running the Agent or Extractor against the live website again.
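The crawl-then-store workflow could be sketched roughly as follows. This is a minimal illustration using only the Python standard library, not the tool's actual implementation; the function names (`extract_links`, `crawl`), the breadth-first strategy, and the file-naming scheme are all assumptions made for the example.

```python
import urllib.parse
import urllib.request
from html.parser import HTMLParser
from pathlib import Path


class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Return absolute same-site links found in an HTML page."""
    parser = LinkParser()
    parser.feed(html)
    base_host = urllib.parse.urlparse(base_url).netloc
    links = set()
    for href in parser.links:
        url = urllib.parse.urljoin(base_url, href)
        parsed = urllib.parse.urlparse(url)
        # Stay on the same site and ignore mailto:, javascript:, etc.
        if parsed.scheme in ("http", "https") and parsed.netloc == base_host:
            # Drop fragments so page.html and page.html#top count once.
            links.add(parsed._replace(fragment="").geturl())
    return links


def crawl(start_url, out_dir, max_pages=100):
    """Breadth-first crawl of one site, saving each page's HTML to out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    seen, queue, saved = {start_url}, [start_url], 0
    while queue and saved < max_pages:
        url = queue.pop(0)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages, keep crawling
        # Percent-encode the URL so it is a safe, unique filename.
        fname = urllib.parse.quote(url, safe="") + ".html"
        (out / fname).write_text(html, encoding="utf-8")
        saved += 1
        for link in extract_links(html, url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
```

Once the pages are saved locally, re-running an extraction pass only needs to read the stored files, which is where the speed-up over repeated live crawls comes from.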