A crawler is a program used by search engines to collect data from the internet.
When a crawler visits a website, it reads the site's content (i.e. the text) and stores it in a databank. It also stores all of the site's external and internal links. The crawler visits the stored links at a later point in time, which is how it moves from one website to the next. Through this process the crawler captures and indexes every website that is linked to from at least one other crawled website.
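The process above can be sketched as a breadth-first traversal: fetch a page, store its content, extract its links, and queue any unseen links for a later visit. The sketch below is a minimal illustration, not a production crawler; the in-memory `PAGES` dictionary is a hypothetical stand-in for real HTTP fetches, and the URLs in it are made up for the example.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical stand-in for the web: maps URLs to HTML.
# A real crawler would fetch each URL over HTTP instead.
PAGES = {
    "http://a.example": '<a href="http://b.example">B</a> hello',
    "http://b.example": '<a href="http://a.example">A</a> world',
}

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl: store each page's text, queue its links."""
    index = {}                      # url -> page content (the "databank")
    queue = deque([start_url])
    seen = {start_url}              # avoid revisiting the same URL
    while queue:
        url = queue.popleft()
        html = PAGES.get(url)       # real crawler: fetch over HTTP here
        if html is None:
            continue
        index[url] = html           # store the page's content
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:   # store links; visit them later
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("http://a.example")
print(sorted(index))
```

Starting from a single page, the crawler discovers the second page through its link, so both end up in the index. Real crawlers add politeness delays, `robots.txt` handling, and URL normalization on top of this basic loop.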