A search engine builds this index using a program called a ‘web crawler’, which automatically browses the web and stores information about the pages it visits.
Every time a web crawler visits a webpage, it makes a copy of it and adds its URL to an index. Once this is done, the web crawler follows all the links on the page, repeating the process of copying, indexing and then following the links. It keeps doing this, building up a huge index of webpages as it goes.
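To make that loop concrete, here is a minimal crawler sketch in Python using only the standard library. The seed URL and page limit are placeholders, and a real crawler would also handle politeness delays, duplicate content and much larger queues; this is just the copy-index-follow cycle in code.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: copy each page, index its URL, then follow its links."""
    index = {}                      # URL -> stored copy of the page
    queue = deque([seed_url])
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue                # already visited this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue                # unreachable or unsupported URL: skip it
        index[url] = html           # make a copy and add its URL to the index
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:   # follow all the links on the page
            queue.append(urljoin(url, link))
    return index

pages = crawl("https://example.com")
print(len(pages), "pages indexed")
```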
Some websites stop web crawlers from visiting them, usually by listing off-limits pages in a file called robots.txt. These pages are left out of the index, along with any pages that no other page links to.
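A brief sketch of how a crawler might honour those restrictions, using Python's standard urllib.robotparser; the URLs and user-agent name below are placeholders:

```python
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(page_url, robots_url, user_agent="MyCrawler"):
    """Return True if the site's robots.txt permits this crawler to fetch the page."""
    robots = RobotFileParser(robots_url)
    robots.read()                   # download and parse robots.txt
    return robots.can_fetch(user_agent, page_url)

# Pages the site disallows are simply skipped, so they never enter the index.
if allowed_to_crawl("https://example.com/private.html",
                    "https://example.com/robots.txt"):
    print("fetch and index the page")
else:
    print("skip it - it stays out of the index")
```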
The information the web crawler puts together becomes the search engine’s index. Every webpage a search engine returns in its results has been visited by a web crawler.
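In practice, the stored copies are usually processed into an inverted index, which maps each word to the pages that contain it so that queries can be answered quickly. A toy version, with invented sample pages:

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of URLs whose stored copy contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

pages = {
    "https://example.com/a": "web crawlers build the index",
    "https://example.com/b": "search engines use the index",
}
index = build_index(pages)
print(index["index"])   # both URLs: each page contains the word "index"
```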
Search engines are answer machines. When a person performs an online search, the search engine scours its index of billions of documents and does two things: first, it returns only those results that are relevant or useful to the searcher’s query; second, it ranks those results according to the popularity of the websites serving the information. It is both relevance and popularity that the process of SEO is meant to influence.
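As a rough illustration of those two steps, the sketch below filters a toy index for pages matching every query word (relevance), then orders the matches by a popularity score, here an invented link count. Real search engines combine hundreds of far subtler signals.

```python
def search(query, index, popularity):
    """Relevance first: keep only pages containing every query word.
    Popularity second: order the survivors by score, highest first."""
    words = query.lower().split()
    relevant = set.intersection(*(index.get(w, set()) for w in words))
    return sorted(relevant, key=lambda url: popularity.get(url, 0), reverse=True)

# Tiny hand-made index and link-count scores, invented for illustration.
index = {
    "seo":    {"https://example.com/a", "https://example.com/b"},
    "basics": {"https://example.com/a"},
}
popularity = {"https://example.com/a": 12, "https://example.com/b": 87}

print(search("seo", index, popularity))         # b first: it is more popular
print(search("seo basics", index, popularity))  # only a matches both words
```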