Web crawler


A web crawler, also referred to as a spider or ant, is a tool that systematically navigates the Internet[1] to gather and index information from web pages. Starting from a base list of URLs, known as seeds, it follows links to collect and store content, which is then archived in a repository for later use. Crucial to the functioning of search engines, web crawlers fetch, parse, and store web data to keep search databases up to date. They operate under policies that govern page selection, revisit scheduling, politeness, and parallelization, and they rely on algorithms and optimization techniques for efficiency while facing challenges such as spam and duplicate content. Identifying crawlers is important for preventing server overloads and for security[2] purposes, since they pose a risk of data breaches when they index sensitive resources.

Definitions of terms
1. Internet. The Internet is a global system of interconnected computer networks that use standardized communication protocols, chiefly TCP/IP, to link devices worldwide. Derived from the term "internetted" used in 1849, the word "Internet" was later used by the United States War Department in 1945. Its development began with computer scientists who created time-sharing systems in the 1960s and progressed with the creation of ARPANET in 1969. The Internet is self-governing, with no central authority, and its principal name spaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). It has profoundly transformed traditional means of communication and has grown exponentially over the years, with the number of Internet users increasing by 20% to 50% per year. In 2019, more than half of the world's population was using the Internet. The Internet protocol suite, which comprises TCP/IP and four conceptual layers, guides Internet packets to their destination. Essential services such as e-mail and Internet telephony run on the Internet. The World Wide Web, a global collection of interconnected documents, is a key component of the Internet.
2. security. Security, as a term, originates from the Latin 'securus,' meaning free from worry. It is a concept that refers to the state of being protected from potential harm or threats. This protection can apply to a wide range of referents, including individuals, groups, institutions, or even ecosystems. Security is closely linked with the environment of the referent and can be influenced by different factors that can make it either beneficial or hostile. Various methods can be employed to ensure security, including protective and warning systems, diplomacy, and policy implementation. The effectiveness of these security measures can vary, and perceptions of security can differ widely. Important security concepts include access control, assurance, authorization, cipher, and countermeasures. The United Nations also plays a significant role in global security, focusing on areas like soil health and food security.

A web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (web spidering).
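To make this concrete, here is a minimal sketch of the seed-and-frontier loop described in the summary above, written in Python against the standard library only; the seed URL, page limit, and helper names are illustrative assumptions rather than details from the source.

    # Minimal crawl loop: start from seed URLs, follow links, store pages.
    # Standard library only; names and limits here are illustrative.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import urllib.request

    class LinkExtractor(HTMLParser):
        """Collects the href targets of <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seeds, max_pages=10):
        frontier = deque(seeds)   # URLs waiting to be visited
        visited = set()           # selection policy: never refetch a URL
        repository = {}           # archived page content, keyed by URL
        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    html = response.read().decode("utf-8", errors="replace")
            except OSError:
                continue          # unreachable page: skip and move on
            repository[url] = html
            extractor = LinkExtractor()
            extractor.feed(html)
            for link in extractor.links:
                frontier.append(urljoin(url, link))  # resolve relative links
        return repository

    pages = crawl(["https://example.com/"])

Starting from the seeds, the sketch dequeues a URL, archives the fetched page in a repository, extracts its outgoing links, and appends them to the frontier, which is exactly the crawl cycle the summary describes.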

[Figure: Architecture of a Web crawler]

Web search engines and some other websites use Web crawling or spidering software to update their own web content or their indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.
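As a rough illustration of that indexing step, the sketch below assumes the downloaded pages already sit in a url-to-text mapping such as the repository built earlier, and derives an inverted index from terms to the URLs that contain them; the tokenization rule is an assumption chosen for brevity.

    # Toy inverted index over downloaded pages: term -> set of URLs.
    # Assumes a {url: text} mapping; real engines also parse HTML, rank, etc.
    import re
    from collections import defaultdict

    def build_index(repository):
        index = defaultdict(set)
        for url, text in repository.items():
            for term in re.findall(r"[a-z0-9]+", text.lower()):
                index[term].add(url)
        return index

    def lookup(index, term):
        return sorted(index.get(term.lower(), ()))

A query then reduces to a dictionary lookup over the precomputed index, which is what lets users search far more efficiently than scanning pages on demand.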

Crawlers consume resources on visited systems and often visit sites unprompted. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. For example, including a robots.txt file can request bots to index only parts of a website, or nothing at all.
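As a sketch of how a polite crawler can honor such a file, Python's standard-library robots.txt parser can be consulted before each fetch; the user-agent string and URLs below are illustrative assumptions.

    # Check robots.txt before fetching; user agent and URLs are examples.
    import urllib.robotparser

    AGENT = "ExampleCrawler"  # hypothetical crawler name

    robots = urllib.robotparser.RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()             # download and parse the site's robots.txt

    url = "https://example.com/private/page.html"
    if robots.can_fetch(AGENT, url):
        print("allowed to fetch:", url)
    else:
        print("robots.txt disallows:", url)

    # A Crawl-delay directive, when present, can feed the politeness policy:
    delay = robots.crawl_delay(AGENT)   # None if the site sets no delay

Sleeping for the advertised delay between requests to the same host is one simple way to address the schedule and load concerns mentioned above.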

The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are given almost instantly.

Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping and data-driven programming.
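For the hyperlink-validation use just mentioned, a minimal sketch might issue a HEAD request per link and report those that fail; the link list here is an illustrative assumption.

    # Report hyperlinks that do not resolve to a successful HTTP response.
    import urllib.error
    import urllib.request

    def check_links(urls):
        broken = []
        for url in urls:
            request = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    if response.status >= 400:
                        broken.append((url, response.status))
            except urllib.error.HTTPError as err:
                broken.append((url, err.code))   # e.g. 404 Not Found
            except OSError:
                broken.append((url, None))       # DNS failure, timeout, etc.
        return broken

    print(check_links(["https://example.com/", "https://example.com/missing"]))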

" Retour à l'index des glossaires
fr_FRFR
Retour en haut