Web Crawler

A web crawler, also referred to as a spider or ant, is a tool that systematically navigates the internet[1] to gather and index information from web pages. Starting from a base list of URLs, known as seeds, it follows links to collect and store content, archiving the data in a repository for later use. Crucial to the functionality of search engines, web crawlers fetch, parse, and store web data to keep search indices up to date. They operate under set policies that govern their selection, revisit, politeness, and parallelization behavior. Algorithms and optimization techniques enhance their efficiency, and they face challenges such as spam and duplicate content. Identifying crawlers is vital for preventing server overloads and for security[2] purposes, since crawlers that index sensitive resources pose a risk of data breaches.
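
To make the seed-follow-archive loop concrete, here is a minimal sketch of a breadth-first crawler in Python. It is illustrative only: the seed URL, the `max_pages` cap, and the in-memory dict used as the repository are assumptions for the example, not how any particular search engine operates.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=50):
    """Breadth-first crawl: start from seed URLs, follow links,
    and archive each fetched page in a simple repository (a dict)."""
    frontier = deque(seeds)     # URLs waiting to be visited
    visited = set()             # selection policy: fetch each URL at most once
    repository = {}             # URL -> raw page content

    while frontier and len(repository) < max_pages:
        url = urldefrag(frontier.popleft()).url   # drop #fragment parts
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue            # skip unreachable or malformed URLs
        repository[url] = html  # archive the page for later indexing
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            frontier.append(urljoin(url, link))   # resolve relative links
    return repository

if __name__ == "__main__":
    pages = crawl(["https://example.com/"], max_pages=5)
    print(f"Archived {len(pages)} pages")
```

A real crawler would add the policies named above: revisit scheduling to refresh stale pages, politeness delays per host, and parallel workers sharing the frontier.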

Term definitions
1. internet. The Internet is a global system of interconnected computer networks that use standardized communication protocols, primarily TCP/IP, to link devices around the world. Originating from the term "internetted" used in 1849, the word "Internet" was later used by the US War Department in 1945. Its development began with computer scientists creating time-sharing systems in the 1960s and progressed with the creation of ARPANET in 1969. The Internet is self-governing, with no central authority, and its principal namespaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). It has significantly transformed traditional media and has grown exponentially over the years, with Internet users increasing by 20% to 50% annually. In 2019, more than half of the world's population used the Internet. The Internet protocol suite, which includes TCP/IP and four conceptual layers, guides Internet packets to their destinations. Essential services such as email and Internet telephony operate on the Internet. The World Wide Web, a global collection of interlinked documents, is a key component of the Internet.
2. security. Security, as a term, originates from the Latin 'securus,' meaning free from worry. It is a concept that refers to the state of being protected from potential harm or threats. This protection can apply to a wide range of referents, including individuals, groups, institutions, or even ecosystems. Security is closely linked with the environment of the referent and can be influenced by different factors that can make it either beneficial or hostile. Various methods can be employed to ensure security, including protective and warning systems, diplomacy, and policy implementation. The effectiveness of these security measures can vary, and perceptions of security can differ widely. Important security concepts include access control, assurance, authorization, cipher, and countermeasures. The United Nations also plays a significant role in global security, focusing on areas like soil health and food security.
Web Crawler (Wikipedia)

A web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (web spidering).

[Figure: Architecture of a Web crawler]

Web search engines and some other websites use Web crawling or spidering software to update their web content or indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.

Crawlers consume resources on visited systems and often visit sites unprompted. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. For example, including a robots.txt file can request bots to index only parts of a website, or nothing at all.
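
As a sketch of how a polite crawler can honor these mechanisms, the following snippet uses the standard library's `urllib.robotparser` to check whether a given path may be fetched. The `ExampleBot` user-agent and the example.com URLs are hypothetical placeholders.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity; real crawlers publish their user-agent string.
USER_AGENT = "ExampleBot"

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()   # fetch and parse the site's robots.txt

# Ask before fetching: a polite crawler skips disallowed paths.
for path in ("https://example.com/", "https://example.com/private/page"):
    if robots.can_fetch(USER_AGENT, path):
        print(f"allowed:    {path}")
    else:
        print(f"disallowed: {path}")

# Some sites also declare a Crawl-delay; honor it between requests if present.
delay = robots.crawl_delay(USER_AGENT)   # None when the directive is absent
```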

The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are given almost instantly.

Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping and data-driven programming.
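
As one illustration of hyperlink validation, the sketch below issues a HEAD request and reports the HTTP status code. The `check_link` helper is hypothetical and assumes simple HTTP semantics, where a 4xx or 5xx status marks a broken link.

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def check_link(url, timeout=10):
    """Return the HTTP status for a URL, or None if unreachable.
    A crawler can flag 4xx/5xx responses as broken hyperlinks."""
    request = Request(url, method="HEAD")   # HEAD avoids downloading the body
    try:
        with urlopen(request, timeout=timeout) as response:
            return response.status
    except HTTPError as error:
        return error.code                   # e.g. 404 for a dead link
    except URLError:
        return None                         # DNS failure, refused connection, etc.

print(check_link("https://example.com/"))   # 200 when the host is reachable
```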

" Voltar ao Índice do Glossário
pt_PT_ao90PT
Deslocar para o topo