Web scraping


Web scraping is a method of extracting data from websites. It originated with the creation of the World Wide Web[1] Wanderer in 1993, and its techniques have evolved with technological advances. These techniques include text pattern matching, HTTP programming, and HTML[3] parsing. Despite its utility, web scraping has faced legal issues, with differing global perspectives on its legality. Some consider it a violation of website[2] terms of service, leading to legal disputes. Nevertheless, many websites offer public data access via web APIs, which were first introduced in 2000. This method of data collection is continually developing, with its legality and techniques subject to ongoing changes.

Terms definitions
1. World Wide Web (WWW) The World Wide Web, often referred to as the Web, is a widespread information system platform that billions of people interact with daily. Invented by Tim Berners-Lee in 1989 at the European Organization for Nuclear Research (CERN), the Web was designed to support connections between multiple databases on different computers. Its function is to facilitate content sharing over the Internet in a user-friendly manner. This is achieved through web servers that make documents and media content available. Users can locate and access these resources through Uniform Resource Locators (URLs). The Web supports various content types and allows for easy navigation across websites via hyperlinks. Its use extends to various sectors including education, entertainment, commerce, and government, with information provided by companies, organizations, government agencies, and individual users.
2. Website A website is a collection of interconnected web pages, usually including a homepage, located on the same server and prepared and maintained as a collection of data by a person, group, or organization. Websites are a cornerstone of the internet, serving as hubs for information, commerce, communication, and entertainment. They can take various forms, such as business sites, gaming sites, academic platforms, or social networking sites. Websites have evolved over time, from text and static images to dynamic, interactive multimedia platforms. The development and functionality of websites are governed by web standards set by the World Wide Web Consortium (W3C). Websites are also influenced by advancements in web server technology and design principles such as responsive design.
Web scraping (Wikipedia)

Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may directly access the World Wide Web using the Hypertext Transfer Protocol or a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.

Scraping a web page involves fetching it and extracting from it. Fetching is the downloading of a page (which a browser does when a user views a page). Therefore, web crawling is a main component of web scraping, to fetch pages for later processing. Once fetched, extraction can take place. The content of a page may be parsed, searched and reformatted, and its data copied into a spreadsheet or loaded into a database. Web scrapers typically take something out of a page, to make use of it for another purpose somewhere else. An example would be finding and copying names and telephone numbers, companies and their URLs, or e-mail addresses to a list (contact scraping).
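The contact-scraping example above can be sketched with simple text pattern matching. The snippet below skips the network fetch and extracts e-mail addresses from an already-downloaded HTML snippet; the page content and addresses are invented for illustration, and the regular expression is deliberately simplified.

```python
import re

# A page as it might look after fetching (invented sample content).
html = """
<html><body>
  <p>Sales: <a href="mailto:sales@example.com">sales@example.com</a></p>
  <p>Support: support@example.org</p>
</body></html>
"""

# Text pattern matching: a deliberately simple e-mail regex.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrape_emails(page: str) -> list[str]:
    """Extract unique e-mail addresses, preserving first-seen order."""
    seen = []
    for match in EMAIL_RE.findall(page):
        if match not in seen:
            seen.append(match)
    return seen

print(scrape_emails(html))  # ['sales@example.com', 'support@example.org']
```

In a real scraper the `html` string would come from an HTTP fetch; the extraction step is the same either way.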

As well as contact scraping, web scraping is used as a component of applications used for web indexing, web mining and data mining, online price change monitoring and price comparison, product review scraping (to watch the competition), gathering real estate listings, weather data monitoring, website change detection, research, tracking online presence and reputation, web mashup, and web data integration.

Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human end-users and not for ease of automated use. As a result, specialized tools and software have been developed to facilitate the scraping of web pages.
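Because pages are written in HTML rather than a machine-friendly format, scrapers usually rely on an HTML parser instead of raw string handling. As one minimal sketch using only Python's standard library, the parser below collects link URLs from a sample page (the markup and URLs are invented for the example):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Invented sample markup standing in for a fetched page.
page = """
<html><body>
  <a href="https://example.com/products">Products</a>
  <a href="https://example.com/contact">Contact</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
```

Dedicated scraping libraries offer more convenient selector-based APIs, but the principle is the same: parse the markup into a structure, then pull out the elements of interest.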

Newer forms of web scraping involve monitoring data feeds from web servers. For example, JSON is commonly used as a transport mechanism between the client and the web server.
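When a site exposes such a JSON feed, no HTML parsing is needed at all. The sketch below parses a payload of the kind a price-monitoring feed might return; the field names and values are invented for the example.

```python
import json

# A JSON payload as a (hypothetical) product feed might return it.
payload = '''
{
  "products": [
    {"sku": "A-100", "name": "Widget", "price": 19.99},
    {"sku": "B-200", "name": "Gadget", "price": 24.50}
  ]
}
'''

data = json.loads(payload)
# Structured feeds let the scraper address fields directly.
prices = {item["sku"]: item["price"] for item in data["products"]}
print(prices)  # {'A-100': 19.99, 'B-200': 24.5}
```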

There are methods that some websites use to prevent web scraping, such as detecting and disallowing bots from crawling (viewing) their pages. In response, there are web scraping systems that rely on techniques in DOM parsing, computer vision, and natural language processing to simulate human browsing, enabling the gathering of web page content for offline parsing.
