Self-supervised learning


Self-supervised learning is a subtype of machine learning[2] in which the model generates its own supervisory signals from the data itself, removing the need for externally provided labels[3]. It includes techniques such as autoassociative and contrastive learning; the latter trains neural networks on both positive and negative example pairs. Notable models in this domain include Facebook[4]'s wav2vec and Google[5]'s BERT. Self-supervised learning is distinct from unsupervised, semi-supervised, transfer, and reinforcement learning in several ways. For instance, unlike semi-supervised learning, it does not depend on any labels in the sample data. It has been used extensively in various fields, including speech recognition[1] and natural language processing, and recent research continues to broaden its application and understanding.
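To make the contrastive idea concrete, here is a minimal sketch of an InfoNCE-style contrastive loss: an anchor embedding is scored against one positive and several negative examples, and the loss is low when the anchor is most similar to its positive. The vectors, temperature value, and function names are illustrative assumptions, not part of any specific model mentioned above.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style loss: pull the anchor toward its positive pair
    # and push it away from the negative examples via a softmax.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# An anchor near its positive and far from the negative gives a low loss;
# swapping the roles gives a high loss.
low = contrastive_loss([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0]])
high = contrastive_loss([1.0, 0.0], [-0.9, 0.1], [[1.0, 0.0]])
```

Minimizing this quantity over many (anchor, positive, negatives) triples is what drives representations of related samples together without any human-assigned labels.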

Terms definitions
1. speech recognition. Speech recognition is a technological advancement that allows computers to interpret and understand human speech, converting it into a format that the computer can understand. This technology was initially developed in the 1950s by Bell Labs with a device named Audrey, specifically designed for single-speaker digit recognition. Over the years, the technology has developed through notable milestones such as IBM's demonstration of speech recognition at the 1962 World's Fair, the proposal of linear predictive coding in 1966, and DARPA's funding of Speech Understanding Research in 1971. Further advances and methods like Hidden Markov models and deep learning techniques have significantly improved the accuracy of speech recognition. This technology is now applied in various sectors including in-car systems, education, healthcare, and government intelligence. Its primary function is to translate spoken language into written text, but it has also proven critical in diagnosing and treating speech disorders.
2. machine learning. Machine learning, a term coined by Arthur Samuel in 1959, is a field of study that originated from the pursuit of artificial intelligence. It employs techniques that allow computers to improve their performance over time through experience. This learning process often mimics the human cognitive process. Machine learning applies to various areas such as natural language processing, computer vision, and speech recognition. It also finds use in practical sectors like agriculture, medicine, and business for predictive analytics. Theoretical frameworks such as the Probably Approximately Correct learning and concepts like data mining and mathematical optimization form the foundation of machine learning. Specialized techniques include supervised and unsupervised learning, reinforcement learning, and dimensionality reduction, among others.

Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on external labels provided by humans. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed in a way that creates pairs of related samples. One sample serves as the input, and the other is used to formulate the supervisory signal. This augmentation can involve introducing noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects.

The typical SSL method is based on an artificial neural network or other model such as a decision list. The model learns in two steps. First, an auxiliary or pretext classification task is solved using pseudo-labels, which helps initialize the model parameters. Second, the actual task is performed with supervised or unsupervised learning. Other auxiliary tasks involve pattern completion from masked input patterns (silent pauses in speech, or image portions masked in black).
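The two-step procedure can be illustrated with a deliberately tiny model: a single weight fit by gradient descent. Step one solves a pretext task whose pseudo-labels come from the data itself (here, predicting the next value in a sequence); step two fine-tunes the resulting parameter on a handful of genuinely labeled examples. The data and learning rates are illustrative assumptions, not a real SSL pipeline.

```python
def train_linear(pairs, w=0.0, lr=0.01, epochs=200):
    # Fit y ~ w * x by stochastic gradient descent on squared error.
    for _ in range(epochs):
        for x, y in pairs:
            err = w * x - y
            w -= lr * err * x
    return w

# Step 1: pretext task -- predict the next value from the current one,
# using pseudo-labels drawn from the unlabeled data itself.
signal = [0.0, 2.0, 4.0, 6.0, 8.0]
pretext_pairs = [(signal[i], signal[i + 1]) for i in range(len(signal) - 1)]
w_init = train_linear(pretext_pairs)  # captures the series' structure

# Step 2: the actual task starts from the pretext-initialized weight
# and fine-tunes on a few genuinely labeled examples.
labeled = [(1.0, 2.1), (2.0, 3.9)]
w_final = train_linear(labeled, w=w_init, epochs=50)
```

In practice the pretext task initializes millions of network weights rather than one, but the structure is the same: pseudo-labeled pretraining followed by supervised (or further unsupervised) training on the downstream task.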

Self-supervised learning has produced promising results in recent years and has found practical application in audio processing; Facebook, among others, uses it for speech recognition.
