Fine-tuning is a technique in artificial intelligence[1], specifically in machine learning[2], that improves the performance of a pre-existing neural network by reusing and adjusting some of its parameters. It is a form of transfer learning: knowledge gained on one task is applied to a related task. Fine-tuning may update the entire network or only a subset of its layers, and the model may be augmented with small adapter modules. The technique is particularly effective in natural language processing, notably for language modeling. Fine-tuning can, however, reduce a model's robustness, which strategies such as linearly interpolating between the original and fine-tuned weights can help mitigate. Various methods, including low-rank adaptation (LoRA), offer parameter-efficient approaches to fine-tuning.
In deep learning, fine-tuning is an approach to transfer learning in which the weights of a pre-trained model are further trained on new data. Fine-tuning can be done on the entire neural network, or on only a subset of its layers, in which case the layers that are not being fine-tuned are "frozen" (not updated during the backpropagation step). A model may also be augmented with "adapters", which consist of far fewer parameters than the original model, and fine-tuned in a parameter-efficient way by training only the adapter weights while leaving the rest of the model's weights frozen.
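The adapter idea can be illustrated with a minimal LoRA-style sketch in PyTorch. The class name `LoRALinear`, the rank `r`, and the layer dimensions below are illustrative choices, not taken from any particular library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank adapter.

    The pre-trained base weight stays fixed; only the small A and B
    matrices (rank r, much smaller than the layer width) get gradients.
    """
    def __init__(self, base: nn.Linear, r: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # B starts at zero, so the adapter initially leaves the
        # base layer's output unchanged.
        self.A = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(r, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # the adapter is a small fraction of the layer
```

Here the adapter contributes 12,288 trainable parameters against roughly 590,000 frozen ones, which is what makes this style of fine-tuning parameter-efficient.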
For some architectures, such as convolutional neural networks, it is common to keep the earlier layers (those closest to the input layer) frozen, because they capture lower-level features, while later layers often capture high-level features that are more specific to the task the model is trained on.
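Freezing early layers is a one-line operation in PyTorch. The tiny network below is a hypothetical stand-in for a pre-trained convolutional backbone; only its first convolutional block is frozen:

```python
import torch.nn as nn

# Hypothetical small CNN standing in for a pre-trained backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early layers: generic edges/textures
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # later layers: task-specific features
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Freeze the first conv layer; backpropagation will skip its weights.
for p in model[0].parameters():
    p.requires_grad = False

# The optimizer is then built over the remaining trainable parameters only.
trainable = [p for p in model.parameters() if p.requires_grad]
```

In practice the same loop is applied to however many early blocks of the real backbone one wishes to keep fixed.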
Models that are pre-trained on large and general corpora are usually fine-tuned by reusing the model's parameters as a starting point and adding a task-specific layer trained from scratch. Fine-tuning the full model is common as well and often yields better results, but it is more computationally expensive.
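The two options described above, reusing a pre-trained body with a freshly initialized task-specific layer versus fine-tuning the full model, can be sketched as follows. The encoder here is a placeholder for a real pre-trained network, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained encoder (its weights are assumed already learned).
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))

# Task-specific layer, initialized from scratch (e.g. a 3-class classifier).
head = nn.Linear(64, 3)
model = nn.Sequential(encoder, head)

# Option 1: freeze the encoder and train only the new head.
for p in encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Option 2 (full fine-tuning) would instead optimize model.parameters(),
# usually with a smaller learning rate, at greater computational cost.
```

Option 1 is cheap because gradients are only needed for the head; option 2 updates every weight and, as noted above, often yields better results.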
Fine-tuning is typically accomplished with supervised learning, but there are also techniques to fine-tune a model using weak supervision. Fine-tuning can be combined with a reinforcement learning from human feedback-based objective to produce language models such as ChatGPT (a fine-tuned version of GPT-3) and Sparrow.