---
date: 2024-07-05
title: "Prompting and In-Context Learning"
status: UNFINISHED
author:
  - AllenYGY
tags:
  - NOTE
publish: True
---
# Prompting and In-Context Learning

## Issues with Fine-Tuning
- Fine-tuning needs a large task-specific dataset for every task
- Training never ends, and the result does not generalize across tasks:
	- Collect data for task A, fine-tune the model to solve task A
	- Collect data for task B, fine-tune the model to solve task B
	- ...
- Prone to overfitting
	- Large models adapt to a very narrow task distribution and may exploit spurious correlations
- Fine-tuning large models is expensive in time, memory, and compute cost
## How can we adapt a pre-trained model without fine-tuning?

## Prompting
- Prompt-based learning happens at inference time (no parameter updates)
- A new paradigm in deep learning / machine learning (NLP, CV)
- Encourage a pre-trained model to make particular predictions by providing a "prompt" that instructs the model on how to perform the task
### The General Workflow of Prompting
- Prompt addition
- Answer prediction (search)
- Answer post-processing
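The three steps can be sketched for a cloze-style sentiment task. This is a minimal illustration, not a library API: the template, the answer space, and the `score_fn` argument (a stand-in for a real pre-trained model scoring candidates at the masked slot) are all hypothetical.

```python
# Cloze-style template: the model fills the {answer} slot.
TEMPLATE = "Review: {text} Overall, it was a {answer} movie."

# Answer space: candidate fillers and the task labels they map to.
ANSWER_MAP = {"great": "positive", "terrible": "negative"}

def add_prompt(text: str) -> str:
    """Step 1: prompt addition -- wrap the raw input in the template."""
    return TEMPLATE.format(text=text, answer="[MASK]")

def predict_answer(prompt: str, score_fn) -> str:
    """Step 2: answer prediction -- search the answer space for the
    candidate the model scores highest at the masked position."""
    return max(ANSWER_MAP, key=lambda cand: score_fn(prompt, cand))

def post_process(answer: str) -> str:
    """Step 3: post-processing -- map the predicted word to a task label."""
    return ANSWER_MAP[answer]
```

In practice `score_fn` would query a masked language model (e.g. via a fill-mask head); here any callable that scores `(prompt, candidate)` pairs works.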
## Design Considerations for Prompting
- Pre-trained model choice
- Prompt engineering
- Answer engineering
- Multi-Prompt learning
## The Elements of Prompting

A prompt may contain any of the following elements:
- Instruction: a description of the specific task you want the model to perform
- Context: external information or additional context that can steer the model toward better responses
- Input data: the input or question we want a response for
- Output indicator: the type or format of the expected output
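A small sketch of how the four elements compose into a single prompt string; the section labels ("Context:", "Input:", "Output format:") are illustrative conventions, not a required format.

```python
def build_prompt(instruction, input_data, context=None, output_indicator=None):
    """Assemble a prompt from the four elements. Only the instruction and
    the input data are required; context and output indicator are optional."""
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Input: {input_data}")
    if output_indicator:
        parts.append(f"Output format: {output_indicator}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Classify the sentiment of the review.",
    context="Reviews come from a movie website.",
    input_data="The plot dragged, but the acting was superb.",
    output_indicator="One word: positive or negative",
)
```

Keeping the elements as separate arguments makes it easy to ablate them, e.g. dropping the context to test how much it steers the model.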
## Applications of Prompting
- Text classification
- Text summarization
- Information extraction
- Question Answering
- Conversation
- Code generation
- Reasoning
## Techniques of Prompting
- Zero-shot Prompting (no examples in the prompt)
- Few-shot Prompting (a few demonstrations in the prompt)
- Chain-of-Thought Prompting
- Self-Consistency
- Tree of Thoughts
- Multimodal CoT Prompting
- Active-Prompt
- Generate Knowledge Prompting
- Retrieval Augmented Generation
- Automatic Reasoning and Tool-use
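The first two techniques, plus self-consistency, can be contrasted with a short sketch. The translation task, the `->` separator, and the majority-vote helper are illustrative choices under the assumption that the model continues the demonstrated pattern; they are not a fixed format from any library.

```python
from collections import Counter

def zero_shot_prompt(instruction, query):
    """Zero-shot: only the task description and the query, no examples."""
    return f"{instruction}\n{query} ->"

def few_shot_prompt(instruction, examples, query):
    """Few-shot: prepend solved (input, output) demonstrations so the
    model can infer the task pattern in context."""
    demos = "\n".join(f"{x} -> {y}" for x, y in examples)
    return f"{instruction}\n{demos}\n{query} ->"

def self_consistency(sampled_answers):
    """Self-consistency: sample several reasoning chains from the model
    and keep the majority-vote final answer."""
    return Counter(sampled_answers).most_common(1)[0][0]
```

For example, `few_shot_prompt("Translate English to French.", [("cheese", "fromage"), ("cat", "chat")], "dog")` yields a prompt whose demonstrations establish the input/output pattern the model should continue; self-consistency would then aggregate several sampled completions instead of trusting a single one.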