Few-Shot Learning
Few-shot learning is the ability of an AI model to adapt to a new task or output format from just a small number of input-output examples provided in the prompt, without any weight updates.
Understanding Few-Shot Learning
Few-shot learning is one of the most practically useful properties of large language models. By including a few examples of the desired input-output mapping in the prompt, you can reliably steer the model toward a specific output format, style, or reasoning pattern. This is also called in-context learning because the learning happens in the context window rather than through gradient updates.

For example, showing a model three examples of how to extract task details from emails teaches it to extract tasks consistently from new emails, even when they are phrased very differently. This is far more sample-efficient than traditional supervised learning, which requires thousands of labeled examples to achieve similar consistency.

Few-shot prompting is particularly powerful for structured output tasks: extracting specific fields from unstructured text, converting descriptions into JSON objects, or classifying items into categories. The examples define both the expected format and the decision criteria implicitly.

The optimal number of shots varies by task and model. More examples generally improve consistency but consume context window space. For complex extraction tasks, three to ten examples typically provide a good balance. Advanced techniques like chain-of-thought few-shot learning include reasoning steps in the examples to improve performance on complex reasoning tasks.
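The email-extraction example above can be sketched as prompt construction code. This is a minimal illustration, not a specific provider's API: the example emails, the `task`/`due_date` schema, and the chat-style message shape are all hypothetical stand-ins for whatever your task and model interface require.

```python
import json

# Instruction establishing the task; the few-shot pairs below establish the format.
SYSTEM = "Extract the task from the email as JSON with keys: task, due_date."

# Each shot pairs an input email with the exact output we want the model to mimic.
# These examples are invented for illustration.
FEW_SHOT_EXAMPLES = [
    (
        "Hi, can you send the quarterly report by Friday? Thanks!",
        {"task": "Send the quarterly report", "due_date": "Friday"},
    ),
    (
        "Reminder: the budget review meeting notes are due tomorrow.",
        {"task": "Submit budget review meeting notes", "due_date": "tomorrow"},
    ),
    (
        "No rush, but please update the onboarding docs by end of month.",
        {"task": "Update the onboarding docs", "due_date": "end of month"},
    ),
]

def build_few_shot_messages(new_email: str) -> list[dict]:
    """Assemble a chat-style message list: system prompt, then alternating
    user/assistant example pairs, then the new input to extract from."""
    messages = [{"role": "system", "content": SYSTEM}]
    for email, output in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": email})
        messages.append({"role": "assistant", "content": json.dumps(output)})
    messages.append({"role": "user", "content": new_email})
    return messages

messages = build_few_shot_messages(
    "Could you book the team offsite venue by next Tuesday?"
)
```

The alternating user/assistant pairs let the model see three complete demonstrations of the mapping before it sees the new email, which is what makes the output format consistent across differently phrased inputs.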
How GAIA Uses Few-Shot Learning
GAIA uses few-shot examples in prompts for tasks requiring consistent structured output, such as extracting task details from emails, parsing calendar event information from natural language, or categorizing messages by urgency. By providing representative examples, GAIA's prompts ensure the LLM returns data in the exact format needed for downstream processing and tool invocation.
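Because downstream processing depends on the model honoring the format the few-shot examples establish, it helps to validate replies before tool invocation. The sketch below assumes a hypothetical urgency-categorization schema (`summary`/`urgency` fields, three urgency levels); it is not GAIA's actual schema.

```python
import json

# Hypothetical schema that the few-shot examples would have demonstrated.
ALLOWED_URGENCY = {"low", "normal", "high"}
REQUIRED_KEYS = {"summary", "urgency"}

def parse_categorized_message(raw: str) -> dict:
    """Parse the model's JSON reply and reject anything that drifts from
    the format the few-shot examples established."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if data["urgency"] not in ALLOWED_URGENCY:
        raise ValueError(f"unexpected urgency: {data['urgency']!r}")
    return data

result = parse_categorized_message(
    '{"summary": "Production server is down", "urgency": "high"}'
)
```

Failing fast on malformed output keeps format drift from propagating into tool calls; the caller can retry the prompt or fall back rather than act on bad data.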
Related Concepts
Zero-Shot Learning
Zero-shot learning is the ability of an AI model to perform tasks it has never explicitly been trained on, relying on general knowledge and reasoning rather than task-specific examples.
Prompt Engineering
Prompt engineering is the practice of designing and refining inputs to AI language models to reliably elicit desired outputs, shaping model behavior without modifying the underlying weights.
Large Language Model (LLM)
A Large Language Model (LLM) is a deep learning model trained on massive text datasets that can understand, generate, and reason about human language across a wide range of tasks.
Chain-of-Thought Reasoning
Chain-of-thought (CoT) reasoning is a prompting technique that instructs an AI model to articulate its intermediate reasoning steps before producing a final answer, significantly improving accuracy on complex multi-step problems.


