Constitutional AI
Constitutional AI (CAI) is a training methodology developed by Anthropic that aligns AI models with human values by having the AI evaluate and revise its own outputs against a written set of principles — a 'constitution' — rather than relying exclusively on human-labeled preference data.
Understanding Constitutional AI
Introduced by Anthropic in 2022, Constitutional AI was designed to address a scalability limitation of RLHF: as models become more capable, human evaluators may struggle to reliably judge which outputs are better. CAI replaces some human feedback with AI feedback: the model is prompted to critique its own responses against a constitution of principles (e.g., 'Is this response harmful?', 'Is this response honest?') and then to revise them.

The process has two main phases. In the supervised learning phase, the model generates responses, critiques them against constitutional principles, and revises them, producing a synthetic dataset of improved responses on which the model is then fine-tuned. In the Reinforcement Learning from AI Feedback (RLAIF) phase, a preference model is trained on AI-generated comparisons rather than human comparisons, and that preference model then provides the reward signal for fine-tuning the base model with reinforcement learning.

The 'constitution' itself is a human-authored document: a list of principles describing what the AI should and should not do. Anthropic's constitution draws on sources including the UN Declaration of Human Rights and existing AI ethics frameworks. By encoding values explicitly in language rather than implicitly through human preference ratings, CAI makes the alignment process more interpretable and easier to adjust.

Constitutional AI is most closely associated with Claude, Anthropic's family of AI models. It complements rather than replaces RLHF; most deployed models use both techniques.
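To make the two phases concrete, here is a minimal sketch of the data-generation loop, assuming a generic generate() call as a stand-in for the underlying model; the two principles and the prompt wording are illustrative placeholders, not Anthropic's actual constitution or prompts.

# Minimal sketch of the Constitutional AI data-generation loop.
# `generate` is a stand-in for any LLM completion call; the principles
# below are illustrative placeholders.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError("wire up a model or API client here")

def critique_and_revise(user_prompt: str) -> tuple[str, str]:
    """Supervised phase: draft, self-critique, and revise.
    The (prompt, revision) pairs form the synthetic fine-tuning set."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {response}\n"
            "Point out any way the response violates the principle."
        )
        response = generate(
            f"Response: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return user_prompt, response

def ai_preference_label(prompt: str, a: str, b: str) -> str:
    """RLAIF phase: an AI labeler compares two responses against the
    constitution, producing the comparisons that train the preference
    model used as the RL reward signal."""
    verdict = generate(
        f"Prompt: {prompt}\nResponse A: {a}\nResponse B: {b}\n"
        f"Principles: {' '.join(CONSTITUTION)}\n"
        "Which response better follows the principles? Answer A or B."
    )
    return "A" if verdict.strip().upper().startswith("A") else "B"

In practice both phases run at scale over large prompt sets, but the structure is the same: self-critique and revision to build the supervised dataset, then AI-labeled comparisons to train the preference model.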
How GAIA uses Constitutional AI
GAIA can be configured to run on Claude, Anthropic's Constitutional AI-trained model family, which brings the safety and helpfulness properties of CAI to GAIA's autonomous operations. When GAIA manages sensitive personal data across email, calendar, and task systems, the underlying model's alignment, including its reluctance to take harmful actions or violate user privacy, directly shapes what GAIA will and will not do autonomously.
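As an illustration of what running on Claude looks like at the API level, here is a minimal sketch using Anthropic's Python SDK; the model identifier and the system prompt are placeholder assumptions for illustration, and GAIA's actual configuration surface is not shown in this article.

from anthropic import Anthropic

# Hypothetical illustration of routing an agent's requests to Claude.
# The model id and system prompt are example values, not GAIA's
# real configuration.
client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example Claude model id
    max_tokens=1024,
    system="You are GAIA, an assistant managing email, calendar, and tasks.",
    messages=[{"role": "user", "content": "Summarize today's meetings."}],
)
print(response.content[0].text)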
Related concepts
Reinforcement Learning from Human Feedback (RLHF)
Reinforcement Learning from Human Feedback (RLHF) is a machine learning technique that trains AI models to produce outputs preferred by humans by learning from human-provided rankings or ratings rather than purely from raw data.
Human in the Loop (HITL)
Human in the loop (HITL) is a design pattern in which an AI system includes human oversight and validation at key decision points, ensuring that sensitive or high-impact actions require human confirmation before execution.
Large Language Model (LLM)
A Large Language Model (LLM) is a deep learning model trained on vast text corpora, capable of understanding, generating, and reasoning about human language across a wide variety of tasks.
Fine-tuning
Fine-tuning is the process of continuing the training of a pre-trained AI model on a smaller, task-specific dataset to adapt its behavior to a particular domain or application.
AI Agent
An AI agent is an autonomous software system that perceives its environment, reasons about which actions to take, and acts to achieve specific goals without continuous human intervention.


