Agents that learn from interactions, improve over time, and adapt to changing requirements
Self-Evolving agents continuously improve their performance by learning from feedback, adapting to user preferences, and optimizing their approaches based on historical outcomes.
The Self-Evolving primitive enables agents to learn and improve over time without manual retraining. By analyzing past interactions, collecting feedback, and identifying patterns, self-evolving agents become more effective, personalized, and efficient with each use.

Self-evolving capabilities include:
Learning from Feedback: Incorporate user corrections and preferences into future responses
Performance Optimization: Identify and adopt more efficient approaches based on outcomes
Personalization: Adapt to individual user styles, preferences, and requirements
Pattern Recognition: Discover and apply successful patterns from historical data
Continuous Improvement: Automatically refine behaviors without manual intervention
Feedback Integration: Learn from user corrections, ratings, and explicit feedback to improve responses
Performance Tracking: Monitor success metrics and favor the strategies that work best (a client-side tracking sketch follows below)
User Adaptation: Personalize behavior based on individual user preferences and patterns
Automatic Refinement: Continuously improve without manual updates or retraining
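Performance tracking can also be mirrored on the client side so you can audit which strategies the agent is converging on. Below is a minimal sketch, assuming a hypothetical PerformanceTracker that tallies the same 1-5 ratings you submit via provideFeedback, grouped by an approach label you assign; none of this is part of the SDK.

```typescript
// Hypothetical client-side tally of feedback ratings per approach label
class PerformanceTracker {
  private ratings = new Map<string, number[]>();

  // Record the same 1-5 rating you send via provideFeedback
  record(approach: string, rating: number): void {
    const list = this.ratings.get(approach) ?? [];
    list.push(rating);
    this.ratings.set(approach, list);
  }

  // Average rating for one approach (0 if unseen)
  average(approach: string): number {
    const list = this.ratings.get(approach) ?? [];
    return list.length ? list.reduce((a, b) => a + b, 0) / list.length : 0;
  }

  // The approach users have rated highest so far
  best(): string | undefined {
    let best: string | undefined;
    for (const approach of this.ratings.keys()) {
      if (best === undefined || this.average(approach) > this.average(best)) {
        best = approach;
      }
    }
    return best;
  }
}

// Usage: label each response style, then tally its ratings
const tracker = new PerformanceTracker();
tracker.record('code-samples', 5);
tracker.record('theory-first', 2);
console.log(tracker.best()); // 'code-samples'
```

Pairing a tally like this with corrective feedback makes it easy to check whether the strategies the agent favors match the ones users actually rate highest.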
Baseline Behavior: Agent starts with general capabilities
Interaction: User interacts with agent, provides tasks and feedback
Data Collection: System records interactions, outcomes, and feedback
Pattern Analysis: Identify successful patterns and common preferences
Behavior Update: Adjust agent behavior based on learnings
Validation: Test improvements to ensure quality increases
Deployment: Apply learnings to future interactions (one pass through this loop is sketched below)
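A minimal sketch of one pass through this loop, using the runAgent and provideFeedback calls covered in the examples below; the task messages, the string-typed session handle, and the length-based validation check are illustrative assumptions.

```typescript
import { Agentbase } from '@agentbase/sdk';

const agentbase = new Agentbase({
  apiKey: process.env.AGENTBASE_API_KEY
});

// One pass through the learning loop for an existing user session
async function learningIteration(userSession: string) {
  // Interaction: the user runs a task
  const result = await agentbase.runAgent({
    message: "Summarize this week's metrics",
    session: userSession
  });

  // Data collection: record the outcome and the user's feedback
  await agentbase.provideFeedback({
    session: userSession,
    messageId: result.messageId,
    feedback: { rating: 3, comment: "Accurate, but too long" }
  });

  // Validation: run a similar task and check that the behavior update took
  const next = await agentbase.runAgent({
    message: "Summarize today's metrics",
    session: userSession
  });
  console.log('Improved?', next.message.length < result.message.length);
}
```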
Privacy-First Learning: All learning happens within your isolated sessions and accounts. Learnings are never shared across different users or organizations.
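Because learning state lives inside the session, a simple isolation pattern is to keep one dedicated, long-lived session per user. Below is a minimal sketch; the getUserSession helper (echoing the one used in the personalization example later) and its placeholder session handles are assumptions, not SDK APIs.

```typescript
import { Agentbase } from '@agentbase/sdk';

const agentbase = new Agentbase({
  apiKey: process.env.AGENTBASE_API_KEY
});

// Hypothetical helper: one dedicated session per user, created on first use,
// so each user's learnings stay inside their own session
const sessions = new Map<string, string>();
async function getUserSession(userId: string): Promise<string> {
  if (!sessions.has(userId)) {
    sessions.set(userId, `session-${userId}`); // placeholder session handle
  }
  return sessions.get(userId)!;
}

async function runForUser(userId: string, message: string) {
  const userSession = await getUserSession(userId);
  return agentbase.runAgent({ message, session: userSession });
}

// Feedback from one user never shapes another user's responses:
await runForUser('alice', 'Draft our release notes');
await runForUser('bob', 'Draft our release notes'); // unaffected by Alice's learnings
```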
```typescript
import { Agentbase } from '@agentbase/sdk';

const agentbase = new Agentbase({
  apiKey: process.env.AGENTBASE_API_KEY
});

// Initial request
const result = await agentbase.runAgent({
  message: "Write a product description for our SaaS tool",
  session: userSession
});

console.log('Initial response:', result.message);

// User provides feedback
await agentbase.provideFeedback({
  session: userSession,
  messageId: result.messageId,
  feedback: {
    rating: 3,
    comment: "Good but too technical. Use simpler language and focus on benefits."
  }
});

// Next request learns from feedback
const improved = await agentbase.runAgent({
  message: "Write another product description for our analytics feature",
  session: userSession
  // Agent now uses simpler language and focuses on benefits
});
```
Different feedback types drive different learning:
Explicit Ratings
```typescript
// Simple rating feedback
await agentbase.provideFeedback({
  session: userSession,
  messageId: messageId,
  feedback: {
    rating: 4, // 1-5 scale
    helpful: true
  }
});

// Agent learns: this type of response received a positive rating
```
Corrective Feedback
```typescript
// Specific corrections
await agentbase.provideFeedback({
  session: userSession,
  messageId: messageId,
  feedback: {
    rating: 2,
    correction: "Use more specific examples",
    preferredApproach: "Show code samples instead of theory"
  }
});

// Agent learns: for this user, prefer code samples over theory
```
```typescript
async function personalizedContentAgent(userId: string) {
  const userSession = await getUserSession(userId);

  // First use: generic style
  const post1 = await agentbase.runAgent({
    message: "Write a blog post about AI trends",
    session: userSession
  });

  // User feedback: "Too casual, I prefer professional tone"
  await agentbase.provideFeedback({
    session: userSession,
    messageId: post1.messageId,
    feedback: {
      rating: 3,
      correction: "Use more professional tone, less casual language"
    }
  });

  // Second use: agent adapts to professional tone
  const post2 = await agentbase.runAgent({
    message: "Write a blog post about cloud computing",
    session: userSession
    // Automatically uses professional tone learned from feedback
  });

  // Over time, agent learns user's complete style preferences:
  // - Tone: Professional
  // - Length: 500-700 words
  // - Structure: Introduction, 3 main points, conclusion
  // - Technical depth: Intermediate
}
```
```typescript
// Reinforce good behavior
await agentbase.provideFeedback({
  session,
  messageId,
  feedback: {
    rating: 5,
    whatWorked: "Perfect level of detail, great examples",
    keepDoing: "This explanation style is ideal"
  }
});

// Also correct issues
await agentbase.provideFeedback({
  session,
  messageId,
  feedback: {
    rating: 2,
    whatNeedsFix: "Too verbose, took too long",
    preferInstead: "Concise summary with link to details"
  }
});
```
```typescript
// Long-term learning across sessions
const result = await agentbase.runAgent({
  message: "Continue our project",
  session: longTermSession
  // All previous interactions and feedback available
  // Agent has learned preferences over weeks/months
});
```
```typescript
// Efficient learning configuration
const result = await agentbase.runAgent({
  message: "Task with optimized learning",
  session: userSession,
  learning: {
    enabled: true,
    frequency: "adaptive", // Learn more from early feedback, stabilize over time
    cachePatterns: true,   // Cache learned patterns for speed
    batchUpdates: true     // Batch feedback processing
  }
});
```
Remember: Self-evolution requires consistent feedback and sufficient interaction history. Start with explicit preferences, then let the agent learn your patterns naturally over time.
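For instance, you might seed a fresh session with explicit preferences in the very first request and then let ratings refine them over time. This sketch reuses only the runAgent and provideFeedback calls from the examples above; the preference wording and the in-scope userSession are illustrative.

```typescript
// Seed a new session with explicit preferences up front
const first = await agentbase.runAgent({
  message: [
    "Preferences for all future work:",
    "- Professional tone",
    "- Posts of 500-700 words",
    "- Code samples over theory",
    "Now write a blog post about API design."
  ].join('\n'),
  session: userSession
});

// From here on, ratings and corrections fine-tune what the agent learned
await agentbase.provideFeedback({
  session: userSession,
  messageId: first.messageId,
  feedback: {
    rating: 4,
    comment: "Good tone; add one more code sample next time."
  }
});
```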