In this tutorial, we'll explore how to create an advanced self-improving AI agent using Google's Gemini API. The agent demonstrates autonomous problem-solving, dynamically evaluates its own performance, learns from successes and failures, and iteratively enhances its capabilities through reflective analysis and self-modification. The tutorial walks through a structured code implementation, detailing mechanisms for memory management, capability tracking, iterative task analysis, solution generation, and performance evaluation, all integrated within a self-learning feedback loop.
import google.generativeai as genai
import json
import time
import re
from typing import Dict, List, Any
from datetime import datetime
import traceback
We set up the foundational components to build an AI-powered self-improving agent using Google's Generative AI API. Libraries such as json, time, re, and datetime facilitate structured data management, performance tracking, and text processing, while type hints (Dict, List, Any) help ensure robust and maintainable code.
class SelfImprovingAgent:
    def __init__(self, api_key: str):
        """Initialize the self-improving agent with the Gemini API"""
        genai.configure(api_key=api_key)
        self.model = genai.GenerativeModel('gemini-1.5-flash')
        self.memory = {
            'successful_strategies': [],
            'failed_attempts': [],
            'learned_patterns': [],
            'performance_metrics': [],
            'code_improvements': []
        }
        self.capabilities = {
            'problem_solving': 0.5,
            'code_generation': 0.5,
            'learning_efficiency': 0.5,
            'error_handling': 0.5
        }
        self.iteration_count = 0
        self.improvement_history = []
    def analyze_task(self, task: str) -> Dict[str, Any]:
        """Analyze a given task and determine an approach"""
        analysis_prompt = f"""
        Analyze this task and provide a structured approach:
        Task: {task}
        Please provide:
        1. Task complexity (1-10)
        2. Required skills
        3. Potential challenges
        4. Recommended approach
        5. Success criteria
        Format as JSON.
        """
        try:
            response = self.model.generate_content(analysis_prompt)
            json_match = re.search(r'\{.*\}', response.text, re.DOTALL)
            if json_match:
                return json.loads(json_match.group())
            else:
                return {
                    "complexity": 5,
                    "skills": ["general problem solving"],
                    "challenges": ["undefined requirements"],
                    "approach": "iterative improvement",
                    "success_criteria": ["task completion"]
                }
        except Exception as e:
            print(f"Task analysis error: {e}")
            return {"complexity": 5, "skills": [], "challenges": [], "approach": "basic", "success_criteria": []}
    def solve_problem(self, problem: str) -> Dict[str, Any]:
        """Attempt to solve a problem using current capabilities"""
        self.iteration_count += 1
        print(f"\n=== Iteration {self.iteration_count} ===")
        print(f"Problem: {problem}")
        task_analysis = self.analyze_task(problem)
        print(f"Task Analysis: {task_analysis}")
        solution_prompt = f"""
        Based on my previous learning and capabilities, solve this problem:
        Problem: {problem}
        My current capabilities: {self.capabilities}
        Previous successful strategies: {self.memory['successful_strategies'][-3:]}
        Known patterns: {self.memory['learned_patterns'][-3:]}
        Provide a detailed solution with:
        1. Step-by-step approach
        2. Code implementation (if applicable)
        3. Expected outcome
        4. Potential improvements
        """
        try:
            start_time = time.time()
            response = self.model.generate_content(solution_prompt)
            solve_time = time.time() - start_time
            solution = {
                'problem': problem,
                'solution': response.text,
                'solve_time': solve_time,
                'iteration': self.iteration_count,
                'task_analysis': task_analysis
            }
            quality_score = self.evaluate_solution(solution)
            solution['quality_score'] = quality_score
            self.memory['performance_metrics'].append({
                'iteration': self.iteration_count,
                'quality': quality_score,
                'time': solve_time,
                'complexity': task_analysis.get('complexity', 5)
            })
            if quality_score > 0.7:
                self.memory['successful_strategies'].append(solution)
                print(f"✅ Solution Quality: {quality_score:.2f} (Success)")
            else:
                self.memory['failed_attempts'].append(solution)
                print(f"❌ Solution Quality: {quality_score:.2f} (Needs Improvement)")
            return solution
        except Exception as e:
            print(f"Problem solving error: {e}")
            error_solution = {
                'problem': problem,
                'solution': f"Error occurred: {str(e)}",
                'solve_time': 0,
                'iteration': self.iteration_count,
                'quality_score': 0.0,
                'error': str(e)
            }
            self.memory['failed_attempts'].append(error_solution)
            return error_solution
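The try/except blocks in solve_problem give up after a single failed API call. A generic retry-with-backoff wrapper, a hypothetical addition not in the original tutorial, could harden the `generate_content` calls against transient errors such as rate limits:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 1.0) -> T:
    """Call `fn`, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the last error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")
```

Usage inside the agent would then look like `response = with_retries(lambda: self.model.generate_content(solution_prompt))`, leaving the existing except branch to handle only permanent failures.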
    def evaluate_solution(self, solution: Dict[str, Any]) -> float:
        """Evaluate the quality of a solution"""
        evaluation_prompt = f"""
        Evaluate this solution on a scale of 0.0 to 1.0:
        Problem: {solution['problem']}
        Solution: {solution['solution'][:500]}
        Rate based on:
        1. Completeness (addresses all aspects)
        2. Correctness (logically sound)
        3. Clarity (well explained)
        4. Practicality (implementable)
        5. Innovation (creative approach)
        Respond with just a decimal number between 0.0 and 1.0.
        """
        try:
            response = self.model.generate_content(evaluation_prompt)
            score_match = re.search(r'(\d+\.?\d*)', response.text)
            if score_match:
                score = float(score_match.group(1))
                return min(max(score, 0.0), 1.0)
            return 0.5
        except Exception:
            return 0.5
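The score-parsing step is fragile when the model replies with extra prose around the number. A small helper, a sketch with names of my own choosing, isolates the extract-then-clamp behavior so it can be verified without the API:

```python
import re


def parse_score(text: str, default: float = 0.5) -> float:
    """Extract the first decimal number from `text`, clamped to [0.0, 1.0].

    Returns `default` when no number is found, mirroring the agent's
    neutral 0.5 fallback.
    """
    match = re.search(r'(\d+\.?\d*)', text)
    if not match:
        return default
    return min(max(float(match.group(1)), 0.0), 1.0)
```

Clamping matters because models occasionally answer on the wrong scale (e.g. "8/10"), and an out-of-range score would distort every downstream metric.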
    def learn_from_experience(self):
        """Analyze past performance and improve capabilities"""
        print("\n🧠 Learning from experience...")
        if len(self.memory['performance_metrics']) < 2:
            return
        # One plausible update rule: nudge each capability score toward the
        # average quality of recent solutions, and record the observation
        recent = self.memory['performance_metrics'][-5:]
        avg_quality = sum(m['quality'] for m in recent) / len(recent)
        for skill in self.capabilities:
            self.capabilities[skill] = min(1.0, 0.9 * self.capabilities[skill] + 0.1 * avg_quality)
        self.memory['learned_patterns'].append({
            'iteration': self.iteration_count,
            'avg_recent_quality': avg_quality
        })

    def generate_improved_code(self, current_code: str, improvement_goal: str) -> str:
"""Generate improved model of code"""
improvement_prompt = f"""
Enhance this code primarily based on the objective:
Present Code:
{current_code}
Enchancment Purpose: {improvement_goal}
My present capabilities: {self.capabilities}
Realized patterns: {self.reminiscence['learned_patterns'][-3:]}
Present improved code with:
1. Enhanced performance
2. Higher error dealing with
3. Improved effectivity
4. Clear feedback explaining enhancements
"""
strive:
response = self.mannequin.generate_content(improvement_prompt)
improved_code = {
'authentic': current_code,
'improved': response.textual content,
'objective': improvement_goal,
'iteration': self.iteration_count
}
self.reminiscence['code_improvements'].append(improved_code)
return response.textual content
besides Exception as e:
print(f"Code enchancment error: {e}")
return current_code
    def self_modify(self):
        """Attempt to improve the agent's own code"""
        print("\n🔧 Attempting self-modification...")
        current_method = """
        def solve_problem(self, problem: str) -> Dict[str, Any]:
            # Current implementation
            pass
        """
        improved_method = self.generate_improved_code(
            current_method,
            "Make problem solving more efficient and accurate"
        )
        print("Generated improved method structure")
        print("Note: Actual self-modification requires careful implementation in production")
    def run_improvement_cycle(self, problems: List[str], cycles: int = 3):
        """Run a complete improvement cycle"""
        print(f"🚀 Starting {cycles} improvement cycles with {len(problems)} problems")
        for cycle in range(cycles):
            print(f"\n{'='*50}")
            print(f"IMPROVEMENT CYCLE {cycle + 1}/{cycles}")
            print(f"{'='*50}")
            cycle_results = []
            for problem in problems:
                result = self.solve_problem(problem)
                cycle_results.append(result)
                time.sleep(1)
            self.learn_from_experience()
            if cycle < cycles - 1:
                self.self_modify()

    def get_performance_report(self) -> str:
"""Generate a complete efficiency report"""
if not self.reminiscence['performance_metrics']:
return "No efficiency knowledge accessible but."
metrics = self.reminiscence['performance_metrics']
avg_quality = sum(m['quality'] for m in metrics) / len(metrics)
avg_time = sum(m['time'] for m in metrics) / len(metrics)
report = f"""
📈 AGENT PERFORMANCE REPORT
{'='*40}
Complete Iterations: {self.iteration_count}
Common Resolution High quality: {avg_quality:.3f}
Common Remedy Time: {avg_time:.2f}s
Profitable Options: {len(self.reminiscence['successful_strategies'])}
Failed Makes an attempt: {len(self.reminiscence['failed_attempts'])}
Success Price: {len(self.reminiscence['successful_strategies']) / max(1, self.iteration_count) * 100:.1f}%
Present Capabilities:
{json.dumps(self.capabilities, indent=2)}
Patterns Realized: {len(self.reminiscence['learned_patterns'])}
Code Enhancements: {len(self.reminiscence['code_improvements'])}
"""
return report
The SelfImprovingAgent class above implements a framework that leverages Google's Gemini API for autonomous task-solving, self-assessment, and adaptive learning. It incorporates a structured memory system, capability tracking, iterative problem-solving with continuous improvement cycles, and even attempts controlled self-modification. This design allows the agent to progressively improve its accuracy, efficiency, and problem-solving sophistication over time, creating a dynamic AI that can autonomously evolve and adapt.
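The bookkeeping core of the class, stripped of any API calls, can be sketched as a minimal offline model. Everything here is a simplification of my own (a single scalar capability instead of the capability dict, and quality scores supplied directly rather than coming from Gemini), but it shows how quality scores route solutions into memory and drive the success rate:

```python
from typing import Dict, List


class AgentMemory:
    """Minimal offline model of the agent's memory and capability updates."""

    def __init__(self) -> None:
        self.successful: List[Dict] = []
        self.failed: List[Dict] = []
        self.capability = 0.5  # scalar stand-in for the capability dict

    def record(self, problem: str, quality: float) -> None:
        entry = {"problem": problem, "quality": quality}
        # Route to success/failure memory with the same 0.7 threshold as the agent
        (self.successful if quality > 0.7 else self.failed).append(entry)
        # Nudge capability toward observed quality (one plausible update rule)
        self.capability = min(1.0, 0.9 * self.capability + 0.1 * quality)

    def success_rate(self) -> float:
        total = len(self.successful) + len(self.failed)
        return len(self.successful) / total if total else 0.0
```

This kind of stub is also handy for unit-testing the threshold and reporting logic without spending API quota.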
def main():
    """Main function to demonstrate the self-improving agent"""
    API_KEY = "Use Your GEMINI KEY Here"
    if API_KEY == "Use Your GEMINI KEY Here":
        print("⚠️ Please set your Gemini API key in the API_KEY variable")
        print("Get your API key from: https://makersuite.google.com/app/apikey")
        return
    agent = SelfImprovingAgent(API_KEY)
    test_problems = [
        "Write a function to calculate the factorial of a number",
        "Create a simple text-based calculator that handles basic operations",
        "Design a system to find the shortest path between two points in a graph",
        "Implement a basic recommendation system for movies based on user preferences",
        "Create a machine learning model to predict house prices based on features"
    ]
    print("🤖 Self-Improving Agent Demo")
    print("This agent will attempt to solve problems and improve over time")
    agent.run_improvement_cycle(test_problems, cycles=3)
    print("\n" + agent.get_performance_report())
    print("\n" + "="*50)
    print("TESTING IMPROVED AGENT")
    print("="*50)
    final_problem = "Create an efficient algorithm to sort a large dataset"
    final_result = agent.solve_problem(final_problem)
    print(f"\nFinal Problem Solution Quality: {final_result.get('quality_score', 0):.2f}")
The main() function serves as the entry point for demonstrating the SelfImprovingAgent class. It initializes the agent with the user's Gemini API key and defines practical programming and system design tasks. The agent then iteratively tackles these tasks, analyzing its performance to refine its problem-solving abilities over multiple improvement cycles. Finally, it tests the agent's enhanced capabilities on a new, complex task, showcasing measurable progress and producing a detailed performance report.
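To make that "measurable progress" concrete, a small helper (my own addition, not part of the tutorial's code) can compute per-cycle average quality from the `performance_metrics` records the agent already stores in iteration order:

```python
from typing import Dict, List


def cycle_averages(metrics: List[Dict], problems_per_cycle: int) -> List[float]:
    """Average solution quality per improvement cycle.

    `metrics` is the agent's memory['performance_metrics'] list, appended to
    in iteration order, so slicing it into chunks of `problems_per_cycle`
    recovers the per-cycle groupings.
    """
    averages = []
    for start in range(0, len(metrics), problems_per_cycle):
        chunk = metrics[start:start + problems_per_cycle]
        if chunk:
            averages.append(sum(m['quality'] for m in chunk) / len(chunk))
    return averages
```

A rising sequence of averages, e.g. from `cycle_averages(agent.memory['performance_metrics'], len(test_problems))`, is direct evidence that the improvement loop is working.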
def setup_instructions():
    """Print setup instructions for Google Colab"""
    instructions = """
    📋 SETUP INSTRUCTIONS FOR GOOGLE COLAB:
    1. Install the Gemini API client:
       !pip install google-generativeai
    2. Get your Gemini API key:
       - Go to https://makersuite.google.com/app/apikey
       - Create a new API key
       - Copy the key
    3. Replace 'your-gemini-api-key-here' with your actual API key
    4. Run the code!
    🔧 CUSTOMIZATION OPTIONS:
    - Modify the test_problems list to add your own challenges
    - Adjust the improvement cycle count
    - Add new capabilities to track
    - Extend the learning mechanisms
    💡 IMPROVEMENT IDEAS:
    - Add persistent memory (save/load agent state)
    - Implement more sophisticated evaluation metrics
    - Add domain-specific problem types
    - Create a visualization of improvement over time
    """
    print(instructions)

if __name__ == "__main__":
    setup_instructions()
    print("\n" + "="*60)
    main()
Finally, we define the setup_instructions() function, which guides users through preparing their Google Colab environment to run the self-improving agent. It explains step by step how to install dependencies, set up and configure the Gemini API key, and highlights options for customizing and extending the agent's functionality. This simplifies onboarding and encourages easy experimentation.
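The first improvement idea listed in setup_instructions, persistent memory, can be sketched with plain JSON serialization. The helper names below are hypothetical; since the agent's memory and capabilities hold only JSON-friendly dicts, lists, strings, and numbers, no custom encoding is needed:

```python
import json
from typing import Any, Dict


def save_state(path: str, memory: Dict[str, Any], capabilities: Dict[str, float]) -> None:
    """Persist the agent's memory and capability scores to a JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"memory": memory, "capabilities": capabilities}, f, indent=2)


def load_state(path: str) -> Dict[str, Any]:
    """Reload a previously saved agent state as a raw dict."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

One would call `save_state("agent_state.json", agent.memory, agent.capabilities)` after a run and feed the loaded dict back into a fresh agent before the next session, so learning accumulates across restarts.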
In conclusion, the implementation demonstrated in this tutorial provides a comprehensive framework for creating AI agents that not only perform tasks but actively enhance their own capabilities over time. By harnessing the Gemini API's generative power and integrating a structured self-improvement loop, developers can build agents capable of sophisticated reasoning, iterative learning, and self-modification.
Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.