
Constructing Superior Multi-Agent AI Workflows by Leveraging AutoGen and Semantic Kernel


In this tutorial, we walk you through the seamless integration of AutoGen and Semantic Kernel with Google's Gemini Flash model. We begin by setting up our GeminiWrapper and SemanticKernelGeminiPlugin classes to bridge the generative power of Gemini with AutoGen's multi-agent orchestration. From there, we configure specialist agents, ranging from code reviewers to creative analysts, demonstrating how we can leverage AutoGen's ConversableAgent API alongside Semantic Kernel's decorated functions for text analysis, summarization, code review, and creative problem-solving. By combining AutoGen's robust agent framework with Semantic Kernel's function-driven approach, we create a sophisticated AI assistant that adapts to a variety of tasks with structured, actionable insights.

!pip install pyautogen semantic-kernel google-generativeai python-dotenv


import os
import asyncio
from typing import Dict, Any, List
import autogen
import google.generativeai as genai
from semantic_kernel import Kernel
from semantic_kernel.functions import KernelArguments
from semantic_kernel.functions.kernel_function_decorator import kernel_function

We start by installing the core dependencies: pyautogen, semantic-kernel, google-generativeai, and python-dotenv, ensuring we have all the necessary libraries for our multi-agent and semantic function setup. Then we import the essential Python modules (os, asyncio, typing) along with autogen for agent orchestration, genai for Gemini API access, and the Semantic Kernel classes and decorators to define our AI functions.

GEMINI_API_KEY = "Use Your API Key Here"
genai.configure(api_key=GEMINI_API_KEY)


config_list = [
   {
       "model": "gemini-1.5-flash",
       "api_key": GEMINI_API_KEY,
       "api_type": "google",
       "api_base": "https://generativelanguage.googleapis.com/v1beta",
   }
]

We define our GEMINI_API_KEY placeholder and immediately configure the genai client so all subsequent Gemini calls are authenticated. Then we build a config_list containing the Gemini Flash model settings: model name, API key, endpoint type, and base URL, which we hand off to our agents for LLM interactions.
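Each specialist agent later reuses this base configuration, overriding only the sampling temperature via dict unpacking. A minimal sketch of that pattern (the key string here is a placeholder, not a real credential):

```python
base_config = {
    "config_list": [{"model": "gemini-1.5-flash", "api_key": "YOUR_KEY"}],
    "temperature": 0.7,
}

# {**base, "temperature": t} copies the top-level keys and overrides just one,
# leaving the shared config_list untouched.
reviewer_config = {**base_config, "temperature": 0.3}
analyst_config = {**base_config, "temperature": 0.8}
```

Note that this is a shallow copy: the nested config_list is shared between agents, which is exactly what we want when only the temperature differs.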

class GeminiWrapper:
    """Wrapper for the Gemini API to work with AutoGen"""

    def __init__(self, model_name="gemini-1.5-flash"):
        self.model = genai.GenerativeModel(model_name)

    def generate_response(self, prompt: str, temperature: float = 0.7) -> str:
        """Generate a response using Gemini"""
        try:
            response = self.model.generate_content(
                prompt,
                generation_config=genai.types.GenerationConfig(
                    temperature=temperature,
                    max_output_tokens=2048,
                )
            )
            return response.text
        except Exception as e:
            return f"Gemini API Error: {str(e)}"

We encapsulate all Gemini Flash interactions in a GeminiWrapper class, where we initialize a GenerativeModel for our chosen model and expose a simple generate_response method. In this method, we pass the prompt and temperature into Gemini's generate_content API (capped at 2048 output tokens) and return the raw text or a formatted error string.
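The error-swallowing pattern matters because any agent in the pipeline can hit a quota or network failure mid-run. A small self-contained sketch, with a stub model standing in for genai.GenerativeModel, shows the fallback behavior we rely on:

```python
class StubModel:
    """Stand-in for genai.GenerativeModel that always fails."""
    def generate_content(self, prompt, generation_config=None):
        raise RuntimeError("429: quota exceeded")

class SafeWrapper:
    def __init__(self, model):
        self.model = model

    def generate_response(self, prompt: str) -> str:
        # Same shape as GeminiWrapper: return text on success, and a
        # formatted error string (never an exception) on failure.
        try:
            return self.model.generate_content(prompt).text
        except Exception as e:
            return f"Gemini API Error: {e}"

print(SafeWrapper(StubModel()).generate_response("hello"))
# → Gemini API Error: 429: quota exceeded
```

Because failures come back as strings, downstream agents keep running and the final results dict records the error instead of crashing the whole analysis.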

class SemanticKernelGeminiPlugin:
    """Semantic Kernel plugin using Gemini Flash for advanced AI operations"""

    def __init__(self):
        self.kernel = Kernel()
        self.gemini = GeminiWrapper()

    @kernel_function(name="analyze_text", description="Analyze text for sentiment and key insights")
    def analyze_text(self, text: str) -> str:
        """Analyze text using Gemini Flash"""
        prompt = f"""
        Analyze the following text comprehensively:

        Text: {text}

        Provide analysis in this format:
        - Sentiment: [positive/negative/neutral with confidence]
        - Key Themes: [main topics and concepts]
        - Insights: [important observations and patterns]
        - Recommendations: [actionable next steps]
        - Tone: [formal/informal/technical/emotional]
        """

        return self.gemini.generate_response(prompt, temperature=0.3)

    @kernel_function(name="generate_summary", description="Generate a comprehensive summary")
    def generate_summary(self, content: str) -> str:
        """Generate a summary using Gemini's advanced capabilities"""
        prompt = f"""
        Create a comprehensive summary of the following content:

        Content: {content}

        Provide:
        1. Executive Summary (2-3 sentences)
        2. Key Points (bullet format)
        3. Important Details
        4. Conclusion/Implications
        """

        return self.gemini.generate_response(prompt, temperature=0.4)

    @kernel_function(name="code_analysis", description="Analyze code for quality and suggestions")
    def code_analysis(self, code: str) -> str:
        """Analyze code using Gemini's code understanding"""
        prompt = f"""
        Analyze this code comprehensively:

        ```
        {code}
        ```

        Provide analysis covering:
        - Code Quality: [readability, structure, best practices]
        - Performance: [efficiency, optimization opportunities]
        - Security: [potential vulnerabilities, security best practices]
        - Maintainability: [documentation, modularity, extensibility]
        - Suggestions: [specific improvements with examples]
        """

        return self.gemini.generate_response(prompt, temperature=0.2)

    @kernel_function(name="creative_solution", description="Generate creative solutions to problems")
    def creative_solution(self, problem: str) -> str:
        """Generate creative solutions using Gemini's creative capabilities"""
        prompt = f"""
        Problem: {problem}

        Generate creative solutions:
        1. Conventional Approaches (2-3 standard solutions)
        2. Innovative Ideas (3-4 creative alternatives)
        3. Hybrid Solutions (combining different approaches)
        4. Implementation Strategy (practical steps)
        5. Potential Challenges and Mitigation
        """

        return self.gemini.generate_response(prompt, temperature=0.8)

We encapsulate our Semantic Kernel logic in the SemanticKernelGeminiPlugin, where we initialize both the Kernel and our GeminiWrapper to power custom AI functions. Using the @kernel_function decorator, we declare methods like analyze_text, generate_summary, code_analysis, and creative_solution, each of which constructs a structured prompt and delegates the heavy lifting to Gemini Flash. This plugin lets us seamlessly register and invoke advanced AI operations within our Semantic Kernel environment.
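Conceptually, @kernel_function tags a plain Python method with metadata that the Kernel reads at registration time; the method itself stays directly callable. A simplified stand-in for the decorator illustrates the idea (the real implementation lives in semantic_kernel.functions, and the attribute names below are our own, not the library's internals):

```python
def kernel_function(name=None, description=None):
    # Simplified stand-in: attach metadata a kernel could read during
    # plugin registration, without changing the function's behavior.
    def decorator(fn):
        fn.__kernel_function__ = True
        fn.__kernel_name__ = name or fn.__name__
        fn.__kernel_description__ = description or ""
        return fn
    return decorator

class DemoPlugin:
    @kernel_function(name="shout", description="Upper-case the input")
    def shout(self, text: str) -> str:
        return text.upper()

plugin = DemoPlugin()
print(plugin.shout("gemini"))         # → GEMINI
print(plugin.shout.__kernel_name__)   # → shout
```

Because the decorator returns the function unchanged, the plugin's methods can be called directly (as our bridge function does later) or discovered via their metadata.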

class AdvancedGeminiAgent:
    """Advanced AI agent using Gemini Flash with AutoGen and Semantic Kernel"""

    def __init__(self):
        self.sk_plugin = SemanticKernelGeminiPlugin()
        self.gemini = GeminiWrapper()
        self.setup_agents()

    def setup_agents(self):
        """Initialize AutoGen agents with Gemini Flash"""

        gemini_config = {
            "config_list": [{"model": "gemini-1.5-flash", "api_key": GEMINI_API_KEY}],
            "temperature": 0.7,
        }

        self.assistant = autogen.ConversableAgent(
            name="GeminiAssistant",
            llm_config=gemini_config,
            system_message="""You are a sophisticated AI assistant powered by Gemini Flash with Semantic Kernel capabilities.
            You excel at analysis, problem-solving, and creative thinking. Always provide comprehensive, actionable insights.
            Use structured responses and consider multiple perspectives.""",
            human_input_mode="NEVER",
        )

        self.code_reviewer = autogen.ConversableAgent(
            name="GeminiCodeReviewer",
            llm_config={**gemini_config, "temperature": 0.3},
            system_message="""You are a senior code reviewer powered by Gemini Flash.
            Analyze code for best practices, security, performance, and maintainability.
            Provide specific, actionable feedback with examples.""",
            human_input_mode="NEVER",
        )

        self.creative_analyst = autogen.ConversableAgent(
            name="GeminiCreativeAnalyst",
            llm_config={**gemini_config, "temperature": 0.8},
            system_message="""You are a creative problem solver and innovation expert powered by Gemini Flash.
            Generate innovative solutions and offer fresh perspectives.
            Balance creativity with practicality.""",
            human_input_mode="NEVER",
        )

        self.data_specialist = autogen.ConversableAgent(
            name="GeminiDataSpecialist",
            llm_config={**gemini_config, "temperature": 0.4},
            system_message="""You are a data analysis expert powered by Gemini Flash.
            Provide evidence-based recommendations and statistical perspectives.""",
            human_input_mode="NEVER",
        )

        self.user_proxy = autogen.ConversableAgent(
            name="UserProxy",
            human_input_mode="NEVER",
            max_consecutive_auto_reply=2,
            is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
            llm_config=False,
        )

    def analyze_with_semantic_kernel(self, content: str, analysis_type: str) -> str:
        """Bridge function between AutoGen and Semantic Kernel with Gemini"""
        try:
            if analysis_type == "text":
                return self.sk_plugin.analyze_text(content)
            elif analysis_type == "code":
                return self.sk_plugin.code_analysis(content)
            elif analysis_type == "summary":
                return self.sk_plugin.generate_summary(content)
            elif analysis_type == "creative":
                return self.sk_plugin.creative_solution(content)
            else:
                return "Invalid analysis type. Use 'text', 'code', 'summary', or 'creative'."
        except Exception as e:
            return f"Semantic Kernel Analysis Error: {str(e)}"

    def multi_agent_collaboration(self, task: str) -> Dict[str, str]:
        """Orchestrate multi-agent collaboration using Gemini"""
        results = {}

        agents = {
            "assistant": (self.assistant, "comprehensive analysis"),
            "code_reviewer": (self.code_reviewer, "code review perspective"),
            "creative_analyst": (self.creative_analyst, "creative solutions"),
            "data_specialist": (self.data_specialist, "data-driven insights")
        }

        for agent_name, (agent, perspective) in agents.items():
            try:
                prompt = f"Task: {task}\n\nProvide your {perspective} on this task."
                response = agent.generate_reply([{"role": "user", "content": prompt}])
                results[agent_name] = response if isinstance(response, str) else str(response)
            except Exception as e:
                results[agent_name] = f"Agent {agent_name} error: {str(e)}"

        return results

    def run_comprehensive_analysis(self, query: str) -> Dict[str, Any]:
        """Run a comprehensive analysis using all Gemini-powered capabilities"""
        results = {}

        analyses = ["text", "summary", "creative"]
        for analysis_type in analyses:
            try:
                results[f"sk_{analysis_type}"] = self.analyze_with_semantic_kernel(query, analysis_type)
            except Exception as e:
                results[f"sk_{analysis_type}"] = f"Error: {str(e)}"

        try:
            results["multi_agent"] = self.multi_agent_collaboration(query)
        except Exception as e:
            results["multi_agent"] = f"Multi-agent error: {str(e)}"

        try:
            results["direct_gemini"] = self.gemini.generate_response(
                f"Provide a comprehensive analysis of: {query}", temperature=0.6
            )
        except Exception as e:
            results["direct_gemini"] = f"Direct Gemini error: {str(e)}"

        return results

We complete our end-to-end AI orchestration in the AdvancedGeminiAgent class, where we initialize our Semantic Kernel plugin and Gemini wrapper, and configure a suite of specialist AutoGen agents (assistant, code reviewer, creative analyst, data specialist, and user proxy). With simple methods for Semantic Kernel bridging, multi-agent collaboration, and direct Gemini calls, we enable a seamless, comprehensive analysis pipeline for any user query.
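The if/elif chain in analyze_with_semantic_kernel can equally be written as a table-driven dispatch, which turns adding a new analysis type into a one-line change. A sketch under the same interface (the EchoPlugin below is an offline stand-in for the Gemini-backed plugin, used purely for illustration):

```python
def analyze(plugin, content: str, analysis_type: str) -> str:
    # Map each analysis type to the plugin method that handles it.
    handlers = {
        "text": plugin.analyze_text,
        "code": plugin.code_analysis,
        "summary": plugin.generate_summary,
        "creative": plugin.creative_solution,
    }
    handler = handlers.get(analysis_type)
    if handler is None:
        return "Invalid analysis type. Use 'text', 'code', 'summary', or 'creative'."
    return handler(content)

class EchoPlugin:
    """Offline stand-in for SemanticKernelGeminiPlugin."""
    def analyze_text(self, c): return f"text:{c}"
    def code_analysis(self, c): return f"code:{c}"
    def generate_summary(self, c): return f"summary:{c}"
    def creative_solution(self, c): return f"creative:{c}"

print(analyze(EchoPlugin(), "hi", "summary"))  # → summary:hi
```

The dict also doubles as the single source of truth for the valid type names, so the error message and the dispatch can never drift apart.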

def main():
    """Main execution function for Google Colab with Gemini Flash"""
    print("🚀 Initializing Advanced Gemini Flash AI Agent...")
    print("⚡ Using Gemini 1.5 Flash for high-speed, cost-effective AI processing")

    try:
        agent = AdvancedGeminiAgent()
        print("✅ Agent initialized successfully!")
    except Exception as e:
        print(f"❌ Initialization error: {str(e)}")
        print("💡 Make sure to set your Gemini API key!")
        return

    demo_queries = [
        "How can AI transform education in developing countries?",
        "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    ]

    # Run each demo query through the full pipeline and display the results
    # from every analysis path.
    for i, query in enumerate(demo_queries, 1):
        print(f"\n📝 Demo {i}: {query}")
        results = agent.run_comprehensive_analysis(query)
        for section, output in results.items():
            print(f"\n--- {section} ---")
            print(output)

if __name__ == "__main__":
    main()

Finally, we run the main function that initializes the AdvancedGeminiAgent, prints out status messages, and iterates through a set of demo queries. As we run each query, we collect and display results from semantic-kernel analyses, multi-agent collaboration, and direct Gemini responses, ensuring a clear, step-by-step showcase of our multi-agent AI workflow.
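Since run_comprehensive_analysis returns a mix of flat strings and one nested dict (multi_agent maps agent names to replies), a small hypothetical helper like the one below can flatten everything into a readable report; the function name and layout are our own, not part of either library:

```python
def format_results(results: dict) -> str:
    """Render the nested results dict from run_comprehensive_analysis as text."""
    lines = []
    for section, value in results.items():
        lines.append(f"=== {section} ===")
        if isinstance(value, dict):  # the multi_agent entry is a sub-dict
            for agent, reply in value.items():
                lines.append(f"[{agent}] {reply}")
        else:
            lines.append(str(value))
    return "\n".join(lines)

sample = {
    "sk_text": "Sentiment: positive",
    "multi_agent": {"assistant": "Looks good", "code_reviewer": "LGTM"},
}
print(format_results(sample))
```

Returning a string rather than printing inside the helper keeps it easy to log, test, or drop into a notebook cell.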

In conclusion, we showcased how AutoGen and Semantic Kernel complement each other to produce a versatile, multi-agent AI system powered by Gemini Flash. We highlighted how AutoGen simplifies the orchestration of diverse expert agents, while Semantic Kernel provides a clean, declarative layer for defining and invoking advanced AI functions. By uniting these tools in a Colab notebook, we’ve enabled rapid experimentation and prototyping of complex AI workflows without sacrificing clarity or control.


Check out the Codes. All credit for this research goes to the researchers of this project.


Asif Razzaq is the CEO of Marktechpost Media Inc.. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.
