
A Step-by-Step Coding Guide to Building an Iterative AI Workflow Agent Using LangGraph and Gemini


In this tutorial, we demonstrate how to build a multi-step, intelligent query-handling agent using LangGraph and Gemini 1.5 Flash. The core idea is to structure AI reasoning as a stateful workflow, where an incoming query is passed through a series of purposeful nodes: routing, analysis, research, response generation, and validation. Each node operates as a functional block with a well-defined role, making the agent not just reactive but analytically aware. Using LangGraph's StateGraph, we orchestrate these nodes to create a looping system that can re-analyze and improve its output until the response is validated as complete or a maximum iteration threshold is reached.

!pip install langgraph langchain-google-genai python-dotenv

First, the command !pip install langgraph langchain-google-genai python-dotenv installs three Python packages essential for building intelligent agent workflows. langgraph enables graph-based orchestration of AI agents, langchain-google-genai provides integration with Google's Gemini models, and python-dotenv allows secure loading of environment variables from .env files.

import os
from typing import Dict, Any, List
from dataclasses import dataclass
from langgraph.graph import Graph, StateGraph, END
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage, SystemMessage
import json


os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here"

We import essential modules and libraries for building agent workflows, including ChatGoogleGenerativeAI for interacting with Gemini models and StateGraph for managing conversational state. The line os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here" assigns the API key to an environment variable, allowing the Gemini model to authenticate and generate responses.

@dataclass
class AgentState:
    """State shared across all nodes in the graph"""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3

Check out the Notebook here

This AgentState dataclass defines the shared state that persists across different nodes in a LangGraph workflow. It tracks key fields, including the user's query, retrieved context, any analysis performed, the generated response, and the recommended next action. It also includes an iteration counter and a max_iterations limit to control how many times the workflow can loop, enabling iterative reasoning or decision-making by the agent.
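Because AgentState is a plain dataclass, a fresh state needs only the query, and each node can hand back just the fields it changes. A quick self-contained sketch of that pattern (the dataclasses.replace call stands in for LangGraph's state merging and is illustrative only):

```python
from dataclasses import dataclass, replace

@dataclass
class AgentState:
    """State shared across all nodes in the graph"""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3

# A fresh state carries only the query; every other field starts at its default
state = AgentState(query="Explain quantum computing")

# A node's partial update merged back into the state, bumping the loop counter
state = replace(state, context="factual question", iteration=state.iteration + 1)
```

In the real workflow, LangGraph performs this merge automatically from the dict each node returns.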


class GraphAIAgent:
    def __init__(self, api_key: str = None):
        if api_key:
            os.environ["GOOGLE_API_KEY"] = api_key

        self.llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.7,
            convert_system_message_to_human=True
        )

        self.analyzer = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.3,
            convert_system_message_to_human=True
        )

        self.graph = self._build_graph()

    def _build_graph(self) -> StateGraph:
        """Build the LangGraph workflow"""
        workflow = StateGraph(AgentState)

        workflow.add_node("router", self._router_node)
        workflow.add_node("analyzer", self._analyzer_node)
        workflow.add_node("researcher", self._researcher_node)
        workflow.add_node("responder", self._responder_node)
        workflow.add_node("validator", self._validator_node)

        workflow.set_entry_point("router")
        workflow.add_edge("router", "analyzer")
        workflow.add_conditional_edges(
            "analyzer",
            self._decide_next_step,
            {
                "research": "researcher",
                "respond": "responder"
            }
        )
        workflow.add_edge("researcher", "responder")
        workflow.add_edge("responder", "validator")
        workflow.add_conditional_edges(
            "validator",
            self._should_continue,
            {
                "continue": "analyzer",
                "end": END
            }
        )

        return workflow.compile()

    def _router_node(self, state: AgentState) -> Dict[str, Any]:
        """Route and categorize the incoming query"""
        system_msg = """You are a query router. Analyze the user's query and provide context.
        Determine if this is a factual question, creative request, problem-solving task, or analysis."""

        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"Query: {state.query}")
        ]

        response = self.llm.invoke(messages)

        return {
            "context": response.content,
            "iteration": state.iteration + 1
        }

    def _analyzer_node(self, state: AgentState) -> Dict[str, Any]:
        """Analyze the query and determine the approach"""
        system_msg = """Analyze the query and context. Determine if additional research is needed
        or if you can provide a direct response. Be thorough in your analysis."""

        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Previous Analysis: {state.analysis}
            """)
        ]

        response = self.analyzer.invoke(messages)
        analysis = response.content

        if "research" in analysis.lower() or "more information" in analysis.lower():
            next_action = "research"
        else:
            next_action = "respond"

        return {
            "analysis": analysis,
            "next_action": next_action
        }

    def _researcher_node(self, state: AgentState) -> Dict[str, Any]:
        """Conduct additional research or information gathering"""
        system_msg = """You are a research assistant. Based on the analysis, gather relevant
        information and insights to help answer the query comprehensively."""

        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Analysis: {state.analysis}
            Research focus: Provide detailed information relevant to the query.
            """)
        ]

        response = self.llm.invoke(messages)

        updated_context = f"{state.context}\n\nResearch: {response.content}"

        return {"context": updated_context}

    def _responder_node(self, state: AgentState) -> Dict[str, Any]:
        """Generate the final response"""
        system_msg = """You are a helpful AI assistant. Provide a comprehensive, accurate,
        and well-structured response based on the analysis and context provided."""

        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Analysis: {state.analysis}

            Provide a complete and helpful response.
            """)
        ]

        response = self.llm.invoke(messages)

        return {"response": response.content}

    def _validator_node(self, state: AgentState) -> Dict[str, Any]:
        """Validate the response quality and completeness"""
        system_msg = """Evaluate if the response adequately answers the query.
        Return 'COMPLETE' if satisfactory, or 'NEEDS_IMPROVEMENT' if more work is needed."""

        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Original Query: {state.query}
            Response: {state.response}

            Is this response complete and satisfactory?
            """)
        ]

        response = self.analyzer.invoke(messages)
        validation = response.content

        return {"context": f"{state.context}\n\nValidation: {validation}"}

    def _decide_next_step(self, state: AgentState) -> str:
        """Decide whether to research or respond directly"""
        return state.next_action

    def _should_continue(self, state: AgentState) -> str:
        """Decide whether to keep iterating or end"""
        if state.iteration >= state.max_iterations:
            return "end"
        if "COMPLETE" in state.context:
            return "end"
        if "NEEDS_IMPROVEMENT" in state.context:
            return "continue"
        return "end"

    def run(self, query: str) -> str:
        """Run the agent with a query"""
        initial_state = AgentState(query=query)
        result = self.graph.invoke(initial_state)
        return result["response"]

Check out the Notebook here

The GraphAIAgent class defines a LangGraph-based AI workflow that uses Gemini models to iteratively analyze, research, respond to, and validate answers to user queries. It uses modular nodes, such as router, analyzer, researcher, responder, and validator, to reason through complex tasks, refining responses through controlled iterations.
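The validate-and-retry loop at the heart of this workflow can be understood independently of the LLM calls. A minimal, self-contained sketch of the same control flow in plain Python, with a stubbed responder and a stubbed validator that approves on the second pass (all names here are illustrative, not part of the tutorial code):

```python
def run_with_validation(query: str, max_iterations: int = 3) -> str:
    """Iterate respond -> validate until COMPLETE or the iteration cap is hit."""
    iteration = 0
    response = ""
    while iteration < max_iterations:
        iteration += 1
        # Stand-in for the responder node: produce a refined draft each pass
        response = f"draft {iteration} for: {query}"
        # Stand-in for the validator node: approve from the second draft onward
        verdict = "COMPLETE" if iteration >= 2 else "NEEDS_IMPROVEMENT"
        if verdict == "COMPLETE":
            break
    return response

print(run_with_validation("Explain quantum computing"))
```

In the real graph, the same logic is split between the validator node (which writes the verdict into the state) and _should_continue (which reads it back to pick the next edge), with max_iterations guaranteeing termination even if the validator never approves.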

def main():
    agent = GraphAIAgent("Use Your API Key Here")

    test_queries = [
        "Explain quantum computing and its applications",
        "What are the best practices for machine learning model deployment?",
        "Create a story about a robot learning to paint"
    ]

    print("🤖 Graph AI Agent with LangGraph and Gemini")
    print("=" * 50)

    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Query {i}: {query}")
        print("-" * 30)

        try:
            response = agent.run(query)
            print(f"🎯 Response: {response}")
        except Exception as e:
            print(f"❌ Error: {str(e)}")

        print("\n" + "=" * 50)


if __name__ == "__main__":
    main()

Finally, the main() function initializes the GraphAIAgent with a Gemini API key and runs it on a set of test queries covering technical, strategic, and creative tasks. It prints each query and the AI-generated response, showcasing how the LangGraph-driven agent processes diverse types of input using Gemini's reasoning and generation capabilities.

In conclusion, by combining LangGraph's structured state machine with the power of Gemini's conversational intelligence, this agent represents a new paradigm in AI workflow engineering, one that mirrors human reasoning cycles of inquiry, analysis, and validation. The tutorial provides a modular and extensible template for developing advanced AI agents that can autonomously handle diverse tasks, ranging from answering complex queries to generating creative content.


Check out the Notebook here. All credit for this research goes to the researchers of this project.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
