In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installation to ensure a smooth setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, enabling both beginners and advanced developers to deploy custom multi-functional AI agents rapidly.
import subprocess
import sys

def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")

print("Installing required packages...")
install_packages()
print("Installation complete!\n")
We automate the installation of the essential Python packages required for building a LangGraph-based multi-tool AI agent. The function uses a subprocess to run pip commands quietly and reports whether each package, ranging from LangChain components to web-search and environment-handling tools, installed successfully. This setup streamlines environment preparation, making the notebook portable and beginner-friendly.
import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
We import all the required libraries and modules for constructing the multi-tool AI agent. These include Python standard libraries such as os, json, math, and datetime for general-purpose functionality, plus external libraries like requests for HTTP calls and duckduckgo_search for web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state-graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
We set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key (which you should replace with a valid key), while os.getenv retrieves it for later use in model initialization. This approach keeps the key accessible throughout the script without hardcoding it multiple times.
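Because the rest of the script reads the key through os.getenv, you can also supply it from the shell or a .env file instead of hardcoding it. A minimal sketch of the lookup pattern (the helper name and default value here are ours, not part of the tutorial's code):

```python
def resolve_api_key(env: dict, default: str = "") -> str:
    """Return the Anthropic key from an environment mapping, else a default."""
    return env.get("ANTHROPIC_API_KEY", default)

# With the key set, it is picked up; otherwise the fallback is used.
print(resolve_api_key({"ANTHROPIC_API_KEY": "sk-ant-demo"}))
print(resolve_api_key({}, default="missing"))
```

In a real run you would pass `os.environ` as the mapping, so exporting `ANTHROPIC_API_KEY` before launching the notebook is enough.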
from typing import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.
    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")
    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
We define the agent's internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure the agent's memory, specifically tracking the messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, safely evaluates mathematical expressions. It restricts evaluation to a predefined set of functions from the math module and translates common syntax like ^ into Python's exponentiation operator. This lets the tool handle simple arithmetic as well as trigonometry and logarithms while preventing unsafe code execution.
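The restricted-eval technique can be exercised on its own: eval() runs with builtins stripped out and only a whitelist of math names in scope, so expressions compute normally while attempts to reach the interpreter fail. A standalone sketch (reusing a subset of the whitelist above):

```python
import math

# Whitelist of names the expression is allowed to reference.
allowed_names = {
    "abs": abs, "round": round, "min": min, "max": max,
    "sqrt": math.sqrt, "sin": math.sin, "cos": math.cos,
    "pi": math.pi, "e": math.e,
}

def safe_eval(expression: str) -> float:
    expression = expression.replace("^", "**")  # accept caret exponentiation
    # Empty __builtins__ blocks __import__, open, etc.; only allowed_names resolve.
    return eval(expression, {"__builtins__": {}}, allowed_names)

print(safe_eval("2 + 3 * 4"))    # 14
print(safe_eval("sin(pi / 2)"))  # 1.0
```

Something like `safe_eval("__import__('os')")` raises a NameError because nothing outside the whitelist resolves.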
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.
    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)
    Returns:
        Search results as a formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
        if not results:
            return f"No search results found for: {query}"
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
We define a web_search tool that enables the agent to fetch real-time information from the internet using DuckDuckGo via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, clamping the number of results between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing its responsiveness and utility.
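The formatting step can be exercised offline by feeding it mock result dicts shaped like duckduckgo_search's output (each result carries title, body, and href keys); the helper name below is ours:

```python
def format_results(query: str, results: list) -> str:
    """Render DuckDuckGo-style result dicts as a numbered, readable list."""
    if not results:
        return f"No search results found for: {query}"
    lines = [f"Search results for '{query}':", ""]
    for i, result in enumerate(results, 1):
        lines.append(f"{i}. **{result['title']}**")
        lines.append(f"   {result['body']}")
        lines.append(f"   Source: {result['href']}")
        lines.append("")
    return "\n".join(lines)

mock = [{"title": "Python.org", "body": "Official site.", "href": "https://python.org"}]
print(format_results("python", mock))
```

Separating formatting from fetching like this also makes the tool easy to unit-test without network access.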
@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.
    Args:
        city: Name of the city
    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return (f"Weather in {city}:\n"
                f"Temperature: {weather['temp']}°C\n"
                f"Condition: {weather['condition']}\n"
                f"Humidity: {weather['humidity']}%")
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for it in the mock dataset. If found, it returns the temperature, weather condition, and humidity in a readable format; otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder that can later be upgraded to fetch live data from an actual weather API.
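Upgrading to live data would mean calling OpenWeatherMap's current-weather endpoint, as the docstring hints. A hedged sketch of the request-building side (the endpoint and parameter names follow OpenWeatherMap's documented API; the helper name is ours, and the actual HTTP call is left as a comment since it needs a real API key):

```python
def build_owm_request(city: str, api_key: str) -> tuple:
    """Build the URL and query parameters for OpenWeatherMap's current-weather endpoint."""
    url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": api_key, "units": "metric"}  # metric -> °C
    return url, params

url, params = build_owm_request("Tokyo", "demo-key")
print(url)
# A live version of weather_info would then do roughly:
#   data = requests.get(url, params=params, timeout=10).json()
#   temp = data["main"]["temp"]; condition = data["weather"][0]["main"]
```

Keeping the URL construction in a pure function makes the live upgrade testable without network access.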
@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.
    Args:
        text: Text to analyze
    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
    import re
    words = text.split()
    # Split once on sentence-ending punctuation; splitting on '.', '!', and '?'
    # separately and concatenating the lists would over-count sentences.
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
    return analysis
The text_analyzer tool provides a detailed statistical analysis of a given text input. It computes metrics such as character count (with and without spaces), word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user for valid text. It uses simple string operations along with Python's set and max functions to extract meaningful insights, making it a handy utility for language analysis or content-quality checks in the agent's toolkit.
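The core statistics can be sketched as a small standalone function (the name and dict shape here are ours). Note the design choice on sentence counting: one regex split on any run of `.`, `!`, or `?` treats each punctuation group as a single boundary, whereas splitting on each mark separately and concatenating the lists would triple-count:

```python
import re

def text_stats(text: str) -> dict:
    """Word, sentence, and average-length statistics for a piece of text."""
    words = text.split()
    # One split on [.!?]+ yields sentence fragments; empty trailers are dropped.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "words": len(words),
        "sentences": len(sentences),
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
    }

print(text_stats("LangGraph is great. It builds agents!"))
```

On the sample above this reports 6 words across 2 sentences, for an average of 3.0 words per sentence.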
@tool
def current_time() -> str:
    """
    Get the current date and time.
    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python's datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the agent's interaction flow.
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""
                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d+\-*/.()\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                     tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    # The source truncates here; a minimal reconstruction of the branch:
                    if not query or len(query) < 3:
                        query = last_message
                    return AIMessage(content="Let me search for that.",
                                     tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                return AIMessage(content="I can help with calculations, web search, weather, text analysis, and time queries.")

            def bind_tools(self, tools):
                return self

        return MockLLM()

llm = create_llm()
llm_with_tools = llm.bind_tools(tools)
We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM simulates basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END
We define the agent's core decision-making logic. The agent_node function takes the current messages, invokes the tool-bound language model, and returns its response. The should_continue function then checks whether that response includes tool calls: if so, it routes control to the tool-execution node; otherwise, it ends the interaction. Together these functions enable dynamic, conditional transitions within the agent's workflow.
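The routing decision itself can be sketched with plain stand-in objects, independent of LangChain: any message carrying a non-empty tool_calls list routes to the tools node, and anything else ends the turn. The class below is a hypothetical stand-in for an AIMessage, and the plain string "__end__" mirrors LangGraph's END sentinel:

```python
class FakeMessage:
    """Stand-in for an LLM response that may or may not request tool calls."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

def route(last_message) -> str:
    # Non-empty tool_calls -> run tools; otherwise finish the turn.
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return "__end__"

print(route(FakeMessage(tool_calls=[{"name": "calculator"}])))  # tools
print(route(FakeMessage()))                                     # __end__
```

This is the same predicate should_continue applies, which is why the conditional edge in the graph maps "tools" to the tool node and END to termination.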
def create_agent_graph():
    tool_node = ToolNode(tools)
    workflow = StateGraph(AgentState)
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app

print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")
We assemble the LangGraph-powered workflow that defines the AI agent's operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges manage the transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application (app), yielding a structured, memory-aware multi-tool agent ready for deployment.
def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
    print("🧪 Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
The test_agent function is a validation utility that checks that the LangGraph agent responds correctly across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints the agent's responses. Using a consistent thread_id for configuration, it invokes the agent with each query and neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use.
def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}
    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural-language queries and recognizes commands like "help" for usage guidance and "quit" to exit. Each user input is processed by the agent, which dynamically selects and invokes the appropriate tools to respond. The function enhances engagement by simulating a conversational experience and showcasing the agent's ability to handle diverse queries, from math and web search to weather, text analysis, and time retrieval.
if __name__ == "__main__":
    test_agent()
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
    chat_with_agent()

def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("🚀 Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

print("\n" + "="*60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use the Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("="*60)
Finally, we orchestrate the execution of the LangGraph multi-tool agent. When the script is run directly, it first calls test_agent() to validate functionality with sample queries, then launches the interactive chat_with_agent() mode for real-time interaction. The quick_demo() function additionally offers a brief showcase of the agent's capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users through configuring the API key, running demonstrations, and interacting with the agent, which provides a smooth onboarding experience for exploring and extending its functionality.
In conclusion, this step-by-step tutorial offers valuable insight into building an effective multi-tool AI agent that leverages LangGraph and Claude's generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive, interactive system. The agent's flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. The inclusion of user-friendly functions for both testing and interactive chat further aids practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents from this foundation.
Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.