In this tutorial, we provide a practical guide to implementing LangGraph, a streamlined, graph-based AI orchestration framework, integrated seamlessly with Anthropic's Claude API. Through detailed, executable code optimized for Google Colab, developers learn how to build and visualize AI workflows as interconnected nodes performing distinct tasks, such as generating concise answers, critically analyzing responses, and automatically composing technical blog content. The compact implementation highlights LangGraph's intuitive node-graph architecture, which can manage complex sequences of Claude-powered natural-language tasks, from basic question-answering scenarios to advanced content-generation pipelines.
from getpass import getpass
import os
anthropic_key = getpass("Enter your Anthropic API key: ")
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
print("Key set:", "ANTHROPIC_API_KEY" in os.environ)
We securely prompt the user for their Anthropic API key using Python's getpass module, ensuring the sensitive value is never displayed on screen. The key is then stored as an environment variable (ANTHROPIC_API_KEY), and successful storage is confirmed.
import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output
We import the essential libraries for building and visualizing structured AI workflows: modules for handling data (json, requests, dataclasses), graph creation and visualization (networkx, matplotlib), interactive notebook display (IPython.display), and type annotations (typing) for clarity and maintainability.
try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic

from anthropic import Anthropic
We ensure the anthropic Python package is available for use. The code attempts to import the module and, if it is not found, automatically installs it with pip in the Google Colab environment. After installation, it imports the Anthropic client, which is essential for interacting with Claude models via the Anthropic API.
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
The NodeConfig data class defines the structure of each node in the LangGraph workflow. Each node has a name, an executable function, optional inputs and outputs, and an optional config dictionary for storing additional parameters. This setup allows modular, reusable node definitions for graph-based AI tasks.
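As a quick sanity check, here is a minimal, standalone sketch of how a NodeConfig might be instantiated and invoked. The dataclass is repeated so the snippet runs on its own, and the `shout` function is a hypothetical transform, not part of the tutorial's pipeline:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

# Hypothetical node function: uppercases whatever the pipeline stored under "text"
def shout(state, **kwargs):
    return state.get("text", "").upper()

node = NodeConfig(name="shout", function=shout, inputs=["text"], outputs=["shout_output"])
print(node.name, node.outputs)           # shout ['shout_output']
print(node.function({"text": "hello"}))  # HELLO
```

Because `inputs`, `outputs`, and `config` use `default_factory`, each instance gets its own fresh containers rather than sharing mutable defaults.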
class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            from google.colab import userdata
            try:
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
                if not self.api_key:
                    raise ValueError("Please provide an Anthropic API key")
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}

    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self
    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                    inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]

        def claude_fn(state, **kwargs):
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
            if system_prompt:
                message_params["system"] = system_prompt
            response = self.client.messages.create(**message_params)
            return response.content[0].text

        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)
    def transform_node(self, name: str, transform_fn: Callable,
                       inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)
    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()

        print("\nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} -> {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()
    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")

    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
        print("Executing LangGraph flow:")
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
            result = node.function(self.state, **inputs)
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
        print("Execution completed!")
        return self.state
def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
    graph = LangGraph()

    def question_provider(state, **kwargs):
        return question

    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"\nQUESTION:\n{result.get('user_question')}\n")
    print(f"ANSWER:\n{result.get('answer')}\n")
    print(f"ANALYSIS:\n{result.get('analysis')}")
    print("="*50 + "\n")
    return graph
The LangGraph class implements a lightweight framework for constructing and executing graph-based AI workflows using Claude from Anthropic. It lets users define modular nodes, either Claude-powered prompts or custom transformation functions, connect them through dependencies, visualize the entire pipeline, and execute them in topological order. The run_example function demonstrates this by building a simple question-answering and evaluation flow, showcasing the clarity and modularity of LangGraph's architecture.
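The core execution idea can be demonstrated offline, with no API key, by wiring plain transform functions through a networkx DiGraph and running them in topological order. This is a minimal sketch of the execute() mechanism, simplified so that each node's result is keyed directly by its node name rather than by named outputs:

```python
import networkx as nx

graph = nx.DiGraph()
funcs = {}

# Register a node and draw edges from its dependencies to it
def add(name, fn, inputs=()):
    funcs[name] = fn
    graph.add_node(name)
    for dep in inputs:
        graph.add_edge(dep, name)

add("question_provider", lambda state: "Why use graphs?")
add("echo_answerer", lambda state: f"Q: {state['question_provider']}",
    inputs=["question_provider"])

# Topological sort guarantees every node runs after its dependencies
state = {}
for node in nx.topological_sort(graph):
    state[node] = funcs[node](state)

print(state["echo_answerer"])  # Q: Why use graphs?
```

Because topological sorting fails on a cyclic graph (raising NetworkXUnfeasible), this same mechanism is what lets the full class detect and reject cyclic workflows.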
def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()

    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"

    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )

    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}"

    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "\n")
    print(result.get("final_content"))
    print("\n" + "="*50)
    return graph
The run_advanced_example function showcases a more sophisticated use of LangGraph by orchestrating multiple Claude-powered nodes to generate a complete blog post. It begins by selecting a topic, then creates an outline, an introduction, and a conclusion, all using structured Claude prompts. Finally, a transformation node assembles the content into a formatted blog post. This example demonstrates how LangGraph can automate complex, multi-step content-generation tasks with modular, connected nodes in a clear and executable flow.
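What makes each downstream node able to reference upstream results by name is the template substitution performed inside claude_fn before every API call. Here is that step isolated as a standalone sketch (the sample state values are made up for illustration):

```python
# Every string value in the shared state is spliced into the prompt template
# wherever a matching {key} placeholder appears; non-string values are skipped.
prompt_template = ("Write a conclusion for a blog post with this outline: "
                   "{outline}\nTopic: {topic}")
state = {"topic": "Graph-based AI systems",
         "outline": "1. Intro\n2. Benefits\n3. Outlook",
         "word_limit": 300}  # not a string, so it is ignored

prompt = prompt_template
for k, v in state.items():
    if isinstance(v, str):
        prompt = prompt.replace(f"{{{k}}}", v)  # "{topic}" -> its state value

print(prompt)
```

This simple str.replace scheme is why prompt templates in the examples use bare {user_question}, {outline}, and {topic} placeholders that match keys in the execution state.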
print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)

print("\n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()
Finally, we trigger the execution of both LangGraph workflows. First, we run the simple question-answering example by passing a predefined question to run_example(). Then we launch the more advanced blog-post generation workflow with run_advanced_example(). Together, these calls demonstrate the practical flexibility of LangGraph, from basic prompt-based interactions to multi-step content automation using Anthropic's Claude API.
In conclusion, we have implemented LangGraph integrated with Anthropic's Claude API, illustrating the ease of designing modular AI workflows that leverage powerful language models in structured, graph-based pipelines. By visualizing task flows and separating responsibilities among nodes, such as question processing, analytical evaluation, content outlining, and assembly, developers gain practical experience in building maintainable, scalable AI systems. LangGraph's clear node dependencies and Claude's sophisticated language capabilities provide an efficient solution for orchestrating complex AI processes, especially for rapid prototyping and execution in environments like Google Colab.
Check out the Colab Notebook. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.