In this tutorial, we demonstrate how to build an intelligent AI assistant by integrating LangChain, Gemini 2.0 Flash, and the Jina Search tool. By combining the capabilities of a powerful large language model (LLM) with an external search API, we create an assistant that can provide up-to-date information with citations. This step-by-step tutorial walks through setting up API keys, installing the necessary libraries, binding tools to the Gemini model, and building a custom LangChain pipeline that dynamically calls external tools when the model needs fresh or specific information. By the end of this tutorial, we will have a fully functional, interactive AI assistant that responds to user queries with accurate, current, and well-sourced answers.
%pip install --quiet -U "langchain-community>=0.2.16" langchain langchain-google-genai
We install the required Python packages for this project: the LangChain framework for building AI applications, LangChain Community tools (version 0.2.16 or higher), and LangChain's integration with Google Gemini models. These packages enable seamless use of Gemini models and external tools within LangChain pipelines.
import getpass
import os
import json
from typing import Dict, Any
We import the essential modules for the project. getpass allows entering API keys securely without displaying them on screen, while os manages environment variables and file paths. json handles JSON data structures, and typing provides type hints for dictionaries and function arguments, improving code readability and maintainability.
if not os.environ.get("JINA_API_KEY"):
    os.environ["JINA_API_KEY"] = getpass.getpass("Enter your Jina API key: ")

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google/Gemini API key: ")
We ensure that the required API keys for Jina and Google Gemini are set as environment variables. If the keys are not already defined, the script prompts the user to enter them securely via the getpass module, keeping them hidden from view. This approach gives the script access to both services without hardcoding sensitive information in the code.
from langchain_community.tools import JinaSearch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage

print("🔧 Setting up tools and model...")
We import the key modules and classes from the LangChain ecosystem: the JinaSearch tool for web search, the ChatGoogleGenerativeAI class for accessing Google's Gemini, and essential classes from LangChain Core, including ChatPromptTemplate, RunnableConfig, and the message structures (HumanMessage, AIMessage, and ToolMessage). Together, these components enable the integration of external tools with Gemini for dynamic, AI-driven information retrieval. The print statement confirms that setup has begun.
search_tool = JinaSearch()
print(f"✅ Jina Search tool initialized: {search_tool.name}")

print("\n🔍 Testing Jina Search directly:")
direct_search_result = search_tool.invoke({"query": "what is langgraph"})
print(f"Direct search result preview: {direct_search_result[:200]}...")
We initialize the Jina Search tool by creating a JinaSearch() instance and confirming it is ready for use. The tool handles web search queries within the LangChain ecosystem. The script then runs a direct test query, "what is langgraph", using the invoke method and prints a preview of the result. This step verifies that the search tool works correctly before we integrate it into the larger assistant workflow.
gemini_model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.1,
    convert_system_message_to_human=True
)
print("✅ Gemini model initialized")
We initialize the Gemini 2.0 Flash model using the ChatGoogleGenerativeAI class from LangChain. The model is set to a low temperature (0.1) for more deterministic responses, and convert_system_message_to_human=True ensures system messages are passed in a form Gemini's API accepts. The final print statement confirms that the Gemini model is ready for use.
detailed_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an intelligent assistant with access to web search capabilities.
When users ask questions, you can use the Jina search tool to find current information.

Instructions:
1. If the question requires recent or specific information, use the search tool
2. Provide comprehensive answers based on the search results
3. Always cite your sources when using search results
4. Be helpful and informative in your responses"""),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])
We define a prompt template using ChatPromptTemplate.from_messages() that guides the AI's behavior. It includes a system message outlining the assistant's role, a human message placeholder for user queries, and a placeholder for the tool messages generated during tool calls. This structured prompt ensures the AI provides helpful, informative, and well-sourced responses while seamlessly folding search results into the conversation.
gemini_with_tools = gemini_model.bind_tools([search_tool])
print("✅ Tools bound to Gemini model")

main_chain = detailed_prompt | gemini_with_tools

def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability"""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."
We bind the Jina Search tool to the Gemini model using bind_tools(), enabling the model to request a search whenever it needs one. The main_chain pipes the structured prompt template into the tool-enabled Gemini model, creating a seamless workflow for handling user input and dynamic tool calls. Additionally, the format_tool_result function formats search results for clear, readable display, truncating long outputs so users can quickly scan them.
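Since format_tool_result is only used for display, it can be checked standalone with mock data and no API keys. The mock_call dict below is a hypothetical stand-in that mirrors the shape of a LangChain tool-call dict (name, args, id); only the 'args' → 'query' path is actually read:

```python
from typing import Dict, Any

def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability (truncated to 800 chars)."""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."

# Mock data standing in for a real Gemini tool call and Jina response
mock_call = {"name": "jina_search", "args": {"query": "what is langgraph"}, "id": "call_1"}
mock_result = "LangGraph is a library for building stateful multi-actor applications. " * 20

formatted = format_tool_result(mock_call, mock_result)
print(formatted.splitlines()[0])  # header line with the query
```

The 800-character cap keeps tool output readable in a notebook; raise it if you want fuller search context.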
@chain
def enhanced_search_chain(user_input: str, config: RunnableConfig):
    """
    Enhanced chain that handles tool calls and provides detailed responses
    """
    print(f"\n🤖 Processing query: '{user_input}'")

    input_data = {"user_input": user_input}

    print("📤 Sending to Gemini...")
    ai_response = main_chain.invoke(input_data, config=config)

    if ai_response.tool_calls:
        print(f"🛠️ AI requested {len(ai_response.tool_calls)} tool call(s)")

        tool_messages = []
        for i, tool_call in enumerate(ai_response.tool_calls):
            print(f"   🔍 Executing search {i+1}: {tool_call['args']['query']}")

            # Invoke the tool with the call's arguments, then wrap the raw
            # result in a ToolMessage tied back to the originating call id
            tool_result = search_tool.invoke(tool_call["args"])
            tool_msg = ToolMessage(
                content=tool_result,
                tool_call_id=tool_call["id"]
            )
            tool_messages.append(tool_msg)

        print("📥 Getting final response with search results...")
        final_input = {
            **input_data,
            "messages": [ai_response] + tool_messages
        }
        final_response = main_chain.invoke(final_input, config=config)
        return final_response
    else:
        print("ℹ️ No tool calls needed")
        return ai_response
We define enhanced_search_chain using the @chain decorator from LangChain, enabling it to handle user queries with dynamic tool usage. It takes a user input and a configuration object, passes the input through the main chain (the prompt plus Gemini with tools), and checks whether the AI requested any tool calls (e.g., a web search via Jina). If tool calls are present, it executes the searches, wraps each result in a ToolMessage, and reinvokes the chain with those results to produce a final, context-enriched response. If no tool calls are made, it returns the AI's response directly.
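The control flow above can be sketched with plain Python, no LangChain required. AIResponse and the lambdas here are simplified, hypothetical stand-ins for LangChain's message types and chain invocations, meant only to show the two branches (tools requested vs. direct answer):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AIResponse:
    """Simplified stand-in for a LangChain AIMessage."""
    content: str
    tool_calls: List[Dict] = field(default_factory=list)

def run_with_tools(first: AIResponse,
                   execute_tool: Callable[[Dict], str],
                   finish: Callable[[List[str]], AIResponse]) -> AIResponse:
    """If the model requested tools, run them and ask for a final answer."""
    if not first.tool_calls:
        return first                        # branch 1: model answered directly
    results = [execute_tool(tc) for tc in first.tool_calls]
    return finish(results)                  # branch 2: reinvoke with tool output

# Example: one tool call is executed, then folded into the final reply
draft = AIResponse(content="", tool_calls=[{"args": {"query": "langgraph"}}])
final = run_with_tools(
    draft,
    execute_tool=lambda tc: f"results for {tc['args']['query']}",
    finish=lambda rs: AIResponse(content="; ".join(rs)),
)
print(final.content)  # results for langgraph
```

The real chain does the same thing, except execute_tool is search_tool.invoke and finish is a second main_chain.invoke carrying the ToolMessage history.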
def test_search_chain():
    """Test the search chain with various queries"""
    test_queries = [
        "what is langgraph",
        "latest developments in AI for 2024",
        "how does langchain work with different LLMs"
    ]

    print("\n" + "="*60)
    print("🧪 TESTING ENHANCED SEARCH CHAIN")
    print("="*60)

    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Test {i}: {query}")
        print("-" * 50)
        try:
            response = enhanced_search_chain.invoke(query)
            print(f"✅ Response: {response.content[:300]}...")
            if hasattr(response, 'tool_calls') and response.tool_calls:
                print(f"🛠️ Used {len(response.tool_calls)} tool call(s)")
        except Exception as e:
            print(f"❌ Error: {str(e)}")
        print("-" * 50)
The test_search_chain() function validates the entire assistant setup by running a series of test queries through enhanced_search_chain. It defines a list of diverse prompts, covering tools, AI topics, and LangChain integrations, and prints the results, indicating whether tool calls were used. This verifies that the AI can trigger web searches, process the responses, and return useful information, ensuring a robust and interactive system.
if __name__ == "__main__":
    print("\n🚀 Starting enhanced LangChain + Gemini + Jina Search demo...")
    test_search_chain()

    print("\n" + "="*60)
    print("💬 INTERACTIVE MODE - Ask me anything! (type 'quit' to exit)")
    print("="*60)

    while True:
        user_query = input("\n🗣️ Your question: ").strip()
        if user_query.lower() in ['quit', 'exit', 'bye']:
            print("👋 Goodbye!")
            break
        if user_query:
            try:
                response = enhanced_search_chain.invoke(user_query)
                print(f"\n🤖 Response:\n{response.content}")
            except Exception as e:
                print(f"❌ Error: {str(e)}")
Finally, we run the AI assistant as a script when the file is executed directly. It first calls test_search_chain() to validate the system with predefined queries, then starts an interactive mode that lets users type their own questions and receive AI-generated responses enriched with live search results when needed. The loop continues until the user types 'quit', 'exit', or 'bye', providing an intuitive, hands-on way to interact with the system.
In conclusion, we have built an enhanced AI assistant that leverages LangChain's modular framework, Gemini 2.0 Flash's generative capabilities, and Jina Search's real-time web search. This hybrid approach shows how AI models can extend their knowledge beyond static training data, giving users timely, relevant information from cited sources. You can take the project further by integrating additional tools, customizing the prompts, or deploying the assistant as an API or web app. This foundation opens up many possibilities for building intelligent systems that are both powerful and contextually aware.
Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.