
A Step-by-Step Coding Guide to Integrate Dappier AI's Real-Time Search and Recommendation Tools with OpenAI's Chat API


In this tutorial, we will learn how to harness the power of Dappier AI, a suite of real-time search and recommendation tools, to enhance our conversational applications. By combining Dappier's RealTimeSearchTool with its AIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We walk step by step through setting up a Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We then integrate these tools with an OpenAI chat model (e.g., gpt-3.5-turbo), construct a composable prompt chain, and execute end-to-end queries, all within nine concise notebook cells. Whether we need up-to-the-minute news retrieval or AI-driven content curation, this tutorial provides a flexible framework for building intelligent, data-driven chat experiences.

!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai

We bootstrap our Colab environment by installing the core LangChain libraries, both the Dappier extensions and the community integrations, alongside the official OpenAI client. With these packages in place, we have seamless access to Dappier's real-time search and recommendation tools, the latest LangChain runtimes, and the OpenAI API, all in one environment.

import os
from getpass import getpass


os.environ["DAPPIER_API_KEY"] = getpass("Enter your Dappier API key: ")


os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")

We securely capture our Dappier and OpenAI API credentials at runtime, avoiding hard-coded sensitive keys in the notebook. By using getpass, the prompts keep our input hidden, and setting the values as environment variables makes them available to all subsequent cells without exposing them in logs.
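As a quick sanity check, a minimal helper (our own addition, not part of the Dappier or OpenAI SDKs) can confirm that both variables are present before we make any API calls:

```python
import os

REQUIRED_KEYS = ("DAPPIER_API_KEY", "OPENAI_API_KEY")


def missing_keys(names=REQUIRED_KEYS):
    """Return the required environment variables that are unset or empty."""
    return [n for n in names if not os.environ.get(n)]


# With placeholder values set, nothing is reported missing.
os.environ.setdefault("DAPPIER_API_KEY", "demo-key")
os.environ.setdefault("OPENAI_API_KEY", "demo-key")
print(missing_keys())  # → []
```

Running this right after the getpass cell gives an early, readable failure instead of an opaque 401 later in the chain.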

from langchain_dappier import DappierRealTimeSearchTool


search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)

We import Dappier's real-time search module and create an instance of DappierRealTimeSearchTool, enabling our notebook to execute live web queries. The print statement confirms that the tool has been initialized successfully and is ready to handle search requests.

from langchain_dappier import DappierAIRecommendationTool


recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
print("Recommendation tool ready:", recommendation_tool)

We set up Dappier's AI-powered recommendation engine by specifying our custom data model, the number of similar articles to retrieve, and the source domain for context. The DappierAIRecommendationTool instance will now use the "most_recent" algorithm to pull in the top-k similar articles (here, two) from our specified reference, ready for query-driven content suggestions.

from langchain.chat_models import init_chat_model


llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")

We create an OpenAI chat model instance using gpt-3.5-turbo with a temperature of 0 to ensure consistent responses, and then bind the previously initialized search tool so that the LLM can invoke real-time searches. The final print statement confirms that our LLM is ready to call Dappier's tools within our conversational flows.
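Before wiring the chain, it helps to know the shape of the payload the bound LLM emits when it decides to search. The sketch below is illustrative only: the field names ("name", "args", "id") follow LangChain's ToolCall dict, but the tool name and id values shown are assumptions, not captured output:

```python
# Illustrative ToolCall dict, as found in ai_msg.tool_calls after an LLM
# bound with bind_tools decides to invoke the search tool. The "name" and
# "id" values here are hypothetical; real ones come from the tool and the
# model provider respectively.
example_tool_call = {
    "name": "dappier_real_time_search",   # assumed tool name
    "args": {"query": "latest AI news"},  # arguments the model filled in
    "id": "call_abc123",                  # hypothetical call id
}
print(sorted(example_tool_call))  # → ['args', 'id', 'name']
```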

import datetime
from langchain_core.prompts import ChatPromptTemplate


today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])


llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")

We assemble the conversational chain by first building a ChatPromptTemplate that injects the current date into a system prompt and defines slots for user input and prior messages. By piping the template (|) into llm_with_tools, we create an llm_chain that automatically formats prompts, invokes the LLM (with real-time search capability), and handles responses in a seamless workflow. The final print confirms the chain is ready to drive end-to-end interactions.

from langchain_core.runnables import RunnableConfig, chain


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config
    )


print("✅ tool_chain defined")

We define an end-to-end tool_chain that first sends our prompt to the LLM (capturing any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the AI's initial message and the tool outputs back into the LLM for a cohesive response. The @chain decorator turns this into a single runnable pipeline, letting us simply call tool_chain.invoke(…) to handle both thinking and searching in one step.
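That control flow can be sketched without any network calls. In this dependency-free mock, fake_llm and fake_search stand in for llm_chain and search_tool (both are our own stubs, not LangChain objects), showing the same plan-execute-synthesize loop:

```python
def fake_llm(user_input, tool_results=None):
    """Pass 1 (no tool results yet): request a search. Pass 2: answer."""
    if tool_results is None:
        return {"tool_calls": [{"name": "search", "args": {"query": user_input}}]}
    return {"content": f"Based on search: {tool_results[0]}"}


def fake_search(args):
    """Stand-in for search_tool: echo the query back as a 'result'."""
    return f"result for '{args['query']}'"


def two_pass(user_input):
    ai_msg = fake_llm(user_input)                                       # pass 1: plan tool calls
    tool_msgs = [fake_search(c["args"]) for c in ai_msg["tool_calls"]]  # run the tools
    return fake_llm(user_input, tool_msgs)                              # pass 2: synthesize answer


print(two_pass("latest news")["content"])
# → Based on search: result for 'latest news'
```

The real tool_chain follows the same two-pass shape; the only differences are that llm_chain formats a full prompt and search_tool.batch runs the calls against Dappier's API.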

res = search_tool.invoke({"query": "What happened at the last WrestleMania"})
print("🔍 Search:", res)

We demonstrate a direct query to Dappier's real-time search engine, asking "What happened at the last WrestleMania," and immediately print the structured result. This shows how easily we can use search_tool.invoke to fetch up-to-the-moment information and inspect the raw response in the notebook.

rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)


out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)

Finally, we showcase both the recommendation and full-chain workflows in action. First, we call recommendation_tool.invoke with "latest sports news" to fetch relevant articles from our custom data model and print the suggestions. Then we run tool_chain.invoke("Who won the last Nobel Prize?") to perform an end-to-end LLM query combined with real-time search, printing the AI's synthesized answer grounded in live data.

In conclusion, we now have a robust baseline for embedding Dappier AI capabilities into any conversational workflow. We have seen how effortlessly Dappier's real-time search empowers our LLM to access fresh facts, while the recommendation tool lets us deliver contextually relevant insights from proprietary data sources. From here, we can customize search parameters (e.g., refining query filters) or fine-tune recommendation settings (e.g., adjusting similarity thresholds and reference domains) to suit our domain.
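As a sketch of that tuning, the recommendation tool can simply be re-instantiated with different knobs. The values below are illustrative assumptions, not recommendations; in particular, search_algorithm values other than "most_recent" should be checked against the langchain-dappier documentation before use:

```python
from langchain_dappier import DappierAIRecommendationTool

# Hypothetical re-configuration; swap in your own data model and domain.
tuned_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",  # same custom data model as above
    similarity_top_k=5,           # widen the pool of similar articles
    ref="sportsnaut.com",         # reference domain for context
    num_articles_ref=3,           # pull three articles from the reference
    search_algorithm="semantic",  # assumed alternative to "most_recent"
)
```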


Check out the Dappier Platform and Notebook here.



Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Materials Science, he is exploring new advancements and creating opportunities to contribute.
