LangChain
LangChain is a framework for building LLM applications.
Quickstart
Confident AI provides a CallbackHandler that can be used to trace LangChain's execution.
Install the following packages:
pip install -U deepeval langchain langchain-openai
Log in with your API key and configure DeepEval's CallbackHandler as a callback for LangChain:
main.py
import os
import time

from langchain.chat_models import init_chat_model

from deepeval import login_with_confident_api_key
from deepeval.integrations.langchain.callback import CallbackHandler

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
login_with_confident_api_key("<your-confident-api-key>")


def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


llm = init_chat_model("gpt-4o-mini", model_provider="openai")
llm_with_tools = llm.bind_tools([multiply])

# Pass the CallbackHandler via the config so this invocation is traced
llm_with_tools.invoke(
    "What is 3 * 12?",
    config={"callbacks": [CallbackHandler()]},
)

time.sleep(5)  # Wait for the trace to be published
Run your script:
python main.py
You can view the traces directly in the Observatory by clicking the link printed in the console output.
💡 DeepEval's CallbackHandler is an implementation of LangChain's BaseCallbackHandler.
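Because the handler conforms to LangChain's standard callback interface, it can be attached anywhere LangChain accepts callbacks, not just per invocation. The following is a minimal sketch (assuming the same environment variables and API keys as the quickstart above) that binds the handler to a runnable with with_config so every subsequent call is traced:

import os
import time

from langchain.chat_models import init_chat_model

from deepeval import login_with_confident_api_key
from deepeval.integrations.langchain.callback import CallbackHandler

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
login_with_confident_api_key("<your-confident-api-key>")

# Bind the handler once so every invocation of this runnable is traced
llm = init_chat_model("gpt-4o-mini", model_provider="openai")
traced_llm = llm.with_config(callbacks=[CallbackHandler()])

traced_llm.invoke("What is 3 * 12?")
traced_llm.invoke("What is 7 * 8?")

time.sleep(5)  # Wait for the traces to be published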