
LiteLLM

LiteLLM is a tool that makes it easy for developers to connect to and use many different large language models (LLMs)—like those from OpenAI, Anthropic, Hugging Face, and more—through a single, simple interface.
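For example, the same completion call can target different providers simply by changing the model string. A minimal sketch (model names are illustrative, and the relevant provider API keys are assumed to be set in your environment):

import litellm

# The call shape stays the same; only the model string changes per provider.
openai_response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
)
anthropic_response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Say hello"}],
)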

Quickstart

Confident AI integrates with LiteLLM to trace your LLM calls in the Observatory, whether you use the LiteLLM Proxy or the SDK.

LiteLLM Proxy is a server (or gateway) that acts as a central point for accessing many different language models through a single API endpoint, whereas the SDK is a client library that lets you call LiteLLM directly from your own application.

Setup via SDK

Install the LiteLLM SDK:

pip install litellm

Use litellm.success_callback and litellm.failure_callback to trace your LLM calls:

import os
import litellm

os.environ['OPENAI_API_KEY'] = '<your-openai-api-key>'
os.environ['CONFIDENT_API_KEY'] = '<your-confident-api-key>'

# Send traces to Confident AI on both successful and failed calls
litellm.success_callback = ["deepeval"]
litellm.failure_callback = ["deepeval"]

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "What's the weather like in San Francisco?"}
        ],
    )
    print(response)
except Exception as e:
    print(e)
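The same callbacks should also fire for asynchronous calls. A minimal sketch using litellm.acompletion, assuming the same environment variables and callbacks are configured as above:

import asyncio
import litellm

litellm.success_callback = ["deepeval"]
litellm.failure_callback = ["deepeval"]

async def main():
    # Async completions go through the same success/failure callbacks.
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())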

Setup via Proxy

Add deepeval to the success_callback and failure_callback lists in your config.yaml file:

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o

litellm_settings:
  success_callback: ["deepeval"]
  failure_callback: ["deepeval"]
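If you route several providers through the same proxy, the config might look like the sketch below (model names are illustrative; the os.environ/ syntax tells LiteLLM to read the key from an environment variable):

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-5-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  success_callback: ["deepeval"]
  failure_callback: ["deepeval"]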

Set your environment variables in a .env file:

OPENAI_API_KEY=<your-openai-api-key>
CONFIDENT_API_KEY=<your-confident-api-key>

Start your proxy server:

litellm --config config.yaml --debug

Make a request to the proxy server:

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful math tutor. Guide the user through the solution step by step."
      },
      {
        "role": "user",
        "content": "how can I solve 8x + 7 = -23"
      }
    ]
  }'
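Equivalently, you can call the proxy from Python with the standard OpenAI client by pointing base_url at the proxy. A sketch (the api_key must be a key your proxy accepts, such as the sk-1234 bearer token above):

from openai import OpenAI

# The proxy exposes an OpenAI-compatible API, so the standard client works.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful math tutor. Guide the user through the solution step by step."},
        {"role": "user", "content": "how can I solve 8x + 7 = -23"},
    ],
)
print(response.choices[0].message.content)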
💡 There are two ways to start the LiteLLM proxy server from the CLI: with the pip package or with a Docker container. The example above uses the pip package; a Docker-based sketch is shown below. Refer to the LiteLLM documentation for more information.
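A Docker-based start might look roughly like this (the image name, port, and mount path follow LiteLLM's documented defaults at the time of writing; verify them against the current LiteLLM docs):

docker run \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  -e CONFIDENT_API_KEY=<your-confident-api-key> \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml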
