Sampling
Sampling allows you to control what percentage of traces are sent to Confident’s observatory.
💡 This is useful for high-volume applications where you may want to reduce the amount of data being sent while still maintaining visibility into your system's performance.
Configure Sample Rate
Configure the sampling rate by setting the CONFIDENT_SAMPLE_RATE environment variable, which represents the proportion of traces that will be sent to the observatory.
export CONFIDENT_SAMPLE_RATE=0.5
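If setting shell variables isn't convenient in your deployment, the same variable can be set from Python. This is a minimal sketch, assuming deepeval reads CONFIDENT_SAMPLE_RATE when tracing is initialized, so it should run before deepeval is imported:

import os

# Equivalent to the shell export above; set this as early as possible so the
# value is visible when the tracing configuration is loaded.
os.environ["CONFIDENT_SAMPLE_RATE"] = "0.5"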
Alternatively, you can set the sampling rate directly in code:
Python
from deepeval.tracing import observe, trace_manager
from openai import OpenAI

client = OpenAI()

# Send roughly 50% of traces to the observatory
trace_manager.configure(sample_rate=0.5)

@observe()
def llm_app(query: str):
    return (
        client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": query}],
        )
        .choices[0]
        .message.content
    )

for _ in range(10):
    llm_app("Write me a poem.")  # roughly half of these traces will be sent
Traces are sampled at random; the rest are dropped.
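Conceptually, per-trace sampling amounts to a single uniform random draw compared against the configured rate. The sketch below illustrates the idea only; it is not deepeval's actual implementation, and should_sample is a hypothetical helper:

import random

def should_sample(sample_rate: float) -> bool:
    # Keep a trace with probability equal to sample_rate; drop it otherwise.
    return random.random() < sample_rate

kept = sum(should_sample(0.5) for _ in range(10))
print(f"kept {kept}/10 traces")  # roughly half on average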