
Attributes

Attributes are like metadata, but specific to each span type; the main difference is that attributes are stricter in type.

For example, when setting attributes for the retriever span type, you are expected to provide an embedding input of type str representing the text to be embedded. This is also why you should avoid setting inputs and outputs manually for default span types, as doing so can lead to typing errors.

⚠️

Setting attributes will make the tracing UI easier to navigate on Confident AI, but is by no means required. You also cannot set attributes for custom span types.

Set Attributes At Runtime

Attributes are set at runtime using the update_current_span() function with its attributes argument. This function updates the attributes of the CURRENT span, which is the span created by the nearest enclosing @observe decorator currently being traced. For example, if you have nested spans like:

```python
from deepeval.tracing import observe, update_current_span, RetrieverAttributes

@observe(type="custom")
def outer_function(query: str):
    @observe(type="retriever")
    def inner_function():
        fetched_documents = ["doc1", "doc2"]
        # Here, update_current_span() will update the retriever span
        update_current_span(
            attributes=RetrieverAttributes(
                embedding_input=query,
                retrieval_context=fetched_documents,
            )
        )

    inner_function()
```

The current span is determined using Python’s context variables, which automatically track the active span based on the execution context. This means you don’t need to manually pass span references around - the system knows which span you’re currently executing within.
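For example, a plain helper function called from inside an observed function can still update that function's span, since the active span is resolved from the execution context rather than from a reference you pass around. Below is a minimal sketch, where record_llm_attributes is an illustrative helper and not part of deepeval:

```python
from deepeval.tracing import observe, update_current_span, LlmAttributes

def record_llm_attributes(prompt: str, output: str):
    # No span object is passed in: update_current_span() resolves the
    # nearest enclosing @observe span from Python's context variables.
    update_current_span(attributes=LlmAttributes(input=prompt, output=output))

@observe(type="llm", model="gpt-4")
def generate_response(prompt: str) -> str:
    output = "Generated response"
    record_llm_attributes(prompt, output)  # updates the LLM span above
    return output
```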

LLM attributes

LLM attributes track the input, output, and token usage of language model calls. It is highly RECOMMENDED that you set the attributes for an LLM span.

```python
from deepeval.tracing import observe, update_current_span, LlmAttributes

@observe(type="llm", model="gpt-4")
def generate_response(prompt: str) -> str:
    output = "Generated response"
    update_current_span(
        attributes=LlmAttributes(input=prompt, output=output)
    )
    return output
```

There are TWO mandatory and TWO optional parameters for LlmAttributes:

  • input: The prompt provided to the language model, of type List[Dict[str, Any]] OR str.
  • output: The response generated by the language model, of type str.
  • [Optional] input_token_count: The number of tokens in the input, of type int.
  • [Optional] output_token_count: The number of tokens in the generated response, of type int.

Note that for input, you would typically either pass the list of messages as a list of {"role": <role>, "content": <content>} dictionaries, exactly as you would to your LLM provider's API, or simply pass the user input as a plain str.
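As a minimal sketch, the messages list can be passed through to input as-is, together with the optional token counts (the literal token values below are placeholders; in practice you would copy them from your LLM provider's usage data):

```python
from typing import Any, Dict, List

from deepeval.tracing import observe, update_current_span, LlmAttributes

@observe(type="llm", model="gpt-4")
def chat(messages: List[Dict[str, Any]]) -> str:
    output = "Generated response"
    update_current_span(
        attributes=LlmAttributes(
            input=messages,          # provider-style {"role": ..., "content": ...} dicts
            output=output,
            input_token_count=42,    # placeholder; copy from your provider's usage data
            output_token_count=7,    # placeholder; copy from your provider's usage data
        )
    )
    return output

chat([{"role": "user", "content": "What is LLM tracing?"}])
```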

If cost_per_input_token is not set in the @observe decorator, setting input_token_count in the LLM attributes will not help calculate cost. The same applies to output tokens.
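If you do want cost calculated from these counts, set the per-token prices on the decorator as well. A minimal sketch, assuming @observe accepts a cost_per_output_token argument that mirrors cost_per_input_token (the prices shown are placeholders):

```python
from deepeval.tracing import observe, update_current_span, LlmAttributes

@observe(
    type="llm",
    model="gpt-4",
    cost_per_input_token=0.00003,   # placeholder price per input token
    cost_per_output_token=0.00006,  # assumed parameter, mirroring cost_per_input_token
)
def generate_response(prompt: str) -> str:
    output = "Generated response"
    update_current_span(
        attributes=LlmAttributes(
            input=prompt,
            output=output,
            input_token_count=42,
            output_token_count=7,
        )
    )
    return output
```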

Retriever attributes

Retriever attributes track the embedding_input and retrieval_context in RAG pipelines. It is highly RECOMMENDED that you set the attributes for a retriever span.

```python
from typing import List

from deepeval.tracing import observe, update_current_span, RetrieverAttributes

@observe(type="retriever", embedder="text-embedding-ada-002")
def retrieve_documents(query: str) -> List[str]:
    fetched_documents = ["doc1", "doc2"]
    update_current_span(
        attributes=RetrieverAttributes(
            embedding_input=query,
            retrieval_context=fetched_documents,
        )
    )
    return fetched_documents
```

There are TWO mandatory parameters for RetrieverAttributes:

  • embedding_input: The text of type str that needs to be embedded for vector search.
  • retrieval_context: The list of strings (List[str]) representing the relevant documents or text chunks retrieved from your vector store.
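In a real RAG pipeline the retrieval context usually comes from a vector store query. A sketch assuming a hypothetical vector_store client with a search() method (not a deepeval API):

```python
from typing import List

from deepeval.tracing import observe, update_current_span, RetrieverAttributes

@observe(type="retriever", embedder="text-embedding-ada-002")
def retrieve_documents(query: str, vector_store) -> List[str]:
    # vector_store.search() is a hypothetical client method returning text chunks
    fetched_documents = vector_store.search(query, top_k=3)
    update_current_span(
        attributes=RetrieverAttributes(
            embedding_input=query,                # the text that was embedded
            retrieval_context=fetched_documents,  # the chunks returned by the store
        )
    )
    return fetched_documents
```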

Tool attributes

Tool attributes track the input parameters and output of tool executions.

```python
from deepeval.tracing import observe, update_current_span, ToolAttributes

@observe(type="tool")
def web_search(query: str) -> str:
    result = "Search results"
    update_current_span(
        attributes=ToolAttributes(input_parameters={"query": query}, output=result)
    )
    return result
```

There are TWO optional parameters for ToolAttributes:

  • [Optional] input_parameters: The parameters passed to the tool function, of type Dict. Defaults to the function's keyword arguments.
  • [Optional] output: The result returned by the tool function, of type Any. Defaults to the function's return value.
💡
Tip

If update_current_span(attributes=...) is not called for a tool span, deepeval will automatically use the function's inputs and output as the input_parameters and output.
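For example, a tool span that never calls update_current_span() still ends up with input_parameters and an output, taken from the function call itself:

```python
from deepeval.tracing import observe

@observe(type="tool")
def web_search(query: str) -> str:
    # No update_current_span() call here: deepeval falls back to the
    # function's kwargs for input_parameters and its return value for output.
    return "Search results"

web_search(query="latest deepeval release")
```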

Agent attributes

Agent attributes track the input and output of agent decisions and actions.

```python
from deepeval.tracing import observe, update_current_span, AgentAttributes

@observe(type="agent", available_tools=["web_search"])
def research_agent(query: str) -> str:
    response = "Agent response"
    update_current_span(
        attributes=AgentAttributes(input=query, output=response)
    )
    return response
```

There are TWO optional parameters for AgentAttributes:

  • input: The input to the agent, of type Any, typically the initial query or task description. Defaults to the function's keyword arguments.
  • output: The agent's response or output, of type Any, including any actions taken or results produced. Defaults to the function's return value.

Similar to the tool span, an agent span will take the @observe decorated function's input and output as the input and output of its attributes.
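For example, an agent span with no explicit attributes set:

```python
from deepeval.tracing import observe

@observe(type="agent", available_tools=["web_search"])
def research_agent(query: str) -> str:
    # No attributes are set explicitly, so the decorated function's input
    # and output are used as the agent span's input and output.
    return "Agent response"

research_agent("Summarize the latest LLM tracing features")
```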
