Generate a title for the conversation using LLM.

Response example:

{
  "title": "<string>"
}

Payload to generate a title for a conversation.
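As a hedged illustration only, a request to this endpoint might look like the Python sketch below; the individual payload fields are documented after it. The endpoint URL and the max_length and llm field names are assumptions made for the example, not a confirmed schema.

import requests

# All names below are illustrative assumptions: adjust the base URL,
# endpoint path, and field names to match your actual deployment.
BASE_URL = "http://localhost:3000"
CONVERSATION_ID = "abc123"  # placeholder conversation identifier

payload = {
    "max_length": 80,  # assumed field: cap on title length (1 <= x <= 200)
    "llm": {           # assumed field: optional LLM override for title generation
        "model": "gpt-4o-mini",
        "temperature": 0,
    },
}

resp = requests.post(
    f"{BASE_URL}/conversations/{CONVERSATION_ID}/generate_title",
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["title"])  # response body: {"title": "<string>"}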
Maximum length of the generated title (1 <= x <= 200)
Optional LLM to use for title generation (see the configuration sketch after the attribute list below)
Child attributes:
Model name.
API key.
Custom base URL.
API version (e.g., Azure).
HTTP timeout (s). (x >= 0)
Approx max chars in each event/content sent to the LLM. (x >= 1)
Sampling temperature for response generation. Defaults to 0 for most models and to the provider default for reasoning models. (x >= 0)
The maximum number of input tokens. Note that this is currently unused, and the value at runtime is actually the total tokens in OpenAI (e.g. 128,000 tokens for GPT-4). (x >= 1)
The maximum number of output tokens. This is sent to the LLM. (x >= 1)
The cost per input token. This will be available in logs for the user. (x >= 0)
The cost per output token. This will be available in logs for the user. (x >= 0)
Enable streaming responses from the LLM. When enabled, the provided on_token callback in .completions and .responses will be invoked for each chunk of tokens.
Modify params allows litellm to perform transformations, such as adding a default message when a message is empty.
If the model is vision capable, this option allows disabling image processing (useful for cost reduction).
Disable use of stop words.
Enable caching of prompts.
Enable logging of completions.
The folder to log LLM completions to. Required if log_completions is True.
A custom tokenizer to use for token counting.
Whether to use native tool calling.
Force use of the string content serializer when sending to the LLM API. If None (default), this is auto-detected based on the model. Useful for providers that do not support list content, such as HuggingFace and Groq.
The effort to put into reasoning. This is a string that can be one of 'low', 'medium', 'high', or 'none'. Applies to all reasoning models.
The level of detail for reasoning summaries. This is a string that can be one of 'auto', 'concise', or 'detailed'. Requires a verified OpenAI organization. Only sent when explicitly set.
If True, ask for ['reasoning.encrypted_content'] in the Responses API include.
Retention policy for prompt cache. Only sent for GPT-5+ models; explicitly stripped for all other models.
The budget tokens for extended thinking, supported by Anthropic models.
The seed to use for random number generation.
Unique usage identifier for the LLM. Used for registry lookups, telemetry, and spend tracking.
Additional key-value pairs to pass to litellm's extra_body parameter. This is useful for custom inference endpoints that need additional parameters for configuration, routing, or advanced features.
NOTE: Not all LLM providers support extra_body parameters. Some providers (e.g., OpenAI) may reject requests with unrecognized options. This is commonly supported by:
- LiteLLM proxy servers (routing metadata, tracing)
- vLLM endpoints (return_token_ids, etc.)
- Custom inference clusters
Examples:
- Proxy routing: {'trace_version': '1.0.0', 'tags': ['agent:my-agent']}
- vLLM features: {'return_token_ids': True}
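To make the attribute list above concrete, here is a minimal sketch of an LLM configuration object assembled from those descriptions. Every field name in it (model, api_key, timeout, temperature, max_output_tokens, stream, reasoning_effort, litellm_extra_body) is an assumption inferred from the descriptions, not a confirmed schema.

# Hypothetical LLM configuration; field names are inferred from the
# attribute descriptions above and may differ from the real schema.
llm_config = {
    "model": "gpt-4o-mini",     # Model name
    "api_key": "sk-...",        # API key (placeholder)
    "timeout": 60,              # HTTP timeout in seconds (x >= 0)
    "temperature": 0,           # sampling temperature (x >= 0)
    "max_output_tokens": 1024,  # sent to the LLM (x >= 1)
    "stream": False,            # True invokes on_token per chunk of tokens
    "reasoning_effort": "low",  # one of 'low', 'medium', 'high', 'none'
    "litellm_extra_body": {     # provider-specific extras; see NOTE above
        "tags": ["agent:title-generator"],
    },
}

Such an object would be passed as the llm field of the payload shown in the earlier request sketch.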
Successful Response
Response containing the generated conversation title.
The generated title for the conversation