Llama Stack Inference API for generating completions, chat completions, and embeddings.
This API provides the raw interface to the underlying models. Two kinds of models are supported:
- LLMs: these models generate "raw" text completions and "chat" (conversational) completions.
- Embedding models: these models generate vector embeddings for use in semantic search.
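The chat-oriented endpoints below accept an OpenAI-compatible request body. As a minimal sketch, this helper builds that body; the base URL and model ID are placeholders, not values from this document, and `build_chat_request` is a hypothetical helper, not part of the Llama Stack SDK:

```python
import json

# Placeholder values -- substitute your own deployment URL and a model
# registered with your Llama Stack server (both are assumptions here).
BASE_URL = "http://localhost:8321"
MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"

def build_chat_request(messages, model_id=MODEL_ID, stream=False):
    """Build the JSON body for an OpenAI-compatible chat completion call.

    `messages` is a list of {"role": ..., "content": ...} dicts, as in the
    OpenAI chat format.
    """
    return {
        "model": model_id,
        "messages": messages,
        "stream": stream,
    }

body = build_chat_request([{"role": "user", "content": "Hello!"}])
print(json.dumps(body, indent=2))
```

The same body shape works for both streaming and non-streaming calls; only the `stream` flag changes.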
📄️ Generate chat completions for a batch of messages using the specified model.
📄️ Generate completions for a batch of content using the specified model.
📄️ Generate embeddings for content pieces using the specified model.
📄️ Describe a chat completion by its ID.
📄️ List all chat completions.
📄️ Generate an OpenAI-compatible chat completion for the given messages using the specified model.
📄️ Generate an OpenAI-compatible completion for the given prompt using the specified model.
📄️ Generate OpenAI-compatible embeddings for the given input using the specified model.
📄️ Rerank a list of documents based on their relevance to a query.
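The embedding endpoints return vectors, and for semantic search those vectors are typically compared with cosine similarity. A minimal illustration of that comparison, using toy vectors rather than real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors.

    Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real models return hundreds of dimensions.
query_vec = [0.1, 0.9, 0.2]
doc_vecs = [[0.1, 0.8, 0.3], [0.9, 0.1, 0.0]]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(range(len(doc_vecs)),
                key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                reverse=True)
print(ranked)
```

This score-and-sort step is conceptually what the rerank endpoint does server-side, though its actual scoring model is determined by the deployment.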