Version: v0.2.23

Inference

Overview

Llama Stack Inference API for generating completions, chat completions, and embeddings.

This API provides the raw interface to the underlying models. Two kinds of models are supported:

  • LLMs: these models generate "raw" text completions and "chat" (conversational) completions.
  • Embedding models: these models generate embeddings, typically used for semantic search.
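
The two kinds of requests above can be sketched with the `llama-stack-client` Python SDK. This is a minimal sketch, not the definitive API surface: the server URL, the model IDs, and the response attribute names are assumptions that depend on your installed version and configured providers.

```python
# Sketch of the two request shapes the Inference API accepts.
# The helper functions only build request payloads; the guarded block
# at the bottom shows how they might be sent with llama-stack-client.


def chat_request(model_id: str, prompt: str) -> dict:
    # A chat completion request: a model plus a list of role-tagged messages.
    return {
        "model_id": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }


def embeddings_request(model_id: str, texts: list[str]) -> dict:
    # An embeddings request: a model plus the texts to embed.
    return {"model_id": model_id, "contents": texts}


if __name__ == "__main__":
    # Requires a running Llama Stack server and the llama-stack-client
    # package; the URL and model names below are placeholders.
    from llama_stack_client import LlamaStackClient

    client = LlamaStackClient(base_url="http://localhost:8321")

    chat = client.inference.chat_completion(
        **chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
    )
    print(chat.completion_message.content)

    emb = client.inference.embeddings(
        **embeddings_request("all-MiniLM-L6-v2", ["semantic search query"])
    )
    print(len(emb.embeddings[0]))
```

Which model IDs are valid depends on the providers registered with your stack; `llama-stack-client models list` (or the models API) shows what is available.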

This section contains documentation for all available providers for the inference API.