# Inference

## Overview
The Llama Stack Inference API generates completions, chat completions, and embeddings.

This API provides the raw interface to the underlying models. Two kinds of models are supported (see the usage sketch after this list):

- LLM models: these generate "raw" and "chat" (conversational) completions.
- Embedding models: these generate embeddings used for semantic search.
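
For illustration, here is a minimal sketch of calling both model kinds through the `llama-stack-client` Python SDK. It assumes a Llama Stack server running locally on port 8321; the model IDs shown are hypothetical placeholders, and exact method names and response shapes may differ across SDK versions.

```python
from llama_stack_client import LlamaStackClient

# Assumes a Llama Stack server is running locally on the default port.
client = LlamaStackClient(base_url="http://localhost:8321")

# Chat (conversational) completion from an LLM model.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model ID
    messages=[{"role": "user", "content": "What is semantic search?"}],
)
print(response.completion_message.content)

# Embeddings from an embedding model, e.g. for semantic search.
embeddings = client.inference.embeddings(
    model_id="all-MiniLM-L6-v2",  # hypothetical embedding model ID
    contents=["a document to index"],
)
print(len(embeddings.embeddings[0]))  # dimensionality of the embedding vector
```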
This section documents all available providers for the Inference API.
## Providers
- inline::meta-reference
- inline::sentence-transformers
- remote::anthropic
- remote::bedrock
- remote::cerebras
- remote::databricks
- remote::fireworks
- remote::gemini
- remote::groq
- remote::hf::endpoint
- remote::hf::serverless
- remote::llama-openai-compat
- remote::nvidia
- remote::ollama
- remote::openai
- remote::passthrough
- remote::runpod
- remote::sambanova
- remote::tgi
- remote::together
- remote::vertexai
- remote::vllm
- remote::watsonx