# Configuring a "Stack"
The Llama Stack runtime configuration is specified as a YAML file. Here is a simplified version of an example configuration file for the Ollama distribution:
The default `config.yaml` files generated by templates are starting points for your configuration. For guidance on customizing these files for your specific needs, see [Customizing Your config.yaml Configuration](customizing_run_yaml).
```yaml
version: 2
apis:
- agents
- inference
- vector_io
- safety
providers:
  inference:
  - provider_id: ollama
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
  vector_io:
  - provider_id: faiss
    provider_type: inline::faiss
    config:
      kvstore:
        type: sqlite
        namespace: null
        db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/faiss_store.db
  safety:
  - provider_id: llama-guard
    provider_type: inline::llama-guard
    config: {}
  agents:
  - provider_id: builtin
    provider_type: inline::builtin
    config:
      persistence:
        agent_state:
          backend: kv_default
          namespace: agents
        responses:
          backend: sql_default
          table_name: responses
storage:
  backends:
    kv_default:
      type: kv_sqlite
      db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/kvstore.db
    sql_default:
      type: sql_sqlite
      db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/sqlstore.db
  stores:
    metadata:
      backend: kv_default
      namespace: registry
    inference:
      backend: sql_default
      table_name: inference_store
      max_write_queue_size: 10000
      num_writers: 4
    conversations:
      backend: sql_default
      table_name: openai_conversations
    prompts:
      backend: kv_default
      namespace: prompts
models:
- metadata: {}
  model_id: ${env.INFERENCE_MODEL}
  provider_id: ollama
  provider_model_id: null
shields: []
server:
  port: 8321
  auth:
    provider_config:
      type: "oauth2_token"
      jwks:
        uri: "https://my-token-issuing-svc.com/jwks"
```
Let's break this down into the different sections. The first section specifies the set of APIs that the stack server will serve:
```yaml
apis:
- agents
- inference
- vector_io
- safety
```
## Providers
Next up is the most critical part: the set of providers that the stack will use to serve the above APIs. Consider the inference API:
```yaml
providers:
  inference:
  # provider_id is a string you can choose freely
  - provider_id: ollama
    # provider_type is a string that specifies the type of provider.
    # in this case, the provider for inference is ollama and it runs remotely (outside of the distribution)
    provider_type: remote::ollama
    # config is a dictionary that contains the configuration for the provider.
    # in this case, the configuration is the url of the ollama server
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
```
A few things to note:
- A provider instance is identified with an (id, type, config) triplet.
- The id is a string you can choose freely.
- You can instantiate any number of provider instances of the same type.
- The configuration dictionary is provider-specific.
- Notice that configuration can reference environment variables (with default values), which are expanded at runtime. When you run a stack server, you can set environment variables in your shell before running `llama stack run` to override the default values.
## Environment Variable Substitution
Llama Stack supports environment variable substitution in configuration values using the `${env.VARIABLE_NAME}` syntax. This allows you to externalize configuration values and provide different settings for different environments. The syntax is inspired by bash parameter expansion and follows similar patterns.
### Basic Syntax
The basic syntax for environment variable substitution is:
```yaml
config:
  api_key: ${env.API_KEY}
  url: ${env.SERVICE_URL}
```
If the environment variable is not set, the server will raise an error during startup.
### Default Values
You can provide default values using the `:=` operator:
```yaml
config:
  url: ${env.OLLAMA_URL:=http://localhost:11434}
  port: ${env.PORT:=8321}
  timeout: ${env.TIMEOUT:=60}
```
If the environment variable is not set, the default value is used (e.g. `http://localhost:11434` for `url`). Empty defaults are allowed, so `url: ${env.OLLAMA_URL:=}` resolves to `None` if the environment variable is not set.
### Conditional Values
You can use the `:+` operator to provide a value only when the environment variable is set:
```yaml
config:
  # Only include this field if ENVIRONMENT is set
  environment: ${env.ENVIRONMENT:+production}
```
If the environment variable is set, the value after `:+` is used. If it is not set, the field is omitted with a `None` value.

Do not use the conditional syntax (`${env.OLLAMA_URL:+}`) when you mean an empty default (`${env.OLLAMA_URL:=}`). The empty default resolves to `None` when the variable is unset; the conditional form should only be used for values that are meaningful precisely because the environment variable is set.
### Examples
Here are some common patterns:
```yaml
# Required environment variable (will error if not set)
api_key: ${env.OPENAI_API_KEY}

# Optional with default
base_url: ${env.API_BASE_URL:=https://api.openai.com/v1}

# Conditional field
debug_mode: ${env.DEBUG:+true}

# Optional field that becomes None if not set
optional_token: ${env.OPTIONAL_TOKEN:+}
```
### Runtime Override
You can override environment variables at runtime by setting them in your shell before starting the server:
```bash
# Set environment variables in your shell
export API_KEY=sk-123
export BASE_URL=https://custom-api.com
llama stack run --config config.yaml
```
### Type Safety
The environment variable substitution system is type-safe:
- String values remain strings
- Empty defaults (`${env.VAR:=}`) are converted to `None` for fields that accept `str | None`
- Numeric defaults are properly typed (e.g., `${env.PORT:=8321}` becomes an integer)
- Boolean defaults work correctly (e.g., `${env.DEBUG:=false}` becomes a boolean)
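The behavior of the three substitution forms can be sketched with a tiny resolver. This is an illustrative simplification only, not the actual Llama Stack implementation (which also handles type coercion and substitution inside longer strings):

```python
import os
import re

# Simplified sketch of the ${env.VAR}, ${env.VAR:=default} and ${env.VAR:+value}
# rules described above. Illustrative only -- not the actual Llama Stack code.
_PATTERN = re.compile(r"\$\{env\.(\w+)(?::(=|\+)([^}]*))?\}")

def substitute(value: str):
    match = _PATTERN.fullmatch(value)
    if not match:
        return value                      # plain literal, no substitution
    name, op, text = match.groups()
    env_value = os.environ.get(name)
    if op == "=":                         # default: use env var, else the default
        return env_value if env_value is not None else (text or None)
    if op == "+":                         # conditional: value only when var is set
        return (text or None) if env_value is not None else None
    if env_value is None:                 # bare ${env.VAR}: required
        raise ValueError(f"Environment variable '{name}' is not set")
    return env_value
```

With `OLLAMA_URL` unset, `${env.OLLAMA_URL:=http://localhost:11434}` resolves to the default URL, while `${env.DEBUG:+true}` resolves to `None`.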
## Resources
Let's look at the `models` section:
```yaml
models:
- metadata: {}
  model_id: ${env.INFERENCE_MODEL}
  provider_id: ollama
  provider_model_id: null
  model_type: llm
```
A Model is an instance of a "Resource" (see Concepts) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage clients to register models before using them, some Stack servers may come up with a list of models that are already known and available.
What's with the `provider_model_id` field? This is the identifier for the model inside the provider's model catalog. The `model_id` field is provided for configuration purposes but is not used as part of the model identifier.

**Important:** Models are identified as `provider_id/provider_model_id` in the system and when making API calls. When `provider_model_id` is omitted, the server sets it to the same value as `model_id`.
Examples:
- Config: `model_id: llama3.2`, `provider_id: ollama`, `provider_model_id: null` → Access as: `ollama/llama3.2`
- Config: `model_id: my-llama`, `provider_id: vllm-inference`, `provider_model_id: llama-3-2-3b` → Access as: `vllm-inference/llama-3-2-3b` (the `model_id` is not used in the identifier)
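The identifier rule amounts to a one-liner; a hypothetical helper, just to illustrate the fallback:

```python
def model_identifier(model_id: str, provider_id: str, provider_model_id=None) -> str:
    # When provider_model_id is omitted, the server sets it to model_id
    return f"{provider_id}/{provider_model_id or model_id}"

print(model_identifier("llama3.2", "ollama"))                          # ollama/llama3.2
print(model_identifier("my-llama", "vllm-inference", "llama-3-2-3b"))  # vllm-inference/llama-3-2-3b
```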
If you need to conditionally register a model in the configuration, such as only when specific environment variable(s) are set, you can use the special `__disabled__` string as the default value of an environment variable substitution, as shown below:
```yaml
models:
- metadata: {}
  model_id: ${env.INFERENCE_MODEL:=__disabled__}
  provider_id: ollama
  provider_model_id: ${env.INFERENCE_MODEL:=__disabled__}
```
The snippet above will only register this model if the environment variable `INFERENCE_MODEL` is set and non-empty. If the environment variable is not set, the model will not get registered at all.
## Server Configuration
The `server` section configures the HTTP server that serves the Llama Stack APIs:
```yaml
server:
  port: 8321                        # Port to listen on (default: 8321)
  tls_certfile: "/path/to/cert.pem" # Optional: Path to TLS certificate for HTTPS
  tls_keyfile: "/path/to/key.pem"   # Optional: Path to TLS key for HTTPS
  cors: true                        # Optional: Enable CORS (dev mode) or full config object
```
### CORS Configuration
CORS (Cross-Origin Resource Sharing) can be configured in two ways:

Local development (allows localhost origins only):
```yaml
server:
  cors: true
```

Explicit configuration (custom origins and settings):
```yaml
server:
  cors:
    allow_origins: ["https://myapp.com", "https://app.example.com"]
    allow_methods: ["GET", "POST", "PUT", "DELETE"]
    allow_headers: ["Content-Type", "Authorization"]
    allow_credentials: true
    max_age: 3600
```

When `cors: true`, the server enables secure localhost-only access for local development. For production, specify exact origins to maintain security.
## Authentication Configuration

**Breaking Change (v0.2.14):** The authentication configuration structure has changed. The previous format with `provider_type` and `config` fields has been replaced with a unified `provider_config` field that includes the `type` field. Update your configuration files accordingly.

The `auth` section configures authentication for the server. When configured, all API requests must include a valid Bearer token in the `Authorization` header:
```
Authorization: Bearer <token>
```
### Conditional Authentication
Authentication can be conditionally enabled or disabled using environment variables with the conditional syntax (`:+`). This is useful for deploying the same configuration to different environments where auth may or may not be required.

Example:
```yaml
server:
  auth:
    provider_config:
      type: ${env.AUTH_PROVIDER:+oauth2_token}
      audience: "llama-stack"
      jwks:
        uri: ${env.KEYCLOAK_URL}/realms/llamastack/protocol/openid-connect/certs
      issuer: ${env.KEYCLOAK_URL}/realms/llamastack
```
Behavior:
- If `AUTH_PROVIDER` is set (to any value): authentication is enabled with OAuth2
- If `AUTH_PROVIDER` is NOT set: authentication is completely disabled (no middleware added)

This allows you to:
- Run without authentication in local development (unset the env var)
- Enable authentication in staging/production (set the env var)
- Use the same config.yaml across all environments
Important notes:
- The `type` field uses the conditional syntax to control whether the entire auth provider is enabled
- When the env var is not set, the entire `provider_config` is set to `None` and no authentication middleware is initialized
- Other auth config fields (like `route_policy`) can still be used independently when `provider_config` is disabled
The server supports multiple authentication providers:
### OAuth 2.0/OpenID Connect Provider with Kubernetes
The server can be configured to use service account tokens for authorization, validating these against the Kubernetes API server, e.g.:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      jwks:
        uri: "https://kubernetes.default.svc:8443/openid/v1/jwks"
        token: "${env.TOKEN:+}"
        key_recheck_period: 3600
      tls_cafile: "/path/to/ca.crt"
      issuer: "https://kubernetes.default.svc"
      audience: "https://kubernetes.default.svc"
```
To find your cluster's jwks uri (from which the public key(s) to verify the token signature are obtained), run:
```bash
kubectl get --raw /.well-known/openid-configuration | jq -r .jwks_uri
```
For the `tls_cafile`, you can use the CA certificate of the OIDC provider:
```bash
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.certificate-authority}'
```
For the `issuer`, you can use the OIDC provider's URL:
```bash
kubectl get --raw /.well-known/openid-configuration | jq .issuer
```
The audience can be obtained from a token, e.g. run:
```bash
kubectl create token default --duration=1h | cut -d. -f2 | base64 -d | jq .aud
```
The jwks token is used to authorize access to the jwks endpoint. You can obtain a token by running:
```bash
kubectl create namespace llama-stack
kubectl create serviceaccount llama-stack-auth -n llama-stack
kubectl create token llama-stack-auth -n llama-stack > llama-stack-auth-token
export TOKEN=$(cat llama-stack-auth-token)
```
Alternatively, you can configure the jwks endpoint to allow anonymous access. To do this, make sure
the kube-apiserver runs with --anonymous-auth=true to allow unauthenticated requests
and that the correct RoleBinding is created to allow the service account to access the necessary
resources. If that is not the case, you can create a RoleBinding for the service account to access
the necessary resources:
```yaml
# allow-anonymous-openid.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: allow-anonymous-openid
rules:
- nonResourceURLs: ["/openid/v1/jwks"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: allow-anonymous-openid
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: allow-anonymous-openid
subjects:
- kind: User
  name: system:anonymous
  apiGroup: rbac.authorization.k8s.io
```
And then apply the configuration:
```bash
kubectl apply -f allow-anonymous-openid.yaml
```
The provider extracts user information from the JWT token:
- Username from the `sub` claim becomes a role
- Kubernetes groups become teams
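To see which claims a token carries (and hence which role and teams will be derived from it), you can decode the payload segment locally. A small sketch for inspection only; it does not verify the signature, which the server does against the JWKS:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT to inspect its claims.
    For local inspection only -- no signature verification is performed."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

Applied to a service account token, the `sub` claim is the principal that becomes a role, and the `groups` claim lists the Kubernetes groups that become teams.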
You can easily validate a request by running:
```bash
curl -s -L -H "Authorization: Bearer $(cat llama-stack-auth-token)" http://127.0.0.1:8321/v1/providers
```
### Kubernetes Authentication Provider
The server can be configured to use the Kubernetes SelfSubjectReview API to validate tokens directly against the Kubernetes API server:
```yaml
server:
  auth:
    provider_config:
      type: "kubernetes"
      api_server_url: "https://kubernetes.default.svc"
      claims_mapping:
        username: "roles"
        groups: "roles"
        uid: "uid_attr"
      verify_tls: true
      tls_cafile: "/path/to/ca.crt"
```
Configuration options:
- `api_server_url`: The Kubernetes API server URL (e.g., `https://kubernetes.default.svc:6443`)
- `verify_tls`: Whether to verify TLS certificates (default: `true`)
- `tls_cafile`: Path to CA certificate file for TLS verification
- `claims_mapping`: Mapping of Kubernetes user claims to access attributes

The provider validates tokens by sending a SelfSubjectReview request to the Kubernetes API server at `/apis/authentication.k8s.io/v1/selfsubjectreviews`. The provider extracts user information from the response:
- Username from the `userInfo.username` field
- Groups from the `userInfo.groups` field
- UID from the `userInfo.uid` field
To obtain a token for testing:
```bash
kubectl create namespace llama-stack
kubectl create serviceaccount llama-stack-auth -n llama-stack
kubectl create token llama-stack-auth -n llama-stack > llama-stack-auth-token
```
You can validate a request by running:
```bash
curl -s -L -H "Authorization: Bearer $(cat llama-stack-auth-token)" http://127.0.0.1:8321/v1/providers
```
### GitHub Token Provider
Validates GitHub personal access tokens or OAuth tokens directly:
```yaml
server:
  auth:
    provider_config:
      type: "github_token"
      github_api_base_url: "https://api.github.com"  # Or GitHub Enterprise URL
```
The provider fetches user information from GitHub and maps it to access attributes based on the `claims_mapping` configuration.
### Custom Provider
Validates tokens against a custom authentication endpoint:
```yaml
server:
  auth:
    provider_config:
      type: "custom"
      endpoint: "https://auth.example.com/validate"  # URL of the auth endpoint
```
The custom endpoint receives a POST request with:
```json
{
  "api_key": "<token>",
  "request": {
    "path": "/api/v1/endpoint",
    "headers": {
      "content-type": "application/json",
      "user-agent": "curl/7.64.1"
    },
    "params": {
      "key": ["value"]
    }
  }
}
```
And must respond with:
```json
{
  "access_attributes": {
    "roles": ["admin", "user"],
    "teams": ["ml-team", "nlp-team"],
    "projects": ["llama-3", "project-x"],
    "namespaces": ["research"]
  },
  "message": "Authentication successful"
}
```
If no access attributes are returned, the token is used as a namespace.
## Access control
When authentication is enabled, access to resources is controlled through the `access_policy` attribute of the `auth` config section under `server`. The value for this is a list of access rules.

Each access rule defines a list of actions either to permit or to forbid. It may specify a principal or a resource that must match for the rule to take effect.

Valid actions are `create`, `read`, `update`, and `delete`. The resource to match should be specified as a type-qualified identifier, e.g. `model::my-model` or `vector_db::some-db`; a wildcard for all resources of a type, e.g. `model::*`; or a regex pattern with the `regex:` prefix, e.g. `regex:model::(llama|mistral)-3\.\d+-.*`. If the principal or resource is not specified, it matches all requests.
The valid resource types are `model`, `shield`, `vector_db`, `dataset`, `scoring_function`, `benchmark`, `tool`, `tool_group` and `session`. In addition, stored data such as conversations, conversation items, responses, and files use the `sql_record::<table_name>` resource type (e.g. `sql_record::openai_conversations`, `sql_record::responses`). See Multi-Tenant Isolation for Conversations and Responses for details.
A rule may also specify a condition, either a `when` or an `unless`, with additional constraints as to where the rule applies. The constraints supported at present are:
- `user with <attr-value> in <attr-name>`
- `user with <attr-value> not in <attr-name>`
- `user is owner`
- `user is not owner`
- `user in owners <attr-name>`
- `user not in owners <attr-name>`
The attributes defined for a user will depend on how the auth configuration is defined.
When checking whether a particular action is allowed by the current user for a resource, all the defined rules are tested in order to find a match. If a match is found, the request is permitted or forbidden depending on the type of rule. If no match is found, the request is denied.
If no explicit rules are specified, a default policy is defined with which all users can access all resources defined in config but resources created dynamically can only be accessed by the user that created them.
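The first-match evaluation described above can be sketched as follows. This is a deliberately simplified illustration, not the server's implementation: real matching also handles wildcard and regex resources, `when`/`unless` conditions, and the default ownership policy.

```python
def is_allowed(rules: list, action: str, resource: str, principal: str) -> bool:
    """First matching rule decides; no match means the request is denied."""
    for rule in rules:
        effect = "permit" if "permit" in rule else "forbid"
        spec = rule[effect]
        if action not in spec.get("actions", []):
            continue  # rule does not cover this action
        if "resource" in spec and spec["resource"] != resource:
            continue  # rule targets a different resource
        if "principal" in spec and spec["principal"] != principal:
            continue  # rule targets a different principal
        return effect == "permit"
    return False  # no rule matched: denied by default

rules = [
    {"permit": {"principal": "user-1", "actions": ["create", "read", "delete"]}},
    {"permit": {"principal": "user-2", "actions": ["read"], "resource": "model::model-1"}},
]
```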
Examples:

The following restricts access to particular GitHub users:
```yaml
server:
  auth:
    provider_config:
      type: "github_token"
      github_api_base_url: "https://api.github.com"
    access_policy:
    - permit:
        principal: user-1
        actions: [create, read, delete]
      description: user-1 has full access to all resources
    - permit:
        principal: user-2
        actions: [read]
        resource: model::model-1
      description: user-2 has read access to model-1 only
```
Similarly, the following restricts access to particular Kubernetes service accounts:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      audience: https://kubernetes.default.svc.cluster.local
      issuer: https://kubernetes.default.svc.cluster.local
      tls_cafile: /home/gsim/.minikube/ca.crt
      jwks:
        uri: https://kubernetes.default.svc.cluster.local:8443/openid/v1/jwks
        token: ${env.TOKEN}
    access_policy:
    - permit:
        principal: system:serviceaccount:my-namespace:my-serviceaccount
        actions: [create, read, delete]
      description: specific serviceaccount has full access to all resources
    - permit:
        principal: system:serviceaccount:default:default
        actions: [read]
        resource: model::model-1
      description: default account has read access to model-1 only
```
The following policy assumes that users are defined with roles and teams by whichever authentication system is in use. It allows any user with a valid token to use models, create resources other than models, read and delete resources they created, and read resources created by users who share a team with them:
```yaml
access_policy:
- permit:
    actions: [read]
    resource: model::*
  description: all users have read access to models
- forbid:
    actions: [create, delete]
    resource: model::*
    unless: user with admin in roles
  description: only user with admin role can create or delete models
- permit:
    actions: [create, read, delete]
    when: user is owner
  description: users can create resources other than models and read and delete those they own
- permit:
    actions: [read]
    when: user in owners teams
  description: any user has read access to any resource created by a user with the same team
```
Regex patterns can be used to match resources based on naming conventions. For example, to allow developers to access specific model families:
```yaml
access_policy:
- permit:
    actions: [read]
    resource: regex:model::(llama|mistral)-.*
    when: user with developer in roles
  description: developers can read llama and mistral models
- permit:
    actions: [read]
    resource: regex:model::.*-3\\.\\d+-.*
    when: user with user in roles
  description: users can read version 3.x models
```
**Important:** When using regex patterns, remember to:
- Use the `regex:` prefix to indicate a regex pattern
- Patterns use Python's `re` module syntax (see Python regex documentation)
- Escape special regex characters (e.g., `\\.` for literal dots)
- Use anchors (`^` and `$`) when you need exact matching (the default behavior uses `re.match()`, which anchors at the start but not the end)
- Invalid regex patterns will log a warning and be treated as non-matches
## Multi-Tenant Isolation for Conversations and Responses
In a multi-tenant deployment, you typically want to ensure that each user's conversations, responses, and files are isolated from other users. Unlike registry resources (models, shields, etc.), which are identified by types like `model::my-model`, stored data uses the `sql_record::<table_name>::<record_id>` resource type pattern. Each record is automatically stamped with the authenticated user's identity when created, and the `user is owner` condition can be used to restrict access to only the user who created the record.
The relevant `sql_record` table names are:

| Table Name | Description |
|---|---|
| `openai_conversations` | Conversation sessions |
| `conversation_items` | Messages and items within conversations |
| `responses` | Stored responses (table name is configurable in provider config) |
| `openai_files` | Uploaded files |
The following example shows a complete access policy that allows any authenticated user to use models and create new resources, while ensuring that conversations, responses, and files can only be accessed by the user who created them:
```yaml
server:
  port: 8321
  auth:
    provider_config:
      type: "oauth2_token"
      jwks:
        uri: "https://my-auth-provider.com/jwks"
    access_policy:
    # Allow all authenticated users to use configured models for inference
    - permit:
        actions: [read]
        resource: model::*
      description: Any authenticated user can use configured models

    # File isolation
    - permit:
        actions: [create]
        resource: sql_record::openai_files::*
      description: Any authenticated user can upload files
    - permit:
        actions: [read, delete]
        resource: sql_record::openai_files::*
        when: user is owner
      description: Users can only read and delete their own files

    # Conversation isolation
    - permit:
        actions: [create]
        resource: sql_record::openai_conversations::*
      description: Any authenticated user can create conversations
    - permit:
        actions: [read, update, delete]
        resource: sql_record::openai_conversations::*
        when: user is owner
      description: Users can only access their own conversations

    # Conversation item isolation
    - permit:
        actions: [create]
        resource: sql_record::conversation_items::*
      description: Any authenticated user can create conversation items
    - permit:
        actions: [read, update, delete]
        resource: sql_record::conversation_items::*
        when: user is owner
      description: Users can only access items in their own conversations

    # Response isolation
    - permit:
        actions: [create]
        resource: sql_record::responses::*
      description: Any authenticated user can create responses
    - permit:
        actions: [read, update, delete]
        resource: sql_record::responses::*
        when: user is owner
      description: Users can only access their own responses
```
With this policy:
- Any user with a valid token can call inference endpoints and create new conversations, responses, and files.
- A user can only list, read, update, or delete their own conversations, responses, and files. Attempts to access another user's resources will be denied.
- The `user is owner` condition works by comparing the authenticated user's principal (from the JWT token) against the `owner_principal` stored on each record.
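Conceptually, the ownership check is a simple equality test between the request's principal and the principal stamped on the record at creation time. A minimal sketch; the field names here are illustrative, not the storage schema:

```python
def user_is_owner(record: dict, principal: str) -> bool:
    # Records are stamped with the creator's principal when they are created;
    # the "user is owner" condition compares it to the authenticated principal.
    return record.get("owner_principal") == principal

record = {"id": "conv_123", "owner_principal": "alice"}
```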
If no explicit `access_policy` is specified, Llama Stack applies a default policy where all users can access resources defined in config (like models) but dynamically created resources can only be accessed by the user that created them. However, for production multi-tenant deployments it is recommended to define an explicit policy like the example above.
## Route-Level Authorization
Route-level authorization provides infrastructure-level access control for Llama Stack API routes. This feature allows administrators to restrict which API routes can be accessed based on user attributes (when authentication is enabled) or to globally block/allow specific routes (without authentication).

### Key Features
- **API Surface Control**: Restrict the available API surface without authentication by blocking specific routes
- **Role-Based Route Access**: Control which users or teams can access specific API routes based on their roles, teams, or other attributes
- **Works With or Without Authentication**: Route authorization can be configured independently of authentication
- **Two-Level Access Control**: When both route-level and resource-level authorization are configured:
  - Route authorization is checked first (infrastructure level)
  - Resource authorization is checked second (data level)
  - Both checks must pass for access to be granted
### Configuration
Route authorization is configured in the server's authentication section under `route_policy`:
```yaml
server:
  auth:
    provider_config:  # Optional - only needed if using user-based conditions
      type: "oauth2_token"
      # ... authentication provider configuration ...
    # Route-level access control (infrastructure)
    route_policy:
    - permit:
        paths: "<path or list of paths>"
        when: "<optional user condition>"
      description: "<optional description>"
    - forbid:
        paths: "<path or list of paths>"
        unless: "<optional user condition>"
      description: "<optional description>"
```
### Path Matching
Route paths support four matching patterns:
- **Exact Match**: `/v1/chat/completions` matches only this specific path
- **Prefix Wildcard**: `/v1/files*` matches `/v1/files` and all paths starting with `/v1/files` (e.g., `/v1/files/upload`, `/v1/files/list`)
- **Full Wildcard**: `*` matches all routes
- **Regex Pattern**: `regex:^/v1/(chat|inference)/.*$` matches routes using regular expressions

Path normalization: trailing slashes are automatically removed during matching (e.g., `/v1/files/` is treated as `/v1/files`).

Multiple paths can be specified in a single rule using a list:
```yaml
paths: ["/v1/files*", "/v1/models*", "regex:^/v1/admin/.*$"]
```
Regex pattern notes:
- Use the `regex:` prefix to indicate a regex pattern
- Patterns use Python's `re` module syntax (see Python regex documentation)
- It's recommended to use anchors (`^` at start, `$` at end) for precise matching
- Without anchors, `re.match()` will anchor at the start but allow trailing characters
- Example: `regex:/v1/files` would match `/v1/filesXXX`, but `regex:^/v1/files$` would not
- Invalid regex patterns will log a warning and be skipped (other patterns in the list will still be checked)
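The four pattern kinds and the trailing-slash normalization can be sketched as a small matcher. This is an illustrative simplification of the behavior described above, not the server's actual implementation:

```python
import re

def path_matches(pattern: str, path: str) -> bool:
    path = path.rstrip("/") or "/"                 # normalize trailing slashes
    if pattern == "*":                             # full wildcard
        return True
    if pattern.startswith("regex:"):               # regex; re.match anchors at start only
        try:
            return re.match(pattern[len("regex:"):], path) is not None
        except re.error:
            return False                           # invalid patterns are skipped
    if pattern.endswith("*"):                      # prefix wildcard
        return path.startswith(pattern[:-1])
    return path == pattern                         # exact match
```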
### Rule Evaluation
- **Permit Rules**: Grant access to specified routes
- **Forbid Rules**: Deny access to specified routes
- Rules are evaluated in order of definition
- First matching rule determines access (permit or forbid)
- If no rule matches, access is denied by default
- If no `route_policy` is configured, all routes are allowed (backward compatible)
### User Conditions
When authentication is enabled, rules can include conditions based on user attributes:
- `when: user with <value> in <attribute>` - require that the user has the specific attribute value
- `unless: user with <value> in <attribute>` - apply the rule unless the user has the specific attribute value

Multiple conditions can be specified as a list (all must match):
```yaml
when:
- user with admin in roles
- user with platform in teams
```
Rules with user conditions are skipped when authentication is not configured. Rules without conditions apply regardless of authentication status.
### Examples

#### Example 1: Route Blocking Without Authentication
Block unused routes while allowing public access to others:
```yaml
server:
  auth:
    route_policy:
    - permit:
        paths: ["/v1/health", "/v1/version"]
      description: "Public monitoring routes"
    - permit:
        paths: "/v1/chat/completions"
      description: "Allow inference"
```
Result:
- `/v1/health` and `/v1/version` are accessible
- `/v1/chat/completions` is accessible
- All other routes are blocked (no matching permit rule)
#### Example 2: Role-Based Route Access
Different roles have access to different API surfaces:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      # ... provider configuration ...
    route_policy:
    - permit:
        paths: "/v1/chat/completions"
        when: user with developer in roles
      description: "Developers can use inference"
    - permit:
        paths: ["/v1/files*", "/v1/models*"]
        when: user with user in roles
      description: "Users can manage files and models"
    - permit:
        paths: "*"
        when: user with admin in roles
      description: "Admins have full access"
```
Result:
- Users with `developer` role can only access `/v1/chat/completions`
- Users with `user` role can access file and model routes
- Users with `admin` role can access all routes
- Users without matching roles are denied access
#### Example 3: Mixed Public and Protected Routes
Combine public routes with role-based access:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      # ... provider configuration ...
    route_policy:
    - permit:
        paths: ["/v1/health", "/v1/version"]
      description: "Public routes (no authentication required)"
    - permit:
        paths: "/v1/admin*"
        when: user with admin in roles
      description: "Admin routes require admin role"
    - permit:
        paths: "/v1/chat/completions"
        when: user with developer in roles
      description: "Inference requires developer role"
```
Result:
- Anyone can access `/v1/health` and `/v1/version` (no authentication needed)
- Only authenticated users with `admin` role can access `/v1/admin*` routes
- Only authenticated users with `developer` role can access `/v1/chat/completions`
- All other routes are denied
#### Example 4: Using Regex Patterns for Flexible Route Matching
Use regex patterns to match multiple related routes with a single rule:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      # ... provider configuration ...
    route_policy:
    - permit:
        paths: ["/v1/health", "/v1/version"]
      description: "Public health check routes"
    - permit:
        paths: regex:^/v1/(files|providers|models)(/.*)?$
        when: user with user in roles
      description: "Users can access files, providers, and models endpoints"
    - permit:
        paths: regex:^/v1/(chat|inference)/.*$
        when: user with developer in roles
      description: "Developers can access inference endpoints"
    - forbid:
        paths: "*"
      description: "Deny all other routes by default"
```
Result:
- Public health endpoints are accessible to everyone
- Users with `user` role can access `/v1/files`, `/v1/providers`, and `/v1/models` (and subpaths)
- Users with `developer` role can access `/v1/chat/*` and `/v1/inference/*` routes
- All other routes return 403 Forbidden
- The regex patterns use anchors (`^` and `$`) for precise matching
#### Example 5: Integration with Resource-Level Authorization
Route authorization works seamlessly with resource-level authorization:
```yaml
server:
  auth:
    provider_config:
      type: "oauth2_token"
      # ... provider configuration ...
    # Route-level: Controls which API routes are accessible
    route_policy:
    - permit:
        paths: "/v1/chat/completions"
        when: user with developer in roles
    # Resource-level: Controls which specific resources are accessible
    access_policy:
    - permit:
        actions: [read]
        resource: model::llama-3-2-3b
        when: user with developer in roles
```
For a request to succeed:
- User must pass route authorization (can access `/v1/chat/completions`)
- User must pass resource authorization (can read `model::llama-3-2-3b`)
- Both checks must pass
## Quota Configuration
The `quota` section allows you to enable server-side request throttling for both authenticated and anonymous clients. This is useful for preventing abuse, enforcing fairness across tenants, and controlling infrastructure costs without requiring client-side rate limiting or external proxies.

Quotas are disabled by default. When enabled, each client is tracked using either:
- Their authenticated `client_id` (derived from the Bearer token), or
- Their IP address (fallback for anonymous requests)

Quota state is stored in a SQLite-backed key-value store, and rate limits are applied within a configurable time window (currently only `day` is supported).
### Example
```yaml
server:
  quota:
    kvstore:
      type: sqlite
      db_path: ./quotas.db
    anonymous_max_requests: 100
    authenticated_max_requests: 1000
    period: day
```
Configuration Options
| Field | Description |
|---|---|
| `kvstore` | Required. Backend storage config for tracking request counts. |
| `kvstore.type` | Must be `"sqlite"` for now. Other backends may be supported in the future. |
| `kvstore.db_path` | File path to the SQLite database. |
| `anonymous_max_requests` | Max requests per period for unauthenticated clients. |
| `authenticated_max_requests` | Max requests per period for authenticated clients. |
| `period` | Time window for quota enforcement. Only `"day"` is supported. |
Note: if `authenticated_max_requests` is set but no authentication provider is configured, the server will fall back to applying `anonymous_max_requests` to all clients.
Example with Authentication Enabled
```yaml
server:
  port: 8321
  auth:
    provider_config:
      type: custom
      endpoint: https://auth.example.com/validate
  quota:
    kvstore:
      type: sqlite
      db_path: ./quotas.db
    anonymous_max_requests: 100
    authenticated_max_requests: 1000
    period: day
```
If a client exceeds their limit, the server responds with:
```
HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "error": {
    "message": "Quota exceeded"
  }
}
```
CORS Configuration
Configure CORS to allow web browsers to make requests from different domains. Disabled by default.
Quick Setup
For development, use the simple boolean flag:
```yaml
server:
  cors: true  # Auto-enables localhost with any port
```
This automatically allows `http://localhost:*` and `https://localhost:*` with secure defaults.
Custom Configuration
For specific origins and full control:
```yaml
server:
  cors:
    allow_origins: ["https://myapp.com", "https://staging.myapp.com"]
    allow_credentials: true
    allow_methods: ["GET", "POST", "PUT", "DELETE"]
    allow_headers: ["Content-Type", "Authorization"]
    allow_origin_regex: "https://.*\\.example\\.com"  # Optional regex pattern
    expose_headers: ["X-Total-Count"]
    max_age: 86400
```
Configuration Options
| Field | Description | Default |
|---|---|---|
| `allow_origins` | List of allowed origins. Use `["*"]` for any. | `["*"]` |
| `allow_origin_regex` | Regex pattern for allowed origins (optional). | `None` |
| `allow_methods` | Allowed HTTP methods. | `["*"]` |
| `allow_headers` | Allowed headers. | `["*"]` |
| `allow_credentials` | Allow credentials (cookies, auth headers). | `false` |
| `expose_headers` | Headers exposed to browser. | `[]` |
| `max_age` | Preflight cache time (seconds). | `600` |
Security Notes:
- `allow_credentials: true` requires explicit origins (no wildcards)
- `cors: true` enables localhost access only (secure for development)
- For public APIs, always specify exact allowed origins
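Origin matching combines the explicit `allow_origins` list with the optional `allow_origin_regex`. A sketch of that decision as a standalone function (the real server delegates to its CORS middleware, which may differ in detail):

```python
import re


def origin_allowed(origin, allow_origins, allow_origin_regex=None):
    """Return True if the Origin header matches the CORS configuration."""
    if "*" in allow_origins or origin in allow_origins:
        return True
    if allow_origin_regex is not None:
        # fullmatch: the regex must cover the entire origin string
        return re.fullmatch(allow_origin_regex, origin) is not None
    return False
```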
TLS Context Caching
When Llama Stack makes outbound HTTPS connections to external services (such as remote inference providers, authentication endpoints, or other APIs), it optimizes performance by creating and reusing SSL contexts. An SSL context (`ssl.SSLContext`) is created once during provider initialization and shared across all subsequent connections to that endpoint.
Why Caching Matters: Creating an SSL context is expensive—it involves reading certificate bundles from disk, parsing certificate chains, and initializing cryptographic structures. By caching these contexts, Llama Stack avoids this overhead on every API request.
Performance benefits:
- Reduced I/O overhead: Certificate bundle files are read from disk only once
- Lower CPU usage: Certificate parsing and validation happens once per provider
- Faster request initialization: Each new HTTP client reuses the cached SSL context
The caching happens automatically during provider initialization and requires no configuration.
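The pattern is essentially memoization keyed by the verification settings. A sketch of what such a cache looks like (illustrative only, not the actual Llama Stack internals):

```python
import functools
import ssl


@functools.lru_cache(maxsize=None)
def get_ssl_context(cafile=None):
    """Create an ssl.SSLContext once per CA bundle path and reuse it.

    Because the context lives for the process lifetime, changes to the
    bundle on disk are only picked up after a server restart.
    """
    return ssl.create_default_context(cafile=cafile)
```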
Important: Server Restart Required for TLS Bundle Changes
Because SSL contexts are created during provider initialization and cached for the lifetime of the server process, updates to system TLS certificate bundles will not be picked up automatically. If you update trusted CA certificates or modify the system certificate store, you must restart the Llama Stack server for the changes to take effect.
Common scenarios requiring a restart:
- System CA bundle updates (e.g., `/etc/ssl/certs/ca-certificates.crt` on Linux)
- Custom CA certificate additions
- Certificate bundle updates in containerized environments
- Changes to provider-specific `tls_verify` configuration
To apply TLS bundle updates:
1. Update the system certificate store or custom CA files
2. Restart the Llama Stack server to reload all SSL contexts
3. Verify outbound connections to external services work correctly
Extending to handle Safety
Configuring Safety can be a little involved, so it is instructive to go through an example.
The Safety API works with the associated Resource called a Shield. Providers can support various kinds of Shields. Good examples include the Llama Guard system-safety models, or Bedrock Guardrails.
To configure a Bedrock Shield, you would need to add:
- A Safety API provider instance with type `remote::bedrock`
- A Shield resource served by this provider.
```yaml
...
providers:
  safety:
    - provider_id: bedrock
      provider_type: remote::bedrock
      config:
        aws_access_key_id: ${env.AWS_ACCESS_KEY_ID}
        aws_secret_access_key: ${env.AWS_SECRET_ACCESS_KEY}
...
shields:
  - provider_id: bedrock
    params:
      guardrailVersion: ${env.GUARDRAIL_VERSION}
    provider_shield_id: ${env.GUARDRAIL_ID}
...
```
The situation is more involved if the Shield needs Inference of an associated model. This is the case with Llama Guard. In that case, you would need to add:
- A Safety API provider instance with type `inline::llama-guard`
- An Inference API provider instance for serving the model.
- A Model resource associated with this provider.
- A Shield resource served by the Safety provider.
The YAML configuration for this setup, assuming you were using vLLM as your inference server, would look like:
```yaml
...
providers:
  safety:
    - provider_id: llama-guard
      provider_type: inline::llama-guard
      config: {}
  inference:
    # this vLLM server serves the "normal" inference model (e.g., llama3.2:3b)
    - provider_id: vllm-0
      provider_type: remote::vllm
      config:
        url: ${env.VLLM_URL:=http://localhost:8000}
    # this vLLM server serves the llama-guard model (e.g., llama-guard:3b)
    - provider_id: vllm-1
      provider_type: remote::vllm
      config:
        url: ${env.SAFETY_VLLM_URL:=http://localhost:8001}
...
models:
  - metadata: {}
    model_id: ${env.INFERENCE_MODEL}
    provider_id: vllm-0
    provider_model_id: null
  - metadata: {}
    model_id: ${env.SAFETY_MODEL}
    provider_id: vllm-1
    provider_model_id: null
shields:
  - provider_id: llama-guard
    shield_id: ${env.SAFETY_MODEL}  # Llama Guard shields are identified by the corresponding LlamaGuard model
    provider_shield_id: null
...
```