Version: v0.4.0

Resources

Some of these APIs are associated with a set of Resources. Here is the mapping of APIs to resources:

  • Inference, Eval and Post Training are associated with Model resources.
  • Safety is associated with Shield resources.
  • Tool Runtime is associated with ToolGroup resources.
  • DatasetIO is associated with Dataset resources.
  • VectorIO is associated with VectorDB resources.
  • Scoring is associated with ScoringFunction resources.
  • Eval is associated with Model and Benchmark resources.

Furthermore, we allow these resources to be federated across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack.
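As a sketch of such a federated setup, the configuration below declares two inference providers side by side. The provider IDs and the environment variable name are illustrative choices, not required values:

```yaml
providers:
  inference:
  # Llama models served by Fireworks
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      api_key: ${env.FIREWORKS_API_KEY:=}
  # Other models served by AWS Bedrock
  - provider_id: bedrock
    provider_type: remote::bedrock
    config: {}
```

Clients call the same Inference API regardless of which provider ultimately serves a given model; the provider is selected per resource.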

Registering Resources

Given this architecture, it is necessary for the Stack to know which provider to use for a given resource. This means you need to explicitly register resources (including models) before you can use them with the associated APIs.
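For example, explicitly registering a model means binding a model ID to a specific provider in your configuration. The snippet below is a hedged sketch: the field names follow the registered_resources example later on this page, while the provider ID and both model identifiers are illustrative:

```yaml
registered_resources:
  models:
  - provider_id: fireworks                 # which provider serves this model
    model_id: meta-llama/Llama-3.1-8B-Instruct   # name clients use
    provider_model_id: accounts/fireworks/models/llama-v3p1-8b-instruct  # provider's own name
    model_type: llm
```

Once registered, the model ID can be passed to the Inference API and the Stack routes the request to the associated provider.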

Automatic vs Explicit Model Registration

Model registration behavior varies by provider:

Automatic Discovery

Some providers automatically discover and register models during initialization:

  • Remote providers (e.g., remote::openai, remote::vllm, remote::tgi) can automatically discover models from their API endpoints
  • Models are discovered via the provider's list_models() method during the initial refresh
  • For remote providers that use RemoteInferenceProviderConfig (most remote inference providers), you can enable periodic refresh by setting refresh_models: true in the provider's configuration:
```yaml
providers:
  inference:
  - provider_id: vllm-inference
    provider_type: remote::vllm
    config:
      url: ${env.VLLM_URL:=http://localhost:8000/v1}
      refresh_models: true  # Enable periodic model refresh
```

Explicit Registration Required

Some providers require explicit registration of models in registered_resources.models:

  • Inline providers like inline::sentence-transformers have a hardcoded list of default models
  • Custom models that aren't in the provider's default list must be explicitly registered
  • These providers accept model registrations but don't automatically discover all available models

Example: Custom Embedding Model

For the sentence-transformers provider, only the default model (nomic-ai/nomic-embed-text-v1.5) is automatically registered. To use a custom embedding model, you must register it explicitly:

```yaml
registered_resources:
  models:
  - provider_id: sentence-transformers
    model_id: granite-embedding-125m
    provider_model_id: ibm-granite/granite-embedding-125m-english
    model_type: embedding
    metadata:
      embedding_dimension: 768
```

See the Configuration Guide for more details on model registration.