Core Concepts

Given Llama Stack’s service-oriented philosophy, a few concepts and workflows arise which may not feel completely natural in the LLM landscape, especially if you are coming from a background in other frameworks.

APIs

A Llama Stack API is described as a collection of REST endpoints. We currently support the following APIs:

  • Inference: run inference with an LLM

  • Safety: apply safety policies to outputs at a system (not just model) level

  • Agents: run multi-step agentic workflows with LLMs, including tool usage, memory (RAG), etc.

  • DatasetIO: interface with datasets and data loaders

  • Scoring: evaluate outputs of the system

  • Eval: generate outputs (via Inference or Agents) and perform scoring

  • VectorIO: perform operations on vector stores, such as adding documents, searching, and deleting documents

  • Telemetry: collect telemetry data from the system
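
Each of these APIs is exposed both as raw REST endpoints and through the llama-stack-client SDKs. As a minimal sketch, here is how the Inference API can be called from Python against a locally running server; the model identifier is a placeholder and must match a model registered with your stack, and method signatures may differ slightly between client versions.

# Sketch: calling the Inference API via the llama-stack-client Python SDK.
# Assumes a Llama Stack server on localhost:8321 and that the model below
# (a placeholder) has already been registered.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.completion_message.content)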

We are working on adding a few more APIs to complete the application lifecycle. These will include:

  • Batch Inference: run inference on a dataset of inputs

  • Batch Agents: run agents on a dataset of inputs

  • Post Training: fine-tune a model

  • Synthetic Data Generation: generate synthetic data for model development

API Providers

The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples of these include:

  • LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, etc.),

  • Vector databases (e.g., ChromaDB, Weaviate, Qdrant, Milvus, FAISS, PGVector, etc.),

  • Safety providers (e.g., Meta’s Llama Guard, AWS Bedrock Guardrails, etc.)

Providers come in two flavors:

  • Remote: the provider runs as a separate service external to the Llama Stack codebase. Llama Stack contains a small amount of adapter code.

  • Inline: the provider is fully specified and implemented within the Llama Stack codebase. It may be a simple wrapper around an existing library, or a full-fledged implementation within Llama Stack.

Most importantly, Llama Stack always strives to provide at least one fully inline provider for each API so you can iterate on a fully featured environment locally.
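
Once a server is running, you can inspect which providers back each API. The sketch below uses the Python client; the provider_type strings in the comment (e.g., remote::fireworks, inline::faiss) illustrate the remote/inline distinction, and the exact set depends on your configuration.

# Sketch: listing the providers configured on a running Llama Stack server.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Each entry names the API it serves and a provider_type such as
# "remote::fireworks" (remote) or "inline::faiss" (inline).
for provider in client.providers.list():
    print(provider.api, provider.provider_id, provider.provider_type)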

Resources

Some of these APIs are associated with a set of Resources. Here is the mapping of APIs to resources:

  • Inference, Eval and Post Training are associated with Model resources.

  • Safety is associated with Shield resources.

  • Tool Runtime is associated with ToolGroup resources.

  • DatasetIO is associated with Dataset resources.

  • VectorIO is associated with VectorDB resources.

  • Scoring is associated with ScoringFunction resources.

  • Eval is associated with Model and Benchmark resources.

Furthermore, we allow these resources to be federated across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack.

Registering Resources

Given this architecture, it is necessary for the Stack to know which provider to use for a given resource. This means you need to explicitly register resources (including models) before you can use them with the associated APIs.
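
As a sketch, the snippet below registers two models against two different inference providers using the Python client; the model and provider identifiers are placeholders and must correspond to providers actually configured in your distribution.

# Sketch: explicitly registering models before use. All identifiers are
# placeholders; substitute values from your own configuration.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The same Inference API can federate across providers: one model served by
# Fireworks, another by AWS Bedrock.
client.models.register(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    provider_id="fireworks",
)
client.models.register(
    model_id="meta-llama/Llama-3.1-70B-Instruct",
    provider_id="bedrock",
)

print([m.identifier for m in client.models.list()])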

Distributions

While there is a lot of flexibility to mix-and-match providers, users often work with a specific set of providers (hardware support, contractual obligations, etc.). We therefore need a convenient shorthand for such collections. We call this shorthand a Llama Stack Distribution or a Distro. One can think of a Distro as a specific, pre-packaged version of the Llama Stack. Here are some examples:

Remotely Hosted Distro: These are the simplest to consume from a user perspective. You simply obtain an API key for one of these providers, point your client at a URL, and have all Llama Stack APIs working out of the box. Currently, Fireworks and Together provide such easy-to-consume Llama Stack distributions.
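
In practice, consuming a remotely hosted Distro amounts to constructing the client with the hosted endpoint and your API key. The URL and environment variable below are placeholders, and passing the key via an api_key argument is an assumption about your client version; some setups pass provider credentials differently.

import os
from llama_stack_client import LlamaStackClient

# Sketch only: the endpoint URL and environment variable are placeholders,
# and the api_key argument is an assumption about how your client version
# accepts credentials.
client = LlamaStackClient(
    base_url="https://llama-stack.example.com",
    api_key=os.environ["LLAMA_STACK_API_KEY"],
)

print([m.identifier for m in client.models.list()])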

Locally Hosted Distro: You may want to run Llama Stack on your own hardware. Typically, though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a vLLM or NVIDIA NIM instance. If you “just” have a regular desktop machine, you can use Ollama for inference. To provide convenient, quick access to these options, we provide a number of pre-configured locally-hosted Distros.

On-device Distro: To run Llama Stack directly on an edge device (mobile phone or tablet), we provide Distros for iOS and Android.

Evaluation Concepts

The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.

We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications.

  • /datasetio + /datasets API

  • /scoring + /scoring_functions API

  • /eval + /benchmarks API

This guide covers these APIs and the developer-experience flow of using Llama Stack to run evaluations for different use cases. Check out our Colab notebook for working examples with evaluations here.

The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our Core Concepts guide for a better high-level understanding; a minimal Python sketch of the flow follows the list below.

  • DatasetIO: defines the interface to datasets and data loaders.

    • Associated with Dataset resource.

  • Scoring: evaluate outputs of the system.

    • Associated with ScoringFunction resource. We provide a suite of out-of-the-box scoring functions and the ability to add custom evaluators. These scoring functions are the core building blocks for defining an evaluation task and producing evaluation metrics.

  • Eval: generate outputs (via Inference or Agents) and perform scoring.

    • Associated with Benchmark resource.
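
The sketch below shows the shape of this flow with the Python client: scoring rows directly, then running a benchmark-style evaluation that generates outputs and scores them. All identifiers are placeholders, and parameter names have shifted across releases, so treat it as illustrative rather than a drop-in script.

# Sketch of the evaluation flow; identifiers and config fields are placeholders.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

rows = [
    {"input_query": "What is 2 + 2?", "generated_answer": "4", "expected_answer": "4"},
]

# 1. Score rows directly with a built-in scoring function.
scores = client.scoring.score(
    input_rows=rows,
    scoring_functions={"basic::subset_of": None},
)

# 2. Run a benchmark end to end: generate outputs via Inference, then score them.
result = client.eval.evaluate_rows(
    benchmark_id="meta-reference-mmlu",                   # placeholder benchmark id
    input_rows=rows,
    scoring_functions=["basic::subset_of"],
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
            "sampling_params": {"strategy": {"type": "greedy"}},
        },
    },
)
print(result.scores)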

Open-benchmark Eval

List of open-benchmarks Llama Stack supports

Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.

The list of open-benchmarks we currently support:

  • MMLU-COT (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model’s academic and professional understanding.

  • GPQA-COT (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.

  • SimpleQA: Benchmark designed to assess a model’s ability to answer short, fact-seeking questions.

  • MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.

You can follow this contributing guide to add more open-benchmarks to Llama Stack.

Run evaluation on open-benchmarks via CLI

We have built-in functionality to run the supported open-benchmarks using the llama-stack-client CLI.

Spin up Llama Stack server

Spin up the Llama Stack server with the ‘open-benchmark’ template:

llama stack run llama_stack/templates/open-benchmark/run.yaml

Run eval CLI

There are three required inputs to run a benchmark eval:

  • list of benchmark_ids: The list of benchmark ids to run evaluation on

  • model_id: The model id to evaluate on

  • output_dir: Path to store the evaluation results

llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
--model_id <model id to evaluate on> \
--output_dir <directory to store the evaluation results>

You can run

llama-stack-client eval run-benchmark help

to see the description of all the flags that eval run-benchmark supports.

In the output log, you can find the file path that contains your evaluation results. Open that file to see your aggregate evaluation results.

What’s Next?

  • Check out our Colab notebook with working examples of running benchmark evaluations here.

  • Check out our Building Applications - Evaluation guide for more details on how to use the Evaluation APIs to evaluate your applications.

  • Check out our Evaluation Reference for more details on the APIs.