
# Meta Reference GPU Distribution


The llamastack/distribution-meta-reference-gpu distribution consists of the following provider configurations:

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `inline::meta-reference` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

Note that you need access to NVIDIA GPUs to run this distribution. It is not compatible with CPU-only machines or machines with AMD GPUs.
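If you're unsure whether the host is set up correctly, here is a quick sanity check, assuming the NVIDIA driver is installed (plus the NVIDIA Container Toolkit if you plan to use Docker):

```bash
# Should list your NVIDIA GPUs; if this fails, the distribution will not start
nvidia-smi

# For Docker, confirm that containers can see the GPUs as well
docker run --rm --gpus all ubuntu nvidia-smi
```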

## Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
- `SAFETY_CHECKPOINT_DIR`: Directory containing the Llama-Guard model checkpoint (default: `null`)
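For example, you can override the defaults in your shell before launching the server (the Docker commands below pass the same variables with `-e` instead):

```bash
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```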

## Prerequisite: Downloading Models

Please check that you have Llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide for instructions on downloading the models using the Hugging Face CLI.
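As a rough sketch, the default inference and safety models could be fetched with the Hugging Face CLI as shown below; the exact directory layout expected under `~/.llama` is an assumption here, so treat the installation guide as authoritative:

```bash
# Log in first so the gated meta-llama repositories are accessible
huggingface-cli login

# Download the default inference and safety models (target directory layout is an assumption)
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --local-dir ~/.llama/checkpoints/Llama-3.2-3B-Instruct
huggingface-cli download meta-llama/Llama-Guard-3-1B --local-dir ~/.llama/checkpoints/Llama-Guard-3-1B
```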


## Running the Distribution

You can run this distribution either in a Python virtual environment (venv) or via Docker, which has a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  -e SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

### Via Docker with Custom Run Configuration

You can also run the Docker container with a custom run configuration file by mounting it into the container:

```bash
# Set the path to your custom config.yaml file
CUSTOM_RUN_CONFIG=/path/to/your/custom-config.yaml
LLAMA_STACK_PORT=8321

docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -v $CUSTOM_RUN_CONFIG:/app/custom-config.yaml \
  -e RUN_CONFIG_PATH=/app/custom-config.yaml \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

Note: The run configuration must be mounted into the container before it can be used. The -v flag mounts your local file into the container, and the RUN_CONFIG_PATH environment variable tells the entrypoint script which configuration to use.
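To confirm the file is mounted where you expect before starting the server, you can list the mount target inside the image (a quick sanity check; `--entrypoint ls` simply replaces the default startup command):

```bash
docker run --rm \
  -v $CUSTOM_RUN_CONFIG:/app/custom-config.yaml \
  --entrypoint ls \
  llamastack/distribution-meta-reference-gpu \
  -l /app
```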

Available run configurations for this distribution:

- `config.yaml`
- `run-with-safety.yaml`

### Via venv

Make sure you have the Llama Stack CLI available.
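If the CLI isn't installed yet, one way to get it (assuming the `llama-stack` package from PyPI, which provides the `llama` command) is:

```bash
uv pip install llama-stack
llama stack --help
```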

```bash
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install

INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/config.yaml \
  --port 8321
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 8321
```
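Once the server is running (with either method), you can verify that it responds, for example by listing the registered models over HTTP. The `/v1/models` route is an assumption here and the response shape may vary between versions:

```bash
# Query the local server on the default port; expect a JSON listing of registered models
curl -s http://localhost:8321/v1/models
```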