# Meta Reference GPU Distribution
The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations:
| API | Provider(s) |
|---|---|
| agents | inline::meta-reference |
| datasetio | remote::huggingface, inline::localfs |
| eval | inline::meta-reference |
| inference | inline::meta-reference |
| safety | inline::llama-guard |
| scoring | inline::basic, inline::llm-as-judge, inline::braintrust |
| tool_runtime | remote::brave-search, remote::tavily-search, inline::rag-runtime, remote::model-context-protocol |
| vector_io | inline::faiss, remote::chromadb, remote::pgvector |
Note that you need access to NVIDIA GPUs to run this distribution. This distribution is not compatible with CPU-only machines or machines with AMD GPUs.
## Environment Variables
The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
- `SAFETY_CHECKPOINT_DIR`: Directory containing the Llama-Guard model checkpoint (default: `null`)
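For example, you could export overrides in your shell before starting the server. This is only a sketch: the values shown are the documented defaults, and the commented checkpoint path is an assumed `~/.llama` layout.

```bash
# Override any of these before launching; the values shown are the defaults.
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B

# Optional: point the provider at a locally downloaded checkpoint instead.
# The path below is only an illustration of the ~/.llama layout.
# export INFERENCE_CHECKPOINT_DIR=~/.llama/checkpoints/Llama3.2-3B-Instruct
```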
## Prerequisite: Downloading Models
Please check that you have the Llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide for instructions on downloading the models using the Hugging Face CLI.
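As a rough sketch, downloading the default inference model with the Hugging Face CLI might look like the following. The target directory is an assumption for illustration only; follow the installation guide for the exact layout the Meta Reference provider expects.

```bash
# Sketch only: requires approved access to the gated meta-llama repo and a
# prior `huggingface-cli login`. The --local-dir path is an assumed layout;
# consult the installation guide for the expected ~/.llama structure.
pip install -U "huggingface_hub[cli]"
huggingface-cli login
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct \
  --local-dir ~/.llama/checkpoints/Llama3.2-3B-Instruct
```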
## Running the Distribution
You can run the distribution via Docker, which has a pre-built image, or via a local venv.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```
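Once the container is up, you can sanity-check the server from another terminal. The health endpoint below reflects recent Llama Stack releases and is an assumption; adjust the path if your version differs.

```bash
# Expect a small JSON payload (e.g. {"status": "OK"}) if the server is healthy.
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```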
If you are using Llama Stack Safety / Shield APIs, use:
```bash
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  -e SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```
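To confirm that the Llama-Guard shield was registered, you can list the shields. The endpoint path is an assumption based on the current Llama Stack API and may differ across versions.

```bash
# Should include an entry backed by meta-llama/Llama-Guard-3-1B.
curl http://localhost:$LLAMA_STACK_PORT/v1/shields
```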
### Via venv
Make sure you have the Llama Stack CLI available, then install the distribution's dependencies:

```bash
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
```

Then start the server:

```bash
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/run.yaml \
  --port 8321
```
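With the server running, you can verify that the inference model was registered. As above, the endpoint path is assumed from the current API and may vary by version.

```bash
# Should list meta-llama/Llama-3.2-3B-Instruct among the registered models.
curl http://localhost:8321/v1/models
```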
If you are using Llama Stack Safety / Shield APIs, use:
```bash
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 8321
```