Evaluation
Evaluation Concepts
The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.
We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications:
- `/datasetio` + `/datasets` API
- `/scoring` + `/scoring_functions` API
- `/eval` + `/benchmarks` API
This guide goes over the sets of APIs and the developer experience flow of using Llama Stack to run evaluations for different use cases. Check out our Colab notebook with working examples of evaluations here.
The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our Core Concepts guide for a better high-level understanding.
- DatasetIO: defines the interface with datasets and data loaders.
  - Associated with the `Dataset` resource.
- Scoring: evaluates outputs of the system.
  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics.
- Eval: generates outputs (via Inference or Agents) and performs scoring.
  - Associated with the `Benchmark` resource.
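The following sketch shows how these three resources fit together with the Python client: it registers a `Dataset` and then a `Benchmark` that points at that dataset and a list of `ScoringFunction` identifiers. This is a minimal sketch, assuming a Llama Stack server at http://localhost:8321; the dataset URI, benchmark id, and the `basic::equality` scoring function id are placeholders to replace with ones available on your stack.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Dataset resource (DatasetIO API): where the evaluation rows come from.
client.datasets.register(
    purpose="evaluation",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/evaluation_dataset",  # placeholder URI
    },
    dataset_id="my_eval_dataset",
)

# Benchmark resource (Eval API): ties the dataset to ScoringFunction resources.
client.benchmarks.register(
    benchmark_id="my_benchmark",            # placeholder benchmark id
    dataset_id="my_eval_dataset",
    scoring_functions=["basic::equality"],  # assumed built-in scoring function id
)
```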
Evaluation Providers
Llama Stack provides multiple evaluation providers:
- Meta Reference (`inline::meta-reference`) - Meta's reference implementation with multi-language support
- NVIDIA (`remote::nvidia`) - NVIDIA's evaluation platform integration
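To confirm which evaluation provider your running stack is configured with, you can inspect the providers through the client API. A minimal sketch, assuming a server on the default port and that your installed llama-stack-client exposes `providers.list()` with `api`, `provider_id`, and `provider_type` fields (verify against your client version):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Print the providers configured for the eval API, e.g. inline::meta-reference
# or remote::nvidia, depending on your distribution's run.yaml.
for provider in client.providers.list():
    if provider.api == "eval":
        print(provider.provider_id, provider.provider_type)
```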
Meta Reference
Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics.
Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| kvstore | RedisKVStoreConfig \| SqliteKVStoreConfig \| PostgresKVStoreConfig \| MongoDBKVStoreConfig | No | sqlite | Key-value store configuration |
Sample Configuration
```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
```
Features
- Multi-language evaluation support
- Comprehensive evaluation metrics
- Integration with various key-value stores (SQLite, Redis, PostgreSQL, MongoDB)
- Built-in support for popular benchmarks
NVIDIA
NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.
Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| evaluator_url | str | No | http://0.0.0.0:7331 | The URL for accessing the evaluator service |
Sample Configuration
```yaml
evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}
```
Features
- Integration with NVIDIA's evaluation platform
- Remote evaluation capabilities
- Scalable evaluation processing
Open-benchmark Eval
List of open-benchmarks Llama Stack supports
Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.
The list of open-benchmarks we currently support:
- MMLU-COT (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding
- GPQA-COT (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
- SimpleQA: Benchmark designed to assess a model's ability to answer short, fact-seeking questions.
- MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
You can follow this contributing guide to add more open-benchmarks to Llama Stack.
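Once a server with the open-benchmark template is running (see the next section), you can also list the pre-registered benchmarks and their ids programmatically. A minimal sketch, assuming the default port and that Benchmark objects expose `identifier`, `dataset_id`, and `scoring_functions` attributes as in recent client releases:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Each entry is a Benchmark resource; its identifier is what you pass to
# `llama-stack-client eval run-benchmark` below.
for benchmark in client.benchmarks.list():
    print(benchmark.identifier, benchmark.dataset_id, benchmark.scoring_functions)
```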
Run evaluation on open-benchmarks via CLI
We have built-in functionality to run the supported open-benchmarks using the llama-stack-client CLI.
Spin up Llama Stack server
Spin up the Llama Stack server with the 'open-benchmark' template:
```bash
llama stack run llama_stack/distributions/open-benchmark/run.yaml
```
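Once the server is up, it can help to confirm it is reachable and see which model identifiers are available to pass as the model id in the next step. A minimal sketch, assuming the default port 8321; adjust the base URL if your run.yaml differs:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Model identifiers you can pass as the model id to the eval CLI below.
for model in client.models.list():
    print(model.identifier)
```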
Run eval CLI
There are three necessary inputs to run a benchmark eval:
- `list of benchmark_ids`: The list of benchmark ids to run the evaluation on
- `model-id`: The model id to evaluate on
- `output_dir`: Path to store the evaluation results
```bash
llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
  --model_id <model id to evaluate on> \
  --output_dir <directory to store the evaluation results>
```
You can run `llama-stack-client eval run-benchmark help` to see the descriptions of all the flags that `eval run-benchmark` accepts.
In the output log, you can find the file path that contains your evaluation results. Open that file to see your aggregate evaluation results.
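If you prefer to drive this from Python, for example in a CI pipeline, you can shell out to the same command. A minimal sketch; the benchmark id and model id below are placeholders for identifiers actually registered on your stack (see the listing sketches above):

```python
import subprocess

# Run one pre-registered benchmark and write results under ./eval_results.
subprocess.run(
    [
        "llama-stack-client", "eval", "run-benchmark", "meta-reference-mmlu-cot",
        "--model_id", "meta-llama/Llama-3.3-70B-Instruct",
        "--output_dir", "./eval_results",
    ],
    check=True,
)
```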
Usage Example
Here's a basic example of using the evaluation API:
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a dataset for evaluation
client.datasets.register(
    purpose="evaluation",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/evaluation_dataset",
    },
    dataset_id="my_eval_dataset",
)

# Run evaluation
eval_result = client.eval.run_evaluation(
    dataset_id="my_eval_dataset",
    scoring_functions=["accuracy", "bleu"],
    model_id="my_model",
)

print(f"Evaluation completed: {eval_result}")
```
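If you already have generated outputs and only want to score them, the Scoring API can be called directly, skipping the generation step. A minimal sketch, reusing the client from the example above and assuming `client.scoring.score(...)` as exposed by recent llama-stack-client releases; the row fields and the `basic::equality` scoring function id are illustrative:

```python
# Score pre-generated rows directly; no inference is performed.
rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris",
        "expected_answer": "Paris",
    }
]

response = client.scoring.score(
    input_rows=rows,
    scoring_functions={"basic::equality": None},  # function id -> params (None = defaults)
)
print(response.results)
```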
Best Practices
- Choose appropriate providers: Use Meta Reference for comprehensive evaluation, NVIDIA for platform-specific needs
- Configure storage properly: Ensure your key-value store configuration matches your performance requirements
- Monitor evaluation progress: Large evaluations can take time, so implement proper monitoring
- Use appropriate scoring functions: Select scoring metrics that align with your evaluation goals
What's Next?
- Check out our Colab notebook with working examples of running benchmark evaluations here.
- Check out our Building Applications - Evaluation guide for more details on how to use the Evaluation APIs to evaluate your applications.
- Check out our Evaluation Reference for more details on the APIs.
- Explore the Scoring documentation for available scoring functions.