remote::elasticsearch

Description

Elasticsearch is a vector database provider for Llama Stack. It lets you store and query embedding vectors directly in an Elasticsearch database, so you are not limited to keeping vectors in memory or in a separate vector service.

Features

Elasticsearch supports:

  • Embedding and metadata storage
  • Vector search
  • Full-text search
  • Fuzzy search
  • Hybrid search
  • Document storage
  • Metadata filtering
  • Inference service integration
  • Machine learning integrations
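To illustrate how vector and full-text search combine into a hybrid query, the sketch below builds an Elasticsearch 8.x search body that pairs an approximate kNN clause with a full-text `match` clause. The field names, index name, and vectors are hypothetical examples, not part of the Llama Stack provider's API.

```python
def build_hybrid_query(query_vector, query_text,
                       vector_field="embedding", text_field="content",
                       k=5, num_candidates=50):
    """Build an Elasticsearch 8.x hybrid search body.

    Combines approximate kNN vector search with a full-text `match`
    clause; Elasticsearch merges the scores from both. All field
    names here are hypothetical examples.
    """
    return {
        "knn": {
            "field": vector_field,
            "query_vector": query_vector,
            "k": k,
            "num_candidates": num_candidates,
        },
        "query": {"match": {text_field: query_text}},
    }

# The body could then be passed to the official Python client, e.g.:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("http://localhost:9200")
#   es.search(index="my-index", body=build_hybrid_query([0.1, 0.2], "llama"))
```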

Usage

To use Elasticsearch in your Llama Stack project, follow these steps:

  1. Install the necessary dependencies.
  2. Configure your Llama Stack project to use Elasticsearch.
  3. Start storing and querying vectors.
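The steps above can be sketched as a provider entry in your Llama Stack run configuration. This is a minimal sketch assuming the usual run.yaml layout; the provider_id is an arbitrary name you choose, and exact keys may vary between Llama Stack versions:

```yaml
providers:
  vector_io:
  - provider_id: elasticsearch   # hypothetical id; any unique name works
    provider_type: remote::elasticsearch
    config:
      elasticsearch_url: ${env.ELASTICSEARCH_URL:=localhost:9200}
      elasticsearch_api_key: ${env.ELASTICSEARCH_API_KEY:=}
```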

Installation

You can test Elasticsearch locally by running this command in your terminal:

curl -fsSL https://elastic.co/start-local | sh

Or you can start a free trial on Elastic Cloud. For more information on how to deploy Elasticsearch, see the official documentation.

Documentation

See the official Elasticsearch documentation for more details about Elasticsearch in general.

Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `elasticsearch_api_key` | `str \| None` | No | | The API key for the Elasticsearch instance |
| `elasticsearch_url` | `str \| None` | No | `localhost:9200` | The URL of the Elasticsearch instance |
| `persistence` | `KVStoreReference \| None` | No | | Config for KV store backend (SQLite only for now) |
| `persistence.namespace` | `str` | No | | Key prefix for KVStore backends |
| `persistence.backend` | `str` | No | | Name of backend from `storage.backends` |

Sample Configuration

elasticsearch_url: ${env.ELASTICSEARCH_URL:=localhost:9200}
elasticsearch_api_key: ${env.ELASTICSEARCH_API_KEY:=}
persistence:
  namespace: vector_io::elasticsearch
  backend: kv_default
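The `${env.VAR:=default}` placeholders above are resolved from the environment at startup, falling back to the value after `:=` when the variable is unset. As a rough sketch of that substitution (the function name is hypothetical; Llama Stack's actual resolver may differ):

```python
import os
import re

# Matches ${env.NAME:=default} — NAME is read from the environment,
# with the text after ':=' used as the fallback (it may be empty).
_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z0-9_]+):=([^}]*)\}")

def resolve_env(value: str) -> str:
    """Replace ${env.NAME:=default} placeholders with environment values."""
    def _sub(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        return os.environ.get(name, default)
    return _ENV_PATTERN.sub(_sub, value)

# With ELASTICSEARCH_URL unset, the default applies:
# resolve_env("${env.ELASTICSEARCH_URL:=localhost:9200}") -> "localhost:9200"
```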