
# remote::vllm

## Description

Remote vLLM inference provider for connecting to vLLM servers.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str \| None` | No | | The URL for the vLLM model serving endpoint |
| `max_tokens` | `int` | No | `4096` | Maximum number of tokens to generate. |
| `api_token` | `str \| None` | No | `fake` | The API token |
| `tls_verify` | `bool \| str` | No | `True` | Whether to verify TLS certificates. Can be a boolean or a path to a CA certificate file. |
| `refresh_models` | `bool` | No | `False` | Whether to refresh models periodically |
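Because `tls_verify` accepts either a boolean or a CA certificate path, string values from the environment need interpretation before being handed to an HTTP client. The sketch below illustrates one way to do that; the `parse_tls_verify` helper is hypothetical and not part of the provider:

```python
def parse_tls_verify(value):
    """Interpret a tls_verify setting: booleans pass through,
    'true'/'false'-style strings become booleans, and any other
    string is treated as a path to a CA certificate file."""
    if isinstance(value, bool):
        return value
    lowered = value.strip().lower()
    if lowered in ("true", "1", "yes"):
        return True
    if lowered in ("false", "0", "no"):
        return False
    return value  # assume a CA bundle path, e.g. /etc/ssl/certs/ca.pem

print(parse_tls_verify("true"))            # True
print(parse_tls_verify("/etc/ssl/ca.pem"))
```

The returned value maps naturally onto the `verify` argument accepted by common Python HTTP clients, which likewise take either a boolean or a CA bundle path.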

## Sample Configuration

```yaml
url: ${env.VLLM_URL:=}
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
api_token: ${env.VLLM_API_TOKEN:=fake}
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
```
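Each `${env.VAR:=default}` reference above is resolved from the environment at startup, with the value after `:=` used when the variable is unset. A rough sketch of that substitution rule (the `resolve_env_refs` helper is hypothetical, shown only to illustrate the syntax):

```python
import os
import re

_ENV_REF = re.compile(r"\$\{env\.(\w+):=([^}]*)\}")

def resolve_env_refs(text, env=None):
    """Replace ${env.NAME:=default} with the variable's value,
    falling back to the default when NAME is unset."""
    env = os.environ if env is None else env
    return _ENV_REF.sub(lambda m: env.get(m.group(1), m.group(2)), text)

cfg = "url: ${env.VLLM_URL:=}\nmax_tokens: ${env.VLLM_MAX_TOKENS:=4096}"
print(resolve_env_refs(cfg, env={"VLLM_URL": "http://localhost:8000/v1"}))
```

With `VLLM_URL` set as above and `VLLM_MAX_TOKENS` unset, the snippet yields `url: http://localhost:8000/v1` and the default `max_tokens: 4096`.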