# Configuration

Firn is configured entirely through environment variables. Only `FIRNFLOW_S3_BUCKET` is required; everything else has a sensible default.
## Core settings
| Variable | Default | Description |
|---|---|---|
| `FIRNFLOW_S3_BUCKET` | required | S3 bucket that roots every namespace. Each namespace stores its data under `s3://bucket/namespace/`. |
| `FIRNFLOW_BIND` | `0.0.0.0:3000` | Address and port for the HTTP listener. Use `127.0.0.1:3000` to restrict to localhost. |
| `RUST_LOG` | `info` | Log level filter. Supports tracing directives like `firnflow_api=debug,firnflow_core=trace`. |
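Putting the core settings together, a minimal local run might look like the following (the binary name `firnflow` is an assumption; substitute however you start the service):

```shell
# Minimal local launch: required bucket, localhost-only listener, verbose API logs
FIRNFLOW_S3_BUCKET=my-bucket \
FIRNFLOW_BIND=127.0.0.1:3000 \
RUST_LOG=firnflow_api=debug \
firnflow
```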
## S3 storage backend

These variables control which S3-compatible backend Firn connects to. For real AWS S3, you typically only need `FIRNFLOW_S3_BUCKET` and `FIRNFLOW_S3_REGION`. Credentials come from the standard AWS chain (instance profile, environment, `~/.aws/credentials`).
| Variable | Default | Description |
|---|---|---|
| `FIRNFLOW_S3_ENDPOINT` | unset (use AWS) | Custom S3 endpoint URL. Set this for MinIO, R2, GCS, or any S3-compatible backend. When set, path-style addressing and plain HTTP are automatically enabled. |
| `FIRNFLOW_S3_ACCESS_KEY` | unset | S3 access key ID. Falls back to the standard AWS credential chain if not set. |
| `FIRNFLOW_S3_SECRET_KEY` | unset | S3 secret access key. Falls back to the standard AWS credential chain if not set. |
| `FIRNFLOW_S3_REGION` | `us-east-1` | AWS region for the bucket. |
### AWS S3

For production AWS deployments, let the standard credential chain handle authentication (instance profiles, ECS task roles, or `~/.aws/credentials`).
```shell
FIRNFLOW_S3_BUCKET=my-firn-bucket
FIRNFLOW_S3_REGION=eu-west-1
```
### MinIO
```shell
FIRNFLOW_S3_BUCKET=firnflow
FIRNFLOW_S3_ENDPOINT=http://localhost:9000
FIRNFLOW_S3_ACCESS_KEY=minioadmin
FIRNFLOW_S3_SECRET_KEY=minioadmin
```
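The bucket must exist in MinIO before Firn can use it. One way to create it is with the MinIO client `mc`; the alias name `local` here is an arbitrary choice, and the credentials match the default `minioadmin` pair above:

```shell
# Register the local MinIO server under the alias "local", then create the bucket
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/firnflow
```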
### Cloudflare R2

R2 provides an S3-compatible API. Use your R2 endpoint and API tokens.
```shell
FIRNFLOW_S3_BUCKET=firn-r2
FIRNFLOW_S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
FIRNFLOW_S3_ACCESS_KEY=<r2-access-key>
FIRNFLOW_S3_SECRET_KEY=<r2-secret-key>
FIRNFLOW_S3_REGION=auto
```
## Cache settings
The foyer hybrid cache has two tiers: a RAM tier for the hottest data and an NVMe tier for overflow. Both tiers are shared across all namespaces.
| Variable | Default | Description |
|---|---|---|
| `FIRNFLOW_CACHE_MEMORY_BYTES` | `67108864` (64 MB) | RAM tier capacity in bytes. This is the L1 cache; hits here are sub-microsecond. |
| `FIRNFLOW_CACHE_NVME_PATH` | `/tmp/firnflow-cache` | Directory for the foyer NVMe-tier block file. In production, point this at a fast local SSD. The Docker image defaults to `/var/lib/firnflow/cache`. |
| `FIRNFLOW_CACHE_NVME_BYTES` | `268435456` (256 MB) | NVMe tier capacity in bytes. This is the L2 cache; hits here avoid S3 but are slower than RAM. |
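Raw byte counts are easy to mistype. Shell arithmetic makes the intended sizes explicit, for example when exporting the cache variables from the table above:

```shell
# 256 MB RAM tier and 10 GB NVMe tier, spelled out as powers of 1024
export FIRNFLOW_CACHE_MEMORY_BYTES=$(( 256 * 1024 * 1024 ))
export FIRNFLOW_CACHE_NVME_BYTES=$(( 10 * 1024 * 1024 * 1024 ))
echo "$FIRNFLOW_CACHE_MEMORY_BYTES $FIRNFLOW_CACHE_NVME_BYTES"
```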
### Cache sizing guidance
The right cache size depends on your working set. A cached result for a 10-result query with 1536-dim vectors is roughly 62 KB. Some rules of thumb:
| Working set | RAM tier | NVMe tier |
|---|---|---|
| Development / testing | 64 MB | 256 MB |
| Small production (hundreds of cached queries) | 256 MB | 1 GB |
| Large production (thousands of cached queries) | 1 GB | 10 GB |
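The per-query estimate above is easy to sanity-check: 10 results times 1536 dimensions times 4 bytes per component is about 60 KB of raw vector data, with the remaining ~2 KB being metadata overhead (the 4-byte f32 encoding is an assumption):

```shell
# Raw vector bytes for one cached 10-result query with 1536-dim f32 vectors
echo $(( 10 * 1536 * 4 ))   # 61440 bytes, i.e. 60 KB before metadata
```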
## Docker Compose override

To customise settings for local development, create a `docker-compose.override.yml`:
```yaml
# docker-compose.override.yml
services:
  firnflow:
    environment:
      FIRNFLOW_CACHE_MEMORY_BYTES: "268435456"  # 256 MB RAM cache
      FIRNFLOW_CACHE_NVME_BYTES: "1073741824"   # 1 GB NVMe cache
      RUST_LOG: "firnflow_core=debug,firnflow_api=debug"
```
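Compose merges the override file with the base `docker-compose.yml` automatically. To confirm the merged values before starting anything, you can render the effective configuration:

```shell
# Print the merged Compose config and show only the Firn cache settings
docker compose config | grep FIRNFLOW_CACHE
```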
## Full example
A complete production configuration against AWS S3:
```shell
# Production environment
FIRNFLOW_S3_BUCKET=prod-firn-vectors
FIRNFLOW_S3_REGION=eu-west-1
FIRNFLOW_BIND=0.0.0.0:3000
FIRNFLOW_CACHE_MEMORY_BYTES=268435456
FIRNFLOW_CACHE_NVME_PATH=/var/lib/firnflow/cache
FIRNFLOW_CACHE_NVME_BYTES=10737418240
RUST_LOG=info
```
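If you keep these settings in a flat env file, one common way to load them before launching is the shell's `set -a` auto-export mode (the filename `firn.env` is an assumption):

```shell
# Export every variable assigned while auto-export mode is on
set -a
. ./firn.env
set +a
```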