
Multi-tenant mode allows Matcher to serve multiple clients with complete data isolation. Each tenant operates in its own database, message broker, and cache namespace — ensuring that one tenant’s data is never visible to another. This is essential for SaaS deployments, regulated environments, or any scenario where strict data boundaries between clients are required.

Overview


By default, Matcher runs in single-tenant mode: all requests share one database and one set of infrastructure connections. This is the simplest setup and works well for single-client deployments. When multi-tenant mode is enabled, each tenant receives:
  • Isolated database — a dedicated PostgreSQL database provisioned and managed by the Tenant Manager service
  • Isolated message broker — a dedicated RabbitMQ virtual host, plus X-Tenant-ID headers on every message as defense-in-depth
  • Isolated cache — all Redis keys are automatically prefixed with the tenant identifier
  • Isolated storage — S3 objects are prefixed with the tenant identifier
Tenant identity is determined from the JWT token in each API request. The tenant_id claim in the token tells Matcher which tenant the request belongs to, and the correct infrastructure connections are resolved automatically. The legacy tenantId claim is also accepted as a fallback for backward compatibility.
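As a sketch, the claim fallback described above can be expressed like this (the helper name is illustrative, not taken from Matcher's codebase; `claims` is the already-decoded JWT payload):

```python
def resolve_tenant_id(claims: dict) -> str:
    """Return the tenant identifier from decoded JWT claims.

    Prefers the standard tenant_id claim and falls back to the
    legacy tenantId claim for backward compatibility.
    """
    tenant = claims.get("tenant_id") or claims.get("tenantId")
    if not tenant:
        raise ValueError("JWT contains no tenant_id or tenantId claim")
    return tenant
```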

How to activate


Prerequisites

  • The Tenant Manager service must be running and reachable from Matcher’s network. You can verify this by calling its /health endpoint.
  • Matcher must be configured with authentication enabled (PLUGIN_AUTH_ENABLED=true), since tenant identity comes from the JWT token.

Configuration

Multi-tenant settings are configured through environment variables, just like other Matcher settings. Where you set them depends on your deployment method:
  • Docker Compose: add them to a .env file in the project root or directly in docker-compose.yml under the environment section
  • Kubernetes / Helm: add them to your Helm values file under the appropriate environment section
  • Standalone: set them in your shell environment or process manager configuration
Refer to the Installation guide for details on where environment files are located in your deployment.
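For example, with Docker Compose the variables from the sections below could be placed under the service's environment key (the service name matcher shown here is illustrative; match it to your compose file):

```yaml
services:
  matcher:
    environment:
      MULTI_TENANT_ENABLED: "true"
      MULTI_TENANT_URL: "https://tenant-manager.example.com"
      MULTI_TENANT_SERVICE_API_KEY: "${MULTI_TENANT_SERVICE_API_KEY}"
```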

Required variables

Add these to your environment configuration to enable multi-tenant mode:
# Enable multi-tenant mode
MULTI_TENANT_ENABLED=true

# URL of the Tenant Manager service
MULTI_TENANT_URL=https://tenant-manager.example.com

# API key for authenticating with the Tenant Manager
MULTI_TENANT_SERVICE_API_KEY=your-api-key

Optional tuning

You can adjust pool sizes, timeouts, and circuit breaker behavior:
# Maximum number of concurrent tenant connection pools (default: 100)
MULTI_TENANT_MAX_TENANT_POOLS=200

# Seconds before an idle tenant pool is evicted (default: 300)
MULTI_TENANT_IDLE_TIMEOUT_SEC=600

# Consecutive Tenant Manager failures before circuit breaker opens (default: 5)
MULTI_TENANT_CIRCUIT_BREAKER_THRESHOLD=3

# Seconds the circuit breaker stays open before retrying (default: 30)
MULTI_TENANT_CIRCUIT_BREAKER_TIMEOUT_SEC=60

# Environment label for environment-scoped tenant resolution (optional)
MULTI_TENANT_ENVIRONMENT=production
After updating the configuration, restart the Matcher service. On startup, you should see log messages confirming that multi-tenant infrastructure was initialized.

Tenant isolation


Database isolation

Each tenant gets its own PostgreSQL database. When a request arrives, Matcher resolves the tenant from the JWT and connects to that tenant’s dedicated database. If no connection pool exists yet for that tenant, one is created on demand using configuration from the Tenant Manager. Connection pools are bounded by MULTI_TENANT_MAX_TENANT_POOLS and evicted when idle beyond MULTI_TENANT_IDLE_TIMEOUT_SEC.
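The on-demand creation and least-recently-used eviction described above can be sketched as follows (a simplified stand-in, not Matcher's implementation; `make_pool` represents building a real PostgreSQL pool from Tenant Manager configuration, and idle-timeout eviction is omitted):

```python
from collections import OrderedDict

class TenantPoolCache:
    """LRU-bounded cache of per-tenant connection pools."""

    def __init__(self, max_pools, make_pool):
        self.max_pools = max_pools
        self.make_pool = make_pool
        self.pools = OrderedDict()  # tenant_id -> pool, oldest first

    def get(self, tenant_id):
        if tenant_id in self.pools:
            self.pools.move_to_end(tenant_id)      # mark as recently used
        else:
            if len(self.pools) >= self.max_pools:  # at the limit:
                self.pools.popitem(last=False)     # evict least recently used
            self.pools[tenant_id] = self.make_pool(tenant_id)
        return self.pools[tenant_id]
```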

Message broker isolation

RabbitMQ isolation uses two layers:
  • Virtual host per tenant — each tenant’s messages are routed through a dedicated vhost, preventing any cross-tenant message leakage
  • Tenant ID headers — every published message includes an X-Tenant-ID header as an additional safety layer for downstream consumers
No manual vhost creation is needed. The Tenant Manager provisions vhosts automatically.
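As a minimal illustration of the header layer, a publish wrapper might look like this (the `channel` object and its `basic_publish` signature are simplified stand-ins for a real AMQP client; vhost selection happens when the connection is opened and is not shown):

```python
def publish_with_tenant(channel, tenant_id, routing_key, body):
    """Publish a message tagged with the X-Tenant-ID header.

    The vhost already isolates tenants; the header is a second,
    defense-in-depth signal for downstream consumers.
    """
    channel.basic_publish(
        routing_key=routing_key,
        body=body,
        headers={"X-Tenant-ID": tenant_id},
    )
```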

Cache isolation

All Redis keys are automatically prefixed with the tenant identifier in the format tenant:{tenantID}:{key}. This applies to idempotency checks, deduplication, rate limiting, and credential caching.
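The key format can be illustrated with a one-line helper (the function name is illustrative):

```python
def tenant_key(tenant_id, key):
    """Build a tenant-scoped Redis key in the tenant:{tenantID}:{key} format."""
    return f"tenant:{tenant_id}:{key}"
```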

Storage isolation

Objects stored in S3-compatible storage are prefixed with {tenantID}/, ensuring each tenant’s exports and archives are separated at the storage level.

Connection pool management


Matcher maintains a pool of database connections for each active tenant. These settings control resource usage:
  • MULTI_TENANT_MAX_TENANT_POOLS (default: 100) — Maximum number of tenant pools maintained simultaneously. When the limit is reached, the least recently used pool is evicted.
  • MULTI_TENANT_IDLE_TIMEOUT_SEC (default: 300) — How long (in seconds) an unused tenant pool stays open before being cleaned up.

Capacity planning

Each tenant pool uses up to POSTGRES_MAX_OPEN_CONNS connections (default: 25). With 100 tenant pools, the worst-case total is 2,500 PostgreSQL connections. Size your database’s max_connections accordingly.
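The arithmetic can be checked directly:

```python
# Worst-case PostgreSQL connection count for capacity planning
max_tenant_pools = 100  # MULTI_TENANT_MAX_TENANT_POOLS
conns_per_pool = 25     # POSTGRES_MAX_OPEN_CONNS
worst_case = max_tenant_pools * conns_per_pool
print(worst_case)       # size max_connections above this, plus headroom
```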

Automatic health checks

Matcher periodically re-checks tenant configuration (every MULTI_TENANT_CONNECTIONS_CHECK_INTERVAL_SEC, default 30s) to detect credential rotation or pool setting changes. Updated settings are applied without requiring a restart.

Circuit breaker


If the Tenant Manager becomes unreachable, a circuit breaker protects Matcher from cascading failures.
  • MULTI_TENANT_CIRCUIT_BREAKER_THRESHOLD (default: 5) — How many consecutive failures before the circuit breaker activates.
  • MULTI_TENANT_CIRCUIT_BREAKER_TIMEOUT_SEC (default: 30) — How long (in seconds) the breaker stays active before allowing a retry.
While the circuit breaker is active, requests for new tenants will fail fast. However, existing tenant connections continue working normally — only new tenant onboarding is affected.
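A minimal sketch of this consecutive-failure pattern (illustrative only, not Matcher's implementation):

```python
import time

class CircuitBreaker:
    """Opens after N consecutive failures; retries after a timeout."""

    def __init__(self, threshold=5, timeout_sec=30.0):
        self.threshold = threshold
        self.timeout_sec = timeout_sec
        self.failures = 0
        self.opened_at = None

    def allow(self):
        """Return True if a Tenant Manager call may proceed."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.timeout_sec:
            self.opened_at = None   # half-open: allow one retry attempt
            self.failures = 0
            return True
        return False                # fail fast while the breaker is open

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```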

Tenant config caching


To reduce calls to the Tenant Manager, Matcher caches tenant configurations in memory.
  • MULTI_TENANT_CACHE_TTL_SEC (default: 120) — How long (in seconds) tenant config is cached before refreshing from the Tenant Manager.
On the first request for a tenant, Matcher fetches the configuration from the Tenant Manager API and caches it. Subsequent requests for the same tenant are served from cache until the TTL expires.
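The TTL behavior can be sketched as follows (`fetch` stands in for the Tenant Manager API call; names are illustrative):

```python
import time

class TenantConfigCache:
    """Serve tenant config from memory; refetch on miss or expiry."""

    def __init__(self, fetch, ttl_sec=120.0):
        self.fetch = fetch
        self.ttl_sec = ttl_sec
        self.entries = {}  # tenant_id -> (fetched_at, config)

    def get(self, tenant_id):
        cached = self.entries.get(tenant_id)
        if cached and time.monotonic() - cached[0] < self.ttl_sec:
            return cached[1]                        # fresh: serve from cache
        config = self.fetch(tenant_id)              # miss or expired: refetch
        self.entries[tenant_id] = (time.monotonic(), config)
        return config
```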

M2M credentials (Fetcher integration)


When multi-tenant mode is enabled and Matcher needs to call the Fetcher service, it authenticates using per-tenant machine-to-machine (M2M) credentials stored in AWS Secrets Manager.

How it works

  1. When Matcher calls Fetcher on behalf of a tenant, it looks up that tenant’s credentials
  2. Credentials are cached at two levels for performance: in-memory (30 seconds) and Redis (configurable TTL)
  3. If Fetcher returns a 401 Unauthorized, both caches are automatically cleared and fresh credentials are fetched
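The two-level lookup and 401 invalidation above can be sketched as follows (the in-memory 30-second TTL is omitted for brevity; `redis` here is any dict-like store and `load_secret` stands in for the AWS Secrets Manager call):

```python
class M2MCredentialCache:
    """Two-level credential cache: in-memory, then Redis, then Secrets Manager."""

    def __init__(self, redis, load_secret):
        self.memory = {}
        self.redis = redis
        self.load_secret = load_secret

    def get(self, tenant_id):
        if tenant_id in self.memory:            # level 1: in-memory
            return self.memory[tenant_id]
        cred = self.redis.get(tenant_id)        # level 2: Redis
        if cred is None:
            cred = self.load_secret(tenant_id)  # miss: Secrets Manager
            self.redis[tenant_id] = cred
        self.memory[tenant_id] = cred
        return cred

    def invalidate(self, tenant_id):
        """Clear both cache levels, as happens on a 401 from Fetcher."""
        self.memory.pop(tenant_id, None)
        self.redis.pop(tenant_id, None)
```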

Configuration

Add these to your environment configuration if Matcher needs to call Fetcher in multi-tenant mode:
# AWS region where Secrets Manager stores the credentials
AWS_REGION=us-east-1

# Target service name (default: fetcher)
M2M_TARGET_SERVICE=fetcher

# How long credentials are cached in Redis, in seconds (default: 300)
M2M_CREDENTIAL_CACHE_TTL_SEC=300
The service’s IAM role needs secretsmanager:GetSecretValue permission for the path tenants/{env}/{tenantOrgID}/matcher/m2m/fetcher/credentials.

All environment variables


Multi-tenant infrastructure

  • MULTI_TENANT_ENABLED (bool, default: false) — Master switch for multi-tenant mode.
  • MULTI_TENANT_URL (string, required when enabled) — Base URL of the Tenant Manager service.
  • MULTI_TENANT_SERVICE_API_KEY (string, required when enabled) — API key for authenticating with the Tenant Manager.
  • MULTI_TENANT_ENVIRONMENT (string, optional) — Environment label for tenant resolution.
  • MULTI_TENANT_MAX_TENANT_POOLS (int, default: 100) — Maximum concurrent tenant connection pools.
  • MULTI_TENANT_IDLE_TIMEOUT_SEC (int, default: 300) — Seconds before an idle tenant pool is evicted.
  • MULTI_TENANT_TIMEOUT (int, default: 30) — HTTP timeout (seconds) for Tenant Manager API calls.
  • MULTI_TENANT_CIRCUIT_BREAKER_THRESHOLD (int, default: 5) — Consecutive failures before the circuit breaker activates.
  • MULTI_TENANT_CIRCUIT_BREAKER_TIMEOUT_SEC (int, default: 30) — Seconds the circuit breaker stays active.
  • MULTI_TENANT_CACHE_TTL_SEC (int, default: 120) — Cache TTL (seconds) for tenant configurations.
  • MULTI_TENANT_CONNECTIONS_CHECK_INTERVAL_SEC (int, default: 30) — Interval (seconds) for connection pool health checks.
  • MULTI_TENANT_REDIS_HOST (string, optional) — Redis host for event-driven tenant discovery.
  • MULTI_TENANT_REDIS_PORT (string, default: 6379) — Redis port for tenant discovery.
  • MULTI_TENANT_REDIS_PASSWORD (string, optional) — Redis password for tenant discovery.
  • MULTI_TENANT_REDIS_TLS (bool, default: false) — Enable TLS for tenant discovery Redis.

Default tenant

  • DEFAULT_TENANT_ID (string, default: 11111111-1111-1111-1111-111111111111) — UUID of the default (fallback) tenant. Used in single-tenant mode.
  • DEFAULT_TENANT_SLUG (string, default: default) — Slug of the default tenant.

M2M credentials (Fetcher)

  • M2M_TARGET_SERVICE (string, default: fetcher) — Target service name for credential lookup.
  • M2M_CREDENTIAL_CACHE_TTL_SEC (int, default: 300) — Redis cache TTL (seconds) for M2M credentials.
  • AWS_REGION (string, required if M2M is used) — AWS region for Secrets Manager.

Verifying multi-tenant mode


After activating multi-tenant mode, verify that everything is working:
  1. Check startup logs. Look for messages confirming that multi-tenant infrastructure was initialized successfully.
  2. Test with a tenant JWT. Send an API request (for example, list contexts) using a JWT that contains a tenant_id claim. The request should succeed and return data for that specific tenant.
  3. Verify isolation. Make the same API call with JWTs for two different tenants. Confirm that data created under one tenant is not visible to the other.
  4. Check metrics (if telemetry is enabled). The tenant_connections_total metric should increment as new tenant pools are created.
  5. Verify M2M credentials (if using Fetcher). Check logs for successful credential retrieval when Matcher calls Fetcher for a tenant.

Deactivating multi-tenant mode


To return to single-tenant mode:
  1. Set MULTI_TENANT_ENABLED=false in your environment configuration (or remove the variable entirely).
  2. Restart the Matcher service.
The service will operate with a single shared database and the default tenant identity will apply to all requests.

Deployment considerations


  • Redis keys — When switching from single-tenant to multi-tenant mode, Redis keys change format. Old-format keys are treated as cache misses until their TTL expires. This is self-healing and typically resolves within 1–5 minutes.
  • Storage paths — Existing objects created before multi-tenant activation remain at their original paths. New objects get the tenant prefix automatically. If historical data must be accessible per tenant, a one-time migration script may be needed.
  • PostgreSQL sizing — Plan your PostgreSQL max_connections based on the maximum number of tenant pools multiplied by connections per pool. Use MULTI_TENANT_IDLE_TIMEOUT_SEC to reclaim pools for inactive tenants.
  • Tenant Manager availability — While the circuit breaker is active, new-tenant requests fail fast but existing tenant pools continue working. Plan for Tenant Manager high availability in production.

Next steps


Runtime configuration

Change Matcher settings at runtime without restarts.

Installation guide

Set up Matcher from scratch.

Security

Authentication, authorization, and data protection.

Discovery (Fetcher)

Automatic source discovery through Fetcher.