Before deploying Matcher, ensure that your environment meets the requirements described on this page. These prerequisites define the baseline for running reconciliation reliably in development and production environments.

System requirements


Infrastructure

| Component | Minimum | Recommended | Purpose |
|---|---|---|---|
| CPU | 2 cores | 4+ cores | Matching and scoring logic is CPU-intensive |
| Memory | 2 GB | 4+ GB | In-memory processing of transaction batches |
| Storage | 10 GB | 50+ GB | Persistent storage for transactions and audit logs |

Dependencies

Matcher depends on the following services:
  • PostgreSQL 15+: Primary data store for reconciliation contexts, transactions, matches, and audit logs.
  • Redis 7+: Used for caching, duplicate detection, distributed locking, and idempotency control.
  • RabbitMQ 3.12+: Message broker for asynchronous processing across bounded contexts.
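Before installation, it can help to confirm that each dependency is reachable from the host where Matcher will run. A minimal sketch using plain TCP checks; the hostnames and ports below are assumptions for a typical local setup and should be adjusted to your environment:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports for the required services (adjust to your environment).
services = {
    "PostgreSQL": ("localhost", 5432),
    "Redis": ("localhost", 6379),
    "RabbitMQ": ("localhost", 5672),
}

for name, (host, port) in services.items():
    status = "ok" if is_reachable(host, port) else "UNREACHABLE"
    print(f"{name:<12} {host}:{port} {status}")
```

A TCP check only proves connectivity, not valid credentials; authentication errors surface later, at service startup.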

Runtime

Matcher supports the following runtimes and tooling:
  • Go 1.24+ (only required when building from source)
  • Docker 24+ and Docker Compose 2.20+ for containerized deployments
  • Kubernetes 1.28+ for production-grade deployments using Helm

Optional: Midaz integration


Matcher can optionally integrate with Midaz Ledger to query ledger transactions directly. The integration is entirely optional: Matcher also works as a standalone product that reconciles data from any sources.

When to use Midaz integration

Use the Midaz integration if:
  • You’re using Midaz as your ledger system
  • You want to reconcile Midaz transactions against external sources
  • You want automatic transaction synchronization from Midaz

When Midaz is not needed

Matcher works independently when:
  • Reconciling between external systems (banks, ERPs, payment processors)
  • Using a different ledger system
  • Importing ledger data via CSV/JSON/XML files

Connection requirements

To enable Midaz integration, ensure that:
  1. A Midaz instance is running and reachable from Matcher’s network
  2. API access is configured with permissions to query transactions
  3. Shared authentication is enabled via lib-auth (same identity provider)

Configuration

Configure the Midaz connection through environment variables:
```shell
# Midaz API endpoint
MIDAZ_API_URL=https://midaz.example.com/api/v1

# Shared authentication (lib-auth)
AUTH_SERVICE_ADDRESS=https://auth.example.com
```
See the Midaz Integration guide for complete setup instructions.

Authentication


Matcher uses lib-auth for authentication and authorization, consistent with the rest of the Lerian ecosystem.

Authentication flow

  1. The client obtains a JWT from the identity provider
  2. The token is sent in the Authorization: Bearer <token> header
  3. Matcher validates the token via lib-auth
  4. Tenant identity and permissions are extracted from token claims
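As an illustration of step 4, the tenant and permission claims can be read from the JWT payload. This sketch only base64url-decodes the payload for inspection; it does not replace signature validation, which Matcher delegates to lib-auth. The claim names `tenant_id` and `permissions` are assumptions, not documented claim names:

```python
import base64
import json

def decode_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.

    Signature validation is handled by lib-auth; this helper only
    inspects the claims for illustration.
    """
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding stripped by JWT encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def encode_segment(obj: dict) -> str:
    """base64url-encode a JSON object, JWT-style (no padding)."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Build a sample (unsigned) token for demonstration only.
header = encode_segment({"alg": "none", "typ": "JWT"})
payload = encode_segment({
    "tenant_id": "acme-corp",             # hypothetical claim name
    "permissions": ["matching:job:run"],  # hypothetical claim name
})
token = f"{header}.{payload}."

claims = decode_claims(token)
print(claims["tenant_id"], claims["permissions"])
```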

Required permissions

Access to Matcher features is controlled through fine-grained permissions:
| Permission | Description |
|---|---|
| config:context:create | Create reconciliation contexts |
| config:context:read | View context configuration |
| config:rule:create | Create and update match rules |
| ingestion:import:create | Upload transaction files |
| matching:job:run | Execute matching jobs |
| exception:item:list | View exceptions |
| exception:item:resolve | Resolve exceptions |
| governance:report:read | Access reports and audit views |

Single-tenant mode

If authentication is disabled or no tenant identifier is present in the JWT, Matcher runs in single-tenant mode using a default tenant.
```shell
# Default tenant configuration (single-tenant mode)
DEFAULT_TENANT_ID=00000000-0000-0000-0000-000000000001
DEFAULT_TENANT_SLUG=default
```

Supported file formats


Matcher accepts transaction data in the following formats. Each format has specific structural requirements for successful ingestion.

CSV (comma-separated values)

Commonly used for bank statements and exports. Requirements:
  • Header row is required
  • UTF-8 encoding
  • Comma delimiter (configurable)
  • Quoted fields for values containing delimiters
Example:
```csv
transaction_id,amount,currency,date,reference
TXN-001,1000.00,USD,2024-01-15,Invoice payment
TXN-002,-250.50,USD,2024-01-16,Refund
```
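A quick pre-ingestion sanity check of a CSV export against these requirements might look like the following sketch; the required column names are an assumption based on the example above:

```python
import csv
import io

# Assumed minimum column set, taken from the example export above.
REQUIRED_COLUMNS = {"transaction_id", "amount", "currency", "date"}

def check_csv(text: str, delimiter: str = ",") -> list[dict]:
    """Parse CSV text, verifying the header row and amount values."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    rows = []
    for row in reader:
        float(row["amount"])  # raises ValueError on a malformed amount
        rows.append(row)
    return rows

sample = (
    "transaction_id,amount,currency,date,reference\n"
    "TXN-001,1000.00,USD,2024-01-15,Invoice payment\n"
    "TXN-002,-250.50,USD,2024-01-16,Refund\n"
)
rows = check_csv(sample)
print(len(rows), "rows parsed")
```

`csv.DictReader` also handles the quoted-field requirement, so values containing the delimiter parse correctly as long as they are quoted.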

JSON (javascript object notation)

Recommended for API-based integrations. Requirements:
  • Valid JSON array of transaction objects
  • UTF-8 encoding
  • Consistent field names across records
Example:
```json
[
  {
    "transaction_id": "TXN-001",
    "amount": 1000.0,
    "currency": "USD",
    "date": "2024-01-15",
    "reference": "Invoice payment"
  }
]
```
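The "consistent field names" requirement can be checked before upload. A minimal sketch that compares every record's keys against the first record's:

```python
import json

def check_json(text: str) -> list[dict]:
    """Parse a JSON array of transactions and verify every record
    uses the same field names as the first one."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("top-level value must be a JSON array")
    if records:
        expected = set(records[0])
        for i, record in enumerate(records):
            if set(record) != expected:
                raise ValueError(f"record {i} has inconsistent fields")
    return records

sample = """[
  {"transaction_id": "TXN-001", "amount": 1000.0, "currency": "USD",
   "date": "2024-01-15", "reference": "Invoice payment"}
]"""
records = check_json(sample)
print(len(records), "records parsed")
```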

XML (extensible markup language)

Common in enterprise and banking integrations. Requirements:
  • Single root element
  • UTF-8 encoding
  • Consistent element structure
Example:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<transactions>
  <transaction>
    <transaction_id>TXN-001</transaction_id>
    <amount>1000.00</amount>
    <currency>USD</currency>
    <date>2024-01-15</date>
    <reference>Invoice payment</reference>
  </transaction>
</transactions>
```
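As with the other formats, the structural requirements can be verified before ingestion. A minimal sketch using the standard library parser, with element names following the example above; `fromstring` itself enforces the single-root requirement:

```python
import xml.etree.ElementTree as ET

def check_xml(text: str) -> list[dict]:
    """Parse a <transactions> document and verify each <transaction>
    element has a consistent child-element structure."""
    root = ET.fromstring(text)  # a malformed or multi-root doc raises here
    records = []
    expected = None
    for tx in root.findall("transaction"):
        fields = {child.tag: child.text for child in tx}
        if expected is None:
            expected = set(fields)
        elif set(fields) != expected:
            raise ValueError("inconsistent element structure")
        records.append(fields)
    return records

sample = """<?xml version="1.0" encoding="UTF-8"?>
<transactions>
  <transaction>
    <transaction_id>TXN-001</transaction_id>
    <amount>1000.00</amount>
    <currency>USD</currency>
    <date>2024-01-15</date>
    <reference>Invoice payment</reference>
  </transaction>
</transactions>"""
records = check_xml(sample)
print(len(records), "transactions parsed")
```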

File size limits

| Limit | Default | Configurable via |
|---|---|---|
| Maximum file size | 100 MB | HTTP_BODY_LIMIT_BYTES |
| Maximum transactions per file | 100,000 | Context-level configuration |
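Clients can avoid rejected uploads by checking file size against the configured limit before sending. A sketch assuming HTTP_BODY_LIMIT_BYTES holds a plain byte count and that 100 MB means 100 × 1024 × 1024 bytes (both are assumptions):

```python
import os

# Assumed interpretation: the limit is a byte count; default 100 MB.
MAX_FILE_BYTES = int(os.environ.get("HTTP_BODY_LIMIT_BYTES", 100 * 1024 * 1024))

def check_upload_size(path: str) -> int:
    """Return the file size in bytes, raising if it exceeds the limit."""
    size = os.path.getsize(path)
    if size > MAX_FILE_BYTES:
        raise ValueError(f"{path} is {size} bytes; limit is {MAX_FILE_BYTES}")
    return size
```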

Network requirements


Inbound access

Matcher exposes a REST API that must be reachable by clients:
| Port | Protocol | Purpose |
|---|---|---|
| 8080 | HTTP | API server (default) |
| 8443 | HTTPS | API server with TLS |

Outbound access

Matcher must be able to reach the following services:
| Service | Purpose | Required |
|---|---|---|
| PostgreSQL | Data persistence | Yes |
| Redis | Caching and coordination | Yes |
| RabbitMQ | Messaging | Yes |
| Auth service | Token validation | If authentication is enabled |
| Midaz API | Ledger queries | Optional (only if using Midaz) |
| JIRA / ServiceNow | Exception routing | Optional |
| Custom webhooks | Event notifications | Optional |

TLS configuration

For production environments, configure TLS:
```shell
SERVER_TLS_CERT_FILE=/path/to/cert.pem
SERVER_TLS_KEY_FILE=/path/to/key.pem
```

Environment checklist


Before proceeding with installation, confirm that:
  • Infrastructure is ready: PostgreSQL, Redis, and RabbitMQ are running and accessible
  • Authentication is configured: Auth service is available, or auth is explicitly disabled
  • Network access is validated: Required inbound and outbound connectivity is in place
  • Credentials are available: Database credentials and API tokens are configured
  • Sample data is prepared: Transaction files are ready for testing (see Quick Start)
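Parts of this checklist can be automated with a small preflight script. In the sketch below, AUTH_SERVICE_ADDRESS and MIDAZ_API_URL come from this page, while DATABASE_URL, REDIS_URL, and RABBITMQ_URL are hypothetical names to be adjusted to your actual configuration:

```python
import os

# DATABASE_URL / REDIS_URL / RABBITMQ_URL are hypothetical variable
# names; replace them with the names your deployment actually uses.
REQUIRED_VARS = ["DATABASE_URL", "REDIS_URL", "RABBITMQ_URL"]
OPTIONAL_VARS = ["AUTH_SERVICE_ADDRESS", "MIDAZ_API_URL"]

def preflight(env: dict) -> list[str]:
    """Return a list of problems; an empty list means the check passes."""
    problems = [
        f"missing required variable: {v}"
        for v in REQUIRED_VARS
        if not env.get(v)
    ]
    for v in OPTIONAL_VARS:
        if not env.get(v):
            print(f"note: {v} not set (optional)")
    return problems

for issue in preflight(dict(os.environ)):
    print(issue)
```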

Next steps