This guide covers how to import transaction data from external sources into Matcher for reconciliation.

Supported formats


Matcher accepts transaction files in three formats:
  • CSV: Comma-separated values with headers. Most common for bank exports.
  • JSON: Array of transaction objects. Best for API integrations.
  • XML: Structured elements. Common for enterprise systems.

File structure requirements


Each file must contain transaction records with fields that can be mapped to Matcher’s internal schema.

Required fields

Every transaction must have these fields (or mappable equivalents):
Field           Type            Description
transaction_id  String          Unique identifier within the source
amount          Decimal         Transaction amount (positive or negative)
currency        String          ISO 4217 currency code
date            Date/DateTime   Transaction date

Optional fields

Field         Type     Description
reference     String   External reference or description
counterparty  String   Other party in the transaction
type          String   Transaction type (credit, debit, etc.)
metadata      Object   Additional custom fields
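Before uploading, you can confirm locally that a file's header row contains every required field. A minimal sketch for CSV (the function name is illustrative; it assumes a comma delimiter and literal, unmapped column names):

```shell
# Check that a CSV's header row contains every required field.
check_required_headers() {
  header=$(head -1 "$1")
  for field in transaction_id amount currency date; do
    case ",$header," in
      *",$field,"*) ;;  # field found in the header row
      *) echo "missing required field: $field"; return 1 ;;
    esac
  done
  echo "all required fields present"
}
```

Usage: `check_required_headers bank_statement_january.csv`.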

Format examples


CSV

CSV Requirements:
  • First row must be column headers
  • UTF-8 encoding
  • Comma delimiter (configurable)
  • Quote fields containing commas or newlines
Code example
 transaction_id,amount,currency,date,reference,type
 BANK-2024-001,1500.00,USD,2024-01-15,Invoice #1234,credit
 BANK-2024-002,-250.00,USD,2024-01-15,Service fee,debit
 BANK-2024-003,3200.50,USD,2024-01-16,Customer payment,credit
 BANK-2024-004,-89.99,USD,2024-01-16,Subscription,debit

JSON

JSON Requirements:
  • Root element must be an array
  • Consistent field names across objects
  • UTF-8 encoding
Code example
[
  {
    "transaction_id": "BANK-2024-001",
    "amount": 1500.0,
    "currency": "USD",
    "date": "2024-01-15",
    "reference": "Invoice #1234",
    "type": "credit"
  },
  {
    "transaction_id": "BANK-2024-002",
    "amount": -250.0,
    "currency": "USD",
    "date": "2024-01-15",
    "reference": "Service fee",
    "type": "debit"
  }
]

XML

XML Requirements:
  • Valid XML with declaration
  • Root element containing transaction elements
  • UTF-8 encoding
Code example
<?xml version="1.0" encoding="UTF-8"?>
<transactions>
  <transaction>
    <transaction_id>BANK-2024-001</transaction_id>
    <amount>1500.00</amount>
    <currency>USD</currency>
    <date>2024-01-15</date>
    <reference>Invoice #1234</reference>
    <type>credit</type>
  </transaction>
  <transaction>
    <transaction_id>BANK-2024-002</transaction_id>
    <amount>-250.00</amount>
    <currency>USD</currency>
    <date>2024-01-15</date>
    <reference>Service fee</reference>
    <type>debit</type>
  </transaction>
  </transactions>

Upload via API


Use the import endpoint to upload transaction files.

Preview before uploading

Before committing a file for ingestion, you can preview it to verify column detection and sample data. This helps catch field mapping issues early.
cURL
curl -X POST "https://api.matcher.example.com/v1/imports/contexts/{contextId}/sources/{sourceId}/preview" \
 -H "Authorization: Bearer $TOKEN" \
 -F "file=@bank_statement_january.csv" \
 -F "max_rows=5"

Response

{
  "columns": ["transaction_id", "amount", "currency", "date", "reference"],
  "sampleRows": [
    ["BANK-2024-001", "1500.00", "USD", "2024-01-15", "Invoice #1234"],
    ["BANK-2024-002", "-250.00", "USD", "2024-01-15", "Service fee"]
  ],
  "rowCount": 2,
  "format": "csv"
}
API Reference: Preview file
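After saving the preview response to a file, you can script a check that every required column was detected. A sketch assuming `jq` is installed (the function name and file name are illustrative, not part of the API):

```shell
# Exit 0 only if the preview response lists all required columns.
# Array subtraction in jq removes detected columns from the required set;
# an empty remainder means nothing is missing.
preview_has_required_columns() {
  jq -e '(["transaction_id","amount","currency","date"] - .columns) == []' "$1" > /dev/null
}
```

Usage: `preview_has_required_columns preview.json && echo "ok to upload"`.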

Single file upload

cURL
curl -X POST "https://api.matcher.example.com/v1/imports/contexts/{contextId}/sources/{sourceId}/upload" \
 -H "Authorization: Bearer $TOKEN" \
 -F "file=@bank_statement_january.csv" \
 -F "format=csv"
API Reference: Upload file

Response

{
  "job_id": "job_imp_789xyz",
  "status": "QUEUED",
  "context_id": "ctx_abc123",
  "source_id": "src_bank456",
  "file_name": "bank_statement_january.csv",
  "file_size_bytes": 15420,
  "created_at": "2024-01-20T10:30:00Z"
}

Check import status

cURL
curl -X GET https://api.matcher.example.com/v1/contexts/{contextId}/jobs/{jobId} \
 -H "Authorization: Bearer $TOKEN"
API Reference: Get import status

Response (Processing)

{
  "job_id": "job_imp_789xyz",
  "status": "PROCESSING",
  "progress": {
    "total_rows": 1250,
    "processed_rows": 800,
    "valid_rows": 795,
    "invalid_rows": 5,
    "duplicate_rows": 12
  },
  "started_at": "2024-01-20T10:30:05Z"
}

Response (Completed)

{
  "job_id": "job_imp_789xyz",
  "status": "COMPLETED",
  "summary": {
    "total_rows": 1250,
    "imported": 1233,
    "duplicates_skipped": 12,
    "validation_errors": 5
  },
  "errors": [
    {
      "row": 45,
      "field": "amount",
      "error": "Invalid decimal format"
    },
    {
      "row": 89,
      "field": "date",
      "error": "Date before minimum allowed"
    },
    {
      "row": 234,
      "field": "currency",
      "error": "Unknown currency code: XXX"
    }
  ],
  "started_at": "2024-01-20T10:30:05Z",
  "completed_at": "2024-01-20T10:30:45Z"
}

Import job status values

Status       Description
QUEUED       Job received, waiting to start
PROCESSING   File is being parsed and validated
COMPLETED    Import finished successfully
FAILED       Import failed (check errors)
CANCELLED    Import was cancelled
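Instead of checking status by hand, you can poll the job endpoint until it reaches a terminal status. A sketch (`fetch_status` is a stand-in you would replace with the curl + jq call from the status example above; the 60-attempt cap and 5-second interval are arbitrary choices):

```shell
# Poll until the job hits a terminal status, then echo that status.
# Replace fetch_status with, e.g.:
#   curl -s "https://api.matcher.example.com/v1/contexts/$CTX/jobs/$JOB" \
#     -H "Authorization: Bearer $TOKEN" | jq -r .status
poll_import() {
  attempts=0
  while [ "$attempts" -lt 60 ]; do
    status=$(fetch_status)
    case "$status" in
      COMPLETED|FAILED|CANCELLED) echo "$status"; return 0 ;;
    esac
    attempts=$((attempts + 1))
    sleep 5
  done
  echo "TIMEOUT"
  return 1
}
```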

Validation and error handling


Matcher validates uploaded files at multiple stages.

Validation stages

1. Format validation: verifies the file is valid CSV, JSON, or XML with correct structure.
2. Schema validation: checks that required fields are present and match the configured field map.
3. Data type validation: validates that amounts are valid decimals, dates are parseable, and currencies are valid ISO codes.
4. Business rule validation: applies context-specific rules such as date ranges and amount limits.

Common validation errors

Error                   Cause                            Solution
INVALID_FORMAT          File cannot be parsed            Check file encoding and structure
MISSING_REQUIRED_FIELD  Required field not found         Verify field mapping configuration
INVALID_AMOUNT          Amount not a valid number        Check for currency symbols or commas in numbers
INVALID_DATE            Date cannot be parsed            Use ISO 8601 format (YYYY-MM-DD)
UNKNOWN_CURRENCY        Currency code not recognized     Use ISO 4217 codes (USD, EUR, BRL)
DATE_OUT_OF_RANGE       Date before/after allowed range  Check context date boundaries
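You can catch INVALID_AMOUNT and INVALID_DATE locally before uploading. A sketch for CSV (the function name is illustrative; it assumes the column order of the CSV example above, with amount in column 2 and date in column 4):

```shell
# Report rows whose amount is not a plain decimal or whose date is not
# YYYY-MM-DD. Silent output means both columns pass the spot check.
check_field_types() {
  awk -F, 'NR > 1 {
    if ($2 !~ /^-?[0-9]+(\.[0-9]+)?$/)
      print "row " NR ": invalid amount: " $2
    if ($4 !~ /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$/)
      print "row " NR ": invalid date: " $4
  }' "$1"
}
```

Usage: `check_field_types bank_statement_january.csv`.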

Handling errors

By default, valid rows are imported even if some rows have errors. Configure error handling behavior through context settings or handle errors after import completion by reviewing the job status response.

Duplicate detection


Matcher automatically detects and handles duplicate transactions to prevent double-counting.

How duplicates are detected

Duplicates are identified by computing a hash of key fields:
  • transaction_id
  • source_id
  • amount
  • currency
  • date
If a transaction with the same hash already exists in the system, it’s flagged as a duplicate.
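The idea can be sketched locally. Note this only illustrates the concept: the field separator and hash algorithm here are assumptions, not Matcher's documented internals.

```shell
# Hash the key fields of one transaction; identical key fields always
# produce the same digest, so a repeat upload is detectable.
dup_hash() {
  printf '%s|%s|%s|%s|%s' "$1" "$2" "$3" "$4" "$5" | sha256sum | cut -d' ' -f1
}

# Arguments: transaction_id, source_id, amount, currency, date
dup_hash BANK-2024-001 src_bank456 1500.00 USD 2024-01-15
```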

Duplicate handling options

Option          Behavior
skip (default)  Duplicate rows are skipped, existing data unchanged
replace         Duplicate rows replace existing transactions
error           Import fails if duplicates are found
Configure duplicate handling behavior through context or source settings.

Viewing duplicate details

The import summary shows how many duplicates were found:
{
  "summary": {
    "total_rows": 1000,
    "imported": 950,
    "duplicates_skipped": 50,
    "validation_errors": 0
  }
}

Batch uploads


For large reconciliation jobs, you can upload multiple files in sequence.

Upload multiple files

# Upload bank statement
curl -X POST "https://api.matcher.example.com/v1/imports/contexts/{contextId}/sources/{bankSourceId}/upload" \
 -H "Authorization: Bearer $TOKEN" \
 -F "file=@bank_january.csv" \
 -F "format=csv"

# Upload ledger export
curl -X POST "https://api.matcher.example.com/v1/imports/contexts/{contextId}/sources/{ledgerSourceId}/upload" \
 -H "Authorization: Bearer $TOKEN" \
 -F "file=@ledger_january.csv" \
 -F "format=csv"

Wait for all imports

Before running matching, ensure all imports are complete:
# List imports for context
curl -X GET "https://api.matcher.example.com/v1/imports?context_id={contextId}&status=PROCESSING" \
 -H "Authorization: Bearer $TOKEN"

Search uploaded transactions


After importing files, you can search across all transactions in a context to verify data quality or investigate specific records.
cURL
curl -X GET "https://api.matcher.example.com/v1/imports/contexts/{contextId}/transactions/search?q=Invoice&amount_min=1000&status=UNMATCHED" \
 -H "Authorization: Bearer $TOKEN"

Response

{
  "items": [
    {
      "id": "019c96a0-2a10-7dfe-b5c1-8a1b2c3d4e5f",
      "sourceId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "amount": "1500.00",
      "currency": "USD",
      "date": "2024-01-15T00:00:00Z",
      "description": "Invoice #1234",
      "status": "UNMATCHED"
    }
  ],
  "total": 1,
  "limit": 20,
  "offset": 0
}
API Reference: Search transactions
Supported filters include amount_min, amount_max, date_from, date_to, currency, source_id, status, and free-text search via the q parameter.

Best practices


Check file format and encoding locally before uploading. This catches obvious errors faster.
# Check CSV is valid
head -5 transactions.csv

# Check encoding
file transactions.csv
Standardize on ISO 8601 format (YYYY-MM-DD or YYYY-MM-DDTHH:MM:SSZ) across all sources to avoid parsing issues.
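If a source exports US-style MM/DD/YYYY dates, you can normalize them before upload. A sketch in plain awk (the function name is illustrative; it assumes the column layout of the CSV example above, with the date in column 4):

```shell
# Rewrite MM/DD/YYYY dates in column 4 as YYYY-MM-DD; other rows pass
# through unchanged.
us_dates_to_iso() {
  awk -F, 'BEGIN { OFS = "," }
  NR > 1 && $4 ~ /^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9][0-9][0-9]$/ {
    split($4, d, "/"); $4 = d[3] "-" d[1] "-" d[2]
  }
  { print }' "$1"
}
```

Usage: `us_dates_to_iso raw_export.csv > normalized.csv`.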
Always include unique transaction IDs from the source system. This enables proper duplicate detection and audit trails.
For amount signs, decide on a convention (negative for debits, positive for credits) and apply it consistently across all sources. Document the convention in your field mapping.
For very large files (>50MB), consider splitting into smaller chunks by date range. This improves reliability and allows partial retries.
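One way to split by row count while keeping the header in every chunk, using standard `tail` and `split` (the function name, chunk size, and prefix are illustrative):

```shell
# Split a CSV into chunks of $2 data rows each, prefixed $3, repeating
# the header row at the top of every chunk file.
split_csv() {
  tail -n +2 "$1" | split -l "$2" - "$3"
  for f in "$3"*; do
    { head -1 "$1"; cat "$f"; } > "$f.csv" && rm "$f"
  done
}
```

Usage: `split_csv transactions.csv 10000 chunk_` produces `chunk_aa.csv`, `chunk_ab.csv`, and so on.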
For recurring reconciliation, automate file uploads using scheduled jobs or webhooks from source systems.
# Example: Daily upload via cron
0 6 * * * /scripts/upload_bank_statement.sh

Next steps