Starting with Midaz v3.5.0 and Helm Chart v5.x, CRM is no longer deployed as a standalone plugin. It is now integrated directly into the Midaz monorepo and Helm chart as an embedded component. This guide explains how to migrate from the standalone plugin-crm deployment to the integrated CRM.
If you are starting a new Midaz deployment (v5.x+), you do not need this guide. Simply enable CRM in your Helm values as described in Deploy Midaz using Helm. The integrated CRM is now the only supported deployment model.

What changed


The CRM plugin was originally maintained as a separate codebase with its own release cycle and deployed independently through a dedicated Helm chart (plugin-crm) in the midaz-plugins namespace. Starting with Midaz v3.5.0-beta.12 (December 2025), CRM was incorporated into the Midaz monorepo under components/crm/. Its deployment was then consolidated into the main Midaz Helm chart beginning in v5.x.

Architecture comparison

| Aspect | Standalone (v4.x and earlier) | Integrated (v5.x+) |
|---|---|---|
| Source code | Separate repository | components/crm/ in the Midaz monorepo |
| Helm chart | plugin-crm (dedicated chart) | Part of the midaz chart |
| Namespace | midaz-plugins | midaz |
| Versioning | Independent release cycle | Matches Midaz core version |
| MongoDB | Own connection configuration | Shared MongoDB with other Midaz services |
| Installation | helm install plugin-crm oci://... | crm.enabled: true in Midaz values |
| Port | 4003 | 4003 (unchanged) |

API changes

The CRM API remains fully backward-compatible. All endpoints available in the standalone version continue to work the same way in the integrated deployment.
| Resource | Endpoints | Status |
|---|---|---|
| Holders | POST, GET (list), GET (by ID), PATCH, DELETE | Unchanged |
| Aliases | POST, GET (list by holder), GET (by ID), PATCH, DELETE | Unchanged |
| Aliases (global) | GET /v1/aliases (list across all holders) | Unchanged |
| Related Parties | DELETE | Added in integrated version |
The GET /v1/aliases endpoint allows you to list aliases across all holders with advanced filtering. Filters include holder_id, account_id, ledger_id, document, banking details, regulatory fields, and related party attributes. This endpoint complements the holder-scoped endpoints under /v1/holders/{holder_id}/aliases.
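As a sketch, a filtered global listing might look like the following; the IDs are placeholders, and the query parameter names follow the filters listed above:

```shell
# List aliases across all holders, filtered by holder and document
curl -s -H "X-Organization-Id: <your-org-id>" \
  "http://localhost:4003/v1/aliases?holder_id=<holder-id>&document=<document-number>"
```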
The endpoint DELETE /v1/holders/{holder_id}/aliases/{alias_id}/related-parties/{related_party_id} was introduced with the integrated CRM. It lets you remove a related party directly; in earlier versions, this required updating the alias with a payload that omitted the related party.
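A minimal invocation of the new operation, with placeholder IDs:

```shell
# Remove a single related party from an alias (new in the integrated CRM)
curl -s -X DELETE \
  -H "X-Organization-Id: <your-org-id>" \
  http://localhost:4003/v1/holders/<holder-id>/aliases/<alias-id>/related-parties/<related-party-id>
```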

What stays the same

  • API contract: All existing endpoints, request and response schemas, and behaviors remain unchanged.
  • Database: MongoDB continues to be the storage backend.
  • Authentication: Access Manager integration works the same way (PLUGIN_AUTH_ENABLED, PLUGIN_AUTH_ADDRESS).
  • Encryption keys: LCRYPTO_HASH_SECRET_KEY and LCRYPTO_ENCRYPT_SECRET_KEY are still required.
  • Default port: CRM continues to run on port 4003.

Pre-migration checklist


Before starting the migration, confirm the following:
1. Identify your current standalone version
helm list -n midaz-plugins
Record the plugin-crm chart version and app version.
2. Back up your current Helm values
helm get values plugin-crm -n midaz-plugins > plugin-crm-values-backup.yaml
3. Back up your MongoDB data
# Find the MongoDB pod in the midaz-plugins namespace
kubectl get pods -n midaz-plugins -l app=mongodb

# Export the CRM database
kubectl exec -n midaz-plugins <mongodb-pod> -- \
  mongodump --db crm --archive=/tmp/crm-backup.archive

# Copy the backup locally
kubectl cp midaz-plugins/<mongodb-pod>:/tmp/crm-backup.archive ./crm-backup.archive
4. Verify your Midaz chart is v5.x or later
helm list -n midaz
If you are running v4.x or earlier, upgrade Midaz first using the Upgrading Helm guide.
5. Schedule a maintenance window
CRM will be temporarily unavailable during the migration, so plan for a short downtime window.

Migration steps


Step 1 — Enable CRM in the Midaz chart

Add the CRM configuration to your Midaz Helm values:
crm:
  enabled: true
  configmap:
    MONGO_HOST: "midaz-mongodb.midaz.svc.cluster.local"
    MONGO_NAME: "crm"
    MONGO_USER: "midaz"
    MONGO_PORT: "27017"
  secrets:
    MONGO_PASSWORD: "<your-mongodb-password>"
    LCRYPTO_HASH_SECRET_KEY: "<your-existing-hash-key>"
    LCRYPTO_ENCRYPT_SECRET_KEY: "<your-existing-encrypt-key>"
Use the same encryption keys (LCRYPTO_HASH_SECRET_KEY and LCRYPTO_ENCRYPT_SECRET_KEY) used in the standalone deployment. Different keys will make existing encrypted data unreadable.
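If you no longer have the keys on hand, they can usually be recovered from the standalone deployment's Kubernetes secret. The secret name plugin-crm-secrets below is an assumption; list the secrets in the namespace first to find the actual name:

```shell
# Find the secret holding the standalone CRM's keys
kubectl get secrets -n midaz-plugins

# Decode the existing keys (secret name "plugin-crm-secrets" is an assumption)
kubectl get secret plugin-crm-secrets -n midaz-plugins \
  -o jsonpath='{.data.LCRYPTO_HASH_SECRET_KEY}' | base64 -d; echo
kubectl get secret plugin-crm-secrets -n midaz-plugins \
  -o jsonpath='{.data.LCRYPTO_ENCRYPT_SECRET_KEY}' | base64 -d; echo
```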
If you use external secrets:
crm:
  enabled: true
  useExistingSecret: true
  existingSecretName: "crm-secrets"
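One way to create the crm-secrets secret referenced above is with kubectl; the key names here mirror the values-file keys, but confirm the exact names your chart version expects:

```shell
# Create the secret referenced by existingSecretName (key names assumed to match the values file)
kubectl create secret generic crm-secrets -n midaz \
  --from-literal=MONGO_PASSWORD='<your-mongodb-password>' \
  --from-literal=LCRYPTO_HASH_SECRET_KEY='<your-existing-hash-key>' \
  --from-literal=LCRYPTO_ENCRYPT_SECRET_KEY='<your-existing-encrypt-key>'
```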

Step 2 — Migrate your MongoDB data

If your standalone CRM used its own MongoDB instance, restore the data into the Midaz-managed MongoDB.
# Copy the backup to the Midaz MongoDB pod
kubectl cp ./crm-backup.archive midaz/<midaz-mongodb-pod>:/tmp/crm-backup.archive

# Restore the CRM database
kubectl exec -n midaz <midaz-mongodb-pod> -- \
  mongorestore --archive=/tmp/crm-backup.archive --db crm --drop
If the standalone and integrated CRM already use the same MongoDB instance, you can skip this step. Just confirm that MONGO_HOST and MONGO_NAME match.
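To sanity-check the restore, you can print per-collection document counts and compare them against the standalone instance. This sketch enumerates collections rather than assuming their names:

```shell
# Print document counts for every collection in the restored "crm" database
kubectl exec -n midaz <midaz-mongodb-pod> -- \
  mongosh --quiet --eval 'const d = db.getSiblingDB("crm"); d.getCollectionNames().forEach(c => print(c, d[c].countDocuments()))'
```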

Step 3 — Deploy the integrated CRM

helm upgrade midaz oci://registry-1.docker.io/lerianstudio/midaz-helm \
  --version 5.x.x \
  -n midaz \
  -f your-values.yaml

Step 4 — Verify the integrated CRM is running

# Check the pod status
kubectl get pods -n midaz -l app=crm

# Check the logs
kubectl logs -n midaz deployment/midaz-crm

# Test the health endpoint (run port-forward in a separate terminal)
kubectl port-forward -n midaz svc/midaz-crm 4003:4003 &
curl http://localhost:4003/health

Step 5 — Validate your data

Run a quick validation to confirm that your data migrated correctly.
# List holders through the integrated CRM
curl -H "X-Organization-Id: <your-org-id>" \
  http://localhost:4003/v1/holders

# Verify a specific holder
curl -H "X-Organization-Id: <your-org-id>" \
  http://localhost:4003/v1/holders/<known-holder-id>
Compare the results with the standalone deployment.
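A quick way to compare is to count holders on both sides before decommissioning the standalone service. This assumes jq is installed and that the response wraps results in an items array; adjust the jq path to your API's actual response envelope:

```shell
# Rough holder-count comparison between standalone and integrated CRM
OLD=$(curl -s -H "X-Organization-Id: <your-org-id>" http://<standalone-host>:4003/v1/holders | jq '.items | length')
NEW=$(curl -s -H "X-Organization-Id: <your-org-id>" http://localhost:4003/v1/holders | jq '.items | length')
echo "standalone=$OLD integrated=$NEW"
```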

Step 6 — Update DNS and ingress

Update your DNS records or ingress rules to point to the CRM service in the midaz namespace.
# The CRM service name changes from the standalone chart
# Old: plugin-crm.midaz-plugins.svc.cluster.local
# New: midaz-crm.midaz.svc.cluster.local
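If CRM is exposed through an Ingress, the change amounts to repointing the backend service. A sketch of the updated backend, with an illustrative hostname:

```yaml
# Illustrative ingress rule; match this against your actual ingress resource
spec:
  rules:
    - host: crm.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: midaz-crm   # was: plugin-crm (in midaz-plugins)
                port:
                  number: 4003
```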

Step 7 — Remove the standalone CRM

After confirming that everything works as expected, remove the standalone deployment.
helm uninstall plugin-crm -n midaz-plugins
Only uninstall the standalone CRM after validating the integrated deployment. This operation removes the standalone deployment and its resources.
If the midaz-plugins namespace is no longer needed, you can optionally remove it.
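Before deleting it, confirm that no other workloads are still running there:

```shell
# List everything left in the namespace; it should be empty (or contain only CRM leftovers)
kubectl get all -n midaz-plugins
```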
kubectl delete namespace midaz-plugins

Access Manager permissions


Access Manager permissions remain unchanged after the migration. The application name continues to be plugin-crm, and permissions apply to the holders and aliases resources.
| Permission | Description | Resources | Allowed Methods |
|---|---|---|---|
| plugin-crm-editor-permission | Full access | holders, aliases | post, get, patch, delete |
| plugin-crm-contributor-permission | Read and write | holders, aliases | post, get, patch |
| plugin-crm-viewer-permission | Read only | holders, aliases | get |
No updates are required in your Access Manager configuration.

Rollback procedure


If you need to revert to the standalone CRM:
1. Disable CRM in the Midaz chart:
crm:
  enabled: false
2. Re-deploy the Midaz chart:
helm upgrade midaz oci://registry-1.docker.io/lerianstudio/midaz-helm \
  --version 5.x.x -n midaz -f your-values.yaml
3. Re-install the standalone plugin-crm:
helm install plugin-crm oci://registry-1.docker.io/lerianstudio/plugin-crm \
  --version <your-previous-version> \
  -n midaz-plugins \
  -f plugin-crm-values-backup.yaml
4. Restore DNS or ingress to point back to the standalone service.

Troubleshooting


CRM pod fails to start with encryption errors
  • Confirm that LCRYPTO_HASH_SECRET_KEY and LCRYPTO_ENCRYPT_SECRET_KEY exactly match the values used in the standalone deployment.
Data appears empty after migration
  • Verify that MONGO_HOST and MONGO_NAME point to the correct MongoDB instance and database.
  • If you ran mongorestore, confirm the restore completed successfully.
Access Manager rejects requests
  • The application name in Access Manager should still be plugin-crm. No change is required.
Port conflict on 4003
  • Running both the standalone and integrated CRM simultaneously will create a conflict on port 4003.
If you need to run both during testing, temporarily change the port in the Midaz values:
crm:
  configmap:
    SERVER_PORT: "4013"
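After changing the port, you can verify the integrated CRM on the new port. This assumes the chart templates the Service port from SERVER_PORT; if it does not, port-forward to the pod's container port instead:

```shell
# Verify the integrated CRM on the alternate port (run port-forward in a separate terminal)
kubectl port-forward -n midaz svc/midaz-crm 4013:4013 &
curl http://localhost:4013/health
```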

Next steps