Using Helm
This guide walks you through installing and configuring a Midaz environment for Kubernetes using Helm. You’ll find instructions on setting up Ingress controllers, enabling observability, and managing dependencies.
Before you start
Before deploying Midaz with Helm, make sure you have the following in place:
- Kubernetes (v1.30+) – You’ll need a running cluster to deploy into.
- Helm 3+ – Helm must be installed and available. Run `helm version` to confirm (a combined check is sketched below the list).
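If you want a quick sanity check of both prerequisites, a sketch like the following should work (it assumes kubectl is already configured to talk to the target cluster):

```shell
# Verify the cluster and client tooling before installing the chart
kubectl version        # server version should report v1.30 or newer
kubectl cluster-info   # confirms the API server is reachable
helm version           # should report a v3.x client
```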
Want a deeper understanding of how everything fits together? Head over to our Midaz architecture overview before you dive in.
How to deploy Midaz
To get Midaz up and running with Helm, run:
```shell
helm install midaz oci://registry-1.docker.io/lerianstudio/midaz-helm --version <version> -n midaz --create-namespace
```
This creates a namespace called `midaz` (if it doesn't already exist) and deploys the chart.
To confirm the deployment went through:
```shell
helm list -n midaz
```
Tip: You'll find the Helm chart used in this guide in our GitHub repository. Feel free to fork it, tweak values, or extend as needed.
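If you plan to customize the chart, a common pattern is to keep your overrides in a separate values file and layer it on top of the defaults at install or upgrade time. A minimal sketch (`my-values.yaml` is just an example name):

```shell
# Install (or upgrade in place) with custom overrides applied on top of the chart defaults
helm upgrade --install midaz oci://registry-1.docker.io/lerianstudio/midaz-helm \
  --version <version> -n midaz --create-namespace -f my-values.yaml
```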
Configuring ingress for different controllers
Midaz supports different ingress controllers for exposing services like Transaction, Ledger, and Console. You'll need to have a controller set up in your cluster, and configure it in the `values.yaml` file.
Below are examples for the most common options:
NGINX ingress controller
```yaml
ingress:
  enabled: true
  className: "nginx"
  annotations: {}
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls
      hosts:
        - midaz.example.com
```
Tip: Check the ingress-nginx official documentation for a full reference on NGINX annotations.
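For example, if you terminate TLS at the ingress and expect larger request payloads, annotations along these lines are commonly added (the values shown are illustrative, not Midaz-specific requirements):

```yaml
annotations:
  nginx.ingress.kubernetes.io/ssl-redirect: "true"    # redirect plain HTTP to HTTPS
  nginx.ingress.kubernetes.io/proxy-body-size: "8m"   # raise the default request body limit
```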
AWS ALB (Application Load Balancer)
```yaml
ingress:
  enabled: true
  className: "alb"
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: "midaz"
    alb.ingress.kubernetes.io/healthcheck-path: "/healthz"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []
```
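Note that this example leaves `tls` empty because, with ALB, TLS is typically terminated at the load balancer itself. If you need HTTPS, the usual approach is to attach an ACM certificate through an annotation (the ARN below is a placeholder):

```yaml
annotations:
  alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>"
```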
Traefik ingress controller
```yaml
ingress:
  enabled: true
  className: "traefik"
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "web,websecure"
    traefik.ingress.kubernetes.io/router.tls: "true"
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls
      hosts:
        - midaz.example.com
```
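Whichever controller you use, you can confirm that the Ingress resources were created and have been assigned an address:

```shell
kubectl get ingress -n midaz        # lists hosts and the address assigned by the controller
kubectl describe ingress -n midaz   # shows rules, backends, and controller events
```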
Configuring observability
Midaz uses the Grafana OpenTelemetry LGTM Docker image to power its observability stack. This setup collects and exports telemetry data such as traces and metrics.
Accessing the Grafana dashboard
To access the dashboard locally:
```shell
kubectl port-forward svc/midaz-grafana 3000:3000 -n midaz
```
Then head to: http://localhost:3000.
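The service name `midaz-grafana` assumes the release is named `midaz`; if you used a different release name, list the services first to find the right one:

```shell
kubectl get svc -n midaz | grep -i grafana
```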
Internal DNS access for Grafana
To expose Grafana within your cluster or private network via DNS, enable and configure Ingress like this:
```yaml
grafana:
  enabled: true
  name: grafana
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/whitelist-source-range: ""
    hosts:
      - host: "midaz-ote.example.com"
        paths:
          - path: /
            pathType: Prefix
    tls: []
```
Disabling observability
You can disable the observability stack entirely by setting:
```yaml
grafana:
  enabled: false
```
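You can set this in your values file or directly on the command line, for example:

```shell
helm upgrade midaz oci://registry-1.docker.io/lerianstudio/midaz-helm \
  --version <version> -n midaz --reuse-values --set grafana.enabled=false
```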
Configuring dependencies
Midaz comes with default dependencies that are enabled out of the box. You can turn them off if you're using existing infrastructure.
Valkey
- Version: 2.4.7.
- Repository: Bitnami.
- Disable: `valkey.enabled=false`.
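In `values.yaml` terms, disabling the bundled dependency looks like this; the same pattern applies to the PostgreSQL, MongoDB, and RabbitMQ dependencies below:

```yaml
valkey:
  enabled: false
```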
To use your own Redis/Valkey instance:
```yaml
onboarding:
  configmap:
    REDIS_HOST: { your-host }
    REDIS_PORT: { your-port }
    REDIS_USER: { your-user }
  secrets:
    REDIS_PASSWORD: { your-password }
transaction:
  configmap:
    REDIS_HOST: { your-host }
    REDIS_PORT: { your-port }
    REDIS_USER: { your-user }
  secrets:
    REDIS_PASSWORD: { your-password }
```
PostgreSQL
- Version: 16.3.0.
- Repository: Bitnami.
- Disable: `postgresql.enabled=false`.
External PostgreSQL config example:
```yaml
onboarding:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-user }
    DB_PORT: { your-port }
    DB_REPLICA_HOST: { your-replica-host }
    DB_REPLICA_USER: { your-replica-user }
    DB_REPLICA_PORT: { your-replica-port }
  secrets:
    DB_PASSWORD: { your-password }
    DB_REPLICA_PASSWORD: { your-replica-password }
transaction:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-user }
    DB_PORT: { your-port }
    DB_REPLICA_HOST: { your-replica-host }
    DB_REPLICA_USER: { your-replica-user }
    DB_REPLICA_PORT: { your-replica-port }
  secrets:
    DB_PASSWORD: { your-password }
    DB_REPLICA_PASSWORD: { your-replica-password }
```
MongoDB
- Version: 15.4.5.
- Repository: Bitnami.
- Disable: `mongodb.enabled=false`.
To connect to an external MongoDB:
```yaml
onboarding:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-db-name }
    MONGO_USER: { your-user }
    MONGO_PORT: { your-port }
  secrets:
    MONGO_PASSWORD: { your-password }
transaction:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-db-name }
    MONGO_USER: { your-user }
    MONGO_PORT: { your-port }
  secrets:
    MONGO_PASSWORD: { your-password }
```
RabbitMQ
- Version: 16.0.0.
- Repository: Bitnami.
- Disable: `rabbitmq.enabled=false`.
If you're using an external RabbitMQ instance, you need to load the required definitions file. Without these queues, exchanges, and bindings in place, Midaz won’t work as expected.
You can load the definitions in one of two ways:
Automatically
Enable the `externalRabbitmqDefinitions` flag in your `values.yaml` file to apply the default definitions automatically:
```yaml
global:
  # -- Enable or disable loading of default RabbitMQ definitions to external host
  externalRabbitmqDefinitions:
    enabled: true
```
This creates a Kubernetes Job that loads the RabbitMQ definitions into your external instance.
Attention: This Job only runs during the first install of the chart. It's triggered by a Helm post-install hook and won't run again during upgrades or re-installs. If you need to re-run it, delete the release and install it again.
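If you do need the Job to run again, the flow is roughly as follows (note that this removes and recreates the release, so expect downtime):

```shell
helm uninstall midaz -n midaz
helm install midaz oci://registry-1.docker.io/lerianstudio/midaz-helm \
  --version <version> -n midaz --create-namespace
```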
Manually
If you prefer to apply the definitions yourself, use RabbitMQ’s HTTP API:
```shell
curl -u {your-host-user}:{your-host-pass} -X POST -H "Content-Type: application/json" \
  -d @load_definitions.json http://{your-host}:{your-host-port}/api/definitions
```
You'll find the `load_definitions.json` file at `charts/midaz/files/rabbitmq/load_definitions.json`.
Use your own RabbitMQ
If you already have a RabbitMQ instance running, you can disable the built-in dependency and point Midaz components to your external setup:
```yaml
onboarding:
  configmap:
    RABBITMQ_HOST: { your-host }
    RABBITMQ_DEFAULT_USER: { your-host-user }
    RABBITMQ_PORT_HOST: { your-host-port }
    RABBITMQ_PORT_AMQP: { your-host-amqp-port }
  secrets:
    RABBITMQ_DEFAULT_PASS: { your-host-pass }
transaction:
  configmap:
    RABBITMQ_HOST: { your-host }
    RABBITMQ_DEFAULT_USER: { your-host-user }
    RABBITMQ_PORT_HOST: { your-host-port }
    RABBITMQ_PORT_AMQP: { your-host-amqp-port }
  secrets:
    RABBITMQ_DEFAULT_PASS: { your-host-pass }
```
Midaz components
The Midaz system is made up of distinct layers that work together, deployed as segregated workloads. The chart parameters for each component are listed below.
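All of the parameters in the tables below can be overridden from your values file. A small illustrative sketch, not a recommended production profile:

```yaml
onboarding:
  replicaCount: 3
  resources:
    requests:
      cpu: "500m"
      memory: "256Mi"
console:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: console.example.com
        paths:
          - path: /
            pathType: Prefix
```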
Onboarding
Parameter | Description | Default Value |
---|---|---|
onboarding.name | Service name. | "onboarding" |
onboarding.replicaCount | Number of replicas for the onboarding service. | 2 |
onboarding.image.repository | Repository for the onboarding service container image. | "lerianstudio/midaz-onboarding" |
onboarding.image.pullPolicy | Image pull policy. | "IfNotPresent" |
onboarding.image.tag | Image tag used for deployment. | "2.1.0" |
onboarding.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
onboarding.nameOverride | Overrides the default generated name by Helm. | "" |
onboarding.fullnameOverride | Overrides the full name generated by Helm. | "" |
onboarding.podAnnotations | Pod annotations for additional metadata. | {} |
onboarding.podSecurityContext | Security context applied at the pod level. | {} |
onboarding.securityContext.runAsGroup | Defines the group ID for the user running the process inside the container. | 1000 |
onboarding.securityContext.runAsUser | Defines the user ID for the process running inside the container. | 1000 |
onboarding.securityContext.runAsNonRoot | Ensures the process does not run as root. | true |
onboarding.securityContext.capabilities.drop | List of capabilities to drop. | ["ALL"] |
onboarding.securityContext.readOnlyRootFilesystem | Defines the root filesystem as read-only. | true |
onboarding.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | true |
onboarding.pdb.minAvailable | Minimum number of available pods. | 1 |
onboarding.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
onboarding.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
onboarding.deploymentUpdate.type | Type of deployment strategy. | "RollingUpdate" |
onboarding.deploymentUpdate.maxSurge | Maximum number of pods that can be created over the desired number of pods. | "100%" |
onboarding.deploymentUpdate.maxUnavailable | Maximum number of pods that can be unavailable during the update. | 0 |
onboarding.service.type | Kubernetes service type. | "ClusterIP" |
onboarding.service.port | Port for the HTTP API. | 3000 |
onboarding.ingress.enabled | Specifies whether Ingress is enabled. | false |
onboarding.ingress.className | Ingress class name. | "" |
onboarding.ingress.annotations | Additional ingress annotations. | {} |
onboarding.ingress.hosts | Configured hosts for Ingress and associated paths. | "" |
onboarding.ingress.tls | TLS configurations for Ingress. | [] |
onboarding.resources.limits.cpu | CPU limit allocated for the pods. | "1500m" |
onboarding.resources.limits.memory | Memory limit allocated for the pods. | "512Gi" |
onboarding.resources.requests.cpu | Minimum CPU request for the pods. | "768m" |
onboarding.resources.requests.memory | Minimum memory request for the pods. | "256Mi" |
onboarding.autoscaling.enabled | Specifies whether autoscaling is enabled. | true |
onboarding.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 2 |
onboarding.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
onboarding.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
onboarding.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
onboarding.nodeSelector | Node selectors for pod scheduling. | {} |
onboarding.tolerations | Tolerations for pod scheduling. | {} |
onboarding.affinity | Affinity rules for pod scheduling. | {} |
onboarding.configmap | Additional configurations in ConfigMap. | Find the default values in the configuration. |
onboarding.secrets | Additional secrets for the service. | Find the default values in the configuration. |
onboarding.serviceAccount.create | Specifies whether the service account should be created. | true |
onboarding.serviceAccount.annotations | Annotations for the service account. | {} |
onboarding.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |
Transaction
Parameter | Description | Default Value |
---|---|---|
transaction.name | Service name. | "transaction" |
transaction.replicaCount | Number of replicas for the transaction service. | 1 |
transaction.image.repository | Repository for the transaction service container image. | "lerianstudio/midaz-transaction" |
transaction.image.pullPolicy | Image pull policy. | "IfNotPresent" |
transaction.image.tag | Image tag used for deployment. | "2.1.0" |
transaction.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
transaction.nameOverride | Overrides the default generated name by Helm. | "" |
transaction.fullnameOverride | Overrides the full name generated by Helm. | "" |
transaction.podAnnotations | Pod annotations for additional metadata. | {} |
transaction.podSecurityContext | Security context for the pod. | {} |
transaction.securityContext.runAsGroup | Defines the group ID for the user running the process inside the container. | 1000 |
transaction.securityContext.runAsUser | Defines the user ID for the process running inside the container. | 1000 |
transaction.securityContext.runAsNonRoot | Ensures the process does not run as root. | true |
transaction.securityContext.capabilities.drop | List of Linux capabilities to drop. | ["ALL"] |
transaction.securityContext.readOnlyRootFilesystem | Defines the root filesystem as read-only. | true |
transaction.pdb.enabled | Enable or disable PodDisruptionBudget. | true |
transaction.pdb.minAvailable | Minimum number of available pods. | 2 |
transaction.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
transaction.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
transaction.deploymentUpdate.type | Type of deployment strategy. | "RollingUpdate" |
transaction.deploymentUpdate.maxSurge | Maximum number of pods that can be created over the desired number of pods. | "100%" |
transaction.deploymentUpdate.maxUnavailable | Maximum number of pods that can be unavailable during the update. | 0 |
transaction.service.type | Kubernetes service type. | "ClusterIP" |
transaction.service.port | Port for the HTTP API. | 3001 |
transaction.ingress.enabled | Enable or disable ingress. | false |
transaction.ingress.className | Ingress class name. | "" |
transaction.ingress.annotations | Additional ingress annotations. | {} |
transaction.ingress.hosts | Configured hosts for ingress and associated paths. | [{host: "", paths: [{path: "/", pathType: "Prefix"}]}] |
transaction.ingress.tls | TLS configuration for ingress. | [] |
transaction.resources.limits.cpu | CPU limit allocated for the pods. | "2000m" |
transaction.resources.limits.memory | Memory limit allocated for the pods. | "512Gi" |
transaction.resources.requests.cpu | Minimum CPU request for the pods. | "768m" |
transaction.resources.requests.memory | Minimum memory request for the pods. | "256Mi" |
transaction.autoscaling.enabled | Enable or disable horizontal pod autoscaling. | true |
transaction.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 3 |
transaction.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 9 |
transaction.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 70 |
transaction.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
transaction.nodeSelector | Node selector for scheduling pods on specific nodes. | {} |
transaction.tolerations | Tolerations for scheduling on tainted nodes. | {} |
transaction.affinity | Affinity rules for pod scheduling. | {} |
transaction.configmap | ConfigMap for environment variables and configurations. | Find the default values in the configuration. |
transaction.secrets | Secrets for storing sensitive data. | Find the default values in the configuration. |
transaction.serviceAccount.create | Specifies whether a ServiceAccount should be created. | true |
transaction.serviceAccount.annotations | Annotations for the ServiceAccount. | {} |
transaction.serviceAccount.name | Name of the service account. | "" |
Console
Parameter | Description | Default Value |
---|---|---|
console.name | Service name. | "console" |
console.enabled | Enable or disable the console service. | true |
console.replicaCount | Number of replicas for the deployment. | 1 |
console.image.repository | Docker image repository for Console. | "lerianstudio/midaz-console" |
console.image.pullPolicy | Docker image pull policy. | "IfNotPresent" |
console.image.tag | Docker image tag used for deployment. | "1.25.1" |
console.imagePullSecrets | Secrets for pulling Docker images from a private registry. | [] |
console.nameOverride | Overrides the resource name. | "" |
console.fullnameOverride | Overrides the full resource name. | "" |
console.podAnnotations | Annotations for the pods. | {} |
console.podSecurityContext | Security context applied at the pod level. | {} |
console.securityContext.runAsGroup | Defines the group ID for the user running the process inside the container. | 1000 |
console.securityContext.runAsUser | Defines the user ID for the process running inside the container. | 1000 |
console.securityContext.runAsNonRoot | Ensures the process does not run as root. | true |
console.securityContext.capabilities.drop | List of Linux capabilities to drop. | ["ALL"] |
console.securityContext.readOnlyRootFilesystem | Defines the root filesystem as read-only. | true |
console.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | false |
console.pdb.minAvailable | Minimum number of available pods for PodDisruptionBudget. | 1 |
console.pdb.maxUnavailable | Maximum number of unavailable pods for PodDisruptionBudget. | 1 |
console.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
console.deploymentUpdate.type | Type of deployment strategy. | "RollingUpdate" |
console.deploymentUpdate.maxSurge | Maximum number of pods that can be created over the desired number of pods. | "100%" |
console.deploymentUpdate.maxUnavailable | Maximum number of pods that can be unavailable during the update. | 0 |
console.service.type | Kubernetes service type. | "ClusterIP" |
console.service.port | Service port. | 8081 |
console.ingress.enabled | Specifies whether Ingress is enabled. | false |
console.ingress.className | Ingress class name. | "" |
console.ingress.annotations | Additional annotations for Ingress. | {} |
console.ingress.hosts | Configured hosts for Ingress and associated paths. | [{ "host": "", "paths": [{ "path": "/", "pathType": "Prefix" }] }] |
console.ingress.tls | TLS configurations for Ingress. | [] |
console.resources.limits.cpu | CPU limit allocated for the pods. | "200m" |
console.resources.limits.memory | Memory limit allocated for the pods. | "256Mi" |
console.resources.requests.cpu | Minimum CPU request for the pods. | "100m" |
console.resources.requests.memory | Minimum memory request for the pods. | "128Mi" |
console.autoscaling.enabled | Specifies whether horizontal pod autoscaling is enabled. | true |
console.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 1 |
console.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 3 |
console.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
console.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
console.nodeSelector | Node selectors for pod scheduling. | {} |
console.tolerations | Tolerations for pod scheduling. | {} |
console.affinity | Affinity rules for pod scheduling. | {} |
console.configmap | Additional configurations in the ConfigMap. | Find the default values in the configuration. |
console.secrets | Additional secrets for the service. | {} |
console.serviceAccount.create | Specifies whether the service account should be created. | true |
console.serviceAccount.annotations | Annotations for the service account. | {} |
console.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |