Using Helm
This guide walks you through installing and configuring a Midaz environment on Kubernetes using Helm.
You’ll find instructions on setting up Ingress controllers, enabling observability, and managing dependencies.
Deploying Midaz with Helm
Prerequisites
Before deploying Midaz with Helm, make sure you have:
- Kubernetes (v1.30+) – a running cluster.
- Helm 3+ – installed and available (`helm version`).
- Access to a container registry with Midaz images.
- DNS and TLS certificates for ingress (or cert-manager installed).
Note: The source code for this Helm chart is available in our GitHub repository.
The default installation matches the one provided in the Midaz quick installation guide.
Want a deeper understanding of the architecture? Check the Midaz architecture overview.
Install Midaz via Helm Chart
To install Midaz using Helm, run the following command:
```shell
helm install midaz oci://registry-1.docker.io/lerianstudio/midaz-helm --version <version> -n midaz --create-namespace
```
- Replace `<version>` with the desired Helm chart version. You can check available versions by running:
```shell
helm search repo oci://registry-1.docker.io/lerianstudio/midaz-helm --versions
```
This creates a namespace called `midaz` (if it doesn't already exist) and deploys the chart. To confirm the deployment went through, run:
```shell
helm list -n midaz
```
Tip: The Helm chart is available in our GitHub repository. You can fork it, customize values, or extend it as needed.
Configuring ingress
Ingress allows you to expose Midaz services outside the Kubernetes cluster, binding them to specific domains and TLS certificates. In this chart, you can enable ingress individually for the Transaction, Onboarding, and Console services.
To use ingress, you’ll need an ingress controller running in your cluster (e.g., NGINX, AWS ALB, or Traefik) and DNS entries pointing to it.
Tip: You can enable ingress per service in your `values.yaml` file and configure hostnames, TLS secrets, and any controller-specific annotations.
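As a minimal sketch, assuming the NGINX controller and an example hostname (the service-level keys mirror the component tables later in this guide; the same structure applies under `transaction:` and `console:`):

```yaml
onboarding:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: onboarding.example.com # example hostname
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: onboarding-tls # must exist or be managed by cert-manager
        hosts:
          - onboarding.example.com
```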
The following sections provide configuration examples for the most common ingress controllers.
NGINX ingress controller
To use the NGINX Ingress Controller, configure your `values.yaml` as follows:
```yaml
ingress:
  enabled: true
  className: "nginx"
  # The `annotations` field adds custom metadata to the Ingress resource.
  # Annotations are key-value pairs that attach arbitrary non-identifying
  # metadata to objects; the NGINX controller reads them to adjust its behavior.
  # See https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md
  annotations: {}
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls # Ensure this secret exists or is managed by cert-manager
      hosts:
        - midaz.example.com
```
Tip: Check the official ingress-nginx documentation for a full reference on NGINX annotations.
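If cert-manager is installed, it can issue the `midaz-tls` secret referenced above instead of you creating it by hand. A minimal sketch, assuming a ClusterIssuer named `letsencrypt-prod` already exists in your cluster (the issuer name is an example, not part of this chart):

```yaml
ingress:
  annotations:
    # Example issuer name; replace with your own ClusterIssuer.
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
```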
AWS ALB (Application Load Balancer)
For the AWS ALB Ingress Controller, configure your `values.yaml` as follows:
```yaml
ingress:
  enabled: true
  className: "alb"
  annotations:
    alb.ingress.kubernetes.io/scheme: internal # Use "internet-facing" for a public ALB
    alb.ingress.kubernetes.io/target-type: ip # Use "instance" if targeting EC2 instances
    alb.ingress.kubernetes.io/group.name: "midaz" # Group ALB resources under this name
    alb.ingress.kubernetes.io/healthcheck-path: "/healthz" # Health check path
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]' # Listen on HTTP and HTTPS
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: [] # TLS is managed by the ALB using ACM certificates
```
Traefik Ingress Controller
For Traefik, configure your `values.yaml` as follows:
```yaml
ingress:
  enabled: true
  className: "traefik"
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "web, websecure" # Entrypoints defined in Traefik
    traefik.ingress.kubernetes.io/router.tls: "true" # Enable TLS for this route
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls # Ensure this secret exists and contains the TLS certificate
      hosts:
        - midaz.example.com
```
Configuring observability
Midaz uses the Grafana OpenTelemetry LGTM stack to collect and visualize telemetry data such as traces and metrics.
You can access the Grafana dashboard using one of two options:
Option 1: Local access
To access the dashboard locally, run:
```shell
kubectl port-forward svc/midaz-grafana 3000:3000 -n midaz
```
Then open http://localhost:3000 to view the dashboard.
Option 2: Ingress access
To expose Grafana within your cluster or private network via DNS, enable and configure Ingress like this:
```yaml
grafana:
  enabled: true
  name: grafana
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
    hosts:
      - host: "midaz-ote.example.com"
        paths:
          - path: /
            pathType: Prefix
    tls: []
```
Disabling observability
You can disable the observability stack entirely by setting:
```yaml
grafana:
  enabled: false
```
Configuring dependencies
The Midaz Helm Chart has the following dependencies for the project's default installation. All dependencies are enabled by default.
Valkey
- Version: 2.4.7
- Repository: Bitnami
- Disable: `valkey.enabled = false`
If you have an existing Valkey or Redis instance, you can disable this dependency and configure Midaz Components to use your external instance, like this:
```yaml
onboarding:
  configmap:
    REDIS_HOST: { your-host }:{ your-host-port }
  secrets:
    REDIS_PASSWORD: { your-host-pass }
transaction:
  configmap:
    REDIS_HOST: { your-host }:{ your-host-port }
  secrets:
    REDIS_PASSWORD: { your-host-pass }
```
PostgreSQL
- Version: 16.3.0
- Repository: Bitnami
- Disable: `postgresql.enabled = false`
If you have an existing PostgreSQL instance, you can disable this dependency and configure Midaz Components to use your external PostgreSQL, like this:
```yaml
onboarding:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-host-user }
    DB_PORT: { your-host-port }
    ## DB Replication
    DB_REPLICA_HOST: { your-replication-host }
    DB_REPLICA_USER: { your-replication-host-user }
    DB_REPLICA_PORT: { your-replication-host-port }
  secrets:
    DB_PASSWORD: { your-host-pass }
    DB_REPLICA_PASSWORD: { your-replication-host-pass }
transaction:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-host-user }
    DB_PORT: { your-host-port }
    ## DB Replication
    DB_REPLICA_HOST: { your-replication-host }
    DB_REPLICA_USER: { your-replication-host-user }
    DB_REPLICA_PORT: { your-replication-host-port }
  secrets:
    DB_PASSWORD: { your-host-pass }
    DB_REPLICA_PASSWORD: { your-replication-host-pass }
```
MongoDB
- Version: 15.4.5
- Repository: Bitnami
- Disable: `mongodb.enabled = false`
If you have an existing MongoDB instance, you can disable this dependency and configure Midaz Components to use your external MongoDB, like this:
```yaml
onboarding:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-host-name }
    MONGO_USER: { your-host-user }
    MONGO_PORT: { your-host-port }
  secrets:
    MONGO_PASSWORD: { your-host-pass }
transaction:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-host-name }
    MONGO_USER: { your-host-user }
    MONGO_PORT: { your-host-port }
  secrets:
    MONGO_PASSWORD: { your-host-pass }
```
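For reference, the MONGO_* values above combine into a standard mongodb:// connection string. The helper below is an illustrative sketch of that shape, not code taken from the chart:

```python
def mongo_uri(host: str, name: str, user: str, password: str, port: int) -> str:
    """Assemble a standard mongodb:// URI from the MONGO_* values above.

    Illustrative only: Midaz components build their own connection string
    internally; this just shows how the individual settings fit together.
    """
    return f"mongodb://{user}:{password}@{host}:{port}/{name}"

print(mongo_uri("db.example.com", "midaz", "midaz-user", "s3cret", 27017))
# -> mongodb://midaz-user:s3cret@db.example.com:27017/midaz
```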
RabbitMQ
- Version: 16.0.0
- Repository: Bitnami
- Disable: `rabbitmq.enabled = false`
If you're using an external RabbitMQ instance, you need to load the required `load_definitions.json` file. Without these queues, exchanges, and bindings in place, Midaz won’t work as expected.
You can load the definitions in one of two ways:
Automatically
Enable the `externalRabbitmqDefinitions` flag in your `values.yaml` file to apply the default definitions automatically:
```yaml
global:
  # -- Enable or disable loading of default RabbitMQ definitions to an external host
  externalRabbitmqDefinitions:
    enabled: true
```
This creates a Kubernetes Job that loads the RabbitMQ definitions into your external instance.
Attention: This Job only runs during the first install of the chart. It’s triggered by a Helm post-install hook and won’t run again during upgrades or re-installs.
If you need to re-run it, delete the release and install it again.
Manually
If you prefer to apply the definitions yourself, use RabbitMQ’s HTTP API:
```shell
curl -u { your-host-user }:{ your-host-pass } -X POST -H "Content-Type: application/json" -d @load_definitions.json http://{ your-host }:{ your-host-port }/api/definitions
```
You’ll find the `load_definitions.json` file at `charts/midaz/files/rabbitmq/load_definitions.json`.
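Before posting the file, you can sanity-check that it contains the sections Midaz depends on. A minimal sketch in Python; the top-level keys checked (`queues`, `exchanges`, `bindings`) are the standard sections of a RabbitMQ definitions export:

```python
import json

def missing_sections(raw: str) -> list:
    """Return the standard RabbitMQ definition sections absent from a
    definitions-export JSON document (Midaz relies on all three)."""
    data = json.loads(raw)
    return [key for key in ("queues", "exchanges", "bindings") if key not in data]

# Minimal, hypothetical definitions document with all sections present:
sample = '{"queues": [], "exchanges": [], "bindings": []}'
print(missing_sections(sample))  # -> []
```

In practice you would pass the real file contents (for example, `missing_sections(open("load_definitions.json").read())`) and abort the upload if the returned list is non-empty.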
Use your own RabbitMQ
If you already have a RabbitMQ instance running, you can disable the built-in dependency and point Midaz components to your external setup:
```yaml
onboarding:
  configmap:
    RABBITMQ_HOST: { your-host }
    RABBITMQ_DEFAULT_USER: { your-host-user }
    RABBITMQ_PORT_HOST: { your-host-port }
    RABBITMQ_PORT_AMQP: { your-host-amqp-port }
  secrets:
    RABBITMQ_DEFAULT_PASS: { your-host-pass }
transaction:
  configmap:
    RABBITMQ_HOST: { your-host }
    RABBITMQ_DEFAULT_USER: { your-host-user }
    RABBITMQ_PORT_HOST: { your-host-port }
    RABBITMQ_PORT_AMQP: { your-host-amqp-port }
  secrets:
    RABBITMQ_DEFAULT_PASS: { your-host-pass }
```
Nginx Proxy Manager (Plugins UIs)
The NGINX Proxy Manager in this chart routes traffic to plugin UIs.
By default, it’s disabled; when enabled, all plugin UIs are accessible only through this proxy. To activate it, set `nginx.enabled: true` in your `values.yaml`.
- You can also configure ingress for it by setting `nginx.ingress.enabled: true`.
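Putting those two settings together, a minimal `values.yaml` fragment enabling the proxy and its ingress might look like this (any hostnames or annotations would follow the same ingress shape shown earlier):

```yaml
nginx:
  enabled: true
  ingress:
    enabled: true
```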
To enable the UI for a specific plugin, set `pluginsUi.enabled: true` in the Console service configuration and define the plugin settings, for example:
```yaml
console:
  pluginsUi:
    enabled: true
    plugins:
      plugin-crm-ui:
        enabled: true
        port: 8082
```
Note: To allow NGINX to serve the plugin UIs, the corresponding Helm charts must be installed with the UI enabled in the `midaz-plugins` namespace.
OTel Collector
The OTel (OpenTelemetry) Collector gathers metrics from Midaz components. It’s disabled by default. To enable it, set:
```yaml
otell:
  enabled: true
```
When enabled, it automatically collects and forwards telemetry data from all running Midaz services.
Midaz components
The Midaz system runs on distinct layers that work together, deployed as segregated workloads:
Onboarding
Parameter | Description | Default Value |
---|---|---|
onboarding.name | Service name. | "onboarding" |
onboarding.replicaCount | Number of replicas for the onboarding service. | 2 |
onboarding.image.repository | Repository for the onboarding service container image. | "lerianstudio/midaz-onboarding" |
onboarding.image.pullPolicy | Image pull policy. | "IfNotPresent" |
onboarding.image.tag | Image tag used for deployment. | "2.2.2" |
onboarding.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
onboarding.nameOverride | Overrides the default generated name by Helm. | "" |
onboarding.fullnameOverride | Overrides the full name generated by Helm. | "" |
onboarding.podAnnotations | Pod annotations for additional metadata. | {} |
onboarding.podSecurityContext | Security context applied at the pod level. | {} |
onboarding.securityContext.* | Defines security context settings for the container. | See values.yaml |
onboarding.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | true |
onboarding.pdb.minAvailable | Minimum number of available pods. | 1 |
onboarding.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
onboarding.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
onboarding.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
onboarding.service.type | Kubernetes service type. | "ClusterIP" |
onboarding.service.port | Port for the HTTP API. | 3000 |
onboarding.service.annotations | Annotations for the service. | {} |
onboarding.ingress.enabled | Specifies whether Ingress is enabled. | false |
onboarding.ingress.className | Ingress class name. | "" |
onboarding.ingress.annotations | Additional ingress annotations. | {} |
onboarding.ingress.hosts | Configured hosts for Ingress and associated paths. | "" |
onboarding.ingress.tls | TLS configurations for Ingress. | [] |
onboarding.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
onboarding.autoscaling.enabled | Specifies whether autoscaling is enabled. | true |
onboarding.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 2 |
onboarding.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
onboarding.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
onboarding.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
onboarding.nodeSelector | Node selectors for pod scheduling. | {} |
onboarding.tolerations | Tolerations for pod scheduling. | {} |
onboarding.affinity | Affinity rules for pod scheduling. | {} |
onboarding.configmap.* | Environment variables for the service. | See values.yaml |
onboarding.secrets.* | Secrets for the service. | See values.yaml |
onboarding.useExistingSecret | Use an existing secret instead of creating a new one. | false |
onboarding.existingSecretName | The name of the existing secret to use. | "" |
onboarding.extraEnvVars | A list of extra environment variables. | [] |
onboarding.serviceAccount.create | Specifies whether the service account should be created. | true |
onboarding.serviceAccount.annotations | Annotations for the service account. | {} |
onboarding.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |
Transaction
Parameter | Description | Default Value |
---|---|---|
transaction.name | Service name. | "transaction" |
transaction.replicaCount | Number of replicas for the transaction service. | 1 |
transaction.image.repository | Repository for the transaction service container image. | "lerianstudio/midaz-transaction" |
transaction.image.pullPolicy | Image pull policy. | "IfNotPresent" |
transaction.image.tag | Image tag used for deployment. | "2.2.2" |
transaction.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
transaction.nameOverride | Overrides the default generated name by Helm. | "" |
transaction.fullnameOverride | Overrides the full name generated by Helm. | "" |
transaction.podAnnotations | Pod annotations for additional metadata. | {} |
transaction.podSecurityContext | Security context for the pod. | {} |
transaction.securityContext.* | Defines security context settings for the container. | See values.yaml |
transaction.pdb.enabled | Enable or disable PodDisruptionBudget. | true |
transaction.pdb.minAvailable | Minimum number of available pods. | 2 |
transaction.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
transaction.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
transaction.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
transaction.service.type | Kubernetes service type. | "ClusterIP" |
transaction.service.port | Port for the HTTP API. | 3001 |
transaction.service.annotations | Annotations for the service. | {} |
transaction.ingress.enabled | Enable or disable ingress. | false |
transaction.ingress.className | Ingress class name. | "" |
transaction.ingress.annotations | Additional ingress annotations. | {} |
transaction.ingress.hosts | Configured hosts for ingress and associated paths. | [] |
transaction.ingress.tls | TLS configuration for ingress. | [] |
transaction.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
transaction.autoscaling.enabled | Enable or disable horizontal pod autoscaling. | true |
transaction.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 1 |
transaction.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
transaction.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
transaction.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
transaction.nodeSelector | Node selector for scheduling pods on specific nodes. | {} |
transaction.tolerations | Tolerations for scheduling on tainted nodes. | {} |
transaction.affinity | Affinity rules for pod scheduling. | {} |
transaction.configmap.* | Environment variables for the service. | See values.yaml |
transaction.secrets.* | Secrets for the service. | See values.yaml |
transaction.useExistingSecret | Use an existing secret instead of creating a new one. | false |
transaction.existingSecretName | The name of the existing secret to use. | "" |
transaction.extraEnvVars | A list of extra environment variables. | [] |
transaction.serviceAccount.create | Specifies whether a ServiceAccount should be created. | true |
transaction.serviceAccount.annotations | Annotations for the ServiceAccount. | {} |
transaction.serviceAccount.name | Name of the service account. | "" |
Console
Parameter | Description | Default Value |
---|---|---|
console.name | Service name. | "console" |
console.enabled | Enable or disable the console service. | true |
console.replicaCount | Number of replicas for the deployment. | 1 |
console.image.repository | Docker image repository for Console. | "lerianstudio/midaz-console" |
console.image.pullPolicy | Docker image pull policy. | "IfNotPresent" |
console.image.tag | Docker image tag used for deployment. | "2.2.1" |
console.imagePullSecrets | Secrets for pulling Docker images from a private registry. | [] |
console.nameOverride | Overrides the resource name. | "" |
console.fullnameOverride | Overrides the full resource name. | "" |
console.podAnnotations | Annotations for the pods. | {} |
console.podSecurityContext | Security context applied at the pod level. | {} |
console.securityContext.* | Defines security context settings for the container. | See values.yaml |
console.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | false |
console.pdb.minAvailable | Minimum number of available pods for PodDisruptionBudget. | 1 |
console.pdb.maxUnavailable | Maximum number of unavailable pods for PodDisruptionBudget. | 1 |
console.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
console.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
console.service.type | Kubernetes service type. | "ClusterIP" |
console.service.port | Service port. | 8081 |
console.service.annotations | Annotations for the service. | {} |
console.ingress.enabled | Specifies whether Ingress is enabled. | false |
console.ingress.className | Ingress class name. | "" |
console.ingress.annotations | Additional annotations for Ingress. | {} |
console.ingress.hosts | Configured hosts for Ingress and associated paths. | [] |
console.ingress.tls | TLS configurations for Ingress. | [] |
console.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
console.autoscaling.enabled | Specifies whether horizontal pod autoscaling is enabled. | true |
console.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 1 |
console.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 3 |
console.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
console.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
console.nodeSelector | Node selectors for pod scheduling. | {} |
console.tolerations | Tolerations for pod scheduling. | {} |
console.affinity | Affinity rules for pod scheduling. | {} |
console.configmap.* | Environment variables for the service. | See values.yaml |
console.secrets.* | Secrets for the service. | See values.yaml |
console.useExistingSecret | Use an existing secret instead of creating a new one. | false |
console.existingSecretName | The name of the existing secret to use. | "" |
console.extraEnvVars | A list of extra environment variables. | [] |
console.pluginsUi.enabled | Enable or disable the plugins UI proxy. | false |
console.pluginsUi.plugins.* | Configuration for each plugin UI. | See values.yaml |
console.serviceAccount.create | Specifies whether the service account should be created. | true |
console.serviceAccount.annotations | Annotations for the service account. | {} |
console.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |