Midaz Terraform Foundation

Midaz Terraform Foundation is a repository of ready-made Terraform examples that help you create the base infrastructure needed to run Midaz on any of the main cloud providers (AWS, GCP, or Azure), following each provider's best practices.

This base infrastructure includes:

  • Network (VPC, subnets)
  • DNS
  • Database
  • Redis/Valkey
  • Kubernetes cluster (EKS, GKE, or AKS)

🚧

Attention

The Midaz Terraform Foundation templates do not include MongoDB or RabbitMQ. That’s because these services aren’t consistently offered as managed services across all major cloud providers (AWS, GCP, Azure).


Why use it

Provisioning infrastructure shouldn’t be slow, inconsistent, or error-prone. The midaz-terraform-foundation helps you move fast while staying aligned with Lerian’s best practices for security, observability, and scalability. Here's how it compares to manual or ad-hoc setups:

Speed and standardization

| Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
| --- | --- | --- |
| Setup speed | Fast – provisions everything in minutes with a single apply | Slow – takes days to configure and test |
| Architecture standard | Standardized – follows Lerian's best practices | Unpredictable – may be inconsistent |
| Reusability | High – supports multiple environments with minimal changes | Low – hard to reuse across projects |

Security and observability

| Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
| --- | --- | --- |
| Security by default | Yes – secure by design (isolated VPCs, IAM, secrets, etc.) | No – depends on the team, increasing exposure risk |
| Built-in observability | Built-in – integrates with Prometheus, Grafana, and more | Manual – requires separate setup, often skipped |
| Production-ready? | Yes – high availability and autoscaling out of the box | Uncertain – needs extra effort to harden |

Maintenance and support

| Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
| --- | --- | --- |
| Maintainability | Easy – modular and versioned for painless updates | Hard – scripts break easily |
| Lerian support | Included – verified and supported by Lerian | None – not guaranteed |
| Estimated deployment time | 1 day – including validation | 1–2 weeks – with higher operational risk |

👍

Looking for speed, reliability, and long-term support?

midaz-terraform-foundation helps you deploy faster, avoid common pitfalls, and scale with confidence — all backed by Lerian’s engineering standards.


What you’ll need

Before getting started, make sure you have:

  • Terraform v1.0.0 or higher
  • A cloud provider account (AWS, GCP, or Azure)
  • A storage bucket for Terraform state files
  • The CLI tool for your cloud provider:
    • aws for AWS
    • gcloud for GCP
    • az for Azure

CI/CD Integration

This repository provides Terraform examples for deploying foundation infrastructure. It does not include a CI/CD pipeline. You’ll need to create one based on your project’s needs.

Already running a Terraform CI/CD pipeline? Here’s what to do:

  1. Skip the deployment script; it’s intended for local use only.
  2. Copy the relevant example configs into your private Infrastructure as Code repo.
  3. Integrate the Terraform configs into your pipeline as needed.
  4. Use your CI/CD platform’s built-in secret management to handle credentials securely.
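
For example, if you run Terraform in GitHub Actions, the plan job for a single component might look roughly like the sketch below. The workflow name, directory path, variable file, and secret names are illustrative assumptions, not part of this repository:

# Illustrative GitHub Actions job for one component; adapt paths, triggers, and auth to your setup
name: terraform-plan
on:
  pull_request:

jobs:
  plan:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infrastructure/aws/vpc   # hypothetical path in your private IaC repo
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -var-file=midaz.tfvars
        env:
          # Pull credentials from your CI/CD platform's secret store rather than hard-coding them
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}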

Project structure


The repository keeps a separate directory tree for each cloud provider, with every infrastructure component organized as its own module. This modular, controlled approach gives you the flexibility to deploy only what you need, or go all in with the entire foundation if you prefer.

.
└── examples/
    ├── aws/
    │   ├── vpc/
    │   ├── route53/
    │   ├── rds/
    │   ├── valkey/
    │   └── eks/
    ├── gcp/
    │   ├── vpc/
    │   ├── cloud-dns/
    │   ├── cloud-sql/
    │   ├── valkey/
    │   └── gke/
    └── azure/
        ├── network/
        ├── dns/
        ├── database/
        ├── redis/
        └── aks/

Deployment order matters

To avoid errors and ensure everything connects correctly, we recommend deploying components in this sequence:

  1. VPC / Network
  2. DNS
  3. Database
  4. Redis/Valkey
  5. Kubernetes cluster
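
For example, on AWS that order maps onto the example directories like this. The loop is only a rough local sketch; it assumes each component's backend.tf and midaz.tfvars are already filled in, and the deploy script described later automates the same sequence:

# Rough sketch: apply the AWS examples in dependency order
for component in vpc route53 rds valkey eks; do
  (cd examples/aws/"$component" && terraform init && terraform apply -var-file=midaz.tfvars) || break
done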

Creating the state storage


These templates store Terraform state in a remote backend, so before you can use them you need to create a storage bucket for the state files.

AWS

Don't forget to replace REGION and UNIQUE_BUCKET_NAME with your own values.

  1. Create an S3 bucket:
aws s3api create-bucket \
    --bucket UNIQUE_BUCKET_NAME \
    --region REGION \
    --create-bucket-configuration LocationConstraint=REGION
  2. Enable versioning:
aws s3api put-bucket-versioning \
    --bucket UNIQUE_BUCKET_NAME \
    --versioning-configuration Status=Enabled
  3. Enable encryption:
aws s3api put-bucket-encryption \
    --bucket UNIQUE_BUCKET_NAME \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
  4. Block public access:
aws s3api put-public-access-block \
    --bucket UNIQUE_BUCKET_NAME \
    --public-access-block-configuration \
    '{"BlockPublicAcls":true,"IgnorePublicAcls":true,"BlockPublicPolicy":true,"RestrictPublicBuckets":true}'
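
Once the bucket exists, each component's backend.tf points Terraform at it. The exact keys and state paths used by the examples may differ from this sketch; it only illustrates a typical S3 backend block:

# Illustrative S3 backend block - match the placeholders in the repo's backend.tf files
terraform {
  backend "s3" {
    bucket  = "UNIQUE_BUCKET_NAME"
    key     = "aws/vpc/terraform.tfstate"
    region  = "REGION"
    encrypt = true
  }
}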

Google Cloud Platform

  1. Create a GCS bucket:
gsutil mb -l us-central1 gs://your-terraform-state-bucket
  2. Enable versioning:
gsutil versioning set on gs://your-terraform-state-bucket
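
The corresponding GCS backend block looks roughly like this (the prefix is an illustrative state path; use whatever the repo's backend.tf expects):

# Illustrative GCS backend block
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "gcp/vpc"
  }
}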

Azure

  1. Create a resource group:
az group create --name terraform-state-rg --location eastus
  2. Create a storage account:
az storage account create --name tfstate$RANDOM --resource-group terraform-state-rg --sku Standard_LRS
  3. Create a container:
az storage container create --name terraform-state --account-name <storage-account-name>
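
And the matching azurerm backend block, again only as a sketch (the names must match the resources you just created and the repo's backend.tf placeholders):

# Illustrative azurerm backend block
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "<storage-account-name>"
    container_name       = "terraform-state"
    key                  = "azure/network/terraform.tfstate"
  }
}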

Configuration requirements

  • Before you deploy the infrastructure, make sure you've created and configured the variables file for each cloud component:
  1. Copy the example file:
cd examples/<provider>/<component>
cp midaz.tfvars-example midaz.tfvars
  2. Replace all placeholders in the midaz.tfvars file with your actual values. This file holds the key configuration for your infrastructure setup.
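
The variable names below are purely hypothetical; the real ones are defined in each component's midaz.tfvars-example. The point is simply that no placeholder should survive into your midaz.tfvars:

# Hypothetical midaz.tfvars values - use the variable names from the component's example file
region   = "us-east-1"
vpc_cidr = "10.0.0.0/16"
tags = {
  project     = "midaz"
  environment = "production"
}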

Production credentials and deployment


When you deploy infrastructure to production, handling credentials with care is essential for security. Here's how to manage them safely:

Cloud provider authentication

When using the deploy script locally, we strongly encourage using your cloud provider's CLI authentication tools instead of raw credentials. This approach is significantly more secure, as it handles credential rotation, MFA, and token refresh for you.

Why adopt this approach?

  • Tokens refresh automatically
  • MFA and SSO integration out of the box
  • Credentials are rotated and stored securely
  • Full audit trail for authentication events

AWS

Use the AWS CLI to sign in with SSO or assume a role.

aws sso login --profile your-profile

or

aws sts assume-role --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME --role-session-name terraform
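
If you assume a role directly, Terraform still needs the returned temporary credentials in its environment. One common way to export them (this uses jq and is only an illustrative helper, not part of the repository):

# Export the temporary credentials returned by assume-role (requires jq)
CREDS=$(aws sts assume-role --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME --role-session-name terraform)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.Credentials.SessionToken')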

GCP

Use gcloud authentication.

gcloud auth application-default login

For service accounts, use the following command:

gcloud auth activate-service-account --key-file=path/to/service-account.json

Azure

Use the Azure CLI.

az login

For service principals, use the following command:

az login --service-principal --username <app-id> --password <password-or-cert> --tenant <tenant-id>

Credential Management Best Practices

Stay safe and compliant by following your cloud provider’s official guidance:

Recommended practices

  • Rotate credentials on a regular schedule
  • Use role-based access control (RBAC) wherever possible
  • Require MFA for user accounts
  • Prefer short-lived, temporary credentials
  • Monitor and audit how credentials are used
  • Never commit credentials to version control

Using the deploy script

The deploy.sh script handles the setup sequence, highlights issues along the way, and ensures that each component is deployed in the correct order.

What it does

  • Allows you to pick your cloud provider (AWS, Azure, or GCP)
  • Offers options to deploy or destroy the stack
  • Checks that all backend configuration placeholders are set correctly
  • Runs Terraform commands in the right order for each component
  • Outputs clear, color-coded logs so you know what’s happening at every step

How to use it

  1. Ensure that all prerequisites are complete and that your remote state bucket has been created.
  2. Fill in all the placeholders in the backend.tf files.
  3. Make the script executable:
chmod +x deploy.sh
  4. Run the script:
./deploy.sh
  5. When prompted, select your cloud provider.
  6. The script will automatically:
    1. Check the remaining placeholders.
    2. Run terraform init, plan, and apply for each component.
    3. Deploy in the correct order and stop if something fails.

Error handling

We built the script to fail quickly and provide an explanation. If something goes wrong, it will:

  • Stop immediately if it finds placeholders you forgot to fill in
  • Exit if any Terraform command fails
  • Show you exactly which component failed and at what step

Installing Midaz


After deploying the foundation infrastructure, you can install Midaz using Helm. For more information, refer to the Deploying using Helm page.

Prerequisites

  • Kubernetes cluster is up and running (EKS, GKE, or AKS).
  • kubectl configured to access the cluster (see the commands after this list).
  • Helm v3.x installed.
  • Access to the Midaz Helm repo.
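
Pointing kubectl at the new cluster is provider-specific; the usual commands are below (cluster names, regions, and resource groups are placeholders for your own values):

# Configure kubectl for the cluster created by the foundation
aws eks update-kubeconfig --name <cluster-name> --region <region>                # EKS
gcloud container clusters get-credentials <cluster-name> --region <region>      # GKE
az aks get-credentials --name <cluster-name> --resource-group <resource-group>  # AKS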

Install steps

  1. Add the Midaz Helm repository:
helm repo add midaz https://lerianstudio.github.io/helm
helm repo update
  2. Create a values file (values.yaml) with your configuration:
# Example values.yaml
# Disable default dependencies
valkey:
  enabled: false

postgresql:
  enabled: false

## Configure external PostgreSQL
onboarding:
  configmap:
    DB_HOST: "postgresql.midaz.internal."
    DB_USER: "midaz"
    DB_PORT: "5432"
    DB_REPLICA_HOST: "postgresql-replica.midaz.internal."
    DB_REPLICA_USER: "midaz"
    DB_REPLICA_PORT: "5432"
    REDIS_HOST: "valkey.midaz.internal"
    REDIS_PORT: "6379"
  secrets:
    DB_PASSWORD: "<your-db-password>"
    DB_REPLICA_PASSWORD: "<your-replica-db-password>"
    REDIS_PASSWORD: "<your-redis-password>"

transaction:
  configmap:
    DB_HOST: "postgresql.midaz.internal."
    DB_USER: "midaz"
    DB_PORT: "5432"
    DB_REPLICA_HOST: "postgresql-replica.midaz.internal."
    DB_REPLICA_USER: "midaz"
    DB_REPLICA_PORT: "5432"
    REDIS_HOST: "valkey.midaz.internal"
    REDIS_PORT: "6379"
  secrets:
    DB_PASSWORD: "<your-db-password>"
    DB_REPLICA_PASSWORD: "<your-replica-db-password>"
    REDIS_PASSWORD: "<your-redis-password>"
  3. Install Midaz:
helm install midaz midaz/midaz -f values.yaml
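
A couple of standard checks to confirm the release came up (add -n <namespace> if you installed into a specific namespace):

# Verify the release and its pods
helm status midaz
kubectl get pods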

For detailed configuration options and advanced setup, please refer to the Midaz Helm Repository.


Security tips


Using the cloud brings fantastic opportunities, but it also comes with important responsibilities. To help keep your Midaz infrastructure secure, we recommend:

  • Always use private Kubernetes clusters to limit public exposure.
  • Access the Kubernetes API via VPN instead of allowing public access.
  • Set up and enforce RBAC (Role-Based Access Control) to manage user permissions effectively.
  • Store all secrets in the cloud provider’s secret management service.
  • Give service accounts only the permissions they truly need.

Contributing


Before making any changes, you’ll need to set up Git hooks. This helps ensure every commit follows our standards and passes required checks.

  1. Install the Git hooks:
make hooks
  2. Create a new feature branch:
git checkout -b feature/your-feature
  3. Make your changes and commit using Conventional Commits (see the example after this list).
  4. Open a pull request targeting the develop branch.
  5. Once tested and approved, your changes will be merged into main.
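
For example, a Conventional Commits message might look like this (the type, scope, and description are purely illustrative):

# Illustrative commit following the Conventional Commits format
git commit -m "feat(aws): add optional flow logs to the VPC example"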

Check out our Contributing Guide to learn more about how we work together and what we expect from contributors.


License


Midaz is an open-source project licensed under the Apache 2.0 License.


Need help?


  • Check the README inside each component folder
  • Search existing issues
  • Open a new issue if needed