Midaz Terraform Foundation
Midaz Terraform Foundation is a repository of ready-made Terraform examples that help you create the base infrastructure needed to run Midaz, following best practices for the major cloud providers: AWS, GCP, and Azure.
This base infrastructure includes:
- Network (VPC, subnets)
- DNS
- Database
- Redis/Valkey
- Kubernetes cluster (EKS, GKE, or AKS)
Attention
The Midaz Terraform Foundation templates do not include MongoDB or RabbitMQ. That’s because these services aren’t consistently offered as managed services across all major cloud providers (AWS, GCP, Azure).
Why use it
Provisioning infrastructure shouldn’t be slow, inconsistent, or error-prone. The midaz-terraform-foundation helps you move fast while staying aligned with Lerian’s best practices for security, observability, and scalability. Here's how it compares to manual or ad-hoc setups:
Speed and standardization
Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
---|---|---|
Setup speed | Fast – provisions everything in minutes with a single apply | Slow – takes days to configure and test |
Architecture standard | Standardized – follows Lerian's best practices | Unpredictable – may be inconsistent |
Reusability | High – supports multiple environments with minimal changes | Low – hard to reuse across projects |
Security and observability
Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
---|---|---|
Security by default | Yes – secure by design (isolated VPCs, IAM, secrets, etc.) | No – depends on the team, increasing exposure risk |
Built-in observability | Built-in – integrates with Prometheus, Grafana, and more | Manual – requires separate setup, often skipped |
Production-ready? | Yes – high availability and autoscaling out of the box | Uncertain – needs extra effort to harden |
Maintenance and support
Criteria | With Midaz Terraform | Manual Setup / Ad-hoc Scripts |
---|---|---|
Maintainability | Easy – modular and versioned for painless updates | Hard – scripts break easily |
Lerian support | Included – verified and supported by Lerian | None – not guaranteed |
Estimated deployment time | 1 day – including validation | 1–2 weeks – with higher operational risk |
Looking for speed, reliability, and long-term support? midaz-terraform-foundation helps you deploy faster, avoid common pitfalls, and scale with confidence — all backed by Lerian’s engineering standards.
What you’ll need
Before getting started, make sure you have:
- Terraform v1.0.0 or higher
- A cloud provider account (AWS, GCP, or Azure)
- A storage bucket for Terraform state files
- The CLI tool for your cloud provider: aws for AWS, gcloud for GCP, or az for Azure
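A quick way to confirm the tooling is ready; the AWS commands below are just one example, swap in gcloud or az for the other providers:
terraform version               # should report v1.0.0 or higher
aws --version                   # or: gcloud --version / az version
aws sts get-caller-identity     # confirms the AWS CLI is authenticated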
CI/CD Integration
This repository provides Terraform examples for deploying foundation infrastructure. It does not include a CI/CD pipeline. You’ll need to create one based on your project’s needs.
Already running a Terraform CI/CD pipeline? Here’s what to do:
- Skip the deployment script; it’s intended for local use only.
- Copy the relevant example configs into your private Infrastructure as Code repo.
- Integrate the Terraform configs into your pipeline as needed.
- Use your CI/CD platform’s built-in secret management to handle credentials securely.
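The exact pipeline shape depends on your platform, but as a rough sketch, a job for a single component usually boils down to something like the following, with credentials injected from the platform's secret store rather than stored in the repo (directory and variable file names follow the examples in this repository):
cd examples/aws/vpc
terraform init
terraform plan -var-file=midaz.tfvars -out=tfplan
terraform apply tfplan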
Project structure
The repository keeps a separate directory structure for each cloud provider, and every infrastructure component is organized as its own module. This gives you the flexibility to deploy only what you need, or go all in with the entire foundation if you prefer.
.
├── examples/
│   ├── aws/
│   │   ├── vpc/
│   │   ├── route53/
│   │   ├── rds/
│   │   ├── valkey/
│   │   └── eks/
│   ├── gcp/
│   │   ├── vpc/
│   │   ├── cloud-dns/
│   │   ├── cloud-sql/
│   │   ├── valkey/
│   │   └── gke/
│   └── azure/
│       ├── network/
│       ├── dns/
│       ├── database/
│       ├── redis/
│       └── aks/
Deployment order matters
To avoid errors and ensure everything connects correctly, we recommend deploying components in this sequence:
- VPC / Network
- DNS
- Database
- Redis/Valkey
- Kubernetes cluster
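For AWS, for example, that order maps onto the example directories like this, assuming the backend and midaz.tfvars for each component are already configured (a minimal sketch; the deploy script described below automates the same sequence):
for component in vpc route53 rds valkey eks; do
  (cd "examples/aws/$component" && terraform init && terraform apply -var-file=midaz.tfvars)
done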
Creating the state storage
These templates use a remote backend to store Terraform state, so before you apply any of them you need to create a storage bucket for the state files.
AWS
Don't forget to replace REGION and UNIQUE_BUCKET_NAME with your own values.
- Create an S3 bucket:
aws s3api create-bucket \
--bucket UNIQUE_BUCKET_NAME \
--region REGION \
--create-bucket-configuration LocationConstraint=REGION
- Enable versioning:
aws s3api put-bucket-versioning \
--bucket UNIQUE_BUCKET_NAME \
--versioning-configuration Status=Enabled
- Enable encryption:
aws s3api put-bucket-encryption \
--bucket UNIQUE_BUCKET_NAME \
--server-side-encryption-configuration \
'{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
- Block public access:
aws s3api put-public-access-block \
--bucket UNIQUE_BUCKET_NAME \
--public-access-block-configuration \
'{"BlockPublicAcls":true,"IgnorePublicAcls":true,"BlockPublicPolicy":true,"RestrictPublicBuckets":true}'
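With the bucket in place, point each AWS component's backend.tf at it. A minimal sketch (the key path below is just an example; use a distinct key per component):
terraform {
  backend "s3" {
    bucket  = "UNIQUE_BUCKET_NAME"
    key     = "midaz/vpc/terraform.tfstate"
    region  = "REGION"
    encrypt = true
  }
}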
Google Cloud Platform
- Create a GCS bucket:
gsutil mb -l us-central1 gs://your-terraform-state-bucket
- Enable versioning:
gsutil versioning set on gs://your-terraform-state-bucket
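The matching backend configuration for GCP components looks roughly like this (the prefix is just an example; use one per component):
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "midaz/vpc"
  }
}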
Azure
- Create a resource group:
az group create --name terraform-state-rg --location eastus
- Create a storage account:
az storage account create --name tfstate$RANDOM --resource-group terraform-state-rg --sku Standard_LRS
- Create a container:
az storage container create --name terraform-state --account-name <storage-account-name>
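The corresponding azurerm backend block would look something like this (the key is just an example; use one per component):
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "<storage-account-name>"
    container_name       = "terraform-state"
    key                  = "midaz/network/terraform.tfstate"
  }
}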
Configuration requirements
Before you deploy the infrastructure, make sure you’ve created and configured the variables file for each cloud component:
- Copy the example file:
cd examples/<provider>/<component>
cp midaz.tfvars-example midaz.tfvars
- Replace all placeholders in the midaz.tfvars file with your actual values. This file holds the key configuration for your infrastructure setup.
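The variables differ per component, so treat the following purely as an illustration of what a filled-in midaz.tfvars might look like; the real variable names live in each component's midaz.tfvars-example file:
# Illustrative only: check the component's midaz.tfvars-example for the actual variables
region      = "us-east-1"
environment = "production"
vpc_cidr    = "10.0.0.0/16"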
Production credentials and deployment
When it comes to deploying infrastructure in production environments, managing credentials with care is crucial for maintaining security. Here's a guide on how to handle credentials securely:
Cloud provider authentication
When using the deploy script locally, we highly encourage utilizing cloud provider CLI authentication tools instead of raw credentials. This method is significantly more secure, as it automatically manages credential rotation, MFA, and token refresh for you!
Why adopt this approach?
- Tokens refresh automatically
- MFA and SSO integration out of the box
- Credentials are rotated and stored securely
- Full audit trail for authentication events
AWS
Use AWS CLI to assume a role.
aws sso login --profile your-profile
or
aws sts assume-role --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME --role-session-name terraform
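If you go the assume-role route, a common pattern is to export the temporary credentials it returns before running Terraform (this sketch assumes jq is installed):
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
  --role-session-name terraform \
  --query 'Credentials' --output json)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.SessionToken')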
GCP
Use gcloud authentication.
gcloud auth application-default login
For service accounts, use the following code:
gcloud auth activate-service-account --key-file=path/to/service-account.json
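If you prefer not to activate the account in gcloud, you can also point Terraform's Google provider at the key file through the standard environment variable:
export GOOGLE_APPLICATION_CREDENTIALS=path/to/service-account.json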
Azure
Use Azure CLI.
az login
For service principals, use the following code:
az login --service-principal --username <app-id> --password <password-or-certificate> --tenant <tenant-id>
Credential Management Best Practices
Stay safe and compliant by following your cloud provider’s official guidance:
- AWS: Managing AWS access keys
- GCP: Managing service account keys
- Azure: Identity management best practices
Recommended practices
- Rotate credentials on a regular schedule
- Use role-based access control (RBAC) wherever possible
- Require MFA for user accounts
- Prefer short-lived, temporary credentials
- Monitor and audit how credentials are used
- Never commit credentials to version control
Using the deploy script
The deploy.sh script handles the setup sequence, highlights issues along the way, and ensures that each component is deployed in the correct order.
What it does
- Allows you to pick your cloud provider (AWS, Azure, or GCP)
- Offers options to deploy or destroy the stack
- Checks that all backend configuration placeholders are set correctly
- Runs Terraform commands in the right order for each component
- Outputs clear, color-coded logs so you know what’s happening at every step
How to use it
- Ensure that all prerequisites are complete and that your remote state bucket has been created.
- Fill in all the placeholders in the backend.tf files.
- Make the script executable:
chmod +x deploy.sh
- Run the script:
./deploy.sh
- When prompted, select your cloud provider.
- The script will automatically:
  - Check the remaining placeholders.
  - Run terraform init, plan, and apply for each component.
  - Deploy in the correct order and stop if something fails.
Error handling
We built the script to fail quickly and provide an explanation. If something goes wrong, it will:
- Stop immediately if it finds placeholders you forgot to fill in
- Exit if any Terraform command fails
- Show you exactly which component failed and at what step
Installing Midaz
After deploying the foundation infrastructure, you can install Midaz using Helm. For more information, refer to the Deploying using Helm page.
Prerequisites
- Kubernetes cluster is up and running (EKS, GKE, or AKS).
- kubectl configured to access the cluster.
- Helm v3.x installed.
- Access to the Midaz Helm repo.
Install steps
- Add the Midaz Helm repository:
helm repo add midaz https://lerianstudio.github.io/helm
helm repo update
- Create a values file (values.yaml) with your configuration:
# Example values.yaml
# Disable default dependencies
valkey:
  enabled: false
postgresql:
  enabled: false
## Configure external PostgreSQL
onboarding:
  configmap:
    DB_HOST: "postgresql.midaz.internal."
    DB_USER: "midaz"
    DB_PORT: "5432"
    DB_REPLICA_HOST: "postgresql-replica.midaz.internal."
    DB_REPLICA_USER: "midaz"
    DB_REPLICA_PORT: "5432"
    REDIS_HOST: "valkey.midaz.internal"
    REDIS_PORT: "6379"
  secrets:
    DB_PASSWORD: "<your-db-password>"
    DB_REPLICA_PASSWORD: "<your-replica-db-password>"
    REDIS_PASSWORD: "<your-redis-password>"
transaction:
  configmap:
    DB_HOST: "postgresql.midaz.internal."
    DB_USER: "midaz"
    DB_PORT: "5432"
    DB_REPLICA_HOST: "postgresql-replica.midaz.internal."
    DB_REPLICA_USER: "midaz"
    DB_REPLICA_PORT: "5432"
    REDIS_HOST: "valkey.midaz.internal"
    REDIS_PORT: "6379"
  secrets:
    DB_PASSWORD: "<your-db-password>"
    DB_REPLICA_PASSWORD: "<your-replica-db-password>"
    REDIS_PASSWORD: "<your-redis-password>"
- Install Midaz:
helm install midaz midaz/midaz -f values.yaml
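To confirm the release came up, a quick check (add --namespace/-n flags if you installed into a dedicated namespace):
helm status midaz
kubectl get pods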
For detailed configuration options and advanced setup, please refer to the Midaz Helm Repository.
Security tips
Using the cloud brings fantastic opportunities, but it also comes with important responsibilities. To help keep your Midaz infrastructure secure, we recommend:
- Always use private Kubernetes clusters to limit public exposure.
- Access the Kubernetes API via VPN instead of allowing public access.
- Set up and enforce RBAC (Role-Based Access Control) to manage user permissions effectively.
- Store all secrets in the cloud provider’s secret management service.
- Give service accounts only the permissions they truly need.
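For the RBAC point, a minimal read-only role created with kubectl might look like this (the namespace, role name, and user here are placeholders for illustration):
kubectl create role midaz-viewer --namespace midaz \
  --verb=get,list,watch --resource=pods,services,deployments
kubectl create rolebinding midaz-viewer-binding --namespace midaz \
  --role=midaz-viewer --user=ops@example.com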
Contributing
Before making any changes, you’ll need to set up Git hooks. This helps ensure every commit follows our standards and passes required checks.
- Install the Git hooks:
make hooks
- Create a new feature branch:
git checkout -b feature/your-feature
- Make your changes and commit using Conventional Commits
- Open a pull request targeting the develop branch.
- Once tested and approved, your changes will be merged into main.
Check out our Contributing Guide to learn more about how we work together and what we expect from contributors.
License
Midaz is an open-source project licensed under the Apache 2.0 License.
Need help?
- Check the README inside each component folder
- Search existing issues
- Open a new issue if needed