Observability is how we understand what’s happening inside a system by analyzing its outputs: metrics, logs, and traces. This page explains how telemetry flows through Midaz, the tools that power it, and how you can connect your own systems to monitor performance, troubleshoot issues, and ensure operational excellence.

Who configures what?


To avoid confusion, here’s a quick split of responsibilities:

Client side

On your infrastructure, the main configuration for observability lives in the components/infra/grafana/otelcol-config.yaml file. In this file, you define the collector’s behavior:
  • Processors: batching, memory limits, filtering, obfuscation, sampling, etc.
  • Exporters: for example, dual routing to Prometheus
  • API key authentication secrets
After editing this file, you must restart the stack with make down && make up for changes to take effect.
This setup ensures telemetry is processed efficiently, secured properly, and routed to the right destinations.
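For orientation, the file follows the standard OpenTelemetry Collector layout: receivers, processors, and exporters, tied together by pipelines in the service section. The sketch below is illustrative only, not your exact file; the exporter name and endpoint mirror the examples later on this page, and your receivers and pipelines may differ.
# Illustrative shape of otelcol-config.yaml (not a complete file)
receivers:
  otlp:                          # applications send telemetry via OTLP (gRPC/HTTP)
    protocols:
      grpc: {}
      http: {}
processors:
  batch: {}                      # the processors described later on this page go here
exporters:
  otlphttp/server:               # forwards sanitized telemetry to Lerian's Central Collector
    endpoint: "https://telemetry.lerian.io:443"
    headers:
      x-api-key: "${OTEL_API_KEY}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/server]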

Lerian side

On Lerian’s managed infrastructure, the observability stack is centrally configured and operated. This includes:
  • Central Collector
  • Prometheus
  • Loki
  • Tempo
  • Grafana
These components are preconfigured and maintained by Lerian. You don’t edit them directly.
This ensures consistency across environments and removes the need for local maintenance on your side.

How the data flows


Telemetry data originates in your application and flows through a Client Collector, powered by OpenTelemetry. Running in your environment, this collector enriches the data and securely forwards it to a Central Collector managed by Lerian. From there, it’s routed to three specialized backends: Prometheus for metrics, Loki for logs, and Tempo for traces. Grafana sits on top of everything, giving you a unified view of all telemetry signals. This flow ensures observability at scale, built on OpenTelemetry standards for portability and consistency.

Stack components


Together, these components form a complete observability pipeline: flexible on your side, consistent and secure on Lerian’s side, and fully based on OpenTelemetry standards.

Client Collector

The Client Collector is a lightweight OpenTelemetry Collector that runs close to your application, either as a DaemonSet or a Deployment. It enriches telemetry with Kubernetes metadata and your tenant identifier (client_id), then routes the data to the Central Collector. It matters because it reduces load on the central pipeline, enables source-level filtering, and attaches crucial metadata such as k8s.pod.name. Installation is managed via Helm and Terraform, making it easy to integrate into your infrastructure.

Central Collector

The Central Collector is a centralized OpenTelemetry Collector deployment that receives telemetry from all clients. It performs global processing, enforces multi-tenancy, and exports signals to the appropriate storage backends.
The Central Collector is fully managed by Lerian. You don’t configure or modify it directly.
This setup ensures consistency across tenants and guarantees that telemetry data is routed securely and efficiently to its final destinations.

Prometheus

Prometheus is optimized for storing and querying numerical time-series data. The Central Collector pushes metrics using remote_write.
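For context only, a remote_write push from an OpenTelemetry Collector is typically done with the prometheusremotewrite exporter, roughly as sketched below. The endpoint shown is hypothetical; the real configuration lives in Lerian’s managed Central Collector and is not something you edit.
exporters:
  prometheusremotewrite:
    endpoint: "https://prometheus.internal.example/api/v1/write"   # hypothetical write endpoint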

Loki

Loki stores logs using label-based indexing, making it fast and cost-effective. Logs are sent from the Central Collector to the loki-write service.

Tempo

Tempo stores full distributed traces and integrates tightly with Prometheus and Loki through Grafana.

Grafana

Grafana is your single pane of glass. It connects to Prometheus, Loki, and Tempo, enabling you to correlate metrics, logs, and traces in one place.
You can pivot between metrics, logs, and traces directly inside Grafana to speed up troubleshooting.

Embedded Collector


You can enable the Client Collector as a dependency of your Midaz application with a single configuration flag:
otel-collector-lerian:
  enabled: true
This automatically installs a DaemonSet and configures your application to export telemetry to it. The required environment variables and secrets are injected via Helm, so you don’t need to manage them manually.
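The exact variables are defined by the Helm chart, but the injection generally follows the standard OpenTelemetry exporter settings. The snippet below is an illustrative sketch; the service name and port are assumptions, not values you need to set yourself.
# Illustrative example of the environment the chart wires into application pods
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP          # IP of the node running the pod
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(HOST_IP):4317"       # node-local Client Collector (DaemonSet)
  - name: OTEL_SERVICE_NAME
    value: "midaz-onboarding"             # hypothetical service name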

Editing the Client Collector


When you need to customize behavior (obfuscation, filtering, sampling, etc.), you will:
  1. Edit components/infra/grafana/otelcol-config.yaml.
  2. Add or adjust the processors or exporters blocks.
  3. Restart the stack:
    make down
    make up
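For example, to strip an additional attribute you consider sensitive, step 2 could add a block like the one below (the processor and attribute names are hypothetical). If the processor is new rather than an adjustment to an existing one, also list it in the relevant pipeline under service:; see the pipeline sketch at the end of the processor list below.
processors:
  attributes/drop_internal_notes:         # hypothetical processor name
    actions:
      - key: internal.notes               # hypothetical attribute to remove
        action: delete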
    

Client Collector Processors


In the OpenTelemetry Collector, processors are the core of data manipulation. They run sequentially to enrich, filter, sample, and transform telemetry data before exporting it to the backends. Below is the list of processors configured in the Lerian Client Collector, their purpose, and how to configure them.
Where to configure: add each block under processors: in otelcol-config.yaml.
1. batch
  • What it is: Groups multiple telemetry signals (metrics, logs, or traces) into batches before sending them to the next stage.
  • Why it matters: Improves compression efficiency, reduces network requests, and enhances overall pipeline performance.
  • Configuration:
processors:
  batch: {}
2. memory_limiter
  • What it is: Monitors the collector’s memory usage and drops data if it approaches a defined threshold.
  • Why it matters: Prevents the collector from being OOMKilled by Kubernetes.
  • Configuration:
processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
3. spanmetrics
  • What it is: Generates metrics directly from trace data.
  • Why it matters: Produces “RED” metrics (Rate, Errors, Duration) automatically.
  • Configuration:
processors:
  spanmetrics:
    metrics_exporter: prometheus
    dimensions:
      - name: http.method
      - name: http.status_code
      - name: service.name
      - name: client.id
4. transform/remove_sensitive_attributes
  • What it is: Removes sensitive span attributes using regex.
  • Why it matters: Keeps identifiers but strips headers, bodies, or other sensitive request data.
  • Configuration:
processors:
  transform/remove_sensitive_attributes:
    trace_statements:
      - context: span
        statements:
          - delete_matching_keys(attributes, "^app\\.request\\.(?!request_id$).*")
5. tail_sampling
  • What it is: A sampling strategy applied after spans are received.
  • Why it matters: Keeps only high-value traces (errors, specific clients) and reduces storage costs.
  • Configuration:
processors:
  tail_sampling:
    policies:
      - name: keep_firmino_traces_policy
      - name: http_server_errors_policy
      - name: drop_all_other_traces_policy
6. filter/drop_node_metrics
  • What it is: Filters out node-level metrics.
  • Why it matters: Reduces noise and focuses on app-level telemetry.
  • Configuration:
processors:
  filter/drop_node_metrics:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - '^k8s\.node\..*$'
7. filter/include_midaz_namespaces
  • What it is: Keeps only metrics from midaz and midaz-plugins.
  • Why it matters: Eliminates irrelevant Kubernetes workloads.
  • Configuration:
processors:
  filter/include_midaz_namespaces:
    metrics:
      include:
        match_type: regexp
        resource_attributes:
          - key: k8s.namespace.name
            value: '^(midaz|midaz-plugins)$'
8. k8sattributes
  • What it is: Adds Kubernetes metadata to telemetry.
  • Why it matters: Enables richer context in Grafana queries.
  • Configuration:
processors:
  k8sattributes:
    auth_type: "serviceAccount"
    passthrough: false
    extract:
      metadata:
        - k8s.pod.name
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
9. resource/add_client_id
  • What it is: Inserts or updates client.id in telemetry.
  • Why it matters: Critical for multi-tenancy.
  • Configuration:
processors:
  resource/add_client_id:
    attributes:
      - key: client.id
        value: "Firmino"
        action: upsert
10. transform/remove_log_body
  • What it is: Removes log body content.
  • Why it matters: Prevents sensitive or PII data from persisting in logs.
  • Configuration:
processors:
  transform/remove_log_body:
    log_statements:
      - context: log
        statements:
          - set(body, "")
11. transform/obfuscate_attributes
  • What it is: Obfuscates selected attributes.
  • Why it matters: Protects sensitive values (like legalDocument or accountAlias) before data leaves your cluster.
  • Configuration (otelcol-config.yaml):
processors:
  transform/obfuscate_attributes:
    trace_statements:
      - context: span
        statements:
          - replace_pattern(attributes["legalDocument"], ".*", "***")
          - replace_pattern(attributes["accountAlias"], ".*", "***")
    log_statements:
      - context: log
        statements:
          - replace_pattern(attributes["legalDocument"], ".*", "***")
          - replace_pattern(attributes["accountAlias"], ".*", "***")
  • Customizing the fields:
    • Defaults: legalDocument, accountAlias
    • Add or remove fields as needed
    • Restart required: make down && make up
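Processors only run if they are referenced in a pipeline. As an illustration of how the processors above could be wired together (the exact composition and ordering in your otelcol-config.yaml may differ), the service section might look like this:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, k8sattributes, resource/add_client_id, transform/remove_sensitive_attributes, transform/obfuscate_attributes, tail_sampling, spanmetrics, batch]
      exporters: [otlphttp/server]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, k8sattributes, resource/add_client_id, filter/drop_node_metrics, filter/include_midaz_namespaces, batch]
      exporters: [otlphttp/server]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, k8sattributes, resource/add_client_id, transform/remove_log_body, transform/obfuscate_attributes, batch]
      exporters: [otlphttp/server]
Following the upstream Collector recommendations, memory_limiter is typically placed first and batch last in each pipeline.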

In short

| Processor | Data type | Function | Benefit |
| --- | --- | --- | --- |
| batch | Metrics, Logs, Traces | Groups telemetry before export | Improves compression and network use |
| memory_limiter | Metrics, Logs, Traces | Drops data when memory limit is near | Prevents OOMKilled |
| spanmetrics | Traces → Metrics | Creates RED metrics | Immediate performance insights |
| transform/remove_sensitive_attributes | Traces | Strips sensitive span attrs | Keeps IDs, removes secrets |
| tail_sampling | Traces | Smart sampling of traces | Lower storage, focus on errors/targets |
| filter/drop_node_metrics | Metrics | Excludes noisy node-level data | Cleaner dataset |
| filter/include_midaz_namespaces | Metrics | Keeps only Midaz namespaces | Removes irrelevant metrics |
| k8sattributes | Metrics, Logs, Traces | Adds K8s metadata | Richer Grafana context |
| resource/add_client_id | All signals | Tags telemetry with client ID | Enables multi-tenancy |
| transform/remove_log_body | Logs | Clears log body | Avoids storing PII |
| transform/obfuscate_attributes | All signals | Masks chosen fields | Ensures sensitive data never leaves |

Protecting sensitive data


Midaz treats the Client Collector as a telemetry firewall. All filtering, sampling, and transformation rules are defined in your configuration file (components/infra/grafana/otelcol-config.yaml).
Figure: Client vs Lerian responsibilities in the observability pipeline

This file runs inside your infrastructure, ensuring that sensitive attributes are removed or obfuscated before data leaves your cluster.
Sensitive values such as request bodies, legal documents, or account aliases never reach Lerian’s Central Collector.
Our architecture enforces this separation:
  • Client Collector (you configure): Runs in your cluster. Apply processors such as transform/remove_sensitive_attributes, transform/remove_log_body, and transform/obfuscate_attributes.
  • Central Collector (Lerian managed): Receives only the filtered, sanitized telemetry streams and routes them to Prometheus, Loki, and Tempo.
Configurations are fully packaged and managed via Helm, keeping deployments consistent, traceable, and aligned with best practices.
You decide what’s sensitive in otelcol-config.yaml. Lerian only sees sanitized telemetry.

Telemetry flow


Here’s what happens when telemetry is enabled:
  1. Application starts and detects OpenTelemetry configuration.
  2. Telemetry is exported to the local Client Collector.
  3. Client Collector enriches data with Kubernetes metadata and your client_id.
  4. Processors enrich, filter, and transform the data.
  5. Data is forwarded to the Central Collector.
  6. Central Collector processes and routes data:
    • Metrics → Prometheus
    • Logs → Loki
    • Traces → Tempo
  7. Grafana lets you query it all, correlating across signals.
You can, for example, run:
sum(rate(http_server_duration_seconds_count{
  k8s_pod_name=~"checkout-.*",
  client_id="client-name"
}[5m]))
And then jump straight to the related logs or traces.

Authenticating collector requests


To ensure data security and integrity, all telemetry sent from your cluster to Lerian’s platform must be authenticated using a secure API key.

How to set it up

  1. Create the Kubernetes Secret to store your API token:
kubectl create secret generic otel-api-key \
  --from-literal=OTEL_API_KEY='YOUR_TOKEN_HERE' \
  -n midaz
  2. Reference the secret in your Helm values file to inject it as an environment variable:
extraEnvs:
  - name: OTEL_API_KEY
    valueFrom:
      secretKeyRef:
        name: otel-api-key
        key: OTEL_API_KEY
  3. Telemetry is securely sent to Lerian’s telemetry endpoint over HTTPS, with the API key included in the headers:
https://telemetry.lerian.io:443
This key must remain private. If compromised, contact Lerian support immediately to rotate the token.

Data encryption in transit


All telemetry data, including metrics, logs, and traces, is transmitted from your environment to Lerian’s observability platform using HTTPS with TLS encryption. This means:
  • The communication between the Client Collector and the Central Collector is fully encrypted.
  • Data in transit is protected against interception, tampering, or unauthorized access.
  • Even if network traffic is inspected, the contents remain unreadable without the proper cryptographic keys.
Combined with API key authentication, this ensures your telemetry is both secure and verifiable from source to destination.
We enforce encrypted transport by default. No data is accepted over insecure channels.
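In Collector terms, this means the exporter that talks to the Central Collector uses an https:// endpoint with TLS left on, which is the Collector’s default behavior. The block below is shown only for illustration:
exporters:
  otlphttp/server:
    endpoint: "https://telemetry.lerian.io:443"   # HTTPS endpoint
    tls:
      insecure: false          # use TLS rather than plaintext (the default)
      # certificate verification stays on: insecure_skip_verify defaults to false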

Dual routing


Need to keep a copy of your metrics internally? You can configure the Client Collector to send telemetry to multiple destinations.

Example

exporters:
  otlphttp/server:
    endpoint: "<https://telemetry.lerian.io:443>"
    headers:
      x-api-key: "${OTEL_API_KEY}"
  prometheus/local:
    endpoint: prometheus-server-example:8889
Add both exporters to your metrics pipeline, and the same metrics will be sent to our platform and your internal Prometheus.
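For clarity, adding both exporters to your metrics pipeline means listing them together in the service section, roughly like this (the receivers and processors shown are illustrative):
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/server, prometheus/local]   # dual routing: Lerian + local Prometheus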
This setup is ideal for local monitoring without disrupting the standard flow to Lerian’s observability stack.

Glossary


DaemonSet: A Kubernetes workload type that ensures a Pod runs on every (or selected) node in a cluster. Used for deploying the Client Collector, so that it can collect node-level data like Kubelet metrics.
Deployment: A Kubernetes workload that manages replicas of a Pod. Used for the Central Collector and other platform services.
Exporter: Sends telemetry data from the Collector to one or more backends (e.g., Prometheus for metrics, Loki for logs, Tempo for traces).
Grafana: An open-source visualization layer. Grafana connects to Prometheus, Loki, and Tempo to provide a unified interface for querying and exploring metrics, logs, and traces.
Loki: Our backend for logs. Loki indexes metadata labels rather than full log content, making it fast and cost-efficient for high-volume use cases.
Multi-tenancy: An architectural approach where a single platform serves multiple clients (tenants). In Midaz, telemetry data is tagged with a client_id to ensure isolation and traceability across tenants.
Observability: The ability to understand a system’s internal state by analyzing its external outputs. In practice, it means collecting and analyzing metrics, logs, and traces to monitor performance and troubleshoot issues.
OpenTelemetry: An open-source framework with tools, APIs, and SDKs for instrumenting, generating, collecting, and exporting telemetry data: metrics, logs, and traces.
OpenTelemetry Collector: A standalone service that receives, processes, and exports telemetry data. It acts as a bridge between instrumented applications and backends like Prometheus or Grafana.
OTLP (OpenTelemetry Protocol): The default protocol used by OpenTelemetry to transport telemetry data between applications, collectors, and backends via gRPC or HTTP.
Pipeline: Defines how telemetry flows through the Collector. A pipeline typically chains together receivers, processors, and exporters for a given signal type (metrics, logs, or traces).
Processor: Handles data transformation inside the Collector, such as enriching signals with metadata, filtering unwanted data, batching messages, or enforcing sampling policies.
Prometheus: Our backend for storing and querying metrics. It supports powerful time-series queries (PromQL) and integrates with the OpenTelemetry Collector via remote write.
Receiver: The component of the Collector that ingests incoming telemetry data. Supports formats like OTLP, Jaeger, Prometheus, and others.
SDK (instrumentation library): A set of libraries you embed in your application code to produce telemetry signals like spans, counters, or logs.
Tempo: Our backend for traces. It stores full distributed traces and integrates closely with Prometheus and Loki for seamless correlation in Grafana.
Terraform: An Infrastructure as Code (IaC) tool we use to provision and manage cloud infrastructure, including the installation of observability components via Helm.