Flowker is a workflow orchestration engine designed to help you model, execute, and scale business processes with precision. In this guide, you will run Flowker locally and execute your first workflow—from creation to result retrieval. By the end, you will have a working environment to validate automation flows and integrate them into your systems.

Prerequisites


Before you begin, make sure your environment is ready:
Tool             Minimum version   Check command
Go               1.22+             go version
Docker           24+               docker --version
Docker Compose   2.20+             docker compose version
Make             Installed         make --version
Flowker runs locally using Docker for its database (MongoDB). No external infrastructure is needed for this guide.
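To confirm the required tools are on your PATH before continuing, you can run a quick check. This loop is a convenience sketch, not part of Flowker itself; it only verifies that each command exists:

```shell
# Verify that each required tool is installed and report its status.
# Docker Compose v2 ships as a subcommand of the docker binary, so
# checking docker covers it on most setups.
for cmd in go docker make; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```

If any tool reports MISSING, install it before moving on to Step 1.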

Step 1: Clone and set up the project


Start by cloning the repository and preparing the development environment:
git clone https://github.com/LerianStudio/flowker.git
cd flowker
Install development tools and create the environment file:
make dev-setup
Then start the local stack (MongoDB + Flowker on port 4000):
make dev
Once the output shows the server is running, Flowker is available at http://localhost:4000.
The make dev command starts MongoDB, generates API documentation, and runs the Flowker application with API key authentication disabled—so you can test freely during development.

Step 2: Create your first workflow


Workflows define how your business process behaves—what steps run, in which order, and under which conditions. Each workflow is composed of nodes (the steps) and edges (the connections between them). Create a workflow with a webhook trigger and a log action:
curl -s -X POST http://localhost:4000/v1/workflows \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-first-workflow",
    "description": "A simple workflow with one trigger and one action.",
    "nodes": [
      {
        "id": "trigger-1",
        "type": "trigger",
        "name": "Start",
        "position": { "x": 0, "y": 0 },
        "data": { "triggerId": "webhook" }
      },
      {
        "id": "log-event",
        "type": "action",
        "name": "Log event",
        "position": { "x": 200, "y": 0 },
        "data": { "action": "log" }
      }
    ],
    "edges": [
      {
        "id": "e1",
        "source": "trigger-1",
        "target": "log-event"
      }
    ]
  }' | jq .
The response confirms the workflow was created in draft status:
{
  "id": "019c96a0-0ac0-7de9-9f53-9cf842a2ee5a",
  "name": "my-first-workflow",
  "status": "draft"
}
Save the id value — you will need it in the next steps.
New workflows are always created in draft status. A workflow must have at least one node.
In Flowker, nodes represent the individual steps of your workflow — what you might call tasks in business terms. Edges define the order in which those steps run.

Step 3: Activate the workflow


A workflow must be activated before it can be executed. This transitions the workflow from draft to active.
curl -s -X POST http://localhost:4000/v1/workflows/019c96a0-0ac0-7de9-9f53-9cf842a2ee5a/activate \
  -H "Content-Type: application/json" | jq .
Once activated, a workflow’s structure is locked and cannot be edited directly. To make changes, clone it, modify the clone, and activate the new version.
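The clone-then-activate pattern can be sketched as a small helper. Note that this guide does not document the clone endpoint: the /clone path below is hypothetical, so confirm the real route in the Swagger UI before using anything like this.

```shell
# Versioning pattern after activation: clone, edit the draft, activate.
# The /clone path is a hypothetical placeholder -- check the Swagger UI
# for the actual route.
release_new_version() {
  base="http://localhost:4000/v1/workflows"
  old_id=$1

  # 1. Clone the active workflow; the clone comes back in draft status.
  new_id=$(curl -s -X POST "$base/$old_id/clone" | jq -r .id)

  # 2. Edit the draft clone here as needed, then...

  # 3. ...activate the new version.
  curl -s -X POST "$base/$new_id/activate" | jq .
}

# Example (uncomment with a real workflow ID):
# release_new_version 019c96a0-0ac0-7de9-9f53-9cf842a2ee5a
```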

Step 4: Execute the workflow


Trigger a workflow execution by sending input data. The Idempotency-Key header is required to ensure safe retries.
curl -s -X POST http://localhost:4000/v1/workflows/019c96a0-0ac0-7de9-9f53-9cf842a2ee5a/executions \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 7f3e1a2b-4c5d-6e7f-8a9b-0c1d2e3f4a5b" \
  -d '{
    "inputData": {
      "message": "hello from my first workflow"
    }
  }' | jq .
The response confirms the execution has started:
{
  "executionId": "019c96a0-10ce-75fc-a273-dc799079a99c",
  "workflowId": "019c96a0-0ac0-7de9-9f53-9cf842a2ee5a",
  "status": "running",
  "startedAt": "2026-03-18T14:35:00Z"
}
Save the executionId for the next step.
The Idempotency-Key header is required. Use a unique UUID per request to prevent duplicate executions on retry.
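To avoid accidentally reusing a key, generate a fresh UUID for each execution request. A simple approach, assuming python3 is available (`uuidgen` also works on most systems):

```shell
# Generate a unique idempotency key for this request.
IDEMPOTENCY_KEY=$(python3 -c 'import uuid; print(uuid.uuid4())')
echo "Idempotency-Key: $IDEMPOTENCY_KEY"
```

Pass the generated value in the Idempotency-Key header of the execution request shown above.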

Step 5: Check execution results


Retrieve the outcome of a workflow execution:
curl -s http://localhost:4000/v1/executions/019c96a0-10ce-75fc-a273-dc799079a99c/results | jq .
The response includes the status of each step and the final output:
{
  "executionId": "019c96a0-10ce-75fc-a273-dc799079a99c",
  "workflowId": "019c96a0-0ac0-7de9-9f53-9cf842a2ee5a",
  "status": "completed",
  "stepResults": [
    {
      "stepNumber": 1,
      "stepName": "action_log-event",
      "nodeId": "log-event",
      "status": "completed",
      "output": { "action": "log" },
      "executedAt": "2026-03-18T14:35:00Z",
      "durationMs": 12
    }
  ],
  "finalOutput": {
    "workflow": {
      "message": "hello from my first workflow"
    }
  },
  "startedAt": "2026-03-18T14:35:00Z",
  "completedAt": "2026-03-18T14:35:00Z"
}
If the execution is still running, this endpoint returns a 422 status. Poll /v1/executions/{executionId} to check the current status before requesting results.
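One way to wait is a small polling loop. This is a convenience sketch around the status endpoint, assuming it returns a JSON body with a top-level status field like the execution responses above; it uses sed to extract the field so only curl is needed:

```shell
# Poll the execution status endpoint until the run leaves "running",
# then print the final status. The URL argument is a placeholder you
# fill in with your executionId.
wait_for_execution() {
  url=$1
  while :; do
    status=$(curl -s "$url" | sed -n 's/.*"status" *: *"\([a-zA-Z]*\)".*/\1/p')
    if [ "$status" != "running" ]; then
      echo "$status"
      return 0
    fi
    sleep 1
  done
}

# Example (uncomment with a real execution ID):
# wait_for_execution "http://localhost:4000/v1/executions/$EXECUTION_ID"
```

Once the loop prints completed, the results endpoint above will return the full step output instead of a 422.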

Explore the API locally


Flowker provides an interactive Swagger UI for testing and exploration at http://localhost:4000/swagger/index.html. Use it to:
  • Inspect all available endpoints
  • Test requests interactively
  • Understand request and response structures

A note on authentication


In the local development environment (make dev), API key authentication is disabled by default. In staging, production, or any configured environment, all /v1/* endpoints require the X-API-Key header:
curl -H "X-API-Key: your-api-key" http://your-flowker-host/v1/workflows

What’s next


You now have a running Flowker environment and have executed your first workflow. From here, you can:
  • Model real business processes using different node types: trigger, executor, conditional, and action
  • Integrate external systems via executor configurations (connect to KYC providers, fraud engines, payment services)
  • Design conditional flows with edges that evaluate expressions based on step outputs
  • Monitor executions using the execution status and results endpoints
Flowker is designed to move from simple flows to production-grade orchestration without changing the core model.