Introduction to Zeebe

This section contains a brief introduction to Zeebe.

What is Zeebe?

Zeebe is a workflow engine for microservices orchestration. Zeebe ensures that, once started, flows are always carried out fully, retrying steps in case of failures. Along the way, Zeebe maintains a complete audit log so that the progress of flows can be monitored. Zeebe is fault tolerant and scales seamlessly to handle growing transaction volumes.

Below, we'll provide a brief overview of Zeebe. For more detail, we recommend the "What is Zeebe?" writeup on the main Zeebe site.

What problem does Zeebe solve, and how?

A company’s end-to-end workflows almost always span more than one microservice. In an e-commerce company, for example, a “customer order” workflow might involve a payments microservice, an inventory microservice, a shipping microservice, and more:

[Figure: order-process]

These cross-microservice workflows are mission critical, yet the workflows themselves are rarely modeled and monitored. Often, the flow of events through different microservices is expressed only implicitly in code.

If that’s the case, how can we ensure visibility of workflows and provide status and error monitoring? How do we guarantee that flows always complete, even if single microservices fail?

Zeebe gives you:

  1. Visibility into the state of a company’s end-to-end workflows, including the number of in-flight workflows, average workflow duration, current errors within a workflow, and more.
  2. Workflow orchestration based on the current state of a workflow; Zeebe publishes “commands” as events that can be consumed by one or more microservices, ensuring that workflows progress according to their definition.
  3. Monitoring for timeouts or other workflow errors with the ability to configure error-handling paths such as stateful retries or escalation to teams that can resolve an issue manually.

Zeebe was designed to operate at very large scale, and to achieve this, it provides:

  • Horizontal scalability and no dependence on an external database; Zeebe writes data directly to the filesystem on the same servers where it’s deployed. Zeebe makes it simple to distribute processing across a cluster of machines to deliver high throughput.
  • Fault tolerance via an easy-to-configure replication mechanism, ensuring that Zeebe can recover from machine or software failure with no data loss and minimal downtime. This ensures that the system as a whole remains available without requiring manual action.
  • A message-driven architecture where all workflow-relevant events are written to an append-only log, providing an audit trail and a history of the state of a workflow.
  • A publish-subscribe interaction model, which enables microservices that connect to Zeebe to maintain a high degree of control and autonomy, including control over processing rates. These properties make Zeebe resilient, scalable, and reactive.
  • Visual workflows modeled in ISO-standard BPMN 2.0 so that technical and non-technical stakeholders can collaborate on workflow design in a widely-used modeling language.
  • A language-agnostic client model, making it possible to build a Zeebe client in just about any programming language that an organization uses to build microservices.
  • Operational ease-of-use as a self-contained and self-sufficient system. Zeebe does not require a cluster coordinator such as ZooKeeper. Because all nodes in a Zeebe cluster are equal, it's relatively easy to scale, and it plays nicely with modern resource managers and container orchestrators such as Docker, Kubernetes, and DC/OS. Zeebe's CLI (Command Line Interface) allows you to script and automate management and operations tasks.

You can learn more about these technical concepts in the "Basics" section of the documentation.

Zeebe is simple and lightweight

Most existing workflow engines offer more features than Zeebe. While having access to lots of features is generally a good thing, it can come at a cost of increased complexity and degraded performance.

Zeebe is 100% focused on providing a compact, robust, and scalable solution for orchestration of workflows. Rather than supporting a broad spectrum of features, its goal is to excel within this scope.

In addition, Zeebe works well with other systems. For example, Zeebe provides a simple event stream API that makes it easy to stream all internal data into another system such as Elasticsearch for indexing and querying.

Deciding if Zeebe is right for you

Note that Zeebe is currently in "developer preview", meaning that it's not yet ready for production and is under heavy development. See the roadmap for more details.

Your applications might not need the scalability and performance features provided by Zeebe. Or, you might need a mature set of features around BPM (Business Process Management), which Zeebe does not yet offer. In such scenarios, a workflow automation platform such as Camunda BPM could be a better fit.

Install

This page guides you through the initial installation of your Zeebe broker. In case you are looking for more detailed information on how to set up and operate Zeebe, make sure to check out the Operations Guide as well.

There are different ways to install Zeebe: you can download a distribution or run it with Docker.

Prerequisites

  • Operating System:
    • Linux
    • Windows/MacOS (development only, not supported for production)
  • Java Virtual Machine:
    • Oracle Hotspot v1.8
    • OpenJDK v1.8

Download a distribution

You can always download the latest Zeebe release from the GitHub release page.

Once you have downloaded a distribution, extract it into a folder of your choice. To extract the Zeebe distribution and start the broker, Linux users can type:

tar -xzf zeebe-distribution-X.Y.Z.tar.gz
cd zeebe-broker-X.Y.Z/
./bin/broker

Windows users can download the .zip package and extract it using their favorite unzip tool. They can then open the extracted folder, navigate to the bin folder and start the broker by double-clicking the broker.bat file.

Once the Zeebe broker has started, it should produce the following output:

10:49:52.264 [] [main] INFO  io.zeebe.broker.system - Using configuration file zeebe-broker-X.Y.Z/conf/zeebe.cfg.toml
10:49:52.342 [] [main] INFO  io.zeebe.broker.system - Scheduler configuration: Threads{cpu-bound: 2, io-bound: 2}.
10:49:52.383 [] [main] INFO  io.zeebe.broker.system - Version: X.Y.Z
10:49:52.430 [] [main] INFO  io.zeebe.broker.clustering - Starting standalone broker.
10:49:52.435 [service-controller] [0.0.0.0:26500-zb-actors-1] INFO  io.zeebe.broker.transport - Bound managementApi.server to /0.0.0.0:26502
10:49:52.460 [service-controller] [0.0.0.0:26500-zb-actors-1] INFO  io.zeebe.transport - Bound clientApi.server to /0.0.0.0:26501
10:49:52.460 [service-controller] [0.0.0.0:26500-zb-actors-1] INFO  io.zeebe.transport - Bound replicationApi.server to /0.0.0.0:26503

Using Docker

You can run Zeebe with Docker:

docker run --name zeebe -p 26500:26500 camunda/zeebe:latest

Exposed Ports

  • 26500: Client API
  • 26502: Management API for broker-to-broker communication
  • 26503: Replication API for broker-to-broker replication

Volumes

The default data volume is under /usr/local/zeebe/bin/data. It contains all data which should be persisted.

Configuration

The Zeebe configuration is located at /usr/local/zeebe/conf/zeebe.cfg.toml. The logging configuration is located at /usr/local/zeebe/conf/log4j2.xml.

The configuration of the Docker image can also be changed using environment variables.

Available environment variables:

  • ZEEBE_LOG_LEVEL: Sets the log level of the Zeebe Logger (default: info).
  • ZEEBE_HOST: Sets the host address to bind to instead of the IP of the container.
  • BOOTSTRAP: Sets the replication factor of the internal-system partition.
  • ZEEBE_CONTACT_POINTS: Sets the contact points of other brokers in a cluster setup.
  • DEPLOY_ON_KUBERNETES: If set to true, it applies some configuration changes in order to run Zeebe in a Kubernetes environment. Please note that the recommended method to run Zeebe on Kubernetes is by using the zeebe-operator.

Mac and Windows users

Note: On systems which use a VM to run Docker containers, such as Mac and Windows, the VM needs at least 4GB of memory; otherwise, Zeebe might fail to start with an error similar to:

Exception in thread "actor-runner-service-container" java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:694)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
        at io.zeebe.util.allocation.DirectBufferAllocator.allocate(DirectBufferAllocator.java:28)
        at io.zeebe.util.allocation.BufferAllocators.allocateDirect(BufferAllocators.java:26)
        at io.zeebe.dispatcher.DispatcherBuilder.initAllocatedBuffer(DispatcherBuilder.java:266)
        at io.zeebe.dispatcher.DispatcherBuilder.build(DispatcherBuilder.java:198)
        at io.zeebe.broker.services.DispatcherService.start(DispatcherService.java:61)
        at io.zeebe.servicecontainer.impl.ServiceController$InvokeStartState.doWork(ServiceController.java:269)
        at io.zeebe.servicecontainer.impl.ServiceController.doWork(ServiceController.java:138)
        at io.zeebe.servicecontainer.impl.ServiceContainerImpl.doWork(ServiceContainerImpl.java:110)
        at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165)
        at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145)
        at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114)
        at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71)
        at java.lang.Thread.run(Thread.java:748)

If you use a Docker setup with docker-machine and your default VM does not have 4GB of memory, you can create a new one with the following command:

docker-machine create --driver virtualbox --virtualbox-memory 4000 zeebe

Verify that the Docker Machine is running correctly:

docker-machine ls
NAME        ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
zeebe     *        virtualbox   Running   tcp://192.168.99.100:2376           v17.03.1-ce

Configure your terminal:

eval $(docker-machine env zeebe)

Then run Zeebe:

docker run --rm -p 26500:26500 camunda/zeebe:latest

To get the IP address of the Zeebe VM:

docker-machine ip zeebe
192.168.99.100

Verify that you can connect to Zeebe:

telnet 192.168.99.100 26500

First Contact

The quickstart guide demonstrates the main concepts of Zeebe using only the command line client.

As a Java user, you should have a look at the Java client library. The Get Started Java tutorial guides you through the first steps.

As a Go user, you should look into the Go client library. The Get Started Go tutorial guides you through the first steps.

Quickstart

This tutorial should help you to get to know the main concepts of Zeebe without the need to write a single line of code.

  1. Download the Zeebe distribution
  2. Start the Zeebe broker
  3. Create a job
  4. Complete a job
  5. Deploy a workflow
  6. Create a workflow instance
  7. Complete the workflow instance
  8. Next steps

Note: Some command examples might not work on Windows if you use cmd or PowerShell. For Windows users, we recommend using a bash-like shell such as Git Bash, Cygwin or MinGW for this guide.

Step 1: Download the Zeebe distribution

You can download the latest distribution from the Zeebe release page.

Extract the archive and enter the Zeebe directory.

tar -xzvf zeebe-distribution-X.Y.Z.tar.gz
cd zeebe-broker-X.Y.Z/

Inside the Zeebe directory you will find multiple directories.

tree -d
.
├── bin     - Binaries and start scripts of the distribution
├── conf    - Zeebe and logging configuration
└── lib     - Shared java libraries

Step 2: Start the Zeebe broker

Change into the bin/ folder and execute the broker file if you are using Linux or MacOS, or the broker.bat file if you are using Windows. This will start a new Zeebe broker.

cd bin/
./broker
13:59:21.973 [service-controller] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.broker.transport - Bound managementApi.server to /0.0.0.0:26502
13:59:21.973 [service-controller] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.transport - Bound replicationApi.server to /0.0.0.0:26503
13:59:21.975 [service-controller] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.broker.transport - Bound subscriptionApi.server to /0.0.0.0:26504
13:59:22.014 [] [main] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '-1' with address '0.0.0.0:26501' on transport 'broker-client'
13:59:22.015 [] [main] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '-1' with address '0.0.0.0:26501' on transport 'broker-client-internal'
13:59:22.031 [service-controller] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.transport - Bound clientApi.server to /0.0.0.0:26501
13:59:22.031 [service-controller] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26502' on transport 'managementApi.client'
13:59:22.033 [service-controller] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26504' on transport 'subscriptionApi.client'
13:59:22.144 [topology] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26502' on transport 'managementApi.client'
13:59:22.145 [topology] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26503' on transport 'replicationApi.client'
13:59:22.145 [topology] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26504' on transport 'subscriptionApi.client'
13:59:22.169 [io.zeebe.gateway.impl.broker.cluster.BrokerTopologyManagerImpl] [client-zb-actors-0] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26501' on transport 'broker-client'
13:59:22.170 [io.zeebe.gateway.impl.broker.cluster.BrokerTopologyManagerImpl] [client-zb-actors-0] INFO  io.zeebe.transport.endpoint - Registering endpoint for node '0' with address '0.0.0.0:26501' on transport 'broker-client-internal'
13:59:22.444 [] [main] INFO  io.zeebe.gateway - Gateway started using grpc server: ServerImpl{logId=2, transportServer=NettyServer{logId=1, address=/0.0.0.0:26500}}
13:59:22.453 [service-controller] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.raft - Created raft partition-0 with configuration RaftConfiguration{heartbeatInterval='250ms', electionInterval='1s', leaveTimeout='1s'}
13:59:22.505 [partition-0] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.raft - Joined raft in term 0
13:59:22.897 [exporter] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.broker.exporter.debug - Debug exporter opened

You will see some output which contains the version of the broker and configuration parameters like directory locations and API socket addresses.

To continue this guide, open another terminal to execute commands using the Zeebe CLI, zbctl.

We can now check the status of the Zeebe broker.

./bin/zbctl status
Broker 0.0.0.0 : 26501
  Partition 0 : Leader

Step 3: Create a job

A work item in Zeebe is called a job. To identify a category of work items, a job has a type specified by the user. For this example we will use the job type step3. A job can have a payload, which can then be used to execute the action required for this work item. In this example we will set the payload to contain the key zeebe and the value 2018. When the job is created, the response contains the unique key of this job instance.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"zeebe\": 2018}"
  • Powershell: '{"\"zeebe"\": 2018}'
./bin/zbctl create job step3 --payload '{"zeebe": 2018}'
{
  "key": 2
}

Step 4: Complete a job

Before a job can be completed it has to be activated by a job worker. After a job is activated it can be completed or failed by the worker. zbctl allows us to activate jobs for a given type and afterwards complete them with an updated payload.

First activate a job of the type step3. The returned job contains the unique key of the job which is used to complete the job later.

./bin/zbctl activate jobs step3
2018/10/15 13:47:35 Activated 1 for type step3
2018/10/15 13:47:35 Job 1 / 1
{
  "key": 2,
  "type": "step3",
  "jobHeaders": {
    "workflowInstanceKey": -1,
    "workflowDefinitionVersion": -1,
    "workflowKey": -1,
    "activityInstanceKey": -1
  },
  "customHeaders": "{}",
  "worker": "zbctl",
  "retries": 3,
  "deadline": 1539604355443,
  "payload": "{\"zeebe\":2018}"
}

After working on the job we can complete it using the key from the activated job.

./bin/zbctl complete job 2
2018/10/15 13:48:26 Completed job with key 2 and payload {}

Step 5: Deploy a workflow

A workflow is used to orchestrate loosely coupled job workers and the flow of data between them.

In this guide we will use an example process order-process.bpmn. You can download it with the following link: order-process.bpmn.

[Figure: order-process]

The process describes a sequential flow of three tasks: Collect Money, Fetch Items and Ship Parcel. If you open the order-process.bpmn file in a text editor, you will see that every task has a type defined in the XML, which is later used as the job type.

<!-- [...] -->
<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="fetch-items" name="Fetch Items">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="inventory-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="ship-parcel" name="Ship Parcel">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="shipment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->

To complete an instance of this workflow we would need to activate and complete one job for each of the types payment-service, inventory-service and shipment-service.

But first let's deploy the workflow to the Zeebe broker.

./bin/zbctl deploy order-process.bpmn
{
  "key": 1,
  "workflows": [
    {
      "bpmnProcessId": "order-process",
      "version": 1,
      "workflowKey": 1,
      "resourceName": "order-process.bpmn"
    }
  ]
}

Step 6: Create a workflow instance

After the workflow is deployed we can create new instances of it. Every instance of a workflow is a single execution of the workflow. To create a new instance we have to specify the process ID from the BPMN file, in our case the ID is order-process.

<bpmn:process id="order-process" isExecutable="true">

Every instance of a workflow normally processes some kind of data. We can specify the initial data of the instance as payload when we start the instance.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"orderId\": 1234}"
  • Powershell: '{"\"orderId"\": 1234}'
./bin/zbctl create instance order-process --payload '{"orderId": 1234}'
{
  "workflowKey": 1,
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowInstanceKey": 6
}

Step 7: Complete the workflow instance

To complete the instance, all three jobs have to be completed. Therefore we need to activate and complete one job per type.

First activate a job for the payment-service type.

./bin/zbctl activate jobs payment-service
2018/10/15 13:55:00 Activated 1 for type payment-service
2018/10/15 13:55:00 Job 1 / 1
{
  "key": 12,
  "type": "payment-service",
  "jobHeaders": {
    "workflowInstanceKey": 6,
    "bpmnProcessId": "order-process",
    "workflowDefinitionVersion": 1,
    "workflowKey": 1,
    "activityId": "collect-money",
    "activityInstanceKey": 21
  },
  "customHeaders": "{\"method\":\"VISA\"}",
  "worker": "zbctl",
  "retries": 3,
  "deadline": 1539604800511,
  "payload": "{\"orderId\":1234}"
}

And complete it by its job key 12.

./bin/zbctl complete job 12
2018/10/15 13:55:08 Completed job with key 12 and payload {}

Next activate a job for the inventory-service type.

./bin/zbctl activate jobs inventory-service
2018/10/15 13:55:18 Activated 1 for type inventory-service
2018/10/15 13:55:18 Job 1 / 1
{
  "key": 22,
  "type": "inventory-service",
  "jobHeaders": {
    "workflowInstanceKey": 6,
    "bpmnProcessId": "order-process",
    "workflowDefinitionVersion": 1,
    "workflowKey": 1,
    "activityId": "fetch-items",
    "activityInstanceKey": 31
  },
  "customHeaders": "{}",
  "worker": "zbctl",
  "retries": 3,
  "deadline": 1539604818042,
  "payload": "{\"orderId\":1234}"
}

And complete it by its job key 22.

./bin/zbctl complete job 22
2018/10/15 13:55:20 Completed job with key 22 and payload {}

Finally, activate a job for the shipment-service type.

./bin/zbctl activate jobs shipment-service
2018/10/15 13:55:31 Activated 1 for type shipment-service
2018/10/15 13:55:31 Job 1 / 1
{
  "key": 32,
  "type": "shipment-service",
  "jobHeaders": {
    "workflowInstanceKey": 6,
    "bpmnProcessId": "order-process",
    "workflowDefinitionVersion": 1,
    "workflowKey": 1,
    "activityId": "ship-parcel",
    "activityInstanceKey": 41
  },
  "customHeaders": "{}",
  "worker": "zbctl",
  "retries": 3,
  "deadline": 1539604831656,
  "payload": "{\"orderId\":1234}"
}

And complete it by its job key 32.

./bin/zbctl complete job 32
2018/10/15 13:55:33 Completed job with key 32 and payload {}

If you want to visualize the state of the workflow instances you can start the Zeebe simple monitor.

Next steps

To continue working with Zeebe, we recommend getting more familiar with the basic concepts of Zeebe; see the Basics chapter of the documentation.

In the BPMN Workflows chapter you can find an introduction to creating workflows with BPMN, and the BPMN Modeler chapter shows you how to model them yourself.

The documentation also provides getting started guides for implementing job workers using Java or Go.

Overview

This section provides an overview of Zeebe's core concepts. Understanding them helps to successfully build workflow applications.

Client / Server

Zeebe uses a client / server architecture. There are two main components: Broker and Client.

[Figure: client-server]

Broker

A Zeebe broker (server) has three main responsibilities:

  1. Storing and managing workflow data

  2. Distributing work items to clients

  3. Exposing a workflow event stream to clients via publish-subscribe

A Zeebe setup typically consists of more than one broker. Adding brokers scales storage and computing resources. In addition, it provides fault tolerance since Zeebe keeps multiple copies of its data on different brokers.

Brokers form a peer-to-peer network in which there is no single point of failure. This is possible because all brokers perform the same kind of tasks and the responsibilities of an unavailable broker are transparently reassigned in the network.

Client

Clients are libraries which you embed into your application to connect to the broker.

Clients connect to the broker using Zeebe's binary protocols. The protocols are programming-language-agnostic, which makes it possible to write clients in different programming languages. A number of clients are readily available.

Workflows

Workflows are flow-chart-like blueprints that define the orchestration of tasks. Every task represents a piece of business logic such that their ordered execution produces a meaningful result.

A job worker is your implementation of the business logic of a task. It must embed a Zeebe client library to connect to the broker but other than that there are no restrictions on its implementation. You can choose to write a worker as a microservice, as part of a classical three-tier application, as a (lambda) function, via command line tools, etc.

Running a workflow then requires two steps: Submitting the workflow to Zeebe and connecting the required workers.

Sequences

The simplest kind of workflow is an ordered sequence of tasks. Whenever workflow execution reaches a task, Zeebe creates a job and triggers the invocation of a job worker.

[Figure: workflow-sequence]

You can think of Zeebe's workflow orchestration as a state machine. Orchestration reaches a task, triggers the worker and then waits for the worker to complete its work. Once the work is completed, the flow continues with the next step. If the worker fails to complete the work, the workflow remains at the current step, potentially retrying the job until it eventually succeeds.
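As a rough illustration of this state-machine view, here is a minimal sketch in Python. It is not Zeebe's implementation; the task names, worker callables, and retry count are invented for the example:

```python
# Illustrative sketch of sequence orchestration (not Zeebe's actual
# implementation): workers are plain callables; a failing worker is
# retried at the current step until it succeeds or retries run out.

def run_sequence(tasks, workers, retries=3):
    """Run tasks in order; retry each worker up to `retries` times."""
    for task in tasks:
        for _ in range(retries):
            try:
                workers[task]()        # trigger the job worker
                break                  # work done, move to the next task
            except Exception:
                continue               # stay at the current step, retry
        else:
            raise RuntimeError(f"task {task!r} exhausted its retries")
    return "completed"

calls = []
flaky = {"count": 0}

def fetch_items():
    flaky["count"] += 1
    if flaky["count"] == 1:
        raise IOError("inventory service unavailable")  # first attempt fails
    calls.append("fetch-items")

workers = {
    "collect-money": lambda: calls.append("collect-money"),
    "fetch-items": fetch_items,
    "ship-parcel": lambda: calls.append("ship-parcel"),
}
result = run_sequence(["collect-money", "fetch-items", "ship-parcel"], workers)
```

Note how the transient failure of fetch-items does not abort the flow; the step is simply retried before the sequence continues.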

Data Flow

As Zeebe progresses from one task to the next in a workflow, it can move custom data in the form of a JSON document along. This data is called the workflow payload and is created whenever a workflow is started.

[Figure: data-flow]

Every job worker can read the current payload and modify it when completing a job so that data can be shared between different tasks in a workflow. A workflow model can contain simple, yet powerful payload transformation instructions to keep workers decoupled from each other.
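A minimal sketch of this payload flow (illustrative only; the worker functions and field names are made up, and real Zeebe applies workflow-defined payload mappings rather than a plain dictionary merge):

```python
# Sketch of payload flow between tasks: each worker receives the current
# payload and returns an update that is merged into the payload before
# the next task runs. Not Zeebe's implementation.

def run_with_payload(workers, payload):
    for worker in workers:
        update = worker(dict(payload)) or {}  # worker sees a copy
        payload = {**payload, **update}       # merge the update back in
    return payload

def collect_money(p):
    return {"paymentId": "p-1", "total": p["items"] * 10}

def ship_parcel(p):
    return {"shipped": True}

final = run_with_payload(
    [collect_money, ship_parcel], {"orderId": 1234, "items": 3}
)
```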

Data-based Conditions

Some workflows do not always execute the same tasks but need to choose different tasks based on payload and conditions:

[Figure: data-conditions]

The diamond shape with the "X" in the middle is a special step marking a decision: the workflow takes one path or the other.

Conditions use JSON Path to extract properties and values from the current payload document.
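To illustrate, here is a toy condition evaluator using a dotted path as a stand-in for a real JSON Path expression (the payload shape, branch names, and threshold are invented for the example):

```python
# Minimal stand-in for JSON Path condition evaluation (illustrative;
# Zeebe evaluates real JSON Path expressions): a dotted path is resolved
# against the payload and compared against a threshold.

def get_path(doc, path):
    """Resolve a dotted path like 'order.total' in a nested dict."""
    for key in path.split("."):
        doc = doc[key]
    return doc

def choose_branch(payload, path, threshold):
    """Exclusive-gateway sketch: pick one outgoing path based on data."""
    return "high-value" if get_path(payload, path) > threshold else "standard"

branch = choose_branch({"order": {"total": 500}}, "order.total", 100)
```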

Events

Events represent things that happen. A workflow can react to events (catching event) as well as emit events (throwing event). For example:

[Figure: workflow with events]

There are different types of events, such as message or timer events.

Fork / Join Concurrency

In many cases, it is also useful to perform multiple tasks in parallel. This can be achieved with Fork / Join concurrency:

[Figure: fork/join concurrency]

The diamond shape with the "+" marker means that all outgoing paths are activated and all incoming paths are merged.

Concurrency can also be based on data, meaning that a task is performed for every data item:

[Figure: parallel tasks per data item]

BPMN 2.0

Zeebe uses BPMN 2.0 for representing workflows. BPMN is an industry standard which is widely supported by different vendors and implementations. Using BPMN ensures that workflows can be interchanged between Zeebe and other workflow systems.

YAML Workflows

In addition to BPMN 2.0, Zeebe supports a YAML workflow format. It can be used to rapidly write simple workflows in text. Unlike BPMN, it has no visual representation and is not standardized. Zeebe transforms YAML to BPMN on submission.

BPMN Modeler

Zeebe provides a BPMN modeling tool to create BPMN diagrams and maintain their technical properties. It is a desktop application based on the bpmn.io open source project.

The modeler can be downloaded from github.com/zeebe-io/zeebe-modeler.

Job Workers

A job worker is a component capable of performing a particular step in a workflow.

What is a Job?

A job is a work item in a workflow. For example:

  • Sending an email
  • Generating a PDF document
  • Updating customer data in a backend system

A job has the following properties:

  • Type: Describes the kind of work item this is, defined in the workflow. The type allows workers to specify which jobs they are able to perform.
  • Payload: The business data the worker should work on in the form of a JSON document/object.
  • Headers: Additional metadata defined in the workflow, e.g. describing the structure of the payload.
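These properties also appear in the job that zbctl printed in the quickstart. As a sketch, such a job record might be modeled like this (an illustrative model only, not Zeebe's wire format):

```python
# A job record mirroring fields zbctl prints for an activated job
# (key, type, payload, headers, retries, worker). Illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    key: int
    type: str                   # matches the job type workers subscribe to
    payload: str                # JSON document with the business data
    custom_headers: str = "{}"  # additional metadata from the workflow
    retries: int = 3
    worker: str = ""            # assigned worker, empty until activated

job = Job(key=2, type="step3", payload='{"zeebe":2018}')
```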

Connecting to the Broker

To start executing jobs, a worker must connect to the Zeebe broker, declaring the specific job type it can handle. The Zeebe broker then notifies the worker whenever new jobs are created, based on a publish-subscribe protocol. Upon receiving a notification, a worker performs the job and sends back a complete or fail message, depending on whether the job could be completed successfully or not.

[Figure: task-subscriptions]

Many workers can subscribe to the same job type in order to scale up processing. In this scenario, the Zeebe broker ensures that each job is published exclusively to only one of the workers. It eventually reassigns the job in case the assigned worker is unresponsive. This means job processing has at-least-once semantics.
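A toy model of this exclusive distribution and reassignment behavior (illustrative; the round-robin worker choice and the `responsive` callback standing in for deadline checks are assumptions made for the example):

```python
# Sketch of exclusive job distribution with reassignment: each job is
# pushed to exactly one subscribed worker; if that worker does not
# complete it in time, the broker hands the job out again, which yields
# at-least-once processing semantics. Not Zeebe's implementation.
import itertools

class Broker:
    def __init__(self, workers):
        self._workers = itertools.cycle(workers)  # naive round-robin pick
        self.log = []

    def process(self, job, responsive):
        """Push `job` to one worker; reassign while the worker is down."""
        while True:
            worker = next(self._workers)
            self.log.append((job, worker))        # job pushed (maybe twice)
            if responsive(worker):                # completed before deadline
                return worker

broker = Broker(["worker-a", "worker-b"])
# worker-a is unresponsive, so the job ends up reassigned to worker-b
done_by = broker.process("job-1", responsive=lambda w: w == "worker-b")
```

The log shows the job being delivered twice, which is exactly why workers must tolerate occasional duplicate deliveries under at-least-once semantics.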

Job Queueing

An important aspect of Zeebe is that a broker does not assume that a worker is always available or that it can process jobs at any particular rate.

Zeebe decouples creation of jobs from performing the work on them. It is always possible to create jobs at the highest possible rate, regardless of whether there is currently a worker available to work on them or not. This is possible as Zeebe queues jobs until it can push them out to workers. If no job worker is currently subscribed, jobs remain queued. If a worker is subscribed, Zeebe's backpressure protocols ensure that workers can control the rate at which they receive jobs.

This allows the system to take in bursts of traffic and effectively act as a buffer in front of the job workers.
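This decoupling can be sketched with a plain in-memory queue (illustrative only; Zeebe persists jobs in its log rather than holding them in memory):

```python
# Sketch of decoupled job creation and consumption: jobs are enqueued at
# any rate and wait until a worker subscribes; the worker then drains
# them at its own pace. Not Zeebe's implementation.
from collections import deque

queue = deque()

def create_job(job):
    queue.append(job)              # always possible, even with no workers

def worker_poll(max_jobs):
    """A subscribed worker drains at most `max_jobs` per poll."""
    batch = []
    while queue and len(batch) < max_jobs:
        batch.append(queue.popleft())
    return batch

for i in range(5):                 # burst of traffic, no worker yet
    create_job(f"job-{i}")
first = worker_poll(2)             # worker arrives, controls its own rate
```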

Standalone Jobs

Jobs can also be created standalone (i.e., without an orchestrating workflow). In that case, Zeebe acts like a regular queue.

Partitions

Note: If you have worked with the Apache Kafka System before, the concepts presented on this page will sound very familiar to you.

In Zeebe, all data is organized into partitions. A partition is a persistent stream of workflow-related events. In a cluster of brokers, partitions are distributed among the nodes, so a partition can be thought of as a shard. When you bootstrap a Zeebe broker, you can configure how many partitions you need.

Usage Examples

Whenever you deploy a workflow, you deploy it to the first partition. The workflow is then distributed to all partitions. On all partitions, this workflow receives the same key and version such that it can be consistently identified.

When you start an instance of a workflow, the client library will then route the request to one partition in which the workflow instance will be published. All subsequent processing of the workflow instance will happen in that partition.

Scalability

Use partitions to scale your workflow processing. Partitions are dynamically distributed in a Zeebe cluster, and for each partition there is one leading broker at a time. This leader accepts requests and performs event processing for the partition. Let us assume you want to distribute workflow processing load over five machines. You can achieve that by bootstrapping five partitions.
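As an illustration, routing new workflow instances across five partitions might look like the following sketch (the round-robin strategy is an assumption for the example; the actual routing is internal to the client and broker):

```python
# Sketch of routing new workflow instances to partitions: each instance
# is assigned one partition, and all of its subsequent processing
# happens there. Illustrative only.
import itertools

PARTITIONS = 5
_round_robin = itertools.cycle(range(PARTITIONS))

def route_instance():
    """Pick the partition that will own all processing for an instance."""
    return next(_round_robin)

owners = [route_instance() for _ in range(7)]
```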

Partition Data Layout

A partition is a persistent append-only event stream. Initially, a partition is empty. Entries are only ever appended: the first entry written takes the first position, the second entry the second position, and so on. Each entry has a position in the partition which uniquely identifies it.

[Figure: partition]
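The append-only layout can be sketched as follows (illustrative; real partitions are persisted to disk):

```python
# Sketch of a partition as an append-only stream: entries are only ever
# appended, and each entry's position uniquely identifies it.

class Partition:
    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)
        return len(self._entries) - 1   # the entry's unique position

    def read(self, position):
        return self._entries[position]

p = Partition()
first_pos = p.append("JOB CREATED")
second_pos = p.append("JOB ACTIVATED")
```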

Replication

For fault tolerance, data in a partition is replicated from the leader of the partition to its followers. Followers are other Zeebe Broker nodes that maintain a copy of the partition without performing event processing.

Protocols

Zeebe defines protocols for communication between clients and brokers. Zeebe supports two styles of interaction:

  • Command Protocol: A client sends a command to a broker and expects a response. Example: Completing a job.
  • Subscription Protocol: A broker streams data to a client. Example: Broker streams jobs to a client to work on.

Both protocols are binary on top of TCP/IP.

Non-Blocking

Zeebe protocols are non-blocking by design, including commands. Command and Reply are always separate, independent messages. A client can send multiple commands using the same TCP connection before receiving a reply.
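A sketch of this pipelining idea (illustrative; the request-id scheme here is an assumption for the example, not Zeebe's actual protocol framing):

```python
# Sketch of non-blocking command pipelining: the client tags each
# command with a request id, sends several before any reply arrives,
# and matches replies to pending commands by that id.

class PipelinedClient:
    def __init__(self):
        self._next_id = 0
        self.pending = {}

    def send(self, command):
        self._next_id += 1
        self.pending[self._next_id] = command   # awaiting its reply
        return self._next_id

    def on_reply(self, request_id, result):
        command = self.pending.pop(request_id)
        return command, result

client = PipelinedClient()
a = client.send("complete job 2")
b = client.send("create instance order-process")  # sent before a's reply
reply = client.on_reply(a, "ok")
```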

Streaming

The subscription protocol works in streaming mode: to keep latency low, a broker pushes out jobs and records to subscribers in a non-blocking way as soon as they become available.

Backpressure

The subscription protocol embeds a backpressure mechanism to prevent brokers from overwhelming clients with more jobs or records than they can handle. This mechanism is client-driven, i.e. clients submit a capacity of records they can safely receive at a time. A broker pushes records until this capacity is exhausted and the client replenishes the capacity whenever it completes processing of records.

The effect of backpressure is that the system automatically adjusts flow rates to the available resources. For example, a fast consumer running on a powerful machine can process jobs at a very high rate, while a slower consumer is automatically throttled. This works without additional user configuration.
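The client-driven, credit-based mechanism described above can be sketched as follows. This is a simplified model, not the wire protocol; class and method names are illustrative:

```python
# Sketch of client-driven backpressure: the client grants a capacity
# (credits); the broker pushes records only while credits remain, and
# the client replenishes credits once it has processed records.
class Subscription:
    def __init__(self, capacity: int):
        self.credits = capacity

    def broker_push(self, record) -> bool:
        """Broker side: push a record if the client still has capacity."""
        if self.credits == 0:
            return False  # client is saturated; broker must hold back
        self.credits -= 1
        return True

    def client_done(self, n: int = 1):
        """Client side: replenish capacity after processing records."""
        self.credits += n

sub = Subscription(capacity=2)
assert sub.broker_push("job-1") and sub.broker_push("job-2")
assert not sub.broker_push("job-3")   # capacity exhausted, broker throttled
sub.client_done()
assert sub.broker_push("job-3")       # flow resumes automatically
```

A slow consumer simply replenishes credits less often, so the flow rate adapts without extra configuration.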

Internal Processing

Internally, Zeebe is implemented as a collection of stream processors working on record streams (partitions). The stream processing model is used because it offers a unified approach to provide:

  • Command Protocol (Request-Response),
  • Record Export (Streaming),
  • Workflow Evaluation (Asynchronous Background Tasks)

Record export solves the history problem: The stream provides exactly the kind of exhaustive audit log that a workflow engine needs to produce.

State Machines

Zeebe manages stateful entities: Jobs, Workflows, etc. Internally, these entities are implemented as State Machines managed by a stream processor.

The concept of the state machine pattern is simple: An instance of a state machine is always in one of several logical states. From each state, a set of transitions defines the next possible states. Transitioning into a new state may produce outputs/side effects.

Let's look at the state machine for jobs. Simplified, it looks as follows:

partition

Every oval is a state. Every arrow is a state transition. Note how each state transition is only applicable in a specific state. For example, it is not possible to complete a job when it is in state CREATED.
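The job lifecycle above can be sketched as a transition table. State and transition names here are illustrative, not Zeebe's exact internal identifiers:

```python
# Sketch of the simplified job state machine: from each state only a
# fixed set of transitions is allowed.
TRANSITIONS = {
    ("CREATED", "activate"): "ACTIVATED",
    ("ACTIVATED", "complete"): "COMPLETED",
    ("ACTIVATED", "fail"): "FAILED",
    ("FAILED", "retry"): "ACTIVATED",
}

def apply(state: str, transition: str) -> str:
    """Apply a transition, rejecting it if not allowed in the current state."""
    next_state = TRANSITIONS.get((state, transition))
    if next_state is None:
        raise ValueError(f"cannot {transition} a job in state {state}")
    return next_state

assert apply("CREATED", "activate") == "ACTIVATED"
# As noted above, completing a job straight from CREATED is not possible:
try:
    apply("CREATED", "complete")
except ValueError:
    pass
```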

Events and Commands

Every state change in a state machine is called an event. Zeebe publishes every event as a record on the stream.

State changes can be requested by submitting a command. A Zeebe broker receives commands from two sources:

  1. Clients send commands remotely. Examples: Deploying workflows, starting workflow instances, creating and completing jobs, etc.
  2. The broker itself generates commands. Examples: Locking a job for exclusive processing by a worker, etc.

Once received, a command is published as a record on the addressed stream.

Stateful Stream Processing

A stream processor reads the record stream sequentially and interprets the commands with respect to the addressed entity's lifecycle. More specifically, a stream processor repeatedly performs the following steps:

  1. Consume the next command from the stream.
  2. Determine whether the command is applicable based on the state lifecycle and the entity's current state.
  3. If the command is applicable: Apply it to the state machine. If the command was sent by a client, send a reply/response.
  4. If the command is not applicable: Reject it. If it was sent by a client, send an error reply/response.
  5. Publish an event reporting the entity's new state.

For example, processing the Create Job command produces the event Job Created.
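The five processing steps can be sketched as a loop; a simplified model of the idea, not Zeebe's engine code:

```python
# Sketch of the stateful stream processing loop: consume a command,
# check applicability against the entity's current state, apply or
# reject it, and publish an event reporting the outcome.
def process(stream, states, transitions):
    events = []
    for command in stream:                         # 1. consume next command
        key, name = command["key"], command["command"]
        current = states.get(key, "INITIAL")
        next_state = transitions.get((current, name))  # 2. applicable?
        if next_state is None:                     # 4. reject if not
            events.append({"key": key, "rejected": name})
            continue
        states[key] = next_state                   # 3. apply to state machine
        events.append({"key": key, "event": f"{name.upper()}D"})  # 5. publish event
    return events

transitions = {("INITIAL", "create"): "CREATED", ("CREATED", "complete"): "COMPLETED"}
log = process(
    [{"key": 1, "command": "create"}, {"key": 1, "command": "complete"}],
    {}, transitions,
)
assert log[-1]["event"] == "COMPLETED"
```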

Command Triggers

A state change which occurred in one entity can automatically trigger a command for another entity. Example: When a job is completed, the corresponding workflow instance shall continue with the next step. Thus, the Event Job Completed triggers the command Complete Activity.

Exporters

As Zeebe processes jobs and workflows, or performs internal maintenance (e.g. raft failover), it will generate an ordered stream of records:

record-stream

While the clients provide no way to inspect this stream directly, Zeebe can load and configure user code that can process each and every one of those records, in the form of an exporter.

An exporter provides a single entry point to process every record that is written on a stream.

With it, you can:

  • Persist historical data by pushing it to an external data warehouse
  • Export records to a visualization tool (e.g. zeebe-simple-monitor)
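The exporter contract, a single entry point that sees every record in order, can be sketched as follows. Real Zeebe exporters are Java classes loaded by the broker; this Python sketch only illustrates the shape of the contract, and the class name is hypothetical:

```python
# Conceptual sketch of an exporter: one hook that receives every record
# written to the stream, in order, and tracks how far it has progressed.
class WarehouseExporter:
    def __init__(self):
        self.exported = []
        self.last_position = -1

    def export(self, record: dict):
        self.exported.append(record)             # e.g. push to a data warehouse
        self.last_position = record["position"]  # acknowledge progress

exporter = WarehouseExporter()
for pos, rec in enumerate([{"type": "JOB_CREATED"}, {"type": "JOB_COMPLETED"}]):
    exporter.export({"position": pos, **rec})
assert exporter.last_position == 1
```

Tracking the last exported position is what lets the broker know which records all exporters are done with.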

Zeebe will only load exporters which are configured through the main Zeebe TOML configuration file.

Once an exporter is configured, the next time Zeebe is started, the exporter will start receiving records. Note that it is only guaranteed to see records produced from that point on.

For more information, you can read the reference information page, and you can find a reference implementation in the form of the Zeebe-maintained ElasticSearch exporter.

Considerations

The main impact exporters have on a Zeebe cluster is that they remove the burden of persisting data indefinitely.

Once data is no longer needed by Zeebe itself, it queries its exporters to check whether the data can be safely deleted and, if so, permanently erases it, thereby reducing disk usage.

Note: if no exporters are configured at all, Zeebe automatically erases data once it is no longer needed. If you need historical data, you must configure an exporter to stream records into your external data warehouse.

Performance

Zeebe is designed for performance, applying the following design principles:

  • Batching of I/O operations
  • Linear read/write data access patterns
  • Compact, cache-optimized data structures
  • Lock-free algorithms and actor concurrency (green threads model)
  • Broker is garbage-free in the hot/data path

As a result, Zeebe is capable of very high throughput on a single node and scales horizontally (see this benchmarking blog post for more detail).

Clustering

Zeebe can operate as a cluster of brokers, forming a peer-to-peer network. In this network, all brokers have the same responsibilities and there is no single point of failure.

cluster

Gossip Membership Protocol

Zeebe implements the Gossip protocol to know which brokers are currently part of the cluster.

The cluster is bootstrapped using a set of well-known bootstrap brokers, to which the other ones can connect. To achieve this, each broker must have at least one bootstrap broker as its initial contact point in their configuration:

[network.gossip]
initialContactPoints = [ "node1.mycluster.loc:26502" ]

When a broker is connected to the cluster for the first time, it fetches the topology from the initial contact points and then starts gossiping with the other brokers. Brokers keep cluster topology locally across restarts.

Raft Consensus and Replication Protocol

To ensure fault tolerance, Zeebe replicates data across machines using the Raft protocol.

Data is divided into partitions (shards). Each partition has a number of replicas. Among the replicas, the Raft protocol determines a leader, which accepts requests and performs all processing. All other brokers are passive followers. When the leader becomes unavailable, the followers transparently select a new leader.

Each broker in the cluster may be both leader and follower at the same time for different partitions. This way, client traffic is distributed evenly across all brokers.

cluster

Commit

Before a new record on a partition can be processed, it must be replicated to a quorum (typically a majority) of followers. This procedure is called commit. Committing ensures that a record is durable even in case of complete data loss on an individual broker. The exact semantics of committing are defined by the Raft protocol.
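The quorum rule itself is simple arithmetic; a minimal sketch:

```python
# Sketch of the commit rule: a record is committed once a quorum
# (a strict majority) of replicas has acknowledged it.
def is_committed(acks: int, replicas: int) -> bool:
    return acks >= replicas // 2 + 1

assert is_committed(acks=2, replicas=3)      # leader plus one follower
assert not is_committed(acks=2, replicas=5)  # not yet a majority of five
```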

cluster

BPMN Workflows

Zeebe uses visual workflows based on the industry standard BPMN 2.0.

workflow

Read more about:

BPMN Primer

Business Process Model And Notation 2.0 (BPMN) is an industry standard for workflow modeling and execution. A BPMN workflow is an XML document that has a visual representation. For example, here is a BPMN workflow:

workflow

This is the corresponding XML: Click here.

This duality makes BPMN very powerful. The XML document contains all the necessary information to be interpreted by workflow engines and modelling tools like Zeebe. At the same time, the visual representation contains just enough information to be quickly understood by humans, even when they are non-technical people. The BPMN model is source code and documentation in one artifact.

The following is an introduction to BPMN 2.0, its elements and their execution semantics. It tries to briefly provide an intuitive understanding of BPMN's power, but does not cover the entire feature set. For more exhaustive BPMN resources, see the reference links at the end of this section.

Modeling BPMN Diagrams

The best tool for modeling BPMN diagrams for Zeebe is Zeebe Modeler.

BPMN Elements

Sequence Flow: Controlling the Flow of Execution

A core concept of BPMN is the sequence flow, which defines the order in which steps in the workflow happen. In BPMN's visual representation, a sequence flow is an arrow connecting two elements. The direction of the arrow indicates their order of execution.

workflow

You can think of workflow execution as tokens running through the workflow model. When a workflow is started, a token is spawned at the beginning of the model. It advances with every completed step. When the token reaches the end of the workflow, it is consumed and the workflow instance ends. Zeebe's task is to drive the token and to make sure that the job workers are invoked whenever necessary.

Tasks: Units of Work

The basic elements of BPMN workflows are tasks: atomic units of work that are composed to create a meaningful result. Whenever a token reaches a task, it stops while Zeebe creates a job and notifies a registered worker to perform work. When the worker signals completion, the token continues on the outgoing sequence flow.

Choosing the granularity of a task is up to the person modeling the workflow. For example, the activity of processing an order can be modeled as a single Process Order task, or as three individual tasks Collect Money, Fetch Items, Ship Parcel. If you use Zeebe to orchestrate microservices, one task can represent one microservice invocation.

See the Tasks section on which types of tasks are currently supported and how to use them.

Gateways: Steering Flow

Gateways are elements that route tokens in more complex patterns than plain sequence flow.

BPMN's exclusive gateway chooses one sequence flow out of many based on data:

BPMN's parallel gateway generates new tokens by activating multiple sequence flows in parallel:

See the Gateways section on which types of gateways are currently supported and how to use them.

Events: Waiting for Something to Happen

Events in BPMN represent things that happen. A workflow can react to events (catching event) as well as emit events (throwing event). For example:

The circle with the envelope symbol is a catching message event. It makes the token continue as soon as a message is received. The XML representation of the workflow contains the criteria for which kind of message triggers continuation.

Events can be added to the workflow in various ways. Not only can they be used to make a token wait at a certain point, but also for interrupting a token's progress.

See the Events section on which types of events are currently supported and how to use them.

Sub Processes: Grouping Elements

Sub Processes are element containers that allow you to define common functionality. For example, we can attach an event to a sub process's border:

payload

When the event is triggered, the sub process is interrupted, regardless of which of its elements is currently active.

See the Sub Processes section on which types of sub processes are currently supported and how to use them.

Additional Resources

BPMN Coverage

Elements marked in orange are currently implemented by Zeebe.

Participants

Pool
Lane

Subprocesses

Subprocess
Call Activity
Event Subprocess
Transaction

Tasks

Service Task
User Task
Script Task
Business Rule Task
Manual Task
Receive Task
Undefined Task
Send Task
Receive Task (instantiated)

Gateways

XOR
OR
AND
Event
Complex

Data

Data Object
Data Store

Artifacts

Text Annotation
Group

Events

The coverage is organized by event position: Start (normal, event sub-process, and non-interrupting event sub-process), Intermediate (catch events, including boundary and non-interrupting boundary, as well as throw events), and End.

Event types: None, Message, Timer, Conditional, Link, Signal, Error, Escalation, Termination, Compensation, Cancel, Multiple, Multiple Parallel.

Data Flow

Every BPMN workflow instance has a JSON document associated with it, called the workflow instance payload. The payload carries contextual data of the workflow instance that is required by job workers to do their work. It can be provided when a workflow instance is created. Job workers can read and modify it.

payload

Payload is the link between workflow instance context and workers and therefore a form of coupling. We distinguish two cases:

  1. Tightly coupled job workers: Job workers work with the exact JSON structure of the workflow instance payload. This approach is applicable when workers are used only in a single workflow and the payload format can be agreed upon. It is convenient when building both workflow and job workers from scratch.
  2. Loosely coupled job workers: Job workers work with a different JSON structure than the workflow instance payload. This is often the case when job workers are reused for many workflows or when workers are developed independently of the workflow.

Without additional configuration, Zeebe assumes tightly coupled workers. That means, on job execution the workflow instance payload is provided as is to the job worker:

payload

When the worker modifies the payload, the result is merged on top-level into the workflow instance payload. In order to use loosely coupled job workers, the workflow can be extended by payload mappings.

Payload Mappings

We distinguish between input, output and merging mappings. Input/Output mappings are used to adapt payload to the context of an activity. Merging mappings are used whenever multiple flows of execution are joined into one, for example at a merging parallel gateway.

Payload mappings are pairs of JSONPath expressions. Every mapping has a source and a target expression. The source expression describes the path in the source document from which to copy data. The target expression describes the path in the new document that the data should be copied to. When multiple mappings are defined, they are applied in the order of their appearance.

Note: Mappings are not a tool for performance optimization. While a smaller document can save network bandwidth when publishing the job, the broker has extra effort of applying the mappings during workflow execution.
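The source/target pairs described above can be sketched as follows; this sketch supports only the simple `$.field` subset of JSONPath that the examples in this section use:

```python
# Sketch of applying payload mappings: each pair copies a value from a
# path in the source document to a path in a new target document, in
# the order the mappings appear.
def apply_mappings(source_doc: dict, mappings: list) -> dict:
    target_doc = {}
    for source, target in mappings:
        value = source_doc[source.removeprefix("$.")]
        target_doc[target.removeprefix("$.")] = value
    return target_doc

payload = {"price": 25, "orderId": 4711}
job_payload = apply_mappings(payload, [("$.price", "$.total")])
assert job_payload == {"total": 25}
```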

Input/Output Mappings

Before starting a workflow element, Zeebe applies input mappings to the payload and generates a new JSON document. Upon element completion, output mappings are applied to map the result back into the workflow instance payload.

Examples in BPMN:

  • Service Task: Input and output mappings can be used to adapt the workflow instance payload to the job worker.
  • Message Catch Event: Output mappings can be used to merge the message payload into the workflow instance payload.

Example:

payload

XML representation:

<serviceTask id="collectMoney">
    <extensionElements>
      <zeebe:ioMapping>
        <zeebe:input source="$.price" target="$.total"/>
        <zeebe:output source="$.paymentMethod" target="$.paymentMethod"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>

When no output mapping is defined, the job payload is by default merged into the workflow instance payload. This default output behavior is configurable via the outputBehavior attribute on the <ioMapping> tag. It accepts three different values:

  • MERGE merges the job payload into the workflow instance payload, if no output mapping is specified. If output mappings are specified, then the payloads are merged according to those. This is the default output behavior.
  • OVERWRITE overwrites the workflow instance payload with the job payload. If output mappings are specified, then the content is extracted from the job payload, which then overwrites the workflow instance payload.
  • NONE indicates that the worker does not produce any output. Output mappings cannot be used in combination with this behavior. The Job payload is simply ignored on job completion and the workflow instance payload remains unchanged.
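The three behaviors can be sketched as follows; a simplified model that ignores output mappings and only shows how the job payload meets the instance payload:

```python
# Sketch of the three output behaviors for a completed job's payload.
def apply_output(behavior: str, instance_payload: dict, job_payload: dict) -> dict:
    if behavior == "NONE":
        return instance_payload                 # job payload ignored entirely
    if behavior == "OVERWRITE":
        return job_payload                      # replaces the instance payload
    return {**instance_payload, **job_payload}  # MERGE (default): top-level merge

instance = {"orderId": 4711, "total": 100}
result = {"paymentMethod": "VISA"}
assert apply_output("MERGE", instance, result) == {
    "orderId": 4711, "total": 100, "paymentMethod": "VISA"}
assert apply_output("OVERWRITE", instance, result) == {"paymentMethod": "VISA"}
assert apply_output("NONE", instance, result) == instance
```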

Example:

<serviceTask id="collectMoney">
    <extensionElements>
      <zeebe:ioMapping outputBehavior="overwrite">
        <zeebe:input source="$.price" target="$.total"/>
        <zeebe:output source="$.paymentMethod" target="$.paymentMethod"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>

Additional Resources:

Merging Mappings

A merging mapping can be defined wherever multiple paths of executions are merged into one. Examples in BPMN:

  • Merging Parallel Gateway: when triggered, the payloads of each incoming sequence flow are merged.
  • Embedded Sub Process: On completion, the payloads of each triggered end event are merged.

payload

The merge consists of three steps:

  1. Initialize target payload as empty document.
  2. Merge all source payloads into the target payload.
  3. On top, apply merge mappings as defined at the workflow elements.

In addition to source and target expressions, merge mappings have a third parameter type with these values:

  • PUT: Puts the source value at the target path. This is the default mapping type.
  • COLLECT: Collects the source value in an array at the target path.
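The difference between PUT and COLLECT can be sketched as follows; again only the simple `$.field` path subset is supported:

```python
# Sketch of the two merge mapping types: PUT writes the value at the
# target path, COLLECT appends it to an array at the target path.
def merge(target_doc: dict, value, target_path: str, mapping_type: str = "PUT"):
    key = target_path.removeprefix("$.")
    if mapping_type == "COLLECT":
        target_doc.setdefault(key, []).append(value)
    else:  # PUT (the default)
        target_doc[key] = value

doc = {}
merge(doc, 99, "$.prices", "COLLECT")
merge(doc, 120, "$.prices", "COLLECT")
merge(doc, 99, "$.sum", "PUT")
assert doc == {"prices": [99, 120], "sum": 99}
```

COLLECT is what you want when several incoming paths contribute a value each, for example collecting flight prices from parallel branches.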

Example:

<bpmn:sequenceFlow id="flow1">
  <extensionElements>
    <zeebe:payloadMappings>
      <zeebe:mapping source="$.flightPrice" target="$.prices" type="COLLECT" />
    </zeebe:payloadMappings>
  </extensionElements>
</bpmn:sequenceFlow>

Additional Resources:

Tasks

Currently supported elements:

Service Tasks

workflow

A service task represents a work item in the workflow with a specific type. When a workflow instance arrives at a service task, a corresponding job is created. The token flow stops at this point.

A worker can subscribe to these jobs and complete them when the work is done. When a job is completed, the token flow continues.

Read more about job handling.

XML representation:

<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
    <zeebe:taskHeaders>
      <zeebe:header key="method" value="VISA" />
    </zeebe:taskHeaders>
  </bpmn:extensionElements>
</bpmn:serviceTask>

BPMN Modeler: Click Here

Task Definition

Each service task must have a task definition. It specifies the type of the job which workers can subscribe to.

Optionally, a task definition can specify the number of times the job is retried when a worker signals failure (default = 3).

<zeebe:taskDefinition type="payment-service" retries="5" />

BPMN Modeler: Click Here

Task Headers

A service task can define an arbitrary number of task headers. Task headers are metadata that are handed to workers along with the job. They can be used as configuration parameters for the worker.

<zeebe:taskHeaders>
  <zeebe:header key="method" value="VISA" />
</zeebe:taskHeaders>

BPMN Modeler: Click Here

Payload Mapping

In order to map workflow instance payload to a format that is accepted by the job worker, payload mappings can be configured. We distinguish between input and output mappings. Input mappings are used to extract data from the workflow instance payload and generate the job payload. Output mappings are used to merge the job result with the workflow instance payload on job completion.

Payload mappings can be defined as pairs of JSON-Path expressions. Each mapping has a source and a target expression. The source expression describes the path in the source document from which to copy data. The target expression describes the path in the new document that the data should be copied to. When multiple mappings are defined, they are applied in the order of their appearance. For details and examples, see the references on JSONPath and JSON Payload Mapping.

Example:

payload

XML representation:

<serviceTask id="collectMoney">
    <extensionElements>
      <zeebe:ioMapping>
        <zeebe:input source="$.price" target="$.total"/>
        <zeebe:output source="$.paymentMethod" target="$.paymentMethod"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>

BPMN Modeler: Click Here

Receive Tasks

Receive tasks are tasks that reference a message. They can be used to wait until a matching message is received.

Messages

A message can be referenced by one or more receive tasks. It holds the information used for message correlation. The required attributes are:

  • the name of the message
  • the correlation key

The correlation key is specified as a JSON Path expression. It is evaluated when the receive task is entered and extracts the value from the workflow instance payload. The value must be either a string or a number.
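Correlation key extraction can be sketched as follows; only the simple `$.field` subset of JSON Path is modeled here:

```python
# Sketch of correlation key extraction: evaluate the configured
# expression against the workflow instance payload and require the
# result to be a string or a number.
def correlation_key(payload: dict, expression: str):
    value = payload[expression.removeprefix("$.")]
    # bool is excluded explicitly because it is a subclass of int in Python
    if not isinstance(value, (str, int, float)) or isinstance(value, bool):
        raise ValueError("correlation key must be a string or a number")
    return value

assert correlation_key({"orderId": "order-123"}, "$.orderId") == "order-123"
```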

XML representation:

<bpmn:message id="Message_1iz5qtq" name="Money collected">
   <bpmn:extensionElements>
     <zeebe:subscription correlationKey="$.orderId" />
   </bpmn:extensionElements>
</bpmn:message>

Receive Tasks

When a token arrives at the receive task, it waits there until a matching message is correlated. Correlation is based on the message name and the correlation key. When a message is correlated, its payload is merged into the workflow instance payload and the token continues past the task.

Read more about message correlation.

XML representation:

<bpmn:receiveTask id="IntermediateThrowEvent_090vvrf" name="Money collected"
    messageRef="Message_1iz5qtq">
</bpmn:receiveTask>

Message intermediate catch events are an alternative to receive tasks.

Gateways

Currently supported elements:

workflow workflow

Exclusive Gateway (XOR)

workflow

An exclusive gateway chooses one of its outgoing sequence flows for continuation. Each sequence flow has a condition that is evaluated in the context of the current workflow instance payload. The workflow instance takes the first sequence flow whose condition is fulfilled.

If no condition is fulfilled, the default flow, which has no condition, is taken. If the gateway has no default flow (not recommended), execution stops and an incident is created.

Read more about conditions in the JSON Conditions reference.
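The selection logic can be sketched as follows; a simplified model with conditions given as plain Python predicates rather than JSON condition expressions:

```python
# Sketch of exclusive gateway semantics: take the first outgoing flow
# whose condition holds, else the default flow; with no default, an
# incident is raised.
def choose_flow(payload: dict, flows: list, default=None):
    for flow_id, condition in flows:   # conditions are checked in order
        if condition(payload):
            return flow_id
    if default is not None:
        return default
    raise RuntimeError("incident: no condition fulfilled and no default flow")

flows = [("priceGreaterThan100", lambda p: p["totalPrice"] > 100)]
assert choose_flow({"totalPrice": 130}, flows, default="else") == "priceGreaterThan100"
assert choose_flow({"totalPrice": 40}, flows, default="else") == "else"
```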

XML representation:

<bpmn:exclusiveGateway id="exclusiveGateway" default="else" />

<bpmn:sequenceFlow id="priceGreaterThan100" name="$.totalPrice &#62; 100" sourceRef="exclusiveGateway" targetRef="shipParcelWithInsurance">
  <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">
    <![CDATA[ $.totalPrice > 100 ]]>
  </bpmn:conditionExpression>
</bpmn:sequenceFlow>

<bpmn:sequenceFlow id="else" name="else" sourceRef="exclusiveGateway" targetRef="shipParcel" />

BPMN Modeler: Click Here

Parallel Gateway (AND)

workflow

A parallel gateway is only activated when a token has arrived on each of its incoming sequence flows. Once activated, all outgoing sequence flows are taken, so with multiple outgoing sequence flows the branches are executed concurrently. Execution progresses independently until a synchronizing element is reached, for example another merging parallel gateway.

Payload Merge

When the gateway is activated, the payloads of each incoming path are merged into one document. A copy of this document is then propagated on each of the outgoing sequence flows. If fine-grained control over the merging process is required, a merging mapping can be added to the gateway's incoming sequence flows. See the section on merging mappings for details.

XML representation:

<bpmn:sequenceFlow id="flow1">
  <extensionElements>
    <zeebe:payloadMappings>
      <zeebe:mapping source="$.total" target="$.sum" type="PUT" />
      <zeebe:mapping source="$.flightPrice" target="$.prices" type="COLLECT" />
    </zeebe:payloadMappings>
  </extensionElements>
</bpmn:sequenceFlow>

Payload Split

On activation, the merged payload is propagated on each of the outgoing sequence flows.

Events

Currently supported elements:

None Events

None events are unspecified events, also called ‘blank’ events.

None Start Events

workflow

A workflow must have exactly one none start event. The event is triggered when the workflow is started via the API; in consequence, a token is spawned at the event.

XML representation:

<bpmn:startEvent id="order-placed" name="Order Placed" />

None End Events

workflow

A workflow can have one or more none end events. When a token arrives at an end event, it is consumed. If it is the last token within a scope (top-level workflow or sub process), the scope instance ends.

XML representation:

<bpmn:endEvent id="order-delivered" name="Order Delivered" />

Note that an activity without outgoing sequence flow has the same semantics as a none end event. After the task is completed, the token is consumed and the workflow instance may end.

Payload Merge

When all execution paths within a scope have ended, then the respective payloads are merged into one document that is used for scope completion. If fine-grained control over the merging process is required, then a merging mapping can be used. See the section on merging mappings for details.

XML representation:

<bpmn:endEvent id="order-delivered" name="Order Delivered">
  <extensionElements>
    <zeebe:payloadMappings>
      <zeebe:mapping source="$.paymentMethod" target="$.paymentMethod" />
      <zeebe:mapping source="$.price" target="$.prices" type="COLLECT" />
    </zeebe:payloadMappings>
  </extensionElements>
</bpmn:endEvent>

Message Events

Message events are events that reference a message. They can be used to wait until a matching message is received.

Currently, messages can be published only externally using one of the Zeebe clients.

Messages

A message can be referenced by one or more message events. It holds the information used for message correlation. The required attributes are:

  • the name of the message
  • the correlation key

The correlation key is specified as a JSON Path expression. It is evaluated when the message event is entered and extracts the value from the workflow instance payload. The value must be either a string or a number.

XML representation:

<bpmn:message id="Message_1iz5qtq" name="Money collected">
   <bpmn:extensionElements>
     <zeebe:subscription correlationKey="$.orderId" />
   </bpmn:extensionElements>
</bpmn:message>

Message Intermediate Catch Events

workflow

When a token arrives at the message intermediate catch event, it waits there until a matching message is correlated. Correlation is based on the message name and the correlation key. When a message is correlated, its payload is merged into the workflow instance payload and the token continues past the event.

Read more about message correlation.

XML representation:

<bpmn:intermediateCatchEvent id="IntermediateThrowEvent_090vvrf" name="Money collected">
  <bpmn:messageEventDefinition messageRef="Message_1iz5qtq" />
</bpmn:intermediateCatchEvent>

Receive tasks are an alternative to message intermediate catch events.

Sub Processes

Currently supported elements:

embedded-subprocess

Embedded Sub Process

An embedded sub process can be used to group workflow elements. It must have a single none start event. When activated, execution starts at that start event. The sub process only completes when all contained paths of execution have ended.

XML representation:

<bpmn:subProcess id="shipping" name="Shipping">
  <bpmn:startEvent id="shipping-start" />
  ... more contained elements ...
</bpmn:subProcess>

Payload Mapping

When a subprocess completes, the payloads of each executed end event are merged into the result document of the subprocess. See the end events section for how to configure the merge.

BPMN Modeler

Zeebe Modeler is a tool to create BPMN 2.0 models that run on Zeebe.

overview

Learn how to create models:

Introduction

The modeler's basic elements are:

  • Element/Tool Palette
  • Context Menu
  • Properties Panel

Palette

The palette can be used to add new elements to the diagram. It also provides tools for navigating the diagram.

intro-palette

Context Menu

The context menu of a model element allows you to quickly delete it as well as append follow-up elements.

intro-context-menu

Properties Panel

The properties panel can be used to configure element properties required for execution. Select an element to inspect and modify its properties.

intro-properties-panel

Tasks

Create a Service Task

create-task

Configure Job Type

task-type

Add Task Header

task-header

Add Input/Output Mapping

task-mapping

Gateways

Create an Exclusive Gateway

create-exclusive-gw

Configure Sequence Flow Condition

exclusive-gw-condition

Configure Default Flow

exclusive-gw-default

YAML Workflows

In addition to BPMN, Zeebe provides a YAML format to define workflows. Creating a YAML workflow can be done with a regular text editor and does not require a graphical modelling tool. It is inspired by imperative programming concepts and aims to be easily understandable by programmers. Internally, Zeebe transforms a deployed YAML file to BPMN.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Read more about:

Tasks

A workflow can contain multiple tasks, where each represents a step in the workflow.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      retries: 5

    - id: ship-parcel
      type: shipment-service
      headers:
            method: "express"
            withInsurance: false

Each task has the following properties:

  • id (required): the unique identifier of the task.
  • type (required): the name to which job workers can subscribe.
  • retries: the number of times the job is retried in case of failure (default = 3).
  • headers: a list of metadata in the form of key-value pairs that can be accessed by a worker.

When Zeebe executes a task, it creates a job that is handed to a job worker. The worker performs the business logic and eventually completes the job, which triggers continuation of the workflow.

Related resources:

Control Flow

Control flow is about the order in which tasks are executed. The YAML format provides tools to decide which task is executed when.

Sequences

In a sequence, a task is executed after the previous one is completed. By default, tasks are executed top-down as they are declared in the YAML file.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

In the example above, the workflow starts with collect-money, followed by fetch-items and ends with ship-parcel.

We can use the goto and end attributes to define a different order:

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      goto: ship-parcel

    - id: fetch-items
      type: inventory-service
      end: true

    - id: ship-parcel
      type: shipment-service
      goto: fetch-items

In the above example, we have reversed the order of fetch-items and ship-parcel. Note that the end attribute is required so that workflow execution stops after fetch-items.

Data-based Conditions

Some workflows do not always execute the same tasks but need to pick and choose different tasks based on the workflow instance payload.

To achieve this, the switch attribute together with JSON-Path-based conditions can be used.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      switch:
          - case: $.totalPrice > 100
            goto: ship-parcel-with-insurance

          - default: ship-parcel

    - id: ship-parcel-with-insurance
      type: shipment-service-premium
      end: true

    - id: ship-parcel
      type: shipment-service

In the above example, the order-process starts with collect-money, followed by fetch-items. If the totalPrice value in the workflow instance payload is greater than 100, then it continues with ship-parcel-with-insurance. Otherwise, ship-parcel is chosen. In either case, the workflow instance ends after that.

In the switch element, there is one case element per alternative to choose from. If none of the conditions evaluates to true, then the default element is evaluated. While default is not required, it is best practice to include it to avoid errors at workflow runtime. Should such an error occur (i.e., no case is fulfilled and there is no default), workflow execution stops and an incident is raised.
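The case/default/incident semantics can be sketched as follows. This is illustrative Java, not Zeebe internals; for simplicity, conditions are reduced to predicates on the totalPrice value:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch of switch evaluation: the first case whose condition
// is true wins, otherwise the default is taken, and with no default an
// "incident" is raised (modeled here as an exception).
public class SwitchSketch {

    public static class Case {
        public final Predicate<Double> condition;
        public final String gotoTask;

        public Case(Predicate<Double> condition, String gotoTask) {
            this.condition = condition;
            this.gotoTask = gotoTask;
        }
    }

    // Returns the id of the next task to execute.
    public static String next(List<Case> cases, String defaultTask, double totalPrice) {
        for (Case c : cases) {
            if (c.condition.test(totalPrice)) {
                return c.gotoTask;
            }
        }
        if (defaultTask != null) {
            return defaultTask;
        }
        throw new IllegalStateException("incident: no case fulfilled and no default");
    }
}
```

With the example workflow above, a totalPrice of 130 selects ship-parcel-with-insurance and a totalPrice of 99 falls through to the default, ship-parcel.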


Data Flow

Zeebe carries a JSON document from task to task. This is called the workflow instance payload. For every task, we can define input and output mappings in order to transform the workflow instance payload JSON to a JSON document that the job worker can work with.

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      inputs:
          - source: $.totalPrice
            target: $.price
      outputs:
          - source: $.success
            target: $.paymentSuccess

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Every mapping element has a source and a target element which must be a JSON Path expression. source defines which data is extracted from the source payload and target defines how the value is inserted into the target payload.
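The extract-then-insert behavior of a mapping can be sketched on plain nested maps. This is a simplified illustration (dotted object paths only, no array subscripts), not the broker's actual mapping engine:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: apply one source/target mapping by extracting a value
// from the source document and inserting it into the target document.
public class MappingSketch {

    // Extracts the value at a path like "$.address.city" from nested maps.
    @SuppressWarnings("unchecked")
    public static Object extract(Map<String, Object> doc, String path) {
        Object current = doc;
        for (String key : path.substring(2).split("\\.")) {
            current = ((Map<String, Object>) current).get(key);
        }
        return current;
    }

    // Inserts a value at a path like "$.order.price", creating intermediate
    // objects along the way.
    @SuppressWarnings("unchecked")
    public static void insert(Map<String, Object> doc, String path, Object value) {
        String[] keys = path.substring(2).split("\\.");
        Map<String, Object> current = doc;
        for (int i = 0; i < keys.length - 1; i++) {
            current = (Map<String, Object>) current
                .computeIfAbsent(keys[i], k -> new HashMap<String, Object>());
        }
        current.put(keys[keys.length - 1], value);
    }

    // Applies one mapping element: copy source from 'from' into 'to' at target.
    public static void apply(Map<String, Object> from, Map<String, Object> to,
                             String source, String target) {
        insert(to, target, extract(from, source));
    }
}
```

For the input mapping in the YAML above, applying source $.totalPrice and target $.price moves the workflow instance's totalPrice value into the task payload under price.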


Reference

This section gives in-depth explanations of Zeebe usage concepts.

Partitions

Every event in Zeebe belongs to a partition. All existing partitions share a common set of workflow definitions. It is up to you to define the granularity of partitions. This section provides assistance with doing that.

Recommendations

Choosing the number of partitions depends on the use case, workload and cluster setup. Here are some rules of thumb:

  • For testing and early development, start with a single partition. Note that Zeebe's workflow processing is highly optimized for efficiency, so a single partition can already handle high event loads. See the Performance section for details.
  • With a single Zeebe Broker, a single partition is always enough as there is nothing to scale to.
  • Base your decisions on data. Simulate the expected workload, measure and compare the performance of different partition setups.

Job Handling

TODO: Concepts of task handling via subscriptions, lifecycle, retries, incidents, etc.

Message Correlation

Message correlation describes how a message is correlated to a workflow instance (i.e. to a message catch event).

In Zeebe, a message is not sent to a workflow instance directly. Instead, it is published with correlation information. When a workflow instance is waiting at a message catch event that specifies the same correlation information, the message is correlated to that workflow instance.

The correlation information contains the name of the message and the correlation key. In the workflow, the correlation key is given as a JSON Path expression and is extracted from the workflow instance payload.

Message Correlation

For example:

An instance of the order workflow is created and waits at the message catch event until the money is collected. The message catch event specifies the message name Money collected and the correlation key $.orderId. Given a workflow instance payload containing "orderId": "order-123", the key resolves to order-123.

A message is published by the payment service using one of the Zeebe clients. It has the name Money collected and the correlation key order-123. Since the correlation information matches, the message is correlated to the workflow instance: its payload is merged into the workflow instance payload and the workflow instance continues past the message catch event.

Note that a message can be correlated to multiple workflow instances if they share the same correlation information.
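The matching rule, including correlation to multiple instances, can be sketched as follows (an illustration of the rule above, not the broker's implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a published message correlates to every workflow
// instance waiting on the same message name and resolved correlation key.
public class CorrelationSketch {

    public static class WaitingInstance {
        public final long key;               // workflow instance key
        public final String messageName;     // e.g. "Money collected"
        public final String correlationKey;  // resolved from payload, e.g. "order-123"

        public WaitingInstance(long key, String messageName, String correlationKey) {
            this.key = key;
            this.messageName = messageName;
            this.correlationKey = correlationKey;
        }
    }

    // Returns the keys of all waiting instances the message correlates to.
    public static List<Long> correlate(List<WaitingInstance> waiting,
                                       String messageName, String correlationKey) {
        List<Long> matches = new ArrayList<>();
        for (WaitingInstance w : waiting) {
            if (w.messageName.equals(messageName)
                    && w.correlationKey.equals(correlationKey)) {
                matches.add(w.key);
            }
        }
        return matches;
    }
}
```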

Message Buffering

In Zeebe, messages can be buffered for a given time. Buffering can be useful in a situation when it is not guaranteed that the message catch event is entered before the message is published.

A message has a time-to-live (TTL) which specifies how long it is buffered. Within this time, the message can be correlated to a workflow instance.

When a workflow instance enters a message catch event, it polls the buffer for a matching message. If a matching message exists, it is correlated to the workflow instance. In case multiple messages match the correlation information, the first published message is correlated. In this respect, the buffer behaves like a queue.

Buffering is disabled when the TTL of a message is set to zero: if such a message can't be correlated to a workflow instance immediately, it is discarded.

Message Uniqueness

A message can contain a unique id to ensure that it is published only once (i.e., publishing is idempotent). The id can be any string, for example, a request id, a tracking number or the offset/position in a message queue.

When a message is published, the broker checks whether a message with the same name, correlation key and id already exists in the buffer. If so, the new message is rejected and not correlated.

Note that the uniqueness check only looks into the buffer. If a message is discarded from the buffer then a message with the same name, correlation key and id can be published afterward.
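The buffering and uniqueness rules can be sketched together in one small class. This is illustrative only; time is passed in as an explicit 'now' parameter for simplicity, whereas the real broker uses wall-clock TTLs:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the message buffer: TTL-bounded buffering,
// queue-like (oldest-first) polling, and the uniqueness check that only
// looks at messages still in the buffer.
public class MessageBufferSketch {

    static class BufferedMessage {
        final String name, correlationKey, id;
        final long expiresAt;

        BufferedMessage(String name, String correlationKey, String id, long expiresAt) {
            this.name = name;
            this.correlationKey = correlationKey;
            this.id = id;
            this.expiresAt = expiresAt;
        }
    }

    private final List<BufferedMessage> buffer = new ArrayList<>();

    // Publishes a message; returns false (rejected) if a message with the
    // same name, correlation key and id is still buffered.
    public boolean publish(String name, String key, String id, long ttl, long now) {
        buffer.removeIf(m -> m.expiresAt <= now); // drop expired messages first
        for (BufferedMessage m : buffer) {
            if (m.name.equals(name) && m.correlationKey.equals(key) && m.id.equals(id)) {
                return false; // uniqueness check: duplicate rejected
            }
        }
        if (ttl > 0) {
            buffer.add(new BufferedMessage(name, key, id, now + ttl));
        }
        return true; // with ttl == 0 the message is not buffered at all
    }

    // A message catch event polls for the oldest matching, unexpired message.
    public String poll(String name, String key, long now) {
        buffer.removeIf(m -> m.expiresAt <= now);
        for (BufferedMessage m : buffer) {
            if (m.name.equals(name) && m.correlationKey.equals(key)) {
                buffer.remove(m);
                return m.id;
            }
        }
        return null;
    }
}
```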

JSON

MessagePack

For performance reasons, JSON is encoded using MessagePack. MessagePack allows the broker to traverse a JSON document on the binary level, without interpreting it as text and without the need for complex parsing.

As a user, you do not need to deal with MessagePack. The client libraries take care of converting between MessagePack and JSON.

JSONPath

JSONPath is an expression language to extract data from a JSON document. It solves use cases similar to those XPath solves for XML. See the JSONPath documentation for general information on the language.

JSONPath Support in Zeebe

The following table contains the JSONPath features/syntax which are supported by Zeebe:

| JSONPath | Description | Supported |
|----------|-------------|-----------|
| $ | The root object/element | Yes |
| @ | The current object/element | No |
| . or [] | Child operator | Yes |
| .. | Recursive descent | No |
| * | Wildcard, matches all objects/elements regardless of their names | Yes |
| [] | Subscript operator | Yes |
| [,] | Union operator | No |
| [start:end:step] | Array slice operator | No |
| ?() | Applies a filter (script) expression | No |
| () | Script expression, using the underlying script engine | No |

JSON Conditions

Conditions can be used for conditional flows to determine the following task.

A condition is a boolean expression with a JavaScript-like syntax. It allows you to compare properties of the workflow instance payload with other properties or with literals (e.g., numbers, strings). Payload properties are selected using JSON Path.

$.totalPrice > 100

$.owner == "Paul"

$.orderCount >= 5 && $.orderCount < 15

Literals

| Literal | Examples |
|---------|----------|
| JSON Path | $.totalPrice, $.order.id, $.items[0].name |
| Number | 25, 4.5, -3, -5.5 |
| String | "Paul", 'Jonny' |
| Boolean | true, false |
| Null-Value | null |

A Null-Value can be used to check if a property is set (e.g., $.owner == null). If the property doesn't exist, then the JSON Path evaluation fails.

Comparison Operators

| Operator | Description | Example |
|----------|-------------|---------|
| == | equal to | $.owner == "Paul" |
| != | not equal to | $.owner != "Paul" |
| < | less than | $.totalPrice < 25 |
| <= | less than or equal to | $.totalPrice <= 25 |
| > | greater than | $.totalPrice > 25 |
| >= | greater than or equal to | $.totalPrice >= 25 |

The operators <, <=, > and >= can only be used for numbers.

If the values of an operator have different types, then the evaluation fails.

Logical Operators

| Operator | Description | Example |
|----------|-------------|---------|
| && | and | $.orderCount >= 5 && $.orderCount < 15 |
| \|\| | or | $.orderCount > 15 \|\| $.totalPrice > 50 |

It's also possible to use parentheses around the operators to change the precedence (e.g., ($.owner == "Paul" || $.owner == "Jonny") && $.totalPrice > 25).
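To make the comparison rules concrete, here is a tiny evaluator sketch. It is an illustration only and deliberately simplified: JSON Path resolution is out of scope, and == on operands of different types returns false here, whereas Zeebe fails the evaluation:

```java
// Illustrative sketch: evaluate a single comparison. ==/!= work on values
// of the same type; <, <=, > and >= are only defined for numbers, and
// non-numeric operands make the evaluation fail.
public class ConditionSketch {

    public static boolean compare(Object left, String op, Object right) {
        if (op.equals("==")) {
            return left == null ? right == null : left.equals(right);
        }
        if (op.equals("!=")) {
            return !compare(left, "==", right);
        }
        if (!(left instanceof Number) || !(right instanceof Number)) {
            throw new IllegalArgumentException(
                "evaluation fails: non-numeric operands for " + op);
        }
        double l = ((Number) left).doubleValue();
        double r = ((Number) right).doubleValue();
        switch (op) {
            case "<":  return l < r;
            case "<=": return l <= r;
            case ">":  return l > r;
            case ">=": return l >= r;
            default:   throw new IllegalArgumentException("unknown operator " + op);
        }
    }
}
```

For example, with a payload where totalPrice is 130, the condition $.totalPrice > 100 evaluates to true.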

JSON Payload Mapping

This section provides examples for mapping JSON payloads. In the context of workflow execution, there are three types of mappings:

  1. Input mappings map workflow instance payload to task payload.
  2. Output mappings map task payload back into workflow instance payload.
  3. Merging mappings combine payloads on end events or on sequence flows that lead to parallel gateways.

Input Mapping

Description Workflow Instance Payload Input Mapping Task Payload
Default
{
 "price": 342.99,
 "productId": 41234
}
    
none
    
{
 "price": 342.99,
 "productId": 41234
}
    
Copy entire payload
{
 "price": 342.99,
 "productId": 41234
}
    
Source: $
Target: $
    
{
 "price": 342.99,
 "productId": 41234
}
    
Move payload into new object
{
 "price": 342.99,
 "productId": 41234
}
    
Source: $
Target: $.orderedItem
    
{
  "orderedItem": {
    "price": 342.99,
    "productId": 41234
  }
}
    
Extract object
{
 "address": {
    "street": "Borrowway 1",
    "postcode": "SO40 9DA",
    "city": "Southampton",
    "country": "UK"
  },
 "name": "Hans Horn"
}
    
Source: $.address
Target: $
    
{
  "street": "Borrowway 1",
  "postcode": "SO40 9DA",
  "city": "Southampton",
  "country": "UK"
}
    
Extract and put into new object
{
 "address": {
    "street": "Borrowway 1",
    "postcode": "SO40 9DA",
    "city": "Southampton",
    "country": "UK"
  },
 "name": "Hans Horn"
}
    
Source: $.address
Target: $.newAddress
    
{
 "newAddress":{
  "street": "Borrowway",
  "postcode": "SO40 9DA",
  "city": "Southampton",
  "country": "UK"
 }
}
    
Extract and put into new objects
{
 "order":
 {
  "customer:{
   "name": "Hans Horst",
   "customerId": 231
  },
  "price": 34.99
 }
}
    
Source: $.order.customer
Target: $.new.details
    
{
 "new":{
   "details": {
     "name": "Hans Horst",
     "customerId": 231
  }
 }
}
    
Extract array and put into new array
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers
Target: $.contactNrs
    
{
 "contactNrs": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ]
}
    
Extract single array value and put into new array
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers[1]
Target: $.contactNrs[0]
    
{
 "contactNrs": [
   "312-312313"
  ]
}
    
Extract single array value and put into property
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers[1]
Target: $.contactNr
    
{
 "contactNr": "312-312313"
}
    

Output Mapping

All examples assume merge output behavior.

Description Job Payload Workflow Instance Payload Output Mapping Result
Default Merge
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
 none 
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "sum": 234.97
}
  
Default Merge without workflow payload
{
 "sum": 234.97
}
  
{
}
  
 none 
{
 "sum": 234.97
}
  
Replace with entire payload
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $
Target: $
  
{
 "sum": 234.97
}
  
Merge payload and write into new object
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $
Target: $.total
  
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "total": {
  "sum": 234.97
 }
}
  
Replace payload with object value
{
 "order":{
  "id": 12,
  "sum": 21.23
 }
}
  
{
 "ordering": true
}
  
Source: $.order
Target: $
  
{
  "id": 12,
  "sum": 21.23
}
  
Merge payload and write into new property
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $.sum
Target: $.total
  
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "total": 234.97
}
  
Merge payload and write into array
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
{
 "orderId": 12
}
  
Source: $.prices
Target: $.prices
  
{
 "orderId": 12,
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Merge and update array value
{
 "newPrices": [
   199.99,
   99.99,
   4.99]
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $.newPrices[1]
Target: $.prices[0]
  
{
 "prices": [
   99.99,
   29.99,
   4.99]
}
  
Extract array value and write into payload
{
 "newPrices": [
   199.99,
   99.99,
   4.99]
}
  
{
 "orderId": 12
}
  
Source: $.newPrices[1]
Target: $.price
  
{
 "orderId": 12,
 "price": 99.99
}
  

Merging Mapping

workflow

Description Sequence Flow Payload Mapping Result
Default Merge flow1:
{
 "orderId": "XY67C"
}
  
flow2:
{
 "total": 200.00
}
  
 none 
{
 "orderId": "XY67C",
 "total": 200.00
}
  
PUT instruction flow1:
{
 "orderId": "XY67C"
}
  
flow2:
{
 "total": 200.00
}
  
flow2:
Source: $.total
Target: $.sum
Type: PUT
  
{
 "orderId": "XY67C",
 "sum": 200.00,
 "total": 200.00
}
  
COLLECT instruction flow1:
{
 "item1Price": 130.99
}
  
flow2:
{
 "item2Price": 49.99
}
  
flow1:
Source: $.item1Price
Target: $.prices
Type: COLLECT
  

flow2:

Source: $.item2Price
Target: $.prices
Type: COLLECT

{
 "item1Price": 130.99,
 "item2Price": 49.99,
 "prices": [130.99, 49.99]
}
  

Exporters

Regardless of how an exporter is loaded (whether through an external JAR or not), all exporters interact in the same way with the broker, which is defined by the Exporter interface.

Loading

Once configured, exporters are loaded as part of the broker startup phase, before any processing is done.

During the loading phase, the configuration for each exporter is validated, such that the broker will not start if:

  • An exporter ID is not unique
  • An exporter points to a non-existent/non-accessible JAR
  • An exporter points to a non-existent/non-instantiable class
  • An exporter instance throws an exception in its Exporter#configure method.

The last point exists to allow individual exporters to perform lightweight validation of their configuration (e.g., fail on missing arguments).

One caveat of the last point is that an instance of the exporter is created and immediately thrown away; therefore, exporters should not perform any computationally heavy work during instantiation/configuration.

Note: Zeebe will create an isolated class loader for every JAR referenced by exporter configurations - that is, only once per JAR; if the same JAR is reused to define different exporters, then these will share the same class loader.

This has some nice properties, primarily that different exporters can depend on the same third party libraries without having to worry about versions, or class name collisions.

Additionally, exporters use the system class loader for system classes, or classes packaged as part of the Zeebe JAR.

Exporter-specific configuration is handled through the exporter's [exporters.args] nested map. This provides a simple Map<String, Object> which is passed directly, in the form of a Configuration object, when the broker calls the Exporter#configure(Configuration) method.
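A configure step that validates its args map might look like the sketch below. This is a hypothetical exporter, not a real one; the method shape only loosely mirrors the Exporter interface, and the url/batchSize arguments are invented for illustration. Throwing here is what (per the loading rules above) prevents the broker from starting with a bad configuration:

```java
import java.util.Map;

// Hypothetical exporter sketch: read the [exporters.args] map, fail fast
// on missing required arguments, and apply defaults for optional ones.
public class ArgsValidationSketch {

    private String url;
    private int batchSize;

    // Called once with the exporter-specific args map.
    public void configure(Map<String, Object> args) {
        Object url = args.get("url");
        if (url == null) {
            // lightweight validation: reject an incomplete configuration
            throw new IllegalArgumentException("missing required arg: url");
        }
        this.url = url.toString();

        // optional argument with a default value
        Object batchSize = args.getOrDefault("batchSize", 100);
        this.batchSize = ((Number) batchSize).intValue();
    }

    public String url() { return url; }

    public int batchSize() { return batchSize; }
}
```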

Configuration occurs at two different phases: during the broker startup phase, and once every time a leader is elected for a partition.

Processing

At any given point, there is exactly one leader node for a given partition. Whenever a node becomes the leader for a partition, one of the things it will do is run an instance of an exporter stream processor.

This stream processor will create exactly one instance of each configured exporter, and forward every record written on the stream to each of these in turn.

Note: this implies that there will be exactly one instance of every exporter for every partition: if you have 4 partitions, and at least 4 threads for processing, then there are potentially 4 instances of your exporter exporting simultaneously.

Note that Zeebe only guarantees at-least-once semantics; that is, a record will be seen at least once by an exporter, and possibly more than once. Cases where this may happen include:

  • During reprocessing after raft failover (i.e. new leader election)
  • On error if the position has not been updated yet

To reduce the amount of duplicate records an exporter will process, the stream processor keeps track of the position of the last successfully exported record for every single exporter; the position is sufficient because a stream is an ordered sequence of records whose positions are monotonically increasing. This position is set by the exporter itself once it can guarantee a record has been successfully exported.
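The position bookkeeping described above can be sketched as follows. This is an illustration of the idea, not the broker's actual code: the stream hands every record (possibly more than once) to the exporter, which advances its last exported position only after a successful export, so records at or below that position are skipped on replay:

```java
// Illustrative sketch: deduplicate replayed records using the position of
// the last successfully exported record.
public class PositionTrackingSketch {

    private long lastExportedPosition = -1;
    private int exportedCount = 0;

    // Returns true if the record was exported, false if it was a duplicate.
    public boolean onRecord(long position) {
        if (position <= lastExportedPosition) {
            return false; // already exported: skip the duplicate
        }
        exportedCount++;                 // "export" the record
        lastExportedPosition = position; // acknowledge only after success
        return true;
    }

    public int exportedCount() { return exportedCount; }
}
```

Note that this only narrows the duplicate window; a crash between the export and the position update still re-delivers the record, which is why export operations must be idempotent.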

Note: although Zeebe tries to reduce the amount of duplicate records an exporter has to handle, it is likely that some duplicates will still arrive; therefore, it is necessary that export operations be idempotent.

This deduplication can be implemented in the exporter itself, but if it exports to an external system, it is recommended that you perform it there to reduce the load on Zeebe itself. Refer to the exporter-specific documentation for how this is meant to be achieved.

Error handling

If an error occurs during the Exporter#open(Context) phase, the stream processor will fail and be restarted, potentially fixing the error; worst case scenario, this means no exporter is running at all until these errors stop.

If an error occurs during the Exporter#close phase, it is logged, but other exporters are still allowed to gracefully finish their work.

If an error occurs during processing, the same record is retried indefinitely until no error is produced. In the worst case, this means a failing exporter could bring all exporters to a halt. Currently, exporter implementations are expected to implement their own retry/error-handling strategies, though this may change in the future.

Performance impact

Zeebe naturally incurs a performance impact for each loaded exporter. A slow exporter will slow down all other exporters for a given partition, and, in the worst case, could completely block a thread.

It's therefore recommended to keep exporters as simple as possible, and perform any data enrichment or transformation through the external system.

Zeebe Java Client

See also the Java Client Examples section.

Setting up the Zeebe Java Client

Prerequisites

  • Java 8

Usage in a Maven project

To use the Java client library, declare the following Maven dependency in your project:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

The version of the client should always match the broker's version.

Bootstrapping

In Java code, instantiate the client as follows:

ZeebeClient client = ZeebeClient.newClientBuilder()
  .brokerContactPoint("127.0.0.1:26500")
  .build();

See the class io.zeebe.client.ZeebeClientBuilder for a description of all available configuration properties.

Get Started with the Java client

In this tutorial, you will learn to use the Java client in a Java application to interact with Zeebe.


You can find the complete source code, including the BPMN diagrams, on GitHub.

Prerequisites

Before you begin to set up your project, please start the broker, e.g. by running the startup script bin/broker or bin/broker.bat in the distribution. By default, the broker binds to the address localhost:26500, which is used as the contact point in this guide. If your broker is available under another address, adjust the broker contact point when building the client.

Set up a project

First, we need a Maven project. Create a new project using your IDE, or run the Maven command:

mvn archetype:generate \
    -DgroupId=io.zeebe \
    -DartifactId=zeebe-get-started-java-client \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false

Add the Zeebe client library as dependency to the project's pom.xml:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

Create a main class and add the following lines to bootstrap the Zeebe client:

package io.zeebe;

import java.util.Properties;
import io.zeebe.client.ClientProperties;
import io.zeebe.client.ZeebeClient;

public class Application
{
    public static void main(String[] args)
    {
        final ZeebeClient client = ZeebeClient.newClientBuilder()
            // change the contact point if needed
            .brokerContactPoint("127.0.0.1:26500")
            .build();

        System.out.println("Connected.");

        // ...

        client.close();
        System.out.println("Closed.");
    }
}

Run the program. If you use an IDE, you can just execute the main class. Otherwise, you must build an executable JAR file with Maven and execute it. (See the GitHub repository on how to do this.)

You should see the output:

Connected.

Closed.

Model a workflow

Now, we need a first workflow which can then be deployed. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id (i.e., the BPMN process id) and mark the diagram as executable. Save the diagram in the project's source folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker. The broker stores the workflow under its BPMN process id and assigns a version (i.e., the revision).

Add the following deploy command to the main class:

package io.zeebe;

import io.zeebe.client.api.events.DeploymentEvent;

public class Application
{
    public static void main(String[] args)
    {
        // after the client is connected

        final DeploymentEvent deployment = client.workflowClient()
            .newDeployCommand()
            .addResourceFromClasspath("order-process.bpmn")
            .send()
            .join();

        final int version = deployment.getWorkflows().get(0).getVersion();
        System.out.println("Workflow deployed. Version: " + version);

        // ...
    }
}

Run the program and verify that the workflow is deployed successfully. You should see the output:

Workflow deployed. Version: 1

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow, which can be set on creation.

Add the following create command to the main class:

package io.zeebe;

import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow is deployed

        final WorkflowInstanceEvent wfInstance = client.workflowClient()
            .newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .send()
            .join();

        final long workflowInstanceKey = wfInstance.getWorkflowInstanceKey();

        System.out.println("Workflow instance created. Key: " + workflowInstanceKey);

        // ...
    }
}

Run the program and verify that the workflow instance is created. You should see the output:

Workflow instance created. Key: 6

You did it! You want to see how the workflow instance is executed?

Start the Zeebe Monitor using java -jar zeebe-simple-monitor.jar.

Open a web browser and go to http://localhost:8080/.

Connect to the broker and switch to the workflow instances view. Here, you see the current state of the workflow instance which includes active jobs, completed activities, the payload and open incidents.

zeebe-monitor-step-1

Work on a job

Now we want to do some work within your workflow. First, add a few service jobs to the BPMN diagram and set the required attributes. Then extend your main class and create a job worker to process jobs which are created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to 'payment-service'.

Optionally, you can define parameters of the task by adding headers. Add the header method = VISA to the first task.

Save the BPMN diagram and switch back to the main class.

Add the following lines to create a job worker for the first job type:

package io.zeebe;

import java.util.Map;

import io.zeebe.client.api.subscription.JobWorker;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow instance is created

        final JobWorker jobWorker = client.jobClient()
            .newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                final Map<String, Object> headers = job.getCustomHeaders();
                final String method = (String) headers.get("method");

                System.out.println("Collect money using payment method: " + method);

                // ...

                jobClient.newCompleteCommand(job.getKey())
                    .send()
                    .join();
            })
            .open();

        // waiting for the jobs

        jobWorker.close();

        // ...
    }
}

Run the program and verify that the job is processed. You should see the output:

Collect money using payment method: VISA

If you have a look at the Zeebe Monitor, you can see that the workflow instance moved from the first service task to the next one:

zeebe-monitor-step-2

Work with data

Usually, a workflow is more than just tasks; there is also data flow. Tasks need data as input and produce data as output.

In Zeebe, data is represented as a JSON document. When you create a workflow instance, you can pass data as payload. Within the workflow, you can use input and output mappings on tasks to control the data flow.

In our example, we want to create a workflow instance with the following data:

{
  "orderId": 31243,
  "orderItems": [435, 182, 376]
}

The first task should take orderId as input and return totalPrice as result.

Open the BPMN diagram and switch to the input-output-mappings of the first task. Add the input mapping $.orderId : $.orderId and the output mapping $.totalPrice : $.totalPrice.

Save the BPMN diagram and go back to the main class.

Modify the create command to pass the data as payload. Also, modify the job worker to read the job's payload and complete the job with a payload.

package io.zeebe;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow is deployed

        final Map<String, Object> data = new HashMap<>();
        data.put("orderId", 31243);
        data.put("orderItems", Arrays.asList(435, 182, 376));

        final WorkflowInstanceEvent wfInstance = client.workflowClient()
            .newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .payload(data)
            .send()
            .join();

        // ...

        final JobWorker jobWorker = client.jobClient()
            .newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                final Map<String, Object> headers = job.getCustomHeaders();
                final String method = (String) headers.get("method");

                final Map<String, Object> payload = job.getPayloadAsMap();

                System.out.println("Process order: " + payload.get("orderId"));
                System.out.println("Collect money using payment method: " + method);

                // ...

                payload.put("totalPrice", 46.50);

                jobClient.newCompleteCommand(job.getKey())
                    .payload(payload)
                    .send()
                    .join();
            })
            .open();

        // ...
    }
}

Run the program and verify that the payload is mapped into the job. You should see the output:

Process order: 31243
Collect money using payment method: VISA

If we have a look at the Zeebe Monitor, we can see how the payload is modified after the activity:

zeebe-monitor-step-3

What's next?

Hurray! You finished this tutorial and learned the basic usage of the Java client.


Logging

The client uses SLF4J for logging. It logs useful things, such as exception stack traces when a job handler fails execution. Using the SLF4J API, any SLF4J implementation can be plugged in. The following example uses Log4J 2.

Maven dependencies

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>2.8.1</version>
</dependency>

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.8.1</version>
</dependency>

Configuration

Add a file called log4j2.xml to the classpath of your application. Add the following content:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" strict="true"
    xmlns="http://logging.apache.org/log4j/2.0/config"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://logging.apache.org/log4j/2.0/config https://raw.githubusercontent.com/apache/logging-log4j2/log4j-2.8.1/log4j-core/src/main/resources/Log4j-config.xsd">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level Java Client: %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

This will log every log message to the console.

Zeebe Go Client

Get Started with the Go client

In this tutorial, you will learn to use the Go client in a Go application to interact with Zeebe.


You can find the complete source code on GitHub.

Prerequisites

Before you begin to set up your project, please start the broker, e.g. by running the startup script bin/broker or bin/broker.bat in the distribution. By default, the broker binds to the address localhost:26500, which is used as the contact point in this guide. If your broker is available under another address, adjust the broker contact point when building the client.

Set up a project

First, we need a new Go project. Create a new project using your IDE, or create a new Go module with:

mkdir -p $GOPATH/src/github.com/{{your username}}/zb-example
cd $GOPATH/src/github.com/{{your username}}/zb-example

Install Zeebe Go client library:

go get github.com/zeebe-io/zeebe/clients/go

Create a main.go file inside the module and add the following lines to bootstrap the Zeebe client:

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go"
    "github.com/zeebe-io/zeebe/clients/go/pb"
)

const BrokerAddr = "0.0.0.0:26500"

func main() {
    zbClient, err := zbc.NewZBClient(BrokerAddr)
    if err != nil {
        panic(err)
    }

    topology, err := zbClient.NewTopologyCommand().Send()
    if err != nil {
        panic(err)
    }

    for _, broker := range topology.Brokers {
        fmt.Println("Broker", broker.Host, ":", broker.Port)
        for _, partition := range broker.Partitions {
            fmt.Println("  Partition", partition.PartitionId, ":", roleToString(partition.Role))
        }
    }
}

func roleToString(role pb.Partition_PartitionBrokerRole) string {
    switch role {
    case pb.Partition_LEADER:
        return "Leader"
    case pb.Partition_FOLLOWER:
        return "Follower"
    default:
        return "Unknown"
    }
}

Run the program.

go run main.go

You should see output similar to:

Broker 0.0.0.0 : 26501
  Partition 0 : Leader

Model a workflow

Now, we need a first workflow that we can deploy. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id to order-process (i.e., the BPMN process id) and mark the diagram as executable. Save the diagram in the project's source folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker. The broker stores the workflow under its BPMN process id and assigns a version (i.e., the revision).

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    zbClient, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    response, err := zbClient.NewDeployWorkflowCommand().AddResourceFile("order-process.bpmn").Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(response.String())
}

Run the program and verify that the workflow is deployed successfully. You should see output similar to:

key:1 workflows:<bpmnProcessId:"order-process" version:1 workflowKey:1 resourceName:"order-process.bpmn" >

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow, which can be set on creation.

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    client, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    // After the workflow is deployed.
    payload := make(map[string]interface{})
    payload["orderId"] = "31243"

    request, err := client.NewCreateInstanceCommand().BPMNProcessId("order-process").LatestVersion().PayloadFromMap(payload)
    if err != nil {
        panic(err)
    }

    msg, err := request.Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(msg.String())
}

Run the program and verify that the workflow instance is created. You should see output similar to:

workflowKey:1 bpmnProcessId:"order-process" version:1 workflowInstanceKey:6

You did it! Want to see how the workflow instance is executed?

Start the Zeebe Monitor using java -jar zeebe-simple-monitor.jar.

Open a web browser and go to http://localhost:8080/.

Connect to the broker and switch to the workflow instances view. Here, you see the current state of the workflow instance, including active jobs, completed activities, the payload, and open incidents.

zeebe-monitor-step-1

Work on a task

Now we want to do some work within the workflow. First, add a few service tasks to the BPMN diagram and set the required attributes. Then extend your main.go file and activate a job that is created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to payment-service.

Add the following lines to redeploy the modified process, then activate and complete a job of the first task type:

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go"
    "time"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    client, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    // deploy workflow
    response, err := client.NewDeployWorkflowCommand().AddResourceFile("order-process.bpmn").Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(response.String())

    // create a new workflow instance
    payload := make(map[string]interface{})
    payload["orderId"] = "31243"

    request, err := client.NewCreateInstanceCommand().BPMNProcessId("order-process").LatestVersion().PayloadFromMap(payload)
    if err != nil {
        panic(err)
    }

    result, err := request.Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(result.String())

    // sleep to allow job to be created
    time.Sleep(1 * time.Second)

    jobs, err := client.NewActivateJobsCommand().JobType("payment-service").Amount(1).WorkerName("sample-app").Timeout(1 * time.Minute).Send()
    if err != nil {
        panic(err)
    }

    for _, job := range jobs {
        client.NewCompleteJobCommand().JobKey(job.GetKey()).Send()
        fmt.Println("Completed job", job.String())
    }
}

In this example, we activate a job from the previously created workflow instance and complete it. When you look at the Zeebe Monitor, you can see that the workflow instance has moved from the first service task to the next one:

zeebe-monitor-step-2

When you run the above example, you should see output similar to:

key:26 workflows:<bpmnProcessId:"order-process" version:2 workflowKey:2 resourceName:"order-process.bpmn" >
workflowKey:2 bpmnProcessId:"order-process" version:2 workflowInstanceKey:31
Completed job key:2 type:"payment-service" jobHeaders:<workflowInstanceKey:31 bpmnProcessId:"order-process" workflowDefinitionVersion:2 workflowKey:2 activityId:"collect-money" activityInstanceKey:46 > customHeaders:"{\"method\":\"VISA\"}" worker:"sample-app" retries:3 deadline:1539603072292 payload:"{\"orderId\":\"31243\"}"
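The example above sleeps for a fixed second before activating jobs; a small polling loop is more robust in practice. The sketch below shows that control flow only — the activate parameter is a hypothetical stand-in for the client's activate-jobs request, not the real client API:

```go
package main

import (
	"fmt"
	"time"
)

// pollJobs calls activate repeatedly until it returns at least one job or the
// attempt budget is exhausted. The activate parameter is a stand-in for the
// client's activate-jobs request; this sketch shows the control flow only.
func pollJobs(activate func() ([]int64, error), attempts int, delay time.Duration) []int64 {
	for i := 0; i < attempts; i++ {
		jobs, err := activate()
		if err == nil && len(jobs) > 0 {
			return jobs
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	calls := 0
	// Simulated broker: the job only becomes available on the third poll.
	jobs := pollJobs(func() ([]int64, error) {
		calls++
		if calls < 3 {
			return nil, nil
		}
		return []int64{2}, nil
	}, 5, 10*time.Millisecond)
	fmt.Println("activated job keys:", jobs)
}
```

With a loop like this, the fixed time.Sleep after instance creation is no longer needed; the worker simply keeps polling until a job appears or it gives up.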

What's next?

Yay! You finished this tutorial and learned the basic usage of the Go client.

Next steps:

Operations

The Monitor

Zeebe provides a monitor to inspect workflow instances. The monitor is a standalone web application which connects to a Zeebe broker and consumes all records.

The monitor can be downloaded from github.com/zeebe-io/zeebe-simple-monitor.

The zeebe.cfg.toml file

The following snippet represents the default Zeebe configuration, which is shipped with the distribution. It can be found inside the config folder (config/zeebe.cfg.toml) and can be used to adjust Zeebe to your needs.

Source on github

# Zeebe broker configuration file

# Overview -------------------------------------------

# This file contains a complete list of available configuration options.

# Default values:
#
# When the default value is used for a configuration option, the option is
# commented out. You can learn the default value from this file.

# Conventions:
#
# Byte sizes
# Byte sizes, for buffers and the like, must be specified as strings in the
# following format: "10U", where U (unit) is one of K = Kilobytes, M = Megabytes
# or G = Gigabytes. If the unit is omitted, the value is interpreted as plain bytes.
# Example:
# sendBufferSize = "16M" (creates a buffer of 16 Megabytes)
#
# Time units
# Timeouts, intervals, and the like must be specified as strings in the
# following format: "VU", where:
#   - V is a numerical value (e.g. 1, 1.2, 3.56, etc.)
#   - U is the unit, one of: ms = Millis, s = Seconds, m = Minutes, or h = Hours
#
# Paths:
# Relative paths are resolved relative to the installation directory of the
# broker.

# ----------------------------------------------------

[network]

# This section contains the network configuration. In particular, it allows
# configuring the hosts and ports the broker binds to. The broker exposes
# three ports:
# 1. client: the port on which client (Java, CLI, Go, ...) connections are
#    handled
# 2. management: used internally by the cluster for the gossip membership
#    protocol and other management interactions
# 3. replication: used internally by the cluster for replicating data across
#    nodes using the raft protocol

# Controls the default host the broker should bind to. Can be overwritten on a
# per binding basis for client, management and replication
#
# This setting can also be overridden using the environment variable ZEEBE_HOST.
# host = "0.0.0.0"

# If a port offset is set it will be added to all ports specified in the config
# or the default values. This is a shortcut to not always specifying every port.
#
# The offset will be added to the second-to-last position of the port, as Zeebe
# requires multiple ports. As an example, a portOffset of 5 will increment all
# ports by 50, i.e. 26500 will become 26550 and so on.
#
# This setting can also be overridden using the environment variable ZEEBE_PORT_OFFSET.
# portOffset = 0

# Controls the default size of the buffers that are used for buffering outgoing
# messages. Can be overwritten on a per binding basis for client, management and
# replication
# defaultSendBufferSize = "16M"

[network.gateway]

# Enables embedded gateway to start
# enabled = true
#
# Overrides the host the gateway binds to
# host = "localhost"
#
# Sets the port the gateway binds to
# port = 26500

[network.client]

# Allows to override the host the client api binds to
# host = "localhost"
#
# The port the client api binds to
# port = 26501
#
# Overrides the size of the buffer used for buffering outgoing messages to
# clients
# sendBufferSize = "16M"
#
# Sets the size of the buffer used for receiving control messages from clients
# (such as management of subscriptions)
# controlMessageBufferSize = "8M"

[network.management]

# Overrides the host the management api binds to
# host = "localhost"
#
# Sets the port the management api binds to
# port = 26502
#
# Overrides the size of the buffer to be used for buffering outgoing messages to
# other brokers through the management protocols
# sendBufferSize = "16M"
#
# Sets the buffer size used for receiving gossip messages and others
# receiveBufferSize = "8M"

[network.replication]

# Overrides the host the replication api binds to
# host = "localhost"
#
# Sets the port the replication api binds to
# port = 26503
#
# Sets the buffer size used for buffering outgoing raft (replication) messages
# sendBufferSize = "16M"

[network.subscription]

# Overrides the host the subscription api binds to
# host = "localhost"
#
# Sets the port the subscription api binds to
# port = 26504
#
# Overrides the size of the buffer to be used for buffering outgoing messages to
# other brokers through the subscription protocols
# sendBufferSize = "16M"
#
# Sets the buffer size used for receiving subscription messages and others
# receiveBufferSize = "8M"


[data]

# This section allows to configure Zeebe's data storage. Data is stored in
# "partition folders". A partition folder has the following structure:
#
# internal-system-0                 (root partition folder)
# ├── partition.json                (metadata about the partition)
# ├── segments                      (the actual data as segment files)
# │   ├── 00.data
# │   └── 01.data
# └── snapshots                     (snapshot data)
#

# Specify a list of directories in which data is stored. Using multiple
# directories makes sense in case the machine which is running Zeebe has
# multiple disks which are used in a JBOD (just a bunch of disks) manner. This
# allows to get greater throughput in combination with a higher io thread count
# since writes to different disks can potentially be done in parallel.
#
# This setting can also be overridden using the environment variable ZEEBE_DIRECTORIES.
# directories = [ "data" ]

# The default size of data segments.
# defaultSegmentSize = "512M"

# How often we take snapshots of streams (time unit)
# snapshotPeriod = "15m"

# How often follower partitions will check for new snapshots to replicate from
# the leader partitions. Snapshot replication enables faster failover by
# reducing how many log entries must be reprocessed in case of leader change.
# snapshotReplicationPeriod = "5m"


[cluster]

# This section contains all cluster-related configuration for setting up a Zeebe cluster

# Specifies the unique id of this broker node in a cluster.
# The id should be between 0 and number of nodes in the cluster (exclusive).
#
# This setting can also be overridden using the environment variable ZEEBE_NODE_ID.
# nodeId = 0

# Controls the number of partitions, which should exist in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_PARTITIONS_COUNT.
# partitionsCount = 1

# Controls the replication factor, which defines the count of replicas per partition.
# The replication factor cannot be greater than the number of nodes in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_REPLICATION_FACTOR.
# replicationFactor = 1

# Specifies the zeebe cluster size. This value is used to determine which broker
# is responsible for which partition.
#
# This can also be overridden using the environment variable ZEEBE_CLUSTER_SIZE.
# clusterSize = 1

# Allows to specify a list of known other nodes to connect to on startup
# The contact points of the management api must be specified.
# The format is [HOST:PORT]
# Example:
# initialContactPoints = [ "192.168.1.22:26502", "192.168.1.32:26502" ]
#
# This setting can also be overridden using the environment variable ZEEBE_CONTACT_POINTS
# specifying a comma-separated list of contact points.
#
# Default is empty list:
# initialContactPoints = []

[threads]

# Controls the number of non-blocking CPU threads to be used. WARNING: You
# should never specify a value that is larger than the number of physical cores
# available. Good practice is to leave 1-2 cores for ioThreads and the operating
# system (it has to run somewhere). For example, when running Zeebe on a machine
# which has 4 cores, a good value would be 2.
#
# The default value is 2.
#cpuThreadCount = 2

# Controls the number of io threads to be used. These threads are used for
# workloads that write data to disk. While writing, these threads are blocked
# which means that they yield the CPU.
#
# The default value is 2.
#ioThreadCount = 2

[metrics]

# Path to the file to which metrics are written. Metrics are written in a
# text-based format understood by prometheus.io
# metricsFile = "metrics/zeebe.prom"

# Controls the interval at which the metrics are written to the metrics file
# reportingInterval = "5s"

[gossip]

# retransmissionMultiplier = 3
# probeInterval = "1s"
# probeTimeout = "500ms"
# probeIndirectNodes = 3
# probeIndirectTimeout = "1s"
# suspicionMultiplier = 5
# syncTimeout = "3s"
# syncInterval = "15s"
# joinTimeout = "1s"
# joinInterval = "5s"
# leaveTimeout = "1s"
# maxMembershipEventsPerMessage = 32
# maxCustomEventsPerMessage = 8

[raft]

# heartbeatInterval = "250ms"
# electionInterval = "1s"
# leaveTimeout = "1s"

# Configure exporters below; note that configuration parsing conventions do not apply to exporter
# arguments, which will be parsed as normal TOML.
#
# Each exporter should be configured following this template:
#
# id:
#   property should be unique in this configuration file, as it will serve as the exporter
#   ID for loading/unloading.
# jarPath:
#   path to the JAR file containing the exporter class. JARs are only loaded once, so you can define
#   two exporters that point to the same JAR, with the same class or a different one, and use args
#   to parametrize its instantiation.
# className:
#   entry point of the exporter, a class which *must* extend the io.zeebe.exporter.Exporter
#   interface.
#
# A nested table as [exporters.args] will allow you to inject arbitrary arguments into your
# class through the use of annotations.
#
# Enable the following exporter to get debug output of the exporter records
#
# [[exporters]]
# id = "debug"
# className = "io.zeebe.broker.exporter.DebugExporter"
# [exporters.args]
#   logLevel = "debug"
#   prettyPrint = false
#
# An example configuration for the elasticsearch exporter:
#
#[[exporters]]
#id = "elasticsearch"
#className = "io.zeebe.exporter.ElasticsearchExporter"
#
#  [exporters.args]
#  url = "http://localhost:9200"
#
#  [exporters.args.bulk]
#  delay = 5
#  size = 1_000
#
#  [exporters.args.index]
#  prefix = "zeebe-record"
#  createTemplate = true
#
#  command = false
#  event = true
#  rejection = false
#
#  deployment = true
#  incident = true
#  job = true
#  message = false
#  messageSubscription = false
#  raft = false
#  workflowInstance = true
#  workflowInstanceSubscription = false
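The byte-size and time-unit conventions described at the top of this file, along with the portOffset rule from the [network] section, can be sketched in Go. This is an illustrative sketch, not broker code; it assumes K/M/G are binary multiples (1K = 1024 bytes), which the file does not state explicitly.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseByteSize converts strings like "16M" or "512K" to a byte count,
// following the convention described above: an optional K, M, or G suffix,
// plain bytes when the unit is omitted. Binary multiples are assumed here.
func parseByteSize(s string) (int64, error) {
	units := map[byte]int64{'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30}
	s = strings.TrimSpace(s)
	if s == "" {
		return 0, fmt.Errorf("empty size")
	}
	if mult, ok := units[s[len(s)-1]]; ok {
		n, err := strconv.ParseInt(s[:len(s)-1], 10, 64)
		if err != nil {
			return 0, err
		}
		return n * mult, nil
	}
	return strconv.ParseInt(s, 10, 64)
}

// applyPortOffset adds the offset to the second-to-last digit position of a
// port, i.e. a portOffset of 5 increments every port by 50.
func applyPortOffset(port, offset int) int {
	return port + offset*10
}

func main() {
	size, _ := parseByteSize("16M")
	fmt.Println("16M =", size, "bytes")
	fmt.Println("26500 with portOffset 5 ->", applyPortOffset(26500, 5))
}
```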

Setting up a Zeebe Cluster

To set up a cluster, you need to adjust the cluster section in the Zeebe configuration file. Below is a snippet of the default Zeebe configuration file; it should be self-explanatory.

[cluster]

# This section contains all cluster-related configuration for setting up a Zeebe cluster

# Specifies the unique id of this broker node in a cluster.
# The id should be between 0 and number of nodes in the cluster (exclusive).
#
# This setting can also be overridden using the environment variable ZEEBE_NODE_ID.
# nodeId = 0

# Controls the number of partitions, which should exist in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_PARTITIONS_COUNT.
# partitionsCount = 1

# Controls the replication factor, which defines the count of replicas per partition.
# The replication factor cannot be greater than the number of nodes in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_REPLICATION_FACTOR.
# replicationFactor = 1

# Specifies the zeebe cluster size. This value is used to determine which broker
# is responsible for which partition.
#
# This can also be overridden using the environment variable ZEEBE_CLUSTER_SIZE.
# clusterSize = 1

# Allows to specify a list of known other nodes to connect to on startup
# The contact points of the management api must be specified.
# The format is [HOST:PORT]
# Example:
# initialContactPoints = [ "192.168.1.22:26502", "192.168.1.32:26502" ]
#
# This setting can also be overridden using the environment variable ZEEBE_CONTACT_POINTS
# specifying a comma-separated list of contact points.
#
# Default is empty list:
# initialContactPoints = []

Example

In this example, we will set up a Zeebe cluster with five brokers. Each broker needs a unique node id. To scale well, we will bootstrap five partitions with a replication factor of three. For more information, please take a look at the Clustering section.

The clustering setup will look like this:

cluster

Configuration

The configuration of the first broker could look like this:

[cluster]
nodeId = 0
partitionsCount = 5
replicationFactor = 3
clusterSize = 5

For the other brokers, the configuration changes slightly.

[cluster]
nodeId = NODE_ID
partitionsCount = 5
replicationFactor = 3
clusterSize = 5
initialContactPoints = [ ADDRESS_AND_PORT_OF_NODE_0 ]

Each broker needs a unique node id. The ids should be in the range of zero to clusterSize - 1. You need to replace the NODE_ID placeholder with an appropriate value. Furthermore, the brokers need an initial contact point to start their gossip conversation. Make sure that you use the address and management port of another broker; replace the ADDRESS_AND_PORT_OF_NODE_0 placeholder accordingly.

It is not necessary for every broker to use the first node as its initial contact point, but doing so keeps the configuration simple. You can also configure multiple brokers as initial contact points to make bootstrapping more robust.

Partitions bootstrapping

On bootstrap, each node will create a partition matrix.

This matrix depends on the partition count, the replication factor, and the cluster size. If you configured everything correctly and used the same values for partitionsCount, replicationFactor, and clusterSize on each node, then all nodes will generate the same partition matrix.

For the current example the matrix will look like the following:

              Node 0     Node 1     Node 2     Node 3     Node 4
Partition 0   Leader     Follower   Follower   -          -
Partition 1   -          Leader     Follower   Follower   -
Partition 2   -          -          Leader     Follower   Follower
Partition 3   Follower   -          -          Leader     Follower
Partition 4   Follower   Follower   -          -          Leader

The matrix ensures that the partitions are well distributed between the different nodes. Furthermore, it guarantees that each node knows exactly which partitions it has to bootstrap and for which it will initially become the leader (this can change later, for example if the node has to step down).
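The distribution above can be reproduced with a simple round-robin calculation. The sketch below illustrates the scheme as described here; it is not Zeebe's actual implementation:

```go
package main

import "fmt"

// partitionMatrix returns, for each partition, the nodes that host a replica.
// The first node in each slice is the initial leader; replicas are assigned
// round-robin. This is an illustrative sketch, not Zeebe's actual code.
func partitionMatrix(partitionsCount, replicationFactor, clusterSize int) [][]int {
	matrix := make([][]int, partitionsCount)
	for p := range matrix {
		replicas := make([]int, replicationFactor)
		for r := range replicas {
			replicas[r] = (p + r) % clusterSize
		}
		matrix[p] = replicas
	}
	return matrix
}

func main() {
	// The example configuration: 5 partitions, replication factor 3, 5 nodes.
	for p, replicas := range partitionMatrix(5, 3, 5) {
		fmt.Printf("Partition %d: leader=node %d, followers=nodes %v\n",
			p, replicas[0], replicas[1:])
	}
}
```

Running it with partitionsCount = 5, replicationFactor = 3, and clusterSize = 5 yields the same leader/follower assignment as the table above.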

The Metrics

When operating a distributed system like Zeebe, it is important to put proper monitoring in place. To facilitate this, Zeebe exposes an extensive set of metrics.

Zeebe writes metrics to a file. The reporting interval can be configured.

Types of metrics

  • Counters: a time series that records a growing count of some unit. Examples: number of bytes transmitted over the network, number of workflow instances started, ...
  • Gauges: a time series that records the current size of some unit. Examples: number of currently open client connections, current number of partitions, ...

Metrics Format

Zeebe exposes metrics directly in Prometheus text format. The details of the format can be read in the Prometheus documentation.

Example:

zb_storage_fs_total_bytes{cluster="zeebe",node="localhost:26500",partition="0"} 4192 1522124395234

The record above states that the total size in bytes of partition 0 on node localhost:26500 in cluster zeebe is 4192. The last number is a Unix epoch timestamp in milliseconds.

Configuring Metrics

Metrics can be configured in the configuration file.

Connecting Prometheus

As explained, Zeebe writes metrics to a file. The default location of the file is $ZB_HOME/metrics/zeebe.prom. There are two ways to connect Zeebe to Prometheus:

Node exporter

If you are already using the Prometheus node exporter, you can relocate the metrics file to the scrape directory of the node exporter. The configuration would look as follows:

[metrics]
reportingInterval = "15s"
metricsFile = "zeebe.prom"
directory = "/var/lib/metrics"

HTTP

If you want to scrape Zeebe nodes via HTTP, you can start an HTTP server inside the metrics directory to expose the directory via HTTP. If Python is available, you can use:

$ cd $ZB_HOME/metrics
$ python -m SimpleHTTPServer 8000

Then, add the following entry to your prometheus.yml:

- job_name: zb
  scrape_interval: 15s
  metrics_path: /zeebe.prom
  scheme: http
  static_configs:
  - targets:
    - localhost:8000

Grafana Dashboards

The Zeebe community has prepared two ready-to-use Grafana dashboards:

Overview Dashboard

The overview dashboard summarizes high-level metrics at the cluster level. It can be used to monitor a Zeebe production cluster. The dashboard can be found here.

Low-level Diagnostics Dashboard

The diagnostics dashboard provides lower-level metrics at the node level. It can be used to gain a better understanding of the workload currently performed by individual nodes. The dashboard can be found here.

Available Metrics

All metrics exposed by Zeebe have the zb_* prefix.

To each metric, the following labels are added:

  • cluster: the name of the Zeebe cluster (relevant in case you operate multiple clusters).
  • node: the identifier of the node which has written the metric

Many metrics also add the following labels:

  • partition: cluster-unique id of the partition

The following components expose metrics:

  • zb_broker_info: summarized information about available nodes
  • zb_buffer_*: diagnostics, buffer metrics
  • zb_scheduler_*: diagnostics, utilization metrics of Zeebe's internal task scheduler
  • zb_storage_*: storage metrics
  • zb_streamprocessor_*: stream processing metrics such as events processed by partition
  • zb_transport_*: network transport metrics such as number of open connections, bytes received, transmitted, etc ...
  • zb_workflow_*: workflow metrics such as number of workflow instances created, completed, ...

Example Code using the Zeebe Java Client

These examples are accessible in the zeebe-io github repository at commit f81fc87e5122d89c4850b844054eee48f26c4b29. Link to browse code on github.

Instructions to access code locally:

git clone https://github.com/zeebe-io/zeebe.git
git checkout f81fc87e5122d89c4850b844054eee48f26c4b29
cd zeebe/samples

Import the Maven project in the samples directory into your IDE to start hacking.

Workflow

Job

Data

Cluster

Deploy a Workflow

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)

WorkflowDeployer.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.DeploymentEvent;

public class WorkflowDeployer {

  public static void main(final String[] args) {
    final String broker = "localhost:26500";

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {
      final WorkflowClient workflowClient = client.workflowClient();

      final DeploymentEvent deploymentEvent =
          workflowClient
              .newDeployCommand()
              .addResourceFromClasspath("demoProcess.bpmn")
              .send()
              .join();

      System.out.println("Deployment created with key: " + deploymentEvent.getKey());
    }
  }
}

demoProcess.bpmn

Source on github

Download the XML and save it in the Java classpath before running the example. Open the file with Zeebe Modeler for a graphical representation.

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:camunda="http://camunda.org/schema/1.0/bpmn" xmlns:zeebe="http://camunda.org/schema/zeebe/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" id="Definitions_1" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="1.5.0-nightly">
  <bpmn:process id="demoProcess" isExecutable="true">
    <bpmn:startEvent id="start" name="start">
      <bpmn:outgoing>SequenceFlow_1sz6737</bpmn:outgoing>
    </bpmn:startEvent>
    <bpmn:sequenceFlow id="SequenceFlow_1sz6737" sourceRef="start" targetRef="taskA" />
    <bpmn:sequenceFlow id="SequenceFlow_06ytcxw" sourceRef="taskA" targetRef="taskB" />
    <bpmn:sequenceFlow id="SequenceFlow_1oh45y7" sourceRef="taskB" targetRef="taskC" />
    <bpmn:endEvent id="end" name="end">
      <bpmn:incoming>SequenceFlow_148rk2p</bpmn:incoming>
    </bpmn:endEvent>
    <bpmn:sequenceFlow id="SequenceFlow_148rk2p" sourceRef="taskC" targetRef="end" />
    <bpmn:serviceTask id="taskA" name="task A">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1sz6737</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_06ytcxw</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskB" name="task B">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="bar" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_06ytcxw</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_1oh45y7</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskC" name="task C">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1oh45y7</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_148rk2p</bpmn:outgoing>
    </bpmn:serviceTask>
  </bpmn:process>
  <bpmndi:BPMNDiagram id="BPMNDiagram_1">
    <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="demoProcess">
      <bpmndi:BPMNShape id="_BPMNShape_StartEvent_2" bpmnElement="start">
        <dc:Bounds x="173" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="180" y="138" width="22" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_1sz6737_di" bpmnElement="SequenceFlow_1sz6737">
        <di:waypoint xsi:type="dc:Point" x="209" y="120" />
        <di:waypoint xsi:type="dc:Point" x="310" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="260" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_06ytcxw_di" bpmnElement="SequenceFlow_06ytcxw">
        <di:waypoint xsi:type="dc:Point" x="410" y="120" />
        <di:waypoint xsi:type="dc:Point" x="502" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="456" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_1oh45y7_di" bpmnElement="SequenceFlow_1oh45y7">
        <di:waypoint xsi:type="dc:Point" x="602" y="120" />
        <di:waypoint xsi:type="dc:Point" x="694" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="648" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="EndEvent_0gbv3sc_di" bpmnElement="end">
        <dc:Bounds x="867" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="876" y="138" width="18" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_148rk2p_di" bpmnElement="SequenceFlow_148rk2p">
        <di:waypoint xsi:type="dc:Point" x="794" y="120" />
        <di:waypoint xsi:type="dc:Point" x="867" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="831" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="ServiceTask_09m0goq_di" bpmnElement="taskA">
        <dc:Bounds x="310" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_0sryj72_di" bpmnElement="taskB">
        <dc:Bounds x="502" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_1xu4l3g_di" bpmnElement="taskC">
        <dc:Bounds x="694" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>
</bpmn:definitions>

Create a Workflow Instance

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

WorkflowInstanceCreator.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class WorkflowInstanceCreator {

  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.workflowClient();

      System.out.println("Creating workflow instance");

      final WorkflowInstanceEvent workflowInstanceEvent =
          workflowClient
              .newCreateInstanceCommand()
              .bpmnProcessId(bpmnProcessId)
              .latestVersion()
              .send()
              .join();

      System.out.println(
          "Workflow instance created with key: " + workflowInstanceEvent.getWorkflowInstanceKey());
    }
  }
}

Create Workflow Instances Non-Blocking

Prerequisites

  1. A running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

NonBlockingWorkflowInstanceCreator.java

Source on GitHub

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.ZeebeFuture;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class NonBlockingWorkflowInstanceCreator {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";
    final int numberOfInstances = 100_000;
    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.workflowClient();

      System.out.println("Creating " + numberOfInstances + " workflow instances");

      final long startTime = System.currentTimeMillis();

      long instancesCreating = 0;

      while (instancesCreating < numberOfInstances) {
        // this is non-blocking/async => returns a future
        final ZeebeFuture<WorkflowInstanceEvent> future =
            workflowClient
                .newCreateInstanceCommand()
                .bpmnProcessId(bpmnProcessId)
                .latestVersion()
                .send();

        // could put the future somewhere and eventually wait for its completion

        instancesCreating++;
      }

      // creating one more instance; joining on this future ensures
      // that all the other create commands were handled
      workflowClient
          .newCreateInstanceCommand()
          .bpmnProcessId(bpmnProcessId)
          .latestVersion()
          .send()
          .join();

      System.out.println("Took: " + (System.currentTimeMillis() - startTime));
    }
  }
}
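The loop above deliberately discards each future and relies on one final sentinel request to detect that all commands were handled. As the inline comment notes, the futures could instead be collected and joined individually, which additionally surfaces a failure of any single create command. A minimal sketch of that variant, assuming the same io.zeebe.client API as the example above (note that holding 100,000 pending futures in memory is itself a cost):

```java
// Sketch only: variant of the loop above that keeps every future and joins
// them all at the end, instead of sending a final sentinel request.
// Assumes the same io.zeebe.client API as the surrounding example;
// requires java.util.List / java.util.ArrayList imports.
final List<ZeebeFuture<WorkflowInstanceEvent>> futures = new ArrayList<>();

for (int i = 0; i < numberOfInstances; i++) {
  futures.add(
      workflowClient
          .newCreateInstanceCommand()
          .bpmnProcessId(bpmnProcessId)
          .latestVersion()
          .send());
}

// joining each future blocks until that command was handled and rethrows
// any failure of an individual create command as an exception
futures.forEach(ZeebeFuture::join);
```
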

Request all Workflows

Prerequisites

  1. A running Zeebe broker with endpoint localhost:26500 (default)
  2. Make sure several workflows are deployed, e.g. by running the Deploy a Workflow example multiple times to create multiple workflow versions.

DeploymentViewer.java

Source on GitHub

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.commands.WorkflowResource;
import io.zeebe.client.api.commands.Workflows;

public class DeploymentViewer {

  public static void main(final String[] args) {

    final String broker = "localhost:26500";

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {
      final WorkflowClient workflowClient = client.workflowClient();

      final Workflows workflows = workflowClient.newWorkflowRequest().send().join();

      System.out.println("Printing all deployed workflows:");

      workflows
          .getWorkflows()
          .forEach(
              wf -> {
                System.out.println("Workflow resource for " + wf + ":");

                final WorkflowResource resource =
                    workflowClient
                        .newResourceRequest()
                        .workflowKey(wf.getWorkflowKey())
                        .send()
                        .join();

                System.out.println(resource);
              });

      System.out.println("Done");
    }
  }
}

Open a Job Worker


Prerequisites

  1. A running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example
  3. Run the Create a Workflow Instance example a couple of times

JobWorkerCreator.java

Source on GitHub

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.job;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.JobClient;
import io.zeebe.client.api.response.ActivatedJob;
import io.zeebe.client.api.subscription.JobHandler;
import io.zeebe.client.api.subscription.JobWorker;
import java.time.Duration;
import java.util.Scanner;

public class JobWorkerCreator {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final String jobType = "foo";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final JobClient jobClient = client.jobClient();

      System.out.println("Opening job worker.");

      final JobWorker workerRegistration =
          jobClient
              .newWorker()
              .jobType(jobType)
              .handler(new ExampleJobHandler())
              .timeout(Duration.ofSeconds(10))
              .open();

      System.out.println("Job worker opened and receiving jobs.");

      // call workerRegistration.close() to close it

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class ExampleJobHandler implements JobHandler {
    @Override
    public void handle(final JobClient client, final ActivatedJob job) {
      // here: business logic that is executed with every job
      System.out.println(
          String.format(
              "[type: %s, key: %s, lockExpirationTime: %s]\n[headers: %s]\n[payload: %s]\n===",
              job.getType(),
              job.getKey(),
              job.getDeadline().toString(),
              job.getHeaders(),
              job.getPayload()));

      client.newCompleteCommand(job.getKey()).send().join();
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}
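The ExampleJobHandler above completes every job unconditionally. In a real worker, the business logic can fail, and Zeebe's stateful retries are driven by failing the job with a decremented retry count; once the retries are exhausted, the broker raises an incident that can be resolved manually. A hedged sketch of such a handler — the newFailCommand and getRetries calls are assumed to match the client generation used in these examples, so verify them against your client version:

```java
// Sketch only: a handler that fails the job on an exception so the broker
// can reschedule it. Assumes newFailCommand(...).retries(...) and
// ActivatedJob.getRetries() exist in your io.zeebe.client version.
private static class FailureAwareJobHandler implements JobHandler {
  @Override
  public void handle(final JobClient client, final ActivatedJob job) {
    try {
      process(job); // business logic that may throw
      client.newCompleteCommand(job.getKey()).send().join();
    } catch (final Exception e) {
      // fail the job with one retry fewer; at zero remaining retries the
      // broker raises an incident instead of rescheduling the job
      client.newFailCommand(job.getKey()).retries(job.getRetries() - 1).send().join();
    }
  }

  private void process(final ActivatedJob job) {
    // placeholder for real business logic
  }
}
```
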

Handle the Payload as POJO


Prerequisites

  1. A running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

HandlePayloadAsPojo.java

Source on GitHub

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.data;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.JobClient;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.response.ActivatedJob;
import io.zeebe.client.api.subscription.JobHandler;
import java.util.Scanner;

public class HandlePayloadAsPojo {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.workflowClient();
      final JobClient jobClient = client.jobClient();

      final Order order = new Order();
      order.setOrderId(31243);

      workflowClient
          .newCreateInstanceCommand()
          .bpmnProcessId("demoProcess")
          .latestVersion()
          .payload(order)
          .send()
          .join();

      jobClient.newWorker().jobType("foo").handler(new DemoJobHandler()).open();

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class DemoJobHandler implements JobHandler {
    @Override
    public void handle(final JobClient client, final ActivatedJob job) {
      // read the payload of the job
      final Order order = job.getPayloadAsType(Order.class);
      System.out.println("new job with orderId: " + order.getOrderId());

      // update the payload and complete the job
      order.setTotalPrice(46.50);

      client.newCompleteCommand(job.getKey()).payload(order).send();
    }
  }

  public static class Order {
    private long orderId;
    private double totalPrice;

    public long getOrderId() {
      return orderId;
    }

    public void setOrderId(final long orderId) {
      this.orderId = orderId;
    }

    public double getTotalPrice() {
      return totalPrice;
    }

    public void setTotalPrice(final double totalPrice) {
      this.totalPrice = totalPrice;
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Request Cluster Topology

Shows which broker is the leader and which brokers are followers for each partition. This is particularly useful when you run a cluster with multiple Zeebe brokers.


Prerequisites

  1. A running Zeebe broker with endpoint localhost:26500 (default)

TopologyViewer.java

Source on GitHub

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.cluster;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.commands.Topology;

public class TopologyViewer {

  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      System.out.println("Requesting topology with initial contact point " + broker);

      final Topology topology = client.newTopologyRequest().send().join();

      System.out.println("Topology:");
      topology
          .getBrokers()
          .forEach(
              b -> {
                System.out.println("    " + b.getAddress());
                b.getPartitions()
                    .forEach(
                        p ->
                            System.out.println(
                                "      " + p.getPartitionId() + " - " + p.getRole()));
              });

      System.out.println("Done.");
    }
  }
}