Introduction to Zeebe

This section contains a brief introduction to Zeebe.

What is Zeebe?

Zeebe is a workflow engine for microservices orchestration. Zeebe ensures that, once started, flows are always carried out fully, retrying steps in case of failures. Along the way, Zeebe maintains a complete audit log so that the progress of flows can be monitored. Zeebe is fault tolerant and scales seamlessly to handle growing transaction volumes.

Below, we'll provide a brief overview of Zeebe. For more detail, we recommend the "What is Zeebe?" writeup on the main Zeebe site.

What problem does Zeebe solve, and how?

A company’s end-to-end workflows almost always span more than one microservice. In an e-commerce company, for example, a “customer order” workflow might involve a payments microservice, an inventory microservice, a shipping microservice, and more:

order-process

These cross-microservice workflows are mission critical, yet the workflows themselves are rarely modeled and monitored. Often, the flow of events through different microservices is expressed only implicitly in code.

If that’s the case, how can we ensure visibility of workflows and provide status and error monitoring? How do we guarantee that flows always complete, even if single microservices fail?

Zeebe gives you:

  1. Visibility into the state of a company’s end-to-end workflows, including the number of in-flight workflows, average workflow duration, current errors within a workflow, and more.
  2. Workflow orchestration based on the current state of a workflow; Zeebe publishes “commands” as events that can be consumed by one or more microservices, ensuring that workflows progress according to their definition.
  3. Monitoring for timeouts or other workflow errors with the ability to configure error-handling paths such as stateful retries or escalation to teams that can resolve an issue manually.

Zeebe was designed to operate at very large scale, and to achieve this, it provides:

  • Horizontal scalability and no dependence on an external database; Zeebe writes data directly to the filesystem on the same servers where it’s deployed. Zeebe makes it simple to distribute processing across a cluster of machines to deliver high throughput.
  • Fault tolerance via an easy-to-configure replication mechanism, ensuring that Zeebe can recover from machine or software failure with no data loss and minimal downtime. This ensures that the system as a whole remains available without requiring manual action.
  • A message-driven architecture where all workflow-relevant events are written to an append-only log, providing an audit trail and a history of the state of a workflow.
  • A publish-subscribe interaction model, which enables microservices that connect to Zeebe to maintain a high degree of control and autonomy, including control over processing rates. These properties make Zeebe resilient, scalable, and reactive.
  • Visual workflows modeled in ISO-standard BPMN 2.0 so that technical and non-technical stakeholders can collaborate on workflow design in a widely-used modeling language.
  • A language-agnostic client model, making it possible to build a Zeebe client in just about any programming language that an organization uses to build microservices.
  • Operational ease-of-use as a self-contained and self-sufficient system. Zeebe does not require a cluster coordinator such as ZooKeeper. Because all nodes in a Zeebe cluster are equal, it's relatively easy to scale, and it plays nicely with modern resource managers and container orchestrators such as Docker, Kubernetes, and DC/OS. Zeebe's CLI (Command Line Interface) allows you to script and automate management and operations tasks.

You can learn more about these technical concepts in the "Basics" section of the documentation.

Zeebe is simple and lightweight

Most existing workflow engines offer more features than Zeebe. While having access to lots of features is generally a good thing, it can come at the cost of increased complexity and degraded performance.

Zeebe is 100% focused on providing a compact, robust, and scalable solution for orchestration of workflows. Rather than supporting a broad spectrum of features, its goal is to excel within this scope.

In addition, Zeebe works well with other systems. For example, Zeebe provides a simple event stream API that makes it easy to stream all internal data into another system such as Elasticsearch for indexing and querying.

Deciding if Zeebe is right for you

Note that Zeebe is currently in "developer preview", meaning that it's not yet ready for production and is under heavy development. See the roadmap for more details.

Your applications might not need the scalability and performance features provided by Zeebe. Or, you might need a mature set of features around BPM (Business Process Management), which Zeebe does not yet offer. In such scenarios, a workflow automation platform such as Camunda BPM could be a better fit.

Install

This page guides you through the initial installation of your Zeebe broker. In case you are looking for more detailed information on how to set up and operate Zeebe, make sure to check out the Operations Guide as well.

There are different ways to install Zeebe, described in the sections below.

Prerequisites

  • Operating System:
    • Linux
    • Windows/MacOS (development only, not supported for production)
  • Java Virtual Machine:
    • Oracle Hotspot v1.8
    • Open JDK v1.8

Download a distribution

You can always download the latest Zeebe release from the Github release page.

Once you have downloaded a distribution, extract it into a folder of your choice. To extract the Zeebe distribution and start the broker, Linux users can type:

tar -xzf zeebe-distribution-X.Y.Z.tar.gz -C zeebe/
cd zeebe/
./bin/broker

Windows users can download the .zip package and extract it using their favorite unzip tool. They can then open the extracted folder, navigate to the bin folder, and start the broker by double-clicking on the broker.bat file.

Once the Zeebe broker has started, it should produce the following output:

10:49:52.264 [] [main] INFO  io.zeebe.broker.system - Using configuration file zeebe-broker-X.Y.Z/conf/zeebe.cfg.toml
10:49:52.342 [] [main] INFO  io.zeebe.broker.system - Scheduler configuration: Threads{cpu-bound: 2, io-bound: 2}.
10:49:52.383 [] [main] INFO  io.zeebe.broker.system - Version: X.Y.Z
10:49:52.430 [] [main] INFO  io.zeebe.broker.clustering - Starting standalone broker.
10:49:52.435 [service-controller] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.broker.transport - Bound managementApi.server to /0.0.0.0:51016
10:49:52.460 [service-controller] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.transport - Bound clientApi.server to /0.0.0.0:51015
10:49:52.460 [service-controller] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.transport - Bound replicationApi.server to /0.0.0.0:51017

Using Docker

You can run Zeebe with Docker:

docker run --name zeebe -p 51015:51015 camunda/zeebe:latest

Exposed Ports

  • 51015: Client API
  • 51016: Management API for broker-to-broker communication
  • 51017: Replication API for broker-to-broker replication

Volumes

The default data volume is under /usr/local/zeebe/bin/data. It contains all data which should be persisted.

Configuration

The Zeebe configuration is located at /usr/local/zeebe/conf/zeebe.cfg.toml. The logging configuration is located at /usr/local/zeebe/conf/log4j2.xml.

The configuration of the Docker image can also be changed by using environment variables.

Available environment variables:

  • ZEEBE_LOG_LEVEL: Sets the log level of the Zeebe Logger (default: info).
  • ZEEBE_HOST: Sets the host address to bind to instead of the IP of the container.
  • BOOTSTRAP: Sets the replication factor of the internal-system partition.
  • INITIAL_CONTACT_POINT: Sets the contact points of other brokers in a cluster setup.
  • DEPLOY_ON_KUBERNETES: If set to true, it applies some configuration changes in order to run Zeebe in a Kubernetes environment. Please note that the recommended method to run Zeebe on Kubernetes is by using the zeebe-operator.

Mac and Windows users

Note: On systems which use a VM to run Docker containers, such as Mac and Windows, the VM needs at least 4GB of memory; otherwise Zeebe might fail to start with an error similar to:

Exception in thread "actor-runner-service-container" java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:694)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
        at io.zeebe.util.allocation.DirectBufferAllocator.allocate(DirectBufferAllocator.java:28)
        at io.zeebe.util.allocation.BufferAllocators.allocateDirect(BufferAllocators.java:26)
        at io.zeebe.dispatcher.DispatcherBuilder.initAllocatedBuffer(DispatcherBuilder.java:266)
        at io.zeebe.dispatcher.DispatcherBuilder.build(DispatcherBuilder.java:198)
        at io.zeebe.broker.services.DispatcherService.start(DispatcherService.java:61)
        at io.zeebe.servicecontainer.impl.ServiceController$InvokeStartState.doWork(ServiceController.java:269)
        at io.zeebe.servicecontainer.impl.ServiceController.doWork(ServiceController.java:138)
        at io.zeebe.servicecontainer.impl.ServiceContainerImpl.doWork(ServiceContainerImpl.java:110)
        at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165)
        at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145)
        at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114)
        at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71)
        at java.lang.Thread.run(Thread.java:748)

If you use a Docker setup with docker-machine and your default VM does not have 4GB of memory, you can create a new one with the following command:

docker-machine create --driver virtualbox --virtualbox-memory 4000 zeebe

Verify that the Docker Machine is running correctly:

docker-machine ls
NAME        ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
zeebe     *        virtualbox   Running   tcp://192.168.99.100:2376           v17.03.1-ce

Configure your terminal:

eval $(docker-machine env zeebe)

Then run Zeebe:

docker run --rm -p 51015:51015 -p 51016:51016 -p 51017:51017 camunda/zeebe:latest

To get the IP address of Zeebe:

docker-machine ip zeebe
192.168.99.100

Verify that you can connect to Zeebe:

telnet 192.168.99.100 51015

First Contact

The quickstart guide demonstrates the main concepts of Zeebe using only the command line client.

As a Java user, you should have a look at the Java client library. The Get Started Java tutorial guides you through the first steps.

As a Go user, you should look into the Go client library. The Get Started Go tutorial guides you through the first steps.

Quickstart

This tutorial helps you get to know the main concepts of Zeebe without writing a single line of code.

  1. Download the Zeebe distribution
  2. Start the Zeebe broker
  3. Create a topic
  4. Create a job
  5. Complete a job
  6. Create a topic subscription
  7. Deploy a workflow
  8. Create a workflow instance
  9. Complete the workflow instance
  10. Next steps

Note: Some command examples might not work on Windows if you use cmd or Powershell. For Windows users, we recommend using a bash-like shell such as Git Bash, Cygwin, or MinGW for this guide.

Step 1: Download the Zeebe distribution

You can download the latest distribution from the Zeebe release page.

Extract the archive and enter the Zeebe directory.

tar -xzvf zeebe-distribution-X.Y.Z.tar.gz
cd zeebe-broker-X.Y.Z/

Inside the Zeebe directory you will find multiple directories.

tree -d
.
├── bin     - Binaries and start scripts of the distribution
├── conf    - Zeebe and logging configuration
└── lib     - Shared java libraries

Step 2: Start the Zeebe broker

Change into the bin/ folder and execute the broker file if you are using Linux or MacOS, or the broker.bat file if you are using Windows. This will start a new Zeebe broker.

cd bin/
./broker
09:04:17.700 [] [main] INFO  io.zeebe.broker.system - Using configuration file quickstart/zeebe-broker-X.Y.Z/conf/zeebe.cfg.toml
09:04:17.776 [] [main] INFO  io.zeebe.broker.system - Scheduler configuration: Threads{cpu-bound: 2, io-bound: 2}.
09:04:17.815 [] [main] INFO  io.zeebe.broker.system - Version: X.Y.Z
09:04:17.861 [] [main] INFO  io.zeebe.broker.clustering - Starting standalone broker.
09:04:17.865 [service-controller] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.broker.transport - Bound managementApi.server to /0.0.0.0:51016
09:04:17.887 [service-controller] [0.0.0.0:51015-zb-actors-0] INFO  io.zeebe.transport - Bound clientApi.server to /0.0.0.0:51015
09:04:17.888 [service-controller] [0.0.0.0:51015-zb-actors-0] INFO  io.zeebe.transport - Bound replicationApi.server to /0.0.0.0:51017
09:04:17.911 [io.zeebe.broker.clustering.base.bootstrap.BootstrapSystemTopic] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.broker.clustering - Boostrapping internal system topic 'internal-system' with replication factor 1.
09:04:18.065 [service-controller] [0.0.0.0:51015-zb-actors-0] INFO  io.zeebe.raft - Created raft internal-system-0 with configuration RaftConfiguration{heartbeatInterval='250ms', electionInterval='1s', leaveTimeout='1s'}
09:04:18.069 [io.zeebe.broker.clustering.base.bootstrap.BootstrapSystemTopic] [0.0.0.0:51015-zb-actors-0] INFO  io.zeebe.broker.clustering - Bootstrapping default topics [TopicCfg{name='default-topic', partitions=1, replicationFactor=1}]
09:04:18.122 [internal-system-0] [0.0.0.0:51015-zb-actors-1] INFO  io.zeebe.raft - Joined raft in term 0

You will see output containing the broker version and various configuration parameters, such as directory locations and API socket addresses.

To continue this guide, open another terminal to execute commands using the Zeebe CLI, zbctl.

Step 3: Create a topic

To store data in Zeebe, you need a topic. To create a topic, you can use the zbctl command line tool. A binary of zbctl for all major operating systems can be found in the bin/ folder of the Zeebe distribution. The following examples use zbctl; replace it with the binary for the operating system you are using.

tree bin/
bin/
├── broker        - Zeebe broker startup script for Linux & MacOS
├── broker.bat    - Zeebe broker startup script for Windows
├── zbctl         - Zeebe CLI for Linux
├── zbctl.darwin  - Zeebe CLI for MacOS
└── zbctl.exe     - Zeebe CLI for Windows

To create a topic, we have to specify a name. For this guide, we will use the topic name quickstart.

./bin/zbctl create topic quickstart
{
  "Name": "quickstart",
  "Partitions": 1,
  "ReplicationFactor": 1
}

We can now see our new topic in the topology of the Zeebe broker.

./bin/zbctl describe topology
+-----------------+--------------+----------------+---------+
|   TOPIC NAME    | PARTITION ID | BROKER ADDRESS |  STATE  |
+-----------------+--------------+----------------+---------+
| internal-system |            0 | 0.0.0.0:51015  | LEADER  |
+-----------------+--------------+----------------+---------+
| default-topic   |            1 | 0.0.0.0:51015  | LEADER  |
+-----------------+--------------+----------------+---------+
| quickstart      |            2 | 0.0.0.0:51015  | LEADER  |
+-----------------+--------------+----------------+---------+

Step 4: Create a job

A work item in Zeebe is called a job. Every job has a user-specified type that identifies the category of work it represents; for this example, we will use the job type step4. A job can have a payload which is then used to execute the action required for this work item. In this example, we will set the payload to contain the key zeebe and the value 2018. When we create the job, we have to specify on which topic it should be created.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"zeebe\": 2018}"
  • Powershell: '{"\"zeebe"\": 2018}'
./bin/zbctl --topic quickstart create job step4 --payload '{"zeebe": 2018}'
{
  "deadline": 0,
  "worker": "",
  "headers": {
    "activityId": "",
    "activityInstanceKey": -1,
    "bpmnProcessId": "",
    "workflowDefinitionVersion": -1,
    "workflowInstanceKey": -1,
    "workflowKey": -1
  },
  "customHeaders": {},
  "retries": 3,
  "type": "step4",
  "payload": "gaV6ZWViZctAn4gAAAAAAA=="
}

Step 5: Complete a job

A job worker subscribes to a specific job type to work on jobs created for that type. To create a job worker, we need to specify the topic, the job type, and a job handler. The job handler processes the work item and completes the job when it is finished. zbctl allows us to specify a simple script or another external application as the handler. The handler receives the payload of the job on standard input and has to return the updated payload on standard output. The simplest job handler is cat, a Unix command that outputs its input without modifying it.

Note: For Windows users, this command does not work with cmd, as the cat command does not exist. We recommend using Powershell or a bash-like shell to execute this command.

./bin/zbctl --topic quickstart subscribe job --jobType step4 cat
{
  "deadline": 1528355391412,
  "worker": "zbctl-rzXWdBARGH",
  "headers": {
    "activityId": "",
    "activityInstanceKey": -1,
    "bpmnProcessId": "",
    "workflowDefinitionVersion": -1,
    "workflowInstanceKey": -1,
    "workflowKey": -1
  },
  "customHeaders": {},
  "retries": 3,
  "type": "step4",
  "payload": "gaV6ZWViZctAn4gAAAAAAA=="
}
Completing Job

This command creates a job worker on the topic quickstart for the job type step4. Whenever a new job of this type is created, the broker pushes it to this worker. You can try it out by opening another terminal and repeating the command from step 4 multiple times.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"zeebe\": 2018}"
  • Powershell: '{"\"zeebe"\": 2018}'
./bin/zbctl --topic quickstart create job step4 --payload '{"zeebe": 2018}'

In the terminal with the running worker you will see that it processes every new job.

To stop the worker press CTRL-C.
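
The handler contract described in this step (payload in on standard input, updated payload out on standard output) can be sketched in Python. This is an illustrative toy, not part of Zeebe; the processed field is hypothetical, added only to show a modification. A pass-through handler like cat would simply return the payload unchanged.

```python
import json

def handle_job(raw_payload: str) -> str:
    """Toy job handler: parse the JSON payload, add a field, return the update."""
    payload = json.loads(raw_payload)
    payload["processed"] = True  # hypothetical marker, for illustration only
    return json.dumps(payload)

# the handler reads this from stdin and writes the result to stdout
print(handle_job('{"zeebe": 2018}'))
```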

Step 6: Create a topic subscription

You can see all events which are published to a topic by creating a topic subscription. You have to specify the topic name.

./bin/zbctl --topic quickstart subscribe topic
[...]
{"Intent":"ACTIVATED","Key":4294969208,"PartitionID":2,"Position":4294970032,"Type":"JOB"}
{
  "deadline": 1528355441349,
  "worker": "zbctl-rzXWdBARGH",
  "headers": {
    "activityId": "",
    "activityInstanceKey": -1,
    "bpmnProcessId": "",
    "workflowDefinitionVersion": -1,
    "workflowInstanceKey": -1,
    "workflowKey": -1
  },
  "customHeaders": {},
  "retries": 3,
  "type": "step4",
  "payload": "gaV6ZWViZctAn4gAAAAAAA=="
}

{"Intent":"COMPLETE","Key":4294969208,"PartitionID":2,"Position":4294970360,"Type":"JOB"}
{
  "deadline": 1528355441349,
  "worker": "zbctl-rzXWdBARGH",
  "headers": {
    "activityId": "",
    "activityInstanceKey": -1,
    "bpmnProcessId": "",
    "workflowDefinitionVersion": -1,
    "workflowInstanceKey": -1,
    "workflowKey": -1
  },
  "customHeaders": {},
  "retries": 3,
  "type": "step4",
  "payload": "gaV6ZWViZctAn4gAAAAAAA=="
}

{"Intent":"COMPLETED","Key":4294969208,"PartitionID":2,"Position":4294970688,"Type":"JOB"}
{
  "deadline": 1528355441349,
  "worker": "zbctl-rzXWdBARGH",
  "headers": {
    "activityId": "",
    "activityInstanceKey": -1,
    "bpmnProcessId": "",
    "workflowDefinitionVersion": -1,
    "workflowInstanceKey": -1,
    "workflowKey": -1
  },
  "customHeaders": {},
  "retries": 3,
  "type": "step4",
  "payload": "gaV6ZWViZctAn4gAAAAAAA=="
}

The event stream now contains events describing the lifecycle of our example jobs of type step4, which begin with the CREATE state and end in the COMPLETED state.

To stop the topic subscription press CTRL-C.

Step 7: Deploy a workflow

A workflow is used to orchestrate loosely coupled job workers and the flow of data between them.

In this guide we will use an example process order-process.bpmn. You can download it with the following link: order-process.bpmn.

order-process

The process describes a sequential flow of three tasks: Collect Money, Fetch Items, and Ship Parcel. If you open the order-process.bpmn file in a text editor, you will see that every task has a type defined in the XML, which is later used as the job type.

<!-- [...] -->
<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="fetch-items" name="Fetch Items">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="inventory-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="ship-parcel" name="Ship Parcel">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="shipment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->

To complete an instance of this workflow we would need three job workers for the types payment-service, inventory-service and shipment-service.

But first let's deploy the workflow to the Zeebe broker. We have to specify the topic to deploy to and the resource we want to deploy, in our case the order-process.bpmn.

./bin/zbctl --topic quickstart create workflow order-process.bpmn
{
  "TopicName": "quickstart",
  "Resources": [
    {
      "Resource": "[...]",
      "ResourceType": "BPMN_XML",
      "ResourceName": "order-process.bpmn"
    }
  ],
  "DeployedWorkflows": [
    {
      "BpmnProcessId": "order-process",
      "Version": 1,
      "WorkflowKey": 1,
      "ResourceName": ""
    }
  ]
}

Step 8: Create a workflow instance

After the workflow is deployed we can create new instances of it. Every instance of a workflow is a single execution of the workflow. To create a new instance we have to specify the topic and the process ID from the BPMN file, in our case the ID is order-process.

<bpmn:process id="order-process" isExecutable="true">

Every instance of a workflow normally processes some kind of data. We can specify the initial data of the instance as payload when we start the instance.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"orderId\": 1234}"
  • Powershell: '{"\"orderId"\": 1234}'
./bin/zbctl --topic quickstart create instance order-process --payload '{"orderId": 1234}'
{
  "BPMNProcessID": "order-process",
  "Payload": "gadvcmRlcklky0CTSAAAAAAA",
  "Version": 1,
  "WorkflowInstanceKey": 4294971952
}

Step 9: Complete the workflow instance

To complete the instance, all three jobs have to be completed, so we need three job workers. Let's again use our simple cat worker for all three job types, starting each one as a background process (&).

Note: For Windows users, these commands do not work with cmd, as the cat command does not exist. We recommend using Powershell or a bash-like shell to execute these commands.

./bin/zbctl --topic quickstart subscribe job --jobType payment-service cat &
./bin/zbctl --topic quickstart subscribe job --jobType inventory-service cat &
./bin/zbctl --topic quickstart subscribe job --jobType shipment-service cat &

To verify that our workflow instance was completed after all jobs were processed we can again open a topic subscription. The last event should indicate that the workflow instance was completed.

./bin/zbctl --topic quickstart subscribe topic
{"Intent":"END_EVENT_OCCURRED","Key":4294983224,"PartitionID":2,"Position":4294983224,"Type":"WORKFLOW_INSTANCE"}
{
  "ActivityID": "",
  "BPMNProcessID": "order-process",
  "Payload": "gadvcmRlcklky0CTSAAAAAAA",
  "Version": 1,
  "WorkflowInstanceKey": 4294971952,
  "WorkflowKey": 1
}

{"Intent":"COMPLETED","Key":4294971952,"PartitionID":2,"Position":4294983464,"Type":"WORKFLOW_INSTANCE"}
{
  "ActivityID": "",
  "BPMNProcessID": "order-process",
  "Payload": "gadvcmRlcklky0CTSAAAAAAA",
  "Version": 1,
  "WorkflowInstanceKey": 4294971952,
  "WorkflowKey": 1
}

As you can see in the event log, the last event is a workflow instance event with the intent COMPLETED, which indicates that the instance we started in step 8 has now completed. We can start new instances in another terminal with the command from step 8.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the payload differently.

  • cmd: "{\"orderId\": 1234}"
  • Powershell: '{"\"orderId"\": 1234}'
./bin/zbctl --topic quickstart create instance order-process --payload '{"orderId": 1234}'

To close all subscriptions, press CTRL-C to stop the topic subscription and use the kill command to stop the background job worker processes.

kill %1 %2 %3

Next steps

To continue working with Zeebe, we recommend getting more familiar with its basic concepts; see the Basics chapter of the documentation.

The BPMN Workflows chapter provides an introduction to creating workflows with BPMN, and the BPMN Modeler chapter shows you how to model them yourself.

The documentation also provides getting started guides for implementing job workers using Java or Go.

Overview

This section provides an overview of Zeebe's core concepts. Understanding them helps you successfully build workflow applications.

Client / Server

Zeebe uses a client / server architecture. There are two main components: Broker and Client.

client-server

Broker

A Zeebe broker (server) has three main responsibilities:

  1. Storing and managing workflow data

  2. Distributing work items to clients

  3. Exposing a workflow event stream to clients via publish-subscribe

A Zeebe setup typically consists of more than one broker. Adding brokers scales storage and computing resources. In addition, it provides fault tolerance since Zeebe keeps multiple copies of its data on different brokers.

Brokers form a peer-to-peer network in which there is no single point of failure. This is possible because all brokers perform the same kind of tasks, and the responsibilities of an unavailable broker are transparently reassigned within the network.

Client

Clients are libraries which you embed into your application to connect to the broker.

Clients connect to the broker using Zeebe's binary protocols. The protocols are programming-language-agnostic, which makes it possible to write clients in different programming languages. There are a number of clients readily available.

Workflows

Workflows are flow-chart-like blueprints that define the orchestration of tasks. Every task represents a piece of business logic, and their ordered execution produces a meaningful result.

A job worker is your implementation of the business logic of a task. It must embed a Zeebe client library to connect to the broker but other than that there are no restrictions on its implementation. You can choose to write a worker as a microservice, as part of a classical three-tier application, as a (lambda) function, via command line tools, etc.

Running a workflow then requires two steps: Submitting the workflow to Zeebe and connecting the required workers.

Sequences

The simplest kind of workflow is an ordered sequence of tasks. Whenever workflow execution reaches a task, Zeebe creates a job and triggers the invocation of a job worker.

workflow-sequence

You can think of Zeebe's workflow orchestration as a state machine. Orchestration reaches a task, triggers the worker and then waits for the worker to complete its work. Once the work is completed, the flow continues with the next step. If the worker fails to complete the work, the workflow remains at the current step, potentially retrying the job until it eventually succeeds.
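
The state-machine behaviour described above can be sketched as a loop, assuming each worker reports success or failure. This is a toy model for illustration, not Zeebe's client API; the task name and retry count are hypothetical.

```python
def run_sequence(tasks, workers, max_retries=3):
    """Toy orchestration: advance task by task, retrying a failing worker."""
    for task in tasks:
        for _ in range(max_retries):
            if workers[task]():          # worker signals success with True
                break                    # move on to the next step
        else:
            return ("stuck", task)       # flow remains at the current step
    return ("completed", None)

attempts = {"count": 0}

def flaky_worker():
    attempts["count"] += 1
    return attempts["count"] >= 2        # fails once, then succeeds

state = run_sequence(["collect-money"], {"collect-money": flaky_worker})
print(state)  # ('completed', None)
```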

Data Flow

As Zeebe progresses from one task to the next in a workflow, it can move custom data in the form of a JSON document along. This data is called the workflow payload and is created whenever a workflow is started.

data-flow

Every job worker can read the current payload and modify it when completing a job so that data can be shared between different tasks in a workflow. A workflow model can contain simple, yet powerful payload transformation instructions to keep workers decoupled from each other.
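
This read-modify-complete cycle can be sketched as a pipeline in which each worker's output is merged back into the payload. This is a simplification for illustration; the field names and the merge rule are hypothetical, standing in for the payload transformation instructions mentioned above.

```python
def run_pipeline(payload, workers):
    """Toy data flow: each worker reads the payload and returns fields to merge."""
    for worker in workers:
        payload = {**payload, **worker(payload)}  # merge the worker's update
    return payload

def collect_money(p):
    return {"paid": True}                         # hypothetical output field

def fetch_items(p):
    return {"items": ["item-1"]} if p["paid"] else {}

result = run_pipeline({"orderId": 1234}, [collect_money, fetch_items])
print(result)  # {'orderId': 1234, 'paid': True, 'items': ['item-1']}
```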

Data-based Conditions

Some workflows do not always execute the same tasks but need to choose different tasks based on payload and conditions:

data-conditions

The diamond shape with the "X" in the middle is a special step marking where the workflow decides to take one path or the other.

Conditions use JSON Path to extract properties and values from the current payload document.
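
As a sketch of how such a condition evaluates, here is a minimal dot-path extractor. Real JSON Path is much richer than this; the payload structure, the totalPrice field, and the threshold are all hypothetical.

```python
import json

def extract(document, path):
    """Minimal JSON-Path-style lookup supporting only `$.a.b` dot paths."""
    value = document
    for key in path.lstrip("$.").split("."):
        value = value[key]
    return value

payload = json.loads('{"order": {"totalPrice": 120}}')
# a condition like `$.order.totalPrice > 100` picks one of the two paths
take_priority_path = extract(payload, "$.order.totalPrice") > 100
print(take_priority_path)  # True
```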

Fork / Join Concurrency

Coming soon

In many cases, it is also useful to perform multiple tasks in parallel. This can be achieved with Fork / Join concurrency.

BPMN 2.0

Zeebe uses BPMN 2.0 for representing workflows. BPMN is an industry standard which is widely supported by different vendors and implementations. Using BPMN ensures that workflows can be interchanged between Zeebe and other workflow systems.

YAML Workflows

In addition to BPMN 2.0, Zeebe supports a YAML workflow format. It can be used to rapidly write simple workflows in text. Unlike BPMN, it has no visual representation and is not standardized. Zeebe transforms YAML to BPMN on submission.

BPMN Modeler

Zeebe provides a BPMN modeling tool to create BPMN diagrams and maintain their technical properties. It is a desktop application based on the bpmn.io open source project.

The modeler can be downloaded from github.com/zeebe-io/zeebe-modeler.

Job Workers

A job worker is a component capable of performing a particular step in a workflow.

What is a Job?

A job is a work item in a workflow. For example:

  • Sending an email
  • Generating a PDF document
  • Updating customer data in a backend system

A job has the following properties:

  • Type: Describes the kind of work item this is, defined in the workflow. The type allows workers to specify which jobs they are able to perform.
  • Payload: The business data the worker should work on in the form of a JSON document/object.
  • Headers: Additional metadata defined in the workflow, e.g. describing the structure of the payload.
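
These properties correspond to the fields visible in the quickstart's zbctl output. As a plain data structure, a job might look like the following sketch (the concrete values are hypothetical):

```python
job = {
    "type": "step4",                             # matched against worker subscriptions
    "payload": {"zeebe": 2018},                  # business data as a JSON object
    "headers": {"activityId": "collect-money"},  # workflow-defined metadata
    "retries": 3,                                # remaining attempts before failure
}
```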

Connecting to the Broker

To start executing jobs, a worker must connect to the Zeebe broker, declaring the specific job type it can handle. The Zeebe broker then notifies the worker whenever new jobs are created, based on a publish-subscribe protocol. Upon receiving a notification, a worker performs the job and sends back a complete or fail message, depending on whether the job could be completed successfully or not.

task-subscriptions

Many workers can subscribe to the same job type in order to scale up processing. In this scenario, the Zeebe broker ensures that each job is published exclusively to only one of the workers. It eventually reassigns the job if the assigned worker is unresponsive. This means job processing has at-least-once semantics.
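
A toy model of this exclusive push with reassignment (not the real protocol) illustrates why processing is at-least-once: a job handed to an unresponsive worker is delivered again, so it may be seen more than once before it completes.

```python
def dispatch(job, workers):
    """Toy broker: push each job to exactly one worker, reassign on failure."""
    deliveries = 0
    for worker in workers:               # try subscribed workers in turn
        deliveries += 1
        try:
            worker(job)                  # exclusive delivery to this worker
            return deliveries            # completed: stop reassigning
        except RuntimeError:
            continue                     # unresponsive: reassign to the next one
    raise RuntimeError("no worker completed the job")

def broken(job):
    raise RuntimeError("worker timed out")

def healthy(job):
    pass

# the job is delivered twice before completing: at-least-once semantics
print(dispatch({"type": "step4"}, [broken, healthy]))  # 2
```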

Job Queueing

An important aspect of Zeebe is that a broker does not assume that a worker is always available or that it can process jobs at any particular rate.

Zeebe decouples creation of jobs from performing the work on them. It is always possible to create jobs at the highest possible rate, regardless of whether there is currently a worker available to work on them or not. This is possible as Zeebe queues jobs until it can push them out to workers. If no job worker is currently subscribed, jobs remain queued. If a worker is subscribed, Zeebe's backpressure protocols ensure that workers can control the rate at which they receive jobs.

This allows the system to take in bursts of traffic and effectively act as a buffer in front of the job workers.
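
The buffering described here can be sketched with a plain queue: producers enqueue at any rate, and a worker pulls at its own pace. Worker-controlled pulling is a simplification of Zeebe's actual backpressure protocol, used here only to show the decoupling.

```python
from collections import deque

queue = deque()

# jobs are created regardless of whether any worker is subscribed
for i in range(1000):
    queue.append({"type": "step4", "seq": i})

def work_batch(queue, batch_size):
    """Pull at most batch_size jobs, modelling worker-controlled flow."""
    return [queue.popleft() for _ in range(min(batch_size, len(queue)))]

# a worker that subscribes later drains the backlog at its own rate
batch = work_batch(queue, 32)
print(len(batch), len(queue))  # 32 968
```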

Standalone Jobs

Jobs can also be created standalone (i.e., without an orchestrating workflow). In that case, Zeebe acts like a regular queue.

Record Consumers

As Zeebe processes jobs and workflows, it generates an ordered stream of records:

record-stream

Each record has attributes that relate it to a workflow and a particular workflow instance. For example, the record for a started workflow instance looks as follows:

Record Type: EVENT
Value Type: WORKFLOW_INSTANCE
Intent: CREATED
Body:
{
    "bpmnProcessId": "process",
    "version": 1,
    "workflowKey": 4294970288,
    "workflowInstanceKey": 4294969008,
    "activityId": "",
    "payload": "gaNmb297"
}

Zeebe clients can subscribe to this stream at any point to gain visibility into the processing; this is called a topic subscription. Since the broker persists the record stream, you can process records at any time after they occur.

For example, you can build applications that:

  • Count the number of instances per workflow
  • Send an alert when a job has failed
  • Keep track of KPIs

An important concept is the position of a subscription. When opening a subscription, a consumer can choose to start at the beginning of the stream, at its current end, or anywhere in between. The broker keeps track of a subscription's position, allowing clients to disconnect and go offline. When a client reconnects, it continues processing from its last acknowledged position.
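The position-tracking behavior can be sketched as follows. This is an illustrative model only, not the real Zeebe client API; the class and method names are invented for the example.

```python
# Hypothetical sketch (not the real Zeebe client API): a broker-side cursor
# that remembers the last position a topic-subscription consumer acknowledged,
# so a reconnecting client resumes where it left off.

class TopicSubscription:
    def __init__(self, stream, start_position=0):
        self.stream = stream            # ordered list of (position, record)
        self.position = start_position  # last acknowledged position

    def poll(self, limit=10):
        """Return records after the last acknowledged position."""
        return [(pos, rec) for pos, rec in self.stream if pos > self.position][:limit]

    def acknowledge(self, position):
        """Persisted by the broker; survives client disconnects."""
        self.position = max(self.position, position)

stream = [(1, "CREATED"), (2, "ACTIVATED"), (3, "COMPLETED")]
sub = TopicSubscription(stream)
batch = sub.poll()
sub.acknowledge(batch[1][0])   # processed up to position 2
# after a reconnect, polling resumes from position 3
```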

Topics & Partitions

Note: If you have worked with the Apache Kafka System before, the concepts presented on this page will sound very familiar to you.

In Zeebe, all data is organized into topics. Topics are user-defined and have a name for distinction. Each topic is physically organized into one or more partitions, each of which is a persistent stream of workflow-related events. In a cluster of brokers, partitions are distributed among the nodes, so a partition can be thought of as a shard. When you create a topic, you specify its name and how many partitions you need.

Usage Examples

Whenever you deploy a workflow, you deploy it to a specific topic. The workflow is then distributed to all partitions of that topic. On all partitions, this workflow receives the same key and version such that it can be consistently identified within the topic.

When you start an instance of a workflow, you identify the topic this request addresses. The client library will then route the request to one partition of the topic in which the workflow instance will be published. All subsequent processing of the workflow instance will happen in that partition.

Data Separation

Use topics to separate your workflow-related data. Let us assume you operate Zeebe for different departments of your organization. You can create different topics for each department ensuring that their workflows do not interfere. For example, workflow versioning applies per topic.

Scalability

Use partitions to scale your workflow processing. Partitions are dynamically distributed in a Zeebe cluster and for each partition there is one leading broker at a time. This leader accepts requests and performs event processing for the partition. Let us assume you want to distribute a topic's workflow processing load over five machines. You can achieve that by creating a topic with five partitions.

Quality of Service

Use topics to assign dedicated resources to time-critical workflow events. A Zeebe broker reserves event processing resources for each partition. If you have some workflows where processing latency is critical and others with very high event ingestion, you can create separate topics for each so that the time-critical workflows are not delayed by the mass of unrelated events.

Partition Data Layout

A partition is a persistent append-only event stream. Initially, a partition is empty. The first entry inserted takes the first position, the second entry takes the second position, and so on. Each entry has a position in the partition which uniquely identifies it.

partition

Replication

For fault tolerance, data in a partition is replicated from the leader of the partition to its followers. Followers are other Zeebe Broker nodes that maintain a copy of the partition without performing event processing.

Protocols

Zeebe defines protocols for communication between clients and brokers. Zeebe supports two styles of interaction:

  • Command Protocol: A client sends a command to a broker and expects a response. Example: Completing a job.
  • Subscription Protocol: A broker streams data to a client. Example: Broker streams jobs to a client to work on.

Both protocols are binary on top of TCP/IP.

Non-Blocking

Zeebe protocols are non-blocking by design, including commands. Commands and replies are always separate, independent messages. A client can send multiple commands over the same TCP connection before receiving a reply.

Streaming

The subscription protocol works in streaming mode: A broker pushes out jobs and records to the subscribers in a non-blocking way. To keep latency low, the broker pushes out jobs and records to the clients as soon as they become available.

Backpressure

The subscription protocol embeds a backpressure mechanism to prevent brokers from overwhelming clients with more jobs or records than they can handle. This mechanism is client-driven, i.e. clients submit a capacity of records they can safely receive at a time. A broker pushes records until this capacity is exhausted and the client replenishes the capacity whenever it completes processing of records.

The effect of backpressure is that the system automatically adjusts flow rates to the available resources. For example, a fast consumer running on a powerful machine can process jobs at a very high rate, while a slower consumer is automatically throttled. This works without additional user configuration.
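The client-driven capacity mechanism described above can be sketched as credit-based flow control. This is an assumption about the general mechanism for illustration, not Zeebe's actual wire protocol; the class and method names are invented.

```python
# Illustrative credit-based flow control: the client grants a capacity of
# records, the broker pushes until the credits run out, and the client
# replenishes them as it finishes processing.

class JobStream:
    def __init__(self):
        self.credits = 0
        self.pushed = []

    def grant(self, n):          # client side: announce capacity
        self.credits += n

    def push(self, job):         # broker side: push only while credits remain
        if self.credits == 0:
            return False         # backpressure: hold the job back
        self.credits -= 1
        self.pushed.append(job)
        return True

stream = JobStream()
stream.grant(2)
stream.push("job-1")
stream.push("job-2")
stream.push("job-3")   # rejected: the client is saturated
stream.grant(1)        # client finished a job and replenishes capacity
stream.push("job-3")   # now accepted
```

A slow consumer simply grants credits less often, which throttles the broker automatically, without any user-facing rate configuration.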

Internal Processing

Internally, Zeebe is implemented as a collection of stream processors working on record streams (partitions). The stream processing model was chosen because it provides a unified approach to:

  • Command Protocol (Request-Response),
  • Subscription Protocol (Streaming),
  • Workflow Evaluation (Asynchronous Background Tasks)

In addition, it solves the history problem: The record stream provides exactly the kind of exhaustive audit log that a workflow engine needs to produce.

State Machines

Zeebe manages stateful entities: Jobs, Workflows, etc. Internally, these entities are implemented as State Machines managed by a broker.

The concept of the state machine pattern is simple: An instance of a state machine is always in one of several logical states. From each state, a set of transitions defines the next possible states. Transitioning into a new state may produce outputs/side effects.

Let's look at the state machine for jobs. Simplified, it looks as follows:

partition

Every oval is a state. Every arrow is a state transition. Note how each state transition is only applicable in a specific state. For example, it is not possible to complete a job when it is in state CREATED.
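The state-machine pattern can be sketched in a few lines. The job lifecycle below is a simplified assumption for illustration (the broker's real lifecycle has more states and transitions); it shows the key property that each transition is only applicable in a specific state, e.g. a job cannot be completed while still in state CREATED.

```python
# Minimal state-machine sketch with an assumed, simplified job lifecycle.
# A transition table maps (current state, command) -> next state; anything
# not in the table is rejected.

TRANSITIONS = {
    ("CREATED", "LOCK"): "LOCKED",
    ("LOCKED", "COMPLETE"): "COMPLETED",
    ("LOCKED", "FAIL"): "FAILED",
    ("FAILED", "RETRY"): "LOCKED",
}

class Job:
    def __init__(self):
        self.state = "CREATED"

    def apply(self, command):
        key = (self.state, command)
        if key not in TRANSITIONS:
            raise ValueError(f"command {command} rejected in state {self.state}")
        self.state = TRANSITIONS[key]   # a state change = an event on the stream

job = Job()
job.apply("LOCK")
job.apply("COMPLETE")   # job.state is now "COMPLETED"
```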

Events and Commands

Every state change in a state machine is called an event. Zeebe publishes every event as a record on the stream.

State changes can be requested by submitting a command. A Zeebe broker receives commands from two sources:

  1. Clients send commands remotely. Examples: Deploying workflows, starting workflow instances, creating and completing jobs, etc.
  2. The broker itself generates commands. Examples: Locking a job for exclusive processing by a worker, etc.

Once received, a command is published as a record on the addressed stream.

Stateful Stream Processing

A stream processor reads the record stream sequentially and interprets the commands with respect to the addressed entity's lifecycle. More specifically, a stream processor repeatedly performs the following steps:

  1. Consume the next command from the stream.
  2. Determine whether the command is applicable based on the state lifecycle and the entity's current state.
  3. If the command is applicable: Apply it to the state machine. If the command was sent by a client, send a reply/response.
  4. If the command is not applicable: Reject it. If it was sent by a client, send an error reply/response.
  5. Publish an event reporting the entity's new state.

For example, processing the Create Job command produces the event Job Created.
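Steps 1 to 5 above can be sketched as a loop over the record stream. The transition table and state names here are toy assumptions; the real broker's applicability rules are far richer.

```python
# Sketch of the stateful stream-processing loop: consume commands in order,
# apply the applicable ones, reject the rest, and publish resulting events.

def process(stream, transitions):
    state = {}     # entity key -> current state
    events = []    # the published record stream
    for entity, command in stream:
        current = state.get(entity, "NONE")
        nxt = transitions.get((current, command))
        if nxt is None:
            events.append((entity, "REJECTED", command))   # error reply to client
        else:
            state[entity] = nxt
            events.append((entity, "EVENT", nxt))          # e.g. Create Job -> Job Created
    return events

transitions = {("NONE", "CREATE"): "CREATED", ("CREATED", "COMPLETE"): "COMPLETED"}
log = process([("job-1", "CREATE"), ("job-1", "COMPLETE"), ("job-1", "COMPLETE")],
              transitions)
# the third command is rejected: the job is already COMPLETED
```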

Command Triggers

A state change which occurred in one entity can automatically trigger a command for another entity. Example: When a job is completed, the corresponding workflow instance shall continue with the next step. Thus, the Event Job Completed triggers the command Complete Activity.

Performance

Zeebe is designed for performance, applying the following design principles:

  • Batching of I/O operations
  • Linear read/write data access patterns
  • Compact, cache-optimized data structures
  • Lock-free algorithms and actor concurrency (green threads model)
  • Broker is garbage-free in the hot/data path

As a result, Zeebe is capable of very high throughput on a single node and scales horizontally (see this benchmarking blog post for more detail).

Clustering

Zeebe can operate as a cluster of brokers, forming a peer-to-peer network. In this network, all brokers have the same responsibilities and there is no single point of failure.

cluster

Gossip Membership Protocol

Zeebe implements the Gossip protocol to know which brokers are currently part of the cluster.

The cluster is bootstrapped using a set of well-known bootstrap brokers, to which the other brokers can connect. To achieve this, each broker must have at least one bootstrap broker configured as its initial contact point:

[network.gossip]
initialContactPoints = [ "node1.mycluster.loc:51016" ]

When a broker is connected to the cluster for the first time, it fetches the topology from the initial contact points and then starts gossiping with the other brokers. Brokers keep cluster topology locally across restarts.

Raft Consensus and Replication Protocol

To ensure fault tolerance, Zeebe replicates data across machines using the Raft protocol.

Data is organized in topics which are divided into partitions (shards). Each partition has a number of replicas. Among the replica set, a leader is determined by the Raft protocol; the leader takes in requests and performs all the processing. All other brokers are passive followers. When the leader becomes unavailable, the followers transparently elect a new leader.

Each broker in the cluster may be both leader and follower at the same time for different partitions. This way, client traffic is distributed evenly across all brokers.

cluster

Commit

Before a new record on a partition can be processed, it must be replicated to a quorum (typically a majority) of followers. This procedure is called commit. Committing ensures that a record is durable even in case of complete data loss on an individual broker. The exact semantics of committing are defined by the Raft protocol.

cluster
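The quorum rule can be stated as a one-line check: with a majority of replicas holding a record, any two quorums overlap, so the record survives the loss of any minority of brokers.

```python
# Sketch of the commit rule: a record counts as committed once it is
# replicated to a quorum (majority) of the replica set.

def is_committed(replicated_on, replica_count):
    """replicated_on: number of replicas (including the leader) holding the record."""
    quorum = replica_count // 2 + 1
    return replicated_on >= quorum

# With 3 replicas, 2 copies suffice; with 5 replicas, at least 3 are needed.
```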

BPMN Workflows

Zeebe uses visual workflows based on the industry standard BPMN 2.0.

workflow

Read more about:

BPMN Primer

Business Process Model And Notation 2.0 (BPMN) is an industry standard for workflow modeling and execution. A BPMN workflow is an XML document that has a visual representation. For example, here is a BPMN workflow:

workflow

This is the corresponding XML: Click here.

This duality makes BPMN very powerful. The XML document contains all the information necessary for interpretation by workflow engines such as Zeebe and by modelling tools. At the same time, the visual representation contains just enough information to be quickly understood by humans, even non-technical ones. The BPMN model is source code and documentation in one artifact.

The following is an introduction to BPMN 2.0, its elements and their execution semantics. It tries to briefly provide an intuitive understanding of BPMN's power, but does not cover the entire feature set. For more exhaustive BPMN resources, see the reference links at the end of this section.

Modeling BPMN Diagrams

The best tool for modeling BPMN diagrams for Zeebe is Zeebe Modeler.

BPMN Elements

Sequence Flow: Controlling the Flow of Execution

A core concept of BPMN is sequence flow that defines the order in which steps in the workflow happen. In BPMN's visual representation, a sequence flow is an arrow connecting two elements. The direction of the arrow indicates their order of execution.

workflow

You can think of workflow execution as tokens running through the workflow model. When a workflow is started, a token is spawned at the beginning of the model. It advances with every completed step. When the token reaches the end of the workflow, it is consumed and the workflow instance ends. Zeebe's task is to drive the token and to make sure that the job workers are invoked whenever necessary.

Tasks: Units of Work

The basic elements of BPMN workflows are tasks, atomic units of work that are composed to create a meaningful result. Whenever a token reaches a task, the token stops and Zeebe creates a job and notifies a registered worker to perform work. When the worker signals completion, the token continues on the outgoing sequence flow.

Choosing the granularity of a task is up to the person modeling the workflow. For example, the activity of processing an order can be modeled as a single Process Order task, or as three individual tasks Collect Money, Fetch Items, Ship Parcel. If you use Zeebe to orchestrate microservices, one task can represent one microservice invocation.

See the Tasks section on which types of tasks are currently supported and how to use them.

Gateways: Steering Flow

Gateways are elements that route tokens in more complex patterns than plain sequence flow.

BPMN's exclusive gateway chooses one sequence flow out of many based on data:

BPMN's parallel gateway generates new tokens by activating multiple sequence flows in parallel:

See the Gateways section on which types of gateways are currently supported and how to use them.

Events: Waiting for Something to Happen

Events in BPMN represent things that happen. A workflow can react to events (catching event) as well as emit events (throwing event). For example:

The circle with the envelope symbol is a catching message event. It makes the token continue as soon as a message is received. The XML representation of the workflow contains the criteria for which kind of message triggers continuation.

Events can be added to the workflow in various ways. Not only can they be used to make a token wait at a certain point, but also to interrupt a token's progress.

See the Events section on which types of events are currently supported and how to use them.

BPMN Coverage

As noted in the individual sections, Zeebe currently only supports a subset of BPMN elements. It is a goal for the Zeebe project to eventually support large parts of the standard.

Additional Resources

Data Flow

Every BPMN workflow instance has a JSON document associated with it, called the workflow instance payload. The payload carries contextual data of the workflow instance that is required by job workers to do their work. It can be provided when a workflow instance is created. Job workers can read and modify it.

payload

Payload is the link between workflow instance context and workers and therefore a form of coupling. We distinguish two cases:

  1. Tightly coupled job workers: Job workers work with the exact JSON structure that the workflow instance payload has. This approach is applicable when workers are used only in a single workflow and the payload format can be agreed upon. It is convenient when building both, workflow and job workers, from scratch.
  2. Loosely coupled job workers: Job workers work with a different JSON structure than the workflow instance payload. This is often the case when job workers are reused for many workflows or when workers are developed independently of the workflow.

Without additional configuration, Zeebe assumes tightly coupled workers. That means, on job execution the workflow instance payload is provided as is to the job worker:

payload

When the worker modifies the payload, the result is merged on top-level into the workflow instance payload.
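This default top-level merge can be sketched as a plain dictionary merge. The function name is invented for illustration; payloads are treated as flat JSON objects.

```python
# Sketch of the default (tightly coupled) output behavior: the worker's
# result is merged on the top level into the workflow instance payload,
# with keys from the result taking precedence.

def merge_top_level(instance_payload, job_result):
    merged = dict(instance_payload)
    merged.update(job_result)
    return merged

payload = {"orderId": 42, "paid": False}
result = {"paid": True, "transactionId": "tx-7"}
merge_top_level(payload, result)
# -> {"orderId": 42, "paid": True, "transactionId": "tx-7"}
```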

In order to use loosely coupled job workers, the workflow can be extended by payload mappings based on JSONPath. Before providing the job to the worker, Zeebe applies the mappings to the payload and generates a new JSON document. Upon job completion, the same principle is applied to map the result back into the workflow instance payload:

payload

See the Tasks section for how to define payload mappings in BPMN.

Note: Mappings are not a tool for performance optimization. While a smaller document can save network bandwidth when publishing the job, the broker has extra effort of applying the mappings during workflow execution.

Tasks

Currently supported elements:

workflow

Service Tasks

workflow

A service task represents a work item in the workflow with a specific type. When the workflow instance arrives at a service task, a corresponding job is created. The token flow stops at this point.

A worker can subscribe to these jobs and complete them when the work is done. When a job is completed, the token flow continues.

Read more about job handling.

XML representation:

<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
    <zeebe:taskHeaders>
      <zeebe:header key="method" value="VISA" />
    </zeebe:taskHeaders>
  </bpmn:extensionElements>
</bpmn:serviceTask>

BPMN Modeler: Click Here

Task Definition

Each service task must have a task definition. It specifies the type of the job which workers can subscribe to.

Optionally, a task definition can specify the number of times the job is retried when a worker signals failure (default = 3).

<zeebe:taskDefinition type="payment-service" retries="5" />

BPMN Modeler: Click Here

Task Headers

A service task can define an arbitrary number of task headers. Task headers are metadata that are handed to workers along with the job. They can be used as configuration parameters for the worker.

<zeebe:taskHeaders>
  <zeebe:header key="method" value="VISA" />
</zeebe:taskHeaders>

BPMN Modeler: Click Here

Payload Mapping

In order to map workflow instance payload to a format that is accepted by the job worker, payload mappings can be configured. We distinguish between input and output mappings. Input mappings are used to extract data from the workflow instance payload and generate the job payload. Output mappings are used to merge the job result with the workflow instance payload on job completion.

Payload mappings can be defined as pairs of JSONPath expressions. Each mapping has a source and a target expression. The source expression describes the path in the source document from which to copy data. The target expression describes the path in the new document to which the data should be copied. When multiple mappings are defined, they are applied in the order of their appearance. For details and examples, see the references on JSONPath and JSON Payload Mapping.
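Applying a list of source/target pairs can be sketched for the simple single-level case like "$.price". This is a toy subset for illustration only; real Zeebe mappings support full JSONPath expressions.

```python
# Sketch of applying (source, target) mapping pairs, restricted to
# single-level paths of the form "$.key".

def apply_mappings(source_doc, mappings):
    """mappings: list of (source_path, target_path), e.g. ("$.price", "$.total")."""
    target_doc = {}
    for source_path, target_path in mappings:   # applied in order of appearance
        key_in = source_path.removeprefix("$.")
        key_out = target_path.removeprefix("$.")
        target_doc[key_out] = source_doc[key_in]
    return target_doc

payload = {"price": 100, "customer": "Paul"}
apply_mappings(payload, [("$.price", "$.total")])
# -> {"total": 100}
```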

Example:

payload

XML representation:

<serviceTask id="collectMoney">
    <extensionElements>
      <zeebe:ioMapping>
        <zeebe:input source="$.price" target="$.total"/>
        <zeebe:output source="$.paymentMethod" target="$.paymentMethod"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>

BPMN Modeler: Click Here

When no output mapping is defined, the job payload is by default merged into the workflow instance payload. This default output behavior is configurable via the outputBehavior attribute on the <ioMapping> tag. It accepts three different values:

  • MERGE merges the job payload into the workflow instance payload, if no output mapping is specified. If output mappings are specified, then the payloads are merged according to those. This is the default output behavior.
  • OVERWRITE overwrites the workflow instance payload with the job payload. If output mappings are specified, then the content is extracted from the job payload, which then overwrites the workflow instance payload.
  • NONE indicates that the worker does not produce any output. Output mappings cannot be used in combination with this behavior. The Job payload is simply ignored on job completion and the workflow instance payload remains unchanged.

Example:

<serviceTask id="collectMoney">
    <extensionElements>
      <zeebe:ioMapping outputBehavior="overwrite">
        <zeebe:input source="$.price" target="$.total"/>
        <zeebe:output source="$.paymentMethod" target="$.paymentMethod"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>
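The three outputBehavior options can be summarized in a short sketch. This is an assumed simplification for illustration (flat payloads, mappings already applied); the function name is invented.

```python
# Sketch of the three outputBehavior values: MERGE (default), OVERWRITE, NONE.

def apply_output(instance_payload, job_payload, behavior="MERGE"):
    if behavior == "NONE":
        return dict(instance_payload)        # job payload is ignored
    if behavior == "OVERWRITE":
        return dict(job_payload)             # instance payload is replaced
    merged = dict(instance_payload)          # MERGE: job payload wins per key
    merged.update(job_payload)
    return merged

state = {"orderId": 1, "paid": False}
result = {"paid": True}
# MERGE keeps orderId, OVERWRITE drops it, NONE ignores the result entirely
```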

Gateways

Currently supported elements:

workflow

Exclusive Gateway (XOR)

workflow

An exclusive gateway chooses one of its outgoing sequence flows for continuation. Each sequence flow has a condition that is evaluated in the context of the current workflow instance payload. The workflow instance takes the first sequence flow whose condition is fulfilled.

If no condition is fulfilled, then it takes the default flow which has no condition. In case the gateway has no default flow (not recommended), the execution stops and an incident is created.

Read more about conditions in the JSON Conditions reference.

XML representation:

<bpmn:exclusiveGateway id="exclusiveGateway" default="else" />

<bpmn:sequenceFlow id="priceGreaterThan100" name="$.totalPrice &#62; 100" sourceRef="exclusiveGateway" targetRef="shipParcelWithInsurance">
  <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">
    <![CDATA[ $.totalPrice > 100 ]]>
  </bpmn:conditionExpression>
</bpmn:sequenceFlow>

<bpmn:sequenceFlow id="else" name="else" sourceRef="exclusiveGateway" targetRef="shipParcel" />

BPMN Modeler: Click Here

Events

Currently supported elements:

workflow workflow

Start Events

A workflow must have exactly one none start event. The event is triggered when the workflow is started via the API, and in consequence a token is spawned at the event.

XML representation:

<bpmn:startEvent id="order-placed" name="Order Placed" />

End Events

A workflow can have one or more none end events. When a token arrives at an end event, it is consumed. If it is the last token, the entire workflow instance ends.

XML representation:

<bpmn:endEvent id="order-delivered" name="Order Delivered" />

Note that an activity without outgoing sequence flow has the same semantics as a none end event. After the task is completed, the token is consumed and the workflow instance may end.

BPMN Modeler

Zeebe Modeler is a tool to create BPMN 2.0 models that run on Zeebe.

overview

Learn how to create models:

Introduction

The modeler's basic elements are:

  • Element/Tool Palette
  • Context Menu
  • Properties Panel

Palette

The palette can be used to add new elements to the diagram. It also provides tools for navigating the diagram.

intro-palette

Context Menu

The context menu of a model element allows you to quickly delete the element as well as add new follow-up elements.

intro-context-menu

Properties Panel

The properties panel can be used to configure element properties required for execution. Select an element to inspect and modify its properties.

intro-properties-panel

Tasks

Create a Service Task

create-task

Configure Job Type

task-type

Add Task Header

task-header

Add Input/Output Mapping

task-mapping

Gateways

Create an Exclusive Gateway

create-exclusive-gw

Configure Sequence Flow Condition

exclusive-gw-condition

Configure Default Flow

exclusive-gw-default

YAML Workflows

In addition to BPMN, Zeebe provides a YAML format to define workflows. Creating a YAML workflow can be done with a regular text editor and does not require a graphical modelling tool. It is inspired by imperative programming concepts and aims to be easily understandable by programmers. Internally, Zeebe transforms a deployed YAML file to BPMN.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Read more about:

Tasks

A workflow can contain multiple tasks, where each represents a step in the workflow.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      retries: 5

    - id: ship-parcel
      type: shipment-service
      headers:
            method: "express"
            withInsurance: false

Each task has the following properties:

  • id (required): the unique identifier of the task.
  • type (required): the name to which job workers can subscribe.
  • retries: the number of times the job is retried in case of failure (default = 3).
  • headers: a list of metadata in the form of key-value pairs that can be accessed by a worker.

When Zeebe executes a task, it creates a job that is handed to a job worker. The worker can perform the business logic and complete the job eventually to trigger continuation in the workflow.

Related resources:

Control Flow

Control flow is about the order in which tasks are executed. The YAML format provides tools to decide which task is executed when.

Sequences

In a sequence, a task is executed after the previous one is completed. By default, tasks are executed top-down as they are declared in the YAML file.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

In the example above, the workflow starts with collect-money, followed by fetch-items and ends with ship-parcel.

We can use the goto and end attributes to define a different order:

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      goto: ship-parcel

    - id: fetch-items
      type: inventory-service
      end: true

    - id: ship-parcel
      type: shipment-service
      goto: fetch-items

In the above example, we have reversed the order of fetch-items and ship-parcel. Note that the end attribute is required so that workflow execution stops after fetch-items.

Data-based Conditions

Some workflows do not always execute the same tasks but need to pick and choose different tasks, based on workflow instance payload.

To achieve this, the switch attribute together with JSON-Path-based conditions can be used.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      switch:
          - case: $.totalPrice > 100
            goto: ship-parcel-with-insurance

          - default: ship-parcel

    - id: ship-parcel-with-insurance
      type: shipment-service-premium
      end: true

    - id: ship-parcel
      type: shipment-service

In the above example, the order-process starts with collect-money, followed by fetch-items. If the totalPrice value in the workflow instance payload is greater than 100, then it continues with ship-parcel-with-insurance. Otherwise, ship-parcel is chosen. In either case, the workflow instance ends after that.

In the switch element, there is one case element per alternative to choose from. If none of the conditions evaluates to true, then the default element is taken. While default is not required, it is best practice to include one to avoid errors at workflow runtime. Should such an error occur (i.e., no case is fulfilled and there is no default), workflow execution stops and an incident is raised.

Related resources:

Data Flow

Zeebe carries a JSON document from task to task. This is called the workflow instance payload. For every task, we can define input and output mappings in order to transform the workflow instance payload JSON to a JSON document that the job worker can work with.

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      inputs:
          - source: $.totalPrice
            target: $.price
      outputs:
          - source: $.success
            target: $.paymentSuccess

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Every mapping element has a source and a target element, each of which must be a JSONPath expression. source defines which data is extracted from the source payload and target defines how the value is inserted into the target payload.

Related resources:

Reference

This section gives in-depth explanations of Zeebe usage concepts.

Topics & Partitions

Every event in Zeebe belongs to a partition. Groups of partitions are organized in topics which share a common set of workflow definitions. It is up to you to define the granularity of topics and partitions. This section provides assistance with doing that.

Recommendations

Choosing the number of topics and partitions depends on the use case, workload and cluster setup. Here are some rules of thumb:

  • For testing and early development, start with a single topic and a single partition. Note that Zeebe's workflow processing is highly optimized for efficiency, so a single partition can already handle high event loads. See the Performance section for details.
  • With a single Zeebe Broker, a single partition per topic is always enough as there is nothing to scale to.
  • Avoid micro topics, i.e. many topics with little throughput. Each partition requires some processing resources and coordination of Zeebe cluster nodes.
  • Base your decisions on data. Simulate the expected workload, measure and compare the performance of different topic-partition setups.

Job Handling

TODO: Concepts of task handling via subscriptions, lifecycle, retries, incidents, etc.

Incidents

If a payload mapping cannot be applied, e.g. because it references non-existing properties, an incident is raised. An incident indicates that workflow instance execution cannot continue without administrative repair. Incidents are published as records to the topic and can be received via topic subscriptions. For payload-related incidents, updating the corresponding activity instance payload resolves the incident.

JSON

Message Pack

For performance reasons, JSON is encoded using MessagePack. MessagePack allows the broker to traverse a JSON document on the binary level, without interpreting it as text and without the need for complex parsing.

As a user, you do not need to deal with MessagePack. The client libraries take care of converting between MessagePack and JSON.

JSONPath

JSONPath is an expression language to extract data from a JSON document. It solves similar use cases that XPath solves for XML. See the JSONPath documentation for general information on the language.

JSONPath Support in Zeebe

The following table contains the JSONPath features/syntax which are supported by Zeebe:

JSONPath Description Supported
$ The root object/element Yes
@ The current object/element No
. or [] Child operator Yes
.. Recursive descent No
* Wildcard, matches all objects/elements regardless of their names Yes
[] Subscript operator Yes
[,] Union operator No
[start:end:step] Array slice operator No
?() Applies a filter (script) expression No
() Script expression, using underlying script engine No

JSON Conditions

Conditions can be used for conditional flows to determine the following task.

A condition is a boolean expression with a JavaScript-like syntax. It allows comparing properties of the workflow instance payload with other properties or literals (e.g., numbers, strings, etc.). The payload properties are selected using JSON Path.

$.totalPrice > 100

$.owner == "Paul"

$.orderCount >= 5 && $.orderCount < 15

Literals

Literal Examples
JSON Path $.totalPrice, $.order.id, $.items[0].name
Number 25, 4.5, -3, -5.5
String "Paul", 'Jonny'
Boolean true, false
Null-Value null

A Null-Value can be used to check if a property is set (e.g., $.owner == null). If the property doesn't exist, then the JSON Path evaluation fails.

Comparison Operators

Operator Description Example
== equal to $.owner == "Paul"
!= not equal to $.owner != "Paul"
< less than $.totalPrice < 25
<= less than or equal to $.totalPrice <= 25
> greater than $.totalPrice > 25
>= greater than or equal to $.totalPrice >= 25

The operators <, <=, > and >= can only be used for numbers.

If the operands of an operator have different types, then the evaluation fails.

Logical Operators

Operator Description Example
&& and $.orderCount >= 5 && $.orderCount < 15
|| or $.orderCount > 15 || $.totalPrice > 50

It's also possible to use parentheses to change the operator precedence (e.g., ($.owner == "Paul" || $.owner == "Jonny") && $.totalPrice > 25).
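
As an illustration of these semantics, the condition $.orderCount >= 5 && $.orderCount < 15 could be checked against a payload as sketched below. Zeebe parses and evaluates such expressions inside the broker; this Go snippet only mirrors the intended behavior for this one condition:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// conditionHolds mirrors the condition
//   $.orderCount >= 5 && $.orderCount < 15
// including the rule that comparison operators are only
// defined for numbers: a non-numeric orderCount (or a missing
// one) makes the evaluation fail with an error.
func conditionHolds(payload []byte) (bool, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(payload, &doc); err != nil {
		return false, err
	}
	orderCount, ok := doc["orderCount"].(float64)
	if !ok {
		return false, fmt.Errorf("$.orderCount is not a number")
	}
	return orderCount >= 5 && orderCount < 15, nil
}

func main() {
	ok, _ := conditionHolds([]byte(`{"orderCount": 7}`))
	fmt.Println(ok) // true
}
```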

JSON Payload Mapping

This section describes the semantics when mapping JSON payloads. Mappings map segments of JSON data from a source document to a target document. In the context of workflow execution, there are two types of mappings:

  1. Input mappings map workflow instance payload to task payload.
  2. Output mappings map task payload back into workflow instance payload.

Semantics

Payload mapping follows these rules:

  • When no mapping is defined and source payload is available: Success. Source payload is merged on top-level into target payload.
  • When no mapping is defined and no source payload is available: Success. Target payload remains unchanged.
  • When a mapping is defined and source payload is available: Success. Mapping is used to merge source payload into target payload.
  • When a mapping is defined and no source payload is available: Failure. A task-related incident is raised.

Note that source payload is only unavailable when a task is completed without payload.
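
The default top-level merge (the first rule above) behaves like copying every top-level property of the source document into the target document, overwriting properties with the same name. A minimal sketch, assuming both documents have been parsed into maps:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeTopLevel applies the default mapping rule: every top-level
// property of source is copied into target, overwriting properties
// of the same name. Nested objects are replaced as a whole.
func mergeTopLevel(target, source map[string]interface{}) {
	for key, value := range source {
		target[key] = value
	}
}

func main() {
	var target, source map[string]interface{}
	json.Unmarshal([]byte(`{"prices": [199.99, 29.99, 4.99]}`), &target)
	json.Unmarshal([]byte(`{"sum": 234.97}`), &source)

	mergeTopLevel(target, source)

	out, _ := json.Marshal(target)
	fmt.Println(string(out)) // {"prices":[199.99,29.99,4.99],"sum":234.97}
}
```

This corresponds to the "Default Merge" rows in the output mapping patterns further below.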

Patterns

Input Mappings

Description Workflow Instance Payload Input Mapping Task Payload
Copy entire payload
{
 "price": 342.99,
 "productId": 41234
}
    
Source: $
Target: $
    
{
 "price": 342.99,
 "productId": 41234
}
    
Move payload into new object
{
 "price": 342.99,
 "productId": 41234
}
    
Source: $
Target: $.orderedItem
    
{
  "orderedItem": {
    "price": 342.99,
    "productId": 41234
  }
}
    
Extract object
{
 "address": {
    "street": "Borrowway 1",
    "postcode": "SO40 9DA",
    "city": "Southampton",
    "country": "UK"
  },
 "name": "Hans Horn"
}
    
Source: $.address
Target: $
    
{
  "street": "Borrowway 1",
  "postcode": "SO40 9DA",
  "city": "Southampton",
  "country": "UK"
}
    
Extract and put into new object
{
 "address": {
    "street": "Borrowway 1",
    "postcode": "SO40 9DA",
    "city": "Southampton",
    "country": "UK"
  },
 "name": "Hans Horn"
}
    
Source: $.address
Target: $.newAddress
    
{
 "newAddress":{
  "street": "Borrowway 1",
  "postcode": "SO40 9DA",
  "city": "Southampton",
  "country": "UK"
 }
}
    
Extract and put into new objects
{
 "order":
 {
  "customer":{
   "name": "Hans Horst",
   "customerId": 231
  },
  "price": 34.99
 }
}
    
Source: $.order.customer
Target: $.new.details
    
{
 "new":{
   "details": {
     "name": "Hans Horst",
     "customerId": 231
  }
 }
}
    
Extract array and put into new array
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers
Target: $.contactNrs
    
{
 "contactNrs": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ]
}
    
Extract single array value and put into new array
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers[1]
Target: $.contactNrs[0]
    
{
 "contactNrs": [
   "312-312313"
  ]
}
    
Extract single array value and put into property
{
 "name": "Hans Hols",
 "numbers": [
   "221-3231-31",
   "312-312313",
   "31-21313-1313"
  ],
  "age": 43
}
    
Source: $.numbers[1]
Target: $.contactNr
    
{
 "contactNr": "312-312313"
}
    

For more examples see the extract mapping tests.

Output Mapping

Description Job Payload Workflow Instance Payload Output Mapping Result
Default Merge
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
 none 
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "sum": 234.97
}
  
Default Merge without workflow payload
{
 "sum": 234.97
}
  
{
}
  
 none 
{
 "sum": 234.97
}
  
Replace with entire payload
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $
Target: $
  
{
 "sum": 234.97
}
  
Merge payload and write into new object
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $
Target: $.total
  
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "total": {
  "sum": 234.97
 }
}
  
Replace payload with object value
{
 "order":{
  "id": 12,
  "sum": 21.23
 }
}
  
{
 "ordering": true
}
  
Source: $.order
Target: $
  
{
  "id": 12,
  "sum": 21.23
}
  
Merge payload and write into new property
{
 "sum": 234.97
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $.sum
Target: $.total
  
{
 "prices": [
   199.99,
   29.99,
   4.99],
 "total": 234.97
}
  
Merge payload and write into array
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
{
 "orderId": 12
}
  
Source: $.prices
Target: $.prices
  
{
 "orderId": 12,
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Merge and update array value
{
 "newPrices": [
   199.99,
   99.99,
   4.99]
}
  
{
 "prices": [
   199.99,
   29.99,
   4.99]
}
  
Source: $.newPrices[1]
Target: $.prices[0]
  
{
 "prices": [
   99.99,
   29.99,
   4.99]
}
  
Extract array value and write into payload
{
 "newPrices": [
   199.99,
   99.99,
   4.99]
}
  
{
 "orderId": 12
}
  
Source: $.newPrices[1]
Target: $.price
  
{
 "orderId": 12,
 "price": 99.99
}
  

For more examples see the merge mapping tests.

Zeebe Java Client

See also the Java Client Examples section.

Setting up the Zeebe Java Client

Prerequisites

  • Java 8

Usage in a Maven project

To use the Java client library, declare the following Maven dependency in your project:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

The version of the client should always match the broker's version.

Bootstrapping

In Java code, instantiate the client as follows:

ZeebeClient client = ZeebeClient.newClientBuilder()
  .brokerContactPoint("127.0.0.1:51015")
  .build();

See the class io.zeebe.client.ZeebeClientBuilder for a description of all available configuration properties.

Get Started with the Java client

In this tutorial, you will learn to use the Java client in a Java application to interact with Zeebe.

You will be guided through the following steps:

You can find the complete source code, including the BPMN diagrams, on GitHub.

Prerequisites

Now, start the Zeebe broker. This guide uses the default-topic, which is created when the broker is started.

If you want to create another topic, you can use zbctl from the bin folder. Create the topic with zbctl by executing the following command on the command line:

zbctl create topic my-topic --partitions 1

You should see the output:

{
  "Name": "my-topic",
  "Partitions": 1,
  "ReplicationFactor": 1
}

Note: You can find the zbctl binary in the bin/ folder of the Zeebe distribution. On Windows systems the executable is called zbctl.exe and on macOS zbctl.darwin.

Set up a project

First, we need a Maven project. Create a new project using your IDE, or run the Maven command:

mvn archetype:generate
    -DgroupId=io.zeebe
    -DartifactId=zeebe-get-started-java-client
    -DarchetypeArtifactId=maven-archetype-quickstart
    -DinteractiveMode=false

Add the Zeebe client library as dependency to the project's pom.xml:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

Create a main class and add the following lines to bootstrap the Zeebe client:

package io.zeebe;

import java.util.Properties;
import io.zeebe.client.ClientProperties;
import io.zeebe.client.ZeebeClient;

public class Application
{
    public static void main(String[] args)
    {
        final Properties clientProperties = new Properties();
        // change the contact point if needed
        clientProperties.put(ClientProperties.BROKER_CONTACTPOINT, "127.0.0.1:51015");

        final ZeebeClient client = ZeebeClient.newClientBuilder()
            .withProperties(clientProperties)
            .build();

        System.out.println("Connected.");

        // ...

        client.close();
        System.out.println("Closed.");
    }
}

Run the program. If you use an IDE, you can just execute the main class. Otherwise, you must build an executable JAR file with Maven and execute it. (See the GitHub repository on how to do this.)

You should see the output:

Connected.

Closed.

Model a workflow

Now, we need a first workflow which can then be deployed. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id (i.e., the BPMN process id) and mark the diagram as executable. Save the diagram in the project's source folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker. The broker stores the workflow under its BPMN process id and assigns a version (i.e., the revision).

Add the following deploy command to the main class:

package io.zeebe;

import io.zeebe.client.api.events.DeploymentEvent;

public class Application
{
    public static void main(String[] args)
    {
        // after the client is connected

        final DeploymentEvent deployment = client.topicClient().workflowClient()
            .newDeployCommand()
            .addResourceFromClasspath("order-process.bpmn")
            .send()
            .join();

        final int version = deployment.getDeployedWorkflows().get(0).getVersion();
        System.out.println("Workflow deployed. Version: " + version);

        // ...
    }
}

Run the program and verify that the workflow is deployed successfully. You should see the output:

Workflow deployed. Version: 1

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow, which can be set on creation.

Add the following create command to the main class:

package io.zeebe;

import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow is deployed

        final WorkflowInstanceEvent wfInstance = client.topicClient().workflowClient()
            .newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .send()
            .join();

        final long workflowInstanceKey = wfInstance.getWorkflowInstanceKey();

        System.out.println("Workflow instance created. Key: " + workflowInstanceKey);

        // ...
    }
}

Run the program and verify that the workflow instance is created. You should see the output:

Workflow instance created. Key: 4294974008

You did it! Do you want to see how the workflow instance is executed?

Start the Zeebe Monitor using java -jar zeebe-simple-monitor.jar.

Open a web browser and go to http://localhost:8080/.

Connect to the broker and switch to the workflow instances view. Here, you see the current state of the workflow instance which includes active jobs, completed activities, the payload and open incidents.

zeebe-monitor-step-1

Work on a job

Now we want to do some work within the workflow. First, add a few service tasks to the BPMN diagram and set the required attributes. Then extend your main class and create a job worker to process jobs which are created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to 'payment-service'.

Optionally, you can define parameters of the task by adding headers. Add the header method = VISA to the first task.

Save the BPMN diagram and switch back to the main class.

Add the following lines to create a job worker for the first job's type:

package io.zeebe;

import java.util.Map;

import io.zeebe.client.api.subscription.JobWorker;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow instance is created

        final JobWorker jobWorker = client.topicClient().jobClient()
            .newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                final Map<String, Object> headers = job.getCustomHeaders();
                final String method = (String) headers.get("method");

                System.out.println("Collect money using payment method: " + method);

                // ...

                jobClient.newCompleteCommand(job)
                    .send()
                    .join();
            })
            .open();

        // waiting for the jobs

        jobWorker.close();

        // ...
    }
}

Run the program and verify that the job is processed. You should see the output:

Collect money using payment method: VISA

When you have a look at the Zeebe Monitor, you can see that the workflow instance moved from the first service task to the next one:

zeebe-monitor-step-2

Work with data

Usually, a workflow is more than just tasks; there is also data flow. Tasks need data as input and produce data as output.

In Zeebe, this data is represented as a JSON document. When you create a workflow instance, you can pass data as payload. Within the workflow, you can use input and output mappings on tasks to control the data flow.

In our example, we want to create a workflow instance with the following data:

{
  "orderId": 31243,
  "orderItems": [435, 182, 376]
}

The first task should take orderId as input and return totalPrice as result.

Open the BPMN diagram and switch to the input-output-mappings of the first task. Add the input mapping $.orderId : $.orderId and the output mapping $.totalPrice : $.totalPrice.

Save the BPMN diagram and go back to the main class.

Modify the create command and pass the data as payload. Also, modify the job worker to read the job's payload and complete the job with payload.

package io.zeebe;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow is deployed
        
        final Map<String, Object> data = new HashMap<>();
        data.put("orderId", 31243);
        data.put("orderItems", Arrays.asList(435, 182, 376));

        final WorkflowInstanceEvent wfInstance = client.topicClient().workflowClient()
            .newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .payload(data)
            .send()
            .join();

        // ...

        final JobWorker jobWorker = client.topicClient().jobClient()
            .newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                final Map<String, Object> headers = job.getCustomHeaders();
                final String method = (String) headers.get("method");

                final Map<String, Object> payload = job.getPayloadAsMap();

                System.out.println("Process order: " + payload.get("orderId"));
                System.out.println("Collect money using payment method: " + method);

                // ...

                payload.put("totalPrice", 46.50);

                jobClient.newCompleteCommand(job)
                    .payload(payload)
                    .send()
                    .join();
            })
            .open();

        // ...
    }
}

Run the program and verify that the payload is mapped into the job. You should see the output:

Process order: 31243
Collect money using payment method: VISA

When we have a look at the Zeebe Monitor, we can see how the payload is modified after the activity:

zeebe-monitor-step-3

Open a topic subscription

The Zeebe Monitor consumes the events of the broker to build its monitoring view. You can see all received events in the log view. In order to build something similar for our application, we open a topic subscription and print all workflow instance events.

When the topic subscription is open, we receive all events that are written during the execution of the workflow instance. The given handler is invoked for each received event.

Add the following lines to the main class:

package io.zeebe;

import io.zeebe.client.api.subscription.TopicSubscription;

public class Application
{
    public static void main(String[] args)
    {
        // after the workflow instance is created

        final TopicSubscription topicSubscription = client.topicClient()
            .newSubscription()
            .name("app-monitoring")
            .workflowInstanceEventHandler(e -> System.out.println(e.toJson()))
            .startAtHeadOfTopic()
            .open();

        // waiting for the events

        topicSubscription.close();

        // ...
    }
}

Run the program. You should see the output:

{
  "metadata": {
    "intent": "CREATED",
    "valueType": "WORKFLOW_INSTANCE",
    "recordType": "EVENT",
    "topicName": "default-topic",
    "partitionId": 1,
    "key": 4294967400,
    "position": 4294967616,
    "timestamp": "2018-05-30T11:40:40.599Z"
  },
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowKey": 1,
  "workflowInstanceKey": 4294967400,
  "activityId": "",
  "payload": {
    "orderId": 31243,
    "orderItems": [
      435,
      182,
      376
    ]
  }
}

{
  "metadata": {
    "intent": "START_EVENT_OCCURRED",
    "valueType": "WORKFLOW_INSTANCE",
    "recordType": "EVENT",
    "topicName": "default-topic",
    "partitionId": 1,
    "key": 4294967856,
    "position": 4294967856,
    "timestamp": "2018-05-30T11:40:40.599Z"
  },
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowKey": 1,
  "workflowInstanceKey": 4294967400,
  "activityId": "order-placed",
  "payload": {
    "orderId": 31243,
    "orderItems": [
      435,
      182,
      376
    ]
  }
}

{
  "metadata": {
    "intent": "SEQUENCE_FLOW_TAKEN",
    "valueType": "WORKFLOW_INSTANCE",
    "recordType": "EVENT",
    "topicName": "default-topic",
    "partitionId": 1,
    "key": 4294968128,
    "position": 4294968128,
    "timestamp": "2018-05-30T11:40:40.621Z"
  },
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowKey": 1,
  "workflowInstanceKey": 4294967400,
  "activityId": "SequenceFlow_18tqka5",
  "payload": {
    "orderId": 31243,
    "orderItems": [
      435,
      182,
      376
    ]
  }
}
...

What's next?

Hurray! You finished this tutorial and learned the basic usage of the Java client.

Next steps:

Logging

The client uses SLF4J for logging. It logs useful things, such as exception stack traces when a job handler fails execution. Using the SLF4J API, any SLF4J implementation can be plugged in. The following example uses Log4J 2.

Maven dependencies

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>2.8.1</version>
</dependency>

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.8.1</version>
</dependency>

Configuration

Add a file called log4j2.xml to the classpath of your application. Add the following content:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" strict="true"
    xmlns="http://logging.apache.org/log4j/2.0/config"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://logging.apache.org/log4j/2.0/config https://raw.githubusercontent.com/apache/logging-log4j2/log4j-2.8.1/log4j-core/src/main/resources/Log4j-config.xsd">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level Java Client: %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

This will log every log message to the console.

Zeebe Go Client

Get Started with the Go client

In this tutorial, you will learn to use the Go client in a Go application to interact with Zeebe.

You will be guided through the following steps:

You can find the complete source code on GitHub.

Prerequisites

Now, start the Zeebe broker.

If you want to create another topic, you can use zbctl from the bin folder. Create the topic with zbctl by executing the following command on the command line:

zbctl create topic my-topic --partitions 1

You should see the output:

{
  "Name": "my-topic",
  "Partitions": 1,
  "ReplicationFactor": 1
}

Note: You can find the zbctl binary in the bin/ folder of the Zeebe distribution. On Windows systems the executable is called zbctl.exe and on macOS zbctl.darwin.

Set up a project

First, we need a new Go project. Create a new project using your IDE, or create a new Go module with:

mkdir -p $GOPATH/src/github.com/{{your username}}/zb-example
cd $GOPATH/src/github.com/{{your username}}/zb-example

Install the Zeebe Go client library:

go get github.com/zeebe-io/zbc-go/zbc

Create a main.go file inside the module and add the following lines to bootstrap the Zeebe client:

package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "github.com/zeebe-io/zbc-go/zbc"
)

const BrokerAddr = "0.0.0.0:51015"

var errClientStartFailed = errors.New("cannot start client")

func main() {
    zbClient, err := zbc.NewClient(BrokerAddr)
    if err != nil {
        panic(errClientStartFailed)
    }

    topology, err := zbClient.GetTopology()
    if err != nil {
        panic(err)
    }

    b, err := json.MarshalIndent(topology, "", "    ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

Run the program.

go run main.go

You should see similar output:

{
    "AddrByPartitionID": {
        "0": "0.0.0.0:51015",
        "1": "0.0.0.0:51015"
    },
    "PartitionIDByTopicName": {
        "default-topic": [
            1
        ],
        "internal-system": [
            0
        ]
    },
    "Brokers": [
        {
            "Host": "0.0.0.0",
            "Port": 51015,
            "Partitions": [
                {
                    "State": "LEADER",
                    "TopicName": "internal-system",
                    "PartitionID": 0,
                    "ReplicationFactor": 1
                },
                {
                    "State": "LEADER",
                    "TopicName": "default-topic",
                    "PartitionID": 1,
                    "ReplicationFactor": 1
                }
            ]
        }
    ],
    "UpdatedAt": "2018-06-07T13:23:44.442722715+02:00"
}

Model a workflow

Now, we need a first workflow which can then be deployed. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id to order-process (i.e., the BPMN process id) and mark the diagram as executable. Save the diagram in the project's source folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker. The broker stores the workflow under its BPMN process id and assigns a version (i.e., the revision).

package main

import (
    "errors"
    "fmt"
    "github.com/zeebe-io/zbc-go/zbc"
    "github.com/zeebe-io/zbc-go/zbc/common"
)

const topicName = "default-topic"
const brokerAddr = "0.0.0.0:51015"

var errClientStartFailed = errors.New("cannot start client")
var errWorkflowDeploymentFailed = errors.New("creating new workflow deployment failed")

func main() {
    zbClient, err := zbc.NewClient(brokerAddr)
    if err != nil {
        panic(errClientStartFailed)
    }

    response, err := zbClient.CreateWorkflowFromFile(topicName, zbcommon.BpmnXml, "order-process.bpmn")
    if err != nil {
        panic(errWorkflowDeploymentFailed)
    }

    fmt.Println(response.String())
}

Run the program and verify that the workflow is deployed successfully. You should see output similar to this:

{
  "TopicName": "default-topic",
  "Resources": [
    {
      "Resource": "[...]",
      "ResourceType": "BPMN_XML",
      "ResourceName": "order-process.bpmn"
    }
  ],
  "DeployedWorkflows": [
    {
      "BpmnProcessId": "order-process",
      "Version": 1,
      "WorkflowKey": 1,
      "ResourceName": "order-process.bpmn"
    }
  ]
}

We can also deploy the workflow using the command line utility:

zbctl create workflow order-process.bpmn

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow, which can be set on creation. If the version is set to -1, the latest version of the workflow is used.

package main

import (
    "errors"
    "github.com/zeebe-io/zbc-go/zbc"
    "fmt"
)

const topicName = "default-topic"
const brokerAddr = "0.0.0.0:51015"

var errClientStartFailed = errors.New("cannot start client")

func main() {
    zbClient, err := zbc.NewClient(brokerAddr)
    if err != nil {
        panic(errClientStartFailed)
    }

    // After the workflow is deployed.
    payload := make(map[string]interface{})
    payload["orderId"] = "31243"

    instance := zbc.NewWorkflowInstance("order-process", -1, payload)
    msg, err := zbClient.CreateWorkflowInstance(topicName, instance)

    if err != nil {
        panic(err)
    }

    fmt.Println(msg.String())
}

Run the program and verify that the workflow instance is created. You should see the output:

{
  "BPMNProcessID": "order-process",
  "Payload": "gadvcmRlcklkpTMxMjQz",
  "Version": 1,
  "WorkflowInstanceKey": 4294967400
}

You did it! Do you want to see how the workflow instance is executed?

Start the Zeebe Monitor using java -jar zeebe-simple-monitor.jar.

Open a web browser and go to http://localhost:8080/.

Connect to the broker and switch to the workflow instances view. Here, you see the current state of the workflow instance which includes active jobs, completed activities, the payload and open incidents.

zeebe-monitor-step-1

Work on a task

Now we want to do some work within the workflow. First, add a few service tasks to the BPMN diagram and set the required attributes. Then extend your main.go file and open a job subscription to process jobs which are created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to payment-service.

Add the following lines to redeploy the modified process and open a job subscription for the first tasks type:

package main

import (
    "errors"
    "fmt"
    "github.com/zeebe-io/zbc-go/zbc"
    "github.com/zeebe-io/zbc-go/zbc/common"
    "github.com/zeebe-io/zbc-go/zbc/models/zbsubscriptions"
    "github.com/zeebe-io/zbc-go/zbc/services/zbsubscribe"
    "os"
    "os/signal"
)

const topicName = "default-topic"
const brokerAddr = "0.0.0.0:51015"

var errClientStartFailed = errors.New("cannot start client")

func main() {
    zbClient, err := zbc.NewClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    // deploy workflow
    response, err := zbClient.CreateWorkflowFromFile(topicName, zbcommon.BpmnXml, "order-process.bpmn")
    if err != nil {
        panic(err)
    }

    fmt.Println(response.String())

    // create a new workflow instance
    payload := make(map[string]interface{})
    payload["orderId"] = "31243"

    instance := zbc.NewWorkflowInstance("order-process", -1, payload)
    msg, err := zbClient.CreateWorkflowInstance(topicName, instance)

    if err != nil {
        panic(err)
    }

    fmt.Println(msg.String())

    subscription, err := zbClient.JobSubscription(topicName, "sample-app", "payment-service", 1000, 32, func(client zbsubscribe.ZeebeAPI, event *zbsubscriptions.SubscriptionEvent) {
        fmt.Println(event.String())

        // complete job after processing
        response, _ := client.CompleteJob(event)
        fmt.Println(response)
    })

    if err != nil {
        panic("Unable to open subscription")
    }

    c := make(chan os.Signal, 1)
    signal.Notify(c, os.Interrupt)
    go func() {
        <-c
        err := subscription.Close()
        if err != nil {
            panic("Failed to close subscription")
        }

        fmt.Println("Closed subscription")
        os.Exit(0)
    }()

    subscription.Start()
}

In this example, we open a job subscription for the previously created workflow instance, consume the job, and complete it. Before completing the job, the handler prints its data to standard output. When you have a look at the Zeebe Monitor, you can see that the workflow instance moved from the first service task to the next one:

zeebe-monitor-step-2

When you run the above example, you should see similar output:

{
  "Event": {
    "deadline": 1528370870456,
    "worker": "sample-app",
    "headers": {
      "activityId": "collect-money",
      "activityInstanceKey": 4294969624,
      "bpmnProcessId": "order-process",
      "workflowDefinitionVersion": 2,
      "workflowInstanceKey": 4294968744,
      "workflowKey": 2
    },
    "customHeaders": {
      "method": "VISA"
    },
    "retries": 3,
    "type": "payment-service",
    "payload": "gadvcmRlcklkpTMxMjQz"
  },
  "Metadata": {
    "PartitionId": 1,
    "Position": 4294970840,
    "Key": 4294970088,
    "SubscriberKey": 0,
    "RecordType": 0,
    "SubscriptionType": 0,
    "ValueType": 0,
    "Intent": 3,
    "Timestamp": 1528370869456,
    "Value": "[...]"
  }
}
{
  "deadline": 1528370870456,
  "worker": "sample-app",
  "headers": {
    "activityId": "collect-money",
    "activityInstanceKey": 4294969624,
    "bpmnProcessId": "order-process",
    "workflowDefinitionVersion": 2,
    "workflowInstanceKey": 4294968744,
    "workflowKey": 2
  },
  "customHeaders": {
    "method": "VISA"
  },
  "retries": 3,
  "type": "payment-service",
  "payload": "gadvcmRlcklkpTMxMjQz"
}

To stop the service, hit CTRL+C. This closes the subscription on the broker and stops the service.

Open a topic subscription

The Zeebe Monitor consumes the events of the broker to build its monitoring view. You can see all received events in the log view. In order to build something similar for our application, we open a topic subscription and print all workflow instance events.

When the topic subscription is open, we receive all events that are written during the execution of the workflow instance. The given handler is invoked for each received event.

Add the following lines to the main class to print all events:

package main

import (
    "errors"
    "fmt"
    "github.com/zeebe-io/zbc-go/zbc"
    "github.com/zeebe-io/zbc-go/zbc/models/zbsubscriptions"
    "github.com/zeebe-io/zbc-go/zbc/services/zbsubscribe"
    "os"
    "os/signal"
)

const topicName = "default-topic"
const brokerAddr = "0.0.0.0:51015"

var errClientStartFailed = errors.New("cannot start client")

func handler(client zbsubscribe.ZeebeAPI, event *zbsubscriptions.SubscriptionEvent) error {
    fmt.Printf("Event: %v\n", event)
    return nil
}

func main() {
    zbClient, err := zbc.NewClient(brokerAddr)
    if err != nil {
        panic(errClientStartFailed)
    }

    subscription, err := zbClient.TopicSubscription(topicName, "subscription-name", 128, 0, true, handler)

    if err != nil {
        panic("Failed to open subscription")
    }

    osCh := make(chan os.Signal, 1)
    signal.Notify(osCh, os.Interrupt)
    go func() {
        <-osCh
        err := subscription.Close()
        if err != nil {
            panic("Failed to close subscription")
        }
        fmt.Println("Subscription closed.")
        os.Exit(0)
    }()

    subscription.Start()
}

Run the program. You should see similar output, including more events that happened during the process.

Event: {
  "Event": {
    "deadline": 9223372036854775808,
    "worker": "",
    "headers": {
      "activityId": "fetch-items",
      "activityInstanceKey": 4294973080,
      "bpmnProcessId": "order-process",
      "workflowDefinitionVersion": 2,
      "workflowInstanceKey": 4294968744,
      "workflowKey": 2
    },
    "customHeaders": {},
    "retries": 3,
    "type": "inventory-service",
    "payload": "gadvcmRlcklkpTMxMjQz"
  },
  "Metadata": {
    "PartitionId": 1,
    "Position": 4294973544,
    "Key": 4294973544,
    "SubscriberKey": 4294974264,
    "RecordType": 1,
    "SubscriptionType": 1,
    "ValueType": 0,
    "Intent": 0,
    "Timestamp": 1528370869461,
    "Value": "[...]"
  }
}
Event: {
  "Event": {
    "deadline": 9223372036854775808,
    "worker": "",
    "headers": {
      "activityId": "fetch-items",
      "activityInstanceKey": 4294973080,
      "bpmnProcessId": "order-process",
      "workflowDefinitionVersion": 2,
      "workflowInstanceKey": 4294968744,
      "workflowKey": 2
    },
    "customHeaders": {},
    "retries": 3,
    "type": "inventory-service",
    "payload": "gadvcmRlcklkpTMxMjQz"
  },
  "Metadata": {
    "PartitionId": 1,
    "Position": 4294973904,
    "Key": 4294973544,
    "SubscriberKey": 4294974264,
    "RecordType": 0,
    "SubscriptionType": 1,
    "ValueType": 0,
    "Intent": 1,
    "Timestamp": 1528370869462,
    "Value": "[...]"
  }
}

Each of these events represents one step in the workflow instance life cycle.

Looking at the Zeebe Monitor, we can see how the payload is modified after the activity:

zeebe-monitor-step-3

Again, to stop the process, hit CTRL+C. This gracefully closes the subscription on the broker and stops the process.

What's next?

Yay! You finished this tutorial and learned the basic usage of the Go client.

Next steps:

Operations

The Monitor

Zeebe provides a monitor to inspect workflow instances. The monitor is a standalone web application which connects to a Zeebe broker and consumes all records.

The monitor can be downloaded from github.com/zeebe-io/zeebe-simple-monitor.

The zeebe.cfg.toml file

The Zeebe broker can be configured using the zeebe.cfg.toml file which can be found in the distribution inside the conf folder.

The file is self-documenting: it mentions all available configuration options along with default values, examples and explanations.

Setting up a Zeebe Cluster

Coming Soon

The Metrics

When operating a distributed system like Zeebe, it is important to put proper monitoring in place. To facilitate this, Zeebe exposes an extensive set of metrics.

Zeebe writes metrics to a file. The reporting interval can be configured.

Types of metrics

  • Counters: a time series that records a growing count of some unit. Examples: number of bytes transmitted over the network, number of workflow instances started, ...
  • Gauges: a time series that records the current size of some unit. Examples: number of currently open client connections, current number of partitions, ...
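Both types share the same on-the-wire representation in the Prometheus text format; the difference is purely semantic. A counter only ever increases, while a gauge may rise and fall between scrapes. The metric names below are hypothetical and chosen purely for illustration:

# counter (hypothetical metric name): grows monotonically
zb_transport_transmitted_bytes{node="localhost:51015"} 1048576
# gauge (hypothetical metric name): current value, may rise or fall
zb_open_client_connections{node="localhost:51015"} 12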

Metrics Format

Zeebe exposes metrics directly in Prometheus text format. The details of the format can be read in the Prometheus documentation.

Example:

zb_storage_fs_total_bytes{cluster="zeebe",node="localhost:51015",partition="0",topic="internal-system"} 4192 1522124395234

The record above states that the total size in bytes of partition 0 of topic internal-system on node localhost:51015 in cluster zeebe is 4192. The last number is a Unix epoch timestamp in milliseconds.
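To illustrate the format, the sample record can be split into its parts with a few lines of Java. This is a standalone sketch using only the JDK; the regular expression is deliberately simplified and assumes label values contain no commas or escaped quotes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MetricLineParser {
  // metric_name{label="value",...} value [timestamp]
  private static final Pattern LINE =
      Pattern.compile(
          "^(?<name>[a-zA-Z_:][a-zA-Z0-9_:]*)"
              + "(?:\\{(?<labels>[^}]*)\\})?"
              + "\\s+(?<value>\\S+)"
              + "(?:\\s+(?<ts>\\d+))?$");

  public static void main(String[] args) {
    final String sample =
        "zb_storage_fs_total_bytes{cluster=\"zeebe\",node=\"localhost:51015\","
            + "partition=\"0\",topic=\"internal-system\"} 4192 1522124395234";

    final Matcher m = LINE.matcher(sample);
    if (!m.matches()) {
      throw new IllegalArgumentException("not a valid metric line");
    }

    // split the label block into key/value pairs
    // (simplified: breaks if a label value contains a comma)
    final Map<String, String> labels = new HashMap<>();
    for (String pair : m.group("labels").split(",")) {
      final String[] kv = pair.split("=", 2);
      labels.put(kv[0], kv[1].replace("\"", ""));
    }

    System.out.println(
        m.group("name")
            + " partition=" + labels.get("partition")
            + " value=" + m.group("value")
            + " ts=" + m.group("ts"));
  }
}
```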

Configuring Metrics

Metrics can be configured in the configuration file.

Connecting Prometheus

As explained, Zeebe writes metrics to a file. The default location of the file is $ZB_HOME/metrics/zeebe.prom. There are two ways to connect Zeebe to Prometheus:

Node exporter

If you are already using the Prometheus node exporter, you can relocate the metrics file to the node exporter's scrape directory. The configuration would look as follows:

[metrics]
reportingInterval = 15
metricsFile = "zeebe.prom"
directory = "/var/lib/metrics"

HTTP

If you want to scrape Zeebe nodes via HTTP, you can start an HTTP server inside the metrics directory to expose it. If Python is available, you can use

$ cd $ZB_HOME/metrics
$ python -m SimpleHTTPServer 8000

(or, with Python 3, python3 -m http.server 8000).

Then, add the following entry to your prometheus.yml:

- job_name: zb
  scrape_interval: 15s
  metrics_path: /zeebe.prom
  scheme: http
  static_configs:
  - targets:
    - localhost:8000

Grafana Dashboards

The Zeebe community has prepared two ready-to-use Grafana dashboards:

Overview Dashboard

The overview dashboard summarizes high-level metrics at the cluster level. It can be used to monitor a Zeebe production cluster. The dashboard can be found here.

Low-level Diagnostics Dashboard

The diagnostics dashboard provides lower-level metrics at the node level. It can be used to gain a better understanding of the workload currently performed by individual nodes. The dashboard can be found here.

Available Metrics

All metrics exposed by Zeebe have the prefix zb_.

Each metric carries the following labels:

  • cluster: the name of the Zeebe cluster (relevant in case you operate multiple clusters).
  • node: the identifier of the node that wrote the metric.

Many metrics also add the following labels (if the metric is scoped to a topic):

  • topic: cluster-unique name of the topic
  • partition: cluster-unique id of the partition

The following components expose metrics:

  • zb_broker_info: summarized information about available nodes
  • zb_buffer_*: diagnostics, buffer metrics
  • zb_scheduler_*: diagnostics, utilization metrics of Zeebe's internal task scheduler
  • zb_storage_*: storage metrics
  • zb_streamprocessor_*: stream processing metrics such as events processed by topic, partition
  • zb_transport_*: network transport metrics such as number of open connections, bytes received, transmitted, etc ...
  • zb_workflow_*: workflow metrics such as number of workflow instances created, completed, ...

Example Code using the Zeebe Java Client

These examples are available in the zeebe-io GitHub repository at commit a9e3f24410b76084bac3271f22948efdb0f8c14c. Link to browse code on github.

Instructions to access code locally:

git clone https://github.com/zeebe-io/zeebe.git
git checkout a9e3f24410b76084bac3271f22948efdb0f8c14c
cd zeebe/samples

Import the Maven project in the samples directory into your IDE to start hacking.

Workflow

Job

Data

Topic

Cluster

Deploy a Workflow

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)

WorkflowDeployer.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.DeploymentEvent;

public class WorkflowDeployer {

  public static void main(String[] args) {
    final String broker = "localhost:51015";
    final String topic = "default-topic";

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {
      final WorkflowClient workflowClient = client.topicClient(topic).workflowClient();

      final DeploymentEvent deploymentEvent =
          workflowClient
              .newDeployCommand()
              .addResourceFromClasspath("demoProcess.bpmn")
              .send()
              .join();

      System.out.println(deploymentEvent.getState());
    }
  }
}

demoProcess.bpmn

Source on github

Download the XML and save it in the Java classpath before running the example. Open the file with Zeebe Modeler for a graphical representation.

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:camunda="http://camunda.org/schema/1.0/bpmn" xmlns:zeebe="http://camunda.org/schema/zeebe/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" id="Definitions_1" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="1.5.0-nightly">
  <bpmn:process id="demoProcess" isExecutable="true">
    <bpmn:startEvent id="start" name="start">
      <bpmn:outgoing>SequenceFlow_1sz6737</bpmn:outgoing>
    </bpmn:startEvent>
    <bpmn:sequenceFlow id="SequenceFlow_1sz6737" sourceRef="start" targetRef="taskA" />
    <bpmn:sequenceFlow id="SequenceFlow_06ytcxw" sourceRef="taskA" targetRef="taskB" />
    <bpmn:sequenceFlow id="SequenceFlow_1oh45y7" sourceRef="taskB" targetRef="taskC" />
    <bpmn:endEvent id="end" name="end">
      <bpmn:incoming>SequenceFlow_148rk2p</bpmn:incoming>
    </bpmn:endEvent>
    <bpmn:sequenceFlow id="SequenceFlow_148rk2p" sourceRef="taskC" targetRef="end" />
    <bpmn:serviceTask id="taskA" name="task A">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1sz6737</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_06ytcxw</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskB" name="task B">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="bar" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_06ytcxw</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_1oh45y7</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskC" name="task C">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1oh45y7</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_148rk2p</bpmn:outgoing>
    </bpmn:serviceTask>
  </bpmn:process>
  <bpmndi:BPMNDiagram id="BPMNDiagram_1">
    <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="demoProcess">
      <bpmndi:BPMNShape id="_BPMNShape_StartEvent_2" bpmnElement="start">
        <dc:Bounds x="173" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="180" y="138" width="22" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_1sz6737_di" bpmnElement="SequenceFlow_1sz6737">
        <di:waypoint xsi:type="dc:Point" x="209" y="120" />
        <di:waypoint xsi:type="dc:Point" x="310" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="260" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_06ytcxw_di" bpmnElement="SequenceFlow_06ytcxw">
        <di:waypoint xsi:type="dc:Point" x="410" y="120" />
        <di:waypoint xsi:type="dc:Point" x="502" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="456" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_1oh45y7_di" bpmnElement="SequenceFlow_1oh45y7">
        <di:waypoint xsi:type="dc:Point" x="602" y="120" />
        <di:waypoint xsi:type="dc:Point" x="694" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="648" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="EndEvent_0gbv3sc_di" bpmnElement="end">
        <dc:Bounds x="867" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="876" y="138" width="18" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_148rk2p_di" bpmnElement="SequenceFlow_148rk2p">
        <di:waypoint xsi:type="dc:Point" x="794" y="120" />
        <di:waypoint xsi:type="dc:Point" x="867" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="831" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="ServiceTask_09m0goq_di" bpmnElement="taskA">
        <dc:Bounds x="310" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_0sryj72_di" bpmnElement="taskB">
        <dc:Bounds x="502" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_1xu4l3g_di" bpmnElement="taskC">
        <dc:Bounds x="694" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>
</bpmn:definitions>

Create a Workflow Instance

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)
  2. Run the Deploy a Workflow example

WorkflowInstanceCreator.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class WorkflowInstanceCreator {

  public static void main(String[] args) {
    final String broker = "127.0.0.1:51015";
    final String topic = "default-topic";

    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.topicClient(topic).workflowClient();

      System.out.println("Creating workflow instance");

      final WorkflowInstanceEvent workflowInstanceEvent =
          workflowClient
              .newCreateInstanceCommand()
              .bpmnProcessId(bpmnProcessId)
              .latestVersion()
              .send()
              .join();

      System.out.println(workflowInstanceEvent.getState());
    }
  }
}

Create Workflow Instances Non-Blocking

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)
  2. Run the Deploy a Workflow example

NonBlockingWorkflowInstanceCreator.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.ZeebeFuture;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.WorkflowInstanceEvent;

public class NonBlockingWorkflowInstanceCreator {
  public static void main(String[] args) {
    final String broker = "127.0.0.1:51015";
    final String topic = "default-topic";
    final int numberOfInstances = 100_000;
    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.topicClient(topic).workflowClient();

      System.out.println("Creating " + numberOfInstances + " workflow instances");

      final long startTime = System.currentTimeMillis();

      long instancesCreating = 0;

      while (instancesCreating < numberOfInstances) {
        // this is non-blocking/async => returns a future
        final ZeebeFuture<WorkflowInstanceEvent> future =
            workflowClient
                .newCreateInstanceCommand()
                .bpmnProcessId(bpmnProcessId)
                .latestVersion()
                .send();

        // could put the future somewhere and eventually wait for its completion

        instancesCreating++;
      }

      // creating one more instance; joining on this future ensures
      // that all the other create commands were handled
      workflowClient
          .newCreateInstanceCommand()
          .bpmnProcessId(bpmnProcessId)
          .latestVersion()
          .send()
          .join();

      System.out.println("Took: " + (System.currentTimeMillis() - startTime));
    }
  }
}

Request all Workflows

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)
  2. Make sure a couple of workflows are deployed, e.g. run the Deploy a Workflow example multiple times to create multiple workflow versions.

DeploymentViewer.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.commands.WorkflowResource;
import io.zeebe.client.api.commands.Workflows;

public class DeploymentViewer {

  public static void main(final String[] args) {

    final String broker = "localhost:51015";

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {
      final WorkflowClient workflowClient = client.topicClient().workflowClient();

      final Workflows workflows = workflowClient.newWorkflowRequest().send().join();

      System.out.println("Printing all deployed workflows:");

      workflows
          .getWorkflows()
          .forEach(
              wf -> {
                System.out.println("Workflow resource for " + wf + ":");

                final WorkflowResource resource =
                    workflowClient
                        .newResourceRequest()
                        .workflowKey(wf.getWorkflowKey())
                        .send()
                        .join();

                System.out.println(resource);
              });

      System.out.println("Done");
    }
  }
}

Open a Job Worker

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)
  2. Run the Deploy a Workflow example
  3. Run the Create a Workflow Instance example a couple of times

JobWorkerCreator.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.job;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.JobClient;
import io.zeebe.client.api.events.JobEvent;
import io.zeebe.client.api.subscription.JobHandler;
import io.zeebe.client.api.subscription.JobWorker;
import java.time.Duration;
import java.util.Scanner;

public class JobWorkerCreator {
  public static void main(String[] args) {
    final String broker = "127.0.0.1:51015";
    final String topic = "default-topic";

    final String jobType = "foo";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final JobClient jobClient = client.topicClient(topic).jobClient();

      System.out.println("Opening job worker.");

      final JobWorker workerRegistration =
          jobClient
              .newWorker()
              .jobType(jobType)
              .handler(new ExampleJobHandler())
              .timeout(Duration.ofSeconds(10))
              .open();

      System.out.println("Job worker opened and receiving jobs.");

      // call workerRegistration.close() to close it

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class ExampleJobHandler implements JobHandler {
    @Override
    public void handle(JobClient client, JobEvent job) {
      // here: business logic that is executed with every job
      System.out.println(
          String.format(
              "[type: %s, key: %s, lockExpirationTime: %s]\n[headers: %s]\n[payload: %s]\n===",
              job.getType(),
              job.getMetadata().getKey(),
              job.getDeadline().toString(),
              job.getHeaders(),
              job.getPayload()));

      client.newCompleteCommand(job).payload((String) null).send().join();
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Handle the Payload as POJO

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)
  2. Run the Deploy a Workflow example

HandlePayloadAsPojo.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.data;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.JobClient;
import io.zeebe.client.api.clients.WorkflowClient;
import io.zeebe.client.api.events.JobEvent;
import io.zeebe.client.api.subscription.JobHandler;
import java.util.Scanner;

public class HandlePayloadAsPojo {
  public static void main(String[] args) {
    final String broker = "127.0.0.1:51015";
    final String topic = "default-topic";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final WorkflowClient workflowClient = client.topicClient(topic).workflowClient();
      final JobClient jobClient = client.topicClient(topic).jobClient();

      final Order order = new Order();
      order.setOrderId(31243);

      workflowClient
          .newCreateInstanceCommand()
          .bpmnProcessId("demoProcess")
          .latestVersion()
          .payload(order)
          .send()
          .join();

      jobClient.newWorker().jobType("foo").handler(new DemoJobHandler()).open();

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class DemoJobHandler implements JobHandler {
    @Override
    public void handle(JobClient client, JobEvent job) {
      // read the payload of the job
      final Order order = job.getPayloadAsType(Order.class);
      System.out.println("new job with orderId: " + order.getOrderId());

      // update the payload and complete the job
      order.setTotalPrice(46.50);

      client.newCompleteCommand(job).payload(order).send();
    }
  }

  public static class Order {
    private long orderId;
    private double totalPrice;

    public long getOrderId() {
      return orderId;
    }

    public void setOrderId(long orderId) {
      this.orderId = orderId;
    }

    public double getTotalPrice() {
      return totalPrice;
    }

    public void setTotalPrice(double totalPrice) {
      this.totalPrice = totalPrice;
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Create a Topic

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)

TopicCreator.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.topic;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.events.TopicEvent;

public class TopicCreator {

  public static void main(final String[] args) {
    final String broker = "localhost:51015";
    final String topic = "test";
    final int partitions = 1;
    final int replicationFactor = 1;

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {
      System.out.println(
          "Creating topic "
              + topic
              + " with "
              + partitions
              + " partition(s) "
              + "with contact point "
              + broker);

      final TopicEvent topicEvent =
          client
              .newCreateTopicCommand()
              .name(topic)
              .partitions(partitions)
              .replicationFactor(replicationFactor)
              .send()
              .join();

      System.out.println(topicEvent.getState());
    }
  }
}

Subscribe to a Topic

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)

TopicSubscriber.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.topic;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.clients.TopicClient;
import io.zeebe.client.api.record.RecordMetadata;
import io.zeebe.client.api.subscription.TopicSubscription;
import java.util.Scanner;

public class TopicSubscriber {

  public static void main(String[] args) {
    final String broker = "127.0.0.1:51015";
    final String topic = "default-topic";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final TopicClient topicClient = client.topicClient(topic);

      System.out.println("Opening topic subscription.");

      final TopicSubscription subscription =
          topicClient
              .newSubscription()
              .name("record-logger")
              .recordHandler(
                  record -> {
                    final RecordMetadata metadata = record.getMetadata();
                    System.out.println(
                        String.format(
                            "[topic: %d, position: %d, key: %d, type: %s, intent: %s]\n%s\n===",
                            metadata.getPartitionId(),
                            metadata.getPosition(),
                            metadata.getKey(),
                            metadata.getValueType(),
                            metadata.getIntent(),
                            record.toJson()));
                  })
              .startAtHeadOfTopic()
              .forcedStart()
              .open();

      System.out.println("Subscription opened and receiving records.");

      // call subscription.close() to close it

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Request Cluster Topology

Shows which broker is leader and follower for which partition. Particularly useful when you run a cluster with multiple Zeebe brokers.

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:51015 (default)

TopologyViewer.java

Source on github

/*
 * Copyright © 2017 camunda services GmbH (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.example.cluster;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.commands.Topology;

public class TopologyViewer {

  public static void main(final String[] args) {
    final String broker = "127.0.0.1:51015";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      System.out.println("Requesting topology with initial contact point " + broker);

      final Topology topology = client.newTopologyRequest().send().join();

      System.out.println("Topology:");
      topology
          .getBrokers()
          .forEach(
              b -> {
                System.out.println("    " + b.getAddress());
                b.getPartitions()
                    .forEach(
                        p ->
                            System.out.println(
                                "      "
                                    + p.getTopicName()
                                    + "."
                                    + p.getPartitionId()
                                    + " - "
                                    + p.getRole()));
              });

      System.out.println("Done.");
    }
  }
}