1. Introduction

1.1. License

Flowable is distributed under the Apache V2 license.

1.3. Sources

The distribution contains most of the sources as JAR files. The source code for Flowable can be found on https://github.com/flowable/flowable-engine

1.4. Required software

1.4.1. JDK 7+

Flowable runs on JDK version 7 or higher. Go to the Oracle Java SE downloads page and click the "Download JDK" button. There are installation instructions on that page as well. To verify that your installation was successful, run java -version on the command line. It should print the installed version of your JDK.

1.4.2. IDE

Flowable development can be done with the IDE of your choice. If you would like to use the Flowable Designer, you need Eclipse Mars or Neon. Download the Eclipse distribution of your choice from the Eclipse download page. Unzip the downloaded file and you should be able to start it with the eclipse executable in the eclipse directory. Further on in this guide, there is a section on installing our Eclipse Designer plugin.

1.5. Reporting problems

We expect developers to have read How to ask questions the smart way before reporting or asking anything.

After you’ve done that you can post questions, comments or suggestions for enhancements on the User forum and create issues for bugs in our Github issue tracker.

1.6. Experimental features

Sections marked with [EXPERIMENTAL] should not be considered stable.

All classes that have .impl. in the package name are internal implementation classes and cannot be considered stable or guaranteed in any way. However, if the User Guide mentions any classes as configuration values, they are supported and can be considered stable.

1.7. Internal implementation classes

In the JAR files, all classes in packages that have .impl. (e.g. org.flowable.engine.impl.db) in their name are implementation classes and should be considered internal use only. No stability guarantees are given on classes or interfaces that are in implementation classes.

2. Getting Started

2.1. What is Flowable?

Flowable is a light-weight business process engine written in Java. The Flowable process engine allows you to deploy BPMN 2.0 process definitions (an industry XML standard for defining processes), creating process instances of those process definitions, running queries, accessing active or historical process instances and related data, plus much more. This section will gradually introduce various concepts and APIs to do that through examples that you can follow on your own development machine.

Flowable is extremely flexible when it comes to adding it to your application, services or architecture. You can embed the engine in your application or service by including the Flowable library, which is available as a JAR. Since it’s a JAR, you can add it easily to any Java environment: Java SE; servlet containers, such as Tomcat or Jetty; Spring; Java EE servers, such as JBoss or WebSphere; and so on. Alternatively, you can use the Flowable REST API to communicate over HTTP. There are also several Flowable applications (Flowable Modeler, Flowable Admin, Flowable IDM and Flowable Task) that offer out-of-the-box example UIs for working with processes and tasks.

Common to all the ways of setting up Flowable is the core engine, which can be seen as a collection of services that expose APIs to manage and execute business processes. The various tutorials below start by introducing how to set up and use this core engine. The sections afterwards build upon the knowledge acquired in the previous sections.

  • The first section shows how to run Flowable in the simplest way possible: a regular Java main using only Java SE. Many core concepts and APIs will be explained here.

  • The section on the Flowable REST API shows how to run and use the same API through REST.

  • The section on the Flowable apps will guide you through the basics of using the out-of-the-box example Flowable user interfaces.

2.2. Flowable and Activiti

Flowable is a fork of Activiti (registered trademark of Alfresco). In all the following sections you’ll notice that the package names, configuration files, and so on, use flowable.

2.3. Building a command-line application

2.3.1. Creating a process engine

In this first tutorial we’re going to build a simple example that shows how to create a Flowable process engine, introduces some core concepts and shows how to work with the API. The screenshots show Eclipse, but any IDE works. We’ll use Maven to fetch the Flowable dependencies and manage the build, but likewise, any alternative also works (Gradle, Ivy, and so on).

The example we’ll build is a simple holiday request process:

  • the employee asks for a number of holidays

  • the manager either approves or rejects the request

  • we’ll mimic registering the request in some external system and sending out an email to the employee with the result

First, we create a new Maven project through File → New → Other → Maven Project

[Screenshot: new Maven project wizard]

In the next screen, we check create a simple project (skip archetype selection)

[Screenshot: new Maven project wizard, "create a simple project (skip archetype selection)" checked]

And fill in some 'Group Id' and 'Artifact id':

[Screenshot: new Maven project wizard, Group Id and Artifact Id filled in]

We now have an empty Maven project, to which we’ll add two dependencies:

  • The Flowable process engine, which will allow us to create a ProcessEngine object and access the Flowable APIs.

  • An in-memory database, H2 in this case, as the Flowable engine needs a database to store execution and historical data while running process instances. Note that the H2 dependency includes both the database and the driver. If you use another database (for example, PostgreSQL, MySQL, and so on), you’ll need to add the specific database driver dependency.

Add the following to your pom.xml file:

<dependencies>
  <dependency>
    <groupId>org.flowable</groupId>
    <artifactId>flowable-engine</artifactId>
    <version>6.1.2</version>
  </dependency>
  <dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.3.176</version>
  </dependency>
</dependencies>

If the dependent JARs are not automatically retrieved for some reason, you can right-click the project and select Maven → Update Project to force a manual refresh (but this should not normally be needed). In the project, under Maven Dependencies, you should now see the flowable-engine JAR and various other (transitive) dependencies.

Create a new Java class and add a regular Java main method:

package org.flowable;

public class HolidayRequest {

  public static void main(String[] args) {

  }

}

The first thing we need to do is to instantiate a ProcessEngine instance. This is a thread-safe object that you typically have to instantiate only once in an application. A ProcessEngine is created from a ProcessEngineConfiguration instance, which allows you to configure and tweak the settings for the process engine. Often, the ProcessEngineConfiguration is created using a configuration XML file, but (as we do here) you can also create it programmatically. The minimum configuration a ProcessEngineConfiguration needs is a JDBC connection to a database:

package org.flowable;

import org.flowable.engine.ProcessEngine;
import org.flowable.engine.ProcessEngineConfiguration;
import org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class HolidayRequest {

  public static void main(String[] args) {
    ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
      .setJdbcUrl("jdbc:h2:mem:flowable;DB_CLOSE_DELAY=-1")
      .setJdbcUsername("sa")
      .setJdbcPassword("")
      .setJdbcDriver("org.h2.Driver")
      .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE);

    ProcessEngine processEngine = cfg.buildProcessEngine();
  }

}

In the code above, a standalone configuration object is created first. The 'standalone' here refers to the fact that the engine is created and used completely by itself (and not, for example, in a Spring environment, where you’d use the SpringProcessEngineConfiguration class instead). The setJdbc* calls pass the JDBC connection parameters for an in-memory H2 database instance. Important: note that such a database does not survive a JVM restart. If you want your data to be persistent, you’ll need to switch to a persistent database and change the connection parameters accordingly. The setDatabaseSchemaUpdate call sets a flag to make sure that the database schema is created if it doesn’t already exist in the database pointed to by the JDBC parameters. Alternatively, Flowable ships with a set of SQL files that can be used to create the database schema with all the tables manually.

The ProcessEngine object is then created from this configuration by calling buildProcessEngine().

You can now run this. The easiest way in Eclipse is to right-click on the class file and select Run As → Java Application:

[Screenshot: Run As → Java Application in Eclipse]

The application runs without problems, but no useful information is shown in the console, except a message stating that the logging has not been configured properly:

[Screenshot: console output showing the logging configuration warning]

Flowable uses SLF4J as its logging framework internally. For this example, we’ll use Log4j as the SLF4J implementation, so add the following dependencies to the pom.xml file:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.21</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.21</version>
</dependency>

Log4j needs a properties file for configuration. Add a log4j.properties file to the src/main/resources folder with the following content:

log4j.rootLogger=DEBUG, CA

log4j.appender.CA=org.apache.log4j.ConsoleAppender
log4j.appender.CA.layout=org.apache.log4j.PatternLayout
log4j.appender.CA.layout.ConversionPattern= %d{hh:mm:ss,SSS} [%t] %-5p %c %x - %m%n

Rerun the application. You should now see informative logging about the engine booting up and the database schema being created in the database:

[Screenshot: console output showing the engine booting up and the database schema being created]

We’ve now got a process engine booted up and ready to go. Time to feed it a process!

2.3.2. Deploying a process definition

The process we’ll build is a very simple holiday request process. The Flowable engine expects processes to be defined in the BPMN 2.0 format, which is an XML standard that is widely accepted in the industry. In Flowable terminology, we speak about this as a process definition. From a process definition, many process instances can be started. Think of the process definition as the blueprint for many executions of the process. In this particular case, the process definition defines the different steps involved in requesting holidays, while one process instance matches the request for a holiday by one particular employee.

BPMN 2.0 is stored as XML, but it has a visualization part too: it defines in a standard way how each different step type (a human task, an automatic service call, and so on) is represented and how to connect these different steps to each other. Through this, the BPMN 2.0 standard allows technical and business people to communicate about business processes in a way that both parties understand.

The process definition we’ll use is the following:

[Image: BPMN diagram of the holiday request process]

The process should be quite self-explanatory, but for clarity’s sake let’s describe the different bits:

  • We assume the process is started by providing some information, such as the employee name, the number of holidays requested and a description. Of course, this could be modeled as a separate first step in the process. However, by having it as input data for the process, a process instance is only actually created when a real request has been made. Otherwise, a user could change their mind and cancel before submitting, yet the process instance would already exist. In some scenarios, this could be valuable information (for example, how many times is a request started, but not finished), depending on the business goal.

  • The circle on the left is called a start event. It’s the starting point of a process instance.

  • The first rectangle is a user task. This is a step in the process that a human user has to perform. In this case, the manager needs to approve or reject the request.

  • Depending on what the manager decides, the exclusive gateway (the diamond shape with the cross) will route the process instance to either the approval or the rejection path.

  • If approved, we have to register the request in some external system, followed by another user task for the original employee that notifies them of the decision. This could, of course, be replaced by an email.

  • If rejected, an email is sent to the employee informing them of this.

Typically, such a process definition is modeled with a visual modeling tool, such as the Flowable Designer (Eclipse) or the Flowable Modeler (web application).

Here, however, we’re going to write the XML directly to familiarize ourselves with BPMN 2.0 and its concepts.

The BPMN 2.0 XML corresponding to the diagram above is shown below. Note that this is only the process part. If you’d used a graphical modeling tool, the underlying XML file also contains the visualization part that describes the graphical information, such as the coordinates of the various elements of the process definition (all graphical information is contained in the BPMNDiagram tag in the XML, which is a child element of the definitions tag).

Save the following XML in a file named holiday-request.bpmn20.xml in the src/main/resources folder.

<?xml version="1.0" encoding="UTF-8"?>
<definitions
    xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
    xmlns:omgdc="http://www.omg.org/spec/DD/20100524/DC"
    xmlns:omgdi="http://www.omg.org/spec/DD/20100524/DI"
    xmlns:flowable="http://flowable.org/bpmn"
    typeLanguage="http://www.w3.org/2001/XMLSchema"
    expressionLanguage="http://www.w3.org/1999/XPath"
    targetNamespace="http://www.flowable.org/processdef">

  <process id="holidayRequest" name="Holiday Request" isExecutable="true">

    <startEvent id="startEvent"/>
    <sequenceFlow sourceRef="startEvent" targetRef="approveTask"/>

    <userTask id="approveTask" name="Approve or reject request"/>
    <sequenceFlow sourceRef="approveTask" targetRef="decision"/>

    <exclusiveGateway id="decision"/>
    <sequenceFlow sourceRef="decision" targetRef="externalSystemCall">
      <conditionExpression xsi:type="tFormalExpression">
        <![CDATA[
          ${approved}
        ]]>
      </conditionExpression>
    </sequenceFlow>
    <sequenceFlow sourceRef="decision" targetRef="sendRejectionMail">
      <conditionExpression xsi:type="tFormalExpression">
        <![CDATA[
          ${!approved}
        ]]>
      </conditionExpression>
    </sequenceFlow>

    <serviceTask id="externalSystemCall" name="Enter holidays in external system"
        flowable:class="org.flowable.CallExternalSystemDelegate"/>
    <sequenceFlow sourceRef="externalSystemCall" targetRef="holidayApprovedTask"/>

    <userTask id="holidayApprovedTask" name="Holiday approved"/>
    <sequenceFlow sourceRef="holidayApprovedTask" targetRef="approveEnd"/>

    <serviceTask id="sendRejectionMail" name="Send out rejection email"
        flowable:class="org.flowable.SendRejectionMail"/>
    <sequenceFlow sourceRef="sendRejectionMail" targetRef="rejectEnd"/>

    <endEvent id="approveEnd"/>

    <endEvent id="rejectEnd"/>

  </process>

</definitions>

The attributes on the definitions element look a bit daunting, but they are the same in almost every process definition. They are boilerplate that is needed to be fully compliant with the BPMN 2.0 standard specification.

Every step (in BPMN 2.0 terminology, activity) has an id attribute that gives it a unique identifier in the XML file. All activities can have an optional name too, which increases the readability of the visual diagram, of course.

The activities are connected by a sequence flow, which is a directed arrow in the visual diagram. When executing a process instance, the execution will flow from the start event to the next activity, following the sequence flow.

The sequence flows leaving the exclusive gateway (the diamond shape with the X) are clearly special: both have a condition defined in the form of an expression (see the conditionExpression elements in the XML above). When the process instance execution reaches this gateway, the conditions are evaluated and the first one that resolves to true is taken. This is what exclusive stands for here: only one path is selected. Other types of gateways are, of course, possible if different routing behavior is needed.

The condition written here as an expression is of the form ${approved}, which is shorthand for ${approved == true}. The variable approved is called a process variable. A process variable is a persistent bit of data that is stored together with the process instance and can be used during the lifetime of the process instance. This means that we will have to set this process variable at a certain point in the process instance (when the manager user task is submitted or, in Flowable terminology, completed), as it’s not data that is available when the process instance starts.

Now we have the process BPMN 2.0 XML file, we next need to deploy it to the engine. Deploying a process definition means that:

  • the process engine will store the XML file in the database, so it can be retrieved whenever needed

  • the process definition is parsed to an internal, executable object model, so that process instances can be started from it.

To deploy a process definition to the Flowable engine, the RepositoryService is used, which can be retrieved from the ProcessEngine object. Using the RepositoryService, a new Deployment is created by passing the location of the XML file and calling the deploy() method to actually execute it:

RepositoryService repositoryService = processEngine.getRepositoryService();
Deployment deployment = repositoryService.createDeployment()
  .addClasspathResource("holiday-request.bpmn20.xml")
  .deploy();

We can now verify that the process definition is known to the engine (and learn a bit about the API) by querying it through the API. This is done by creating a new ProcessDefinitionQuery object through the RepositoryService.

ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
  .deploymentId(deployment.getId())
  .singleResult();
System.out.println("Found process definition : " + processDefinition.getName());

2.3.3. Starting a process instance

We now have the process definition deployed to the process engine, so process instances can be started using this process definition as a blueprint.

To start the process instance, we need to provide some initial process variables. Typically, you’ll get these through a form that is presented to the user or through a REST API when a process is triggered by something automatic. In this example, we’ll keep it simple and use the java.util.Scanner class to simply input some data on the command line:

Scanner scanner = new Scanner(System.in);

System.out.println("Who are you?");
String employee = scanner.nextLine();

System.out.println("How many holidays do you want to request?");
Integer nrOfHolidays = Integer.valueOf(scanner.nextLine());

System.out.println("Why do you need them?");
String description = scanner.nextLine();

Next, we can start a process instance through the RuntimeService. The collected data is passed as a java.util.Map instance, where the key is the identifier that will be used to retrieve the variables later on. The process instance is started using a key. This key matches the id attribute that is set in the BPMN 2.0 XML file, in this case holidayRequest.

(NOTE: as you’ll learn later on, there are many other ways to start a process instance beyond using a key.)

<process id="holidayRequest" name="Holiday Request" isExecutable="true">
RuntimeService runtimeService = processEngine.getRuntimeService();

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("employee", employee);
variables.put("nrOfHolidays", nrOfHolidays);
variables.put("description", description);
ProcessInstance processInstance =
  runtimeService.startProcessInstanceByKey("holidayRequest", variables);

When the process instance is started, an execution is created and put in the start event. From there, this execution follows the sequence flow to the user task for the manager approval and executes the user task behavior. This behavior will create a task in the database that can be found using queries later on. A user task is a wait state, and the engine will stop executing anything further and return from the API call.

2.3.4. Sidetrack: transactionality

In Flowable, database transactions play a crucial role in guaranteeing data consistency and solving concurrency problems. When you make a Flowable API call, by default everything is synchronous and part of the same transaction. This means that, when the method call returns, the transaction has been started and committed.

When a process instance is started, there will be one database transaction from the start of the process instance to the next wait state. In this example, this is the first user task. When the engine reaches this user task, the state is persisted to the database, the transaction is committed and the API call returns.

In Flowable, when continuing a process instance, there will always be one database transaction going from the previous wait state to the next wait state. Once persisted, the data can be in the database for a long time, even years if it has to be, until an API call is executed that takes the process instance further. Note that no computing or memory resources are consumed when the process instance is in such a wait state, waiting for the next API call.

In the example here, when the first user task is completed, one database transaction will be used to go from the user task through the exclusive gateway (the automatic logic) until the second user task. Or straight to the end with the other path.

2.3.5. Querying and completing tasks

In a more realistic application, there will be a user interface where the employees and the managers can log in and see their task lists. With these, they can inspect the process instance data that is stored as process variables and decide what they want to do with the task. In this example, we will mimic task lists by executing the API calls that normally would be behind a service call that drives a UI.

We haven’t yet configured the assignments for the user tasks. We want the first task to go to the managers group and the second user task to be assigned to the original requester of the holiday. To do this, add the candidateGroups attribute to the first task:

<userTask id="approveTask" name="Approve or reject request" flowable:candidateGroups="managers"/>

and the assignee attribute to the second task as shown below. Note that we’re not using a static value like the managers value above, but a dynamic assignment based on a process variable that we’ve passed when the process instance was started:

<userTask id="holidayApprovedTask" name="Holiday approved" flowable:assignee="${employee}"/>

To get the actual task list, we create a TaskQuery through the TaskService and we configure the query to only return the tasks for the managers group:

TaskService taskService = processEngine.getTaskService();
List<Task> tasks = taskService.createTaskQuery().taskCandidateGroup("managers").list();
System.out.println("You have " + tasks.size() + " tasks:");
for (int i=0; i<tasks.size(); i++) {
  System.out.println((i+1) + ") " + tasks.get(i).getName());
}

Using the task identifier, we can now get the process variables for that specific process instance and show the actual request on the screen:

System.out.println("Which task would you like to complete?");
int taskIndex = Integer.valueOf(scanner.nextLine());
Task task = tasks.get(taskIndex - 1);
Map<String, Object> processVariables = taskService.getVariables(task.getId());
System.out.println(processVariables.get("employee") + " wants " +
    processVariables.get("nrOfHolidays") + " of holidays. Do you approve this?");

Which, if you run this, should look something like this:

[Screenshot: console output showing the task list and the holiday request details]

The manager can now complete the task. In reality, this often means that a form is submitted by the user. The data from the form is then passed as process variables. Here, we’ll mimic this by passing a map with the approved variable (the name is important, as it’s used later on in the conditions of the sequence flow!) when the task is completed:

boolean approved = scanner.nextLine().toLowerCase().equals("y");
variables = new HashMap<String, Object>();
variables.put("approved", approved);
taskService.complete(task.getId(), variables);

The task is now completed and one of the two paths leaving the exclusive gateway is selected based on the approved process variable.

2.3.6. Writing a JavaDelegate

There is one last piece of the puzzle still missing: we haven’t implemented the automatic logic that will be executed when the request is approved. In the BPMN 2.0 XML, this is a service task, and it looked like this above:

<serviceTask id="externalSystemCall" name="Enter holidays in external system"
    flowable:class="org.flowable.CallExternalSystemDelegate"/>

In reality, this logic could be anything, ranging from calling a service with HTTP REST, to executing some legacy code calls to a system the organization has been using for decades. We won’t implement the actual logic here but simply log the processing.

Create a new class (File → New → Class in Eclipse), fill in org.flowable as the package name and CallExternalSystemDelegate as the class name. Make that class implement the org.flowable.engine.delegate.JavaDelegate interface and implement the execute method:

package org.flowable;

import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;

public class CallExternalSystemDelegate implements JavaDelegate {

    public void execute(DelegateExecution execution) {
        System.out.println("Calling the external system for employee "
            + execution.getVariable("employee"));
    }

}

When the execution arrives at the service task, the class that is referenced in the BPMN 2.0 XML is instantiated and called.

When running the example now, the logging message is shown, demonstrating the custom logic is indeed executed:

[Screenshot: console output showing the "Calling the external system" message]

2.3.7. Working with historical data

One of the many reasons for choosing to use a process engine like Flowable is because it automatically stores audit data or historical data for all the process instances. This data allows the creation of rich reports that give insights into how the organization works, where the bottlenecks are, etc.

For example, suppose we want to show the duration of the process instance that we’ve been executing so far. To do this, we get the HistoryService from the ProcessEngine and create a query for historical activities. In the snippet below you can see we add some additional filtering:

  • only the activities for one particular process instance

  • only the activities that have finished

The results are also sorted by end time, meaning that we’ll get them in execution order.

HistoryService historyService = processEngine.getHistoryService();
List<HistoricActivityInstance> activities =
  historyService.createHistoricActivityInstanceQuery()
    .processInstanceId(processInstance.getId())
    .finished()
    .orderByHistoricActivityInstanceEndTime().asc()
    .list();

for (HistoricActivityInstance activity : activities) {
  System.out.println(activity.getActivityId() + " took "
      + activity.getDurationInMillis() + " milliseconds");
}

Running the example again, we now see something like this in the console:

startEvent took 1 milliseconds
approveTask took 2638 milliseconds
decision took 3 milliseconds
externalSystemCall took 1 milliseconds

2.3.8. Conclusion

This tutorial introduced various Flowable and BPMN 2.0 concepts and terminology, while also demonstrating how to use the Flowable API programmatically.

Of course, this is just the start of the journey. The following sections will dive more deeply into the many options and features that the Flowable engine supports. Other sections go into the various ways the Flowable engine can be set up and used, and describe in detail all the BPMN 2.0 constructs that are possible.

2.4. Getting started with the Flowable REST API

This section shows the same example as the previous section: deploying a process definition, starting a process instance, getting a task list and completing a task. If you haven’t read that section, it might be good to skim through it to get an idea of what is done there.

This time, the Flowable REST API is used rather than the Java API. You’ll soon notice that the REST API closely matches the Java API, and knowing one automatically means that you can find your way around the other.

To get a full, detailed overview of the Flowable REST API, check out the REST API chapter.

2.4.1. Setting up the REST application

When you download the .zip file from the flowable.org website, the REST application can be found in the wars folder. You’ll need a servlet container, such as Tomcat, Jetty, and so on, to run the WAR file.

When using Tomcat the steps are as follows:

  • Download and unzip the latest and greatest Tomcat zip file (choose the Core distribution from the Tomcat website).

  • Copy the flowable-rest.war file from the wars folder of the unzipped Flowable distribution to the webapps folder of the unzipped Tomcat folder.

  • On the command line, go to the bin folder of the Tomcat folder.

  • Execute ./catalina.sh run to boot up the Tomcat server.

During the server boot up, you’ll notice some Flowable logging messages passing by. At the end, a message like INFO [main] org.apache.catalina.startup.Catalina.start Server startup in xyz ms indicates that the server is ready to receive requests. Note that by default an in-memory H2 database instance is used, which means that data won’t survive a server restart.

In the following sections, we’ll use cURL to demonstrate the various REST calls. All REST calls are by default protected with basic authentication. The user kermit with password kermit is used in all calls.

After bootup, verify the application is running correctly by executing

curl --user kermit:kermit http://localhost:8080/flowable-rest/service/management/engine

If you get back a proper JSON response, the REST API is up and running.

2.4.2. Deploying a process definition

The first step is to deploy a process definition. With the REST API, this is done by uploading a .bpmn20.xml file (or a .zip file for multiple process definitions) as multipart/form-data:

curl --user kermit:kermit -F "file=@holiday-request.bpmn20.xml" http://localhost:8080/flowable-rest/service/repository/deployments

To verify that the process definition is deployed correctly, the list of process definitions can be requested:

curl --user kermit:kermit http://localhost:8080/flowable-rest/service/repository/process-definitions

which returns a list of all process definitions currently deployed to the engine.

2.4.3. Start a process instance

Starting a process instance through the REST API is similar to doing the same through the Java API: a key is provided to identify the process definition to use along with a map of initial process variables:

curl --user kermit:kermit -H "Content-Type: application/json" -X POST -d '{ "processDefinitionKey":"holidayRequest", "variables": [ { "name":"employee", "value": "John Doe" }, { "name":"nrOfHolidays", "value": 7 }]}' http://localhost:8080/flowable-rest/service/runtime/process-instances

which returns something like

{"id":"43","url":"http://localhost:8080/flowable-rest/service/runtime/process-instances/43","businessKey":null,"suspended":false,"ended":false,"processDefinitionId":"holidayRequest:1:42","processDefinitionUrl":"http://localhost:8080/flowable-rest/service/repository/process-definitions/holidayRequest:1:42","activityId":null,"variables":[],"tenantId":"","completed":false}

2.4.4. Task list and completing a task

When the process instance is started, the first task is assigned to the managers group. To get all tasks for this group, a task query can be done through the REST API:

curl --user kermit:kermit -H "Content-Type: application/json" -X POST -d '{ "candidateGroup" : "managers" }' http://localhost:8080/flowable-rest/service/query/tasks

which returns a list of all tasks for the managers group.

Such a task can now be completed using:

curl --user kermit:kermit -H "Content-Type: application/json" -X POST -d '{ "action" : "complete", "variables" : [ { "name" : "approved", "value" : true} ]  }' http://localhost:8080/flowable-rest/service/runtime/tasks/25

However, you most likely will get an error like:

{"message":"Internal server error","exception":"couldn't instantiate class org.flowable.CallExternalSystemDelegate"}

This means that the engine couldn’t find the CallExternalSystemDelegate class referenced in the service task. To solve this, the class needs to be put on the classpath of the application (which requires a restart). Create the class as described in this section, package it as a JAR, and put it in the WEB-INF/lib folder of the flowable-rest folder under the webapps folder of Tomcat.

3. Configuration

3.1. Creating a ProcessEngine

The Flowable process engine is configured through an XML file called flowable.cfg.xml. Note that this is not applicable if you’re using the Spring style of building a process engine.

The easiest way to obtain a ProcessEngine is to use the org.flowable.engine.ProcessEngines class:

ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();

This will look for a flowable.cfg.xml file on the classpath and construct an engine based on the configuration in that file. The following snippet shows an example configuration. The following sections will give a detailed overview of the configuration properties.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">

    <property name="jdbcUrl" value="jdbc:h2:mem:flowable;DB_CLOSE_DELAY=1000" />
    <property name="jdbcDriver" value="org.h2.Driver" />
    <property name="jdbcUsername" value="sa" />
    <property name="jdbcPassword" value="" />

    <property name="databaseSchemaUpdate" value="true" />

    <property name="asyncExecutorActivate" value="false" />

    <property name="mailServerHost" value="mail.my-corp.com" />
    <property name="mailServerPort" value="5025" />
  </bean>

</beans>

Note that the configuration XML is in fact a Spring configuration. This does not mean that Flowable can only be used in a Spring environment! We are simply leveraging the parsing and dependency injection capabilities of Spring internally for building up the engine.

The ProcessEngineConfiguration object can also be created programmatically using the configuration file. It is also possible to use a different bean id (see the variants below that take a beanName parameter):

ProcessEngineConfiguration.
  createProcessEngineConfigurationFromResourceDefault();
  createProcessEngineConfigurationFromResource(String resource);
  createProcessEngineConfigurationFromResource(String resource, String beanName);
  createProcessEngineConfigurationFromInputStream(InputStream inputStream);
  createProcessEngineConfigurationFromInputStream(InputStream inputStream, String beanName);

It is also possible not to use a configuration file, and create a configuration based on defaults (see the different supported classes for more information).

ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();

All these ProcessEngineConfiguration.createXXX() methods return a ProcessEngineConfiguration that can be tweaked further if needed. After calling the buildProcessEngine() operation, a ProcessEngine is created:

ProcessEngine processEngine = ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration()
  .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
  .setJdbcUrl("jdbc:h2:mem:my-own-db;DB_CLOSE_DELAY=1000")
  .setAsyncExecutorActivate(false)
  .buildProcessEngine();

3.2. ProcessEngineConfiguration bean

The flowable.cfg.xml must contain a bean that has the id 'processEngineConfiguration'.

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">

This bean is then used to construct the ProcessEngine. There are multiple classes available that can be used to define the processEngineConfiguration. These classes represent different environments, and set defaults accordingly. It’s best practice to select the class that best matches your environment, to minimize the number of properties needed to configure the engine. The following classes are currently available:

  • org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration: the process engine is used in a standalone way. Flowable will take care of all transactions. By default, the database will only be checked when the engine boots (and an exception is thrown if there is no Flowable schema or the schema version is incorrect).

  • org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration: this is a convenience class for unit testing purposes. Flowable will take care of all transactions. An H2 in-memory database is used by default. The database will be created and dropped when the engine boots and shuts down. When using this, no additional configuration is probably needed (except when using, for example, the job executor or mail capabilities).

  • org.flowable.spring.SpringProcessEngineConfiguration: To be used when the process engine is used in a Spring environment. See the Spring integration section for more information.

  • org.flowable.engine.impl.cfg.JtaProcessEngineConfiguration: To be used when the engine runs in standalone mode, with JTA transactions.

3.3. Database configuration

There are two ways to configure the database that the Flowable engine will use. The first option is to define the JDBC properties of the database:

  • jdbcUrl: JDBC URL of the database.

  • jdbcDriver: implementation of the driver for the specific database type.

  • jdbcUsername: username to connect to the database.

  • jdbcPassword: password to connect to the database.

The data source that is constructed based on the provided JDBC properties will have the default MyBatis connection pool settings. The following attributes can optionally be set to tweak that connection pool (taken from the MyBatis documentation):

  • jdbcMaxActiveConnections: The maximum number of active connections that the connection pool can contain at any time. Default is 10.

  • jdbcMaxIdleConnections: The maximum number of idle connections that the connection pool can contain at any time.

  • jdbcMaxCheckoutTime: The amount of time in milliseconds a connection can be checked out from the connection pool before it is forcefully returned. Default is 20000 (20 seconds).

  • jdbcMaxWaitTime: This is a low-level setting that gives the pool a chance to print a log status and re-attempt the acquisition of a connection in the case that it is taking unusually long (to avoid failing silently forever if the pool is misconfigured). Default is 20000 (20 seconds).
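
For example, a couple of these pool settings could be tweaked on the process engine configuration bean as follows (a minimal sketch; the values are illustrative, not recommendations):

<property name="jdbcMaxActiveConnections" value="25" />
<property name="jdbcMaxIdleConnections" value="10" />
<property name="jdbcMaxCheckoutTime" value="30000" />
<property name="jdbcMaxWaitTime" value="20000" />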

Example database configuration:

<property name="jdbcUrl" value="jdbc:h2:mem:flowable;DB_CLOSE_DELAY=1000" />
<property name="jdbcDriver" value="org.h2.Driver" />
<property name="jdbcUsername" value="sa" />
<property name="jdbcPassword" value="" />

Our benchmarks have shown that the MyBatis connection pool is not the most efficient or resilient when dealing with a lot of concurrent requests. As such, it is advised to use a javax.sql.DataSource implementation and inject it into the process engine configuration (for example, DBCP, C3P0, Hikari, Tomcat Connection Pool, and so on):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver" />
  <property name="url" value="jdbc:mysql://localhost:3306/flowable" />
  <property name="username" value="flowable" />
  <property name="password" value="flowable" />
  <property name="defaultAutoCommit" value="false" />
</bean>

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">

  <property name="dataSource" ref="dataSource" />
  ...

Note that Flowable does not ship with a library that allows you to define such a data source. So you need to make sure that the libraries are on your classpath.
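
For example, when using the DBCP BasicDataSource shown above, a dependency along these lines could be added to the pom.xml (the version shown is an assumption; use whatever fits your environment):

<dependency>
  <groupId>commons-dbcp</groupId>
  <artifactId>commons-dbcp</artifactId>
  <version>1.4</version>
</dependency>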

The following properties can be set, regardless of whether you are using the JDBC or data source approach:

  • databaseType: it’s normally not necessary to specify this property, as it is automatically detected from the database connection metadata. Should only be specified when automatic detection fails. Possible values: {h2, mysql, oracle, postgres, mssql, db2}. This setting will determine which create/drop scripts and queries will be used. See the supported databases section for an overview of which types are supported.

  • databaseSchemaUpdate: sets the strategy to handle the database schema on process engine boot and shutdown.

    • false (default): Checks the version of the DB schema against the library when the process engine is being created and throws an exception if the versions don’t match.

    • true: Upon building the process engine, a check is performed and an update of the schema is performed if it is necessary. If the schema doesn’t exist, it is created.

    • create-drop: Creates the schema when the process engine is being created and drops the schema when the process engine is being closed.
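
Both properties are set on the process engine configuration bean like any other property. A minimal sketch (the mysql value is just an example, and databaseType is normally left out so it can be auto-detected):

<property name="databaseType" value="mysql" />
<property name="databaseSchemaUpdate" value="true" />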

3.4. JNDI Datasource Configuration

By default, the database configuration for Flowable is contained within the db.properties files in the WEB-INF/classes of each web application. This isn’t always ideal because it requires users to either modify the db.properties in the Flowable source and recompile the WAR file, or explode the WAR and modify the db.properties on every deployment.

By using JNDI (Java Naming and Directory Interface) to obtain the database connection, the connection is fully managed by the Servlet Container and the configuration can be managed outside the WAR deployment. This also allows more control over the connection parameters than what is provided by the db.properties file.

3.4.1. Configuration

Configuration of the JNDI data source will differ depending on what servlet container application you are using. The instructions below will work for Tomcat, but for other container applications, please refer to the documentation for your container app.

If using Tomcat, the JNDI resource is configured within $CATALINA_BASE/conf/[enginename]/[hostname]/[warname].xml (for the Flowable UI this will usually be $CATALINA_BASE/conf/Catalina/localhost/flowable-app.xml). The default context is copied from the Flowable WAR file when the application is first deployed, so if it already exists, you will need to replace it. To change the JNDI resource so that the application connects to MySQL instead of H2, for example, change the file to the following:

<?xml version="1.0" encoding="UTF-8"?>
<Context antiJARLocking="true" path="/flowable-app">
  <Resource auth="Container"
    name="jdbc/flowableDB"
    type="javax.sql.DataSource"
    description="JDBC DataSource"
    url="jdbc:mysql://localhost:3306/flowable"
    driverClassName="com.mysql.jdbc.Driver"
    username="sa"
    password=""
    defaultAutoCommit="false"
    initialSize="5"
    maxWait="5000"
    maxActive="120"
    maxIdle="5"/>
</Context>

3.4.2. JNDI properties

To configure a JNDI data source, use the following properties in the properties file for the Flowable UI:

  • datasource.jndi.name: the JNDI name of the data source.

  • datasource.jndi.resourceRef: Sets whether the lookup occurs in a J2EE container, that is, whether the prefix "java:comp/env/" needs to be added if the JNDI name doesn’t already contain it. Default is "true".
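
For example, with the jdbc/flowableDB resource from the Tomcat context above, setting datasource.jndi.name=jdbc/flowableDB and leaving datasource.jndi.resourceRef at its default of true should be enough for the lookup to resolve to java:comp/env/jdbc/flowableDB.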

3.5. Supported databases

Listed below are the types (case sensitive!) that Flowable uses to refer to databases.

Each entry below gives the Flowable database type, an example JDBC URL and any notes:

  • h2: jdbc:h2:tcp://localhost/flowable (the default configured database)

  • mysql: jdbc:mysql://localhost:3306/flowable?autoReconnect=true (tested using the mysql-connector-java database driver)

  • oracle: jdbc:oracle:thin:@localhost:1521:xe

  • postgres: jdbc:postgresql://localhost:5432/flowable

  • db2: jdbc:db2://localhost:50000/flowable

  • mssql: jdbc:sqlserver://localhost:1433;databaseName=flowable (jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver) or jdbc:jtds:sqlserver://localhost:1433/flowable (jdbc.driver=net.sourceforge.jtds.jdbc.Driver); tested using Microsoft JDBC Driver 4.0 (sqljdbc4.jar) and the JTDS driver

3.6. Creating the database tables

The easiest way to create the database tables for your database is to:

  • Add the flowable-engine JARs to your classpath

  • Add a suitable database driver

  • Add a Flowable configuration file (flowable.cfg.xml) to your classpath, pointing to your database (see database configuration section)

  • Execute the main method of the DbSchemaCreate class

However, often only database administrators can execute DDL statements on a database. On a production system, this is also the wisest of choices. The SQL DDL statements can be found on the Flowable downloads page or inside the Flowable distribution folder, in the database subdirectory. The scripts are also in the engine JAR (flowable-engine-x.jar), in the package org/flowable/db/create (the drop folder contains the drop statements). The SQL files are of the form

flowable.{db}.{create|drop}.{type}.sql

Where db is any of the supported databases and type is:

  • engine: the tables needed for engine execution. Required.

  • history: the tables that contain the history and audit information. Optional: not needed when history level is set to none. Note that this will also disable some features (such as commenting on tasks) which store the data in the history database.
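
For example, following this pattern, flowable.h2.create.engine.sql creates the engine tables for H2, and flowable.mysql.drop.history.sql drops the history tables for MySQL.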

Note for MySQL users: MySQL versions lower than 5.6.4 have no support for timestamps or dates with millisecond precision. To make things even worse, some versions will throw an exception when trying to create such a column, but other versions don’t. When doing auto-creation/upgrade, the engine will change the DDL when executing it. When using the DDL file approach, both a regular version and a special file with mysql55 in it are available (this applies on anything lower than 5.6.4). This latter file will have column types with no millisecond precision.

Concretely, the following applies for MySQL versions:

  • <5.6: No millisecond precision available. DDL files available (look for files containing mysql55). Auto creation/update will work out of the box.

  • 5.6.0 - 5.6.3: No millisecond precision available. Auto creation/update will NOT work. It is advised to upgrade to a newer database version anyway. DDL files for mysql 5.5 could be used if really needed.

  • 5.6.4+: Millisecond precision available. DDL files available (default file containing mysql). Auto creation/update works out of the box.

Do note that in the case of upgrading the MySQL database later on and the Flowable tables are already created/upgraded, the column type change will have to be done manually!

3.7. Database table names explained

The database table names of Flowable all start with ACT_. The second part is a two-character identification of the use case of the table. This use case will also roughly match the service API.

  • ACT_RE_*: RE stands for repository. Tables with this prefix contain static information such as process definitions and process resources (images, rules, etc.).

  • ACT_RU_*: RU stands for runtime. These are the runtime tables that contain the runtime data of process instances, user tasks, variables, jobs, and so on. Flowable only stores the runtime data during process instance execution and removes the records when a process instance ends. This keeps the runtime tables small and fast.

  • ACT_HI_*: HI stands for history. These are the tables that contain historic data, such as past process instances, variables, tasks, and so on.

  • ACT_GE_*: general data, which is used for various use cases.

3.8. Database upgrade

Make sure you make a backup of your database (using your database backup capabilities) before you run an upgrade.

By default, a version check will be performed each time a process engine is created. This typically happens once at boot time of your application or of the Flowable webapps. If the Flowable library notices a difference between the library version and the version of the Flowable database tables, then an exception is thrown.

To upgrade, you have to start by putting the following configuration property in your flowable.cfg.xml configuration file:

<beans>

  <bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">
    <!-- ... -->
    <property name="databaseSchemaUpdate" value="true" />
    <!-- ... -->
  </bean>

</beans>

Also, add a suitable database driver for your database to the classpath. Then upgrade the Flowable libraries in your application, or start up a new version of Flowable and point it to a database that contains data from an older version. With databaseSchemaUpdate set to true, Flowable will automatically upgrade the DB schema to the newest version the first time it notices that the libraries and DB schema are out of sync.

As an alternative, you can also run the upgrade DDL statements. It’s also possible to run the upgrade database scripts available on the Flowable downloads page.

3.9. Job Executor (from version 6.0.0 onwards)

The async executor of Flowable v5 is the only available job executor in Flowable v6, as it is a more performant and more database-friendly way of executing asynchronous jobs in the Flowable engine. The old job executor of Flowable v5 is no longer available in v6. More information can be found in the advanced section of the user guide.

Additionally, if running under Java EE 7, the JSR-236 compliant ManagedAsyncJobExecutor can be used to let the container manage the threads. To enable it, the thread factory should be passed in the configuration as follows:

<bean id="threadFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName" value="java:jboss/ee/concurrency/factory/default" />
</bean>

<bean id="customJobExecutor" class="org.flowable.engine.impl.jobexecutor.ManagedAsyncJobExecutor">
  <!-- ... -->
  <property name="threadFactory" ref="threadFactory" />
  <!-- ... -->
</bean>

The managed implementations fall back to their default counterparts if the thread factory is not specified.

3.10. Job executor activation

The AsyncExecutor is a component that manages a thread pool to fire timers and other asynchronous tasks. Other implementations are possible (for example using a message queue, see the advanced section of the user guide).

By default, the AsyncExecutor is not activated and not started. With the following configuration the async executor can be started together with the Flowable Engine.

<property name="asyncExecutorActivate" value="true" />

The property asyncExecutorActivate instructs the Flowable engine to start the Async executor at startup.

3.11. Mail server configuration

Configuring a mail server is optional. Flowable supports sending e-mails in business processes. To actually send an e-mail, a valid SMTP mail server configuration is required. See the e-mail task for the configuration options.

3.12. History configuration

Customizing the configuration of history storage is optional. This allows you to tweak settings that influence the history capabilities of the engine. See history configuration for more details.
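
The possible values for the history property are none, activity, audit (the default) and full; the history configuration section describes what each level stores. For example, to set the audit level explicitly: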

<property name="history" value="audit" />

3.13. Async history configuration

[Experimental] Since Flowable 6.1.0 the async history feature has been added. When async history is enabled, the historic data will be persisted by a history job executor, instead of synchronous persistence as part of the runtime execution persistence. See async history configuration for more details.

<property name="asyncHistoryEnabled" value="true" />

3.14. Exposing configuration beans in expressions and scripts

By default, all beans that you specify in the flowable.cfg.xml configuration or in your own Spring configuration file are available to expressions and scripts. If you want to limit the visibility of beans in your configuration file, you can configure a property called beans in your process engine configuration. The beans property in ProcessEngineConfiguration is a map. When you specify that property, only beans specified in that map will be visible to expressions and scripts. The exposed beans will be exposed with the names as you specify in the map.
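
For example, a minimal sketch of such a configuration, exposing only a single bean (the printer key, bean id and class are made-up placeholders):

<property name="beans">
  <map>
    <entry key="printer" value-ref="printer" />
  </map>
</property>

<!-- defined elsewhere in the same Spring configuration file -->
<bean id="printer" class="org.flowable.examples.Printer" />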

3.15. Deployment cache configuration

All process definitions are cached (after they’re parsed) to avoid hitting the database every time a process definition is needed, and because process definition data doesn’t change. By default, there is no limit on this cache. To limit the process definition cache, add the following property:

<property name="processDefinitionCacheLimit" value="10" />

Setting this property will swap the default hashmap cache for an LRU cache with the provided hard limit. Of course, the best value for this property depends on the total number of process definitions stored and the number of process definitions actually used at runtime by all the runtime process instances.

You can also inject your own cache implementation. This must be a bean that implements the org.flowable.engine.impl.persistence.deploy.DeploymentCache interface:

<property name="processDefinitionCache">
  <bean class="org.flowable.MyCache" />
</property>

There are similar properties called knowledgeBaseCacheLimit and knowledgeBaseCache for configuring the rules cache. These are only needed when you use the rules task in your processes.
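
For example, a sketch following the same pattern as the process definition cache limit above:

<property name="knowledgeBaseCacheLimit" value="10" />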

3.16. Logging

All logging (Flowable, Spring, MyBatis, and so on) is routed through SLF4J and allows you to select the logging implementation of your choice.

By default, no SLF4J binding JAR is present in the flowable-engine dependencies; it should be added to your project in order to use the logging framework of your choice. If no implementation JAR is added, SLF4J will use a NOP logger, not logging anything at all, other than a warning that nothing will be logged. For more information on these bindings, see http://www.slf4j.org/codes.html#StaticLoggerBinder.

With Maven, add, for example, a dependency like this (here using Log4j); note that you still need to add a version:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
</dependency>

The flowable-ui and flowable-rest webapps are configured to use the Log4j binding. Log4j is also used when running the tests for all the flowable-* modules.

Important note when using a container with commons-logging in the classpath: in order to route the Spring logging through SLF4J, a bridge is used (see http://www.slf4j.org/legacy.html#jclOverSLF4J). If your container provides a commons-logging implementation, please follow the directions on this page to ensure stability: http://www.slf4j.org/codes.html#release.

Example when using Maven (version omitted):

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
</dependency>

3.17. Mapped Diagnostic Contexts

Flowable supports the Mapped Diagnostic Contexts feature of SLF4J. The following basic information is passed to the underlying logger along with what is going to be logged:

  • processDefinition Id as mdcProcessDefinitionID

  • processInstance Id as mdcProcessInstanceID

  • execution Id as mdcExecutionId

None of this information is logged by default. The logger can be configured to show it in your desired format, in addition to the usual logged messages. For example, in Log4j the following sample layout definition causes the logger to show the above-mentioned information:

log4j.appender.consoleAppender.layout.ConversionPattern=ProcessDefinitionId=%X{mdcProcessDefinitionID}
executionId=%X{mdcExecutionId} mdcProcessInstanceID=%X{mdcProcessInstanceID} mdcBusinessKey=%X{mdcBusinessKey} %m%n

This is useful when the logs contain information that needs to be checked in real time, by means of a log analyzer, for example.

3.18. Event handlers

The event mechanism in the Flowable engine allows you to get notified when various events occur within the engine. Take a look at all supported event types for an overview of the events available.

It’s possible to register a listener for certain types of events as opposed to getting notified when any type of event is dispatched. You can either add engine-wide event listeners through the configuration, add engine-wide event listeners at runtime using the API or add event-listeners to specific process definitions in the BPMN XML.

All events dispatched are a subtype of org.flowable.engine.common.api.delegate.event.FlowableEvent. The event exposes (if available) the type, executionId, processInstanceId and processDefinitionId. Certain events contain additional context related to the event that occurred, more information about additional payloads can be found in the list of all supported event types.

3.18.1. Event listener implementation

The only requirement for an event-listener is to implement org.flowable.engine.delegate.event.FlowableEventListener. Below is an example implementation of a listener, which writes all events it receives to standard out, with the exception of events related to job-execution:

public class MyEventListener implements FlowableEventListener {

  @Override
  public void onEvent(FlowableEvent event) {
    switch (event.getType()) {

      case JOB_EXECUTION_SUCCESS:
        System.out.println("A job well done!");
        break;

      case JOB_EXECUTION_FAILURE:
        System.out.println("A job has failed...");
        break;

      default:
        System.out.println("Event received: " + event.getType());
    }
  }

  @Override
  public boolean isFailOnException() {
    // The logic in the onEvent method of this listener is not critical, exceptions
    // can be ignored if logging fails...
    return false;
  }
}

The isFailOnException() method determines the behavior when the onEvent(..) method throws an exception when an event is dispatched. When false is returned, the exception is ignored. When true is returned, the exception is not ignored and bubbles up, effectively failing the current ongoing command. If the event was part of an API-call (or any other transactional operation, for example, job-execution), the transaction will be rolled back. If the behavior in the event-listener is not business-critical, it’s recommended to return false.

There are a few base implementations provided by Flowable to facilitate common use cases of event-listeners. These can be used as base-class or as an example listener implementation:

  • org.flowable.engine.delegate.event.BaseEntityEventListener: An event-listener base-class that can be used to listen for entity-related events for a specific type of entity or for all entities. It hides away the type-checking and offers four methods that can be overridden: onCreate(..), onUpdate(..) and onDelete(..), called when an entity is created, updated or deleted; for all other entity-related events, onEntityEvent(..) is called.
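
A minimal sketch of a listener built on this base class could look as follows (MyTaskEntityEventListener is a made-up name, and the hook signatures assume the methods receive the dispatched FlowableEvent, so check the base class of your Flowable version):

public class MyTaskEntityEventListener extends BaseEntityEventListener {

  @Override
  protected void onCreate(FlowableEvent event) {
    // called when an entity of the configured type is created
    System.out.println("Entity created: " + event.getType());
  }

  @Override
  protected void onDelete(FlowableEvent event) {
    // called when an entity of the configured type is deleted
    System.out.println("Entity deleted: " + event.getType());
  }
}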

3.18.2. Configuration and setup

If an event-listener is configured in the process engine configuration, it will be active when the process engine starts and will remain active after subsequent reboots of the engine.

The property eventListeners expects a list of org.flowable.engine.delegate.event.FlowableEventListener instances. As usual, you can either declare an inline bean definition or use a ref to an existing bean instead. The snippet below adds an event-listener to the configuration that is notified when any event is dispatched, regardless of its type:

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  ...
  <property name="eventListeners">
    <list>
      <bean class="org.flowable.engine.example.MyEventListener" />
    </list>
  </property>
</bean>

To get notified when certain types of events get dispatched, use the typedEventListeners property, which expects a map. The key of a map-entry is a comma-separated list of event-names (or a single event-name). The value of a map-entry is a list of org.flowable.engine.delegate.event.FlowableEventListener instances. The snippet below adds an event-listener to the configuration, that is notified when a job execution was successful or failed:

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  ...
  <property name="typedEventListeners">
    <map>
      <entry key="JOB_EXECUTION_SUCCESS,JOB_EXECUTION_FAILURE">
        <list>
          <bean class="org.flowable.engine.example.MyJobEventListener" />
        </list>
      </entry>
    </map>
  </property>
</bean>

The order of dispatching events is determined by the order in which the listeners were added. First, all normal event-listeners are called (eventListeners property) in the order they are defined in the list. After that, all typed event listeners (typedEventListeners property) are called, if an event of the right type is dispatched.

3.18.3. Adding listeners at runtime

It’s possible to add and remove additional event-listeners to the engine by using the API (RuntimeService):

/**
 * Adds an event-listener which will be notified of ALL events by the dispatcher.
 * @param listenerToAdd the listener to add
 */
void addEventListener(FlowableEventListener listenerToAdd);

/**
 * Adds an event-listener which will only be notified when an event occurs,
 * which type is in the given types.
 * @param listenerToAdd the listener to add
 * @param types types of events the listener should be notified for
 */
void addEventListener(FlowableEventListener listenerToAdd, FlowableEventType... types);

/**
 * Removes the given listener from this dispatcher. The listener will no longer be notified,
 * regardless of the type(s) it was registered for in the first place.
 * @param listenerToRemove listener to remove
 */
void removeEventListener(FlowableEventListener listenerToRemove);

Please note that the listeners added at runtime are not retained when the engine is rebooted.

3.18.4. Adding listeners to process definitions

It’s possible to add listeners to a specific process-definition. The listeners will only be called for events related to the process definition and to all events related to process instances that are started with that specific process definition. The listener implementations can be defined using a fully qualified classname, an expression that resolves to a bean that implements the listener interface or can be configured to throw a message/signal/error BPMN event.

Listeners executing user-defined logic

The snippet below adds 2 listeners to a process-definition. The first listener will receive events of any type, with a listener implementation based on a fully-qualified class name. The second listener is only notified when a job is successfully executed or when it failed, using a listener that has been defined in the beans property of the process engine configuration.

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener class="org.flowable.engine.test.MyEventListener" />
    <flowable:eventListener delegateExpression="${testEventListener}" events="JOB_EXECUTION_SUCCESS,JOB_EXECUTION_FAILURE" />
  </extensionElements>

  ...

</process>

For events related to entities, it’s also possible to add listeners to a process-definition that only get notified when entity-events occur for a certain entity type. The snippet below shows how this can be achieved. It can be used for ALL entity-events (first example) or for specific event types only (second example).

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener class="org.flowable.engine.test.MyEventListener" entityType="task" />
    <flowable:eventListener delegateExpression="${testEventListener}" events="ENTITY_CREATED" entityType="task" />
  </extensionElements>

  ...

</process>

Supported values for the entityType are: attachment, comment, execution, identity-link, job, process-instance, process-definition, task.

Listeners throwing BPMN events

Another way of handling events being dispatched is to throw a BPMN event. Please bear in mind that it only makes sense to throw BPMN-events with certain kinds of Flowable event types. For example, throwing a BPMN event when the process-instance is deleted will result in an error. The snippet below shows how to throw a signal inside process-instance, throw a signal to an external process (global), throw a message-event inside the process-instance and throw an error-event inside the process-instance. Instead of using the class or delegateExpression, the attribute throwEvent is used, along with an additional attribute, specific to the type of event being thrown.

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener throwEvent="signal" signalName="My signal" events="TASK_ASSIGNED" />
  </extensionElements>
</process>

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener throwEvent="globalSignal" signalName="My signal" events="TASK_ASSIGNED" />
  </extensionElements>
</process>

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener throwEvent="message" messageName="My message" events="TASK_ASSIGNED" />
  </extensionElements>
</process>

<process id="testEventListeners">
  <extensionElements>
    <flowable:eventListener throwEvent="error" errorCode="123" events="TASK_ASSIGNED" />
  </extensionElements>
</process>

If additional logic is needed to decide whether or not to throw the BPMN-event, it’s possible to extend the listener-classes provided by Flowable. By overriding the isValidEvent(FlowableEvent event) method in your subclass, the BPMN-event throwing can be prevented. The classes involved are org.flowable.engine.impl.bpmn.helper.SignalThrowingEventListener, org.flowable.engine.impl.bpmn.helper.MessageThrowingEventListener and org.flowable.engine.impl.bpmn.helper.ErrorThrowingEventListener.

Notes on listeners on a process-definition
  • Event-listeners can only be declared on the process element, as a child-element of the extensionElements. Listeners cannot be defined on individual activities in the process.

  • Expressions used in the delegateExpression do not have access to the execution-context, as other expressions (for example, in gateways) have. They can only reference beans defined in the beans property of the process engine configuration or, when using Spring (and the beans property is absent), any Spring bean that implements the listener interface.

  • When using the class attribute of a listener, there will only be a single instance of that class created. Make sure the listener implementations do not rely on member-fields or ensure safe usage from multiple threads/contexts.

  • When an illegal event-type is used in the events attribute or an illegal throwEvent value is used, an exception will be thrown when the process-definition is deployed (effectively failing the deployment). When an illegal value for class or delegateExpression is supplied (either a nonexistent class, a nonexistent bean reference or a delegate not implementing the listener interface), an exception will be thrown when the process is started (or when the first valid event for that process-definition is dispatched to the listener). Make sure the referenced classes are on the classpath and that the expressions resolve to a valid instance.

3.18.5. Dispatching events through API

We opened up the event-dispatching mechanism through the API, to allow you to dispatch custom events to any listeners that are registered in the engine. It’s recommended (although not enforced) to only dispatch FlowableEvents with type CUSTOM. Dispatching the event can be done using the RuntimeService:

/**
 * Dispatches the given event to any listeners that are registered.
 * @param event event to dispatch.
 *
 * @throws FlowableException if an exception occurs when dispatching the event or
 * when the {@link FlowableEventDispatcher} is disabled.
 * @throws FlowableIllegalArgumentException when the given event is not suitable for dispatching.
 */
void dispatchEvent(FlowableEvent event);

3.18.6. Supported event types

Listed below are all event types that can occur in the engine. Each type corresponds to an enum value in the org.flowable.engine.common.api.delegate.event.FlowableEventType.

Table 1. Supported events (each entry below lists the event name, its description and the dispatched event classes)

ENGINE_CREATED

The process-engine this listener is attached to has been created and is ready for API-calls.

org.flowable…​FlowableEvent

ENGINE_CLOSED

The process-engine this listener is attached to has been closed. API-calls to the engine are no longer possible.

org.flowable…​FlowableEvent

ENTITY_CREATED

A new entity is created. The new entity is contained in the event.

org.flowable…​FlowableEntityEvent

ENTITY_INITIALIZED

A new entity has been created and is fully initialized. If any children are created as part of the creation of an entity, this event will be fired AFTER the creation/initialization of the child entities, as opposed to the ENTITY_CREATED event.

org.flowable…​FlowableEntityEvent

ENTITY_UPDATED

An existing entity is updated. The updated entity is contained in the event.

org.flowable…​FlowableEntityEvent

ENTITY_DELETED

An existing entity is deleted. The deleted entity is contained in the event.

org.flowable…​FlowableEntityEvent

ENTITY_SUSPENDED

An existing entity is suspended. The suspended entity is contained in the event. Will be dispatched for ProcessDefinitions, ProcessInstances and Tasks.

org.flowable…​FlowableEntityEvent

ENTITY_ACTIVATED

An existing entity is activated. The activated entity is contained in the event. Will be dispatched for ProcessDefinitions, ProcessInstances and Tasks.

org.flowable…​FlowableEntityEvent

JOB_EXECUTION_SUCCESS

A job has been executed successfully. The event contains the job that was executed.

org.flowable…​FlowableEntityEvent

JOB_EXECUTION_FAILURE

The execution of a job has failed. The event contains the job that was executed and the exception.

org.flowable…​FlowableEntityEvent and org.flowable…​FlowableExceptionEvent

JOB_RETRIES_DECREMENTED

The number of job retries have been decremented due to a failed job. The event contains the job that was updated.

org.flowable…​FlowableEntityEvent

TIMER_SCHEDULED

A timer job has been created and is scheduled for being executed at a future point in time.

org.flowable…​FlowableEntityEvent

TIMER_FIRED

A timer has been fired. The event contains the job that was executed.

org.flowable…​FlowableEntityEvent

JOB_CANCELED

A job has been canceled. The event contains the job that was canceled. A job can be canceled by an API call, when a task completes and an associated boundary timer is canceled, or when a new process definition is deployed.

org.flowable…​FlowableEntityEvent

ACTIVITY_STARTED

An activity is starting to execute

org.flowable…​FlowableActivityEvent

ACTIVITY_COMPLETED

An activity is completed successfully

org.flowable…​FlowableActivityEvent

ACTIVITY_CANCELLED

An activity is going to be canceled. There can be three causes for activity cancellation: a message event (MessageEventSubscriptionEntity), a signal event (SignalEventSubscriptionEntity) or a timer (TimerEntity).

org.flowable…​FlowableActivityCancelledEvent

ACTIVITY_SIGNALED

An activity received a signal

org.flowable…​FlowableSignalEvent

ACTIVITY_MESSAGE_RECEIVED

An activity received a message. Dispatched before the activity receives the message. When received, an ACTIVITY_SIGNALED or ACTIVITY_STARTED event will be dispatched for this activity, depending on the type (boundary-event or event-subprocess start-event).

org.flowable…​FlowableMessageEvent

ACTIVITY_MESSAGE_WAITING

An activity has created a message event subscription and is waiting to receive.

org.flowable…​FlowableMessageEvent

ACTIVITY_MESSAGE_CANCELLED

An activity for which a message event subscription has been created is canceled and thus receiving the message will not trigger this particular message anymore.

org.flowable…​FlowableMessageEvent

ACTIVITY_ERROR_RECEIVED

An activity has received an error event. Dispatched before the actual error has been handled by the activity. The event’s activityId contains a reference to the error-handling activity. This event will be followed by either an ACTIVITY_SIGNALED or an ACTIVITY_COMPLETED event for the involved activity, if the error was delivered successfully.

org.flowable…​FlowableErrorEvent

UNCAUGHT_BPMN_ERROR

An uncaught BPMN error has been thrown. The process did not have any handlers for that specific error. The event’s activityId will be empty.

org.flowable…​FlowableErrorEvent

ACTIVITY_COMPENSATE

An activity is about to be compensated. The event contains the id of the activity that will be executed for compensation.

org.flowable…​FlowableActivityEvent

VARIABLE_CREATED

A variable has been created. The event contains the variable name, value and related execution and task (if any).

org.flowable…​FlowableVariableEvent

VARIABLE_UPDATED

An existing variable has been updated. The event contains the variable name, updated value and related execution and task (if any).

org.flowable…​FlowableVariableEvent

VARIABLE_DELETED

An existing variable has been deleted. The event contains the variable name, last known value and related execution and task (if any).

org.flowable…​FlowableVariableEvent

TASK_ASSIGNED

A task has been assigned to a user. The event contains the task.

org.flowable…​FlowableEntityEvent

TASK_CREATED

A task has been created. This is dispatched after the ENTITY_CREATED event. If the task is part of a process, this event will be fired before the task listeners are executed.

org.flowable…​FlowableEntityEvent

TASK_COMPLETED

A task has been completed. This is dispatched before the ENTITY_DELETED event. If the task is part of a process, this event will be fired before the process has moved on and will be followed by an ACTIVITY_COMPLETED event, targeting the activity that represents the completed task.

org.flowable…​FlowableEntityEvent

PROCESS_CREATED

A process instance has been created. All basic properties have been set, but the variables have not been set yet.

org.flowable…​FlowableEntityEvent

PROCESS_STARTED

A process instance has been started. Dispatched when starting a process instance previously created. The event PROCESS_STARTED is dispatched after the associated event ENTITY_INITIALIZED and after the variables have been set.

org.flowable…​FlowableEntityEvent

PROCESS_COMPLETED

A process has been completed. Dispatched after the last activity's ACTIVITY_COMPLETED event. A process is completed when the process instance reaches a state in which there are no transitions left to take.

org.flowable…​FlowableEntityEvent

PROCESS_CANCELLED

A process has been canceled. Dispatched before the process instance is deleted from the runtime. A process instance is canceled by the API call RuntimeService.deleteProcessInstance.

org.flowable…​FlowableCancelledEvent

MEMBERSHIP_CREATED

A user has been added to a group. The event contains the ids of the user and group involved.

org.flowable…​FlowableMembershipEvent

MEMBERSHIP_DELETED

A user has been removed from a group. The event contains the ids of the user and group involved.

org.flowable…​FlowableMembershipEvent

MEMBERSHIPS_DELETED

All members will be removed from a group. The event is thrown before the members are removed, so they are still accessible. No individual MEMBERSHIP_DELETED events will be thrown if all members are deleted at once, for performance reasons.

org.flowable…​FlowableMembershipEvent

All ENTITY_* events are related to entities inside the engine. The list below shows an overview of which entity-events are dispatched for which entities:

  • ENTITY_CREATED, ENTITY_INITIALIZED, ENTITY_DELETED: Attachment, Comment, Deployment, Execution, Group, IdentityLink, Job, Model, ProcessDefinition, ProcessInstance, Task, User.

  • ENTITY_UPDATED: Attachment, Deployment, Execution, Group, IdentityLink, Job, Model, ProcessDefinition, ProcessInstance, Task, User.

  • ENTITY_SUSPENDED, ENTITY_ACTIVATED: ProcessDefinition, ProcessInstance/Execution, Task.

3.18.7. Additional remarks

Listeners are only notified for events dispatched from the engine they are registered with. So if you have different engines - running against the same database - only events that originated in the engine the listener is registered to are dispatched to that listener. The events that occur in other engines are not dispatched to the listeners, regardless of whether they are running in the same JVM or not.

Certain event-types (related to entities) expose the targeted entity. Depending on the type of event, these entities cannot be updated anymore (for example, when the entity is deleted). If possible, use the EngineServices exposed by the event to interact with the engine in a safe way. Even then, you need to be cautious with updates/operations on entities that are involved in the dispatched event.

No entity-events are dispatched related to history, as they all have a runtime counterpart that dispatches its events.

4. The Flowable API

4.1. The Process Engine API and services

The engine API is the most common way of interacting with Flowable. The main starting point is the ProcessEngine, which can be created in several ways as described in the configuration section. From the ProcessEngine, you can obtain the various services that contain the workflow/BPM methods. ProcessEngine and the services objects are thread safe, so you can keep a reference to one of those for a whole server.

[Image: api.services]
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();
RuntimeService runtimeService = processEngine.getRuntimeService();
RepositoryService repositoryService = processEngine.getRepositoryService();
TaskService taskService = processEngine.getTaskService();
ManagementService managementService = processEngine.getManagementService();
IdentityService identityService = processEngine.getIdentityService();
HistoryService historyService = processEngine.getHistoryService();
FormService formService = processEngine.getFormService();
DynamicBpmnService dynamicBpmnService = processEngine.getDynamicBpmnService();

ProcessEngines.getDefaultProcessEngine() will initialize and build a process engine the first time it is called and afterwards always return the same process engine. Proper creation and closing of all process engines can be done with ProcessEngines.init() and ProcessEngines.destroy().

The ProcessEngines class will scan for all flowable.cfg.xml and flowable-context.xml files. For all flowable.cfg.xml files, the process engine will be built in the typical Flowable way: ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(inputStream).buildProcessEngine(). For all flowable-context.xml files, the process engine will be built in the Spring way: first the Spring application context is created and then the process engine is obtained from that application context.

All services are stateless. This means that you can easily run Flowable on multiple nodes in a cluster, each going to the same database, without having to worry about which machine actually executed previous calls. Any call to any service is idempotent regardless of where it is executed.

The RepositoryService is probably the first service needed when working with the Flowable engine. This service offers operations for managing and manipulating deployments and process definitions. Without going into much detail here, a process definition is a Java counterpart of the BPMN 2.0 process. It is a representation of the structure and behavior of each of the steps of a process. A deployment is the unit of packaging within the Flowable engine. A deployment can contain multiple BPMN 2.0 XML files and any other resource. The choice of what is included in one deployment is up to the developer. It can range from a single process BPMN 2.0 XML file to a whole package of processes and relevant resources (for example, the deployment hr-processes could contain everything related to HR processes). The RepositoryService can deploy such packages. Deploying a deployment means it is uploaded to the engine, where all processes are inspected and parsed before being stored in the database. From that point on, the deployment is known to the system and any process included in the deployment can now be started.
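
For example, deploying a single BPMN 2.0 XML file could look like this (the deployment name and resource path are made up for illustration):

repositoryService.createDeployment()
    .name("hr-processes")
    .addClasspathResource("org/example/hr/vacation-request.bpmn20.xml")
    .deploy();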

Furthermore, this service allows you to:

  • Query on deployments and process definitions known to the engine.

  • Suspend and activate deployments as a whole or specific process definitions. Suspending means no further operations can be performed on them, while activation is the opposite and enables operations again.

  • Retrieve various resources, such as files contained within the deployment or process diagrams that were auto-generated by the engine.

  • Retrieve a POJO version of the process definition, which can be used to introspect the process using Java rather than XML.

While the RepositoryService is mostly about static information (data that doesn’t change, or at least not a lot), the RuntimeService is quite the opposite. It deals with starting new process instances of process definitions. As said above, a process definition defines the structure and behavior of the different steps in a process. A process instance is one execution of such a process definition. For each process definition there typically are many instances running at the same time. The RuntimeService also is the service which is used to retrieve and store process variables. This is data that is specific to the given process instance and can be used by various constructs in the process (for example, an exclusive gateway often uses process variables to determine which path is chosen to continue the process). The Runtimeservice also allows you to query on process instances and executions. Executions are a representation of the 'token' concept of BPMN 2.0. Basically, an execution is a pointer to where the process instance currently is. Lastly, the RuntimeService is used whenever a process instance is waiting for an external trigger and the process needs to be continued. A process instance can have various wait states and this service contains various operations to signal to the instance that the external trigger is received and the process instance can be continued.
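
A small sketch of this in code (the process definition key, variable names and activity id are made up for illustration):

Map<String, Object> variables = new HashMap<>();
variables.put("amount", 1000);

// start a new process instance of the 'vacationRequest' process definition
ProcessInstance processInstance =
    runtimeService.startProcessInstanceByKey("vacationRequest", variables);

// later, when the external trigger arrives, find the waiting execution and continue it
Execution execution = runtimeService.createExecutionQuery()
    .processInstanceId(processInstance.getId())
    .activityId("waitForApproval")
    .singleResult();
runtimeService.trigger(execution.getId());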

Tasks that need to be performed by human users of the system are core to a BPM engine such as Flowable. Everything around tasks is grouped in the TaskService, such as:

  • Querying tasks assigned to users or groups

  • Creating new standalone tasks. These are tasks that are not related to a process instance.

  • Manipulating which user a task is assigned to, or which users are in some way involved with the task.

  • Claiming and completing a task. Claiming means that someone decided to be the assignee for the task, meaning that this user will complete the task. Completing means doing the work of the task; typically, this is filling in a form of some sort (a short sketch follows this list).
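
A minimal sketch of querying, claiming and completing a task (the group and user ids are made up for illustration):

// find a task that members of the 'sales' group are candidates for
Task task = taskService.createTaskQuery()
    .taskCandidateGroup("sales")
    .listPage(0, 1)
    .get(0);

// claim it for a specific user and complete it
taskService.claim(task.getId(), "kermit");
taskService.complete(task.getId());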

The IdentityService is pretty simple. It supports the management (creation, update, deletion, querying, …​) of groups and users. It is important to understand that Flowable actually doesn’t do any checking on users at runtime. For example, a task could be assigned to any user, but the engine doesn’t verify whether that user is known to the system. This is because the Flowable engine can also be used in conjunction with services such as LDAP, Active Directory, and so on.
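
A short sketch of creating a user, a group and a membership between them (the ids are made up for illustration):

User user = identityService.newUser("kermit");
user.setFirstName("Kermit");
user.setLastName("The Frog");
identityService.saveUser(user);

Group group = identityService.newGroup("sales");
group.setName("Sales");
identityService.saveGroup(group);

identityService.createMembership("kermit", "sales");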

The FormService is an optional service, meaning that Flowable can be used perfectly well without it, without sacrificing any functionality. This service introduces the concept of a start form and a task form. A start form is a form that is shown to the user before the process instance is started, while a task form is the form that is displayed when a user wants to complete a task. Flowable allows the specification of these forms in the BPMN 2.0 process definition. This service exposes this data in a way that is easy to work with. But again, this is optional, as forms don’t need to be embedded in the process definition.

The HistoryService exposes all historical data gathered by the Flowable engine. When executing processes, a lot of data can be kept by the engine (this is configurable), such as process instance start times, who did which tasks, how long it took to complete the tasks, which path was followed in each process instance, and so on. This service exposes mainly query capabilities to access this data.
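
For example, querying for finished process instances of a particular process definition could look like this (the key is made up for illustration):

List<HistoricProcessInstance> finishedInstances = historyService
    .createHistoricProcessInstanceQuery()
    .processDefinitionKey("vacationRequest")
    .finished()
    .orderByProcessInstanceStartTime().desc()
    .list();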

The ManagementService is typically not needed when coding custom applications with Flowable. It allows the retrieval of information about the database tables and table metadata. Furthermore, it exposes query capabilities and management operations for jobs. Jobs are used in Flowable for various things, such as timers, asynchronous continuations, delayed suspension/activation, and so on. Later on, these topics will be discussed in more detail.

The DynamicBpmnService can be used to change part of the process definition without needing to redeploy it. You can, for example, change the assignee definition for a user task in a process definition, or change the class name of a service task.
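
A sketch of what that can look like (the user task key, the assignee and the processDefinitionId variable are assumptions for illustration; check the DynamicBpmnService javadocs of your version for the exact methods):

ObjectNode infoNode = dynamicBpmnService.changeUserTaskAssignee("approveTask", "kermit");
dynamicBpmnService.saveProcessDefinitionInfo(processDefinitionId, infoNode);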

For more detailed information on the service operations and the engine API, see the javadocs.

4.2. Exception strategy

The base exception in Flowable is the org.flowable.engine.FlowableException, an unchecked exception. This exception can be thrown at all times by the API, but expected exceptions that happen in specific methods are documented in the javadocs. For example, an extract from TaskService:

/**
 * Called when the task is successfully executed.
 * @param taskId the id of the task to complete, cannot be null.
 * @throws FlowableObjectNotFoundException when no task exists with the given id.
 */
void complete(String taskId);

In the example above, when an id is passed for which no task exists, an exception will be thrown. Also, since the javadoc explicitly states that taskId cannot be null, a FlowableIllegalArgumentException will be thrown when null is passed.

Even though we want to avoid a big exception hierarchy, the following subclasses are thrown in specific cases. All other errors that occur during process-execution or API-invocation that don’t fit into the possible exceptions below are thrown as regular FlowableExceptions.

  • FlowableWrongDbException: Thrown when the Flowable engine discovers a mismatch between the database schema version and the engine version.

  • FlowableOptimisticLockingException: Thrown when an optimistic locking conflict occurs in the data store, caused by concurrent access to the same data entry.

  • FlowableClassLoadingException: Thrown when a class requested to load was not found or when an error occurred while loading it (e.g. JavaDelegates, TaskListeners, …​).

  • FlowableObjectNotFoundException: Thrown when an object that is requested or actioned does not exist.

  • FlowableIllegalArgumentException: An exception indicating that an illegal argument has been supplied in a Flowable API-call, an illegal value has been configured in the engine’s configuration, or an illegal value is used in a process-definition.

  • FlowableTaskAlreadyClaimedException: Thrown when a task is already claimed, when the taskService.claim(…​) is called.

4.3. Query API

There are two ways of querying data from the engine: the query API and native queries. The Query API allows you to program completely typesafe queries with a fluent API. You can add various conditions to your queries (all of which are applied together as a logical AND) and precisely one ordering. The following code shows an example:

List<Task> tasks = taskService.createTaskQuery()
    .taskAssignee("kermit")
    .processVariableValueEquals("orderId", "0815")
    .orderByDueDate().asc()
    .list();

Sometimes you need more powerful queries, for example, queries using an OR operator or restrictions you cannot express using the Query API. For these cases, we have native queries, which allow you to write your own SQL queries. The return type is defined by the Query object you use and the data is mapped into the correct objects (Task, ProcessInstance, Execution, …​). Since the query will be fired at the database you have to use table and column names as they are defined in the database; this requires some knowledge about the internal data structure and it is recommended to use native queries with care. The table names can be retrieved through the API to keep the dependency as small as possible.

List<Task> tasks = taskService.createNativeTaskQuery()
    .sql("SELECT T.* FROM " + managementService.getTableName(Task.class) + " T WHERE T.NAME_ = #{taskName}")
    .parameter("taskName", "gonzoTask")
    .list();

long count = taskService.createNativeTaskQuery()
    .sql("SELECT count(*) FROM " + managementService.getTableName(Task.class) + " T1, "
        + managementService.getTableName(VariableInstanceEntity.class) + " V1 WHERE V1.TASK_ID_ = T1.ID_")
    .count();

4.4. Variables

Every process instance needs and uses data to execute the steps it’s made up of. In Flowable, this data is called variables, which are stored in the database. Variables can be used in expressions (for example, to select the correct outgoing sequence flow in an exclusive gateway), in Java service tasks when calling external services (for example to provide the input or store the result of the service call), and so on.

A process instance can have variables (called process variables), but executions (which are specific pointers to where the process is active) and user tasks can also have variables. A process instance can have any number of variables. Each variable is stored in a row in the ACT_RU_VARIABLE database table.

All of the startProcessInstanceXXX methods have an optional parameter to provide the variables when the process instance is created and started. For example, from the RuntimeService:

ProcessInstance startProcessInstanceByKey(String processDefinitionKey, Map<String, Object> variables);

Variables can be added during process execution. For example, (RuntimeService):

void setVariable(String executionId, String variableName, Object value);
void setVariableLocal(String executionId, String variableName, Object value);
void setVariables(String executionId, Map<String, ? extends Object> variables);
void setVariablesLocal(String executionId, Map<String, ? extends Object> variables);

Note that variables can be set local for a given execution (remember, a process instance consists of a tree of executions). The variable will only be visible on that execution and not higher in the tree of executions. This can be useful if data shouldn’t be propagated to the process instance level, or the variable has a new value for a certain path in the process instance (for example, when using parallel paths).

Variables can also be retrieved, as shown below. Note that similar methods exist on the TaskService. This means that tasks, like executions, can have local variables that are alive just for the duration of the task.

Map<String, Object> getVariables(String executionId);
Map<String, Object> getVariablesLocal(String executionId);
Map<String, Object> getVariables(String executionId, Collection<String> variableNames);
Map<String, Object> getVariablesLocal(String executionId, Collection<String> variableNames);
Object getVariable(String executionId, String variableName);
<T> T getVariable(String executionId, String variableName, Class<T> variableClass);

Variables are often used in Java delegates, expressions, execution or task listeners, scripts, and so on. In those constructs, the current execution or task object is available and it can be used for variable setting and/or retrieval. The simplest methods are these:

execution.getVariables();
execution.getVariables(Collection<String> variableNames);
execution.getVariable(String variableName);

execution.setVariables(Map<String, Object> variables);
execution.setVariable(String variableName, Object value);

Note that a variant with local is also available for all of the above.

For historical (and backwards-compatibility) reasons, when doing any of the calls above, behind the scenes all variables will be fetched from the database. This means that if you have 10 variables, but only get one through getVariable("myVariable"), behind the scenes the other 9 will be fetched and cached. This is not necessarily bad, as subsequent calls will not hit the database again. For example, when your process definition has three sequential service tasks (and thus one database transaction), using one call to fetch all variables in the first service task might be better than fetching the variables needed in each service task separately. Note that this applies both for getting and setting variables.

Of course, when using a lot of variables, or simply when you want tight control over the database query and traffic, this is not appropriate. To give tighter control, additional methods have been introduced that take an optional parameter telling the engine whether or not to fetch and cache all variables:

Map<String, Object> getVariables(Collection<String> variableNames, boolean fetchAllVariables);
Object getVariable(String variableName, boolean fetchAllVariables);
void setVariable(String variableName, Object value, boolean fetchAllVariables);

When using true for the parameter fetchAllVariables, the behavior will be exactly as described above: when getting or setting a variable, all other variables will be fetched and cached.

However, when using false as value, a specific query will be used and no other variables will be fetched or cached. Only the value of the variable in question here will be cached for subsequent use.
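
For example, inside a JavaDelegate this could look like the following minimal sketch (the variable names and class are made up for illustration):

public class LookupOrderDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) {
    // fetch exactly one variable, without pulling all other variables into the cache
    String orderId = (String) execution.getVariable("orderId", false);

    // write a single variable, again without fetching and caching all other variables
    execution.setVariable("orderStatus", "PROCESSED: " + orderId, false);
  }
}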

4.5. Transient variables

Transient variables are variables that behave like regular variables, but are not persisted. Typically, transient variables are used for advanced use cases. When in doubt, use a regular process variable.

The following applies for transient variables:

  • There is no history stored at all for transient variables.

  • Like regular variables, transient variables are put on the highest parent when set. This means that when setting a variable on an execution, the transient variable is actually stored on the process instance execution. Like regular variables, a local variant of the method exists if the variable is set on the specific execution or task.

  • A transient variable can only be accessed before the next wait state in the process definition. After that, they are gone. Here, the wait state means the point in the process instance where it is persisted to the data store. Note that an async activity is also a wait state in this definition!

  • Transient variables can only be set by the setTransientVariable(name, value) methods, but they are also returned when calling getVariable(name) (a getTransientVariable(name) also exists that only checks the transient variables). The reason for this is to make writing expressions easy, so that existing logic using variables works for both types.

  • A transient variable shadows a persistent variable with the same name. This means that when both a persistent and transient variable is set on a process instance and getVariable("someVariable") is called, the transient variable value will be returned.

You can set and get transient variables in most places where regular variables are exposed:

  • On DelegateExecution in JavaDelegate implementations

  • On DelegateExecution in ExecutionListener implementations and on DelegateTask on TaskListener implementations

  • In script task via the execution object

  • When starting a process instance through the runtime service

  • When completing a task

  • When calling the runtimeService.trigger method

The methods follow the naming convention of the regular process variables:

void setTransientVariable(String variableName, Object variableValue);
void setTransientVariableLocal(String variableName, Object variableValue);
void setTransientVariables(Map<String, Object> transientVariables);
void setTransientVariablesLocal(Map<String, Object> transientVariables);

Object getTransientVariable(String variableName);
Object getTransientVariableLocal(String variableName);
Map<String, Object> getTransientVariables();
Map<String, Object> getTransientVariablesLocal();

void removeTransientVariable(String variableName);
void removeTransientVariableLocal(String variableName);

The following BPMN diagram shows a typical example:

[Image: api.transient.variable.example]

Let’s assume the Fetch Data service task calls some remote service (for example, using REST). Let’s also assume some configuration parameters are needed and need to be provided when starting the process instance. Also, these configuration parameters are not important for historical audit purposes, so we pass them as transient variables:

ProcessInstance processInstance = runtimeService.createProcessInstanceBuilder()
    .processDefinitionKey("someKey")
    .transientVariable("configParam01", "A")
    .transientVariable("configParam02", "B")
    .transientVariable("configParam03", "C")
    .start();

Note that the transient variables will be available until the user task is reached and persisted to the database. For example, in the Additional Work user task they are no longer available. Also note that if Fetch Data had been asynchronous, they wouldn’t be available after that step either.

The Fetch Data (simplified) could be something like:

public static class FetchDataServiceTask implements JavaDelegate {

  public void execute(DelegateExecution execution) {
    String configParam01 = (String) execution.getVariable("configParam01");
    // ...

    RestResponse restResponse = executeRestCall();
    execution.setTransientVariable("response", restResponse.getBody());
    execution.setTransientVariable("status", restResponse.getStatus());
  }
}

The Process Data service task would get the response transient variable, parse it and store the relevant data in real process variables, as we need them later.

The condition on the sequence flow leaving the exclusive gateway is oblivious to whether persistent or transient variables are used (in this case, the status transient variable):

<conditionExpression xsi:type="tFormalExpression">${status == 200}</conditionExpression>

4.6. Expressions

Flowable uses UEL for expression-resolving. UEL stands for Unified Expression Language and is part of the EE6 specification (see the EE6 specification for detailed information).

Expressions can be used in, for example, Java Service tasks, Execution Listeners, Task Listeners and Conditional sequence flows. Although there are two types of expressions, value-expression and method-expression, Flowable abstracts this so they can both be used where an expression is expected.

  • Value expression: resolves to a value. By default, all process variables are available to use. Also, all spring-beans (if using Spring) are available to use in expressions. Some examples:

${myVar}
${myBean.myProperty}
  • Method expression: invokes a method with or without parameters. When invoking a method without parameters, be sure to add empty parentheses after the method-name (as this distinguishes the expression from a value expression). The passed parameters can be literal values or expressions that are resolved themselves. Examples:

${printer.print()}
${myBean.addNewOrder('orderName')}
${myBean.doSomething(myVar, execution)}

Note that these expressions support resolving primitives (including comparing them), beans, lists, arrays and maps.

On top of all process variables, there are a few default objects available that can be used in expressions (a short example follows this list):

  • execution: The DelegateExecution holds additional information about the ongoing execution.

  • task: The DelegateTask holds additional information about the current Task. Note: Only works in expressions evaluated from task listeners.

  • authenticatedUserId: The id of the user that is currently authenticated. If no user is authenticated, the variable is not available.
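
For instance, a user task could be assigned to the currently authenticated user, and a sequence flow condition could call a method on the execution. The element ids and the amount variable below are made up for illustration, and the assignee expression assumes a user is authenticated when the task is created:

<userTask id="reviewTask" name="Review request" flowable:assignee="${authenticatedUserId}" />

<sequenceFlow id="toEscalation" sourceRef="decision" targetRef="escalate">
  <conditionExpression xsi:type="tFormalExpression">${execution.getVariable('amount') > 1000}</conditionExpression>
</sequenceFlow>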

4.7. Unit testing

Business processes are an integral part of software projects and they should be tested in the same way normal application logic is tested: with unit tests. Since Flowable is an embeddable Java engine, writing unit tests for business processes is as simple as writing regular unit tests.

Flowable supports both JUnit versions 3 and 4 styles of unit testing. In the JUnit 3 style, the org.flowable.engine.test.FlowableTestCase must be extended. This will make the ProcessEngine and the services available through protected member fields. In the setUp() of the test, the processEngine will be initialized by default with the flowable.cfg.xml resource on the classpath. To specify a different configuration file, override the getConfigurationResource() method. Process engines are cached statically over multiple unit tests when the configuration resource is the same.

By extending FlowableTestCase, you can annotate test methods with org.flowable.engine.test.Deployment. Before the test is run, a resource file of the form testClassName.testMethod.bpmn20.xml in the same package as the test class, will be deployed. At the end of the test, the deployment will be deleted, including all related process instances, tasks, and so on. The Deployment annotation also supports setting the resource location explicitly. See the class itself for more information.

Taking all that into account, a JUnit 3 style test looks as follows.

public class MyBusinessProcessTest extends FlowableTestCase {

  @Deployment
  public void testSimpleProcess() {
    runtimeService.startProcessInstanceByKey("simpleProcess");

    Task task = taskService.createTaskQuery().singleResult();
    assertEquals("My Task", task.getName());

    taskService.complete(task.getId());
    assertEquals(0, runtimeService.createProcessInstanceQuery().count());
  }
}

To get the same functionality when using the JUnit 4 style of writing unit tests, the org.flowable.engine.test.FlowableRule Rule must be used. Through this rule, the process engine and services are available through getters. As with the FlowableTestCase (see above), including this Rule will enable the use of the org.flowable.engine.test.Deployment annotation (see above for an explanation of its use and configuration) and it will look for the default configuration file on the classpath. Process engines are statically cached over multiple unit tests when using the same configuration resource.

The following code snippet shows an example of using the JUnit 4 style of testing and the usage of the FlowableRule.

public class MyBusinessProcessTest {

  @Rule
  public FlowableRule flowableRule = new FlowableRule();

  @Test
  @Deployment
  public void ruleUsageExample() {
    RuntimeService runtimeService = flowableRule.getRuntimeService();
    runtimeService.startProcessInstanceByKey("ruleUsage");

    TaskService taskService = flowableRule.getTaskService();
    Task task = taskService.createTaskQuery().singleResult();
    assertEquals("My Task", task.getName());

    taskService.complete(task.getId());
    assertEquals(0, runtimeService.createProcessInstanceQuery().count());
  }
}

4.8. Debugging unit tests

When using the in-memory H2 database for unit tests, the following instructions allow you to easily inspect the data in the Flowable database during a debugging session. The screenshots here are taken in Eclipse, but the mechanism should be similar for other IDEs.

Suppose we have put a breakpoint somewhere in our unit test (in Eclipse this is done by double-clicking in the left border next to the code):

[Image: api.test.debug.breakpoint]

If we now run the unit test in debug mode (right-click in test class, select Run as and then JUnit test), the test execution halts at our breakpoint, where we can now inspect the variables of our test as shown in the right upper panel.

[Image: api.test.debug.view]

To inspect the Flowable data, open up the 'Display' window (if this window isn’t there, open Window→Show View→Other and select Display) and type (code completion is available) org.h2.tools.Server.createWebServer("-web").start()

[Image: api.test.debug.start.h2.server]

Select the line you’ve just typed and right-click on it. Now select Display (or execute the shortcut instead of right-clicking).

[Image: api.test.debug.start.h2.server.2]

Now open up a browser and go to http://localhost:8082, and fill in the JDBC URL to the in-memory database (by default this is jdbc:h2:mem:flowable), and hit the connect button.

[Image: api.test.debug.h2.login]

You can now see the Flowable data and use it to understand how and why your unit test is executing your process in a certain way.

[Image: api.test.debug.h2.tables]

4.9. The process engine in a web application

The ProcessEngine is a thread-safe class and can easily be shared among multiple threads. In a web application, this means it is possible to create the process engine once when the container boots and shut down the engine when the container goes down.

The following code snippet shows how you can write a simple ServletContextListener to initialize and destroy process engines in a plain Servlet environment:

public class ProcessEnginesServletContextListener implements ServletContextListener {

  public void contextInitialized(ServletContextEvent servletContextEvent) {
    ProcessEngines.init();
  }

  public void contextDestroyed(ServletContextEvent servletContextEvent) {
    ProcessEngines.destroy();
  }
}

The contextInitialized method will delegate to ProcessEngines.init(). That will look for flowable.cfg.xml resource files on the classpath, and create a ProcessEngine for the given configurations (for example, multiple JARs with a configuration file). If you have multiple such resource files on the classpath, make sure they all have different names. When the process engine is needed, it can be fetched using:

ProcessEngines.getDefaultProcessEngine()

or

ProcessEngines.getProcessEngine("myName");

Of course, it is also possible to use any of the variants of creating a process engine, as described in the configuration section.

The contextDestroyed method of the context-listener delegates to ProcessEngines.destroy(). That will properly close all initialized process engines.

5. Spring integration

While you can definitely use Flowable without Spring, we’ve provided some very nice integration features that are explained in this chapter.

5.1. ProcessEngineFactoryBean

The ProcessEngine can be configured as a regular Spring bean. The starting point of the integration is the class org.flowable.spring.ProcessEngineFactoryBean. That bean takes a process engine configuration and creates the process engine. This means that the creation and configuration of properties for Spring is the same as documented in the configuration section. For Spring integration, the configuration and engine beans will look like this:

<bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
  ...
</bean>

<bean id="processEngine" class="org.flowable.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>

Note that the processEngineConfiguration bean now uses the org.flowable.spring.SpringProcessEngineConfiguration class.

5.2. Transactions

We’ll explain the SpringTransactionIntegrationTest found in the Spring examples of the distribution step by step. Below is the Spring configuration file that we use in this example (you can find it in SpringTransactionIntegrationTest-context.xml). The section shown below contains the dataSource, transactionManager, processEngine and the Flowable engine services.

When passing the DataSource to the SpringProcessEngineConfiguration (using property "dataSource"), Flowable uses a org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy internally, which wraps the passed DataSource. This is done to make sure the SQL connections retrieved from the DataSource and the Spring transactions play well together. This implies that it’s no longer necessary to proxy the dataSource yourself in Spring configuration, although it’s still possible to pass a TransactionAwareDataSourceProxy into the SpringProcessEngineConfiguration. In this case, no additional wrapping will occur.

Make sure when declaring a TransactionAwareDataSourceProxy in Spring configuration yourself that you don’t use it for resources that are already aware of Spring transactions (e.g. DataSourceTransactionManager and JPATransactionManager need the un-proxied dataSource).

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

  <bean id="dataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
    <property name="driverClass" value="org.h2.Driver" />
    <property name="url" value="jdbc:h2:mem:flowable;DB_CLOSE_DELAY=1000" />
    <property name="username" value="sa" />
    <property name="password" value="" />
  </bean>

  <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
  </bean>

  <bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
    <property name="databaseSchemaUpdate" value="true" />
    <property name="asyncExecutorActivate" value="false" />
  </bean>

  <bean id="processEngine" class="org.flowable.spring.ProcessEngineFactoryBean">
    <property name="processEngineConfiguration" ref="processEngineConfiguration" />
  </bean>

  <bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService" />
  <bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
  <bean id="taskService" factory-bean="processEngine" factory-method="getTaskService" />
  <bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService" />
  <bean id="managementService" factory-bean="processEngine" factory-method="getManagementService" />

  ...

The remainder of that Spring configuration file contains the beans and configuration that we’ll use in this particular example:

<beans>
  ...

  <tx:annotation-driven transaction-manager="transactionManager"/>

  <bean id="userBean" class="org.flowable.spring.test.UserBean">
    <property name="runtimeService" ref="runtimeService" />
  </bean>

  <bean id="printer" class="org.flowable.spring.test.Printer" />

</beans>

First, the application context is created using any of the ways supported by Spring. In this example, you could use a classpath XML resource to configure our Spring application context:

ClassPathXmlApplicationContext applicationContext = new ClassPathXmlApplicationContext(
    "org/flowable/examples/spring/SpringTransactionIntegrationTest-context.xml");

or, as it’s a test:

@ContextConfiguration(
    "classpath:org/flowable/spring/test/transaction/SpringTransactionIntegrationTest-context.xml")

Then we can get the service beans and invoke methods on them. The ProcessEngineFactoryBean will have added an extra interceptor to the services that applies Propagation.REQUIRED transaction semantics on the Flowable service methods. So, for example, we can use the repositoryService to deploy a process like this:

RepositoryService repositoryService = (RepositoryService) applicationContext.getBean("repositoryService");
String deploymentId = repositoryService
    .createDeployment()
    .addClasspathResource("org/flowable/spring/test/hello.bpmn20.xml")
    .deploy()
    .getId();

The other way around also works. In this case, the Spring transaction will be around the userBean.hello() method and the Flowable service method invocation will join that same transaction.

UserBean userBean = (UserBean) applicationContext.getBean("userBean");
userBean.hello();

The UserBean looks like this. Remember, from above in the Spring bean configuration, we injected the runtimeService into the userBean.

public class UserBean {

    /** injected by Spring */
    private RuntimeService runtimeService;

    @Transactional
    public void hello() {
        // here you can do transactional stuff in your domain model
        // and it will be combined in the same transaction as
        // the startProcessInstanceByKey to the Flowable RuntimeService
        runtimeService.startProcessInstanceByKey("helloProcess");
    }

    public void setRuntimeService(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }
}

5.3. Expressions

When using the ProcessEngineFactoryBean, all expressions in the BPMN processes will, by default, also see all the Spring beans. It's possible to limit the beans you want to expose in expressions (even to none) by configuring a map. The example below exposes a single bean (printer), available to use under the key "printer". To have no beans exposed at all, pass an empty map as the beans property on the SpringProcessEngineConfiguration. When no beans property is set, all Spring beans in the context will be available.

<bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
  ...
  <property name="beans">
    <map>
      <entry key="printer" value-ref="printer" />
    </map>
  </property>
</bean>

<bean id="printer" class="org.flowable.examples.spring.Printer" />

Now the exposed beans can be used in expressions: for example, the SpringTransactionIntegrationTest hello.bpmn20.xml shows how a method on a Spring bean can be invoked using a UEL method expression:

<definitions id="definitions"
    xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:flowable="http://flowable.org/bpmn"
    targetNamespace="Examples">

  <process id="helloProcess">

    <startEvent id="start" />
    <sequenceFlow id="flow1" sourceRef="start" targetRef="print" />

    <serviceTask id="print" flowable:expression="#{printer.printMessage()}" />

    <sequenceFlow id="flow2" sourceRef="print" targetRef="end" />
    <endEvent id="end" />

  </process>

</definitions>

Where Printer looks like this:

public class Printer {

    public void printMessage() {
        System.out.println("hello world");
    }
}

And the Spring bean configuration (also shown above) looks like this:

<beans>
  ...
  <bean id="printer" class="org.flowable.examples.spring.Printer" />
</beans>

5.4. Automatic resource deployment

Spring integration also has a special feature for deploying resources. In the process engine configuration, you can specify a set of resources. When the process engine is created, all those resources will be scanned and deployed. There is filtering in place that prevents duplicate deployments. Only when the resources have actually changed will new deployments be deployed to the Flowable DB. This makes sense in a lot of use cases, where the Spring container is rebooted frequently (for example, testing).

Here’s an example:

<bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
  ...
  <property name="deploymentResources"
            value="classpath*:/org/flowable/spring/test/autodeployment/autodeploy.*.bpmn20.xml" />
</bean>

<bean id="processEngine" class="org.flowable.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>

By default, the configuration above will group all of the resources matching the filter into a single deployment to the Flowable engine. The duplicate filtering to prevent re-deployment of unchanged resources applies to the whole deployment. In some cases, this may not be what you want. For instance, if you deploy a set of process resources this way and only a single process definition in those resources has changed, the deployment as a whole will be considered new and all of the process definitions in that deployment will be re-deployed, resulting in new versions of each of the process definitions, even though only one was actually changed.

To be able to customize the way deployments are determined, you can specify an additional property in the SpringProcessEngineConfiguration, deploymentMode. This property defines the way deployments will be determined from the set of resources that match the filter. There are 3 values that are supported by default for this property:

  • default: Group all resources into a single deployment and apply duplicate filtering to that deployment. This is the default value and it will be used if you don’t specify a value.

  • single-resource: Create a separate deployment for each individual resource and apply duplicate filtering to that deployment. This is the value you would use to have each process definition be deployed separately and only create a new process definition version if it has changed.

  • resource-parent-folder: Create a separate deployment for resources that share the same parent folder and apply duplicate filtering to that deployment. This value can be used to create separate deployments for most resources, but still be able to group some by placing them in a shared folder.

Here's an example of how to specify the single-resource configuration for deploymentMode:

<bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
  ...
  <property name="deploymentResources" value="classpath*:/flowable/*.bpmn" />
  <property name="deploymentMode" value="single-resource" />
</bean>

In addition to using the values listed above for deploymentMode, you may require custom behavior for determining deployments. If so, you can create a subclass of SpringProcessEngineConfiguration and override the getAutoDeploymentStrategy(String deploymentMode) method. This method determines which deployment strategy is used for a certain value of the deploymentMode configuration.
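A minimal sketch of such a subclass is shown below. The mode value and the custom strategy class are made up for illustration, and the exact package and signature of the AutoDeploymentStrategy interface should be checked against the Flowable version you are using:

public class CustomDeploymentModeProcessEngineConfiguration extends SpringProcessEngineConfiguration {

    @Override
    protected AutoDeploymentStrategy getAutoDeploymentStrategy(String mode) {
        // "custom-mode" is a hypothetical value you would set on the deploymentMode property
        if ("custom-mode".equals(mode)) {
            return new MyCustomAutoDeploymentStrategy();
        }
        // fall back to the default strategies (default, single-resource, resource-parent-folder)
        return super.getAutoDeploymentStrategy(mode);
    }
}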

5.5. Unit testing

When integrating with Spring, business processes can be tested very easily using the standard Flowable testing facilities. The following example shows how a business process is tested in a typical Spring-based unit test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:org/flowable/spring/test/junit4/springTypicalUsageTest-context.xml")
public class MyBusinessProcessTest {

    @Autowired
    private RuntimeService runtimeService;

    @Autowired
    private TaskService taskService;

    @Autowired
    @Rule
    public FlowableRule flowableSpringRule;

    @Test
    @Deployment
    public void simpleProcessTest() {
        runtimeService.startProcessInstanceByKey("simpleProcess");
        Task task = taskService.createTaskQuery().singleResult();
        assertEquals("My Task", task.getName());

        taskService.complete(task.getId());
        assertEquals(0, runtimeService.createProcessInstanceQuery().count());
    }
}

Note that for this to work, you need to define an org.flowable.engine.test.Flowable bean in the Spring configuration (which is injected by auto-wiring in the example above).

<bean id="flowableRule" class="org.flowable.engine.test.Flowable">
  <property name="processEngine" ref="processEngine" />
</bean>

5.6. JPA with Hibernate 4.2.x

When using Hibernate 4.2.x JPA in service task or listener logic in the Flowable engine, an additional dependency to Spring ORM is needed. This is not needed for Hibernate 4.1.x or earlier. The following dependency should be added:

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-orm</artifactId>
  <version>${org.springframework.version}</version>
</dependency>

5.7. Spring Boot

Spring Boot is an application framework which, according to its website, makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". It takes an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.

For more information on Spring Boot, see http://projects.spring.io/spring-boot/

The Spring Boot - Flowable integration has been developed together with Spring committers.

5.7.1. Compatibility

Spring Boot requires a JDK 7 runtime. Please check the Spring Boot documentation.

5.7.2. Getting started

Spring Boot is all about convention over configuration. To get started, simply add the flowable-spring-boot-starter-basic dependency to your project. For example, for Maven:

<dependency>
  <groupId>org.flowable</groupId>
  <artifactId>flowable-spring-boot-starter-basic</artifactId>
  <version>${flowable.version}</version>
</dependency>

That’s all that’s needed. This dependency will transitively add the correct Flowable and Spring dependencies to the classpath. You can now write the Spring Boot application:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

}

Flowable needs a database to store its data. If you run the code above, it will give you an informative exception message that you need to add a database driver dependency to the classpath. For now, add the H2 database dependency:

<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <version>1.4.183</version>
</dependency>

The application can now be started. You will see output like this:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.1.6.RELEASE)

MyApplication                            : Starting MyApplication on ...
s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@33cb5951: startup date [Wed Dec 17 15:24:34 CET 2014]; root of context hierarchy
a.s.b.AbstractProcessEngineConfiguration : No process definitions were found using the specified path (classpath:/processes/**.bpmn20.xml).
o.flowable.engine.impl.db.DbSqlSession   : performing create on engine with resource org/flowable/db/create/flowable.h2.create.engine.sql
o.flowable.engine.impl.db.DbSqlSession   : performing create on history with resource org/flowable/db/create/flowable.h2.create.history.sql
o.flowable.engine.impl.db.DbSqlSession   : performing create on identity with resource org/flowable/db/create/flowable.h2.create.identity.sql
o.a.engine.impl.ProcessEngineImpl        : ProcessEngine default created
o.a.e.i.a.DefaultAsyncJobExecutor        : Starting up the default async job executor [org.flowable.spring.SpringAsyncExecutor].
o.a.e.i.a.AcquireTimerJobsRunnable       : {} starting to acquire async jobs due
o.a.e.i.a.AcquireAsyncJobsDueRunnable    : {} starting to acquire async jobs due
o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
MyApplication                            : Started MyApplication in 2.019 seconds (JVM running for 2.294)

So, by just adding the dependency to the classpath and using the @EnableAutoConfiguration annotation, a lot has happened behind the scenes:

  • An in-memory datasource is created automatically (because the H2 driver is on the classpath) and passed to the Flowable process engine configuration

  • A Flowable ProcessEngine bean is created and exposed

  • All the Flowable services are exposed as Spring beans

  • The Spring Job Executor is created

Also, any BPMN 2.0 process definitions in the processes folder will be automatically deployed. Create a folder processes and add a dummy process definition (named one-task-process.bpmn20.xml) to this folder.

<?xml version="1.0" encoding="UTF-8"?>
<definitions
    xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:flowable="http://flowable.org/bpmn"
    targetNamespace="Examples">

  <process id="oneTaskProcess" name="The One Task Process">
    <startEvent id="theStart" />
    <sequenceFlow id="flow1" sourceRef="theStart" targetRef="theTask" />
    <userTask id="theTask" name="my task" />
    <sequenceFlow id="flow2" sourceRef="theTask" targetRef="theEnd" />
    <endEvent id="theEnd" />
  </process>

</definitions>

Also, add the following code to test whether the deployment actually worked. The CommandLineRunner is a special kind of Spring bean that is executed when the application boots:

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Bean
    public CommandLineRunner init(final RepositoryService repositoryService,
                                  final RuntimeService runtimeService,
                                  final TaskService taskService) {

        return new CommandLineRunner() {
            @Override
            public void run(String... strings) throws Exception {
                System.out.println("Number of process definitions : "
                    + repositoryService.createProcessDefinitionQuery().count());
                System.out.println("Number of tasks : " + taskService.createTaskQuery().count());
                runtimeService.startProcessInstanceByKey("oneTaskProcess");
                System.out.println("Number of tasks after process start: "
                    + taskService.createTaskQuery().count());
            }
        };
    }
}

The output expected will be:

Number of process definitions : 1
Number of tasks : 0
Number of tasks after process start : 1

5.7.3. Changing the database and connection pool

As stated above, Spring Boot is about convention over configuration. By default, by having only H2 on the classpath, it created an in-memory datasource and passed that to the Flowable process engine configuration.

To change the datasource, simply override the default by providing a DataSource bean. We're using the DataSourceBuilder class here, which is a helper class from Spring Boot. If Tomcat, HikariCP or Commons DBCP are on the classpath, one of them will be selected (in that order, with Tomcat first). For example, to switch to a MySQL database:

@Bean
public DataSource database() {
    return DataSourceBuilder.create()
        .url("jdbc:mysql://127.0.0.1:3306/flowable-spring-boot?characterEncoding=UTF-8")
        .username("flowable")
        .password("flowable")
        .driverClassName("com.mysql.jdbc.Driver")
        .build();
}

Remove H2 from the Maven dependencies and add the MySQL driver and the Tomcat connection pooling to the classpath:

<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.34</version>
</dependency>
<dependency>
  <groupId>org.apache.tomcat</groupId>
  <artifactId>tomcat-jdbc</artifactId>
  <version>8.0.15</version>
</dependency>

When the app is now booted up, you’ll see it uses MySQL as database (and the Tomcat connection pooling framework):

org.flowable.engine.impl.db.DbSqlSession   : performing create on engine with resource org/flowable/db/create/flowable.mysql.create.engine.sql
org.flowable.engine.impl.db.DbSqlSession   : performing create on history with resource org/flowable/db/create/flowable.mysql.create.history.sql
org.flowable.engine.impl.db.DbSqlSession   : performing create on identity with resource org/flowable/db/create/flowable.mysql.create.identity.sql

When you reboot the application multiple times, you’ll see the number of tasks go up (the H2 in-memory database does not survive a shutdown, MySQL does).

5.7.4. REST support

Often, a REST API is used on top of the embedded Flowable engine (interacting with the different services in a company). Spring Boot makes this really easy. Add the following dependency to the classpath:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <version>${spring.boot.version}</version>
</dependency>

Create a new class, a Spring service, with two methods: one to start our process and one to get the task list for a given assignee. Here we simply wrap the Flowable calls, but in real-life scenarios this will obviously be more complex.

@Service
public class MyService {

    @Autowired
    private RuntimeService runtimeService;

    @Autowired
    private TaskService taskService;

    @Transactional
    public void startProcess() {
        runtimeService.startProcessInstanceByKey("oneTaskProcess");
    }

    @Transactional
    public List<Task> getTasks(String assignee) {
        return taskService.createTaskQuery().taskAssignee(assignee).list();
    }

}

We can now create a REST endpoint by annotating a class with @RestController. Here, we simply delegate to the service defined above.

@RestController
public class MyRestController {

    @Autowired
    private MyService myService;

    @RequestMapping(value="/process", method= RequestMethod.POST)
    public void startProcessInstance() {
        myService.startProcess();
    }

    @RequestMapping(value="/tasks", method= RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
    public List<TaskRepresentation> getTasks(@RequestParam String assignee) {
        List<Task> tasks = myService.getTasks(assignee);
        List<TaskRepresentation> dtos = new ArrayList<TaskRepresentation>();
        for (Task task : tasks) {
            dtos.add(new TaskRepresentation(task.getId(), task.getName()));
        }
        return dtos;
    }

    static class TaskRepresentation {

        private String id;
        private String name;

        public TaskRepresentation(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() {
            return id;
        }
        public void setId(String id) {
            this.id = id;
        }
        public String getName() {
            return name;
        }
        public void setName(String name) {
            this.name = name;
        }

    }
}

Both the @Service and the @RestController will be found by the automatic component scan (@ComponentScan) we added to our application class. Run the application class again. We can now interact with the REST API, for example, by using cURL:

curl http://localhost:8080/tasks?assignee=kermit
[]

curl -X POST  http://localhost:8080/process
curl http://localhost:8080/tasks?assignee=kermit
[{"id":"10004","name":"my task"}]

5.7.5. JPA support

To add JPA support for Flowable in Spring Boot, add the following dependency:

<dependency>
  <groupId>org.flowable</groupId>
  <artifactId>flowable-spring-boot-starter-jpa</artifactId>
  <version>${flowable.version}</version>
</dependency>

This will add in the Spring configuration and beans for using JPA. By default, the JPA provider will be Hibernate.

Let’s create a simple Entity class:

@Entity
class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String username;
    private String firstName;
    private String lastName;
    private Date birthDate;

    public Person() {
    }

    public Person(String username, String firstName, String lastName, Date birthDate) {
        this.username = username;
        this.firstName = firstName;
        this.lastName = lastName;
        this.birthDate = birthDate;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public Date getBirthDate() {
        return birthDate;
    }

    public void setBirthDate(Date birthDate) {
        this.birthDate = birthDate;
    }
}

By default, when not using an in-memory database, the tables won't be created automatically. Create a file application.properties on the classpath and add the following property:

spring.jpa.hibernate.ddl-auto=update

Add the following interface:

public interface PersonRepository extends JpaRepository<Person, Long> {

    Person findByUsername(String username);

}

This is a Spring Data repository, which offers CRUD operations out of the box. We add a method to find a Person by username; Spring will automatically implement it based on conventions (in this case, the property name used).

We now enhance our service further:

  • By adding @Transactional to the class. Note that by adding the JPA dependency above, the DataSourceTransactionManager we were using before is now automatically swapped out for a JpaTransactionManager.

  • The startProcess method now gets an assignee username passed in, which is used to look up the Person and to put the Person JPA object as a process variable in the process instance.

  • A method to create demo users is added. It is used in the CommandLineRunner to populate the database.

@Service
@Transactional
public class MyService {

    @Autowired
    private RuntimeService runtimeService;

    @Autowired
    private TaskService taskService;

    @Autowired
    private PersonRepository personRepository;

    public void startProcess(String assignee) {

        Person person = personRepository.findByUsername(assignee);

        Map<String, Object> variables = new HashMap<String, Object>();
        variables.put("person", person);
        runtimeService.startProcessInstanceByKey("oneTaskProcess", variables);
    }

    public List<Task> getTasks(String assignee) {
        return taskService.createTaskQuery().taskAssignee(assignee).list();
    }

    public void createDemoUsers() {
        if (personRepository.findAll().size() == 0) {
            personRepository.save(new Person("jbarrez", "Joram", "Barrez", new Date()));
            personRepository.save(new Person("trademakers", "Tijs", "Rademakers", new Date()));
        }
    }

}

The CommandLineRunner now looks like:

@Bean
public CommandLineRunner init(final MyService myService) {

    return new CommandLineRunner() {
        public void run(String... strings) throws Exception {
            myService.createDemoUsers();
        }
    };
}

The RestController is also modified slightly to incorporate the changes above (only showing new methods) and the HTTP POST now has a body that contains the assignee username:

@RestController
public class MyRestController {

    @Autowired
    private MyService myService;

    @RequestMapping(value="/process", method= RequestMethod.POST)
    public void startProcessInstance(@RequestBody StartProcessRepresentation startProcessRepresentation) {
        myService.startProcess(startProcessRepresentation.getAssignee());
    }

   ...

    static class StartProcessRepresentation {

        private String assignee;

        public String getAssignee() {
            return assignee;
        }

        public void setAssignee(String assignee) {
            this.assignee = assignee;
        }
    }
}

And finally, to try out the Spring-JPA-Flowable integration, we assign the task using the ID of the Person JPA object in the process definition:

<userTask id="theTask" name="my task" flowable:assignee="${person.id}"/>

We can now start a new process instance, providing the user name in the POST body:

curl -H "Content-Type: application/json" -d '{"assignee" : "jbarrez"}' http://localhost:8080/process

And the task list is now fetched using the person ID:

curl http://localhost:8080/tasks?assignee=1

[{"id":"12505","name":"my task"}]

5.7.6. Further Reading

Obviously, there is a lot about Spring Boot that hasn’t been touched upon yet, like very easy JTA integration or building a WAR file that can be run on major application servers. And there is a lot more to the Spring Boot integration:

  • Actuator support

  • Spring Integration support

  • Rest API integration: boot up the Flowable Rest API embedded within the Spring application

  • Spring Security support

6. Deployment

6.1. Business archives

To deploy processes, they have to be wrapped in a business archive (BAR). A business archive is the unit of deployment to a Flowable engine. A business archive is equivalent to a ZIP file. It can contain BPMN 2.0 processes, form definitions, DMN rules and any other type of file. In general, a business archive contains a collection of named resources.

When a business archive is deployed, it is scanned for BPMN files with a .bpmn20.xml or .bpmn extension. Each of those will be processed and may contain multiple process definitions. When the DMN engine is activated, .dmn files are also parsed, and with the form engine activated, .form files are handled.

Java classes present in the business archive will not be added to the classpath. All custom classes used in process definitions in the business archive (for example, Java service tasks or event listener implementations) must be made available on the Flowable engine classpath in order to run the processes.

6.1.1. Deploying programmatically

Deploying a business archive from a ZIP file can be done like this:

String barFileName = "path/to/process-one.bar";
ZipInputStream inputStream = new ZipInputStream(new FileInputStream(barFileName));

repositoryService.createDeployment()
    .name("process-one.bar")
    .addZipInputStream(inputStream)
    .deploy();

It’s also possible to build a deployment from individual resources. See the javadocs for more details.
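For instance, a deployment built from individual classpath resources could look like the following minimal sketch (the deployment name and resource names are only illustrative):

repositoryService.createDeployment()
    .name("my-processes")
    // each resource is added individually to the same deployment
    .addClasspathResource("org/flowable/examples/process-one.bpmn20.xml")
    .addClasspathResource("org/flowable/examples/process-two.bpmn20.xml")
    .deploy();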

6.2. External resources

Process definitions live in the Flowable database. These process definitions can reference delegation classes when using Service Tasks or execution listeners or Spring beans from the Flowable configuration file. These classes and the Spring configuration file have to be available to all process engines that may execute the process definitions.

6.2.1. Java classes

All custom classes that are used in your process (for example, JavaDelegates used in Service Tasks or event-listeners, TaskListeners and so on) should be present on the engine’s classpath when an instance of the process is started.

During deployment of a business archive however, those classes don’t have to be present on the classpath. This means that your delegation classes don’t have to be on the classpath when deploying a new business archive with Ant, for example.

When you are using the demo setup and you want to add your custom classes, you should add a JAR containing your classes to the flowable-task or flowable-rest webapp lib. Don’t forget to include the dependencies of your custom classes (if any) as well. Alternatively, you can include your dependencies in the libraries directory of your Tomcat installation, ${tomcat.home}/lib.

6.2.2. Using Spring beans from a process

When expressions or scripts use Spring beans, those beans have to be available to the engine when executing the process definition. If you are building your own webapp and you configure your process engine in your context as described in the spring integration section, that is straightforward. But bear in mind that you also should update the Flowable task and rest webapps with that context if you use it.

6.2.3. Creating a single app

Instead of making sure that all process engines have all the delegation classes on their classpath and use the right Spring configuration, you may consider including the Flowable REST webapp inside your own webapp so that there is only a single ProcessEngine.

6.3. Versioning of process definitions

BPMN doesn’t have a notion of versioning. That is actually good, because the executable BPMN process file will probably live in a version control system repository (such as Subversion, Git or Mercurial) as part of your development project. However, versions of process definitions are created in the engine as part of deployment. During deployment, Flowable will assign a version to the ProcessDefinition before it is stored in the Flowable DB.

For each process definition in a business archive, the following steps are performed to initialize the properties key, version, name and id:

  • The process definition id attribute in the XML file is used as the process definition key property.

  • The process definition name attribute in the XML file is used as the process definition name property. If the name attribute is not specified, then the id attribute is used as the name.

  • The first time a process with a particular key is deployed, version 1 is assigned. For all subsequent deployments of process definitions with the same key, the version will be set 1 higher than the highest currently deployed version. The key property is used to distinguish process definitions.

  • The id property is set to {processDefinitionKey}:{processDefinitionVersion}:{generated-id}, where generated-id is a unique number added to guarantee uniqueness of the process ID for the process definition caches in a clustered environment.

Take, for example, the following process:

<definitions id="myDefinitions" >
  <process id="myProcess" name="My important process" >
    ...

When deploying this process definition, the process definition in the database will look like this:

id                   key            name                   version
myProcess:1:676      myProcess      My important process   1

Suppose we now deploy an updated version of the same process (for example, changing some user tasks), but the id of the process definition remains the same. The process definition table will now contain the following entries:

id                   key            name                   version
myProcess:1:676      myProcess      My important process   1
myProcess:2:870      myProcess      My important process   2

When the runtimeService.startProcessInstanceByKey("myProcess") is called, it will now use the process definition with version 2, as this is the latest version of the process definition.
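If you ever need to start an older version explicitly, you can look up that process definition and start it by ID instead of by key. A minimal sketch, assuming version 1 is still deployed:

ProcessDefinition version1 = repositoryService.createProcessDefinitionQuery()
    .processDefinitionKey("myProcess")
    .processDefinitionVersion(1)
    .singleResult();

// starts an instance of exactly this version, bypassing the "latest version" rule
runtimeService.startProcessInstanceById(version1.getId());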

Should we create a second process, as defined below and deploy this to Flowable, a third row will be added to the table.

<definitions id="myNewDefinitions" >
  <process id="myNewProcess" name="My important process" >
    ...

The table will look like this:

id                    key             name                   version
myProcess:1:676       myProcess       My important process   1
myProcess:2:870       myProcess       My important process   2
myNewProcess:1:1033   myNewProcess    My important process   1

Note how the key for the new process is different from our first process. Even though the name is the same (we should probably have changed that too), Flowable only considers the id attribute when distinguishing processes. The new process is therefore deployed with version 1.

6.4. Providing a process diagram

A process diagram image can be added to a deployment. This image will be stored in the Flowable repository and is accessible through the API. This image is also used to visualize the process in Flowable apps.

Suppose we have a process on our classpath, org/flowable/expenseProcess.bpmn20.xml that has a process key expense. The following naming conventions for the process diagram image apply (in this specific order):

  • If an image resource exists in the deployment that has a name of the BPMN 2.0 XML file name concatenated with the process key and an image suffix, this image is used. In our example, this would be org/flowable/expenseProcess.expense.png (or .jpg/.gif). In case you have multiple process definitions (and therefore multiple diagram images) defined in one BPMN 2.0 XML file, this approach makes most sense: each diagram image then has the process key in its file name.

  • If no such image exists, an image resource in the deployment matching the name of the BPMN 2.0 XML file is searched for. In our example, this would be org/flowable/expenseProcess.png. Note that this means that every process definition defined in the same BPMN 2.0 file has the same process diagram image. In case there is only one process definition in each BPMN 2.0 XML file, this is obviously not a problem.

Example when deploying programmatically:

repositoryService.createDeployment()
    .name("expense-process.bar")
    .addClasspathResource("org/flowable/expenseProcess.bpmn20.xml")
    .addClasspathResource("org/flowable/expenseProcess.png")
    .deploy();

The image resource can be retrieved through the API afterwards:

ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
    .processDefinitionKey("expense")
    .singleResult();

String diagramResourceName = processDefinition.getDiagramResourceName();
InputStream imageStream = repositoryService.getResourceAsStream(
    processDefinition.getDeploymentId(), diagramResourceName);

6.5. Generating a process diagram

If no image is provided in the deployment, as described in the previous section, the Flowable engine will generate a process diagram image if the process definition contains the necessary diagram interchange information.

The resource can be retrieved in exactly the same way as when an image is provided in the deployment.

(Image: an example of a generated process diagram)

If, for some reason, it's not necessary or desirable to generate a diagram during deployment, the createDiagramOnDeploy property can be set to false on the process engine configuration:

<property name="createDiagramOnDeploy" value="false" />

No diagram will be generated now.

6.6. Category

Both deployments and process definitions have user-defined categories. The process definition category is initialized with the value of the targetNamespace attribute in the BPMN XML: <definitions …​ targetNamespace="yourCategory" …​

The deployment category can also be specified in the API like this:

repositoryService
    .createDeployment()
    .category("yourCategory")
    ...
    .deploy();
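Once set, a category can be used to filter queries. For example, a minimal sketch that retrieves all process definitions with a given category through the query API:

List<ProcessDefinition> processDefinitions = repositoryService.createProcessDefinitionQuery()
    .processDefinitionCategory("yourCategory")
    .list();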

7. BPMN 2.0 Introduction

7.1. What is BPMN?

BPMN is a widely accepted and supported standard notation for representing processes (see the OMG BPMN standard).

7.2. Defining a process

This introduction is written with the assumption that you are using the Eclipse IDE to create and edit files. Very little of this is specific to Eclipse, however; you can use any other tool you prefer to create XML files containing BPMN 2.0.

Create a new XML file (right-click on any project and select New→Other→XML-XML File) and give it a name. Make sure that the file ends with .bpmn20.xml or .bpmn, otherwise the engine won’t pick it up for deployment.

(Image: creating a new XML file in Eclipse)

The root element of the BPMN 2.0 schema is the definitions element. Within this element, multiple process definitions can be given (although our advice is to have only one process definition in each file, as this simplifies maintenance later in the development process). An empty process definition looks like the one shown below. Note that the minimal definitions element only needs the xmlns and targetNamespace declaration. The targetNamespace can be anything and is useful for categorizing process definitions.

<definitions
    xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:flowable="http://flowable.org/bpmn"
    targetNamespace="Examples">

  <process id="myProcess" name="My First Process">
    ..
  </process>

</definitions>

Optionally, you can also add the online schema location of the BPMN 2.0 XML schema, as an alternative to the XML catalog configuration in Eclipse.

xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL
                    http://www.omg.org/spec/BPMN/2.0/20100501/BPMN20.xsd"

The process element has two attributes:

  • id: this attribute is required and maps to the key property of a Flowable ProcessDefinition object. This id can then be used to start a new process instance of the process definition, through the startProcessInstanceByKey method on the RuntimeService. This method will always take the latest deployed version of the process definition.

ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("myProcess");
  • It's important to note that this is not the same as calling the startProcessInstanceById method, which expects the String ID that was generated at deploy time by the Flowable engine (this ID can be retrieved by calling the processDefinition.getId() method); see the sketch after this list. The format of the generated ID is key:version, and the length is constrained to 64 characters. If you get a FlowableException stating that the generated ID is too long, limit the text in the key field of the process.

  • name: this attribute is optional and maps to the name property of a ProcessDefinition. The engine itself doesn’t use this property, so it can be used for displaying a more human-friendly name in a user interface, for example.
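To make the difference concrete, here is a small sketch that looks up the generated ID of the latest deployed version and starts an instance by ID rather than by key (using the repositoryService and runtimeService from the earlier examples):

// look up the latest version; its ID has the generated key:version:generated-id format
ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
    .processDefinitionKey("myProcess")
    .latestVersion()
    .singleResult();

// start an instance of exactly this process definition
ProcessInstance processInstance = runtimeService.startProcessInstanceById(processDefinition.getId());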


7.3. Getting started: 10 minute tutorial

In this section we will cover a very simple business process that we will use to introduce some basic Flowable concepts and the Flowable API.

7.3.1. Prerequisites

This tutorial assumes that you have the Flowable demo setup running, and that you are using a standalone H2 server. Edit db.properties and set the jdbc.url=jdbc:h2:tcp://localhost/flowable, and then run the standalone server according to H2’s documentation.

7.3.2. Goal

The goal of this tutorial is to learn about Flowable and some basic BPMN 2.0 concepts. The end result will be a simple Java SE program that deploys a process definition, and then interacts with this process through the Flowable engine API. We’ll also touch on some of the tooling around Flowable. Of course, what you’ll learn in this tutorial can also be used when building your own web applications around your business processes.

7.3.3. Use case

The use case is straightforward: we have a company, let’s call it BPMCorp. In BPMCorp, a financial report needs to be written every month for the company shareholders. This is the responsibility of the accountancy department. When the report is finished, one of the members of the upper management needs to approve the document before it’s sent to all the shareholders.

7.3.4. Process diagram

The business process as described above can be defined graphically using the Flowable Designer. However, for this tutorial, we’ll type the XML ourselves, as we’ll learn the most this way at this stage. The graphical BPMN 2.0 notation of our process looks like this:

(Image: the graphical BPMN 2.0 diagram of the financial report process)

What we see is a none start event (circle on the left), followed by two user tasks, 'Write monthly financial report' and 'Verify monthly financial report', ending in a none end event (circle with a thick border on the right).

7.3.5. XML representation

The XML version of this business process (FinancialReportProcess.bpmn20.xml) looks like that shown below. It’s easy to recognize the main elements of our process (click on the link to go to the detailed section of that BPMN 2.0 construct):

  • The (none) start event tells us what the entry point is to the process.

  • The User Tasks declarations are the representation of the human tasks of our process. Note that the first task is assigned to the accountancy group, while the second task is assigned to the management group. See the section on user task assignment for more information on how users and groups can be assigned to user tasks.

  • The process ends when the none end event is reached.

  • The elements are connected to each other by sequence flows. These sequence flows have a source and target, defining the direction of the sequence flow.

<definitions id="definitions"
  targetNamespace="http://flowable.org/bpmn20"
  xmlns:flowable="http://flowable.org/bpmn"
  xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">

  <process id="financialReport" name="Monthly financial report reminder process">

    <startEvent id="theStart" />

    <sequenceFlow id="flow1" sourceRef="theStart" targetRef="writeReportTask" />

    <userTask id="writeReportTask" name="Write monthly financial report" >
      <documentation>
        Write monthly financial report for publication to shareholders.
      </documentation>
      <potentialOwner>
        <resourceAssignmentExpression>
          <formalExpression>accountancy</formalExpression>
        </resourceAssignmentExpression>
      </potentialOwner>
    </userTask>

    <sequenceFlow id="flow2" sourceRef="writeReportTask" targetRef="verifyReportTask" />

    <userTask id="verifyReportTask" name="Verify monthly financial report" >
      <documentation>
        Verify monthly financial report composed by the accountancy department.
        This financial report is going to be sent to all the company shareholders.
      </documentation>
      <potentialOwner>
        <resourceAssignmentExpression>
          <formalExpression>management</formalExpression>
        </resourceAssignmentExpression>
      </potentialOwner>
    </userTask>

    <sequenceFlow id="flow3" sourceRef="verifyReportTask" targetRef="theEnd" />

    <endEvent id="theEnd" />

  </process>

</definitions>

7.3.6. Starting a process instance

We have now created the process definition for our business process. From such a process definition, we can create process instances. In this scenario, one process instance corresponds to the creation and verification of a single financial report for a particular month. All the process instances for any month share the same process definition.

To be able to create process instances from a given process definition, we must first deploy the process definition. Deploying a process definition means two things:

  • The process definition will be stored in the persistent datastore that is configured for your Flowable engine. So by deploying our business process, we make sure that the engine will find the process definition after an engine restart.

  • The BPMN 2.0 process XML will be parsed to an in-memory object model that can be manipulated through the Flowable API.

More information on deployment can be found in the dedicated section on deployment.

As described in that section, deployment can happen in several ways. One way is through the API as follows. Note that all interaction with the Flowable engine happens through its services.

Deployment deployment = repositoryService.createDeployment()
    .addClasspathResource("FinancialReportProcess.bpmn20.xml")
    .deploy();

Now we can start a new process instance using the id we defined in the process definition (see process element in the XML). Note that this id in Flowable terminology is called the key.

ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("financialReport");

This will create a process instance that will first go through the start event. After the start event, it follows all the outgoing sequence flows (only one in this case) and the first task (write monthly financial report) is reached. The Flowable engine will now store a task in the persistent database. At this point, the user or group assignments attached to the task are resolved and also stored in the database. It’s important to note that the Flowable engine will continue process execution steps until it reaches a wait state, such as a user task. At such a wait state, the current state of the process instance is stored in the database. It remains in that state until a user decides to complete their task. At that point, the engine will continue until it reaches a new wait state or the end of the process. If the engine reboots or crashes in the meantime, the state of the process is safe and secure in the database.

After the task is created, the startProcessInstanceByKey method will return because the user task activity is a wait state. In our scenario, the task is assigned to a group, which means that every member of the group is a candidate to perform the task.

We can now put this all together and create a simple Java program. Create a new Eclipse project and add the Flowable JARs and dependencies to its classpath (these can be found in the libs folder of the Flowable distribution). Before we can call the Flowable services, we must first construct a ProcessEngine that gives us access to the services. Here we use the 'standalone' configuration, which constructs a ProcessEngine that uses the database also used in the demo setup.

You can download the process definition XML here. This file contains the XML shown above, but also contains the necessary BPMN diagram interchange information to visualize the process in the Flowable tools.

public static void main(String[] args) {

    // Create Flowable process engine
    ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneProcessEngineConfiguration()
        .buildProcessEngine();

    // Get Flowable services
    RepositoryService repositoryService = processEngine.getRepositoryService();
    RuntimeService runtimeService = processEngine.getRuntimeService();

    // Deploy the process definition
    repositoryService.createDeployment()
        .addClasspathResource("FinancialReportProcess.bpmn20.xml")
        .deploy();

    // Start a process instance
    runtimeService.startProcessInstanceByKey("financialReport");
}

7.3.7. Task lists

We can now retrieve this task through the TaskService by adding the following logic:

List<Task> tasks = taskService.createTaskQuery().taskCandidateUser("kermit").list();

Note that the user we pass to this operation needs to be a member of the accountancy group, as that was declared in the process definition:

<potentialOwner>
  <resourceAssignmentExpression>
    <formalExpression>accountancy</formalExpression>
  </resourceAssignmentExpression>
</potentialOwner>

We could also use the task query API to get the same results using the name of the group. We can now add the following logic to our code:

TaskService taskService = processEngine.getTaskService();
List<Task> tasks = taskService.createTaskQuery().taskCandidateGroup("accountancy").list();

As we've configured our ProcessEngine to use the same database that the demo setup is using, we can now log into the Flowable IDM. Log in as admin/test, create two new users, kermit and fozzie, and give both of them the Access the workflow application privilege. Then create two new organization groups named accountancy and management, add fozzie to the new accountancy group and kermit to the management group. Now log in as fozzie to the Flowable Task application, and we will find that we can start our business process by selecting the Task App, then its Processes page, and selecting the 'Monthly financial report' process.
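As an aside, the same users and groups can also be created programmatically through the IdentityService instead of the IDM UI. A minimal sketch (the UI access privileges still have to be granted through the IDM app):

IdentityService identityService = processEngine.getIdentityService();

// create the user that will claim the accountancy task
User fozzie = identityService.newUser("fozzie");
identityService.saveUser(fozzie);

// create the accountancy group and make fozzie a member of it
Group accountancy = identityService.newGroup("accountancy");
accountancy.setName("Accountancy");
identityService.saveGroup(accountancy);

identityService.createMembership("fozzie", "accountancy");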

(Image: starting the 'Monthly financial report' process in the Flowable Task app)

As explained, the process will execute until reaching the first user task. As we’re logged in as fozzie, we can see that there is a new candidate task available for him after we’ve started a process instance. Select the Tasks page to view this new task. Note that even if the process was started by someone else, the task would still be visible as a candidate task to everyone in the accountancy group.

(Image: the new candidate task shown in the Flowable Task app)

7.3.8. Claiming the task

An accountant now needs to claim the task. By claiming the task, that specific user will become the assignee of the task, and the task will disappear from every task list of the other members of the accountancy group. Claiming a task is programmatically done as follows:

taskService.claim(task.getId(), "fozzie");

The task is now in the personal task list of the user that claimed the task.

List<Task> tasks = taskService.createTaskQuery().taskAssignee("fozzie").list();

In the Flowable Task app, clicking the claim button will call the same operation. The task will now move to the personal task list of the logged-in user. You'll also see that the assignee of the task has changed to the currently logged-in user.

(Image: claiming the task in the Flowable Task app)

7.3.9. Completing the task

The accountant can now start working on the financial report. Once the report is finished, he can complete the task, which means that all work for that task is done.

taskService.complete(task.getId());

For the Flowable engine, this is an external signal that the process instance execution can now continue. The task itself is removed from the runtime data. The single outgoing transition from the task is followed, moving the execution to the second task ('verification of the report'). The same mechanism as described for the first task will now be used to assign the second task, with the small difference that the task will be assigned to the management group.

In the demo setup, completing the task is done by clicking the complete button in the task list. Since Fozzie isn't a manager, we need to log out of the Flowable Task app and log in as kermit (who is a manager). The second task is now visible in the unassigned task lists.

7.3.10. Ending the process

The verification task can be retrieved and claimed in exactly the same way as before. Completing this second task will move process execution to the end event, which finishes the process instance. The process instance and all related runtime execution data are removed from the datastore.

Programmatically, you can also verify that the process has ended using the HistoryService:

HistoryService historyService = processEngine.getHistoryService();
HistoricProcessInstance historicProcessInstance =
    historyService.createHistoricProcessInstanceQuery().processInstanceId(procId).singleResult();
System.out.println("Process instance end time: " + historicProcessInstance.getEndTime());

7.3.11. Code overview

Combine all the snippets from previous sections, and you should have something like the following. The code takes into account that you probably will have started a few process instances through the Flowable app UI. It retrieves a list of tasks instead of one task, so it always works:

public class TenMinuteTutorial {

    public static void main(String[] args) {

        // Create Flowable process engine
        ProcessEngine processEngine = ProcessEngineConfiguration
            .createStandaloneProcessEngineConfiguration()
            .buildProcessEngine();

        // Get Flowable services
        RepositoryService repositoryService = processEngine.getRepositoryService();
        RuntimeService runtimeService = processEngine.getRuntimeService();

        // Deploy the process definition
        repositoryService.createDeployment()
            .addClasspathResource("FinancialReportProcess.bpmn20.xml")
            .deploy();

        // Start a process instance
        String procId = runtimeService.startProcessInstanceByKey("financialReport").getId();

        // Get the first task
        TaskService taskService = processEngine.getTaskService();
        List<Task> tasks = taskService.createTaskQuery().taskCandidateGroup("accountancy").list();
        for (Task task : tasks) {
            System.out.println("Following task is available for accountancy group: " + task.getName());

            // claim it
            taskService.claim(task.getId(), "fozzie");
        }

        // Verify Fozzie can now retrieve the task
        tasks = taskService.createTaskQuery().taskAssignee("fozzie").list();
        for (Task task : tasks) {
            System.out.println("Task for fozzie: " + task.getName());

            // Complete the task
            taskService.complete(task.getId());
        }

        System.out.println("Number of tasks for fozzie: "
            + taskService.createTaskQuery().taskAssignee("fozzie").count());

        // Retrieve and claim the second task
        tasks = taskService.createTaskQuery().taskCandidateGroup("management").list();
        for (Task task : tasks) {
            System.out.println("Following task is available for management group: " + task.getName());
            taskService.claim(task.getId(), "kermit");
        }

        // Completing the second task ends the process
        for (Task task : tasks) {
            taskService.complete(task.getId());
        }

        // verify that the process is actually finished
        HistoryService historyService = processEngine.getHistoryService();
        HistoricProcessInstance historicProcessInstance =
            historyService.createHistoricProcessInstanceQuery().processInstanceId(procId).singleResult();
        System.out.println("Process instance end time: " + historicProcessInstance.getEndTime());
    }

}

7.3.12. Future enhancements

It’s easy to see that this business process is too simple to be usable in reality. However, as you are going through the BPMN 2.0 constructs available in Flowable, you will be able to enhance the business process by:

  • defining gateways so a manager can decide to reject the financial report and recreate the task for the accountant, following a different path than when accepting the report.

  • declaring and using variables to store or reference the report so that it can be visualized in the form.

  • defining a service task at the end of the process to send the report to every shareholder.

  • etc.

8. BPMN 2.0 Constructs

This chapter covers the BPMN 2.0 constructs supported by Flowable, as well as custom extensions to the BPMN standard.

8.1. Custom extensions

The BPMN 2.0 standard is a good thing for all parties involved. End-users don’t suffer from vendor lock-in that comes from depending on a proprietary solution. Frameworks, and particularly open-source frameworks such as Flowable, can implement a solution that has the same (and often better implemented ;-) features as those of a big vendor. Thanks to the BPMN 2.0 standard, the transition from such a big vendor solution towards Flowable can be an easy and smooth path.

The downside of a standard, however, is the fact that it is always the result of many discussions and compromises between different companies (and often visions). As a developer reading the BPMN 2.0 XML of a process definition, sometimes it feels like certain constructs or ways to do things are very cumbersome. As Flowable puts ease of development as a top-priority, we introduced something called the Flowable BPMN extensions. These extensions are new constructs or ways to simplify certain constructs that are not part of the BPMN 2.0 specification.

Although the BPMN 2.0 specification clearly states that it was designed for custom extension, we make sure that:

  • There must always be a simple transformation to the standard way of doing things; this is a prerequisite of such a custom extension. So, when you decide to use a custom extension, you don't have to be concerned that there is no way back.

  • When using a custom extension, it is always clearly indicated by giving the new XML element, attribute, and so on, the flowable: namespace prefix. Note that the Flowable engine also supports the activiti: namespace prefix.

Whether you want to use a custom extension or not is completely up to you. Several factors will influence this decision (graphical editor usage, company policy, and so on). We only provide them as we believe that some points in the standard can be done in a simpler or more efficient way. Feel free to give us (positive or negative) feedback on our extensions, or to post new ideas for custom extensions. Who knows, some day your idea might pop up in the specification!

8.2. Events

Events are used to model something that happens during the lifetime of a process. Events are always visualized as a circle. In BPMN 2.0, there exist two main event categories: catching and throwing events.

  • Catching: when process execution arrives at the event, it will wait for a trigger to happen. The type of trigger is defined by the inner icon or the type declaration in the XML. Catching events are visually differentiated from a throwing event by the inner icon that is not filled (it’s just white).

  • Throwing: when process execution arrives at the event, a trigger is fired. The type of trigger is defined by the inner icon or the type declaration in the XML. Throwing events are visually differentiated from a catching event by the inner icon that is filled with black.

8.2.1. Event Definitions

Event definitions define the semantics of an event. Without an event definition, an event "does nothing special". For instance, a start event without an event definition has nothing to specify what exactly starts the process. If we add an event definition to the start event (for example, a timer event definition), we declare what "type" of event starts the process (in the case of a timer event definition, the fact that a certain point in time is reached).

8.2.2. Timer Event Definitions

Timer events are events that are triggered by a defined timer. They can be used as a start event, intermediate event or boundary event. The behavior of the timer event depends on the business calendar used. Every timer event has a default business calendar, but the business calendar can also be given as part of the timer event definition.

<timerEventDefinition flowable:businessCalendarName="custom">
  ...
</timerEventDefinition>

Where businessCalendarName points to a business calendar in the process engine configuration. When the business calendar is omitted, the default business calendar is used.

The timer definition must have exactly one element from the following:

  • timeDate. This format specifies a fixed date, in ISO 8601 format, when the trigger will be fired. For example:

<timerEventDefinition>
    <timeDate>2011-03-11T12:13:14</timeDate>
</timerEventDefinition>
  • timeDuration. To specify how long the timer should run before it is fired, a timeDuration can be specified as a sub-element of timerEventDefinition. The format used is the ISO 8601 format (as required by the BPMN 2.0 specification). For example (interval lasting 10 days):

<timerEventDefinition>
    <timeDuration>P10D</timeDuration>
</timerEventDefinition>
  • timeCycle. Specifies a repeating interval, which can be useful for starting a process periodically, or for sending multiple reminders for an overdue user task. A time cycle element can be in one of two formats. The first is the recurring time duration format, as specified by the ISO 8601 standard, for example R3/PT10H (3 repeating intervals, each lasting 10 hours).

It is also possible to specify the endDate as an optional attribute on the timeCycle, or at the end of the time expression as follows: R3/PT10H/${EndDate}. When the endDate is reached, the application will stop creating further jobs for this task. The endDate accepts as value either a static ISO 8601 date, for example "2015-02-25T16:42:11+00:00", or a variable, for example ${EndDate}.

<timerEventDefinition>
    <timeCycle flowable:endDate="2015-02-25T16:42:11+00:00">R3/PT10H</timeCycle>
</timerEventDefinition>

<timerEventDefinition>
    <timeCycle>R3/PT10H/${EndDate}</timeCycle>
</timerEventDefinition>

If both are specified, then the endDate specified as attribute will be used by the system.

Currently, only the BoundaryTimerEvents and CatchTimerEvent support EndDate functionality.

Additionally, you can specify a time cycle using cron expressions; the example below shows a trigger firing every 5 minutes, starting at the full hour:

0 0/5 * * * ?

Please see this tutorial for using cron expressions.

Note: The first symbol denotes seconds, not minutes as in normal Unix cron.

The recurring time duration is better suited for handling relative timers, which are calculated with respect to some particular point in time (for example, the time when a user task was started), while cron expressions can handle absolute timers, which is particularly useful for timer start events.

You can use expressions for timer event definitions, and by doing so, you can influence the timer definition based on process variables. The process variables must contain the ISO 8601 (or cron, for the cycle type) string for the appropriate timer type.

<boundaryEvent id="escalationTimer" cancelActivity="true" attachedToRef="firstLineSupport">
    <timerEventDefinition>
        <timeDuration>${duration}</timeDuration>
    </timerEventDefinition>
</boundaryEvent>

Note: timers are only fired when the job or async executor is enabled (jobExecutorActivate or asyncExecutorActivate must be set to true in the flowable.cfg.xml, because the job and async executor are disabled by default).

8.2.3. Error Event Definitions

Important note: a BPMN error is NOT the same as a Java exception. In fact, the two have nothing in common. BPMN error events are a way of modeling business exceptions. Java exceptions are handled in their own specific way.

<endEvent id="myErrorEndEvent">
    <errorEventDefinition errorRef="myError" />
</endEvent>

8.2.4. Signal Event Definitions

Signal events are events that reference a named signal. A signal is an event of global scope (broadcast semantics) and is delivered to all active handlers (waiting process instances/catching signal events).

A signal event definition is declared using the signalEventDefinition element. The attribute signalRef references a signal element declared as a child element of the definitions root element. The following is an excerpt of a process where a signal event is thrown and caught by intermediate events.

<definitions... >

    <!-- declaration of the signal -->
    <signal id="alertSignal" name="alert" />

    <process id="catchSignal">

        <intermediateThrowEvent id="throwSignalEvent" name="Alert">
            <!-- signal event definition -->
            <signalEventDefinition signalRef="alertSignal" />
        </intermediateThrowEvent>
        ...
        <intermediateCatchEvent id="catchSignalEvent" name="On Alert">
            <!-- signal event definition -->
            <signalEventDefinition signalRef="alertSignal" />
        </intermediateCatchEvent>
        ...
    </process>
</definitions>

The signalEventDefinitions reference the same signal element.

Throwing a Signal Event

A signal can either be thrown by a process instance using a BPMN construct or programmatically using the Java API. The following methods on the org.flowable.engine.RuntimeService can be used to throw a signal programmatically:

RuntimeService.signalEventReceived(String signalName);
RuntimeService.signalEventReceived(String signalName, String executionId);

The difference between signalEventReceived(String signalName) and signalEventReceived(String signalName, String executionId) is that the first method throws the signal globally to all subscribed handlers (broadcast semantics) and the second method delivers the signal to a specific execution only.

Catching a Signal Event

A signal event can be caught by an intermediate catch signal event or a signal boundary event.

Querying for Signal Event subscriptions

It’s possible to query for all executions that have subscribed to a specific signal event:

List<Execution> executions = runtimeService.createExecutionQuery()
      .signalEventSubscriptionName("alert")
      .list();

We can then use the signalEventReceived(String signalName, String executionId) method to deliver the signal to these executions.
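Putting this together, a targeted (non-broadcast) delivery could look like the following sketch, reusing the "alert" signal from the example above:

List<Execution> executions = runtimeService.createExecutionQuery()
      .signalEventSubscriptionName("alert")
      .list();

for (Execution execution : executions) {
  // deliver the signal only to this execution instead of broadcasting it
  runtimeService.signalEventReceived("alert", execution.getId());
}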

Signal event scope

By default, signals are broadcast process engine wide. This means that you can throw a signal event in a process instance, and other process instances with different process definitions can react on the occurrence of this event.

However, sometimes it is desirable to react to a signal event only within the same process instance. A use case, for example, is a synchronization mechanism in the process instance when two or more activities are mutually exclusive.

To restrict the scope of the signal event, add the (non-BPMN 2.0 standard!) scope attribute to the signal event definition:

<signal id="alertSignal" name="alert" flowable:scope="processInstance"/>

The default value for this attribute is "global".

Signal Event example(s)

The following is an example of two separate processes communicating using signals. The first process is started if an insurance policy is updated or changed. After the changes have been reviewed by a human participant, a signal event is thrown, signaling that a policy has changed:

bpmn.signal.event.throw

This event can now be caught by all process instances that are interested. The following is an example of a process subscribing to the event.

bpmn.signal.event.catch

Note: it’s important to understand that a signal event is broadcast to all active handlers. This means, in the case of the example given above, that all instances of the process catching the signal will receive the event. In this scenario, this is what we want. However, there are also situations where the broadcast behavior is unintended. Consider the following process:

bpmn.signal.event.warning.1

The pattern described in the process above is not supported by BPMN. The idea is that the error thrown while performing the "do something" task is caught by the boundary error event, propagated to the parallel path of execution using the signal throw event, and then interrupts the "do something in parallel" task. So far, Flowable would perform as expected: the signal would be propagated to the catching boundary event and interrupt the task. However, due to the broadcast semantics of the signal, it would also be propagated to all other process instances that have subscribed to the signal event. In this case, this might not be what we want.

Note: the signal event does not perform any kind of correlation to a specific process instance. On the contrary, it is broadcast to all process instances. If you need to deliver a signal to a specific process instance only, perform the correlation manually and use signalEventReceived(String signalName, String executionId) along with the appropriate query mechanisms.

Flowable does have a way to deal with this: add the scope attribute to the signal event and set it to processInstance.

8.2.5. Message Event Definitions

Message events are events that reference a named message. A message has a name and a payload. Unlike a signal, a message event is always directed at a single receiver.

A message event definition is declared using the messageEventDefinition element. The attribute messageRef references a message element declared as a child element of the definitions root element. The following is an excerpt of a process in which two message events are declared and referenced by a start event and an intermediate catching message event.

<definitions id="definitions"
        xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
        xmlns:flowable="http://flowable.org/bpmn"
        targetNamespace="Examples"
        xmlns:tns="Examples">

    <message id="newInvoice" name="newInvoiceMessage" />
    <message id="payment" name="paymentMessage" />

    <process id="invoiceProcess">

        <startEvent id="messageStart" >
            <messageEventDefinition messageRef="newInvoice" />
        </startEvent>
        ...
        <intermediateCatchEvent id="paymentEvt" >
            <messageEventDefinition messageRef="payment" />
        </intermediateCatchEvent>
        ...
    </process>

</definitions>
Throwing a Message Event

As an embeddable process engine, Flowable is not concerned with actually receiving a message. This would be environment dependent and entail platform-specific activities, such as connecting to a JMS (Java Messaging Service) Queue/Topic or processing a Webservice or REST request. The reception of messages is therefore something you have to implement as part of the application or infrastructure into which the process engine is embedded.

After you have received a message inside your application, you must decide what to do with it. If the message should trigger the start of a new process instance, choose between the following methods offered by the runtime service:

ProcessInstance startProcessInstanceByMessage(String messageName);
ProcessInstance startProcessInstanceByMessage(String messageName, Map<String, Object> processVariables);
ProcessInstance startProcessInstanceByMessage(String messageName, String businessKey,
    Map<String, Object> processVariables);

These methods start a process instance using the referenced message.
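For example, starting a process instance for the newInvoiceMessage message declared in the XML excerpt above could look like the sketch below (the invoiceId variable is purely illustrative):

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("invoiceId", "INV-001");
ProcessInstance processInstance =
    runtimeService.startProcessInstanceByMessage("newInvoiceMessage", variables);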

If the message needs to be received by an existing process instance, you first have to correlate the message to a specific process instance (see next section) and then trigger the continuation of the waiting execution. The runtime service offers the following methods for triggering an execution based on a message event subscription:

void messageEventReceived(String messageName, String executionId);
void messageEventReceived(String messageName, String executionId, HashMap<String, Object> processVariables);
Querying for Message Event subscriptions
  • In the case of a message start event, the message event subscription is associated with a particular process definition. Such message subscriptions can be queried using a ProcessDefinitionQuery:

ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
      .messageEventSubscription("newCallCenterBooking")
      .singleResult();

Since there can only be one process definition for a specific message subscription, the query always returns zero or one result. If a process definition is updated, only the newest version of the process definition has a subscription to the message event.

  • In the case of an intermediate catch message event, the message event subscription is associated with a particular execution. Such message event subscriptions can be queried using an ExecutionQuery:

Execution execution = runtimeService.createExecutionQuery()
      .messageEventSubscriptionName("paymentReceived")
      .variableValueEquals("orderId", message.getOrderId())
      .singleResult();

Such queries are called correlation queries and usually require knowledge about the processes (in this case, that there will be at most one process instance for a given orderId).
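Combining the correlation query with the trigger methods from the previous section, a minimal sketch could look as follows (the payload variable passed along is an assumption for illustration):

Execution execution = runtimeService.createExecutionQuery()
      .messageEventSubscriptionName("paymentReceived")
      .variableValueEquals("orderId", message.getOrderId())
      .singleResult();

if (execution != null) {
  HashMap<String, Object> payload = new HashMap<String, Object>();
  payload.put("paymentAmount", message.getAmount());  // hypothetical payload variable
  runtimeService.messageEventReceived("paymentReceived", execution.getId(), payload);
}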

Message Event example(s)

The following is an example of a process that can be started using two different messages:

bpmn.start.message.event.example.1

This is useful if the process needs alternative ways to react to different start events, but eventually continues in a uniform way.

8.2.6. Start Events

A start event indicates where a process starts. The type of start event (process starts on arrival of message, on specific time intervals, and so on), defining how the process is started, is shown as a small icon in the visual representation of the event. In the XML representation, the type is given by the declaration of a sub-element.

Start events are always catching: conceptually the event is (at any time) waiting until a certain trigger happens.

In a start event, the following Flowable-specific properties can be specified:

  • initiator: identifies the variable name in which the authenticated user ID will be stored when the process is started. For example:

<startEvent id="request" flowable:initiator="initiator" />

The authenticated user must be set with the method IdentityService.setAuthenticatedUserId(String) in a try-finally block, like this:

try {
  identityService.setAuthenticatedUserId("bono");
  runtimeService.startProcessInstanceByKey("someProcessKey");
} finally {
  identityService.setAuthenticatedUserId(null);
}

This code is baked into the Flowable application, so it works in combination with Forms.

8.2.7. None Start Event

Description

A none start event technically means that the trigger for starting the process instance is unspecified. This means that the engine cannot anticipate when the process instance must be started. The none start event is used when the process instance is started through the API by calling one of the startProcessInstanceByXXX methods.

ProcessInstance processInstance = runtimeService.startProcessInstanceByXXX();
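For example, any of the concrete variants can be used; the process definition key "myProcess" below is an assumption:

ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("myProcess");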

Note: a sub-process always has a none start event.

Graphical notation

A none start event is visualized as a circle with no inner icon (in other words, no trigger type).

bpmn.none.start.event
XML representation

The XML representation of a none start event is the normal start event declaration without any sub-element (other start event types all have a sub-element declaring the type).

<startEvent id="start" name="my start event" />
Custom extensions for the none start event

formKey: references a form definition that users have to fill in when starting a new process instance. More information can be found in the forms section. Example:

<startEvent id="request" flowable:formKey="request" />

8.2.8. Timer Start Event

Description

A timer start event is used to create a process instance at a given time. It can be used both for processes that should start only once and for processes that should start at specific time intervals.

Note: a sub-process cannot have a timer start event.

Note: a start timer event is scheduled as soon as the process is deployed. There is no need to call startProcessInstanceByXXX, although calling one of the start process methods is not restricted and will cause one additional start of the process at the time of the startProcessInstanceByXXX invocation.

Note: when a new version of a process with a start timer event is deployed, the job corresponding with the previous timer will be removed. The reasoning is that normally it is not desirable to keep automatically starting new process instances of the old version of the process.

Graphical notation

A timer start event is visualized as a circle with clock inner icon.

bpmn.clock.start.event
XML representation

The XML representation of a timer start event is the normal start event declaration, with timer definition sub-element. Please refer to timer definitions for configuration details.

Example: the process will start 4 times, at 5 minute intervals, starting on March 11th, 2011, at 12:13

<startEvent id="theStart">
    <timerEventDefinition>
        <timeCycle>R4/2011-03-11T12:13/PT5M</timeCycle>
    </timerEventDefinition>
</startEvent>

Example: the process will start once, on the selected date

<startEvent id="theStart">
    <timerEventDefinition>
        <timeDate>2011-03-11T12:13:14</timeDate>
    </timerEventDefinition>
</startEvent>

8.2.9. Message Start Event

Description

A message start event can be used to start a process instance using a named message. This effectively allows us to select the right start event from a set of alternative start events using the message name.

When deploying a process definition with one or more message start events, the following considerations apply:

  • The name of the message start event must be unique across a given process definition. A process definition must not have multiple message start events with the same name. Flowable throws an exception upon deployment of a process definition containing two or more message start events referencing the same message, or if two or more message start events reference messages with the same message name.

  • The name of the message start event must be unique across all deployed process definitions. Flowable throws an exception upon deployment of a process definition containing one or more message start events referencing a message with the same name as a message start event already deployed by a different process definition.

  • Process versioning: Upon deployment of a new version of a process definition, the start message subscriptions of the previous version are removed.

When starting a process instance, a message start event can be triggered using the following methods on the RuntimeService:

ProcessInstance startProcessInstanceByMessage(String messageName);
ProcessInstance startProcessInstanceByMessage(String messageName, Map<String, Object> processVariables);
ProcessInstance startProcessInstanceByMessage(String messageName, String businessKey,
    Map<String, Object> processVariables);

The messageName is the name given in the name attribute of the message element referenced by the messageRef attribute of the messageEventDefinition. The following considerations apply when starting a process instance:

  • Message start events are only supported on top-level processes. Message start events are not supported on embedded sub processes.

  • If a process definition has multiple message start events, runtimeService.startProcessInstanceByMessage(…​) allows you to select the appropriate start event.

  • If a process definition has multiple message start events and a single none start event, runtimeService.startProcessInstanceByKey(…​) and runtimeService.startProcessInstanceById(…​) starts a process instance using the none start event.

  • If a process definition has multiple message start events and no none start event, runtimeService.startProcessInstanceByKey(…​) and runtimeService.startProcessInstanceById(…​) throw an exception.

  • If a process definition has a single message start event, runtimeService.startProcessInstanceByKey(…​) and runtimeService.startProcessInstanceById(…​) start a new process instance using the message start event.

  • If a process is started from a call activity, message start event(s) are only supported if

    • in addition to the message start event(s), the process has a single none start event

    • the process has a single message start event and no other start events.

Graphical notation

A message start event is visualized as a circle with a message event symbol. The symbol is unfilled, to represent the catching (receiving) behavior.

bpmn.start.message.event
XML representation

The XML representation of a message start event is the normal start event declaration with a messageEventDefinition child-element:

<definitions id="definitions"
        xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
        xmlns:flowable="http://flowable.org/bpmn"
        targetNamespace="Examples"
        xmlns:tns="Examples">

    <message id="newInvoice" name="newInvoiceMessage" />

    <process id="invoiceProcess">

        <startEvent id="messageStart" >
            <messageEventDefinition messageRef="tns:newInvoice" />
        </startEvent>
        ...
    </process>

</definitions>

8.2.10. Signal Start Event

Description

A signal start event can be used to start a process instance using a named signal. The signal can be fired from within a process instance using the intermediate signal throw event or through the API (the runtimeService.signalEventReceivedXXX methods). In both cases, all process definitions that have a signal start event with the same name will be started.

Note that in both cases, it is also possible to choose between a synchronous and asynchronous starting of the process instances.

The signalName that must be passed in the API is the name given in the name attribute of the signal element referenced by the signalRef attribute of the signalEventDefinition.
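For example, firing the signal through the API could look like the sketch below. The signal name matches the name attribute of the signal element (for example, "The Signal" in the XML representation further down):

// synchronous: all matching signal start events and active catching signal events are triggered
runtimeService.signalEventReceived("The Signal");

// asynchronous: the triggered process instances are started through asynchronous jobs
runtimeService.signalEventReceivedAsync("The Signal");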

Graphical notation

A signal start event is visualized as a circle with a signal event symbol. The symbol is unfilled, to represent the catching (receiving) behavior.

bpmn.start.signal.event
XML representation

The XML representation of a signal start event is the normal start event declaration with a signalEventDefinition child-element:

<signal id="theSignal" name="The Signal" />

<process id="processWithSignalStart1">
    <startEvent id="theStart">
        <signalEventDefinition id="theSignalEventDefinition" signalRef="theSignal" />
    </startEvent>
    <sequenceFlow id="flow1" sourceRef="theStart" targetRef="theTask" />
    <userTask id="theTask" name="Task in process A" />
    <sequenceFlow id="flow2" sourceRef="theTask" targetRef="theEnd" />
    <endEvent id="theEnd" />
</process>

8.2.11. Error Start Event

Description

An error start event can be used to trigger an Event Sub-Process. An error start event cannot be used for starting a process instance.

An error start event is always interrupting.

Graphical notation

An error start event is visualized as a circle with an error event symbol. The symbol is unfilled, to represent the catching (receiving) behavior.

bpmn.start.error.event
XML representation

The XML representation of an error start event is the normal start event declaration with an errorEventDefinition child-element:

<startEvent id="messageStart" >
    <errorEventDefinition errorRef="someError" />
</startEvent>

8.2.12. End Events

An end event signifies the end of a path in a process or sub-process. An end event is always throwing. This means that when process execution arrives at an end event, a result is thrown. The type of result is depicted by the inner black icon of the event. In the XML representation, the type is given by the declaration of a sub-element.

8.2.13. None End Event

Description

A none end event means that the result thrown when the event is reached is unspecified. As such, the engine will not do anything extra besides ending the current path of execution.

Graphical notation

A none end event is visualized as a circle with a thick border with no inner icon (no result type).

bpmn.none.end.event
XML representation

The XML representation of a none end event is the normal end event declaration, without any sub-element (other end event types all have a sub-element declaring the type).

<endEvent id="end" name="my end event" />

8.2.14. Error End Event

Description

When process execution arrives at an error end event, the current path of execution ends and an error is thrown. This error can be caught by a matching intermediate boundary error event. If no matching boundary error event is found, an exception will be thrown.

Graphical notation

An error end event is visualized as a typical end event (circle with thick border), with the error icon inside. The error icon is completely black, to indicate its throwing semantics.

bpmn.error.end.event
XML representation

An error end event is represented as an end event, with an errorEventDefinition child element.

<endEvent id="myErrorEndEvent">
    <errorEventDefinition errorRef="myError" />
</endEvent>

The errorRef attribute can reference an error element that is defined outside the process:

<error id="myError" errorCode="123" />
...
<process id="myProcess">
...

The errorCode of the error will be used to find the matching catching boundary error event. If the errorRef doesn’t match any defined error, then the errorRef is used as a shortcut for the errorCode. This is a Flowable specific shortcut. More concretely, the following snippets are equivalent in functionality.

<error id="myError" errorCode="error123" />
...
<process id="myProcess">
...
    <endEvent id="myErrorEndEvent">
        <errorEventDefinition errorRef="myError" />
    </endEvent>
...

is equivalent with

<endEvent id="myErrorEndEvent">
    <errorEventDefinition errorRef="error123" />
</endEvent>

Note that the errorRef must comply with the BPMN 2.0 schema, and must be a valid QName.

8.2.15. Terminate End Event

Description

When a terminate end event is reached, the current process instance or sub-process will be terminated. Conceptually, when an execution arrives at a terminate end event, the first scope (process or sub-process) will be determined and ended. Note that in BPMN 2.0, a sub-process can be an embedded sub-process, call activity, event sub-process or transaction sub-process. This rule applies in general: when, for example, there is a multi-instance call activity or embedded sub-process, only that instance will end, the other instances and the process instance are not affected.

There is an optional attribute terminateAll that can be added. When true, regardless of the placement of the terminate end event in the process definition and regardless of being in a sub-process (even nested), the (root) process instance will be terminated.

Graphical notation

A terminate end event is visualized as a typical end event (circle with thick outline), with a full black circle inside.

bpmn.terminate.end.event
XML representation

A terminate end event is represented as an end event, with a terminateEventDefinition child element.

Note that the terminateAll attribute is optional (and false by default).

<endEvent id="myEndEvent">
    <terminateEventDefinition flowable:terminateAll="true"></terminateEventDefinition>
</endEvent>

8.2.16. Cancel End Event

Description

The cancel end event can only be used in combination with a BPMN transaction sub-process. When the cancel end event is reached, a cancel event is thrown which must be caught by a cancel boundary event. The cancel boundary event then cancels the transaction and triggers compensation.

Graphical notation

A cancel end event is visualized as a typical end event (circle with thick outline), with the cancel icon inside. The cancel icon is completely black, to indicate its throwing semantics.

bpmn.cancel.end.event
XML representation

A cancel end event is represented as an end event, with a cancelEventDefinition child element.

<endEvent id="myCancelEndEvent">
    <cancelEventDefinition />
</endEvent>

8.2.17. Boundary Events

Boundary events are catching events that are attached to an activity (a boundary event can never be throwing). This means that while the activity is running, the event is listening for a certain type of trigger. When the event is caught, the activity is interrupted and the sequence flow going out of the event is followed.

All boundary events are defined in the same way:

<boundaryEvent id="myBoundaryEvent" attachedToRef="theActivity">
    <XXXEventDefinition/>
</boundaryEvent>

A boundary event is defined with

  • A unique identifier (process-wide)

  • A reference to the activity to which the event is attached through the attachedToRef attribute. Note that a boundary event is defined on the same level as the activities to which they are attached (in other words, no inclusion of the boundary event inside the activity).

  • An XML sub-element of the form XXXEventDefinition (for example, TimerEventDefinition, ErrorEventDefinition, and so on) defining the type of the boundary event. See the specific boundary event types for more details.

8.2.18. Timer Boundary Event

Description

A timer boundary event acts as a stopwatch and alarm clock. When an execution arrives at the activity where the boundary event is attached, a timer is started. When the timer fires (for example, after a specified interval), the activity is interrupted and the sequence flow going out of the boundary event is followed.

Graphical Notation

A timer boundary event is visualized as a typical boundary event (circle on the border), with the timer icon on the inside.

bpmn.boundary.timer.event
XML Representation

A timer boundary event is defined as a regular boundary event. The specific type sub-element in this case is a timerEventDefinition element.

<boundaryEvent id="escalationTimer" cancelActivity="true" attachedToRef="firstLineSupport">
    <timerEventDefinition>
        <timeDuration>PT4H</timeDuration>
    </timerEventDefinition>
</boundaryEvent>

Please refer to timer event definition for details on timer configuration.

For a non-interrupting timer boundary event, the line of the circle is dotted in the graphical representation, as you can see in the example below:

bpmn.non.interrupting.boundary.timer.event

A typical use case is sending an escalation email after a period of time, but without affecting the normal process flow.

There is a key difference between the interrupting and non-interrupting timer event. Non-interrupting means the original activity is not interrupted, but stays as it was. Interrupting behavior is the default. In the XML representation, non-interrupting behavior is indicated by setting the cancelActivity attribute to false:

<boundaryEvent id="escalationTimer" cancelActivity="false" attachedToRef="firstLineSupport"/>

Note: boundary timer events are only fired when the job or async executor is enabled (jobExecutorActivate or asyncExecutorActivate needs to be set to true in the flowable.cfg.xml, since the job and async executor are disabled by default).

Known issue with boundary events

There is a known issue regarding concurrency when using boundary events of any type. Currently, it is not possible to have multiple outgoing sequence flows attached to a boundary event. A solution to this problem is to use one outgoing sequence flow that goes to a parallel gateway.

bpmn.known.issue.boundary.event

8.2.19. Error Boundary Event

Description

An intermediate catching error on the boundary of an activity, or boundary error event for short, catches errors that are thrown within the scope of the activity on which it is defined.

Defining a boundary error event makes most sense on an embedded sub-process or a call activity, as a sub-process creates a scope for all the activities inside it. Errors are thrown by error end events. Such an error will propagate upwards through its parent scopes until a scope is found on which a boundary error event is defined that matches the error event definition.

When an error event is caught, the activity on which the boundary event is defined is destroyed, also destroying all current executions within (concurrent activities, nested sub-processes, and so on). Process execution continues following the outgoing sequence flow of the boundary event.

Graphical notation

A boundary error event is visualized as a typical intermediate event (circle with smaller circle inside) on the boundary, with the error icon inside. The error icon is white, to indicate its catch semantics.

bpmn.boundary.error.event
XML representation

A boundary error event is defined as a typical boundary event:

<boundaryEvent id="catchError" attachedToRef="mySubProcess">
    <errorEventDefinition errorRef="myError"/>
</boundaryEvent>

As with the error end event, the errorRef references an error defined outside the process element:

<error id="myError" errorCode="123" />
...
<process id="myProcess">
...

The errorCode is used to match the errors that are caught:

  • If errorRef is omitted, the boundary error event will catch any error event, regardless of the errorCode of the error.

  • If an errorRef is provided and it references an existing error, the boundary event will only catch errors with the same error code.

  • If an errorRef is provided, but no error is defined in the BPMN 2.0 file, then the errorRef is used as the errorCode (similar to error end events).

Example

The following example process shows how an error end event can be used. When the 'Review profitability' user task is completed by saying that not enough information is provided, an error is thrown. When this error is caught on the boundary of the sub-process, all active activities within the 'Review sales lead' sub-process are destroyed (even if 'Review customer rating' had not yet been completed), and the 'Provide additional details' user task is created.

bpmn.boundary.error.example

This process is shipped as an example in the demo setup. The process XML and unit test can be found in the org.flowable.examples.bpmn.event.error package.

8.2.20. Signal Boundary Event

Description

An attached intermediate catching signal on the boundary of an activity, or boundary signal event for short, catches signals with the same signal name as the referenced signal definition.

Note: contrary to other events, such as the boundary error event, a boundary signal event doesn’t only catch signal events thrown from the scope to which it is attached. On the contrary, a signal event has global scope (broadcast semantics), meaning that the signal can be thrown from any place, even from a different process instance.

Note: contrary to other events, such as the error event, a signal is not consumed if it is caught. If you have two active signal boundary events catching the same signal event, both boundary events are triggered, even if they are part of different process instances.

Graphical notation

A boundary signal event is visualized as a typical intermediate event (circle with smaller circle inside) on the boundary, with the signal icon inside. The signal icon is white (unfilled), to indicate its catch semantics.

bpmn.boundary.signal.event
XML representation

A boundary signal event is defined as a typical boundary event:

<boundaryEvent id="boundary" attachedToRef="task" cancelActivity="true">
    <signalEventDefinition signalRef="alertSignal"/>
</boundaryEvent>
Example

See the section on signal event definitions.

8.2.21. Message Boundary Event

Description

An attached intermediate catching message on the boundary of an activity, or boundary message event for short, catches messages with the same message name as the referenced message definition.

Graphical notation

A boundary message event is visualized as a typical intermediate event (circle with smaller circle inside) on the boundary, with the message icon inside. The message icon is white (unfilled), to indicate its catch semantics.

bpmn.boundary.message.event

Note that boundary message event can be both interrupting (right-hand side) and non-interrupting (left-hand side).

XML representation

A boundary message event is defined as a typical boundary event:

<boundaryEvent id="boundary" attachedToRef="task" cancelActivity="true">
    <messageEventDefinition messageRef="newCustomerMessage"/>
</boundaryEvent>
Example

See the section on message event definitions.

8.2.22. Cancel Boundary Event

Description

An attached intermediate catching cancel event on the boundary of a transaction sub-process, or boundary cancel event for short, is triggered when a transaction is canceled. When the cancel boundary event is triggered, it first interrupts all active executions in the current scope. Next, it starts compensation for all active compensation boundary events in the scope of the transaction. Compensation is performed synchronously; in other words, the boundary event waits until compensation is completed before leaving the transaction. When compensation is completed, the transaction sub-process is left using any sequence flows running out of the cancel boundary event.

Note: Only a single cancel boundary event is allowed for a transaction sub-process.

Note: If the transaction sub-process hosts nested sub-processes, compensation is only triggered for sub-processes that have completed successfully.

Note: If a cancel boundary event is placed on a transaction sub-process with multi-instance characteristics and one instance triggers cancellation, the boundary event cancels all instances.

Graphical notation

A cancel boundary event is visualized as a typical intermediate event (circle with smaller circle inside) on the boundary, with the cancel icon inside. The cancel icon is white (unfilled), to indicate its catching semantics.

bpmn.boundary.cancel.event
XML representation

A cancel boundary event is defined as a typical boundary event:

<boundaryEvent id="boundary" attachedToRef="transaction" >
    <cancelEventDefinition />
</boundaryEvent>

As the cancel boundary event is always interrupting, the cancelActivity attribute is not required.

8.2.23. Compensation Boundary Event

Description

An attached intermediate catching compensation on the boundary of an activity, or compensation boundary event for short, can be used to attach a compensation handler to an activity.

The compensation boundary event must reference a single compensation handler using a directed association.

A compensation boundary event has a different activation policy from other boundary events. Other boundary events, such as the signal boundary event, are activated when the activity they are attached to is started. When the activity is finished, they are deactivated and the corresponding event subscription is canceled. The compensation boundary event is different. The compensation boundary event is activated when the activity it is attached to completes successfully. At this point, the corresponding subscription to the compensation events is created. The subscription is removed either when a compensation event is triggered or when the corresponding process instance ends. From this, it follows:

  • When compensation is triggered, the compensation handler associated with the compensation boundary event is invoked the same number of times the activity it is attached to completed successfully.

  • If a compensation boundary event is attached to an activity with multiple instance characteristics, a compensation event subscription is created for each instance.

  • If a compensation boundary event is attached to an activity that is contained inside a loop, a compensation event subscription is created each time the activity is executed.

  • If the process instance ends, the subscriptions to compensation events are canceled.

Note: the compensation boundary event is not supported on embedded sub-processes.

Graphical notation

A compensation boundary event is visualized as a typical intermediate event (circle with smaller circle inside) on the boundary, with the compensation icon inside. The compensation icon is white (unfilled), to indicate its catching semantics. In addition to a compensation boundary event, the following figure shows a compensation handler associated with the boundary event using a unidirectional association:

bpmn.boundary.compensation.event
XML representation

A compensation boundary event is defined as a typical boundary event:

<boundaryEvent id="compensateBookHotelEvt" attachedToRef="bookHotel" >
    <compensateEventDefinition />
</boundaryEvent>

<association associationDirection="One" id="a1"
    sourceRef="compensateBookHotelEvt" targetRef="undoBookHotel" />

<serviceTask id="undoBookHotel" isForCompensation="true" flowable:class="..." />

As the compensation boundary event is activated after the activity has completed successfully, the cancelActivity attribute is not supported.

8.2.24. Intermediate Catching Events

All intermediate catching events are defined in the same way:

<intermediateCatchEvent id="myIntermediateCatchEvent" >
    <XXXEventDefinition/>
</intermediateCatchEvent>

An intermediate catching event is defined with:

  • A unique identifier (process-wide)

  • An XML sub-element of the form XXXEventDefinition (for example, TimerEventDefinition) defining the type of the intermediate catching event. See the specific catching event types for more details.

8.2.25. Timer Intermediate Catching Event

Description

A timer intermediate event acts as a stopwatch. When an execution arrives at a catching event activity, a timer is started. When the timer fires (for example, after a specified interval), the sequence flow going out of the timer intermediate event is followed.

Graphical Notation

A timer intermediate event is visualized as an intermediate catching event, with the timer icon on the inside.

bpmn.intermediate.timer.event
XML Representation

A timer intermediate event is defined as an intermediate catching event. The specific type sub-element is, in this case, a timerEventDefinition element.

<intermediateCatchEvent id="timer">
    <timerEventDefinition>
        <timeDuration>PT5M</timeDuration>
    </timerEventDefinition>
</intermediateCatchEvent>

See timer event definitions for configuration details.

8.2.26. Signal Intermediate Catching Event

Description

An intermediate catching signal event catches signals with the same signal name as the referenced signal definition.

Note: contrary to other events, such as an error event, a signal is not consumed if it is caught. If you have two active signal boundary events catching the same signal event, both boundary events are triggered, even if they are part of different process instances.

Graphical notation

An intermediate signal catch event is visualized as a typical intermediate event (circle with smaller circle inside), with the signal icon inside. The signal icon is white (unfilled), to indicate its catch semantics.

bpmn.intermediate.signal.catch.event
XML representation

A signal intermediate event is defined as an intermediate catching event. The specific type sub-element is in this case a signalEventDefinition element.

<intermediateCatchEvent id="signal">
    <signalEventDefinition signalRef="newCustomerSignal" />
</intermediateCatchEvent>
Example

See the section on signal event definitions.

8.2.27. Message Intermediate Catching Event

Description

An intermediate catching message event catches messages with a specified name.

Graphical notation

An intermediate catching message event is visualized as a typical intermediate event (circle with smaller circle inside), with the message icon inside. The message icon is white (unfilled), to indicate its catch semantics.

bpmn.intermediate.message.catch.event
XML representation

A message intermediate event is defined as an intermediate catching event. The specific type sub-element is in this case a messageEventDefinition element.

<intermediateCatchEvent id="message">
    <messageEventDefinition messageRef="newCustomerMessage" />
</intermediateCatchEvent>
Example

See the section on message event definitions.

8.2.28. Intermediate Throwing Event

All intermediate throwing events are defined in the same way:

<intermediateThrowEvent id="myIntermediateThrowEvent" >
    <XXXEventDefinition/>
</intermediateThrowEvent>

An intermediate throwing event is defined with:

  • A unique identifier (process-wide)

  • An XML sub-element of the form XXXEventDefinition (for example, signalEventDefinition) defining the type of the intermediate throwing event. See the specific throwing event types for more details.

8.2.29. Intermediate Throwing None Event

The following process diagram shows a simple example of an intermediate none event, which is often used to indicate some state achieved in the process.

bpmn.intermediate.none.event

This can be a good hook to monitor some KPIs, by adding an execution listener.

<intermediateThrowEvent id="noneEvent">
    <extensionElements>
        <flowable:executionListener class="org.flowable.engine.test.bpmn.event.IntermediateNoneEventTest$MyExecutionListener" event="start" />
    </extensionElements>
</intermediateThrowEvent>

Here you can add some of your own code to maybe send some event to your BAM tool or DWH. The engine itself doesn’t do anything in that case, it just passes through.
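A minimal listener sketch is shown below; the class name and the KPI logic are purely illustrative:

import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.ExecutionListener;

public class MyExecutionListener implements ExecutionListener {

  @Override
  public void notify(DelegateExecution execution) {
    // for example, push a KPI event to a monitoring or data warehouse system
    System.out.println("Milestone reached by process instance " + execution.getProcessInstanceId());
  }
}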

8.2.30. Signal Intermediate Throwing Event

Description

An intermediate throwing signal event throws a signal event for a defined signal.

In Flowable, the signal is broadcast to all active handlers (in other words, all catching signal events). Signals can be published synchronously or asynchronously.

  • In the default configuration, the signal is delivered synchronously. This means that the throwing process instance waits until the signal is delivered to all catching process instances. The catching process instances are also notified in the same transaction as the throwing process instance, which means that if one of the notified instances produces a technical error (throws an exception), all involved instances fail.

  • A signal can also be delivered asynchronously. In this case it is determined which handlers are active at the time the throwing signal event is reached. For each active handler, an asynchronous notification message (Job) is stored and delivered by the JobExecutor.

Graphical notation

An intermediate signal throw event is visualized as a typical intermediate event (circle with smaller circle inside), with the signal icon inside. The signal icon is black (filled), to indicate its throw semantics.

bpmn.intermediate.signal.throw.event
XML representation

A signal intermediate event is defined as an intermediate throwing event. The specific type sub-element is in this case a signalEventDefinition element.

<intermediateThrowEvent id="signal">
    <signalEventDefinition signalRef="newCustomerSignal" />
</intermediateThrowEvent>

An asynchronous signal event would look like this:

<intermediateThrowEvent id="signal">
    <signalEventDefinition signalRef="newCustomerSignal" flowable:async="true" />
</intermediateThrowEvent>
Example

See the section on signal event definitions.

8.2.31. Compensation Intermediate Throwing Event

Description

An intermediate throwing compensation event can be used to trigger compensation.

Triggering compensation: Compensation can either be triggered for a designated activity or for the scope that hosts the compensation event. Compensation is performed through execution of the compensation handler associated with an activity.

  • When compensation is thrown for an activity, the associated compensation handler is executed the same number of times the activity completed successfully.

  • If compensation is thrown for the current scope, all activities within the current scope are compensated, which includes activities on concurrent branches.

  • Compensation is triggered hierarchically: if the activity to be compensated is a sub-process, compensation is triggered for all activities contained in the sub-process. If the sub-process has nested activities, compensation is thrown recursively. However, compensation is not propagated to the "upper levels" of the process: if compensation is triggered within a sub-process, it is not propagated to activities outside of the sub-process scope. The BPMN specification states that compensation is triggered for activities at "the same level of sub-process".

  • In Flowable, compensation is performed in reverse order of execution. This means that whichever activity completed last is compensated first, and so on.

  • The intermediate throwing compensation event can be used to compensate transaction sub-processes that completed successfully.

Note: If compensation is thrown within a scope that contains a sub-process, and the sub-process contains activities with compensation handlers, compensation is only propagated to the sub-process if it has completed successfully when compensation is thrown. If some of the activities nested inside the sub-process have completed and have attached compensation handlers, the compensation handlers are not executed if the sub-process containing these activities is not completed yet. Consider the following example:

bpmn.throw.compensation.example1

In this process we have two concurrent executions: one executing the embedded sub-process and one executing the "charge credit card" activity. Let’s assume both executions are started and the first concurrent execution is waiting for a user to complete the "review bookings" task. The second execution performs the "charge credit card" activity and an error is thrown, which causes the "cancel reservations" event to trigger compensation. At this point the parallel sub-process is not yet completed which means that the compensation event is not propagated to the sub-process and consequently the "cancel hotel reservation" compensation handler is not executed. If the user task (and therefore the embedded sub-process) completes before the "cancel reservations" is performed, compensation is propagated to the embedded sub-process.

Process variables: When compensating an embedded sub-process, the execution used for executing the compensation handlers has access to the local process variables of the sub-process in the state they were in when the sub-process completed execution. To achieve this, a snapshot of the process variables associated with the scope execution (execution created for executing the sub-process) is taken. From this, a couple of implications follow:

  • The compensation handler does not have access to variables added to concurrent executions created inside the sub-process scope.

  • Process variables associated with executions higher up in the hierarchy (for instance, process variables associated with the process instance execution) are not contained in the snapshot: the compensation handler has access to these process variables in the state they are in when compensation is thrown.

  • A variable snapshot is only taken for embedded sub-processes, not for other activities.

Current limitations:

  • waitForCompletion="false" is currently unsupported. When compensation is triggered using the intermediate throwing compensation event, the event is only left after compensation completed successfully.

  • Compensation itself is currently performed by concurrent executions. The concurrent executions are started in reverse order to which the compensated activities completed.

  • Compensation is not propagated to sub-process instances spawned by call activities.

Graphical notation

An intermediate compensation throw event is visualized as a typical intermediate event (circle with smaller circle inside), with the compensation icon inside. The compensation icon is black (filled), to indicate its throw semantics.

bpmn.intermediate.compensation.throw.event
XML representation

A compensation intermediate event is defined as an intermediate throwing event. The specific type sub-element is in this case a compensateEventDefinition element.

<intermediateThrowEvent id="throwCompensation">
    <compensateEventDefinition />
</intermediateThrowEvent>

In addition, the optional argument activityRef can be used to trigger compensation of a specific scope or activity:

<intermediateThrowEvent id="throwCompensation">
    <compensateEventDefinition activityRef="bookHotel" />
</intermediateThrowEvent>

8.3. Sequence Flow

8.3.1. Description

A sequence flow is the connector between two elements of a process. After an element is visited during process execution, all outgoing sequence flows will be followed. This means that the default nature of BPMN 2.0 is to be parallel: two outgoing sequence flows will create two separate, parallel paths of execution.

8.3.2. Graphical notation

A sequence flow is visualized as an arrow going from the source element towards the target element. The arrow always points towards the target.

bpmn.sequence.flow

8.3.3. XML representation

Sequence flows need to have a process-unique id and references to an existing source and target element.

<sequenceFlow id="flow1" sourceRef="theStart" targetRef="theTask" />

8.3.4. Conditional sequence flow

Description

A sequence flow can have a condition defined on it. When a BPMN 2.0 activity is left, the default behavior is to evaluate the conditions on the outgoing sequence flows. When a condition evaluates to true, that outgoing sequence flow is selected. When multiple sequence flows are selected that way, multiple executions will be generated and the process will be continued in a parallel way.

Note: the above holds for BPMN 2.0 activities (and events), but not for gateways. Gateways will handle sequence flows with conditions in specific ways, depending on the gateway type.

Graphical notation

A conditional sequence flow is visualized as a regular sequence flow, with a small diamond at the beginning. The condition expression is shown next to the sequence flow.

bpmn.conditional.sequence.flow
XML representation

A conditional sequence flow is represented in XML as a regular sequence flow, containing a conditionExpression sub-element. Note that currently only tFormalExpressions are supported. Omitting the xsi:type="" definition will simply default to this only supported type of expression.

<sequenceFlow id="flow" sourceRef="theStart" targetRef="theTask">
    <conditionExpression xsi:type="tFormalExpression">
        <![CDATA[${order.price > 100 && order.price < 250}]]>
    </conditionExpression>
</sequenceFlow>

Currently, conditionExpressions can only be used with UEL. Detailed information about these can be found in the section on Expressions. The expression used should resolve to a boolean value, otherwise an exception is thrown while evaluating the condition.

  • The example below references the data of a process variable, in the typical JavaBean style through getters.

<conditionExpression xsi:type="tFormalExpression">
    <![CDATA[${order.price > 100 && order.price < 250}]]>
</conditionExpression>
  • This example invokes a method that resolves to a boolean value.

<conditionExpression xsi:type="tFormalExpression">
    <![CDATA[${order.isStandardOrder()}]]>
</conditionExpression>
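For instance, the order variable referenced in these conditions could be a plain JavaBean that is stored as a process variable when the process instance is started. The bean below is a simplified sketch; the property and the business rule are assumptions made for illustration:

public class Order implements java.io.Serializable {

  private double price;

  public double getPrice() {
    return price;
  }

  public void setPrice(double price) {
    this.price = price;
  }

  public boolean isStandardOrder() {
    return price < 100;  // illustrative business rule
  }
}

The bean instance is then passed as a regular process variable (the process definition key orderProcess is an assumption):

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("order", order);
runtimeService.startProcessInstanceByKey("orderProcess", variables);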

The Flowable distribution contains the following example process using value and method expressions (see org.flowable.examples.bpmn.expression):

bpmn.uel expression.on.seq.flow

8.3.5. Default sequence flow

Description

All BPMN 2.0 tasks and gateways can have a default sequence flow. This sequence flow is selected as the outgoing sequence flow for that activity only if none of the other sequence flows could be selected. Conditions on a default sequence flow are always ignored.

Graphical notation

A default sequence flow is visualized as a regular sequence flow, with a slash marker at the beginning.

bpmn.default.sequence.flow
XML representation

A default sequence flow for a certain activity is defined by the default attribute on that activity. The following XML snippet shows an example of an exclusive gateway that has flow2 as its default sequence flow. Only when conditionA and conditionB both evaluate to false will flow2 be chosen as the outgoing sequence flow for the gateway.

<exclusiveGateway id="exclusiveGw" name="Exclusive Gateway" default="flow2" />

<sequenceFlow id="flow1" sourceRef="exclusiveGw" targetRef="task1">
    <conditionExpression xsi:type="tFormalExpression">${conditionA}</conditionExpression>
</sequenceFlow>

<sequenceFlow id="flow2" sourceRef="exclusiveGw" targetRef="task2"/>

<sequenceFlow id="flow3" sourceRef="exclusiveGw" targetRef="task3">
    <conditionExpression xsi:type="tFormalExpression">${conditionB}</conditionExpression>
</sequenceFlow>


8.4. Gateways

A gateway is used to control the flow of execution (or as the BPMN 2.0 describes, the tokens of execution). A gateway is capable of consuming or generating tokens.

A gateway is graphically visualized as a diamond shape, with an icon inside. The icon shows the type of gateway.

bpmn.gateway

8.4.1. Exclusive Gateway

Description

An exclusive gateway (also called the XOR gateway, or, more technically, the exclusive data-based gateway) is used to model a decision in the process. When the execution arrives at this gateway, all outgoing sequence flows are evaluated in the order in which they are defined. The first sequence flow whose condition evaluates to true (or which doesn't have a condition set, conceptually having a 'true' defined on the sequence flow) is selected for continuing the process.

Note that the semantics of the outgoing sequence flow is different in this case to that of the general case in BPMN 2.0. While, in general, all sequence flows whose condition evaluates to true are selected to continue in a parallel way, only one sequence flow is selected when using the exclusive gateway. If multiple sequence flows have a condition that evaluates to true, the first one defined in the XML (and only that one!) is selected for continuing the process. If no sequence flow can be selected, an exception will be thrown.

Graphical notation

An exclusive gateway is visualized as a typical gateway (a diamond shape) with an X icon inside, referring to the XOR semantics. Note that a gateway without an icon inside defaults to an exclusive gateway. The BPMN 2.0 specification does not permit use of both the diamond with and without an X in the same process definition.

bpmn.exclusive.gateway.notation
XML representation

The XML representation of an exclusive gateway is straight-forward: one line defining the gateway and condition expressions defined on the outgoing sequence flows. See the section on conditional sequence flow to see which options are available for such expressions.

Take, for example, the following model:

bpmn.exclusive.gateway

Which is represented in XML as follows:

<exclusiveGateway id="exclusiveGw" name="Exclusive Gateway" />

<sequenceFlow id="flow2" sourceRef="exclusiveGw" targetRef="theTask1">
    <conditionExpression xsi:type="tFormalExpression">${input == 1}</conditionExpression>
</sequenceFlow>

<sequenceFlow id="flow3" sourceRef="exclusiveGw" targetRef="theTask2">
    <conditionExpression xsi:type="tFormalExpression">${input == 2}</conditionExpression>
</sequenceFlow>

<sequenceFlow id="flow4" sourceRef="exclusiveGw" targetRef="theTask3">
    <conditionExpression xsi:type="tFormalExpression">${input == 3}</conditionExpression>
</sequenceFlow>
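To see the routing in action, a minimal sketch follows (it assumes the definition above is deployed under the hypothetical process definition key exclusiveGateway and that runtimeService and taskService are available, as in the other examples in this guide):

// the value of the "input" variable decides which outgoing sequence flow is taken
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("input", 2);
ProcessInstance pi = runtimeService.startProcessInstanceByKey("exclusiveGateway", variables);

// only theTask2 should now be active, since ${input == 2} is the first condition that holds
Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).singleResult();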

8.4.2. Parallel Gateway

Description

Gateways can also be used to model concurrency in a process. The most straightforward gateway to introduce concurrency in a process model, is the Parallel Gateway, which allows you to fork into multiple paths of execution or join multiple incoming paths of execution.

The functionality of the parallel gateway is based on the incoming and outgoing sequence flow:

  • fork: all outgoing sequence flows are followed in parallel, creating one concurrent execution for each sequence flow.

  • join: all concurrent executions arriving at the parallel gateway wait in the gateway until an execution has arrived for each of the incoming sequence flows. Then the process continues past the joining gateway.

Note that a parallel gateway can have both fork and join behavior, if there are multiple incoming and outgoing sequence flows for the same parallel gateway. In this case, the gateway will first join all incoming sequence flows before splitting into multiple concurrent paths of executions.

An important difference with other gateway types is that the parallel gateway does not evaluate conditions. If conditions are defined on the sequence flows connected with the parallel gateway, they are simply ignored.

Graphical Notation

A parallel gateway is visualized as a gateway (diamond shape) with the plus symbol inside, referring to the AND semantics.

bpmn.parallel.gateway
XML representation

Defining a parallel gateway needs one line of XML:

<parallelGateway id="myParallelGateway" />

The actual behavior (fork, join or both), is defined by the sequence flow connected to the parallel gateway.

For example, the model above comes down to the following XML:

<startEvent id="theStart" />
<sequenceFlow id="flow1" sourceRef="theStart" targetRef="fork" />

<parallelGateway id="fork" />
<sequenceFlow sourceRef="fork" targetRef="receivePayment" />
<sequenceFlow sourceRef="fork" targetRef="shipOrder" />

<userTask id="receivePayment" name="Receive Payment" />
<sequenceFlow sourceRef="receivePayment" targetRef="join" />

<userTask id="shipOrder" name="Ship Order" />
<sequenceFlow sourceRef="shipOrder" targetRef="join" />

<parallelGateway id="join" />
<sequenceFlow sourceRef="join" targetRef="archiveOrder" />

<userTask id="archiveOrder" name="Archive Order" />
<sequenceFlow sourceRef="archiveOrder" targetRef="theEnd" />

<endEvent id="theEnd" />

In the example above, after the process is started, two tasks will be created:

ProcessInstance pi = runtimeService.startProcessInstanceByKey("forkJoin");
TaskQuery query = taskService.createTaskQuery()
    .processInstanceId(pi.getId())
    .orderByTaskName()
    .asc();

List<Task> tasks = query.list();
assertEquals(2, tasks.size());

Task task1 = tasks.get(0);
assertEquals("Receive Payment", task1.getName());
Task task2 = tasks.get(1);
assertEquals("Ship Order", task2.getName());

When these two tasks are completed, the second parallel gateway will join the two executions and since there is only one outgoing sequence flow, no concurrent paths of execution will be created, and only the Archive Order task will be active.
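Continuing the test above as a sketch, completing both tasks indeed leaves only the Archive Order task active:

// complete the two parallel tasks; the joining parallel gateway then merges the executions
for (Task task : tasks) {
    taskService.complete(task.getId());
}

// only the "Archive Order" task should be left
Task archiveTask = query.singleResult();
assertEquals("Archive Order", archiveTask.getName());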

Note that a parallel gateway does not need to be balanced (a matching number of incoming/outgoing sequence flows for corresponding parallel gateways). A parallel gateway simply waits for all incoming sequence flows and creates a concurrent path of execution for each outgoing sequence flow, not influenced by other constructs in the process model. So, the following process is legal in BPMN 2.0:

bpmn.unbalanced.parallel.gateway

8.4.3. Inclusive Gateway

Description

The Inclusive Gateway can be seen as a combination of an exclusive and a parallel gateway. Like an exclusive gateway you can define conditions on outgoing sequence flows and the inclusive gateway will evaluate them. But the main difference is that the inclusive gateway can take more than one sequence flow, like the parallel gateway.

The functionality of the inclusive gateway is based on the incoming and outgoing sequence flows:

  • fork: all outgoing sequence flow conditions are evaluated and for the sequence flow conditions that evaluate to true the flows are followed in parallel, creating one concurrent execution for each sequence flow.

  • join: all concurrent executions arriving at the inclusive gateway wait at the gateway until an execution has arrived for each of the incoming sequence flows that have a process token. This is an important difference with the parallel gateway. So, in other words, the inclusive gateway will only wait for the incoming sequence flows that will be executed. After the join, the process continues past the joining inclusive gateway.

Note that an inclusive gateway can have both fork and join behavior, if there are multiple incoming and outgoing sequence flows for the same inclusive gateway. In this case, the gateway will first join all incoming sequence flows that have a process token, before splitting into multiple concurrent paths of executions for the outgoing sequence flows that have a condition that evaluates to true.

Graphical Notation

An inclusive gateway is visualized as a gateway (diamond shape) with the circle symbol inside.

bpmn.inclusive.gateway
XML representation

Defining an inclusive gateway needs one line of XML:

<inclusiveGateway id="myInclusiveGateway" />

The actual behavior (fork, join or both), is defined by the sequence flows connected to the inclusive gateway.

For example, the model above comes down to the following XML:

<startEvent id="theStart" />
<sequenceFlow id="flow1" sourceRef="theStart" targetRef="fork" />

<inclusiveGateway id="fork" />
<sequenceFlow sourceRef="fork" targetRef="receivePayment" >
    <conditionExpression xsi:type="tFormalExpression">${paymentReceived == false}</conditionExpression>
</sequenceFlow>
<sequenceFlow sourceRef="fork" targetRef="shipOrder" >
    <conditionExpression xsi:type="tFormalExpression">${shipOrder == true}</conditionExpression>
</sequenceFlow>

<userTask id="receivePayment" name="Receive Payment" />
<sequenceFlow sourceRef="receivePayment" targetRef="join" />

<userTask id="shipOrder" name="Ship Order" />
<sequenceFlow sourceRef="shipOrder" targetRef="join" />

<inclusiveGateway id="join" />
<sequenceFlow sourceRef="join" targetRef="archiveOrder" />

<userTask id="archiveOrder" name="Archive Order" />
<sequenceFlow sourceRef="archiveOrder" targetRef="theEnd" />

<endEvent id="theEnd" />

In the example above, after the process is started, two tasks will be created if the process variables satisfy paymentReceived == false and shipOrder == true. If only one of these conditions evaluates to true, only one task will be created. If no condition evaluates to true, an exception is thrown. This can be prevented by specifying a default outgoing sequence flow. In the following example, one task will be created, the Ship Order task:

HashMap<String, Object> variableMap = new HashMap<String, Object>();
variableMap.put("paymentReceived", true);
variableMap.put("shipOrder", true);
ProcessInstance pi = runtimeService.startProcessInstanceByKey("forkJoin", variableMap);

TaskQuery query = taskService.createTaskQuery()
    .processInstanceId(pi.getId())
    .orderByTaskName()
    .asc();

List<Task> tasks = query.list();
assertEquals(1, tasks.size());

Task task = tasks.get(0);
assertEquals("Ship Order", task.getName());

When this task is completed, the second inclusive gateway will join the two executions and as there is only one outgoing sequence flow, no concurrent paths of execution will be created, and only the Archive Order task will be active.

Note that an inclusive gateway does not need to be balanced (a matching number of incoming/outgoing sequence flows for corresponding inclusive gateways). An inclusive gateway simply waits for all incoming sequence flows that have a process token and creates a concurrent path of execution for each outgoing sequence flow whose condition evaluates to true, not influenced by other constructs in the process model.

8.4.4. Event-based Gateway

Description

The Event-based Gateway provides a way to take a decision based on events. Each outgoing sequence flow of the gateway needs to be connected to an intermediate catching event. When process execution reaches an Event-based Gateway, the gateway acts like a wait state: execution is suspended. In addition, for each outgoing sequence flow, an event subscription is created.

Note that the sequence flows running out of an Event-based Gateway are different from ordinary sequence flows. These sequence flows are never actually "executed". Instead, they allow the process engine to determine which events an execution arriving at an Event-based Gateway needs to subscribe to. The following restrictions apply:

  • An Event-based Gateway must have two or more outgoing sequence flows.

  • An Event-based Gateway must only be connected to elements of type intermediateCatchEvent (Receive Tasks after an Event-based Gateway are not supported by Flowable).

  • An intermediateCatchEvent connected to an Event-based Gateway must have a single incoming sequence flow.

Graphical notation

An Event-based Gateway is visualized as a diamond shape like other BPMN gateways with a special icon inside.

bpmn.event.based.gateway.notation
XML representation

The XML element used to define an Event-based Gateway is eventBasedGateway.

Example(s)

The following process is an example of a process with an Event-based Gateway. When the execution arrives at the Event-based Gateway, process execution is suspended. In addition, the process instance subscribes to the alert signal event and creates a timer that fires after 10 minutes. This effectively causes the process engine to wait for ten minutes for a signal event. If the signal occurs within 10 minutes, the timer is cancelled and execution continues after the signal. If the signal is not fired, execution continues after the timer and the signal subscription is cancelled.

bpmn.event.based.gateway.example
<definitions id="definitions"
        xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
        xmlns:flowable="http://flowable.org/bpmn"
        targetNamespace="Examples">

  <signal id="alertSignal" name="alert" />

  <process id="catchSignal">

    <startEvent id="start" />

    <sequenceFlow sourceRef="start" targetRef="gw1" />

    <eventBasedGateway id="gw1" />

    <sequenceFlow sourceRef="gw1" targetRef="signalEvent" />
    <sequenceFlow sourceRef="gw1" targetRef="timerEvent" />

    <intermediateCatchEvent id="signalEvent" name="Alert">
      <signalEventDefinition signalRef="alertSignal" />
    </intermediateCatchEvent>

    <intermediateCatchEvent id="timerEvent" name="Alert">
      <timerEventDefinition>
        <timeDuration>PT10M</timeDuration>
      </timerEventDefinition>
    </intermediateCatchEvent>

    <sequenceFlow sourceRef="timerEvent" targetRef="exGw1" />
    <sequenceFlow sourceRef="signalEvent" targetRef="task" />

    <userTask id="task" name="Handle alert"/>

    <exclusiveGateway id="exGw1" />

    <sequenceFlow sourceRef="task" targetRef="exGw1" />
    <sequenceFlow sourceRef="exGw1" targetRef="end" />

    <endEvent id="end" />
  </process>
</definitions>
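Once the process instance is waiting at the Event-based Gateway, the alert signal can be fired through the RuntimeService. A minimal sketch, assuming runtimeService is available as in the earlier examples:

// start a process instance of the definition above
ProcessInstance pi = runtimeService.startProcessInstanceByKey("catchSignal");

// fire the "alert" signal before the 10 minute timer expires;
// the timer is cancelled and the "Handle alert" task is created
runtimeService.signalEventReceived("alert");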

8.5. Tasks

8.5.1. User Task

Description

A user task is used to model work that needs to be done by a human. When the process execution arrives at such a user task, a new task is created in the task list of any users or groups assigned to that task.

Graphical notation

A user task is visualized as a typical task (rounded rectangle), with a small user icon in the left upper corner.

bpmn.user.task
XML representation

A user task is defined in XML as follows. The id attribute is required, the name attribute is optional.

<userTask id="theTask" name="Important task" />

A user task can also have a description. In fact, any BPMN 2.0 element can have a description. A description is defined by adding the documentation element.

<userTask id="theTask" name="Schedule meeting" >
  <documentation>
     Schedule an engineering meeting for next week with the new hire.
  </documentation>

The description text can be retrieved from the task in the standard Java way:

task.getDescription()
Due Date

Each task has a field indicating the due date of that task. The Query API can be used to query for tasks that are due on, before or after a given date.
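For example, overdue tasks could be retrieved as follows (a sketch, assuming taskService is available; the exact query method names may differ slightly between Flowable versions):

// all tasks with a due date before now, in other words overdue tasks
List<Task> overdueTasks = taskService.createTaskQuery()
    .taskDueBefore(new Date())
    .list();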

There is an activity extension that allows you to specify an expression in your task definition to set the initial due date of a task when it is created. The expression should always resolve to a java.util.Date, a java.lang.String (ISO8601 formatted), an ISO8601 time-duration (for example, PT50M) or null. For example, you could use a date that was entered in a previous form in the process or calculated in a previous Service Task. If a time-duration is used, the due date is calculated based on the current time and incremented by the given period. For example, when "PT30M" is used as dueDate, the task is due in thirty minutes from now.

<userTask id="theTask" name="Important task" flowable:dueDate="${dateVariable}"/>

The due date of a task can also be altered using the TaskService or in TaskListeners using the passed DelegateTask.
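As a sketch of the TaskService approach (assuming task is an existing Task instance):

// push the due date of an existing task one week into the future
Calendar dueDate = Calendar.getInstance();
dueDate.add(Calendar.DAY_OF_YEAR, 7);
taskService.setDueDate(task.getId(), dueDate.getTime());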

User assignment

A user task can be directly assigned to a user. This is done by defining a humanPerformer sub element. Such a humanPerformer definition needs a resourceAssignmentExpression that actually defines the user. Currently, only formalExpressions are supported.

<process >

  ...

  <userTask id='theTask' name='important task' >
    <humanPerformer>
      <resourceAssignmentExpression>
        <formalExpression>kermit</formalExpression>
      </resourceAssignmentExpression>
    </humanPerformer>
  </userTask>

Only one user can be assigned as the human performer for the task. In Flowable terminology, this user is called the assignee. Tasks that have an assignee are not visible in the task lists of other people and can be found in the personal task list of the assignee instead.

Tasks directly assigned to users can be retrieved through the TaskService as follows:

List<Task> tasks = taskService.createTaskQuery().taskAssignee("kermit").list();

Tasks can also be put in the candidate task list of people. In this case, the potentialOwner construct must be used. The usage is similar to the humanPerformer construct. Do note that for each element in the formal expression it is necessary to specify whether it is a user or a group (the engine cannot guess this).

<process >

  ...

  <userTask id='theTask' name='important task' >
    <potentialOwner>
      <resourceAssignmentExpression>
        <formalExpression>user(kermit), group(management)</formalExpression>
      </resourceAssignmentExpression>
    </potentialOwner>
  </userTask>

Tasks defined with the potential owner construct can be retrieved as follows (or a similar TaskQuery usage as for the tasks with an assignee):

List<Task> tasks = taskService.createTaskQuery().taskCandidateUser("kermit").list();

This will retrieve all tasks where kermit is a candidate user, in other words, the formal expression contains user(kermit). This will also retrieve all tasks that are assigned to a group of which kermit is a member (for example, group(management), if kermit is a member of that group and the Flowable identity component is used). The user’s groups are resolved at runtime and these can be managed through the IdentityService.
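Similarly, tasks can be queried by candidate group, for example:

// tasks for which the "management" group is a candidate
List<Task> managementTasks = taskService.createTaskQuery()
    .taskCandidateGroup("management")
    .list();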

If no specifics are given as to whether the given text string is a user or group, the engine defaults to group. The following would be the same as when group(accountancy) was declared.

<formalExpression>accountancy</formalExpression>
Flowable extensions for task assignment

It is clear that user and group assignments are quite cumbersome for use cases where the assignment is not complex. To avoid these complexities, custom extensions on the user task are possible.

  • assignee attribute: this custom extension allows direct assignment of a given user to a task.

<userTask id="theTask" name="my task" flowable:assignee="kermit" />

This is exactly the same as using a humanPerformer construct as defined above.

  • candidateUsers attribute: this custom extension makes a given user a candidate for a task.

<userTask id="theTask" name="my task" flowable:candidateUsers="kermit, gonzo" />

This is exactly the same as using the potentialOwner construct as defined above. Note that it is not necessary to use the user(kermit) declaration, as with the case of the potential owner construct, since the attribute can only be used for users.

  • candidateGroups attribute: this custom extension makes a given group a candidate for a task.

<userTask id="theTask" name="my task" flowable:candidateGroups="management, accountancy" />

This is exactly the same as using a potentialOwner construct as defined above. Note that it is not necessary to use the group(management) declaration, as with the case of the potential owner construct, since the attribute can only be used for groups.

  • candidateUsers and candidateGroups can both be defined on the same user task.

Note: Although Flowable provides an identity management component, which is exposed through the IdentityService, no check is made whether a provided user is known by the identity component. This is to allow Flowable to integrate with existing identity management solutions when it is embedded in an application.

Custom identity link types

The BPMN standard supports a single assigned user or humanPerformer or a set of users that form a potential pool of potentialOwners, as defined in User assignment. In addition, Flowable defines extension attribute elements for the User Task that can represent the task assignee or candidate owner.

The supported Flowable identity link types are:

public class IdentityLinkType {

  /* Flowable native roles */
  public static final String ASSIGNEE = "assignee";
  public static final String CANDIDATE = "candidate";
  public static final String OWNER = "owner";
  public static final String STARTER = "starter";
  public static final String PARTICIPANT = "participant";
}

The BPMN standard and Flowable example authorization identities are user and group. As mentioned in the previous section, the Flowable identity management implementation is not intended for production use, but should be extended depending on the supported authorization scheme.

If additional link types are required, custom resources can be defined as extension elements with the following syntax:

<userTask id="theTask" name="make profit">
  <extensionElements>
    <flowable:customResource flowable:name="businessAdministrator">
      <resourceAssignmentExpression>
        <formalExpression>user(kermit), group(management)</formalExpression>
      </resourceAssignmentExpression>
    </flowable:customResource>
  </extensionElements>
</userTask>

The custom link expressions are added to the TaskDefinition class:

protected Map<String, Set<Expression>> customUserIdentityLinkExpressions =
    new HashMap<String, Set<Expression>>();
protected Map<String, Set<Expression>> customGroupIdentityLinkExpressions =
    new HashMap<String, Set<Expression>>();

public Map<String, Set<Expression>> getCustomUserIdentityLinkExpressions() {
  return customUserIdentityLinkExpressions;
}

public void addCustomUserIdentityLinkExpression(
        String identityLinkType, Set<Expression> idList) {
  customUserIdentityLinkExpressions.put(identityLinkType, idList);
}

public Map<String, Set<Expression>> getCustomGroupIdentityLinkExpressions() {
  return customGroupIdentityLinkExpressions;
}

public void addCustomGroupIdentityLinkExpression(
        String identityLinkType, Set<Expression> idList) {
  customGroupIdentityLinkExpressions.put(identityLinkType, idList);
}

These are populated at runtime by the UserTaskActivityBehavior handleAssignments method.

Finally, the IdentityLinkType class must be extended to support the custom identity link types:

package com.yourco.engine.task;

public class IdentityLinkType extends org.flowable.engine.task.IdentityLinkType {

  public static final String ADMINISTRATOR = "administrator";

  public static final String EXCLUDED_OWNER = "excludedOwner";
}
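At runtime, identity links of such custom types can also be added and inspected through the TaskService. A sketch (the task instance and the "businessAdministrator" link type are illustrative, matching the XML example above):

// add a custom "businessAdministrator" link to an existing task
taskService.addUserIdentityLink(task.getId(), "kermit", "businessAdministrator");
taskService.addGroupIdentityLink(task.getId(), "management", "businessAdministrator");

// inspect all identity links (assignee, candidates, custom types, ...) for the task
List<IdentityLink> identityLinks = taskService.getIdentityLinksForTask(task.getId());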
Custom Assignment via task listeners

If the previous approaches are not sufficient, it is possible to delegate to custom assignment logic using a task listener on the create event:

<userTask id="task1" name="My task" >
  <extensionElements>
    <flowable:taskListener event="create" class="org.flowable.MyAssignmentHandler" />
  </extensionElements>
</userTask>

The DelegateTask that is passed to the TaskListener implementation can set the assignee and candidate-users/groups:

public class MyAssignmentHandler implements TaskListener {

  public void notify(DelegateTask delegateTask) {
    // Execute custom identity lookups here

    // and then for example call following methods:
    delegateTask.setAssignee("kermit");
    delegateTask.addCandidateUser("fozzie");
    delegateTask.addCandidateGroup("management");
    ...
  }

}

When using Spring, it is possible to use the custom assignment attributes as described in the section above, and delegate to a Spring bean using a task listener with an expression that listens to task create events. In the following example, the assignee will be set by calling the findManagerForEmployee method on the ldapService Spring bean. The emp parameter that is passed is a process variable.

<userTask id="task" name="My Task" flowable:assignee="${ldapService.findManagerForEmployee(emp)}"/>

This also works similarly for candidate users and groups:

<userTask id="task" name="My Task" flowable:candidateUsers="${ldapService.findAllSales()}"/>

Note that this will only work if the return type of the invoked method is String or Collection<String> (for candidate users and groups):

public class FakeLdapService {

  public String findManagerForEmployee(String employee) {
    return "Kermit The Frog";
  }

  public List<String> findAllSales() {
    return Arrays.asList("kermit", "gonzo", "fozzie");
  }

}

8.5.2. Script Task

Description

A script task is an automatic activity. When a process execution arrives at the script task, the corresponding script is executed.

Graphical Notation

A script task is visualized as a typical BPMN 2.0 task (rounded rectangle), with a small script icon in the top-left corner of the rectangle.

bpmn.scripttask
XML representation

A script task is defined by specifying the script and the scriptFormat.

<scriptTask id="theScriptTask" name="Execute script" scriptFormat="groovy">
  <script>
    sum = 0
    for ( i in inputArray ) {
      sum += i
    }
  </script>
</scriptTask>

The value of the scriptFormat attribute must be a name that is compatible with the JSR-223 (scripting for the Java platform). By default, JavaScript is included in every JDK and as such doesn’t need any additional JAR files. If you want to use another (JSR-223 compatible) scripting engine, it is sufficient to add the corresponding JAR to the classpath and use the appropriate name. For example, the Flowable unit tests often use Groovy because the syntax is similar to that of Java.

Do note that the Groovy scripting engine is bundled with the groovy-all JAR. Before Groovy version 2.0, the scripting engine was part of the regular Groovy JAR. As such, one must now add the following dependency:

<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>2.x.x</version>
</dependency>
Variables in scripts

All process variables that are accessible through the execution that arrives in the script task can be used within the script. In the example, the script variable 'inputArray' is in fact a process variable (an array of integers).

<script>
    sum = 0
    for ( i in inputArray ) {
      sum += i
    }
</script>

It’s also possible to set process variables in a script, simply by calling execution.setVariable("variableName", variableValue). By default, no variables are stored automatically (Note: in some older releases this was the case!). It’s possible to automatically store any variable defined in the script (for example, sum in the example above) by setting the property autoStoreVariables on the scriptTask to true. However, the best practice is not to do this and use an explicit execution.setVariable() call, as with some recent versions of the JDK, auto storing of variables does not work for some scripting languages. See this link for more details.

<scriptTask id="script" scriptFormat="JavaScript" flowable:autoStoreVariables="false">

The default for this parameter is false, meaning that if the parameter is omitted from the script task definition, all the declared variables will only exist during the duration of the script.

Here’s an example of how to set a variable in a script:

<script>
    def scriptVar = "test123"
    execution.setVariable("myVar", scriptVar)
</script>

Note: the following names are reserved and cannot be used as variable names: out, out:print, lang:import, context, elcontext.

Script results

The return value of a script task can be assigned to an already existing, or to a new process variable, by specifying the process variable name as a literal value for the 'flowable:resultVariable' attribute of a script task definition. Any existing value for a specific process variable will be overwritten by the result value of the script execution. When a result variable name is not specified, the script result value gets ignored.

<scriptTask id="theScriptTask" name="Execute script" scriptFormat="juel" flowable:resultVariable="myVar">
  <script>#{echo}</script>
</scriptTask>

In the above example, the result of the script execution (the value of the resolved expression '#{echo}') is set to the process variable named 'myVar' after the script completes.

Security

When using JavaScript as the scripting language, it is also possible to use secure scripting. See the secure scripting section.

8.5.3. Java Service Task

Description

A Java service task is used to invoke an external Java class.

Graphical Notation

A service task is visualized as a rounded rectangle with a small gear icon in the top-left corner.

bpmn.java.service.task
XML representation

There are four ways of declaring how to invoke Java logic:

  • Specifying a class that implements JavaDelegate or ActivityBehavior

  • Evaluating an expression that resolves to a delegation object

  • Invoking a method expression

  • Evaluating a value expression

To specify a class that is called during process execution, the fully qualified classname needs to be provided by the flowable:class attribute.

<serviceTask id="javaService"
             name="My Java Service Task"
             flowable:class="org.flowable.MyJavaDelegate" />

See the implementation section for more details on how to use such a class.

It’s also possible to use an expression that resolves to an object. This object must follow the same rules as objects that are created when the flowable:class attribute is used (see further).

<serviceTask id="serviceTask" flowable:delegateExpression="${delegateExpressionBean}" />

Here, the delegateExpressionBean is a bean that implements the JavaDelegate interface, defined in, for example, the Spring container.

To specify a UEL method expression that should be evaluated, use the attribute flowable:expression.

<serviceTask id="javaService"
             name="My Java Service Task"
             flowable:expression="#{printer.printMessage()}" />

The method printMessage (without parameters) will be called on the object named printer.

It’s also possible to pass parameters with a method used in the expression.

<serviceTask id="javaService"
             name="My Java Service Task"
             flowable:expression="#{printer.printMessage(execution, myVar)}" />

The method printMessage will be called on the object named printer. The first parameter passed is the DelegateExecution, which is available in the expression context by default under the name execution. The second parameter passed is the value of the variable with name myVar in the current execution.
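A bean backing these expressions could look like the following sketch (the Printer class and its method signatures are assumptions derived from the expressions above; the parameter type of the variable value is also an assumption):

public class Printer {

    public void printMessage() {
        System.out.println("hello world");
    }

    public void printMessage(DelegateExecution execution, String value) {
        // both the current execution and the value of the process variable myVar are passed in
        System.out.println("hello from " + execution.getProcessInstanceId() + ": " + value);
    }
}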

To specify a UEL value expression that should be evaluated, use the attribute flowable:expression.

<serviceTask id="javaService"
             name="My Java Service Task"
             flowable:expression="#{split.ready}" />

The getter method of the property ready, getReady (without parameters), will be called on the bean named split. The named objects are resolved in the execution’s process variables and (if applicable) in the Spring context.

Implementation

To implement a class that can be called during process execution, the class needs to implement the org.flowable.engine.delegate.JavaDelegate interface and provide the required logic in the execute method. When process execution arrives at this particular step, it will execute the logic defined in that method and leave the activity in the default BPMN 2.0 way.

Let’s create, for example, a Java class that can be used to change a process variable String to uppercase. This class needs to implement the org.flowable.engine.delegate.JavaDelegate interface, which requires us to implement the execute(DelegateExecution) method. It’s this operation that will be called by the engine and which needs to contain the business logic. Process instance information, such as process variables, can be accessed and manipulated through the DelegateExecution interface (click on the link for a detailed Javadoc of its operations).

public class ToUppercase implements JavaDelegate {

  public void execute(DelegateExecution execution) {
    String var = (String) execution.getVariable("input");
    var = var.toUpperCase();
    execution.setVariable("input", var);
  }

}

Note: there will be only one instance of the Java class created for the serviceTask on which it is defined. All process instances share the same class instance that will be used to call execute(DelegateExecution). This means that the class must not use any member variables and must be thread-safe, as it can be executed simultaneously from different threads. This also influences the way Field injection is handled.

The classes that are referenced in the process definition (using flowable:class) are NOT instantiated during deployment. Only when a process execution arrives for the first time at the point in the process where the class is used will an instance of that class be created. If the class cannot be found, a FlowableException will be thrown. The reason for this is that the environment (and more specifically, the classpath) when you are deploying is often different from the actual runtime environment. For example, when using Ant or the business archive upload in the Flowable app to deploy processes, the classpath will not automatically contain the referenced classes.

[INTERNAL: non-public implementation classes] It is also possible to provide a class that implements the org.flowable.engine.impl.delegate.ActivityBehavior interface. Implementations then have access to more powerful engine functionality, for example, to influence the control flow of the process. Note however that this is not a very good practice and should be avoided as much as possible. So, it is advisable to use the ActivityBehavior interface only for advanced use cases and if you know exactly what you’re doing.

Field Injection

It’s possible to inject values into the fields of the delegated classes. The following types of injection are supported:

  • Fixed string values

  • Expressions

If available, the value is injected through a public setter method on your delegated class, following the Java Bean naming conventions (for example, field firstName has setter setFirstName(…​)). If no setter is available for that field, the value of the private member will be set on the delegate. SecurityManagers in some environments don’t allow modification of private fields, so it’s safer to expose a public setter-method for the fields you want to have injected.

Regardless of the type of value declared in the process-definition, the type of the setter/private field on the injection target should always be org.flowable.engine.delegate.Expression. When the expression is resolved, it can be cast to the appropriate type.

Field injection is supported when using the 'flowable:class' attribute. Field injection is also possible when using the flowable:delegateExpression attribute, however special rules with regards to thread-safety apply (see next section).

The following code snippet shows how to inject a constant value into a field declared on the class. Note that we need to declare an extensionElements XML element before the actual field injection declarations, which is a requirement of the BPMN 2.0 XML Schema.

<serviceTask id="javaService"
             name="Java service invocation"
             flowable:class="org.flowable.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
  <extensionElements>
    <flowable:field name="text" stringValue="Hello World" />
  </extensionElements>
</serviceTask>

The class ToUpperCaseFieldInjected has a field text that is of type org.flowable.engine.delegate.Expression. When calling text.getValue(execution), the configured string value Hello World will be returned:

public class ToUpperCaseFieldInjected implements JavaDelegate {

  private Expression text;

  public void execute(DelegateExecution execution) {
    execution.setVariable("var", ((String)text.getValue(execution)).toUpperCase());
  }

}

Alternatively, for long texts (for example, an inline e-mail) the 'flowable:string' sub element can be used:

<serviceTask id="javaService"
             name="Java service invocation"
             flowable:class="org.flowable.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
  <extensionElements>
    <flowable:field name="text">
      <flowable:string>
        This is a long string with a lot of words and potentially way longer even!
      </flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>

To inject values that are dynamically resolved at runtime, expressions can be used. Those expressions can use process variables or Spring defined beans (if Spring is used). As noted in Service Task Implementation, an instance of the Java class is shared among all process-instances in a service task when using the flowable:class attribute. To have dynamic injection of values in fields, you can inject value and method expressions in a org.flowable.engine.delegate.Expression that can be evaluated/invoked using the DelegateExecution passed in the execute method.

The example class below uses the injected expressions and resolves them using the current DelegateExecution. A genderBean method call is used while passing the gender variable. Full code and test can be found in org.flowable.examples.bpmn.servicetask.JavaServiceTaskTest.testExpressionFieldInjection

<serviceTask id="javaService" name="Java service invocation"
             flowable:class="org.flowable.examples.bpmn.servicetask.ReverseStringsFieldInjected">
  <extensionElements>
    <flowable:field name="text1">
      <flowable:expression>${genderBean.getGenderString(gender)}</flowable:expression>
    </flowable:field>
    <flowable:field name="text2">
      <flowable:expression>Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}</flowable:expression>
    </flowable:field>
  </extensionElements>
</serviceTask>
public class ReverseStringsFieldInjected implements JavaDelegate {

  private Expression text1;
  private Expression text2;

  public void execute(DelegateExecution execution) {
    String value1 = (String) text1.getValue(execution);
    execution.setVariable("var1", new StringBuffer(value1).reverse().toString());

    String value2 = (String) text2.getValue(execution);
    execution.setVariable("var2", new StringBuffer(value2).reverse().toString());
  }

}

Alternatively, you can also set the expressions as an attribute instead of a child-element, to make the XML less verbose.

<flowable:field name="text1" expression="${genderBean.getGenderString(gender)}" />
<flowable:field name="text2" expression="Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}" />
Field injection and thread safety

In general, using service tasks with Java delegates and field injection is thread-safe. However, there are a few situations where thread-safety is not guaranteed, depending on the setup or environment Flowable is running in.

With the flowable:class attribute, using field injection is always thread safe. For each service task that references a certain class, a new instance will be instantiated and fields will be injected once when the instance is created. Reusing the same class multiple times in different tasks or process definitions is no problem.

When using the flowable:expression attribute, use of field injection is not possible. Parameters are passed via method calls and these are always thread-safe.

When using the flowable:delegateExpression attribute, the thread-safety of the delegate instance will depend on how the expression is resolved. If the delegate expression is reused in various tasks or process definitions, and the expression always returns the same instance, using field injection is not thread-safe. Let’s look at a few examples to clarify.

Suppose the expression is ${factory.createDelegate(someVariable)}, where factory is a Java bean known to the engine (for example, a Spring bean when using the Spring integration) that creates a new instance each time the expression is resolved. When using field injection in this case, there is no problem with regards to thread-safety: each time the expression is resolved, the fields are injected in this new instance.

However, suppose the expression is ${someJavaDelegateBean} that resolves to an implementation of the JavaDelegate class and we’re running in an environment that creates singleton instances of each bean (such as Spring, but many others too). When using this expression in different tasks or process definitions, the expression will always be resolved to the same instance. In this case, using field injection is not thread-safe. For example:

<serviceTask id="serviceTask1" flowable:delegateExpression="${someJavaDelegateBean}">
  <extensionElements>
    <flowable:field name="someField" expression="${input * 2}"/>
  </extensionElements>
</serviceTask>

<!-- other process definition elements -->

<serviceTask id="serviceTask2" flowable:delegateExpression="${someJavaDelegateBean}">
  <extensionElements>
    <flowable:field name="someField" expression="${input * 2000}"/>
  </extensionElements>
</serviceTask>

This example snippet has two service tasks that use the same delegate expression, but inject different values for the Expression field. If the expression resolves to the same instance, there can be race conditions in concurrent scenarios when it comes to injecting the field someField when the processes are executed.

The easiest solution to solve this is to either:

  • rewrite the Java delegate to use an expression and pass the required data to the delegate via method arguments.

  • return a new instance of the delegate class each time the delegate expression is resolved. For example, when using Spring, this means that the scope of the bean must be set to prototype (such as by adding the @Scope(SCOPE_PROTOTYPE) annotation to the delegate class), as sketched below.
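A minimal sketch of the second option when using Spring (the bean name and the field are illustrative; the annotations come from Spring's core API):

@Component("someJavaDelegateBean")
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class SomeJavaDelegateBean implements JavaDelegate {

  // injected into each new instance, so into each resolution of the delegate expression
  private Expression someField;

  @Override
  public void execute(DelegateExecution execution) {
    Number value = (Number) someField.getValue(execution);
    execution.setVariable("result", value.intValue());
  }
}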

As of Flowable v5.22, the process engine configuration can be set to disable the use of field injection on delegate expressions, by setting the value of the delegateExpressionFieldInjectionMode property (which takes one of the values in the org.flowable.engine.impl.cfg.DelegateExpressionFieldInjectionMode enum).

Following settings are possible:

  • DISABLED : fully disables field injection when using delegate expressions. No field injection will be attempted. This is the safest mode when it comes to thread-safety.

  • COMPATIBILITY: in this mode, the behavior will be exactly as it was before v5.21: field injection is possible when using delegate expressions and an exception will be thrown when the fields are not defined on the delegate class. This is, of course, the least safe mode with regards to thread-safety, but it can be necessary for backwards compatibility or can be used safely when the delegate expression is used only on one task in a set of process definitions (and thus no concurrent race conditions can happen).

  • MIXED: Allows injection when using delegateExpressions, but will not throw an exception when the fields are not defined on the delegate. This allows for mixed behaviors, where some delegates have injection (for example, because they are not singletons) and some don’t.

  • The default mode for Flowable version 5.x is COMPATIBILITY.

  • The default mode for Flowable version 6.x is MIXED.

As an example, suppose that we’re using MIXED mode and we’re using Spring integration. Suppose that we have the following beans in the Spring configuration:

<bean id="singletonDelegateExpressionBean"
      class="org.flowable.spring.test.fieldinjection.SingletonDelegateExpressionBean" />

<bean id="prototypeDelegateExpressionBean"
      class="org.flowable.spring.test.fieldinjection.PrototypeDelegateExpressionBean"
      scope="prototype" />

The first bean is a regular Spring bean and thus a singleton. The second one has prototype as scope, and the Spring container will return a new instance every time the bean is requested.

Given the following process definition:

<serviceTask id="serviceTask1" flowable:delegateExpression="${prototypeDelegateExpressionBean}">
  <extensionElements>
    <flowable:field name="fieldA" expression="${input * 2}"/>
    <flowable:field name="fieldB" expression="${1 + 1}"/>
    <flowable:field name="resultVariableName" stringValue="resultServiceTask1"/>
  </extensionElements>
</serviceTask>

<serviceTask id="serviceTask2" flowable:delegateExpression="${prototypeDelegateExpressionBean}">
  <extensionElements>
    <flowable:field name="fieldA" expression="${123}"/>
    <flowable:field name="fieldB" expression="${456}"/>
    <flowable:field name="resultVariableName" stringValue="resultServiceTask2"/>
  </extensionElements>
</serviceTask>

<serviceTask id="serviceTask3" flowable:delegateExpression="${singletonDelegateExpressionBean}">
  <extensionElements>
    <flowable:field name="fieldA" expression="${input * 2}"/>
    <flowable:field name="fieldB" expression="${1 + 1}"/>
    <flowable:field name="resultVariableName" stringValue="resultServiceTask1"/>
  </extensionElements>
</serviceTask>

<serviceTask id="serviceTask4" flowable:delegateExpression="${singletonDelegateExpressionBean}">
  <extensionElements>
    <flowable:field name="fieldA" expression="${123}"/>
    <flowable:field name="fieldB" expression="${456}"/>
    <flowable:field name="resultVariableName" stringValue="resultServiceTask2"/>
  </extensionElements>
</serviceTask>

We’ve got four service tasks, where the first and the second use the ${prototypeDelegateExpressionBean} delegate expression and the third and fourth use the ${singletonDelegateExpressionBean} delegate expression.

Let’s look at the prototype bean first:

public class PrototypeDelegateExpressionBean implements JavaDelegate {

  public static AtomicInteger INSTANCE_COUNT = new AtomicInteger(0);

  private Expression fieldA;
  private Expression fieldB;
  private Expression resultVariableName;

  public PrototypeDelegateExpressionBean() {
    INSTANCE_COUNT.incrementAndGet();
  }

  @Override
  public void execute(DelegateExecution execution) {

    Number fieldAValue = (Number) fieldA.getValue(execution);
    Number fieldValueB = (Number) fieldB.getValue(execution);

    int result = fieldAValue.intValue() + fieldValueB.intValue();
    execution.setVariable(resultVariableName.getValue(execution).toString(), result);
  }

}

When we check the INSTANCE_COUNT after running a process instance of the process definition above, we’ll get two back, as a new instance is created every time ${prototypeDelegateExpressionBean} is resolved. Fields can be injected without any problem here and we can see the three Expression member fields here.

The singleton bean, however, looks slightly different:

public class SingletonDelegateExpressionBean implements JavaDelegate {

  public static AtomicInteger INSTANCE_COUNT = new AtomicInteger(0);

  public SingletonDelegateExpressionBean() {
    INSTANCE_COUNT.incrementAndGet();
  }

  @Override
  public void execute(DelegateExecution execution) {

    Expression fieldAExpression = DelegateHelper.getFieldExpression(execution, "fieldA");
    Number fieldA = (Number) fieldAExpression.getValue(execution);

    Expression fieldBExpression = DelegateHelper.getFieldExpression(execution, "fieldB");
    Number fieldB = (Number) fieldBExpression.getValue(execution);

    int result = fieldA.intValue() + fieldB.intValue();

    String resultVariableName = DelegateHelper.getFieldExpression(execution, "resultVariableName")
        .getValue(execution).toString();
    execution.setVariable(resultVariableName, result);
  }

}

The INSTANCE_COUNT will always be one here, as it is a singleton. In this delegate, there are no Expression member fields. This is possible because we’re running in MIXED mode. In COMPATIBILITY mode, this would throw an exception, as it expects the member fields to be there. DISABLED mode would also work for this bean, but it would disallow the use of the prototype bean above that does use field injection.

In this delegate code, the org.flowable.engine.delegate.DelegateHelper class is used, which has some useful utility methods to execute the same logic, but in a thread-safe way when the delegate is a singleton. Instead of injecting the Expression, it is fetched via the getFieldExpression method. This means that when it comes to the service task XML, the fields are defined exactly the same as for the singleton bean. If you look at the XML snippet above, you can see they are equal in definition and only the implementation logic differs.

Technical note: the getFieldExpression will introspect the BpmnModel and create the Expression on the fly when the method is executed, making it thread-safe.

  • For Flowable v5.x, the DelegateHelper cannot be used for an ExecutionListener or TaskListener (due to an architectural flaw). To make thread-safe instances of those listeners, use either an expression or make sure a new instance is created every time the delegate expression is resolved.

  • For Flowable V6.x the DelegateHelper does work in ExecutionListener and TaskListener implementations. For example, in V6.x, the following code can be written, using the DelegateHelper:

<extensionElements>
  <flowable:executionListener
      delegateExpression="${testExecutionListener}" event="start">
    <flowable:field name="input" expression="${startValue}" />
    <flowable:field name="resultVar" stringValue="processStartValue" />
  </flowable:executionListener>
</extensionElements>

Where testExecutionListener resolves to an instance implementing the ExecutionListener interface:

@Component("testExecutionListener")
public class TestExecutionListener implements ExecutionListener {

  @Override
  public void notify(DelegateExecution execution) {
    Expression inputExpression = DelegateHelper.getFieldExpression(execution, "input");
    Number input = (Number) inputExpression.getValue(execution);

    int result = input.intValue() * 100;

    Expression resultVarExpression = DelegateHelper.getFieldExpression(execution, "resultVar");
    execution.setVariable(resultVarExpression.getValue(execution).toString(), result);
  }

}
Service task results

The return value of a service execution (for service task using expression only) can be assigned to an existing or to a new process variable by specifying the process variable name as a literal value for the 'flowable:resultVariable' attribute of a service task definition. Any existing value for a specific process variable will be overwritten by the result value of the service execution. When a result variable name is not specified, the service execution result value gets ignored.

<serviceTask id="aMethodExpressionServiceTask"
             flowable:expression="#{myService.doSomething()}"
             flowable:resultVariable="myVar" />

In the example above, the result of the service execution (the return value of the doSomething() method invoked on the object available under the name myService, either as a process variable or as a Spring bean) is set to the process variable named myVar after the service execution completes.

Handling exceptions

When custom logic is executed, it is often necessary to catch certain business exceptions and handle them inside the surrounding process. Flowable provides different options to do that.

Throwing BPMN Errors

It is possible to throw BPMN Errors from user code inside Service Tasks or Script Tasks. In order to do this, a special FlowableException called BpmnError can be thrown in JavaDelegates, scripts, expressions and delegate expressions. The engine will catch this exception and forward it to an appropriate error handler, for example, a Boundary Error Event or an Error Event Sub-Process.

public class ThrowBpmnErrorDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    try {
      executeBusinessLogic();
    } catch (BusinessException e) {
      throw new BpmnError("BusinessExceptionOccurred");
    }
  }

}

The constructor argument is an error code, which will be used to determine the error handler that is responsible for the error. See Boundary Error Event for information on how to catch a BPMN Error.

This mechanism should be used only for business faults that will be handled by a Boundary Error Event or Error Event Sub-Process modeled in the process definition. Technical errors should be represented by other exception types and are not usually handled inside a process.

Exception mapping

It’s also possible to directly map a Java exception to business exception by using the mapException extension. Single mapping is the simplest form:

<serviceTask id="servicetask1" name="Service Task" flowable:class="...">
  <extensionElements>
    <flowable:mapException
          errorCode="myErrorCode1">org.flowable.SomeException</flowable:mapException>
  </extensionElements>
</serviceTask>

In the above code, if an instance of org.flowable.SomeException is thrown in the service task, it will be caught and converted to a BPMN error with the given errorCode. From this point on, it will be handled exactly like a normal BPMN error. Any other exception will be treated as if there were no mapping in place and will be propagated to the API caller.

One can map all the child exceptions of a certain exception in a single line by using the includeChildExceptions attribute.

<serviceTask id="servicetask1" name="Service Task" flowable:class="...">
  <extensionElements>
    <flowable:mapException errorCode="myErrorCode1"
           includeChildExceptions="true">org.flowable.SomeException</flowable:mapException>
  </extensionElements>
</serviceTask>

The above code will cause Flowable to convert any direct or indirect descendant of SomeException to a BPMN error with the given error code. includeChildExceptions is considered "false" when not given.

The most generic mapping is a default map, which is a map with no class. It will match any Java exception:

<serviceTask id="servicetask1" name="Service Task" flowable:class="...">
  <extensionElements>
    <flowable:mapException errorCode="myErrorCode1"/>
  </extensionElements>
</serviceTask>

The mappings are checked in order, from top to bottom, and the first match found will be followed, except for the default map. The default map is selected only after all maps have been checked unsuccessfully. Only the first map with no class will be considered as a default map. includeChildExceptions is ignored with a default map.

Exception Sequence Flow

Another option is to route process execution through a different path when some exception occurs. The following example shows how this is done.

<serviceTask id="javaService"
             name="Java service invocation"
             flowable:class="org.flowable.ThrowsExceptionBehavior">
</serviceTask>

<sequenceFlow id="no-exception" sourceRef="javaService" targetRef="theEnd" />
<sequenceFlow id="exception" sourceRef="javaService" targetRef="fixException" />

Here, the service task has two outgoing sequence flows, named exception and no-exception. The sequence flow ID will be used to direct process flow if there’s an exception:

public class ThrowsExceptionBehavior implements ActivityBehavior {

  public void execute(DelegateExecution execution) {
    String var = (String) execution.getVariable("var");

    String sequenceFlowToTake = null;
    try {
      executeLogic(var);
      sequenceFlowToTake = "no-exception";
    } catch (Exception e) {
      sequenceFlowToTake = "exception";
    }
    DelegateHelper.leaveDelegate(execution, sequenceFlowToTake);
  }

}
Using a Flowable service from within a JavaDelegate

For some use cases, it might be necessary to use the Flowable services from within a Java service task (for example, starting a process instance through the RuntimeService, if the callActivity doesn’t suit your needs).

public class StartProcessInstanceTestDelegate implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    RuntimeService runtimeService = Context.getProcessEngineConfiguration().getRuntimeService();
    runtimeService.startProcessInstanceByKey("myProcess");
  }

}

All of the Flowable service APIs are available through this interface.

All data changes that occur as an effect of using these API calls will be part of the current transaction. This also works in environments with dependency injection, such as Spring and CDI, with or without a JTA-enabled datasource. For example, the following snippet of code will do the same as the snippet above, but now the RuntimeService is injected rather than being fetched through the Context class.

@Component("startProcessInstanceDelegate")
public class StartProcessInstanceTestDelegateWithInjection {

  @Autowired
  private RuntimeService runtimeService;

  public void startProcess() {
    runtimeService.startProcessInstanceByKey("oneTaskProcess");
  }

}

Important technical note: because the service call is being done as part of the current transaction, any data that was produced or altered before the service task is executed is not yet flushed to the database. All API calls work on the database data, which means that these uncommitted changes are not visible within the API call of the service task.

8.5.4. Web Service Task

Description

A Web Service task is used to synchronously invoke an external Web service.

Graphical Notation

A Web Service task is visualized in the same way as a Java service task.

bpmn.web.service.task
XML representation

To use a Web service we need to import its operations and complex types. This can be done automatically by using the import tag pointing to the WSDL of the Web service:

<import importType="http://schemas.xmlsoap.org/wsdl/"
        location="http://localhost:63081/counter?wsdl"
        namespace="http://webservice.flowable.org/" />

The previous declaration tells Flowable to import the definitions, but it doesn’t create the item definitions and messages for you. Let’s suppose we want to invoke a specific method called prettyPrint, therefore we will need to create the corresponding message and item definitions for the request and response messages:

<message id="prettyPrintCountRequestMessage" itemRef="tns:prettyPrintCountRequestItem" />
<message id="prettyPrintCountResponseMessage" itemRef="tns:prettyPrintCountResponseItem" />

<itemDefinition id="prettyPrintCountRequestItem" structureRef="counter:prettyPrintCount" />
<itemDefinition id="prettyPrintCountResponseItem" structureRef="counter:prettyPrintCountResponse" />

Before declaring the service task, we have to define the BPMN interfaces and operations that actually reference the Web service ones. Basically, we define an interface and the required operations. For each operation, we reuse the previously defined messages for input and output. For example, the following declaration defines the counter interface and the prettyPrintCountOperation operation:

<interface name="Counter Interface" implementationRef="counter:Counter">
  <operation id="prettyPrintCountOperation" name="prettyPrintCount Operation"
             implementationRef="counter:prettyPrintCount">
    <inMessageRef>tns:prettyPrintCountRequestMessage</inMessageRef>
    <outMessageRef>tns:prettyPrintCountResponseMessage</outMessageRef>
  </operation>
</interface>

Then we can declare a Web Service Task by using the ##WebService implementation and a reference to the Web service operation.

<serviceTask id="webService"
             name="Web service invocation"
             implementation="##WebService"
             operationRef="tns:prettyPrintCountOperation">
Web Service Task IO Specification

Unless we are using the simplistic approach for data input and output associations (see below), each Web Service Task needs to declare an IO Specification that specifies the inputs and outputs of the task. The approach is pretty straightforward and BPMN 2.0 compliant. For our prettyPrint example, we define the input and output sets according to the previously declared item definitions:

<ioSpecification>
  <dataInput itemSubjectRef="tns:prettyPrintCountRequestItem" id="dataInputOfServiceTask" />
  <dataOutput itemSubjectRef="tns:prettyPrintCountResponseItem" id="dataOutputOfServiceTask" />
  <inputSet>
    <dataInputRefs>dataInputOfServiceTask</dataInputRefs>
  </inputSet>
  <outputSet>
    <dataOutputRefs>dataOutputOfServiceTask</dataOutputRefs>
  </outputSet>
</ioSpecification>
Web Service Task data input associations

There are 2 ways of specifying data input associations:

  • Using expressions

  • Using the simplistic approach

To specify the data input association using expressions, we need to define the source and target items and specify the corresponding assignments between the fields of each item. In the following example we assign prefix and suffix fields for the items:

<dataInputAssociation>
  <sourceRef>dataInputOfProcess</sourceRef>
  <targetRef>dataInputOfServiceTask</targetRef>
  <assignment>
    <from>${dataInputOfProcess.prefix}</from>
    <to>${dataInputOfServiceTask.prefix}</to>
  </assignment>
  <assignment>
    <from>${dataInputOfProcess.suffix}</from>
    <to>${dataInputOfServiceTask.suffix}</to>
  </assignment>
</dataInputAssociation>

On the other hand, we can use the simplistic approach, which is much more straightforward. The sourceRef element is a Flowable variable name and the targetRef element is a property of the item definition. In the following example, we assign the prefix field the value of the variable PrefixVariable, and the suffix field the value of the variable SuffixVariable.

<dataInputAssociation>
  <sourceRef>PrefixVariable</sourceRef>
  <targetRef>prefix</targetRef>
</dataInputAssociation>
<dataInputAssociation>
  <sourceRef>SuffixVariable</sourceRef>
  <targetRef>suffix</targetRef>
</dataInputAssociation>
Web Service Task data output associations

There are 2 ways of specifying data output associations:

  • Using expressions

  • Using the simplistic approach

To specify the data output association using expressions, we need to define the target variable and the source expression. The approach is pretty straightforward and similar to data input associations:

<dataOutputAssociation>
  <targetRef>dataOutputOfProcess</targetRef>
  <transformation>${dataOutputOfServiceTask.prettyPrint}</transformation>
</dataOutputAssociation>

Alternatively, we can use the simplistic approach, which is much more straightforward. The sourceRef element is a property of the item definition and the targetRef element is a Flowable variable name:

<dataOutputAssociation>
  <sourceRef>prettyPrint</sourceRef>
  <targetRef>OutputVariable</targetRef>
</dataOutputAssociation>

8.5.5. Business Rule Task

Description

A Business Rule task is used to synchronously execute one or more rules. Flowable uses Drools Expert, the Drools rule engine, to execute business rules. Currently, the .drl files containing the business rules have to be deployed together with the process definition that defines a business rule task to execute those rules. This means that all .drl files that are used in a process have to be packaged in the process BAR file, just like task forms and so on. For more information about creating business rules for Drools Expert, please refer to the Drools documentation at JBoss Drools.

If you want to plug in your own implementation of the rule task, for example, because you want to use Drools differently or you want to use a completely different rule engine, then you can use the class or expression attribute on the BusinessRuleTask and it will behave exactly like a ServiceTask.

Graphical Notation

A Business Rule task is visualized with a table icon.

bpmn.business.rule.task
XML representation

To execute one or more business rules that are deployed in the same BAR file as the process definition, we need to define the input and result variables. For the input variable definition, a comma-separated list of process variables can be defined. The output variable definition can only contain one variable name, which will be used to store the output objects of the executed business rules in a process variable. Note that the result variable will contain a List of objects. If no result variable name is specified, the default name org.flowable.engine.rules.OUTPUT is used.

The following business rule task executes all business rules deployed with the process definition:

<process id="simpleBusinessRuleProcess">

  <startEvent id="theStart" />
  <sequenceFlow sourceRef="theStart" targetRef="businessRuleTask" />

  <businessRuleTask id="businessRuleTask" flowable:ruleVariablesInput="${order}"
      flowable:resultVariable="rulesOutput" />

  <sequenceFlow sourceRef="businessRuleTask" targetRef="theEnd" />
  <endEvent id="theEnd" />

</process>

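Because the result variable holds a List of objects produced by the rules, downstream code (for example, a JavaDelegate or a unit test) can read it back through the regular variable APIs. The following is only a minimal sketch: it assumes the simpleBusinessRuleProcess definition and the rulesOutput result variable from the example above, and the Order fact class is a placeholder for whatever your rules expect.

// Sketch: start the process above and read the rule results afterwards.
// The Order class is a placeholder for the fact object referenced by ${order}.
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("order", new Order());
ProcessInstance processInstance = runtimeService
    .startProcessInstanceByKey("simpleBusinessRuleProcess", variables);

// "rulesOutput" is the result variable configured on the business rule task; it holds a List
List<?> ruleResults = (List<?>) runtimeService.getVariable(processInstance.getId(), "rulesOutput");
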
The business rule task can also be configured to execute only a defined set of rules from the deployed .drl files. A list of rule names separated by a comma must be specified for this.

<businessRuleTask id="businessRuleTask" flowable:ruleVariablesInput="${order}"
    flowable:rules="rule1, rule2" />

In this case only rule1 and rule2 are executed.

You can also define a list of rules that should be excluded from execution.

<businessRuleTask id="businessRuleTask" flowable:ruleVariablesInput="${order}"
    flowable:rules="rule1, rule2" exclude="true" />

In this case all rules deployed in the same BAR file as the process definition will be executed, except for rule1 and rule2.

As mentioned earlier, another option is to hook in your own implementation of the BusinessRuleTask:

<businessRuleTask id="businessRuleTask" flowable:class="${MyRuleServiceDelegate}" />

Now the BusinessRuleTask behaves exactly like a ServiceTask, but still keeps the BusinessRuleTask icon to visualize that we are doing business rule processing here.

8.5.6. Email Task

Flowable allows you to enhance business processes with automatic mail service tasks that send e-mails to one or more recipients, including support for cc, bcc, HTML content, and so on. Note that the mail task is not an official task of the BPMN 2.0 spec (and doesn’t have a dedicated icon as a consequence). Hence, in Flowable the mail task is implemented as a dedicated service task.

Mail server configuration

The Flowable engine sends e-mails through an external mail server with SMTP capabilities. To actually send e-mails, the engine needs to know how to reach the mail server. The following properties can be set in the flowable.cfg.xml configuration file (a configuration sketch follows the property list below):

  • mailServerHost (optional): the hostname of your mail server (for example, mail.mycorp.com). Default is localhost.

  • mailServerPort (required if not on the default port): the port for SMTP traffic on the mail server. The default is 25.

  • mailServerDefaultFrom (optional): the default e-mail address of the sender of e-mails, when none is provided by the user. By default, this is flowable@flowable.org.

  • mailServerUsername (if applicable for your server): some mail servers require credentials for sending e-mail. Not set by default.

  • mailServerPassword (if applicable for your server): some mail servers require credentials for sending e-mail. Not set by default.

  • mailServerUseSSL (if applicable for your server): some mail servers require SSL communication. By default set to false.

  • mailServerUseTLS (if applicable for your server): some mail servers (for instance, Gmail) require TLS communication. By default set to false.
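
As referenced above, the same properties can also be applied when the engine is configured programmatically instead of through flowable.cfg.xml. The snippet below is only a sketch: the host, port and credentials are placeholder values, not recommended settings.

// Sketch only: mail server settings applied programmatically on the engine configuration;
// host, port and credentials below are placeholders for illustration.
ProcessEngineConfiguration config = ProcessEngineConfiguration
    .createStandaloneInMemProcessEngineConfiguration();
config.setMailServerHost("mail.mycorp.com");
config.setMailServerPort(587);
config.setMailServerUseTLS(true);
config.setMailServerUsername("flowable");
config.setMailServerPassword("secret");
config.setMailServerDefaultFrom("no-reply@mycorp.com");
ProcessEngine processEngine = config.buildProcessEngine();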

Defining an Email Task

The Email task is implemented as a dedicated Service Task and is defined by setting 'mail' for the type of the service task.

<serviceTask id="sendMail" flowable:type="mail">

The Email task is configured by field injection. All the values for these properties can contain EL expressions, which are resolved at runtime during process execution. The following properties can be set:

  • to (required): the recipients of the e-mail. Multiple recipients are defined in a comma-separated list.

  • from (optional): the sender e-mail address. If not provided, the default configured from address is used.

  • subject (optional): the subject of the e-mail.

  • cc (optional): the cc’s of the e-mail. Multiple recipients are defined in a comma-separated list.

  • bcc (optional): the bcc’s of the e-mail. Multiple recipients are defined in a comma-separated list.

  • charset (optional): allows specification of the charset of the e-mail, which is necessary for many non-English languages.

  • html (optional): a piece of HTML that is the content of the e-mail.

  • text (optional): the content of the e-mail, in case one needs to send plain, non-rich e-mails. Can be used in combination with html for e-mail clients that don’t support rich content; the client can then fall back to this text-only alternative.

  • htmlVar (optional): the name of a process variable that holds the HTML content of the e-mail. The key difference between this and html is that this content will have expressions replaced before being sent by the mail task.

  • textVar (optional): the name of a process variable that holds the plain text content of the e-mail. The key difference between this and text is that this content will have expressions replaced before being sent by the mail task.

  • ignoreException (optional): whether a failure when handling the e-mail is ignored rather than throwing a FlowableException. By default, this is set to false.

  • exceptionVariableName (optional): when e-mail handling does not throw an exception because ignoreException = true, a variable with the given name is used to hold a failure message.

Example usage

The following XML snippet shows an example of using the Email Task.

<serviceTask id="sendMail" flowable:type="mail">
  <extensionElements>
    <flowable:field name="from" stringValue="order-shipping@thecompany.com" />
    <flowable:field name="to" expression="${recipient}" />
    <flowable:field name="subject" expression="Your order ${orderId} has been shipped" />
    <flowable:field name="html">
      <flowable:expression>
        <![CDATA[
          <html>
            <body>
              Hello ${male ? 'Mr.' : 'Mrs.' } ${recipientName},<br/><br/>

              As of ${now}, your order has been <b>processed and shipped</b>.<br/><br/>

              Kind regards,<br/>

              TheCompany.
            </body>
          </html>
        ]]>
      </flowable:expression>
    </flowable:field>
  </extensionElements>
</serviceTask>

8.5.7. Http Task

The Http task allows you to make HTTP requests, enhancing the integration features of Flowable. Note that the Http task is not an official task of the BPMN 2.0 spec (and doesn’t have a dedicated icon as a consequence). Hence, in Flowable, the Http task is implemented as a dedicated service task.

Http Client configuration

The Flowable engine makes Http requests through a configurable Http Client. The following properties can be set in the flowable.cfg.xml configuration file:

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  <!-- http client configurations -->
  <property name="httpClientConfig" ref="httpClientConfig"/>
</bean>

<bean id="httpClientConfig" class="org.flowable.engine.cfg.HttpClientConfig">
  <property name="connectTimeout" value="5000"/>
  <property name="socketTimeout" value="5000"/>
  <property name="connectionRequestTimeout" value="5000"/>
  <property name="requestRetryLimit" value="5"/>
</bean>
  • connectTimeout (optional): connection timeout in milliseconds. By default set to 5000.

  • socketTimeout (optional): socket timeout in milliseconds. By default set to 5000.

  • connectionRequestTimeout (optional): connection request timeout in milliseconds. By default set to 5000.

  • requestRetryLimit (optional): request retry limit (0 means do not retry). By default set to 3.

  • disableCertVerify (optional): flag to disable SSL certificate verification. By default set to false.

Defining Http Task

Http task is implemented as a dedicated Service Task and is defined by setting 'http' for the type of the service task.

<serviceTask id="httpGet" flowable:type="http">

It’s also possible to override the default Http Task behavior by providing a custom implementation. The custom implementation should extend org.flowable.http.HttpActivityBehavior and override the perform() method. The httpActivityBehaviorClass field should then be set in the task definition; the default value for this field is org.flowable.http.impl.HttpActivityBehaviorImpl. Currently, HttpActivityBehaviorImpl is based on the Apache Http Client. As the Apache Http Client can be customized in many ways, not all possible options are exposed in the Http Client config. To create a custom client, refer to the Http Client builder documentation.

<serviceTask id="httpGet" flowable:type="http">
  <extensionElements>
    <flowable:field name="httpActivityBehaviorClass">
        <flowable:string>
          <![CDATA[org.example.flowable.HttpActivityBehaviorCustomImpl]]>
        </flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>
Http Task configuration

The Http task is configured by field injection. All the values for these properties can contain EL expressions, which are resolved at runtime during process execution. The following properties can be set:

  • requestMethod (required): request method (GET, POST, PUT, DELETE).

  • requestUrl (required): request URL (for example, http://flowable.org).

  • requestHeaders (optional): line-separated HTTP request headers, for example:
    Content-Type: application/json
    Authorization: Basic aGFRlc3Q=

  • requestBody (optional): request body (for example, ${sampleBody}).

  • requestTimeout (optional): timeout in milliseconds for the request (for example, 5000). Default is 0, meaning no timeout. Please refer to the Http Client configuration for connection-related timeouts.

  • disallowRedirects (optional): flag to disallow HTTP redirects (for example, true). Default is false.

  • failStatusCodes (optional): comma-separated list of HTTP response status codes that fail the request, with the error thrown as a FlowableException. Examples: 400, 404, 500, 503 or 400, 5XX.

  • handleStatusCodes (optional): comma-separated list of status codes for which the task will throw a BpmnError. The error code of the BpmnError is HTTP<statuscode>; for example, a 404 status code results in error code HTTP404. 3XX status codes are thrown only if the disallowRedirects field is also set. Status codes in handleStatusCodes override those in failStatusCodes when they are set in both. Examples: 400, 404, 500, 503 or 3XX, 4XX, 5XX.

  • ignoreException (optional): flag for ignoring exceptions; the exception is caught and its message saved as <taskId>.errorMessage.

  • saveRequestVariables (optional): flag to save request variables. By default, only response-related variables are saved in the execution.

  • saveResponseParameters (optional): flag to save all response variables, including HTTP status, headers, and so on. By default, only the response body is saved in the execution.

  • resultVariablePrefix (optional): prefix for the execution variable names. If the prefix is not set, variables are saved with the name <taskId>.fieldName; for example, requestUrl is saved as task7.requestUrl for the task with id task7.

  • httpActivityBehaviorClass (optional): full class name of a custom extension of org.flowable.http.HttpActivityBehavior.

In addition to the provided fields, the following variables will be set on successful execution, depending on the saveResponseParameters flag.

  • responseProtocol (optional): HTTP version.

  • responseReason (optional): HTTP response reason phrase.

  • responseStatusCode (optional): HTTP response status code (for example, 200).

  • responseHeaders (optional): line-separated HTTP response headers, for example:
    Content-Type: application/json
    Content-Length: 777

  • responseBody (optional): response body as a string, if any.

  • errorMessage (optional): ignored error message, if any.

Result variables

Remember that all of the above execution variable names are prefixed by the evaluated value of resultVariablePrefix. For example, the response status code can be accessed in another activity as task7.responseStatusCode, where task7 is the id of the service task. To override this behavior, set resultVariablePrefix as required.
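
For example, a later JavaDelegate could read the stored response like this. This is only a minimal sketch: it assumes the task7 prefix from the description above, a DelegateExecution named execution, and a hypothetical orderConfirmation variable.

// Sketch: reading the Http task result variables in a later service task,
// assuming the prefix "task7" as in the example above.
Object statusCode = execution.getVariable("task7.responseStatusCode");
String responseBody = (String) execution.getVariable("task7.responseBody");
if (responseBody != null) {
    // "orderConfirmation" is a hypothetical process variable used for illustration
    execution.setVariable("orderConfirmation", responseBody);
}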

Example usage

The following XML snippet shows an example of using the Http Task.

<serviceTask id="httpGet" flowable:type="http">
  <extensionElements>
    <flowable:field name="requestMethod" stringValue="GET" />
    <flowable:field name="requestUrl" stringValue="http://flowable.org" />
    <flowable:field name="requestHeaders">
      <flowable:expression>
        <![CDATA[
          Accept: text/html
          Cache-Control: no-cache
        ]]>
      </flowable:expression>
    </flowable:field>
    <flowable:field name="requestTimeout">
      <flowable:expression>
        <![CDATA[
          ${requestTimeout}
        ]]>
      </flowable:expression>
    </flowable:field>
    <flowable:field name="resultVariablePrefix">
      <flowable:string>task7</flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>
Error handling

By default, the Http task throws a FlowableException when connection, IO or any unhandled exceptions occur, but it does not handle any redirect/client/server error HTTP status codes. We can configure the task to handle exceptions and HTTP status codes by setting the failStatusCodes and/or handleStatusCodes fields (refer to the Http Task configuration above). A BpmnError thrown by handleStatusCodes should be handled exactly like a normal BPMN exception, by having a corresponding boundary error handler. Below are a few examples of exception handling and retries for the Http task.

Fail on 400 and 5XX, async http task and retry with failedJobRetryTimeCycle
<serviceTask id="failGet" name="Fail test" flowable:async="true" flowable:type="http">
  <extensionElements>
    <flowable:field name="requestMethod">
      <flowable:string><![CDATA[GET]]></flowable:string>
    </flowable:field>
    <flowable:field name="requestUrl">
      <flowable:string><![CDATA[http://localhost:9798/api/fail]]></flowable:string>
    </flowable:field>
    <flowable:field name="failStatusCodes">
      <flowable:string><![CDATA[400, 5XX]]></flowable:string>
    </flowable:field>
    <flowable:failedJobRetryTimeCycle>R3/PT5S</flowable:failedJobRetryTimeCycle>
  </extensionElements>
</serviceTask>
Handle 400 as BpmnError
<serviceTask id="handleGet" name="HTTP Task" flowable:type="http">
  <extensionElements>
    <flowable:field name="requestMethod">
      <flowable:string><![CDATA[GET]]></flowable:string>
    </flowable:field>
    <flowable:field name="requestUrl">
      <flowable:string><![CDATA[http://localhost:9798/api/fail]]></flowable:string>
    </flowable:field>
    <flowable:field name="handleStatusCodes">
      <flowable:string><![CDATA[4XX]]></flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>

<boundaryEvent id="catch400" attachedToRef="handleGet">
  <errorEventDefinition errorRef="HTTP400"></errorEventDefinition>
</boundaryEvent>
Ignore exceptions.
<serviceTask id="ignoreTask" name="Fail test" flowable:type="http">
  <extensionElements>
    <flowable:field name="requestMethod">
      <flowable:string><![CDATA[GET]]></flowable:string>
    </flowable:field>
    <flowable:field name="requestUrl">
      <flowable:string><![CDATA[http://nohost:9798/api]]></flowable:string>
    </flowable:field>
    <flowable:field name="ignoreException">
      <flowable:string><![CDATA[true]]></flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>
Exception mapping

8.5.8. Mule Task

The mule task allows you to send messages to Mule, enhancing the integration features of Flowable. Note that the Mule task is not an official task of the BPMN 2.0 spec (and doesn’t have a dedicated icon as a consequence). Hence, in Flowable the mule task is implemented as a dedicated service task.

Defining a Mule Task

The Mule task is implemented as a dedicated Service Task and is defined by setting 'mule' for the type of the service task.

<serviceTask id="sendMule" flowable:type="mule">

The Mule task is configured by field injection. All the values for these properties can contain EL expressions, which are resolved at runtime during process execution. The following properties can be set:

  • endpointUrl (required): the Mule endpoint you want to invoke.

  • language (required): the language you want to use to evaluate the payloadExpression field.

  • payloadExpression (required): an expression that will be the message’s payload.

  • resultVariable (optional): the name of the variable that will store the result of the invocation.

Example usage

The following XML snippet shows an example of using the Mule Task.

<extensionElements>
  <flowable:field name="endpointUrl">
    <flowable:string>vm://in</flowable:string>
  </flowable:field>
  <flowable:field name="language">
    <flowable:string>juel</flowable:string>
  </flowable:field>
  <flowable:field name="payloadExpression">
    <flowable:string>"hi"</flowable:string>
  </flowable:field>
  <flowable:field name="resultVariable">
    <flowable:string>theVariable</flowable:string>
  </flowable:field>
</extensionElements>

8.5.9. Camel Task

The Camel task allows you to send messages to and receive messages from Camel, and thereby enhances the integration features of Flowable. Note that the Camel task is not an official task of the BPMN 2.0 spec (and doesn’t have a dedicated icon as a consequence). Hence, in Flowable the Camel task is implemented as a dedicated service task. Also note that you must include the Flowable Camel module in your project to use the Camel task functionality.

Defining a Camel Task

The Camel task is implemented as a dedicated Service Task and is defined by setting 'camel' for the type of the service task.

<serviceTask id="sendCamel" flowable:type="camel">

The process definition itself needs nothing other than the camel type definition on a service task. The integration logic is all delegated to the Camel container. By default, the Flowable engine looks for a camelContext bean in the Spring container. The camelContext bean defines the Camel routes that will be loaded by the Camel container. In the following example, the routes are loaded from a specific Java package, but you can also define routes directly in the Spring configuration itself.

<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
  <packageScan>
    <package>org.flowable.camel.route</package>
  </packageScan>
</camelContext>

For more documentation about Camel routes, you can look at the Camel website. The basic concepts are demonstrated through a few small samples below. In the first sample, we will do the simplest form of Camel call from a Flowable workflow. Let’s call it SimpleCamelCall.

If you want to define multiple Camel context beans or want to use a different bean name, this can be overridden in the Camel task definition like this:

<serviceTask id="serviceTask1" flowable:type="camel">
  <extensionElements>
    <flowable:field name="camelContext" stringValue="customCamelContext" />
  </extensionElements>
</serviceTask>
Simple Camel Call example

All the files related to this example can be found in the org.flowable.camel.examples.simpleCamelCall package of the flowable-camel module. The goal is simply to activate a specific Camel route. First of all, we need a Spring context that contains the route definitions, as mentioned previously. The following serves this purpose:

<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
  <packageScan>
    <package>org.flowable.camel.examples.simpleCamelCall</package>
  </packageScan>
</camelContext>
public class SimpleCamelCallRoute extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    from("flowable:SimpleCamelCallProcess:simpleCall").to("log:org.flowable.camel.examples.SimpleCamelCall");
  }

}

The route just logs the message body and nothing more. Notice the format of the from endpoint. It consists of three parts:

  • flowable: refers to the engine endpoint

  • SimpleCamelCallProcess: name of the process

  • simpleCall: name of the Camel service in the process

OK, our route is now properly configured and accessible to Camel. Now comes the workflow part. The workflow looks like:

<process id="SimpleCamelCallProcess">
  <startEvent id="start"/>
  <sequenceFlow id="flow1" sourceRef="start" targetRef="simpleCall"/>

  <serviceTask id="simpleCall" flowable:type="camel"/>

  <sequenceFlow id="flow2" sourceRef="simpleCall" targetRef="end"/>
  <endEvent id="end"/>
</process>
Ping Pong example

Our example worked, but nothing is really transferred between Camel and Flowable so there is not much merit in it. In this example, we try to send and receive data to and from Camel. We send a string, Camel concatenates something to it and returns back the result. The sender part is trivial, we send our message in the form of a variable to Camel Task. Here is our caller code:

@Deployment
public void testPingPong() {
  Map<String, Object> variables = new HashMap<String, Object>();

  variables.put("input", "Hello");
  Map<String, String> outputMap = new HashMap<String, String>();
  variables.put("outputMap", outputMap);

  runtimeService.startProcessInstanceByKey("PingPongProcess", variables);
  assertEquals(1, outputMap.size());
  assertNotNull(outputMap.get("outputValue"));
  assertEquals("Hello World", outputMap.get("outputValue"));
}

The variable "input" is actually the input for the Camel route, and outputMap is there to capture the result back from Camel. The process could be something like this:

<process id="PingPongProcess">
  <startEvent id="start"/>
  <sequenceFlow id="flow1" sourceRef="start" targetRef="ping"/>
  <serviceTask id="ping" flowable:type="camel"/>
  <sequenceFlow id="flow2" sourceRef="ping" targetRef="saveOutput"/>
  <serviceTask id="saveOutput" flowable:class="org.flowable.camel.examples.pingPong.SaveOutput" />
  <sequenceFlow id="flow3" sourceRef="saveOutput" targetRef="end"/>
  <endEvent id="end"/>
</process>

Note that the SaveOutput service task stores the value of the "Output" variable from the context into the previously mentioned outputMap. Now we have to know how the variables are sent to Camel and returned back; this is where the notion of Camel behavior comes into play. The way variables are communicated to Camel is configurable via CamelBehavior. In this sample we use the default one; a short description of the others follows afterwards. With code like the following, you can configure the desired Camel behavior:

<serviceTask id="serviceTask1" flowable:type="camel">
  <extensionElements>
    <flowable:field name="camelBehaviorClass" stringValue="org.flowable.camel.impl.CamelBehaviorCamelBodyImpl" />
  </extensionElements>
</serviceTask>

If you do not give a specific behavior, then org.flowable.camel.impl.CamelBehaviorDefaultImpl will be set. This behavior copies the variables to Camel properties of the same name. In return, regardless of selected behavior, if the Camel message body is a map, then each of its elements is copied as a variable, else the whole object is copied into a specific variable with the name of "camelBody". Knowing this, this Camel route concludes our second example:

@Override
public void configure() throws Exception {
  from("flowable:PingPongProcess:ping").transform().simple("${property.input} World");
}

In this route, the string "World" is concatenated to the end of the property named "input" and the result is set in the message body. It’s accessible by checking the "camelBody" variable in the Java service task and is copied to "outputMap". Now that the example with its default behavior works, let’s see what the other possibilities are. At the start of every Camel route, the process instance ID will be copied into a Camel property with the specific name "PROCESS_ID_PROPERTY". It’s later used for correlating the process instance and the Camel route, and it can also be exploited in the Camel route itself.
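
For illustration, the SaveOutput delegate mentioned above could look roughly like the following sketch; the actual class shipped with the flowable-camel examples may differ in its details.

// Sketch of a SaveOutput-style delegate: copies the value Camel placed in the
// "camelBody" variable into the outputMap that was passed in when starting the process.
public class SaveOutput implements JavaDelegate {

  @SuppressWarnings("unchecked")
  public void execute(DelegateExecution execution) {
    Map<String, String> outputMap = (Map<String, String>) execution.getVariable("outputMap");
    outputMap.put("outputValue", (String) execution.getVariable("camelBody"));
  }
}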

There are three different behaviors available out of the box in Flowable. The behavior can be overridden by a specific phrase in the route URL. Here’s an example of overriding the already defined behavior in the URL:

from("flowable:asyncCamelProcess:serviceTaskAsync2?copyVariablesToProperties=true").

The following table provides an overview of three available Camel behaviors:

  • CamelBehaviorDefaultImpl (in URL: copyVariablesToProperties): copy Flowable variables as Camel properties.

  • CamelBehaviorCamelBodyImpl (in URL: copyCamelBodyToBody): copy only the Flowable variable named "camelBody" as the Camel message body.

  • CamelBehaviorBodyAsMapImpl (in URL: copyVariablesToBodyAsMap): copy all the Flowable variables in a map as the Camel message body.

The above table describes how Flowable variables are going to be transferred to Camel. The following table describes how the Camel variables are returned back to Flowable. This can only be configured in route URLs.

  • Default: if the Camel body is a map, copy each element as a Flowable variable; otherwise, copy the whole Camel body as the "camelBody" Flowable variable.

  • copyVariablesFromProperties: copy Camel properties as Flowable variables of the same name.

  • copyCamelBodyToBodyAsString: as for default, but if the Camel body is not a map, first convert it to a String and then copy it into "camelBody".

  • copyVariablesFromHeader: additionally copy Camel headers to Flowable variables of the same names.

Returning back the variables

What is mentioned above about passing variables only holds for the initiating side of the variable transfer, whether you’re going from Camel to Flowable or from Flowable to Camel.
It is important to note that, because of the special non-blocking behavior of Flowable, variables are not automatically returned from Flowable to Camel. For that to happen, a special syntax is available. There can be one or more parameters in a Camel route URL with the format var.return.someVariableName. All variables with a name equal to one of these parameters (without the var.return part) will be considered output variables and will be copied back as Camel properties with the same names.
For example, in a route like:

from("direct:start").to("flowable:process?var.return.exampleVar").to("mock:result");

A Flowable variable with the name of exampleVar will be considered as an output variable and will be copied back as a property in Camel with the same name.

Asynchronous Ping Pong example

All the previous examples were synchronous. The process instance waits until the Camel route is concluded and returned. In some cases, we might need the Flowable process instance to continue. For such purposes, the asynchronous capability of the Camel service task is useful. You can make use of this feature by setting the async property of the Camel service task to true.

<serviceTask id="serviceAsyncPing" flowable:type="camel" flowable:async="true"/>

By setting this feature, the specified Camel route is activated asynchronously by the Flowable job executor. When you define a queue in the Camel route, the Flowable process instance will continue with the activities defined after the Camel service task in the process definition. The Camel route will be executed fully asynchronously from the process execution. If you want to wait for a response of the Camel service task somewhere in your process definition, you can use a receive task.

<receiveTask id="receiveAsyncPing" name="Wait State" />

The process instance will wait until a signal is received, for example from Camel. In Camel you can send a signal to the process instance by sending a message to the proper Flowable endpoint.

from("flowable:asyncPingProcess:serviceAsyncPing").to("flowable:asyncPingProcess:receiveAsyncPing");
The to endpoint URL again consists of three parts:

  • the constant string "flowable"

  • the process name

  • the receive task name

Instantiate workflow from Camel route

In all the previous examples, the Flowable process instance is started first and the Camel route was started from the process instance. It is also possible to do things the other way around, with a process instance being started or invoked from an already started Camel route. It’s very similar to signalling a receive task. Here’s a sample route:

from("direct:start").to("flowable:camelProcess");

As you can see, the URL has two parts: the first is the constant string "flowable" and the second is the name of the process definition. Obviously, the process definition should already be deployed to the Flowable engine.

It is also possible to set the initiator of the process instance to some authenticated user ID that is provided in a Camel header. To achieve this, first of all, an initiator variable must be specified in the process definition:

<startEvent id="start" flowable:initiator="initiator" />

Then given that the user ID is contained in a Camel header named CamelProcessInitiatorHeader, the Camel route could be defined as follows:

from("direct:startWithInitiatorHeader")
    .setHeader("CamelProcessInitiatorHeader", constant("kermit"))
    .to("flowable:InitiatorCamelCallProcess?processInitiatorHeaderName=CamelProcessInitiatorHeader");

8.5.10. Manual Task

Description

A Manual Task defines a task that is external to the BPM engine. It’s used to model work that is done by somebody, which the engine does not need to know of, nor is there a system or user interface. For the engine, a manual task is handled as a pass-through activity, automatically continuing the process from the moment process execution arrives into it.

Graphical Notation

A manual task is visualized as a rounded rectangle, with a little hand icon in the upper left corner.

bpmn.manual.task
XML representation
<manualTask id="myManualTask" name="Call client for more information" />

8.5.11. Java Receive Task

Description

A Receive Task is a simple task that waits for the arrival of a certain message. Currently, we have only implemented Java semantics for this task. When process execution arrives at a Receive Task, the process state is committed to the persistence store. This means that the process will stay in this wait state until a specific message is received by the engine, which triggers the continuation of the process past the Receive Task.

Graphical notation

A Receive Task is visualized as a task (rounded rectangle) with a message icon in the top left corner. The message is white (a black message icon would have send semantics).

bpmn.receive.task
XML representation
<receiveTask id="waitState" name="wait" />

To continue a process instance that is currently waiting at such a Receive Task, the runtimeService.trigger(executionId) must be called using the ID of the execution that arrived in the Receive Task. The following code snippet shows how this works in practice:

ProcessInstance pi = runtimeService.startProcessInstanceByKey("receiveTask");
Execution execution = runtimeService.createExecutionQuery()
  .processInstanceId(pi.getId())
  .activityId("waitState")
  .singleResult();
assertNotNull(execution);

runtimeService.trigger(execution.getId());

8.5.12. Shell Task

Description

The Shell task allows you to run shell scripts and commands. Note that the Shell task is not an official task of the BPMN 2.0 spec (and doesn’t have a dedicated icon as a consequence).

Defining a Shell task

The Shell task is implemented as a dedicated Service Task and is defined by setting 'shell' for the type of the service task.

<serviceTask id="shellEcho" flowable:type="shell">

The Shell task is configured by field injection. All the values for these properties can contain EL expressions, which are resolved at runtime during process execution. The following properties can be set:

  • command (required, String): shell command to execute.

  • arg0-5 (optional, String): parameter 0 to parameter 5.

  • wait (optional, true/false): wait, if necessary, until the shell process has terminated. Default: true.

  • redirectError (optional, true/false): merge standard error with the standard output. Default: false.

  • cleanEnv (optional, true/false): the shell process does not inherit the current environment. Default: false.

  • outputVariable (optional, String): name of the variable that holds the output. Default: the output is not recorded.

  • errorCodeVariable (optional, String): name of the variable that holds any result error code. Default: the error level is not registered.

  • directory (optional, String): default directory of the shell process. Default: the current directory.

Example usage

The following XML snippet shows an example of using the Shell Task. It runs the shell script "cmd /c echo EchoTest", waits for it to be terminated and puts the result in resultVar:

<serviceTask id="shellEcho" flowable:type="shell" >
  <extensionElements>
    <flowable:field name="command" stringValue="cmd" />
    <flowable:field name="arg1" stringValue="/c" />
    <flowable:field name="arg2" stringValue="echo" />
    <flowable:field name="arg3" stringValue="EchoTest" />
    <flowable:field name="wait" stringValue="true" />
    <flowable:field name="outputVariable" stringValue="resultVar" />
  </extensionElements>
</serviceTask>

8.5.13. Execution listener

Execution listeners allow you to execute external Java code or evaluate an expression when certain events occur during process execution. The events that can be captured are:

  • Starting and ending a process instance.

  • Taking a transition.

  • Starting and ending an activity.

  • Starting and ending a gateway.

  • Starting and ending an intermediate event.

  • Ending a start event and starting an end event.

The following process definition contains 3 execution listeners:

<process id="executionListenersProcess">

  <extensionElements>
    <flowable:executionListener
        class="org.flowable.examples.bpmn.executionlistener.ExampleExecutionListenerOne"
        event="start" />
  </extensionElements>

  <startEvent id="theStart" />
  <sequenceFlow sourceRef="theStart" targetRef="firstTask" />

  <userTask id="firstTask" />
  <sequenceFlow sourceRef="firstTask" targetRef="secondTask">
    <extensionElements>
      <flowable:executionListener
          class="org.flowable.examples.bpmn.executionListener.ExampleExecutionListenerTwo" />
    </extensionElements>
  </sequenceFlow>

  <userTask id="secondTask" >
    <extensionElements>
      <flowable:executionListener expression="${myPojo.myMethod(execution.event)}" event="end" />
    </extensionElements>
  </userTask>
  <sequenceFlow sourceRef="secondTask" targetRef="thirdTask" />

  <userTask id="thirdTask" />
  <sequenceFlow sourceRef="thirdTask" targetRef="theEnd" />

  <endEvent id="theEnd" />
</process>

The first execution listener is notified when the process starts. The listener is an external Java class (ExampleExecutionListenerOne) and should implement the org.flowable.engine.delegate.ExecutionListener interface. When the event occurs (in this case, the start event), the method notify(ExecutionListenerExecution execution) is called.

public class ExampleExecutionListenerOne implements ExecutionListener {

  public void notify(ExecutionListenerExecution execution) throws Exception {
    execution.setVariable("variableSetInExecutionListener", "firstValue");
    execution.setVariable("eventReceived", execution.getEventName());
  }
}

It is also possible to use a delegation class that implements the org.flowable.engine.delegate.JavaDelegate interface. These delegation classes can then be reused in other constructs, such as a delegation for a serviceTask.

The second execution listener is called when the transition is taken. Note that the listener element doesn’t define an event, since only take events are fired on transitions. Values in the event attribute are ignored when a listener is defined on a transition.

The last execution listener is called when the activity secondTask ends. Instead of using a class on the listener declaration, an expression is defined, which is evaluated/invoked when the event is fired.

<flowable:executionListener expression="${myPojo.myMethod(execution.eventName)}" event="end" />

As with other expressions, execution variables are resolved and can be used. Because the execution implementation object has a property that exposes the event name, it’s possible to pass the event-name to your methods using execution.eventName.

Execution listeners also support using a delegateExpression, similar to a service task.

<flowable:executionListener event="start" delegateExpression="${myExecutionListenerBean}" />

A while back, we also introduced a new type of execution listener, the org.flowable.engine.impl.bpmn.listener.ScriptExecutionListener. This script execution listener allows you to execute a piece of script logic for an execution listener event.

<flowable:executionListener event="start" class="org.flowable.engine.impl.bpmn.listener.ScriptExecutionListener">
  <flowable:field name="script">
    <flowable:string>
      def bar = "BAR";  // local variable
      foo = "FOO"; // pushes variable to execution context
      execution.setVariable("var1", "test"); // test access to execution instance
      bar // implicit return value
    </flowable:string>
  </flowable:field>
  <flowable:field name="language" stringValue="groovy" />
  <flowable:field name="resultVariable" stringValue="myVar" />
</flowable:executionListener>
Field injection on execution listeners

When using an execution listener that is configured with the class attribute, field injection can be applied. This is exactly the same mechanism as used in Service task field injection, which contains an overview of the possibilities provided by field injection.

The fragment below shows a simple example process with an execution listener with fields injected.

<process id="executionListenersProcess">
  <extensionElements>
    <flowable:executionListener
        class="org.flowable.examples.bpmn.executionListener.ExampleFieldInjectedExecutionListener"
        event="start">
      <flowable:field name="fixedValue" stringValue="Yes, I am " />
      <flowable:field name="dynamicValue" expression="${myVar}" />
    </flowable:executionListener>
  </extensionElements>

  <startEvent id="theStart" />
  <sequenceFlow sourceRef="theStart" targetRef="firstTask" />

  <userTask id="firstTask" />
  <sequenceFlow sourceRef="firstTask" targetRef="theEnd" />

  <endEvent id="theEnd" />
</process>
public class ExampleFieldInjectedExecutionListener implements ExecutionListener {

  private Expression fixedValue;

  private Expression dynamicValue;

  public void notify(ExecutionListenerExecution execution) throws Exception {
    execution.setVariable("var",
        fixedValue.getValue(execution).toString() + dynamicValue.getValue(execution).toString());
  }
}

The class ExampleFieldInjectedExecutionListener concatenates the two injected fields (one fixed and the other dynamic) and stores the result in the process variable var.

@Deployment(resources = {
  "org/flowable/examples/bpmn/executionListener/ExecutionListenersFieldInjectionProcess.bpmn20.xml"})
public void testExecutionListenerFieldInjection() {
  Map<String, Object> variables = new HashMap<String, Object>();
  variables.put("myVar", "listening!");

  ProcessInstance processInstance = runtimeService.startProcessInstanceByKey(
    "executionListenersProcess", variables);

  Object varSetByListener = runtimeService.getVariable(processInstance.getId(), "var");
  assertNotNull(varSetByListener);
  assertTrue(varSetByListener instanceof String);

  // Result is a concatenation of fixed injected field and injected expression
  assertEquals("Yes, I am listening!", varSetByListener);
}

Note that the same rules with regards to thread-safety apply to service tasks. Please read the relevant section for more information.

8.5.14. Task listener

A task listener is used to execute custom Java logic or an expression on the occurrence of a certain task-related event.

A task listener can only be added in the process definition as a child element of a user task. Note that this also must happen as a child of the BPMN 2.0 extensionElements and in the flowable namespace, since a task listener is a Flowable-specific construct.

<userTask id="myTask" name="My Task" >
  <extensionElements>
    <flowable:taskListener event="create" class="org.flowable.MyTaskCreateListener" />
  </extensionElements>
</userTask>

A task listener supports the following attributes:

  • event (required): the type of task event on which the task listener will be invoked. Possible events are

    • create: occurs when the task has been created and all task properties are set.

    • assignment: occurs when the task is assigned to somebody. Note: when process execution arrives in a userTask, first an assignment event will be fired, before the create event is fired. This might seem an unnatural order, but the reason is pragmatic: when receiving the create event, we usually want to inspect all properties of the task including the assignee.

    • complete: occurs when the task is completed and just before the task is deleted from the runtime data.

    • delete: occurs just before the task is going to be deleted. Note that it will also be executed when the task is normally finished via completeTask.

  • class: the delegation class that must be called. This class must implement the org.flowable.engine.delegate.TaskListener interface.

public class MyTaskCreateListener implements TaskListener {

  public void notify(DelegateTask delegateTask) {
    // Custom logic goes here
  }

}

It is also possible to use field injection to pass process variables or the execution to the delegation class. Note that an instance of the delegation class is created on process deployment (as is the case with any class delegation in Flowable), which means that the instance is shared between all process instance executions.

  • expression (cannot be used together with the class attribute): specifies an expression that will be executed when the event happens. It is possible to pass the DelegateTask object and the name of the event (using task.eventName) as parameters to the called object.

<flowable:taskListener event="create" expression="${myObject.callMethod(task, task.eventName)}" />
  • delegateExpression allows you to specify an expression that resolves to an object implementing the TaskListener interface, similar to a service task.

<flowable:taskListener event="create" delegateExpression="${myTaskListenerBean}" />
  • A while back, we also introduced a new type of task listener, the org.flowable.engine.impl.bpmn.listener.ScriptTaskListener. This script task listener allows you to execute a piece of script logic for a task listener event.

<flowable:taskListener event="complete" class="org.flowable.engine.impl.bpmn.listener.ScriptTaskListener" >
  <flowable:field name="script">
    <flowable:string>
      def bar = "BAR";  // local variable
      foo = "FOO"; // pushes variable to execution context
      task.setOwner("kermit"); // test access to task instance
      bar // implicit return value
    </flowable:string>
  </flowable:field>
  <flowable:field name="language" stringValue="groovy" />
  <flowable:field name="resultVariable" stringValue="myVar" />
</flowable:taskListener>

8.5.15. Multi-instance (for each)

Description

A multi-instance activity is a way of defining repetition for a certain step in a business process. In programming concepts, a multi-instance is equivalent to the for each construct: it allows you to execute a certain step, or even a complete sub-process, for each item in a given collection, sequentially or in parallel.

A multi-instance is a regular activity that has extra properties defined (named 'multi-instance characteristics') that will cause the activity to be executed multiple times at runtime. Tasks, Sub-Processes and Call Activities can become multi-instance activities; a Gateway or Event cannot become multi-instance.

As required by the BPMN 2.0 specification, each parent execution of the created executions for each instance will have following variables:

  • nrOfInstances: the total number of instances

  • nrOfActiveInstances: the number of currently active (not yet finished) instances. For a sequential multi-instance, this will always be 1.

  • nrOfCompletedInstances: the number of already completed instances.

These values can be retrieved by calling the execution.getVariable(x) method.
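
For example, a JavaDelegate or execution listener that runs in the scope of the multi-instance activity could inspect the progress as in the following minimal sketch (it assumes a DelegateExecution named execution is available):

// Sketch: reading the multi-instance bookkeeping variables from a delegate or listener
// attached to the multi-instance activity.
Integer nrOfInstances = (Integer) execution.getVariable("nrOfInstances");
Integer nrOfActiveInstances = (Integer) execution.getVariable("nrOfActiveInstances");
Integer nrOfCompletedInstances = (Integer) execution.getVariable("nrOfCompletedInstances");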

Additionally, each of the created executions will have an execution-local variable (not visible for the other executions, and not stored on process instance level):

  • loopCounter: indicates the index in the for-each loop of that particular instance. The loopCounter variable can be renamed using a Flowable elementIndexVariable attribute.

Graphical notation

If an activity is multi-instance, this is indicated by three short lines at the bottom of the activity: three vertical lines indicate that the instances will be executed in parallel, while three horizontal lines indicate sequential execution.

bpmn.multi.instance
XML representation

To make an activity multi-instance, the activity XML element must have a multiInstanceLoopCharacteristics child element.

<multiInstanceLoopCharacteristics isSequential="false|true">
  ...
</multiInstanceLoopCharacteristics>

The isSequential attribute indicates if the instances of that activity are executed sequentially or in parallel.

The number of instances is calculated once, when entering the activity. There are a few ways of configuring this. One way is to directly specify a number using the loopCardinality child element:

<multiInstanceLoopCharacteristics isSequential="false|true">
  <loopCardinality>5</loopCardinality>
</multiInstanceLoopCharacteristics>

Expressions that resolve to a positive number are also allowed:

<multiInstanceLoopCharacteristics isSequential="false|true">
  <loopCardinality>${nrOfOrders-nrOfCancellations}</loopCardinality>
</multiInstanceLoopCharacteristics>

Another way to define the number of instances is to specify the name of a process variable that is a collection using the loopDataInputRef child element. For each item in the collection, an instance will be created. Optionally, it is possible to set that specific item of the collection for the instance using the inputDataItem child element. This is shown in the following XML example:

<userTask id="miTasks" name="My Task ${loopCounter}" flowable:assignee="${assignee}">
  <multiInstanceLoopCharacteristics isSequential="false">
    <loopDataInputRef>assigneeList</loopDataInputRef>
    <inputDataItem name="assignee" />
  </multiInstanceLoopCharacteristics>
</userTask>

Suppose the variable assigneeList contains the values [kermit, gonzo, fozzie]. In the snippet above, three user tasks will be created in parallel. Each of the executions will have a process variable named assignee containing one value of the collection, which is used to assign the user task in this example. A sketch of starting a process with such a collection variable follows below.
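
As referenced above, the collection can simply be passed as a process variable when starting the process instance. This is only a sketch; the process definition key miProcess is hypothetical.

// Sketch: starting a process that contains the multi-instance user task above,
// passing the assigneeList collection as a process variable.
// "miProcess" is a hypothetical process definition key.
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("assigneeList", Arrays.asList("kermit", "gonzo", "fozzie"));
runtimeService.startProcessInstanceByKey("miProcess", variables);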

The downside of loopDataInputRef and inputDataItem is that the names are pretty hard to remember and, due to BPMN 2.0 schema restrictions, they can’t contain expressions. Flowable solves this by offering the collection and elementVariable attributes on the multiInstanceLoopCharacteristics element:

<userTask id="miTasks" name="My Task" flowable:assignee="${assignee}">
  <multiInstanceLoopCharacteristics isSequential="true"
      flowable:collection="${myService.resolveUsersForTask()}" flowable:elementVariable="assignee" >
  </multiInstanceLoopCharacteristics>
</userTask>

A multi-instance activity ends when all instances are finished. However, it is possible to specify an expression that is evaluated every time an instance ends. When this expression evaluates to true, all remaining instances are destroyed and the multi-instance activity ends, continuing the process. Such an expression must be defined in the completionCondition child element.

<userTask id="miTasks" name="My Task" flowable:assignee="${assignee}">
  <multiInstanceLoopCharacteristics isSequential="false"
      flowable:collection="assigneeList" flowable:elementVariable="assignee" >
    <completionCondition>${nrOfCompletedInstances/nrOfInstances >= 0.6 }</completionCondition>
  </multiInstanceLoopCharacteristics>
</userTask>

In this example, there will be parallel instances created for each element of the assigneeList collection. However, when 60% of the tasks are completed, the other tasks are deleted and the process continues.

Boundary events and multi-instance

Since a multi-instance is a regular activity, it is possible to define a boundary event on its boundary. In the case of an interrupting boundary event, when the event is caught, all instances that are still active will be destroyed. Take, for example, the following multi-instance sub-process:

bpmn.multi.instance.boundary.event

Here, all instances of the sub-process will be destroyed when the timer fires, regardless of how many instances there are or which inner activities haven’t yet completed.

Multi instance and execution listeners

There is a caveat when using execution listeners in combination with multi-instance. Take, for example, the following snippet of BPMN 2.0 XML, which is defined at the same level as the multiInstanceLoopCharacteristics XML element:

<extensionElements>
  <flowable:executionListener event="start" class="org.flowable.MyStartListener"/>
  <flowable:executionListener event="end" class="org.flowable.MyEndListener"/>
</extensionElements>

For a normal BPMN activity, there will be an invocation of these listeners when the activity is started and ended.

However, when the activity is multi-instance, the behavior is different:

  • When the multi-instance activity is entered, before any of the inner activities is executed, a start event is thrown. The loopCounter variable is not yet set (it is null).

  • For each of the actual activities visited, a start event is thrown. The loopCounter variable is set.

The same logic applies for the end event:

  • After leaving the actual activity, an end event is thrown. The loopCounter variable is set.

  • When the multi-instance activity has finished as a whole, an end event is thrown. The loopCounter variable is not set.

For example:

<subProcess id="subprocess1" name="Sub Process">
  <extensionElements>
    <flowable:executionListener event="start" class="org.flowable.MyStartListener"/>
    <flowable:executionListener event="end" class="org.flowable.MyEndListener"/>
  </extensionElements>
  <multiInstanceLoopCharacteristics isSequential="false">
    <loopDataInputRef>assignees</loopDataInputRef>
    <inputDataItem name="assignee"></inputDataItem>
  </multiInstanceLoopCharacteristics>
  <startEvent id="startevent2" name="Start"></startEvent>
  <endEvent id="endevent2" name="End"></endEvent>
  <sequenceFlow id="flow3" name="" sourceRef="startevent2" targetRef="endevent2"></sequenceFlow>
</subProcess>

In this example, suppose the assignees list has three items. The following happens at runtime:

  • A start event is thrown for the multi-instance as a whole. The start execution listener is invoked. The loopCounter and the assignee variables will not be set (they will be null).

  • A start event is thrown for each activity instance. The start execution listener is invoked three times. The loopCounter and the assignee variables will be set (not null).

  • So, in total, the start execution listener is invoked four times.

Note that the same applies when the multiInstanceLoopCharacteristics is defined on something other than a sub-process. For example, if the example above were a simple userTask, the same reasoning would apply.
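
To make the difference concrete, here is a minimal sketch of what the org.flowable.MyStartListener class from the snippet above could look like; it simply branches on whether loopCounter has already been set. This implementation is illustrative only and is not part of the Flowable distribution.

import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.ExecutionListener;

public class MyStartListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) {
        Integer loopCounter = (Integer) execution.getVariable("loopCounter");
        if (loopCounter == null) {
            // start event thrown for the multi-instance activity as a whole
        } else {
            // start event thrown for one of the individual activity instances
        }
    }
}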

8.5.16. Compensation Handlers

Description

If an activity is used for compensating the effects of another activity, it can be declared to be a compensation handler. Compensation handlers do not exist in normal flows and are only executed when a compensation event is thrown.

Compensation handlers must not have incoming or outgoing sequence flows.

A compensation handler must be associated with a compensation boundary event using a directed association.

Graphical notation

If an activity is a compensation handler, the compensation event icon is displayed in the center bottom area. The following excerpt from a process diagram shows a service task with an attached compensation boundary event, which is associated with a compensation handler. Notice the compensation handler icon in the bottom center area of the "cancel hotel reservation" service task.

bpmn.boundary.compensation.event
XML representation

In order to declare an activity to be a compensation handler, we need to set the attribute isForCompensation to true:

<serviceTask id="undoBookHotel" isForCompensation="true" flowable:class="...">
</serviceTask>
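
The class referenced by flowable:class is a regular Java delegate. As a hedged sketch (the class name and the hotelReservationId variable are hypothetical, chosen only to match the hotel booking example used later in this chapter), such a compensation handler delegate could look like this:

import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;

// Hypothetical delegate for the flowable:class attribute of the undoBookHotel service task above.
public class CancelHotelReservationDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Compensation logic: undo the effects of the original "book hotel" activity,
        // for example by calling an external booking service with a stored reservation id.
        String reservationId = (String) execution.getVariable("hotelReservationId"); // hypothetical variable
        // bookingService.cancel(reservationId);  // external call, omitted in this sketch
    }
}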

8.6. Sub-Processes and Call Activities

8.6.1. Sub-Process

Description

A Sub-Process is an activity that contains other activities, gateways, events, and so on, which in itself forms a process that is part of the bigger process. A Sub-Process is completely defined inside a parent process (that’s why it’s often called an embedded Sub-Process).

Sub-Processes have two major use cases:

  • Sub-Processes allow hierarchical modeling. Many modeling tools allow Sub-Processes to be collapsed, hiding all the details of the Sub-Process, resulting in a high-level, end-to-end overview of the business process.

  • A Sub-Process creates a new scope for events. Events that are thrown during execution of the Sub-Process can be caught by a boundary event on the boundary of the Sub-Process, creating a scope for that event limited to the Sub-Process.

Using a Sub-Process does impose some constraints:

  • A Sub-Process can only have one none start event; no other start event types are allowed. A Sub-Process must have at least one end event. Note that the BPMN 2.0 specification allows the omission of the start and end events in a Sub-Process, but the current Flowable implementation does not support this.

  • Sequence flows cannot cross Sub-Process boundaries.

Graphical Notation

A Sub-Process is visualized as a typical activity (a rounded rectangle). If the Sub-Process is collapsed, only the name and a plus-sign are displayed, giving a high-level overview of the process:

bpmn.collapsed.subprocess

If the Sub-Process is expanded, the steps of the Sub-Process are displayed within the Sub-Process boundaries:

bpmn.expanded.subprocess

One of the main reasons to use a Sub-Process is to define a scope for a certain event. The following process model shows this: both the investigate software/investigate hardware tasks need to be done in parallel, but both tasks need to be done within a certain time, before Level 2 support is consulted. Here, the scope of the timer (in which activities must be done in time) is constrained by the Sub-Process.

bpmn.subprocess.with.boundary.timer
XML representation

A Sub-Process is defined by the subProcess element. All activities, gateways, events, and so on, that are part of the Sub-Process need to be enclosed within this element.

<subProcess id="subProcess">

  <startEvent id="subProcessStart" />

  ... other Sub-Process elements ...

  <endEvent id="subProcessEnd" />

</subProcess>

8.6.2. Event Sub-Process

Description

The Event Sub-Process is new in BPMN 2.0. An Event Sub-Process is a sub-process that is triggered by an event. An Event Sub-Process can be added at the process level or at any sub-process level. The event used to trigger an event sub-process is configured using a start event. From this, it follows that none start events are not supported for Event Sub-Processes. An Event Sub-Process might be triggered using events, such as message events, error events, signal events, timer events, or compensation events. The subscription to the start event is created when the scope (process instance or sub-process) hosting the Event Sub-Process is created. The subscription is removed when the scope is destroyed.

An Event Sub-Process may be interrupting or non-interrupting. An interrupting sub-process cancels any executions in the current scope. A non-interrupting Event Sub-Process spawns a new concurrent execution. While an interrupting Event Sub-Process can only be triggered once for each activation of the scope hosting it, a non-interrupting Event Sub-Process can be triggered multiple times. The fact of whether a sub-process is interrupting is configured using the start event triggering the Event Sub-Process.

An Event Sub-Process must not have any incoming or outgoing sequence flows. As an Event Sub-Process is triggered by an event, an incoming sequence flow makes no sense. When an Event Sub-Process is ended, either the current scope is ended (if an interrupting Event Sub-Process), or the concurrent execution spawned for the non-interrupting sub-process is ended.

Current limitations:

  • Flowable supports Event Sub-Processes triggered using Error, Timer, Signal and Message Start Events.

Graphical Notation

An Event Sub-Process can be visualized as an embedded sub-process with a dotted outline.

bpmn.subprocess.eventSubprocess
XML representation

An Event Sub-Process is represented in XML in the same way as an embedded sub-process. In addition, the attribute triggeredByEvent must have the value true:

<subProcess id="eventSubProcess" triggeredByEvent="true">
  ...
</subProcess>
Example

The following is an example of an Event Sub-Process triggered using an Error Start Event. The Event Sub-Process is located at the "process level", in other words, is scoped to the process instance:

bpmn.subprocess.eventSubprocess.example.1

This is how the Event Sub-Process would look in XML:

<subProcess id="eventSubProcess" triggeredByEvent="true">
  <startEvent id="catchError">
    <errorEventDefinition errorRef="error" />
  </startEvent>
  <sequenceFlow id="flow2" sourceRef="catchError" targetRef="taskAfterErrorCatch" />
  <userTask id="taskAfterErrorCatch" name="Provide additional data" />
</subProcess>

As already stated, an Event Sub-Process can also be added to an embedded sub-process. If it’s added to an embedded sub-process, it becomes an alternative to a boundary event. Consider the following two process diagrams. In both cases, the embedded sub-process throws an error event. Both times, the error is caught and handled using a user task.

bpmn.subprocess.eventSubprocess.example.2a

As opposed to:

bpmn.subprocess.eventSubprocess.example.2b

In both cases the same tasks are executed. However, there are differences between both modeling alternatives:

  • The Event Sub-Process is executed using the same execution that executed the scope it is hosted in. This means that an Event Sub-Process has access to the variables local to its scope. When using a boundary event, the execution created for executing the embedded sub-process is deleted by the sequence flow leaving the boundary event. This means that the variables created by the embedded sub-process are not available anymore.

  • When using an Event Sub-Process, the event is completely handled by the sub-process it is added to. When using a boundary event, the event is handled by the parent process.

These two differences can help you decide whether a boundary event or an embedded sub-process is better suited for solving a particular process modeling or implementation problem.

8.6.3. Transaction sub-process

Description

A transaction sub-process is an embedded sub-process that can be used to group multiple activities into a transaction. A transaction is a logical unit of work that allows a set of individual activities to be grouped, such that they either succeed or fail collectively.

Possible outcomes of a transaction: a transaction can end in one of three ways:

  • A transaction is successful if it is neither canceled nor ended by a hazard. If a transaction sub-process is successful, it is left using the outgoing sequence flows. A successful transaction might be compensated if a compensation event is thrown later in the process. Note: just as with "ordinary" embedded sub-processes, a transaction may be compensated after successful completion using an intermediate throwing compensation event.

  • A transaction is canceled if an execution reaches the cancel end event. In this case, all executions are terminated and removed. A single remaining execution is then set to the cancel boundary event, which triggers compensation. After compensation has completed, the transaction sub-process is left using the outgoing sequence flows of the cancel boundary event.

  • A transaction is ended by a hazard if an error event is thrown that is not caught within the scope of the transaction sub-process. This also applies if the error is caught on the boundary of the transaction sub-process. In these cases, compensation is not performed.

The following diagram illustrates the three different outcomes:

bpmn.transaction.subprocess.example.1

Relation to ACID transactions: it is important not to confuse the BPMN transaction sub-process with technical (ACID) transactions. The BPMN transaction sub-process is not a way to scope technical transactions. In order to understand transaction management in Flowable, read the section on concurrency and transactions. A BPMN transaction is different from a technical transaction in the following ways:

  • While an ACID transaction is typically short-lived, a BPMN transaction may take hours, days or even months to complete. Consider the case where one of the activities grouped by a transaction is a user task: typically people have longer response times than applications. Or, in another situation, a BPMN transaction might wait for some business event to occur, like the fact that a particular order has been fulfilled. Such operations usually take considerably longer to complete than updating a record in a database, or storing a message using a transactional queue.

  • Because it is impossible to scope a technical transaction to the duration of a business activity, a BPMN transaction typically spans multiple ACID transactions.

  • As a BPMN transaction spans multiple ACID transactions, we lose ACID properties. For example, consider the example given above. Let’s assume the "book hotel" and the "charge credit card" operations are performed in separate ACID transactions. Let’s also assume that the "book hotel" activity is successful. Now we have an intermediary inconsistent state, because we have performed a hotel booking, but have not yet charged the credit card. Now, in an ACID transaction, we would also perform different operations sequentially and thus also have an intermediary inconsistent state. What is different here is that the inconsistent state is visible outside of the scope of the transaction. For example, if the reservations are made using an external booking service, other parties using the same booking service might already see that the hotel is booked. This means that when implementing business transactions, we completely lose the isolation property (granted, we usually also relax isolation when working with ACID transactions to allow for higher levels of concurrency, but there we have fine-grained control and intermediary inconsistencies are only present for very short periods of time).

  • A BPMN business transaction cannot be rolled back in the traditional sense. As it spans multiple ACID transactions, some of these ACID transactions might already have been committed at the time the BPMN transaction is canceled. At this point, they cannot be rolled back anymore.

As BPMN transactions are long-running in nature, the lack of isolation and a rollback mechanism needs to be dealt with differently. In practice, there is usually no better solution than to deal with these problems in a domain-specific way:

  • The rollback is performed using compensation. If a cancel event is thrown in the scope of a transaction, the effects of all activities that executed successfully and have a compensation handler are compensated.

  • The lack of isolation is also often dealt with using domain-specific solutions. For instance, in the example above, a hotel room might appear to be booked to a second customer before we have actually made sure that the first customer can pay for it. As this might be undesirable from a business perspective, a booking service might choose to allow for a certain amount of overbooking.

  • In addition, as the transaction can be aborted in the case of a hazard, the booking service has to deal with the situation where a hotel room is booked, but payment is never attempted (since the transaction was aborted). In this case, the booking service might choose a strategy where a hotel room is reserved for a maximum period of time and if payment is not received by then, the booking is canceled.

To sum it up: while ACID transactions offer a generic solution to such problems (rollback, isolation levels and heuristic outcomes), we need to find domain-specific solutions to these problems when implementing business transactions.

Current limitations:

  • The BPMN specification requires that the process engine reacts to events issued by the underlying transaction protocol and, for instance, that a transaction is cancelled if a cancel event occurs in the underlying protocol. As an embeddable engine, Flowable does not currently support this. For some ramifications of this, see the paragraph on consistency below.

Consistency on top of ACID transactions and optimistic concurrency: A BPMN transaction guarantees consistency in the sense that either all activities complete successfully, or if some activity cannot be performed, the effects of all other successful activities are compensated. So, either way, we end up in a consistent state. However, it is important to recognize that in Flowable, the consistency model for BPMN transactions is superposed on top of the consistency model for process execution. Flowable executes processes in a transactional way. Concurrency is addressed using optimistic locking. In Flowable, BPMN error, cancel and compensation events are built on top of the same ACID transactions and optimistic locking. For example, a cancel end event can only trigger compensation if it is actually reached. It is not reached if some undeclared exception is thrown by a service task before. Or, the effects of a compensation handler cannot be committed if some other participant in the underlying ACID transaction sets the transaction to the state rollback-only. Or, when two concurrent executions reach a cancel end event, compensation might be triggered twice and fail with an optimistic locking exception. All of this is to say that when implementing BPMN transactions in Flowable, the same set of rules applies as when implementing "ordinary" processes and sub-processes. So, to effectively guarantee consistency, it is important to implement processes in a way that does take the optimistic, transactional execution model into consideration.

Graphical Notation

A transaction sub-process is visualized as an embedded sub-process with a double outline.

bpmn.transaction.subprocess
XML representation

A transaction sub-process is represented in XML using the transaction tag:

<transaction id="myTransaction" >
  ...
</transaction>
Example

The following is an example of a transaction sub-process:

bpmn.transaction.subprocess.example.2

8.6.4. Call activity (sub-process)

Description

BPMN 2.0 makes a distinction between a regular sub-process, often also called embedded sub-process, and the call activity, which looks very similar. From a conceptual point of view, both will call a sub-process when the process execution arrives at the activity.

The difference is that the call activity references a process that is external to the process definition, whereas the sub-process is embedded within the original process definition. The main use case for the call activity is to have a reusable process definition that can be called from multiple other process definitions.

When process execution arrives at the call activity, a new execution is created that is a sub-execution of the execution that arrived at the call activity. This sub-execution is then used to execute the sub-process, potentially creating parallel child executions, as within a regular process. The super-execution waits until the sub-process has completely ended, and continues with the original process afterwards.

Graphical Notation

A call activity is visualized in the same way as a sub-process, but with a thick border (collapsed and expanded). Depending on the modeling tool, a call activity can also be expanded, but the default visualization is the collapsed sub-process representation.

bpmn.collapsed.call.activity
XML representation

A call activity is a regular activity, which requires a calledElement that references a process definition by its key. In practice, this means that the ID of the process is used in the calledElement.

<callActivity id="callCheckCreditProcess" name="Check credit" calledElement="checkCreditProcess" />

Note that the process definition of the sub-process is resolved at runtime. This means that the sub-process can be deployed independently from the calling process, if needed.
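
Because the reference is resolved at runtime, the called process only has to be deployed by the time the call activity is reached. A minimal sketch of deploying it separately (the classpath resource name is an assumption for this example; any BPMN 2.0 XML file defining a process with key checkCreditProcess will do):

repositoryService.createDeployment()
    .addClasspathResource("org/flowable/examples/checkCreditProcess.bpmn20.xml")
    .deploy();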

Passing variables

You can pass process variables to the sub-process and vice versa. The data is copied into the sub-process when it is started and copied back into the main process when it ends.

<callActivity id="callSubProcess" calledElement="checkCreditProcess">
  <extensionElements>
    <flowable:in source="someVariableInMainProcess" target="nameOfVariableInSubProcess" />
    <flowable:out source="someVariableInSubProcess" target="nameOfVariableInMainProcess" />
  </extensionElements>
</callActivity>

You can pass all process variables to the sub-process by setting the option inheritVariables to true.

<callActivity id="callSubProcess" calledElement="checkCreditProcess" flowable:inheritVariables="true"/>

We provide a Flowable extension as a shortcut for the BPMN standard elements dataInputAssociation and dataOutputAssociation, which only work if you declare process variables in the BPMN 2.0 standard way.

It is possible to use expressions here as well:

<callActivity id="callSubProcess" calledElement="checkCreditProcess" >
  <extensionElements>
    <flowable:in sourceExpression="${x+5}" target="y" />
    <flowable:out source="${y+5}" target="z" />
  </extensionElements>
</callActivity>

So, in the end z = y+5 = x+5+5.

The callActivity element also supports setting the business key on the sub-process instance using a custom Flowable attribute extension. The businessKey attribute can be used to set a custom business key value on the sub-process instance.

<callActivity id="callSubProcess" calledElement="checkCreditProcess" flowable:businessKey="${myVariable}">
...
</callActivity>

Defining the inheritBusinessKey attribute with a value of true will set the business key value on the sub-process to the value of the business key as defined in the calling process.

<callActivity id="callSubProcess" calledElement="checkCreditProcess" flowable:inheritBusinessKey="true">
...
</callActivity>
Example

The following process diagram shows a simple handling of an order. As the checking of the customer’s credit could be common to many other processes, the check credit step is modeled here as a call activity.

bpmn.call.activity.super.process

The process looks as follows:

<startEvent id="theStart" />
<sequenceFlow id="flow1" sourceRef="theStart" targetRef="receiveOrder" />

<manualTask id="receiveOrder" name="Receive Order" />
<sequenceFlow id="flow2" sourceRef="receiveOrder" targetRef="callCheckCreditProcess" />

<callActivity id="callCheckCreditProcess" name="Check credit" calledElement="checkCreditProcess" />
<sequenceFlow id="flow3" sourceRef="callCheckCreditProcess" targetRef="prepareAndShipTask" />

<userTask id="prepareAndShipTask" name="Prepare and Ship" />
<sequenceFlow id="flow4" sourceRef="prepareAndShipTask" targetRef="end" />

<endEvent id="end" />

The sub-process looks as follows:

bpmn.call.activity.sub.process

There is nothing special about the process definition of the sub-process. It could as well be used without being called from another process.

8.7. Transactions and Concurrency

8.7.1. Asynchronous Continuations

Flowable executes processes in a transactional way that can be configured to suit your needs. Let’s start by looking at how Flowable scopes transactions normally. If you trigger Flowable (start a process, complete a task, signal an execution), Flowable will advance in the process until it reaches wait states on each active path of execution. More concretely, it performs a depth-first search through the process graph and returns when it has reached wait states on every branch of execution. A wait state is a task that is performed "later", which means that Flowable persists the current execution and waits to be triggered again. The trigger can either come from an external source, for example, if we have a user task or a receive message task, or from Flowable itself if we have a timer event. This is illustrated in the following picture:

async.example.no.async

We see a segment of a BPMN process with a user task, a service task and a timer event. Completing the user task and validating the address is part of the same unit of work, so it should succeed or fail atomically. That means that if the service task throws an exception, we want to rollback the current transaction, such that the execution tracks back to the user task and the user task is still present in the database. This is also the default behavior of Flowable. In (1) an application or client thread completes the task. In that same thread, Flowable is now executing the service and advances until it reaches a wait state, in this case, the timer event (2). Then it returns the control to the caller (3), potentially committing the transaction (if it was started by Flowable).

In some cases this is not what we want. Sometimes we need custom control over transaction boundaries in a process, in order to be able to scope logical units of work. This is where asynchronous continuations come into play. Consider the following process (fragment):

async.example.async

This time we are completing the user task, generating an invoice and then sending that invoice to the customer. This time the generation of the invoice is not part of the same unit of work, so we do not want to rollback the completion of the user task if generating an invoice fails. What we want Flowable to do is complete the user task (1), commit the transaction and return the control to the calling application. Then we want to generate the invoice asynchronously, in a background thread. This background thread is the Flowable job executor (actually a thread pool) that periodically polls the database for jobs. Behind the scenes, when we reach the "generate invoice" task, we are creating a job "message" for Flowable to continue the process later and persisting it into the database. This job is then picked up by the job executor and executed. We are also giving the local job executor a little hint that there is a new job, to improve performance.

In order to use this feature, we can use the flowable:async="true" extension. So, for example, the service task would look like this:

<serviceTask id="service1" name="Generate Invoice"
    flowable:class="my.custom.Delegate"
    flowable:async="true" />

flowable:async can be specified on the following BPMN task types: task, serviceTask, scriptTask, businessRuleTask, sendTask, receiveTask, userTask, subProcess, callActivity

On a userTask, receiveTask or other wait states, the async continuation allows us to execute the start execution listeners in a separate thread/transaction.
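
For completeness, here is a hedged sketch of what the my.custom.Delegate class referenced in the example above could look like. Because the service task is marked flowable:async="true", its execute method runs in a job picked up by the background job executor rather than in the client thread that completed the user task. The invoice-generation logic itself is only indicated in a comment and is an assumption of this sketch.

package my.custom;

import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;

public class Delegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Runs in a separate transaction, started by the job executor after the
        // job created for this async service task has been acquired.
        // invoiceService.generateInvoice(execution.getProcessInstanceId());
    }
}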

8.7.2. Fail Retry

Flowable, in its default configuration, retries a job three times if an exception occurs during its execution. This also holds for asynchronous jobs. In some cases, more flexibility is required, and two additional parameters can be configured:

  • Number of retries

  • Delay between retries

These parameters can be configured by the flowable:failedJobRetryTimeCycle element. Here is a sample usage:

<serviceTask id="failingServiceTask" flowable:async="true"
    flowable:class="org.flowable.engine.test.jobexecutor.RetryFailingDelegate">
  <extensionElements>
    <flowable:failedJobRetryTimeCycle>R5/PT7M</flowable:failedJobRetryTimeCycle>
  </extensionElements>
</serviceTask>

The time cycle expression follows the ISO 8601 standard, just like timer event expressions. The example above makes the job executor retry the job five times, waiting 7 minutes before each retry.

8.7.3. Exclusive Jobs

In recent releases, the JobExecutor makes sure that jobs from a single process instance are never executed concurrently. Why is this?

Why exclusive Jobs?

Consider the following process definition:

bpmn.why.exclusive.jobs

We have a parallel gateway followed by three service tasks that all perform an asynchronous continuation. As a result of this, three jobs are added to the database. Once such a job is present in the database it can be processed by the JobExecutor. The JobExecutor acquires the jobs and delegates them to a thread pool of worker threads that actually process the jobs. This means that by using an asynchronous continuation, you can distribute the work to this thread pool (and in a clustered scenario even across multiple thread pools in the cluster). This is usually a good thing. However, it also has an inherent problem: consistency. Consider the parallel join after the service tasks. When execution of a service task is completed, we arrive at the parallel join and need to decide whether to wait for the other executions or whether we can move forward. That means, for each branch arriving at the parallel join, we need to take a decision whether we can continue or whether we need to wait for one or more other executions on the other branches.

Why is this a problem? As the service tasks are configured using an asynchronous continuation, it is possible that the corresponding jobs are all acquired at the same time and delegated to different worker threads by the JobExecutor. The consequence is that the transactions in which the services are executed and in which the three individual executions arrive at the parallel join can overlap. If they do so, each individual transaction will not "see" that another transaction is arriving at the same parallel join concurrently, and will therefore assume that it has to wait for the others. However, if each transaction assumes that it has to wait for the other ones, none will continue the process after the parallel join and the process instance will remain in that state forever.

How does Flowable address this problem? Flowable performs optimistic locking. Whenever we take a decision based on data that might not be current (because another transaction might modify it before we commit), we make sure to increment the version of the same database row in both transactions. This way, whichever transaction commits first wins and the other ones fail with an optimistic locking exception. This solves the problem in the case of the process discussed above: if multiple executions arrive at the parallel join concurrently, they all assume that they have to wait, increment the version of their parent execution (the process instance) and then try to commit. Whichever execution is first will be able to commit and the other ones will fail with an optimistic locking exception. As the executions are triggered by a job, Flowable will retry to perform the same job after waiting for a certain amount of time and hopefully this time pass the synchronizing gateway.

Is this a good solution? As we have seen, optimistic locking allows Flowable to prevent inconsistencies. It makes sure that we do not "stay stuck at the joining gateway", meaning: either all executions have passed the gateway, or there are jobs in the database making sure that we retry passing it. However, while this is a perfectly fine solution from the point of view of persistence and consistency, this might not always be desirable behavior at a higher level:

  • Flowable will retry the same job for a fixed maximum number of times only (3 in the default configuration). After that, the job will still be present in the database but will not be retried actively anymore. That means that an external operator would need to trigger the job manually.

  • If a job has non-transactional side effects, these will not be rolled back by the failing transaction. For instance, if the "book concert tickets" service does not share the same transaction as Flowable, we might book multiple tickets if we retry the job.

What are exclusive jobs?

An exclusive job cannot be performed at the same time as another exclusive job from the same process instance. Consider the process shown above: if we declare the service tasks to be exclusive, the JobExecutor will make sure that the corresponding jobs are not executed concurrently. Instead, it will make sure that whenever it acquires an exclusive job from a certain process instance, it acquires all other exclusive jobs from the same process instance and delegates them to the same worker thread. This ensures sequential execution of the jobs.

How can I enable this feature? In recent releases, exclusive jobs are the default configuration. All asynchronous continuations and timer events are exclusive by default. In addition, if you want a job to be non-exclusive, you can configure it as such using flowable:exclusive="false". For example, the following service task is asynchronous but non-exclusive.

<serviceTask id="service"
    flowable:expression="${myService.performBooking(hotel, dates)}"
    flowable:async="true"
    flowable:exclusive="false" />

Is this a good solution? We’ve had some people asking whether this was a good solution. Their concern was that this would prevent you from "doing things" in parallel and would consequently be a performance problem. Again, two things have to be taken into consideration:

  • It can be turned off if you’re an expert and know what you are doing (and have understood the section on "Why exclusive Jobs?"). Other than that, it’s more intuitive for most users if things such as asynchronous continuations and timers just work.

  • It’s not actually a performance issue. Performance is an issue under heavy load. Heavy load means that all worker threads of the job executor are busy all the time. With exclusive jobs, Flowable will simply distribute the load differently. Exclusive jobs means that jobs from a single process instance are performed by the same thread sequentially. But consider: you have more than one single process instance. Jobs from other process instances are delegated to other threads and executed concurrently. This means that with exclusive jobs, Flowable will not execute jobs from the same process instance concurrently, but it will still execute multiple instances concurrently. From an overall throughput perspective, this is desirable in most scenarios as it usually leads to individual instances being done more quickly. Furthermore, data that is required for executing subsequent jobs of the same process instance will already be in the cache of the executing cluster node. If the jobs do not have this node affinity, that data might need to be fetched from the database again.

8.8. Process Initiation Authorization

By default, everyone is allowed to start a new process instance of deployed process definitions. The process initiation authorization functionality allows you to define users and groups so that web clients can optionally restrict the users who can start a new process instance. NOTE that the authorization definition is NOT validated by the Flowable engine in any way. This functionality is only meant for developers to ease the implementation of authorization rules in a web client. The syntax is similar to the syntax of user assignment for a user task. A user or group can be assigned as the potential initiator of a process using the <flowable:potentialStarter> tag. Here is an example:

<process id="potentialStarter">
  <extensionElements>
    <flowable:potentialStarter>
      <resourceAssignmentExpression>
        <formalExpression>group2, group(group3), user(user3)</formalExpression>
      </resourceAssignmentExpression>
    </flowable:potentialStarter>
  </extensionElements>

  <startEvent id="theStart"/>
  ...

In the above XML excerpt, user(user3) refers directly to the user user3, and group(group3) to the group group3. When no indicator is given, the value defaults to a group type. It is also possible to use attributes of the <process> tag, namely <flowable:candidateStarterUsers> and <flowable:candidateStarterGroups>. Here is an example:

<process id="potentialStarter"
    flowable:candidateStarterUsers="user1, user2"
    flowable:candidateStarterGroups="group1">
  ...

It is possible to use both attributes simultaneously.

After the process initiation authorizations are defined, a developer can retrieve the authorization definition using the following methods. This code retrieves the list of process definitions that can be initiated by the given user:

processDefinitions = repositoryService.createProcessDefinitionQuery().startableByUser("userxxx").list();

It’s also possible to retrieve all identity links that are defined as potential initiators for a specific process definition:

identityLinks = repositoryService.getIdentityLinksForProcessDefinition("processDefinitionId");

The following example shows how to get the list of users who can initiate the given process:

List<User> authorizedUsers = identityService().createUserQuery()
    .potentialStarter("processDefinitionId")
    .list();

In exactly the same way, the list of groups that is configured as a potential starter to a given process definition can be retrieved:

List<Group> authorizedGroups = identityService().createGroupQuery()
    .potentialStarter("processDefinitionId")
    .list();

8.9. Data objects

BPMN provides the possibility to define data objects as part of a process or sub-process element. According to the BPMN specification, it’s possible to include complex XML structures that might be imported from XSD definitions. As a first step towards supporting data objects in Flowable, the following XSD types are supported:

<dataObject id="dObj1" name="StringTest" itemSubjectRef="xsd:string"/>
<dataObject id="dObj2" name="BooleanTest" itemSubjectRef="xsd:boolean"/>
<dataObject id="dObj3" name="DateTest" itemSubjectRef="xsd:datetime"/>
<dataObject id="dObj4" name="DoubleTest" itemSubjectRef="xsd:double"/>
<dataObject id="dObj5" name="IntegerTest" itemSubjectRef="xsd:int"/>
<dataObject id="dObj6" name="LongTest" itemSubjectRef="xsd:long"/>

The data object definitions will be automatically converted to process variables using the name attribute value as the name for the new variable. In addition to the definition of the data object, Flowable also provides an extension element to assign a default value to the variable. The following BPMN snippet provides an example:

<process id="dataObjectScope" name="Data Object Scope" isExecutable="true">
  <dataObject id="dObj123" name="StringTest123" itemSubjectRef="xsd:string">
    <extensionElements>
      <flowable:value>Testing123</flowable:value>
    </extensionElements>
  </dataObject>
  ...
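
As a minimal sketch of the behavior described above: after starting an instance of this process, the data object is available as a regular process variable named after its name attribute and initialized to the default value from the flowable:value extension element. runtimeService is assumed to be available as in the earlier examples.

ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("dataObjectScope");

// The data object is exposed as the process variable "StringTest123"
String value = (String) runtimeService.getVariable(processInstance.getId(), "StringTest123");
// value is "Testing123"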

9. Forms

Flowable provides a convenient and flexible way to add forms for the manual steps of your business processes. We support two strategies to work with forms: built-in form rendering with a form definition (created with the form designer) and external form rendering. For the external form rendering strategy, form properties can be used (as supported in the Explorer web application in version 5), or a form key definition that points to an external form reference, which can be resolved with custom coding.

9.1. Form definition

Full information about the form definitions and Flowable form engine can be found in the Form Engine user guide. Form definitions can be created with the Flowable Form Designer that’s part of the Flowable Modeler web application, or created by hand with a JSON editor. The Form Engine user guide describes the structure of the form definition JSON in full length. The following form field types are supported:

  • Text: rendered as a text field

  • Multiline text: rendered as a text area field

  • Number: rendered as a text field, but only allows numeric values

  • Checkbox: rendered as a checkbox field

  • Date: rendered as a date field

  • Dropdown: rendered as a select field with the option values configured in the field definition

  • Radio buttons: rendered as a radio field with the option values configured in the field definition

  • People: rendered as a select field where a person from the Identity user table can be selected

  • Group of people: rendered as a select field where a group from the Identity group table can be selected

  • Upload: rendered as an upload field

  • Expression: rendered as a label and allows you to use JUEL expressions to use variables and/or other dynamic values in the label text

The Flowable task application is able to render an HTML form from the form definition JSON. You can also use the Flowable API to get the form definition JSON yourself.

FormModel RuntimeService.getStartFormModel(String processDefinitionId, String processInstanceId)

or

FormModel TaskService.getTaskFormModel(String taskId)

The FormModel object is a Java object representation of the form definition JSON.

To start a process instance with a start form definition you can use the following API call:

ProcessInstance RuntimeService.startProcessInstanceWithForm(String processDefinitionId, String outcome,
    Map<String, Object> variables, String processInstanceName)

When a form definition is defined on (one of) the start event(s) of a process definition, this method can be used to start a process instance with the values filled in on the start form. The Flowable task application uses this method to start a process instance with a form as well. All form values need to be passed in the variables map, and an optional form outcome string and process instance name can be provided.

In a similar way, a user task can be completed with a form using the following API call:

void TaskService.completeTaskWithForm(String taskId, String formDefinitionId,
    String outcome, Map<String, Object> variables)
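
As a hedged sketch of how these two calls fit together: the field names, outcomes, and the processDefinitionId, taskId and formDefinitionId values below are assumptions and must match your actual process and form definitions; runtimeService and taskService are the usual engine services.

// Start a process instance with a start form; all form values go into the variables map
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("customerName", "John Doe");
ProcessInstance processInstance = runtimeService.startProcessInstanceWithForm(
    processDefinitionId, "completed", variables, "Order for John Doe");

// Completing a user task with a form works the same way
Map<String, Object> taskVariables = new HashMap<String, Object>();
taskVariables.put("approved", true);
taskService.completeTaskWithForm(taskId, formDefinitionId, "approved", taskVariables);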

Again, for more information about form definitions, have a look at the Form Engine user guide.

9.2. Form properties

All information relevant to a business process is either included in the process variables themselves or referenced through the process variables. Flowable supports storing complex Java objects as process variables, such as Serializable objects, JPA entities or whole XML documents as Strings.

Starting a process and completing user tasks are the points where people are involved in a process. Communicating with people requires forms to be rendered in some UI technology. To make it easy to support multiple UI technologies, the process definition can include the logic for converting the complex Java objects in the process variables to a Map<String,String> of properties.

Any UI technology can then build a form on top of those properties, using the Flowable API methods that expose the property information. The properties can provide a dedicated (and more limited) view on the process variables. The properties needed to display a form are available in the FormData return values of, for example,

StartFormData FormService.getStartFormData(String processDefinitionId)

or

TaskFormdata FormService.getTaskFormData(String taskId)

By default, the built-in form engine sees the properties as well as the process variables, so there is no need to declare task form properties if they match one-to-one with the process variables. For example, with the following declaration:

<startEvent id="start" />

All process variables are available when execution arrives in the startEvent, but

formService.getStartFormData(String processDefinitionId).getFormProperties()

will be empty since no specific mapping was defined.

In the above case, all the submitted properties will be stored as process variables. This means that by simply adding a new input field in the form, a new variable can be stored.
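
A minimal sketch of this default behavior, using the FormService.submitStartFormData method described later in this chapter; formService and processDefinitionId are assumed to be available, and the property id is hypothetical. Without any formProperty declarations, every submitted property simply becomes a process variable:

Map<String, String> properties = new HashMap<String, String>();
properties.put("vacationMotivation", "Really tired");   // hypothetical form field
ProcessInstance processInstance = formService.submitStartFormData(processDefinitionId, properties);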

Properties are derived from process variables, but they don’t have to be stored as process variables. For example, a process variable could be a JPA entity of class Address, and a form property StreetName used by the UI technology could be linked with the expression #{address.street}.

Similarly, the properties that a user is supposed to submit in a form can be stored as a process variable or as a nested property in one of the process variables, using a UEL value expression such as #{address.street}.

By default, submitted properties are stored as process variables unless a formProperty declaration specifies otherwise.

Type conversions can also be applied as part of the processing between form properties and process variables.

For example:

<userTask id="task">
  <extensionElements>
    <flowable:formProperty id="room" />
    <flowable:formProperty id="duration" type="long"/>
    <flowable:formProperty id="speaker" variable="SpeakerName" writable="false" />
    <flowable:formProperty id="street" expression="#{address.street}" required="true" />
  </extensionElements>
</userTask>

  • Form property room will be mapped to process variable room as a String

  • Form property duration will be mapped to process variable duration as a java.lang.Long

  • Form property speaker will be mapped to process variable SpeakerName. It will only be available in the TaskFormData object. If the property speaker is submitted, a FlowableException will be thrown. Similarly, with the attribute readable="false", a property can be excluded from the FormData, but still be processed on submit.

  • Form property street will be mapped to Java bean property street in process variable address as a String. And required="true" will throw an exception during the submit if the property is not provided.

It’s also possible to provide type metadata as part of the FormData that is returned from methods StartFormData FormService.getStartFormData(String processDefinitionId) and TaskFormdata FormService.getTaskFormData(String taskId)

We support the following form property types:

  • string (org.flowable.engine.impl.form.StringFormType)

  • long (org.flowable.engine.impl.form.LongFormType)

  • enum (org.flowable.engine.impl.form.EnumFormType)

  • date (org.flowable.engine.impl.form.DateFormType)

  • boolean (org.flowable.engine.impl.form.BooleanFormType)

For each form property declared, the following FormProperty information will be made available through List<FormProperty> formService.getStartFormData(String processDefinitionId).getFormProperties() and List<FormProperty> formService.getTaskFormData(String taskId).getFormProperties()

public interface FormProperty {

  /** the key used to submit the property in {@link FormService#submitStartFormData(String, java.util.Map)}
   * or {@link FormService#submitTaskFormData(String, java.util.Map)} */
  String getId();

  /** the display label */
  String getName();

  /** one of the types defined in this interface like e.g. {@link #TYPE_STRING} */
  FormType getType();

  /** optional value that should be used to display in this property */
  String getValue();

  /** is this property read to be displayed in the form and made accessible with the methods
   * {@link FormService#getStartFormData(String)} and {@link FormService#getTaskFormData(String)}. */
  boolean isReadable();

  /** is this property expected when a user submits the form? */
  boolean isWritable();

  /** is this property a required input field */
  boolean isRequired();
}

For example:

<startEvent id="start">
  <extensionElements>
    <flowable:formProperty id="speaker"
        name="Speaker"
        variable="SpeakerName"
        type="string" />

    <flowable:formProperty id="start"
        type="date"
        datePattern="dd-MMM-yyyy" />

    <flowable:formProperty id="direction" type="enum">
      <flowable:value id="left" name="Go Left" />
      <flowable:value id="right" name="Go Right" />
      <flowable:value id="up" name="Go Up" />
      <flowable:value id="down" name="Go Down" />
    </flowable:formProperty>
  </extensionElements>
</startEvent>

All that information is accessible through the API. The type names can be obtained with formProperty.getType().getName(). And even the date pattern is available with formProperty.getType().getInformation("datePattern") and the enumeration values are accessible with formProperty.getType().getInformation("values")
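
A short sketch of reading this metadata through the API calls mentioned above; formService and processDefinitionId are assumed to be available as in the other examples:

StartFormData startFormData = formService.getStartFormData(processDefinitionId);
for (FormProperty formProperty : startFormData.getFormProperties()) {
  String typeName = formProperty.getType().getName();       // e.g. "string", "date" or "enum"
  if ("date".equals(typeName)) {
    Object datePattern = formProperty.getType().getInformation("datePattern");
  } else if ("enum".equals(typeName)) {
    Object enumValues = formProperty.getType().getInformation("values");
  }
}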

The following XML snippet

<startEvent>
  <extensionElements>
    <flowable:formProperty id="numberOfDays" name="Number of days" value="${numberOfDays}" type="long" required="true"/>
    <flowable:formProperty id="startDate" name="First day of holiday (dd-MM-yyy)" value="${startDate}" datePattern="dd-MM-yyyy hh:mm" type="date" required="true" />
    <flowable:formProperty id="vacationMotivation" name="Motivation" value="${vacationMotivation}" type="string" />
  </extensionElements>
</startEvent>

could be used to render a process start form in a custom app.

9.3. External form rendering

The API also allows you to perform your own task form rendering outside of the Flowable Engine. These steps explain the hooks that you can use to render your task forms yourself.

Essentially, all the data that’s needed to render a form is assembled in one of these two service methods: StartFormData FormService.getStartFormData(String processDefinitionId) and TaskFormdata FormService.getTaskFormData(String taskId).

Submitting form properties can be done with ProcessInstance FormService.submitStartFormData(String processDefinitionId, Map<String,String> properties) and void FormService.submitTaskFormData(String taskId, Map<String,String> properties)

To learn about how form properties map to process variables, see Form properties

You can place any form template resource inside the business archives that you deploy (in case you want to store them versioned with the process). It will be available as a resource in the deployment, which you can retrieve using String ProcessDefinition.getDeploymentId() and InputStream RepositoryService.getResourceAsStream(String deploymentId, String resourceName). This could be your template definition file, which you can use to render/show the form in your own application.
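
A minimal sketch of fetching such a deployed form template: the process definition key and the resource name are assumptions for this example (the resource name reuses the illustrative formKey value discussed below), and repositoryService is the usual engine service.

ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
    .processDefinitionKey("myProcess")   // hypothetical process definition key
    .latestVersion()
    .singleResult();

InputStream formTemplate = repositoryService.getResourceAsStream(
    processDefinition.getDeploymentId(), "org/flowable/example/form/my-custom-form.xml");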

You can use this capability of accessing the deployment resources beyond task forms for any other purposes as well.

The value of the attribute in <userTask flowable:formKey="…"> is exposed by the API through String FormService.getStartFormData(String processDefinitionId).getFormKey() and String FormService.getTaskFormData(String taskId).getFormKey(). You could use this to store the full name of the template within your deployment (e.g. org/flowable/example/form/my-custom-form.xml), but this is not required at all. For instance, you could also store a generic key in the form attribute and apply an algorithm or transformation to get to the actual template that needs to be used. This might be handy when you want to render different forms for different UI technologies, for example, one form for usage in a web app of normal screen size, one form for mobile phones' small screens, and maybe even a template for an IM form or an email form.

10. JPA

You can use JPA-Entities as process variables, allowing you to:

  • Update existing JPA entities based on process variables, which can be filled in on a form in a userTask or generated in a serviceTask.

  • Reuse an existing domain model without having to write explicit services to fetch the entities and update the values.

  • Make decisions (gateways) based on properties of existing entities.

  • …​

10.1. Requirements

Only entities that comply with the following are supported:

  • Entities should be configured using JPA annotations; we support both field and property access. Mapped superclasses can also be used.

  • The entity should have a primary key annotated with @Id; compound primary keys are not supported (@EmbeddedId and @IdClass). The Id field/property can be of any type supported in the JPA spec: primitive types and their wrappers (excluding boolean), String, BigInteger, BigDecimal, java.util.Date and java.sql.Date.

10.2. Configuration

To be able to use JPA-entities, the engine must have a reference to an EntityManagerFactory. This can be done by configuring a reference or by supplying a persistence-unit name. JPA-entities used as variables will be detected automatically and will be handled accordingly.

The example configuration below uses the jpaPersistenceUnitName:

<bean id="processEngineConfiguration"
    class="org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">

  <!-- Database configurations -->
  <property name="databaseSchemaUpdate" value="true" />
  <property name="jdbcUrl" value="jdbc:h2:mem:JpaVariableTest;DB_CLOSE_DELAY=1000" />

  <property name="jpaPersistenceUnitName" value="flowable-jpa-pu" />
  <property name="jpaHandleTransaction" value="true" />
  <property name="jpaCloseEntityManager" value="true" />

  <!-- job executor configurations -->
  <property name="jobExecutorActivate" value="false" />

  <!-- mail server configurations -->
  <property name="mailServerPort" value="5025" />
</bean>

The next example configuration provides an EntityManagerFactory that we define ourselves (in this case, an OpenJPA entity manager). Note that the snippet only contains the beans that are relevant for the example; the others are omitted. A full working example with the OpenJPA entity manager can be found in the flowable-spring-examples (/flowable-spring/src/test/java/org/flowable/spring/test/jpa/JPASpringTest.java).

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="persistenceUnitManager" ref="pum"/>
  <property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.OpenJpaVendorAdapter">
      <property name="databasePlatform" value="org.apache.openjpa.jdbc.sql.H2Dictionary" />
    </bean>
  </property>
</bean>

<bean id="processEngineConfiguration" class="org.flowable.spring.SpringProcessEngineConfiguration">
  <property name="dataSource" ref="dataSource" />
  <property name="transactionManager" ref="transactionManager" />
  <property name="databaseSchemaUpdate" value="true" />
  <property name="jpaEntityManagerFactory" ref="entityManagerFactory" />
  <property name="jpaHandleTransaction" value="true" />
  <property name="jpaCloseEntityManager" value="true" />
  <property name="jobExecutorActivate" value="false" />
</bean>

The same configuration can also be done when building an engine programmatically. For example:

ProcessEngine processEngine = ProcessEngineConfiguration
    .createProcessEngineConfigurationFromResourceDefault()
    .setJpaPersistenceUnitName("flowable-pu")
    .buildProcessEngine();

Configuration properties:

  • jpaPersistenceUnitName: The name of the persistence-unit to use. (Make sure the persistence-unit is available on the classpath. According to the spec, the default location is /META-INF/persistence.xml). Use either jpaEntityManagerFactory or jpaPersistenceUnitName.

  • jpaEntityManagerFactory: A reference to a bean implementing javax.persistence.EntityManagerFactory that will be used to load the entities and flush the updates. Use either jpaEntityManagerFactory or jpaPersistenceUnitName.

  • jpaHandleTransaction: Flag indicating that the engine should begin and commit/rollback the transaction on the used EntityManager instances. Set to false when Java Transaction API (JTA) is used.

  • jpaCloseEntityManager: Flag indicating that the engine should close the EntityManager instance that was obtained from the EntityManagerFactory. Set to false when the EntityManager is container-managed (e.g. when using an Extended Persistence Context that isn’t scoped to a single transaction).

10.3. Usage

10.3.1. Simple Example

Examples for using JPA variables can be found in JPAVariableTest in the Flowable source code. We’ll explain JPAVariableTest.testUpdateJPAEntityValues step by step.

First of all, we create an EntityManagerFactory for our persistence-unit, which is based on META-INF/persistence.xml. This contains classes which should be included in the persistence unit and some vendor-specific configuration.

We are using a simple entity in the test, having an id and a String value property, which is also persisted. Before running the test, we create an entity and save it.

@Entity(name = "JPA_ENTITY_FIELD")
public class FieldAccessJPAEntity {

  @Id
  @Column(name = "ID_")
  private Long id;

  private String value;

  public FieldAccessJPAEntity() {
    // Empty constructor needed for JPA
  }

  public Long getId() {
    return id;
  }

  public void setId(Long id) {
    this.id = id;
  }

  public String getValue() {
    return value;
  }

  public void setValue(String value) {
    this.value = value;
  }
}

We start a new process instance, adding the entity as a variable. As with other variables, they are stored in the persistent storage of the engine. When the variable is requested the next time, it will be loaded from the EntityManager based on the class and Id stored.

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("entityToUpdate", entityToUpdate);

ProcessInstance processInstance = runtimeService.startProcessInstanceByKey(
    "UpdateJPAValuesProcess", variables);

The first node in our process definition contains a serviceTask that will invoke the method setValue on entityToUpdate, which resolves to the JPA variable we set earlier when starting the process instance and will be loaded from the EntityManager associated with the current engine’s context.

<serviceTask id='theTask' name='updateJPAEntityTask'
    flowable:expression="${entityToUpdate.setValue('updatedValue')}" />

When the service-task is finished, the process instance waits in a userTask defined in the process definition, which allows us to inspect the process instance. At this point, the EntityManager has been flushed and the changes to the entity have been pushed to the database. When we get the value of the variable entityToUpdate, it’s loaded again and we get the entity with its value property set to updatedValue.

// Servicetask in process 'UpdateJPAValuesProcess' should have set value on entityToUpdate.
Object updatedEntity = runtimeService.getVariable(processInstance.getId(), "entityToUpdate");
assertTrue(updatedEntity instanceof FieldAccessJPAEntity);
assertEquals("updatedValue", ((FieldAccessJPAEntity)updatedEntity).getValue());

10.3.2. Query JPA process variables

You can query for ProcessInstances and Executions that have a certain JPA entity as a variable value. Note that only variableValueEquals(name, entity) is supported for JPA entities on ProcessInstanceQuery and ExecutionQuery. The methods variableValueNotEquals, variableValueGreaterThan, variableValueGreaterThanOrEqual, variableValueLessThan and variableValueLessThanOrEqual are unsupported and will throw a FlowableException when a JPA entity is passed as the value.

ProcessInstance result = runtimeService.createProcessInstanceQuery()
    .variableValueEquals("entityToQuery", entityToQuery).singleResult();

10.3.3. Advanced example using Spring beans and JPA

A more advanced example, JPASpringTest, can be found in flowable-spring-examples. It describes the following simple use case:

  • An existing Spring bean that uses JPA entities allows Loan Requests to be stored.

  • Using Flowable, we can take the existing entities, obtained through the existing bean, and use them as variables in our process. The process is defined in the following steps:

    • A service task that creates a new LoanRequest through the existing LoanRequestBean, using variables received when starting the process (for example, from a start form). The created entity is stored as a variable using flowable:resultVariable, which stores the expression result as a variable.

    • A user task that allows a manager to review the request and approve or disapprove it; the decision is stored as a boolean variable approvedByManager.

    • A service task that updates the loan request entity so that the entity is in sync with the process.

    • Depending on the value of the entity property approved, an exclusive gateway decides which path to take next: when the request is approved, the process ends; otherwise, an extra task (Send rejection letter) becomes available, so the customer can be notified manually with a rejection letter.

Please note that the process doesn’t contain any forms, since it is only used in a unit test.

jpa.spring.example.process
<?xml version="1.0" encoding="UTF-8"?>
<definitions id="taskAssigneeExample"
  xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:flowable="http://flowable.org/bpmn"
  targetNamespace="org.flowable.examples">

  <process id="LoanRequestProcess" name="Process creating and handling loan request">

    <startEvent id='theStart' />
    <sequenceFlow id='flow1' sourceRef='theStart' targetRef='createLoanRequest' />

    <serviceTask id='createLoanRequest' name='Create loan request'
      flowable:expression="${loanRequestBean.newLoanRequest(customerName, amount)}"
      flowable:resultVariable="loanRequest"/>
    <sequenceFlow id='flow2' sourceRef='createLoanRequest' targetRef='approveTask' />

    <userTask id="approveTask" name="Approve request" />
    <sequenceFlow id='flow3' sourceRef='approveTask' targetRef='approveOrDissaprove' />

    <serviceTask id='approveOrDissaprove' name='Store decision'
      flowable:expression="${loanRequest.setApproved(approvedByManager)}" />
    <sequenceFlow id='flow4' sourceRef='approveOrDissaprove' targetRef='exclusiveGw' />

    <exclusiveGateway id="exclusiveGw" name="Exclusive Gateway approval" />
    <sequenceFlow id="endFlow1" sourceRef="exclusiveGw" targetRef="theEnd">
      <conditionExpression xsi:type="tFormalExpression">${loanRequest.approved}</conditionExpression>
    </sequenceFlow>
    <sequenceFlow id="endFlow2" sourceRef="exclusiveGw" targetRef="sendRejectionLetter">
      <conditionExpression xsi:type="tFormalExpression">${!loanRequest.approved}</conditionExpression>
    </sequenceFlow>

    <userTask id="sendRejectionLetter" name="Send rejection letter" />
    <sequenceFlow id='flow5' sourceRef='sendRejectionLetter' targetRef='theOtherEnd' />

    <endEvent id='theEnd' />
    <endEvent id='theOtherEnd' />

  </process>
</definitions>

Although the example above is quite simple, it shows the power of using JPA combined with Spring and parameterized method expressions. The process requires no custom Java code at all (except for the Spring bean, of course), which speeds up development drastically.

11. History

History is the component that captures what happened during process execution and stores it permanently. In contrast to the runtime data, the history data remains in the database after process instances have completed.

There are 6 history entities:

  • HistoricProcessInstances containing information about current and past process instances.

  • HistoricVariableInstances containing the latest value of a process variable or task variable.

  • HistoricActivityInstances containing information about a single execution of an activity (node in the process).

  • HistoricTaskInstances containing information about current and past (completed and deleted) task instances.

  • HistoricIdentityLinks containing information about current and past identity links on tasks and process instances.

  • HistoricDetails containing various kinds of information related to either a historic process instance, an activity instance or a task instance.

Since the DB contains historic entities for past as well as ongoing instances, you might want to consider querying these tables in order to minimize access to the runtime process instance data, and that way keep the runtime execution performant.

11.1. Querying history

In the API, it’s possible to query all 6 of the History entities. The HistoryService exposes the methods createHistoricProcessInstanceQuery(), createHistoricVariableInstanceQuery(), createHistoricActivityInstanceQuery(), getHistoricIdentityLinksForTask(), getHistoricIdentityLinksForProcessInstance(), createHistoricDetailQuery() and createHistoricTaskInstanceQuery().

Below are a couple of examples that show some of the possibilities of the query API for history. Full description of the possibilities can be found in the javadocs, in the org.flowable.engine.history package.

11.1.1. HistoricProcessInstanceQuery

Get 10 HistoricProcessInstances that are finished and which took the most time to complete (the longest duration) of all finished processes with definition XXX.

historyService.createHistoricProcessInstanceQuery()
    .finished()
    .processDefinitionId("XXX")
    .orderByProcessInstanceDuration().desc()
    .listPage(0, 10);

11.1.2. HistoricVariableInstanceQuery

Get all HistoricVariableInstances from a finished process instance with id XXX, ordered by variable name.

historyService.createHistoricVariableInstanceQuery()
    .processInstanceId("XXX")
    .orderByVariableName().desc()
    .list();

11.1.3. HistoricActivityInstanceQuery

Get the last HistoricActivityInstance of type serviceTask that has been finished in any process that uses the processDefinition with id XXX.

historyService.createHistoricActivityInstanceQuery()
    .activityType("serviceTask")
    .processDefinitionId("XXX")
    .finished()
    .orderByHistoricActivityInstanceEndTime().desc()
    .listPage(0, 1);

11.1.4. HistoricDetailQuery

The next example gets all variable updates that have been done in the process with id 123. Only HistoricVariableUpdates will be returned by this query. Note that a certain variable name can have multiple HistoricVariableUpdate entries, one for each time the variable was updated in the process. You can use orderByTime (the time the variable update was done) or orderByVariableRevision (the revision of the runtime variable at the time of updating) to find out in what order they occurred.

historyService.createHistoricDetailQuery()
    .variableUpdates()
    .processInstanceId("123")
    .orderByVariableName().asc()
    .list();

This example gets all form properties that were submitted in any task or when starting the process with id "123". Only HistoricFormProperties will be returned by this query.

historyService.createHistoricDetailQuery()
    .formProperties()
    .processInstanceId("123")
    .orderByVariableName().asc()
    .list();

The last example gets all variable updates that were performed on the task with id "123". This returns all HistoricVariableUpdates for variables that were set on the task (task local variables), and NOT on the process instance.

historyService.createHistoricDetailQuery()
    .variableUpdates()
    .taskId("123")
    .orderByVariableName().asc()
    .list();

Task local variables can be set using the TaskService, or on a DelegateTask inside a TaskListener:

taskService.setVariableLocal("123", "myVariable", "Variable value");

public void notify(DelegateTask delegateTask) {
  delegateTask.setVariableLocal("myVariable", "Variable value");
}

11.1.5. HistoricTaskInstanceQuery

Get 10 HistoricTaskInstances that are finished and which took the most time to complete (the longest duration) of all tasks.

historyService.createHistoricTaskInstanceQuery()
    .finished()
    .orderByHistoricTaskInstanceDuration().desc()
    .listPage(0, 10);

Get HistoricTaskInstances that are deleted with a delete reason that contains "invalid", which were last assigned to user kermit.

historyService.createHistoricTaskInstanceQuery()
    .finished()
    .taskDeleteReasonLike("%invalid%")
    .taskAssignee("kermit")
    .listPage(0, 10);

11.2. History configuration

The history level can be configured programmatically, using the enum org.flowable.engine.impl.history.HistoryLevel (or HISTORY constants defined on ProcessEngineConfiguration for versions prior to 5.11):

ProcessEngine processEngine = ProcessEngineConfiguration
    .createProcessEngineConfigurationFromResourceDefault()
    .setHistory(HistoryLevel.AUDIT.getKey())
    .buildProcessEngine();

The level can also be configured in flowable.cfg.xml or in a Spring context:

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
  <property name="history" value="audit" />
  ...
</bean>

The following history levels can be configured:

  • none: skips all history archiving. This is the most performant for runtime process execution, but no historical information will be available.

  • activity: archives all process instances and activity instances. At the end of the process instance, the latest values of the top level process instance variables will be copied to historic variable instances. No details will be archived.

  • audit: This is the default. It archives all process instances and activity instances, keeps variable values continuously in sync and stores all form properties that are submitted, so that all user interaction through forms is traceable and can be audited.

  • full: This is the highest level of history archiving and hence the slowest. This level stores all the information of the audit level plus all other possible details, mostly process variable updates.

In older releases, the history level was stored in the database (table ACT_GE_PROPERTY, property with name historyLevel). Starting from 5.11, this value is no longer used and is ignored/deleted from the database. The history level can now be changed between two boots of the engine, without an exception being thrown if the level differs from the previous engine boot.

11.3. Async History configuration

[Experimental] Async History has been introduced with Flowable 6.1.0 and allows historic data to be persisted asynchronously using a history job executor.

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
  <property name="asyncHistoryEnabled" value="true" />
  <property name="asyncHistoryExecutorNumberOfRetries" value="10" />
  <property name="asyncHistoryExecutorActivate" value="true" />
  ...
</bean>

With the asyncHistoryExecutorActivate property, the history job executor can be started automatically when booting the Process Engine. This would only be set to false for test cases (or if Async History is not enabled, of course). The asyncHistoryExecutorNumberOfRetries property configures the number of retries for an Async History job. This property is a bit different from the one for a normal async job, because a history job may need more cycles before it can be handled successfully. For example, a historic task first has to be created in the ACT_HI_TASK_ table before the assignee can be updated by another history job. The default value for this property is set to 10 in the Process Engine configuration. When the number of retries has been reached, the history job will be ignored (and not written to a deadletter job table).

In addition to these properties, the asyncHistoryExecutor property can be used to configure an AsyncExecutor in a similar way that you can do for the normal async job executor.

When the history data is not to be persisted in the default history tables, but for example, is required in a NoSQL database (such as Elasticsearch, MongoDb, Cassandra and so on), or something completely different is to be done with it, the handler that is responsible for handling the job can be overridden:

  • Using the historyJobHandlers property, which is a map of all the custom history job handlers

  • Or, by configuring the customHistoryJobHandlers list: all instances in this list will be added to the historyJobHandlers map at boot time.

Alternatively, it is also possible to use a Message Queue and configure the engine in such a way that a message is sent when a new history job is available. This way, the historical data can be processed on different servers from where the engines run. It's also possible to configure the engine and the Message Queue using JTA (when using JMS) and not store the historical data in a job at all, but send all data to a Message Queue that participates in a global transaction.

See the Flowable Async History Examples for various examples on how to configure the Async History, including the default way, using a JMS queue, using JTA or using a Message Queue and a Spring Boot application that acts as a message listener.

11.4. History for audit purposes

When the history level is configured to at least audit, all properties submitted through the methods FormService.submitStartFormData(String processDefinitionId, Map<String, String> properties) and FormService.submitTaskFormData(String taskId, Map<String, String> properties) are recorded.

Form properties can be retrieved with the query API like this:

historyService
    .createHistoricDetailQuery()
    .formProperties()
    ...
    .list();

In that case only historic details of type HistoricFormProperty are returned.

If you’ve set the authenticated user with IdentityService.setAuthenticatedUserId(String) before calling the submit methods, then the authenticated user who submitted the form will also be accessible in the history: through HistoricProcessInstance.getStartUserId() for start forms and HistoricActivityInstance.getAssignee() for task forms.
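
For example, a minimal sketch (the process definition id, form property names and assertions are illustrative only):

// Record who submits the form, so the user ends up in the historic data
identityService.setAuthenticatedUserId("kermit");

Map<String, String> formProperties = new HashMap<String, String>();
formProperties.put("firstName", "Kermit");
ProcessInstance processInstance = formService.submitStartFormData(processDefinitionId, formProperties);

// The submitting user can later be retrieved from the history
HistoricProcessInstance historicProcessInstance = historyService.createHistoricProcessInstanceQuery()
    .processInstanceId(processInstance.getId())
    .singleResult();
assertEquals("kermit", historicProcessInstance.getStartUserId());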

12. Identity management

Starting from Flowable V6, the identity management (IDM) component has been extracted from the flowable-engine module and the logic moved to several separate modules: flowable-idm-api, flowable-idm-engine, flowable-idm-spring and flowable-idm-engine-configurator. The main reason for separating the IDM logic was that it’s not core to the Flowable engine and in a lot of cases when the Flowable engine is embedded in an application, the identity logic is not used or needed.

By default, the IDM engine is initialized and started when the Flowable engine is started. This results in the same identity logic being executed and available as in Flowable v5. The IDM engine manages its own database schema and the following entities:

  • User and UserEntity, the user information.

  • Group and GroupEntity, the group information.

  • MembershipEntity, the memberships of users in groups.

  • Privilege and PrivilegeEntity, a privilege definition (for example, used for controlling access to the UI apps, such as the Flowable Modeler and Flowable Task app).

  • PrivilegeMappingEntity, linking a user and/or group to a privilege.

  • Token and TokenEntity, an authentication token used by the UI apps.
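
When the default IDM engine is used, these entities are managed through the familiar IdentityService API. The snippet below is a minimal sketch (the user and group ids are made up):

// Create a user
User user = identityService.newUser("kermit");
user.setFirstName("Kermit");
user.setLastName("The Frog");
user.setPassword("kermit");
identityService.saveUser(user);

// Create a group and make the user a member of it
Group group = identityService.newGroup("management");
group.setName("Management");
group.setType("security-role");
identityService.saveGroup(group);

identityService.createMembership("kermit", "management");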



12.1. IDM engine configuration

By default the Flowable engine is started with the org.flowable.engine.impl.cfg.IdmEngineConfigurator. This configurator uses the same datasource configuration as the Flowable process engine configuration. No additional configuration is needed to use the identity component as it was configured in Flowable v5.

When no identity logic is needed in the Flowable engine, the IDM engine can be disabled in the process engine configuration:

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
  <property name="disableIdmEngine" value="true" />
  ...
</bean>

This means that no user and group queries can be used, and candidate groups in a task query cannot be retrieved for a user.

By default, the user passwords are saved in plain text in the IDM database tables. To make sure the passwords are encoded, you can define a password encoder in the process engine configuration.

<bean id="shaEncoder" class="org.springframework.security.authentication.encoding.ShaPasswordEncoder"/>

<bean id="passwordEncoder" class="org.flowable.idm.spring.authentication.SpringEncoder">
  <constructor-arg ref="shaEncoder"/>
</bean>

<bean id="processEngineConfiguration" class="org.flowable.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
  <property name="passwordEncoder" ref="passwordEncoder" />
  ...
</bean>

In this example, the ShaPasswordEncoder is used, but you could also use org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder, for example. When not using Spring, you can use org.flowable.idm.engine.impl.authentication.ApacheDigester to encode the passwords.

The default IDM engine configurator can also be overridden to initialize the IDM engine in a custom way. A good example is the LDAPConfigurator implementation, which overrides the default IDM engine to use an LDAP server instead of the default IDM database tables. The idmProcessEngineConfigurator property of the process engine configuration can be used to set a custom configurator such as the LDAPConfigurator:

<bean id="processEngineConfiguration" class="...SomeProcessEngineConfigurationClass">
  ...
  <property name="idmProcessEngineConfigurator">
    <bean class="org.flowable.ldap.LDAPConfigurator">

      <!-- Server connection params -->
      <property name="server" value="ldap://localhost" />
      <property name="port" value="33389" />
      <property name="user" value="uid=admin, ou=users, o=flowable" />
      <property name="password" value="pass" />

      <!-- Query params -->
      <property name="baseDn" value="o=flowable" />
      <property name="queryUserByUserId" value="(&amp;(objectClass=inetOrgPerson)(uid={0}))" />
      <property name="queryUserByFullNameLike" value="(&amp;(objectClass=inetOrgPerson)(|({0}=*{1}*)({2}=*{3}*)))" />
      <property name="queryGroupsForUser" value="(&amp;(objectClass=groupOfUniqueNames)(uniqueMember={0}))" />

      <!-- Attribute config -->
      <property name="userIdAttribute" value="uid" />
      <property name="userFirstNameAttribute" value="cn" />
      <property name="userLastNameAttribute" value="sn" />
      <property name="userEmailAttribute" value="mail" />

      <property name="groupIdAttribute" value="cn" />
      <property name="groupNameAttribute" value="cn" />
    </bean>
  </property>
</bean>

13. Eclipse Designer

Flowable comes with an Eclipse plugin, the Flowable Eclipse Designer, that can be used to graphically model, test and deploy BPMN 2.0 processes.

13.1. Installation

The following installation instructions are verified on Eclipse Mars and Neon.

Go to Help → Install New Software. In the panel that appears, click the Add button and fill in the following fields:

designer.add.update.site

Make sure the "Contact all updates sites.." checkbox is checked, because all the necessary plugins will then be downloaded by Eclipse.

13.2. Flowable Designer editor features

  • Create Flowable projects and diagrams.

designer.create.flowable.project
  • The Flowable Designer creates a .bpmn file when creating a new Flowable diagram. When opened with the Flowable Diagram Editor view, this provides a graphical modeling canvas and palette. The same file can, however, be opened with an XML editor, and it then shows the BPMN 2.0 XML elements of the process definition. So, the Flowable Designer works with a single file for both the graphical diagram and the BPMN 2.0 XML. Note that in old releases, the .bpmn extension was not supported as a deployment artifact for a process definition. Therefore, the "create deployment artifacts" feature of the Flowable Designer can be used to generate a BAR file containing a .bpmn20.xml file with the content of the .bpmn file. You can also do a quick file rename yourself. Also note that you can open a .bpmn20.xml file with the Flowable Diagram Editor view as well.

designer.bpmn.file
  • BPMN 2.0 XML files can be imported into the Flowable Designer and a diagram will be displayed. Just copy the BPMN 2.0 XML file to your project and open the file with the Flowable Diagram Editor view. The Flowable Designer uses the BPMN DI information of the file to create the diagram. If you have a BPMN 2.0 XML file without BPMN DI information, the Flowable BPMN autolayout module is used to create a graphical representation of the process.

designer.open.importedfile
  • For deployment, a BAR file and optionally a JAR file is created by the Flowable Designer by right-clicking on a Flowable project in the package explorer and choosing the Create deployment artifacts option at the bottom of the popup menu. For more information about the deployment functionality of the Designer look at the deployment section.

designer.create.deployment
  • Generate a unit test (right-click on a BPMN 2.0 XML file in the package explorer and select generate unit test). A unit test is generated with a Flowable configuration that runs on an embedded H2 database. You can then run the unit test to test your process definition.

designer.unittest.generate
  • The Flowable project is generated as a Maven project. To configure the dependencies, you need to run mvn eclipse:eclipse and the Maven dependencies will be configured as expected. Note that for process design, Maven dependencies are not needed; they are only needed to run unit tests.

designer.project.maven

13.3. Flowable Designer BPMN features

  • Support for start none event, start error event, timer start event, end none event, end error event, sequence flow, parallel gateway, exclusive gateway, inclusive gateway, event gateway, embedded sub process, event sub process, call activity, pool, lane, script task, user task, service task, mail task, manual task, business rule task, receive task, timer boundary event, error boundary event, signal boundary event, timer catching event, signal catching event, signal throwing event, none throwing event and four Flowable specific elements (user, script, mail tasks and start event).

designer.model.process
  • You can quickly change the type of a task by hovering over the element and choosing the new task type.

designer.model.quick.change
  • You can quickly add new elements by hovering over an element and choosing a new element type.

designer.model.quick.new
  • Java class, expression or delegate expression configuration is supported for the Java service task. In addition, field extensions can be configured.

designer.servicetask.property
  • Support for pools and lanes. Because Flowable reads different pools as different process definitions, it makes the most sense to use only one pool. If you use multiple pools, be aware that drawing sequence flows between the pools will result in problems when deploying the process in the Flowable engine. You can add as many lanes to a pool as you want.

designer.model.poolandlanes
  • You can add labels to sequence flows by filling in the name property. You can position the labels yourself and the position is saved as part of the BPMN 2.0 XML DI information.

designer.model.labels
  • Support for event sub processes.

designer.model.eventsubprocess
  • Support for expanded embedded sub processes. You can also add an embedded sub process in another embedded sub process.

designer.embeddedprocess.canvas
  • Support for timer boundary events on tasks and embedded sub processes, although the timer boundary event makes the most sense on a user task or an embedded sub process in the Flowable Designer.

designer.timerboundary.canvas
  • Support for additional Flowable extensions, such as the Mail task, the candidate configuration of User tasks and Script task configuration.

designer.mailtask.property
  • Support for the Flowable execution and task listeners. You can also add field extensions for execution listeners.

designer.listener.configuration
  • Support for conditions on sequence flows.

designer.sequence.condition

13.4. Flowable Designer deployment features

Deploying process definitions and task forms to the Flowable engine is not hard. You need a BAR file containing the process definition BPMN 2.0 XML file and, optionally, task forms and an image of the process that can be viewed in the Flowable app. The Flowable Designer makes it very easy to create a BAR file. When you’ve finished your process implementation, just right-click your Flowable project in the package explorer and choose the Create deployment artifacts option at the bottom of the popup menu.

designer.create.deployment

Then a deployment directory is created containing the BAR file and optionally a JAR file with the Java classes of your Flowable project.

designer.deployment.dir

This file can now be uploaded to the Flowable engine using the deployments tab in Flowable Admin app, and you are ready to go.

When your project contains Java classes, the deployment is a bit more work. In this case, the Create deployment artifacts step in the Flowable Designer will also generate a JAR file containing the compiled classes. This JAR file must be deployed to the flowable-XXX/WEB-INF/lib directory in your Flowable Tomcat (or other container) installation directory. This makes the classes available on the classpath of the Flowable Engine.

13.5. Extending Flowable Designer

You can extend the default functionality offered by Flowable Designer. This section documents which extensions are available and how they can be used, and provides some usage examples. Extending Flowable Designer is useful in cases where the default functionality doesn’t suit your needs, you require additional capabilities or you have domain-specific requirements when modeling business processes. Extension of Flowable Designer falls into two distinct categories: extending the palette and extending output formats. Each of these extension types requires a specific approach and different technical expertise.

Extending Flowable Designer requires technical knowledge and, more specifically, knowledge of programming in Java. Depending on the type of extension you want to create, you might also need to be familiar with Maven, Eclipse, OSGi, Eclipse extensions and SWT.

13.5.1. Customizing the palette

You can customize the palette that is offered to users when modeling processes. The palette is the collection of shapes that can be dragged onto the canvas in a process diagram and is displayed to the right-hand side of the canvas. As you can see in the default palette, the default shapes are grouped together (these are called "drawers") for Events, Gateways and so on. There are two options built-in to Flowable Designer to customize the drawers and shapes in the palette:

  • Adding your own shapes / nodes to existing or new drawers

  • Disabling any or all of the default BPMN 2.0 shapes offered by Flowable Designer, with the exception of the connection and selection tools

In order to customize the palette, you create a JAR file that needs to be added to every installation of Flowable Designer (more on how to do that later). Such a JAR file is called an extension. By writing classes that are included in your extension, Flowable Designer understands which customizations you wish to make. In order for this to work, your classes should implement certain interfaces. There is an integration library available with those interfaces and base classes to extend, which you should add to your project’s classpath.

You can find the code examples listed below in source control with Flowable Designer. Take a look in the examples/money-tasks directory in the flowable-designer repository of Flowable’s source code.

You can set up your project in whichever tool you prefer and build the JAR with the build tool of your choice. For the instructions below, a setup with Eclipse Mars or Neon is assumed, using Maven (3.x) as the build tool, but any setup should enable you to achieve the same results.

Extension setup (Eclipse/Maven)

Download and extract Eclipse (most recent versions should work) and a recent version (3.x) of Apache Maven. If you use a 2.x version of Maven, you will run into problems when building your project, so make sure your version is up to date. We assume you are familiar with using basic features and the Java editor in Eclipse. It’s up to you whether you prefer to use Eclipse’s features for Maven or run Maven commands from a command prompt.

Create a new project in Eclipse. This can be a general project type. Create a pom.xml file at the root of the project to contain the Maven project setup. Also create the src/main/java and src/main/resources folders, which are the Maven conventions for your Java source files and resources respectively. Open the pom.xml file and add the following lines:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>org.acme</groupId>
  <artifactId>money-tasks</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <name>Acme Corporation Money Tasks</name>

  ...

</project>

As you can see, this is just a basic pom.xml file that defines a groupId, artifactId and version for the project. We will create a customization that includes a single custom node for our money business.

Add the integration library to your project’s dependencies by including this dependency in your pom.xml file:

<dependencies>
  <dependency>
    <groupId>org.flowable.designer</groupId>
    <artifactId>org.flowable.designer.integration</artifactId>
    <version>5.22.0</version> <!-- Use the current Flowable Designer version -->
    <scope>compile</scope>
  </dependency>
</dependencies>
...
<repositories>
  <repository>
    <id>Flowable</id>
  </repository>
</repositories>

Finally, in the pom.xml file, add the configuration for the maven-compiler-plugin so that the Java source level is at least 1.5 (the snippet below uses 1.8); you will need this in order to use annotations. You can also include instructions for Maven to generate the JAR’s MANIFEST.MF file. This is not required, but you can use a specific property in the manifest to provide a name for your extension (this name may be shown at certain places in the designer and is primarily intended for future use if you have several extensions in the designer). If you wish to do so, include the following snippet in pom.xml:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <showDeprecation>true</showDeprecation>
        <showWarnings>true</showWarnings>
        <optimize>true</optimize>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>2.3.2</version>
      <configuration>
        <archive>
          <index>true</index>
          <manifest>
            <addClasspath>false</addClasspath>
            <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
          </manifest>
          <manifestEntries>
            <FlowableDesigner-Extension-Name>Acme Money</FlowableDesigner-Extension-Name>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>

The name for the extension is described by the FlowableDesigner-Extension-Name property. The only thing left to do now is tell Eclipse to set up the project according to the instructions in pom.xml. So open up a command shell and go to the root folder of your project in the Eclipse workspace. Then, execute the following Maven command:

mvn eclipse:eclipse

Wait until the build is successful. Refresh the project (use the project’s context menu (right-click) and select Refresh). You should now have the src/main/java and src/main/resources folders as source folders in the Eclipse project.

You can, of course, also use the m2eclipse plugin and simply enable Maven dependency management from the context menu (right-click) of the project. Then choose Maven > Update project configuration from the project’s context menu. That should set up the source folders as well.

That’s it for the setup. Now you’re ready to start creating customizations to Flowable Designer!

Applying your extension to Flowable Designer

You might be wondering how you can add your extension to Flowable Designer so your customizations are applied. These are the steps to do just that:

  • Once you’ve created your extension JAR (for instance, by performing a mvn install in your project to build it with Maven), you need to transfer the extension to the computer where Flowable Designer is installed.

  • Store the extension somewhere on the hard drive where it will be able to remain, and remember the location. Note: the location must be outside the Eclipse workspace of Flowable Designer - storing the extension inside the workspace will lead to the user getting a popup error message and the extensions being unavailable.

  • Start Flowable Designer and from the menu, select Window > Preferences or Eclipse > Preferences.

  • In the preferences screen, type user as a keyword. You should see an option to access the User Libraries in Eclipse in the Java section.

designer.preferences.userlibraries
  • Select the User Libraries item and a tree view shows up to the right where you can add libraries. You should see the default group where you can add extensions to Flowable Designer (depending on your Eclipse installation, you might see several others as well).

designer.preferences.userlibraries.flowable.empty
  • Select the Flowable Designer Extensions group and click the Add JARs…​ or Add External JARs…​ button. Navigate to the folder where your extension is stored and select the extension file you want to add. After completing this, your preferences screen should show the extension as part of the Flowable Designer Extensions group, as shown below.

designer.preferences.userlibraries.flowable.moneytasks
  • Click the OK button to save and close the preferences dialog. The Flowable Designer Extensions group is automatically added to new Flowable projects you create. You can see the user library as an entry in the project’s tree in the Navigator or Package Explorer. If you already had Flowable projects in the workspace, you should also see the new extensions show up in the group. An example is shown below.

designer.userlibraries.project

Diagrams you open will now have the shapes from the new extension in their palette (or shapes disabled, depending on the customizations in your extension). If you already had a diagram opened, close and reopen it to see the changes in the palette.

Adding shapes to the palette

With your project set up, you can now easily add shapes to the palette. Each shape you wish to add is represented by a class in your JAR. Take note that these classes are not the classes that will be used by the Flowable engine during runtime. In your extension you describe the properties that can be set in Flowable Designer for each shape. From these shapes, you can also define the runtime characteristics that should be used by the engine when a process instance reaches the node in the process. The runtime characteristics can use any of the options that Flowable supports for regular ServiceTasks. See this section for more details.

A shape’s class is a simple Java class, to which a number of annotations are added. The class should implement the CustomServiceTask interface, but you shouldn’t implement this interface yourself. Extend the AbstractCustomServiceTask base class instead (at the moment you MUST extend this class directly, so no abstract classes in between). In the Javadoc for that class you can find instructions on the defaults it provides and when you should override any of the methods it already implements. Overrides allow you to do things such as providing icons for the palette and in the shape on the canvas (these can be different) and specifying the base shape you want the node to have (activity, event, gateway).

/**
 * @author John Doe
 * @version 1
 * @since 1.0.0
 */
public class AcmeMoneyTask extends AbstractCustomServiceTask {
  ...
}

You will need to implement the getName() method to determine the name the node will have in the palette. You can also put the nodes in their own drawer and provide an icon. Override the appropriate methods from AbstractCustomServiceTask. If you want to provide an icon, make sure it’s in the src/main/resources package in your JAR and is about 16x16 pixels and in JPEG or PNG format. The path you supply is relative to that folder.

You can add properties to the shape by adding members to the class and annotating them with the @Property annotation like this:

@Property(type = PropertyType.TEXT, displayName = "Account Number")
@Help(displayHelpShort = "Provide an account number", displayHelpLong = HELP_ACCOUNT_NUMBER_LONG)
private String accountNumber;

There are several PropertyType values you can use, which are described in more detail in this section. You can make a field required by setting the required attribute to true. A message and red background will appear if the user doesn’t fill in the field.

If you want to fix the order of the various properties in your class as they appear in the property screen, you should specify the order attribute of the @Property annotation.
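
For example, a sketch of the money-task fields shown earlier, assuming an integer-valued order attribute as described above:

@Property(type = PropertyType.TEXT, displayName = "Account Number", required = true, order = 1)
@Help(displayHelpShort = "Provide an account number", displayHelpLong = HELP_ACCOUNT_NUMBER_LONG)
private String accountNumber;

@Property(type = PropertyType.MULTILINE_TEXT, displayName = "Comments", order = 2)
@Help(displayHelpShort = "Provide comments", displayHelpLong = "You can add comments to the node to provide a brief description.")
private String comments;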

As you can see, there’s also a @Help annotation that’s used to provide the user some guidance when filling in the field. You can also use the @Help annotation on the class itself - this information is shown at the top of the property sheet presented to the user.

Below is the listing that further elaborates the MoneyTask. A comments field has been added and you can see that an icon is included for the node.

/**
 * @author John Doe
 * @version 1
 * @since 1.0.0
 */
@Runtime(javaDelegateClass = "org.acme.runtime.AcmeMoneyJavaDelegation")
@Help(displayHelpShort = "Creates a new account",
    displayHelpLong = "Creates a new account using the account number specified")
public class AcmeMoneyTask extends AbstractCustomServiceTask {

  private static final String HELP_ACCOUNT_NUMBER_LONG =
      "Provide a number that is suitable as an account number.";

  @Property(type = PropertyType.TEXT, displayName = "Account Number", required = true)
  @Help(displayHelpShort = "Provide an account number", displayHelpLong = HELP_ACCOUNT_NUMBER_LONG)
  private String accountNumber;

  @Property(type = PropertyType.MULTILINE_TEXT, displayName = "Comments")
  @Help(displayHelpShort = "Provide comments",
      displayHelpLong = "You can add comments to the node to provide a brief description.")
  private String comments;

  @Override
  public String contributeToPaletteDrawer() {
    return "Acme Corporation";
  }

  @Override
  public String getName() {
    return "Money node";
  }

  @Override
  public String getSmallIconPath() {
    return "icons/coins.png";
  }
}

If you extend Flowable Designer with this shape, the palette and corresponding node will look like this:

designer.palette.add.money

The properties screen for the money task is shown below. Note the required message for the accountNumber field.

designer.palette.add.money.properties.required

Users can enter static text or use expressions that use process variables in the property fields when creating diagrams (for example, "This little piggy went to ${piggyLocation}"). Generally, this applies to text fields where users are free to enter any text. If you expect users to want to use expressions and you apply runtime behavior to your CustomServiceTask (using @Runtime), make sure to use Expression fields in the delegate class so the expressions are correctly resolved at runtime. More information on runtime behavior can be found in this section.

The help for fields is offered by the buttons to the right of each property. Clicking on the button shows a popup as displayed below.

designer.palette.add.money.help
Configuring runtime execution of Custom Service Tasks

With your fields set up and your extension applied to Designer, users can configure the properties of the service task when modelling a process. In most cases, you will want to use these user-configured properties when the process is executed by Flowable. To do this, you must tell Flowable which class to instantiate when the process reaches your CustomServiceTask.

There is a special annotation for specifying the runtime characteristics of your CustomServiceTask, the @Runtime annotation. Here’s an example of how to use it:

@Runtime(javaDelegateClass = "org.acme.runtime.AcmeMoneyJavaDelegation")

Your CustomServiceTask will result in a normal ServiceTask in the BPMN output of processes modeled with it. Flowable enables several ways to define the runtime characteristics of ServiceTasks. Therefore, the @Runtime annotation can take one of three attributes, which match directly to the options Flowable provides, like this:

  • javaDelegateClass maps to flowable:class in the BPMN output. Specify the fully qualified classname of a class that implements JavaDelegate.

  • expression maps to flowable:expression in the BPMN output. Specify an expression to a method to be executed, such as a method in a Spring Bean. You should not specify any @Property annotations on fields when using this option. For more information, see below.

  • javaDelegateExpression maps to flowable:delegateExpression in the BPMN output. Specify an expression to a class that implements JavaDelegate.

The user’s property values will be injected into the runtime class if you provide members in the class for Flowable to inject into. The names should match the names of the members in your CustomServiceTask. For more information, consult this part of the userguide. Note that from version 5.11.0 of the Designer, you can use the Expression interface for dynamic field values. This means that the value of the property in the Flowable Designer must contain an expression, and this expression will then be injected into an Expression property in the JavaDelegate implementation class.
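
As an illustration, the delegate referenced earlier (org.acme.runtime.AcmeMoneyJavaDelegation) could look roughly like the sketch below; the member names match the @Property members of AcmeMoneyTask, and the business logic is hypothetical:

// assumes the JavaDelegate, DelegateExecution and Expression types from the org.flowable.engine.delegate package
public class AcmeMoneyJavaDelegation implements JavaDelegate {

  // Injected by Flowable from the field extensions that Designer writes into the BPMN XML.
  // Using Expression allows the modeler to enter either static text or an expression such as ${accountVar}.
  private Expression accountNumber;
  private Expression comments;

  @Override
  public void execute(DelegateExecution execution) {
    Object accountNumberValue = accountNumber.getValue(execution);
    Object commentsValue = comments == null ? null : comments.getValue(execution);

    // Hypothetical business logic: store the resolved values as process variables
    execution.setVariable("processedAccountNumber", accountNumberValue);
    execution.setVariable("processedComments", commentsValue);
  }
}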

You can use @Property annotations on members of your CustomServiceTask, but this will not work if you use @Runtime's expression attribute. The reason is that Flowable will attempt to resolve the expression you specify to a method, not to a class, so no injection into a class will be performed. Any members marked with @Property will be ignored by Designer if you use expression in your @Runtime annotation. Designer will not render them as editable fields in the node’s property pane and will produce no output for the properties in the process BPMN.

Note that the runtime class shouldn’t be in your extension JAR, as it’s dependent on the Flowable libraries. Flowable needs to be able to find it at runtime, so it needs to be on the Flowable engine’s classpath.

The examples project in Designer’s source tree contains examples of the different options for configuring @Runtime. Take a look in the money-tasks project for some starting points. The examples refer to delegate class examples that are in the money-delegates project.

Property types

This section describes the property types you can use for a CustomServiceTask by setting its type to a PropertyType value.

PropertyType.TEXT

Creates a single-line text field as shown below. Can be a required field and shows validation messages as a tooltip. Validation failures are displayed by changing the background of the field to a light red color.

designer.property.text.invalid
PropertyType.MULTILINE_TEXT

Creates a multiline text field as shown below (height is fixed at 80 pixels). Can be a required field and shows validation messages as a tooltip. Validation failures are displayed by changing the background of the field to a light red color.

designer.property.multiline.text.invalid
PropertyType.PERIOD

Creates a structured editor for specifying a period of time by editing amounts of each unit with a spinner control. The result is shown below. Can be a required field (which is interpreted such that not all values can be 0, so at least one part of the period must have a non-zero value) and shows validation messages as a tooltip. Validation failures are displayed by changing the background of the entire field to a light red color. The value of the field is stored as a string of the form 1y 2mo 3w 4d 5h 6m 7s, which represents 1 year, 2 months, 3 weeks, 4 days, 5 hours, 6 minutes and 7 seconds. The entire string is always stored, even if parts are 0.

designer.property.period
PropertyType.BOOLEAN_CHOICE

Creates a single checkbox control for boolean or toggle choices. Note that you can specify the required attribute on the Property annotation, but it will not be evaluated because that would leave the user without a choice whether to check the box or not. The value stored in the diagram is java.lang.Boolean.toString(boolean), which results in "true" or "false".

designer.property.boolean.choice
PropertyType.RADIO_CHOICE

Creates a group of radio buttons as shown below. Selection of any of the radio buttons is mutually exclusive with selection of any of the others (in other words, only one selection allowed). Can be a required field and shows validation messages as a tooltip. Validation failures are displayed by changing the background of the group to a light red color.

This property type expects the class member you have annotated to also have an accompanying @PropertyItems annotation (for an example, see below). Using this additional annotation, you can specify the list of items that should be offered in an array of Strings. Specify the items by adding two array entries for each item: first, the label to be shown; second, the value to be stored.

@Property(type = PropertyType.RADIO_CHOICE, displayName = "Withdrawl limit", required = true)
@Help(displayHelpShort = "The maximum daily withdrawl amount",
    displayHelpLong = "Choose the maximum daily amount that can be withdrawn from the account.")
@PropertyItems({
    LIMIT_LOW_LABEL, LIMIT_LOW_VALUE,
    LIMIT_MEDIUM_LABEL, LIMIT_MEDIUM_VALUE,
    LIMIT_HIGH_LABEL, LIMIT_HIGH_VALUE })
private String withdrawlLimit;
designer.property.radio.choice
designer.property.radio.choice.invalid
PropertyType.COMBOBOX_CHOICE

Creates a combobox with fixed options as shown below. Can be a required field and shows validation messages as a tooltip. Validation failures are displayed by changing the background of the combobox to a light red color.

This property type expects the cl