Modeling with situation patterns

When modeling, you will sometimes realize that some situations share common characteristics. To save work for yourself and spread such knowledge within your organization, collect and document such patterns as soon as you understand their nature and have found a satisfying solution for modeling them. To get you started, we have collected some typical patterns we see quite often in our modeling practice, so you do not need to reinvent the wheel over and over again.

Escalating a situation step by step

You need something and hope that it happens. Such a hope for a result may materialize, but it does not have to! After some time, you will typically become impatient and try to do something to make it happen. But if it then still does not happen, there comes a point at which you have to accept failure.

We sometimes also call that very common pattern a multi-step escalation.

Example: "A month ago, I ordered a pair of shoes from that new online shop! After two weeks of waiting: nothing. I contacted them to determine what's up. The clerk promised me that the shoes would leave the warehouse that same day! But again, nothing, so after another week I just canceled the order. Since then, I haven't heard a word."

In this scenario, the shop clearly did not implement the escalation of the delay properly. They should have applied one of the following patterns in the order delivery process:

Option 1: Using event-based gateways

1

After ordering the goods, the process passively waits for the success case by means of an event-based gateway: the goods should be delivered. However, in case this does not happen within a reasonable time, we make a first step of escalation: remind the dealer.

2

We still stay optimistic. Therefore, the process again passively waits for the success case by means of another event-based gateway: the goods should still be delivered. However, in case this does not happen again within a reasonable time, we make a second step of escalation: cancel the deal.

Evaluation:

  • 👍 This solution explicitly shows how the two steps of this escalation are performed. Timers are modeled separately, followed by their corresponding escalation activities.

  • 👎 The usage of separate event-based gateways leads to duplication (for example, of the receiving message events) and makes the model larger, even more so in case multiple steps of escalation need to be modeled.

  • 👎 During the time we need to remind the dealer, we are strictly speaking not in a position to receive the goods! According to the BPMN specification, a process can handle a message event only if it is ready to receive at exactly the moment it occurs. Fortunately, Camunda 8 introduced message buffering, which allows this model to execute properly without losing messages. Using Camunda 7, the message might get lost until we are at the second event-based gateway.

note

You might want to use that pattern when modeling simple two-phase escalations. You should not execute it on Camunda 7.
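The first escalation step of this option could be sketched in BPMN 2.0 XML roughly as follows. All ids, the message name, and the one-week timer duration are illustrative assumptions, not taken from the original model:

```xml
<!-- Sketch of the first escalation step (illustrative ids and durations) -->
<bpmn:eventBasedGateway id="waitForDelivery" />
<bpmn:sequenceFlow id="toDelivered" sourceRef="waitForDelivery" targetRef="goodsDelivered" />
<bpmn:sequenceFlow id="toTimeout" sourceRef="waitForDelivery" targetRef="oneWeekPassed" />

<bpmn:intermediateCatchEvent id="goodsDelivered" name="Goods delivered">
  <bpmn:messageEventDefinition messageRef="msgGoodsDelivered" />
</bpmn:intermediateCatchEvent>

<bpmn:intermediateCatchEvent id="oneWeekPassed" name="1 week passed">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P7D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>

<!-- the timer path leads to "Remind dealer", followed by a second,
     structurally identical event-based gateway for the cancel step -->
```

The second escalation step repeats this structure, which is exactly the duplication the evaluation above points out.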

Option 2: Using gateways forming a loop

1

After having ordered the goods, the process passively waits for the success case by means of an event-based gateway: the goods should be delivered. However, in case this does not happen within a reasonable time...

2

We choose by means of an exclusive gateway to make a first step of escalation: remind the dealer. We still stay optimistic. Therefore, the process returns to the event-based gateway and again passively waits for the success case: the goods should still be delivered. However, in case this does not happen again within a reasonable time, we choose a second step of escalation: cancel the deal.

Evaluation:

  • 👍 This model is a more compact and more generic modeling solution to the situation. When it comes to multiple escalation steps, you will need such an approach to avoid huge diagrams.

  • 👎 The solution is less explicit. We cannot label the timer with explicit durations, as a single timer is used for both waiting periods. The solution is also less readable for a less experienced audience. For a quick understanding of the two-step escalation, this method of modeling is less suitable.

  • 👎 During the time we need to remind the dealer, we are strictly speaking not in a position to receive the goods! According to the BPMN specification, a process can handle a message event only if it is ready to receive at exactly the moment it occurs. Fortunately, Camunda 8 introduced message buffering, which allows this model to execute properly without losing messages. Using Camunda 7, the message might get lost until we have returned to the event-based gateway.

note

You might want to use that pattern when modeling escalations with multiple steps. You should not execute it on Camunda 7.
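With the loop variant, a single event-based gateway is reused; an exclusive gateway on the timeout path decides which escalation step to take, for example based on a process variable. The variable name `reminded` and the FEEL-style conditions below are assumptions for illustration:

```xml
<!-- After the timeout, decide which escalation step to take -->
<bpmn:exclusiveGateway id="alreadyReminded" name="Already reminded?" />

<bpmn:sequenceFlow id="toRemind" sourceRef="alreadyReminded" targetRef="remindDealer">
  <bpmn:conditionExpression>=not(reminded)</bpmn:conditionExpression>
</bpmn:sequenceFlow>
<bpmn:sequenceFlow id="toCancel" sourceRef="alreadyReminded" targetRef="cancelDeal">
  <bpmn:conditionExpression>=reminded</bpmn:conditionExpression>
</bpmn:sequenceFlow>

<!-- "Remind dealer" sets reminded = true and flows back
     to the single event-based gateway -->
```

Because the timer is shared by both loop passes, its duration cannot be labeled explicitly, which is the readability trade-off noted above.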

Option 3: Using boundary events

1

After having ordered the goods, the process passively waits for the success case by means of a receive task: the goods should be delivered. However, in case this does not happen within a reasonable time...

2

a non-interrupting boundary timer event triggers a first step of escalation: remind the dealer. We still stay optimistic. Therefore, we did not interrupt the receive task, but continued to wait for the success case: the goods should still be delivered.

3

However, in case this does not happen within a reasonable time, we trigger a second step of escalation by means of an interrupting boundary timer event: interrupt the waiting for delivery and cancel the deal.

Evaluation:

  • 👍 This model is even more compact and a very generic modeling solution to the situation. When it comes to multiple escalation steps, the non-interrupting boundary timer event could even trigger multiple times.

  • 👍 The model complies with BPMN execution semantics. Since we never leave the wait state, the process is always ready to receive incoming messages.

  • 👎 The solution is less readable and less intuitive for a less experienced audience, because the way the interrupting and non-interrupting timers collaborate requires a profound understanding of boundary events and their consequences for token flow semantics. For communication purposes, this method of modeling is therefore typically less suitable.

note

You might want to use that pattern when modeling escalations with two steps as well as escalations with multiple steps for executable models.
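In BPMN 2.0 XML, this option needs only one receive task with two boundary timers. Note that both durations are measured from the moment the task is entered, so the interrupting timer carries the total waiting time. Ids, the message name, and the durations are illustrative assumptions:

```xml
<bpmn:receiveTask id="waitForGoods" name="Wait for goods"
                  messageRef="msgGoodsDelivered" />

<!-- non-interrupting: remind the dealer after one week, keep waiting -->
<bpmn:boundaryEvent id="remindTimer" attachedToRef="waitForGoods"
                    cancelActivity="false">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P7D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>

<!-- interrupting: cancel the deal after two weeks in total -->
<bpmn:boundaryEvent id="cancelTimer" attachedToRef="waitForGoods"
                    cancelActivity="true">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P14D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>
```

For multiple reminder steps, the non-interrupting timer could use a time cycle (for example `R3/P7D`) instead of a single duration.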

Requiring a second set of eyes

For a certain task - typically a critical one in terms of your business - you need the opinion, review, or approval of two different people.

We sometimes also call that pattern the four eyes principle.

Example: The manager of a small bank's lending department has a problem: "Over the last quarter, we lost €100,000 in unrecoverable medium-sized loans. Controlling now tells me that this could probably have been easily avoided by more responsible decisions from our lending department staff! From now on, I want every such decision to be signed off by two people."

Modeling a process dealing with that requirement can be achieved easily, but the better solution also depends on whether you prioritize overall speed or total effort.

All of the following modeling patterns assume that the two or more tasks needed to ultimately approve the loan must not be completed by one and the same person. When executing such patterns, you must enforce this constraint in the workflow engine.

Option 1: Using separate tasks

1

A first approver looks at the loan and decides whether they approve. If they decide not to approve, we are done, but if the loan is approved...

2

...a second approver looks at the loan. If they also decide to approve, the loan is ultimately approved.

Evaluation:

  • 👍 This solution explicitly shows how the two steps of this approval are performed. Tasks are modeled separately, followed by gateways visualizing the decision making process.

  • Note that the approvers work in a strictly sequential mode, which might be exactly what we need in case we want to minimize effort and, for example, display the reasoning of the first approver to the second one. However, we might also prefer to maximize speed. If this is the case, see solution option 3 (multi-instance) further below.

  • 👎 The usage of separate tasks leads to duplication and makes the model larger, even more so in case multiple steps of approvals need to be modeled.

You might want to use that pattern when modeling the need for a second set of eyes in sequential order, thereby minimizing the effort needed by the participating approvers.

While it is theoretically possible to model separate, explicit approval tasks in parallel, we do not recommend such patterns due to readability concerns.

As a better alternative when looking for maximization of speed, see option 3 (multi-instance) below.

Option 2: Using a loop

1

A first approver looks at the loan and decides if they approve. If they decide not to approve, we are done, but...

2

...if the loan is approved, we turn to a second approver to look at the loan. If they also decide to approve, the loan is ultimately approved.

Evaluation:

  • 👍 This model is a more compact modeling solution to the situation. When multiple sets of eyes are needed, you will probably prefer such an approach to avoid huge diagrams.

  • Note that the approvers work in a strictly sequential mode, which might be exactly what we need if we want to minimize effort and, for example, display the reasoning of the first approver to the second one. However, we might also prefer to maximize speed. If this is the case, see option 3 (multi-instance) below.

  • 👎 The solution is less explicit. We cannot label the tasks with explicit references to a first and a second approval step, as a single task is used for both approvals. The solution is also less readable for a less experienced audience. For a quick understanding of the two steps needed for ultimate approval, this method of modeling is less suitable.

You might want to use that pattern when modeling the need for multiple sets of eyes in sequential order, thereby minimizing the effort needed by the participating approvers.

Option 3: Using a multi-instance task

1

All the necessary approvers are immediately asked to look at the loan and decide by means of a multi-instance task. Each task is completed with a positive approval. Once all necessary approvers have approved, the loan is ultimately approved.

2

If the loan is not approved by one of the approvers, a boundary message event is triggered, interrupting the multi-instance task and therefore removing the tasks of all approvers who have not yet decided. The loan is then not approved.

Evaluation:

  • 👍 This model is a very compact modeling solution to the situation. It can also easily deal with multiple sets of eyes needed.

  • Note that the approvers work in a parallel mode, which might be exactly what we need in case we want to maximize speed and want the approvers to do their work independently of and uninfluenced by each other. However, we might also prefer to minimize effort. If this is the case, see option 1 (separate tasks) or option 2 (loop) above.

  • 👎 The solution is much less explicit and less readable for a less experienced audience, because the way the boundary event interacts with a multi-instance task requires a profound understanding of BPMN. For communication purposes, this method of modeling is therefore typically less suitable.

You might want to use that pattern when modeling the need for two or more sets of eyes in parallel, thereby maximizing speed for the overall approval process.
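The multi-instance variant can be sketched in BPMN 2.0 XML as follows; the ids and the message name are assumptions, and in Camunda 8 the approver collection would additionally be wired via the `zeebe:loopCharacteristics` extension:

```xml
<bpmn:userTask id="approveLoan" name="Approve loan">
  <!-- one parallel task instance per approver -->
  <bpmn:multiInstanceLoopCharacteristics isSequential="false" />
</bpmn:userTask>

<!-- a rejection interrupts the multi-instance task and removes
     all remaining approver tasks -->
<bpmn:boundaryEvent id="loanRejected" attachedToRef="approveLoan"
                    cancelActivity="true">
  <bpmn:messageEventDefinition messageRef="msgLoanRejected" />
</bpmn:boundaryEvent>
```

The interrupting boundary event is what makes the "first rejection wins" behavior explicit in the model.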

Measuring key performance indicators (KPIs)

You want to measure specific aspects of your process execution performance along some indicators.

Example: A software developer involved in introducing Camunda gets curious about the business: "How many applications do we accept or decline per month, and how many do we need to review manually? How many are later accepted and declined? How much time do we spend for those manual work cases, and how long does the customer have to wait for an answer? I mean...do we focus on the meaningful cases...?"

When modeling a process, we should always implicitly include information about important key performance indicators (KPIs); for example, by naming start and end events after the process state reached from a business perspective. Additionally, we might explicitly add business milestones or phases.

While the following section concentrates on the modeling aspects of KPIs, you might also want to learn how to use them for reporting on processes from a more technical perspective; for example, when faced with the task of actually retrieving and presenting the historical data Camunda collects during execution.

Option 1: Showing milestones

1

First, we assess the application risk based on a set of automatically evaluable rules.

2

We can then determine whether the automated rules already came to a (positive or negative) conclusion or not. If the rules led to an unsure result, a human must assess the application risk.

3

We use explicit intermediate events to make perfectly clear that we are interested in the applications which never see a human...

4

...and be able to compare that to the applications which needed to be assessed manually, because the automatic assessment failed to determine a clear result.

5

We also use end events, which are meaningful from a business perspective. We must know whether an application was either accepted...

6

...or rejected.

By means of that process model, we can now let Camunda count the applications which were accepted and declined. We know how many and which instances we needed to review manually, and can therefore also narrow down our accepted/declined statistics to those manual cases.

Furthermore, we will be able to measure the handling time needed for the user task; for example, by measuring the time from claiming the task to completing it. The time the customer has to wait corresponds to the cycle time from start to end event; these statistics could, for example, be limited to the manually assessed applications and will then also include any idle periods in the process.

By comparing the economic value of manually assessed applications to the effort (handling time) we invest in them, we will also be able to learn whether we focus our manual work on the meaningful cases, and eventually improve the automatically evaluated assessment rules.
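The milestones in this option are plain intermediate throw events without an event definition, whose occurrence the engine records in its history. The names follow the scenario; the ids are assumptions:

```xml
<bpmn:intermediateThrowEvent id="assessedAutomatically"
                             name="Application assessed automatically" />
<bpmn:intermediateThrowEvent id="assessedManually"
                             name="Application assessed manually" />

<!-- business-meaningful end events for the accepted/declined statistics -->
<bpmn:endEvent id="applicationAccepted" name="Application accepted" />
<bpmn:endEvent id="applicationDeclined" name="Application declined" />
```

Querying the historical occurrences of these elements then yields the counts and cycle times discussed above.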

Option 2: Emphasizing process phases

As an alternative or supplement to using events, you might also use subprocesses to emphasize certain phases in your process.

1

By introducing a separate embedded subprocess, we emphasize the phase of manual application assessment, which is the critical one from an economic perspective.

Note that this makes even more sense if multiple tasks are contained within one phase.

Evaluating decisions in processes

You need to come to a decision relevant for your next process steps. Your actual decision depends on a number of different factors and rules.

We sometimes also call that pattern business rules in BPMN.

Example: The freshly hired business analyst is always as busy as a bee: "Let's see... Category A customers always get their credit card applications approved, whereas Category D gets rejected by default. For B and C it's more complicated. Right, in between 2500 and 5000 Euros, we want a B customer, below 2500 a C customer is OK, too. Mmh. Should be no problem with a couple of gateways!"

Showing decision logic in the diagram?

When modeling business processes, we focus on the flow of work and use gateways only to show that the subsequent tasks or results fundamentally differ from each other. However, in the example above, the business analyst used gateways to model the logic underlying a decision, which is clearly an anti-pattern!

It does not make sense to model the rules determining a decision inside the BPMN model. The decision tree of rules grows exponentially with every additional criterion. Furthermore, we will typically want to change such rules much more often than the process itself (in the sense of the tasks that need to be carried out).

Using a single task for a decision

1

Instead of modeling the rules determining a decision inside the BPMN model, we just show a single task representing the decision. Of course, when preparing for executing such a model in Camunda, we can wire such a task with a DMN decision table or some other programmed piece of decision logic.

2

While it would be possible to hide the evaluation of decision logic behind the exclusive gateway, we recommend always showing an explicit node that retrieves the data, which subsequent data-based gateways can then use.
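When executing on Camunda 8, the single decision task can be wired to a DMN decision via the `zeebe:calledDecision` extension element. The decision id, result variable, and gateway id below are illustrative assumptions:

```xml
<bpmn:businessRuleTask id="decideOnApplication"
                       name="Decide on credit card application">
  <bpmn:extensionElements>
    <!-- links the task to a deployed DMN decision -->
    <zeebe:calledDecision decisionId="creditCardApproval"
                          resultVariable="approvalResult" />
  </bpmn:extensionElements>
</bpmn:businessRuleTask>

<!-- a subsequent data-based gateway routes on the decision result -->
<bpmn:exclusiveGateway id="applicationApproved" name="Application approved?" />
```

This keeps the rules in the DMN table, where they can change independently of the process model.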

Distinguishing undesired results from fatal problems

You model a certain step in a process and wonder about undesired outcomes and other problems preventing you from achieving the result of the step.

Example: What today is a problem for the business might become part of the happy path in a less successful future: "Before we can issue a credit card, we must ensure that a customer is creditworthy. Unfortunately, sometimes it turns out that we cannot even get any information about the customer. At the moment, we typically reject in that case, too. Luckily, we have enough business with safe customers anyway."

Option 1: Using gateways to check for undesired results

1

Showing the check for the applicant's creditworthiness as a gateway also informs about the result of the preceding task: the applicant might be creditworthy - or not. Both outcomes are valid results of the task, even though one of the outcomes here might be undesired from a business perspective.

Option 2: Using boundary error events to check for fatal problems

1

Not knowing anything about the creditworthiness (because we cannot even retrieve information about the applicant) is not considered a valid result of the step, but a fatal problem preventing us from achieving any valid result. We therefore model it as a boundary error event.

The fact that both problems (an unknown applicant or an applicant who turns out not to be creditworthy) currently lead to the same reaction in the process (we reject the credit card application) does not change the fact that we need to model them differently. The decision in favor of a gateway or an error boundary event solely depends on the exact definition of the result of a process step. See the next section.
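Modeled in BPMN 2.0 XML, the fatal problem becomes an error boundary event on the checking task, while the undesired result is handled by a normal gateway afterwards. The error code and all ids are illustrative assumptions:

```xml
<!-- the error itself is declared at the definitions level -->
<bpmn:error id="errApplicantUnknown" errorCode="APPLICANT_UNKNOWN" />

<bpmn:serviceTask id="checkCreditworthiness"
                  name="Check creditworthiness of applicant" />

<!-- fatal problem: no information about the applicant available -->
<bpmn:boundaryEvent id="applicantUnknown"
                    attachedToRef="checkCreditworthiness">
  <bpmn:errorEventDefinition errorRef="errApplicantUnknown" />
</bpmn:boundaryEvent>

<!-- valid but possibly undesired result: routed by a gateway -->
<bpmn:exclusiveGateway id="creditworthy" name="Applicant creditworthy?" />
```

Error boundary events are interrupting by definition, which matches the semantics of a fatal problem.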

Understanding the definition of the result

What we want to consider to be a valid result for a process step depends on assumptions and definitions. We might have chosen to model the process above with slightly different execution semantics, while achieving the same business semantics:

1

The only valid result for the step "Ensure creditworthiness" is knowing that the customer is in fact creditworthy. Therefore, any other condition must be modeled with an error boundary event.

To improve clarity by means of process models, it is absolutely crucial for modelers to have a clear mental definition of the result a specific step produces, and, as a consequence, to be able to distinguish undesired results from fatal problems preventing us from achieving any result for the step.

While there is not necessarily one right way to decide what to consider a valid result for your step, the business reader will typically have a mental preference to see certain business issues either more as undesired outcomes or more as fatal problems. However, for executable pools, your discretion to decide about a step's result might also be limited when using, for example, service contracts which are already pre-defined.

Asking multiple recipients for a single reply

You offer something to or request something from multiple communication partners, but you actually just need the first reply.

We sometimes also call that pattern first come, first served.

Example: A well-known personal transportation startup works with a system of relatively independent drivers. "Of course, when the customer requests a tour, speed is everything. Therefore, we need to limit a tour to those of our drivers who are close by. Of course, there might be several drivers within a similar distance. We then just offer the tour to all of them!"

Using a multi-instance task

1

After determining all drivers currently close enough to serve the customer, we push the information about the tour to all of those drivers.

2

We then wait for the reply of a single driver. Once we have it, the process stops waiting, proceeds to the end event, and informs the customer about the approaching driver.

According to the process model, it is possible that another driver accepts the tour as well. However, as the process in the tour offering system is no longer waiting for the message, all subsequent replies are intentionally ignored in this process design.
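The pattern combines a parallel multi-instance send task with a single message catch event. The ids and the message name are illustrative assumptions; in an executable model, the driver collection and message correlation would still need to be configured:

```xml
<!-- push the tour to all nearby drivers in parallel -->
<bpmn:sendTask id="offerTour" name="Offer tour to driver">
  <bpmn:multiInstanceLoopCharacteristics isSequential="false" />
</bpmn:sendTask>

<!-- then wait for exactly one reply; later replies are not consumed -->
<bpmn:intermediateCatchEvent id="tourAccepted" name="Tour accepted by driver">
  <bpmn:messageEventDefinition messageRef="msgTourAccepted" />
</bpmn:intermediateCatchEvent>
```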

Processing a batch of objects

You need to process many objects at once which were created one by one beforehand, or which were updated one by one until they reached a certain status.

We sometimes also call that pattern simply the 1-to-n problem.

Example: A lawyer explains to a new client the way he intends to bill him: "Of course, if you need advice, you can call me whenever you want! We will agree about any work that needs to be done, and my assistant will track those services which are subject to a charge. Mostly once a month, you will receive a neatly structured invoice providing you with all the details!"

Using data stores and multi-instance activities

1

The client asks for advice whenever they need it. Note that we create one process instance per request for advice.

2

The lawyer makes sure to record the billable hours needed for the client.

3

As he does not directly inform anybody by doing this, but rather collects data, we show this with a data store representing the time sheet and a data association pointing in its direction - representing the write operation.

4

The assistant starts their invoicing process on a monthly basis. In other words, we create one process instance per monthly billing cycle.

5

As a first step, the assistant determines all the billable clients. These are the clients for whom time sheet entries exist in the respective month. Note that many legal advice instances relate to one billing instance, and that the connection is implicitly shown by the read operation on the current status of data in the time sheet.

6

Now that the assistant knows the billable clients, they can iterate through them and invoice all of them. We use a sequential multi-instance subprocess to illustrate that we need to do this for every billable client.

7

On the way, the assistant is also in charge of checking and correcting time sheet entries, illustrated with a parallel multi-instance task. Note that these time sheet entries (and hence task instances) relate here 1:1 to the instances of the lawyer's "legal consulting" process. In real life, the lawyer might have created several time sheet entries per legal advice process, but this does not change the logic of the assistant's process.

8

Once the client is invoiced, the assistant starts a "payment processing" instance per invoice, the details of which are not shown in this diagram. We can imagine that the assistant needs to be prepared to follow up with reminders until the client eventually pays the bill.
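The assistant's iteration maps to a sequential multi-instance subprocess containing a parallel multi-instance task. All ids are assumptions, and an executable model would additionally wire the client and time sheet collections to the loop characteristics:

```xml
<!-- one sequential iteration per billable client -->
<bpmn:subProcess id="invoiceClient" name="Invoice client">
  <bpmn:multiInstanceLoopCharacteristics isSequential="true" />

  <!-- inside: check each time sheet entry of the client in parallel -->
  <bpmn:userTask id="checkTimeSheetEntry"
                 name="Check and correct time sheet entry">
    <bpmn:multiInstanceLoopCharacteristics isSequential="false" />
  </bpmn:userTask>
</bpmn:subProcess>
```

Sequential iteration over clients keeps the invoicing ordered, while the entries within one client can safely be checked in parallel.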

Concurring dependent instances

You need to process a request, but need to make sure that you don't process several similar requests at the same time.

Example: A bank worries about the increasing costs for creditworthiness background checks: "Such a request costs real money, and we often have packages of related business being processed at the same time. So we should at least make sure that if one credit check for a customer is already running, no second check for the same customer is started at the same time."

Using message events

1

Once an instance passes this event and moves on to the subsequent actual determination of the creditworthiness...

2

...other instances will determine that there already exists an active instance and wait to be informed by this instance.

3

When the active instance has determined the creditworthiness, it will move on to inform the waiting instances...

4

...which will receive a message with a creditworthiness payload and finish themselves with the needed information.

The model explicitly shows separate steps (determining the creditworthiness and informing the waiting instances), which you might want to implement more efficiently as one single step performing both semantic steps at once by means of a small piece of programming code.

Using a timer event

While using timer events can be a feasible approach if you want to avoid communication between instances, we do not recommend it. For example, one downside is that such solutions cause delays and overhead due to the periodic queries and the loop.

1

Once an instance passes this event and moves on to the subsequent actual determination of the creditworthiness...

2

...all other instances will go into a wait state for some time, but check periodically whether the active instance has finished.

3

When the active instance has determined the creditworthiness and finishes...

4

...all other instances will also finish after some time.
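The periodic check of this variant is a simple loop around a timer catch event; the ten-minute interval and the ids are illustrative assumptions:

```xml
<bpmn:intermediateCatchEvent id="waitBeforeRetry" name="10 minutes passed">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT10M</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>

<!-- followed by a task querying whether the active instance has
     finished, and an exclusive gateway looping back to this timer
     if it has not -->
```

Every pass through this loop costs a query and up to a full interval of delay, which is the overhead that makes the message-based variant preferable.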