Google GKE
Google Kubernetes Engine (GKE) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.
Camunda 8 Self-Managed can be deployed on any Kubernetes cluster, such as GKE, using Helm charts. However, there are a few pitfalls to avoid, as described below.
GKE cluster specification
Generally speaking, the GKE cluster specification depends on your needs and workloads. Here is a recommended starting point for running Camunda 8:
- Instance type: n1-standard-4 (4 vCPUs, 15 GB memory)
- Number of nodes: 4
- Volume type: Performance (SSD) persistent disks
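As an illustration only, a cluster matching this specification could be created with gcloud along the following lines; the cluster name and zone are placeholders, and the command does not configure the persistent volumes used by Zeebe (see the volume performance section below):

```bash
# Illustrative sketch: a 4-node GKE cluster with the recommended machine type.
# "camunda-cluster" and the zone are placeholder values; adjust to your project.
gcloud container clusters create camunda-cluster \
  --zone europe-west1-b \
  --machine-type n1-standard-4 \
  --num-nodes 4
```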
Pitfalls to avoid
For general deployment pitfalls, visit the deployment troubleshooting guide.
Volume performance
For proper performance in Camunda 8, the persistent volumes attached to Zeebe should provide around 1,000-3,000 IOPS. Performance (SSD) persistent disk volumes deliver a consistent baseline IOPS, but the available IOPS scale with volume size. It's recommended to use the Performance (SSD) persistent disks volume type with at least 100 GB per volume to reach 3,000 IOPS.
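For example, a StorageClass backed by SSD persistent disks might look like the sketch below; the name ssd-storageclass is an arbitrary placeholder, and you would still need to reference it from the Zeebe persistent volume claims (for example via the corresponding Helm chart values) and size the volumes at 100 GB or more:

```yaml
# Illustrative StorageClass using GKE's CSI driver with SSD persistent disks.
# "ssd-storageclass" is a placeholder name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```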
Zeebe Ingress
Zeebe requires an Ingress controller that supports gRPC. If you are using GKE Ingress (ingress-gce) rather than ingress-nginx, you might need extra steps, namely adding the cloud.google.com/app-protocols annotation to the Zeebe Service. For more details, visit the GKE guide on using HTTP/2 for load balancing with Ingress.
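For illustration, the annotation could look like the following on the Zeebe Gateway Service; the Service name, port name, and selector are placeholders and depend on how your Helm release names its resources:

```yaml
# Illustrative sketch: tells GKE Ingress to use HTTP/2 (required for gRPC)
# towards the backend port named "gateway". Names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: camunda-zeebe-gateway
  annotations:
    cloud.google.com/app-protocols: '{"gateway": "HTTP2"}'
spec:
  selector:
    app.kubernetes.io/component: zeebe-gateway
  ports:
    - name: gateway
      port: 26500
      targetPort: 26500
```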
Google Cloud load balancer
Camunda Identity management endpoints, such as the health check endpoint, do not run on port 80. As a result, when using a Google Cloud Load Balancer, you may need a custom health check configuration.
Here's an example of a BackendConfig you can apply:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: camunda-identity
spec:
  healthCheck:
    timeoutSec: 3
    type: HTTP
    requestPath: /actuator/health/readiness
    port: 82
```
When using container-native load balancing, the load balancer sends traffic to an endpoint in a network endpoint group; hence, the targetPort should match the containerPort, as the load balancer sends probes to the Pod's IP address directly.
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: camunda-identity
spec:
  healthCheck:
    timeoutSec: 3
    type: HTTP
    requestPath: /actuator/health/readiness
    port: 8082
```
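To make the difference between the two variants concrete, the sketch below assumes the Identity Service exposes the management container port 8082 as Service port 82; without container-native load balancing the health check targets the Service port (82), while with it the probes go to the Pod directly on the container port (8082):

```yaml
# Illustrative excerpt of an Identity Service exposing the management port.
# Assumes container port 8082 is mapped to Service port 82.
apiVersion: v1
kind: Service
metadata:
  name: camunda-identity
spec:
  ports:
    - name: metrics
      port: 82
      targetPort: 8082
```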
Finally, in your Helm values, assign the BackendConfig to the Identity service:
```yaml
identity:
  service:
    annotations:
      cloud.google.com/backend-config: '{"default": "camunda-identity"}'
```