Configuration

An Axon Ivy Engine Cluster node is set up almost the same way as a stand-alone Axon Ivy Engine. Note that all nodes in a cluster setup must run within the same network (the same broadcast domain).

Files Directory

The files directory (Data.FilesDirectory in your ivy.yaml), where Axon Ivy Engine stores files uploaded by users, must be shared between all cluster nodes. This can be done with a Docker volume that stores all the files and is mounted into every cluster node.

Configure the files directory in your ivy.yaml file:

Data.FilesDirectory: "/var/lib/axonivy-engine/files"

Create a Docker volume called ivy-files and mount it into the /var/lib/axonivy-engine/files directory of your Axon Ivy Engine Cluster node containers.

> docker volume create ivy-files
> docker run --mount source=ivy-files,target=/var/lib/axonivy-engine/files ...
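
When you run the nodes with Docker Compose, the same volume can be mounted into all of them. A minimal sketch, assuming two nodes and the axonivy/axonivy-engine image (service names and image tag are illustrative; adapt them to your setup):

services:
  node1:
    image: axonivy/axonivy-engine
    volumes:
      - ivy-files:/var/lib/axonivy-engine/files
  node2:
    image: axonivy/axonivy-engine
    volumes:
      - ivy-files:/var/lib/axonivy-engine/files

volumes:
  ivy-files:
    external: true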

Ensure that you back up the Docker volume periodically.
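
One common way to do this is to archive the volume contents with a throwaway container (archive name and mount paths are illustrative):

> docker run --rm --mount source=ivy-files,target=/files,readonly -v "$(pwd)":/backup alpine tar czf /backup/ivy-files-backup.tar.gz -C /files .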

Note

In older versions of the Axon Ivy Engine Cluster, the applications directory (Data.AppDirectory in your ivy.yaml) also had to be shared between the cluster nodes. This is no longer required, and sharing it should be avoided.

Cluster Name

If you want to run multiple clusters within the same network (same broadcast domain), you must give each cluster a unique name. Only then is it guaranteed that the clusters do not interfere with each other.

The name of the cluster can be configured in the ivy.yaml file.
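
A sketch of such a setting, assuming a key such as Cluster.Name (verify the exact key in the ivy.yaml reference of your engine version):

Cluster.Name: "production-cluster"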

Node Name

The Engine Cockpit has a view that displays all running nodes. The name of a node is auto-generated by default but can be configured in the ivy.yaml file.
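
A sketch of such a setting, assuming a key such as Cluster.Node.Name (verify the exact key in the ivy.yaml reference of your engine version):

Cluster.Node.Name: "node-1"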

Changes

Configuration changes are only applied to the local cluster node where the change is made. For example, if an administrator uses the Engine Cockpit to change a configuration setting, the setting is only changed on the cluster node that serves the administrator's current session. All other cluster nodes still use the old value!

We recommend that you do not change the configuration at runtime. Instead, use a container image that already contains the right configuration. If you have to change the configuration, build a new container image with the updated configuration and redeploy it.
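
A minimal Dockerfile sketch for baking the configuration into the image (the image name and the /ivy/configuration path are assumptions; adapt them to your deployment):

FROM axonivy/axonivy-engine
COPY ivy.yaml /ivy/configuration/ivy.yaml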

Another solution is to share the configuration folder between all cluster nodes using a Docker volume.
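
This follows the same pattern as the files directory above (the /ivy/configuration target path is an assumption; adapt it to your installation):

> docker volume create ivy-config
> docker run --mount source=ivy-config,target=/ivy/configuration ...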