Configuration
An Axon Ivy Engine Cluster node is set up almost the same way as a stand-alone Axon Ivy Engine. Note: All nodes in a cluster setup have to run within the same network (the same broadcast domain).
Files
We recommend configuring S3 Document Storage as a distributed blob storage so that every cluster node can access all files. Otherwise, you need to share the data directory (Data.Directory in ivy.yaml) between all cluster nodes: use a Docker volume that stores all the files and mount it on every cluster node.
Create a Docker volume called ivy-data and mount it at /ivy/data in your Axon Ivy Engine Cluster node containers.
> docker volume create ivy-data
> docker run --mount source=ivy-data,target=/ivy/data ...
Ensure that you back up this Docker volume periodically.
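A common way to do this, sketched below, is to archive the volume from a short-lived helper container (the alpine image and the archive name are only examples):
> docker run --rm --mount source=ivy-data,target=/ivy/data,readonly \
    -v "$(pwd)":/backup alpine \
    tar czf /backup/ivy-data.tar.gz -C /ivy/data .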
Note
In previous versions of the Axon Ivy Engine Cluster, the applications directory (Data.AppDirectory in your ivy.yaml) had to be shared between the cluster nodes, too. This is no longer required and should be avoided.
Cluster Name
If you want to run multiple clusters within the same network (same broadcast domain), you need to give each cluster a unique name. This prevents the clusters from interfering with each other.
You define the cluster name in the ivy.yaml file.
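For example (a sketch; the Cluster.Name key reflects recent engine versions, verify it against the ivy.yaml reference of your release):
Cluster:
  # Unique name so that this cluster does not interfere with
  # other clusters running in the same broadcast domain
  Name: my-production-cluster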
Node Name
The Engine Cockpit has a view that displays all running nodes of the cluster. The node names are auto-generated by default but you can configure them in ivy.yaml.
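A sketch, assuming the node name lives under Cluster.Node.Name (verify the exact key in the ivy.yaml reference of your engine version):
Cluster:
  Node:
    # Human-readable name shown in the Engine Cockpit cluster view
    Name: node-1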
Changes
Configuration changes are only applied to the local cluster node on which the change is made. For example, if an administrator uses the Engine Cockpit to change a configuration setting, then the setting is only changed on the cluster node that serves the administrator's current session. On all other cluster nodes, the setting remains unchanged!
We recommend that you do not change the configuration at runtime. Instead, use a container image that already contains the right configuration. If you have to change the configuration, create a new container image with the new configuration.
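For example, a minimal Dockerfile that bakes the configuration into the image (it assumes the official axonivy/axonivy-engine base image and its /ivy/configuration directory; adjust names and paths to your setup):
FROM axonivy/axonivy-engine:latest
# Bake the prepared configuration into the image so every container
# started from it runs with identical settings on every cluster node.
COPY ivy.yaml /ivy/configuration/ivy.yaml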
Another solution is to share the configuration folder between all cluster nodes using a Docker volume.
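Sketched below with a placeholder volume name ivy-config; the /ivy/configuration target assumes the official container layout:
> docker volume create ivy-config
> docker run --mount source=ivy-config,target=/ivy/configuration ...
Note that this reintroduces shared mutable state between the nodes, so the image-based approach above is usually preferable.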