System Installation: FNZ Studio 2025
The FNZ Studio Platform is a web application that runs on a Java application server. No Java application client is required. Both FNZ Studio Composition and FNZ Studio Runtime (User Interface) can be accessed through a web browser.
The Platform can be installed on a single or multiple servers (that is, single-node or multi-node) to run as a Cluster. Furthermore, it requires a storage solution for the Cluster Storage:
- See Set up Cluster Storage for a list of all available storage locations for the Cluster Storage.
- Optionally, the Platform can be configured to store large files in one or more Blob Storage locations.
- Finally, if your installation uses containers, see also System Installation Using Containers (2025).
Some features described in this article may be subject to a separate surcharge. See the Private Cloud Bundle article for licensing information.
Prerequisites
Before reading this article, see the Technical Requirements article for detailed information on the required software and hardware.
To install the Platform, you need the following items (all files are provided by Support):
- Application server (one per node)
- Java Runtime Environment
- A storage solution (one per cluster which is shared by all nodes)
- All server names, IP addresses, and access credentials for the environments the Platform needs to access (e.g., LDAP, SMTP, or database servers).
- The following files:
- Latest FNZ Studio 2025.x.x version (see Products page)
- Compatible extensions, including all mandatory extensions for your 2025.x.x version, as well as any other extensions that you have licensed.
- Compatible Packages, including all mandatory Packages for your 2025.x.x version, as well as any other provided Packages that you have licensed, delivered as one or more `AWDEPLOYMENT` files.
Installing the Platform
This section explains all the steps necessary for installing the Platform in a single-node or multi-node environment. Information specific to multi-node installations is indicated in the related sections and can be ignored when performing a single-node installation.
- Set up a directory in the local file system ('Data Home')
- Set up and configure the Cluster Storage
- (Optionally) Set up and configure the Blob Storage
- Install mandatory extensions
- Install Packages
- Map J2EE security roles to the Platform roles
- Install the Platform WAR file on your specific application server
- Apply the license file
- Start the Platform
Data Home, Cluster Storage and Blob Storage
FNZ Studio stores data in various locations:
- One Data Home per node
- One Cluster Storage
- Zero or more Blob Storages
The following table outlines the content stored in these locations, and provides an overview of the differences between single-node and multi-node installations. See the following sections for further details.
| | Data Home | Cluster Storage | Blob Storage |
|---|---|---|---|
| Content | Configuration files; extensions; log files | The model for Solutions and active Process data, e.g.: Business Objects; active Process Instances/Value Stores; Cluster files | Large binary data, such as high-resolution pictures, movies, digital identification documents, and any other uploaded documents |
| Single-node installation | One Data Home per instance (on its local file system). Default location is `<Local file system>/data-home/` | One Cluster Storage per instance, by default located in the Data Home. Default location is `<Local file system>/data-home/cluster`. See Note 1 | Zero or more Blob Storage Silos. See Note 2 |
| Multi-node installation | One Data Home per node (on its local file system). Default location is `<Local file system>/data-home/`. See Note 3 | One shared Cluster Storage for all nodes. See Note 4 | Zero or more shared Blob Storage Silos for all nodes. See Note 5 |
Notes:
- Note 1: See Set Up Cluster Storage for an overview of all available variants for the Cluster Storage.
- Note 2: See the Set Up Blob Storage section for more information.
- Note 3: The `conf` and `extensions` folders may be shared among the cluster nodes.
- Note 4: See Set Up Cluster Storage for an overview of all available variants for the Cluster Storage.
- Note 5: See the Set Up Blob Storage section for more information.
Set up Data Home
Data Home is a folder located on the application server's file system. This folder contains all configuration data and extensions. The Data Home folder can be located anywhere on the file system. The process executing the Platform must have read/write rights on this folder.
To configure the path to the Data Home folder, set a JVM parameter. For example, this sets the folder /opt/appway/data-home
as the Data Home:
nm.data.home=/opt/appway/data-home
The process of setting a JVM parameter is different for every application server. Consult the documentation of your application server for details. On the command line, a JVM parameter can be set like this: -Dnm.data.home=/opt/appway/data-home
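For example, on Apache Tomcat (a sketch assuming a standard Tomcat layout; other application servers use different mechanisms), the parameter can be added to `$CATALINA_BASE/bin/setenv.sh`:

```
# $CATALINA_BASE/bin/setenv.sh -- create this file if it does not exist
export CATALINA_OPTS="$CATALINA_OPTS -Dnm.data.home=/opt/appway/data-home"
```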
To start the Platform successfully, the `/data-home/` folder (the Data Home root directory) and its `/data-home/conf/` subfolder must exist.
| Folder | Description |
|---|---|
| `<Local file system>/data-home/conf/` | Contains all system configurations. This subfolder can be empty, but it must exist; otherwise, the data directory is not accepted as Data Home. |
| `<Local file system>/data-home/extensions` | Contains extensions and their configuration files. Some application servers may require specific extensions in order to start the Platform (see Install WAR File). Regular extensions can be installed from FNZ Studio Composition once it has started. |
Notes:
- When installing multiple instances on the same server, you must configure a different location for each Data Home
- For single-node installations, the Data Home also contains the Cluster Storage by default, which is used to store the model for Solutions and the data generated while running them. See Set up Cluster Storage for alternative storage locations.
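A minimal sketch of preparing a valid Data Home on the command line, using the example path from above:

```
mkdir -p /opt/appway/data-home/conf /opt/appway/data-home/extensions
# conf/ may remain empty, but it must exist for the directory to be accepted as Data Home
```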
Saving the Data Home in Cluster Storage
You can configure the Platform to store the configuration and extensions in the Cluster Storage, in so-called Startup Files, which are stored in the `startupfiles` distributed map. Whenever the Platform is started, it checks the system configuration and extensions stored in the Startup Files and overrides the existing local configuration if needed.
Note that this is not recommended for permanent extensions.
In detail, Startup Files work as follows:
- The first node to start in a Cluster uses a "Newer Wins" strategy. This means that the node's local system configuration and extensions are merged with the Startup Files stored in Cluster Storage. If there are two files with the same path, the newer one is taken. If there are files which exist only on one side, they are added to the other side.
- "Non-first" nodes in a Cluster always use whatever system configuration or extensions exist in the Startup Files in Cluster Storage. You could call this strategy "Cluster Wins".
Note that the Startup Files functionality synchronizes the system configuration and extensions on all nodes of a Cluster.
Enabling Startup Files
To enable this functionality and store the system configuration and extensions in the `startupfiles` distributed map in the Cluster Storage, set the following configuration property to `true` (default: `false`):

```
nm.cluster.startup.file.sync.enabled=true
```
Moreover, instead of adding this configuration property to your `<data-home>/conf/conf.properties` file, we recommend setting it using one of the following methods:
- Using a JVM System property:
-Dnm.cluster.startup.file.sync.enabled=true
- Using an environment variable:
NM_CLUSTER_STARTUP_FILE_SYNC_ENABLED=true
When this functionality is enabled, any further changes made to the system configuration or extensions at runtime are also saved in the Cluster Storage. Conversely, if you disable this functionality, all the files saved in the `startupfiles` distributed map are removed from the Cluster Storage.
When starting nodes which join a running cluster, we recommend that you enable an additional feature of Startup Files using the following configuration property:
nm.cluster.startup.file.sync.client.enabled
This property needs to be configured either as a JVM System property or as an environment variable, since it is used before the data home is accessed:
- Using a JVM System property:
-Dnm.cluster.startup.file.sync.client.enabled=true
- Using an environment variable:
NM_CLUSTER_STARTUP_FILE_SYNC_CLIENT_ENABLED=true
Doing so allows the Platform to check the system configuration and extensions stored in the `startupfiles` distributed map early during system startup.
You can check the content of the `startupfiles` distributed map at the following locations:
- In FNZ Studio Composition, under System Maintenance > Cluster Tools > Hazelcast Maps > Startup Files.
- In the Cluster Storage, see the `startupfiles` map.
Supporting Downgrades
The configuration property `nm.cluster.startup.file.sync.support.downgrade` supports patch downgrade scenarios for extensions.
Without this property, if you tried to perform a downgrade, the Startup Files would overwrite the older extensions provided in the Docker image.
By setting `nm.cluster.startup.file.sync.support.downgrade` to `true` (default is `false`), the locally provided extensions from the Docker image overwrite the extensions stored in Startup Files. Furthermore, the `installation.properties` file is taken from the Docker image.
Note, however, that Platform downgrades are still not supported as a general rule (see Upgrade Principles) and that you should use this property only for small patch downgrades, when needed.
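Like the other Startup Files properties, this one can be set as a JVM System property or, assuming the same naming convention shown above, as an environment variable:

```
-Dnm.cluster.startup.file.sync.support.downgrade=true
NM_CLUSTER_STARTUP_FILE_SYNC_SUPPORT_DOWNGRADE=true
```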
Set up and Configure the Cluster Storage
The details of setting up a Cluster Storage differ depending on whether you are performing a single-node or multi-node installation of the Platform.
Single-Node Installations The Cluster Storage is located in the Data Home by default. However, we recommend placing the Cluster Storage outside of the Data Home, ideally on a network share or at least in a separate directory. See Data Home, Cluster Storage, and Blob Storage for details.
Alternatively, you can use any of the following options as Cluster Storage instead (see linked sections for details):
- File System
- Apache Cassandra Database
- Relational Database
- Azure Blob Storage
- Amazon Web Services S3
Multi-Node Installations Prepare a shared storage location as Cluster Storage which can be used by all nodes of the Cluster. The following options are available:
- File System
- Apache Cassandra Database
- Relational Database
- Azure Blob Storage
- Amazon Web Services S3
Set up and Configure the Blob Storage
Blob Storage is especially useful for Solutions that need to store large binary data. Data is stored in Blob Storage Silos and is accessible to any node of the Cluster. Moreover, Blob Storage has a low impact on memory and local disk space. For full information, see the Blob Storage User Guide article.
Before using Blob Storage, you need to decide where you want to store your Blobs. A file system Blob Storage Provider is available in the Platform by default and allows storing Blobs on a file system. Furthermore, different Blob Storage Providers can be added as extensions to store Blobs in a Relational Database, in AWS S3, or in Azure Blobs. Note that using multiple Blob Storage Providers at the same time is supported.
Configure the File System Blob Storage Provider
When using the file system Blob Storage Provider, Blobs can be stored on the local disk (for single-node installations) or on a shared network drive which is accessible by all nodes.
First, identify where to store the Blobs and then point the Filesystem Blob Storage Provider to this directory by configuring the following property:
nm.blob.storage.filesystem.silos = <silo name 1>=<path to silo 1>, <silo name 2>=<path to silo 2>, ...
For example, to configure two Blob Storage Silos, where the first is located under `<data-home>/blobs` and the second under `/mnt/additional-blobs`, you would use the following:
nm.blob.storage.filesystem.silos = under-review=blobs, archive=/mnt/additional-blobs
Note that both directories must be accessible by all nodes in the cluster.
Install Extensions
The following extensions need to be installed:
- All mandatory extensions for your 2025.x.x version
- Any other extensions that you have licensed
Proceed as follows:
- Install the extensions by adding them to the `<Local file system>/data-home/extensions` folder.
- Edit the extensions configuration file `<Local file system>/data-home/extensions/configuration.xml` to determine whether an extension is started at startup or not (see the sketch after this list). Refer to the following attribute descriptions (also included in the `configuration.xml` file itself):
  - state: 0 = stopped (has never been started manually), 1 = stopped (has run before), 2 = started
  - startup: 0 = preserve state, 1 = do not run at startup, 2 = run at startup
- Starting from FNZ Studio 2025.x, the following behavior applies:
  - You can use the configuration property `nm.extensions.config.dir` to configure a different directory for the `configuration.xml` file. Absolute and relative paths are accepted; a relative path is resolved against the Data Home.
  - Extension auto-discovery at startup can be disabled by setting the configuration property `nm.cluster.startup.extensions.autoDiscovery` to `false` (default is `true`). In this case, only the extensions listed in the `configuration.xml` file are loaded.
- After starting the Platform, make sure that the expected extensions are started in FNZ Studio Composition (System Configuration > Extensions).
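A hypothetical sketch of a `configuration.xml` entry follows, using the `state` and `startup` attributes described above. The element names are illustrative assumptions; refer to the comments in your own `configuration.xml` file for the exact syntax:

```xml
<!-- Illustrative entry (element names assumed): starts the extension now (state="2") and at every startup (startup="2") -->
<extension name="ClusterTools" state="2" startup="2"/>
```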
Mandatory Extensions
Mandatory extensions are an essential part of the Platform and must be installed on every instance. If these extensions are not installed and running, the Platform itself is not considered correctly installed and set up. Mandatory extensions provide important features and strengthen the available analytical capabilities, allowing you to tune your installation and ensure maximum Platform stability.
Ensure the following mandatory extensions are installed on your system:
- AppTask — Includes all the logic behind the autogeneration of the Data Driven UI and the logic to apply a Theme to such UI. This extension is used by App Tasks.
- AppTaskAPI — Exposes the API used to communicate with the backend (Data Logic, Theming, Process). This extension is used by App Tasks.
- ClusterTools — Provides insights into the internals of the Cluster and Hazelcast (see documentation).
- ComponentExtension — Contains most of the available Screen Components.
- DefaultIcons — Adds generic Workspace icons for use with Screen Components and in Solutions.
- Functions — Provides many additional functions vital to working in the Platform.
- ProcessUsageTracker — Collects usage statistics (started, updated, stopped) for top-level Processes in Solutions. These statistics can be accessed in the license report.
- StatisticsCollector — Collects statistics relevant for testing and tuning your installations (see documentation).
- SystemHealthSensors — Adds additional health sensors to the Health Service (see Health Sensor documentation).
- WorkitemFinder — Provides out-of-the-box filters as well as sorting, paging, and text-based searching capabilities for Workitems (see documentation).
DefaultUser Extension
The DefaultUser Extension can be used if no other mechanism to create users is available. Consider the following information:
- It creates a user with the ID `NM`.
- The password of the default user can be configured using the `default.user.password` configuration property. This must be stored in a `DefaultUser.jar.cfg` file, which you must create and place in the `<Local file system>/data-home/extensions` folder. Example configuration: `default.user.password = some-secure-password`
- The DefaultUser extension and the `DefaultUser.jar.cfg` configuration file must be uninstalled after the first user has been created; otherwise, it will fail to start.
Install Packages
The following Packages need to be installed:
- All mandatory Packages for your FNZ Studio 2025.x.x version
- Any other Packages that are provided by FNZ Studio and you have licensed
Proceed as follows:
- Install these Packages by placing the corresponding `AWDEPLOYMENT` files into the `packages` folder in the Data Home (`<Local file system>/data-home/packages`). The Packages are then automatically installed when the Platform is started. See Importing Packages for more details on installing Packages automatically.
- Alternatively, if you wish to install Packages manually, install them from Studio Composition (Solution Design > Import) after starting the Platform.
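For example, on the command line (the Package file name is hypothetical; the path uses the default Data Home location from the examples above):

```
cp DesignSystem.AWDEPLOYMENT /opt/appway/data-home/packages/
```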
Mandatory Packages
Mandatory Packages are an essential part of the Platform and must be available on every instance. If these Packages are not installed (or not updated to a version compatible with version 2025.x.x), the Platform itself is not considered correctly installed. Mandatory Packages provide important features and functionality allowing you to use your installation more effectively. You can download mandatory Packages from the Products page.
Ensure the following mandatory Packages are installed on your system:
- Design System — Provides patterns, prebuilt modules, and styling to enable the straightforward implementation of a cohesive brand experience. If you are updating the Design System Package, see the Design System Upgrade Guide for more information on the update process.
Important! The DataLogic Package, which was mandatory in previous FNZ Studio versions, is now a Platform Package and is, therefore, installed automatically.
Map Security Roles
The Platform defines the following J2EE security roles:
- Administrator – Gives access to FNZ Studio Composition.
- User – Gives access to FNZ Studio Runtime.
Proceed as follows:
- Create role mappings for the User and Administrator roles in your application server. The user registry responsible for user authentication must define at least one of these two roles for each user.
- If applicable, create the following additional J2EE security role related to the Web API functionality: WebAPI. This role gives access to URLs with the prefix `/secure/api/*` for users who do not have the Administrator or User role assigned. You only need to create a role mapping for the WebAPI role if both of these conditions are fulfilled:
  - The Authenticator option in the Web API configuration is set to Application Server.
  - You want to allow users who should not be assigned the Administrator or User role to call the Web API.
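As an illustration only, on Apache Tomcat with the built-in user database, the role mapping could look as follows (role mapping is specific to your application server and user registry; the user shown is hypothetical):

```xml
<!-- $CATALINA_BASE/conf/tomcat-users.xml (Tomcat-specific illustration) -->
<tomcat-users>
  <role rolename="Administrator"/>
  <role rolename="User"/>
  <user username="studio-admin" password="change-me" roles="Administrator,User"/>
</tomcat-users>
```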
Install WAR File
Install the WAR file according to the procedure for your application server. Make a note of the application's context path (`{ CONTEXT_PATH }`) that you define. See the following sections for details.
Apache Tomcat
Use Apache Tomcat 9.0.82 or greater. Refer to the Technical Requirements article for all details.
Special care has to be taken to ensure that Tomcat handles static resources using UTF-8 encoding. There are two ways to achieve this:
- Make sure that the default encoding of the JVM is set to UTF-8. This should be the preferred option for new installations, as it guarantees proper UTF-8 support for all Platform components. The default encoding of the JVM can be changed by passing the Java system property `file.encoding` on the command line when starting the application server: `-Dfile.encoding=UTF-8`
- Configure Tomcat's DefaultServlet to use UTF-8 as file encoding. Open the default web.xml deployment descriptor file in `$CATALINA_BASE/conf/` and add a new `<init-param>` element to the servlet named 'default':

```xml
<servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    ...
    <init-param>
        <param-name>fileEncoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
</servlet>
```
IBM WebSphere
For WebSphere installations, a special server property must be defined to handle JSON calls properly:
nm.auth.userid.sessionfallback = true
If you are using WebSphere 8.5 or higher and want to enable the Platform to pick up SSL certificates injected by WebSphere, set the following configuration property to `true` (default: `false`):
nm.security.ssl.useWebSphereMethodExecutor
JBoss EAP
If you are using JBoss EAP 7.4 or higher, set the following environment variable to prevent problems when FNZ Studio is duplicating or persisting some business objects:
DISABLE_JDK_SERIAL_FILTER=true
Apply License File
Copy the license file (`license.properties`) to `<data-home>/conf/`. If you have not yet received a license file, request it from Support.
Note: For single-node test installations, the license file is integrated in the WAR file and does not need to be applied manually.
Configure Hazelcast
For multi-node installations, configure Hazelcast to allow it to connect to other nodes in the cluster. The Platform relies on Hazelcast, an in-memory data grid, to distribute data among the nodes. Define the nodes that should join the Cluster using one of the following discovery mechanisms. Note that multiple join configurations cannot be enabled at the same time:
- TCP-IP – For each node, edit the file `<data-home>/conf/hazelcast.xml`. Enable the mechanism (`<tcp-ip enabled="true">`) and add the IP addresses of all nodes in the cluster. For example:

  ```xml
  <tcp-ip enabled="true">
      <member>192.168.14.120</member>
      <member>192.168.14.46</member>
      <member>192.168.14.72</member>
      <member>192.168.14.35</member>
  </tcp-ip>
  ```

  Note: If the `hazelcast.xml` file does not exist yet, start a single-node installation for the first time, and copy the automatically generated `hazelcast.xml` file from that installation to each node's file system in the cluster installation.
- Multicast – For each node, edit the file `<data-home>/conf/hazelcast.xml`. Enable the mechanism (`<multicast enabled="true">`) and add the multicast group and port. For example:

  ```xml
  <multicast enabled="true">
      <multicast-group>224.2.2.3</multicast-group>
      <multicast-port>54327</multicast-port>
  </multicast>
  ```

  Note: If the `hazelcast.xml` file does not exist yet, start a single-node installation for the first time, and copy the automatically generated `hazelcast.xml` file from that installation to each node's file system in the cluster installation.
- Cloud-native – Cloud-native node discovery is available for instances deployed on:
  - Amazon Web Services – Use the AWS Hazelcast Discovery extension.
  - Microsoft Azure – Use the AzureHazelcastDiscovery extension.
  - Kubernetes – Use the KubernetesHazelcastDiscovery extension, which provides automatic Cluster node discovery for instances deployed on Kubernetes.
- Other methods – The Platform should also work with all additional discovery mechanisms supported by Hazelcast; however, more setup work would be required, and the use of these methods is not supported. See the Hazelcast documentation for details.
Hazelcast Configuration Properties
Two new configuration properties are available in FNZ Studio 2025.x:
- `nm.cluster.startup.hazelcast.config.directory` – Configures the directory in which the Hazelcast configuration files are stored. Absolute paths and paths relative to the Data Home are supported. Its default value is `conf`.
- `nm.cluster.startup.hazelcast.config.autoCreate` – Configures whether the Hazelcast configuration files should be created automatically if they do not exist. The default value is `false`. If no Hazelcast configuration files exist and they are not automatically generated, the default Hazelcast configuration shipped with FNZ Studio is used.
Both configuration properties can only be set as Environment variables (ENV) or as Java System Properties (-D).
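For example, to store the Hazelcast configuration files in a hypothetical `hazelcast-conf` directory inside the Data Home and have them created automatically on first startup, you could pass (following the JVM-property pattern used throughout this article):

```
-Dnm.cluster.startup.hazelcast.config.directory=hazelcast-conf
-Dnm.cluster.startup.hazelcast.config.autoCreate=true
```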
Configure HTTP Compression
HTTP compression significantly reduces the amount of data transferred between FNZ Studio and a browser. This improves the responsiveness of FNZ Studio web application and leads to a better user experience. System administrators are therefore encouraged to enable and configure HTTP compression in the Java application server used to host FNZ Studio. More information and configuration examples can be found in Configure HTTP compression for FNZ Studio.
Configure Required Command-line Options of Java 21
FNZ Studio 2025.x requires Java 21. However, in order to run correctly on Java 21, certain command-line options have to be set when starting Java. See System Installation: Required command-line options for Java 21 for more information.
Split-Brain Protection (Experimental)
Starting from FNZ Studio 2024, an experimental Split-Brain protection feature helps you preserve data consistency in a cluster split-brain situation by suspending the activities of the Platform and, thus, preserving the state of the distributed maps. To configure this feature:
- In FNZ Studio Composition (System Configuration > Configuration Properties), set the configuration property `nm.cluster.splitbrain.protection.function` to `true` (`false` by default).
- The property above enables the possibility to configure one of these protection functions:
  - `MinSize`: This function detects a Split-Brain if the number of nodes goes below the threshold set in the `nm.cluster.splitbrain.protection.min.cluster.size` configuration property (System Configuration > Configuration Properties). The minimum size can be changed at runtime.
  - `ExpectedSize`: This function detects a Split-Brain if the number of nodes goes below `expectedClusterSize - backup`, where `expectedClusterSize` is the actual number of nodes, and `backup` is the number of synchronous backups for every entry in the distributed maps.
As a best practice, we recommend having a cluster with an odd number of nodes.
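For illustration, a minimal configuration for a three-node cluster using the `MinSize` function might look as follows (the property names are those listed above; the threshold value of 2 is an illustrative assumption):

```
nm.cluster.splitbrain.protection.function=true
nm.cluster.splitbrain.protection.min.cluster.size=2
```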
Start the Platform
- Start the application server and the Platform. If you performed a multi-node installation and are starting a Cluster, see the Cluster Startup Considerations just below before starting the Platform.
- Ensure there are no WARN or ERROR messages returned during startup or that you fully understand the messages and feel confident ignoring them. Contact Support if in doubt.
Cluster Startup Considerations
Following is additional information you should consider before starting a Cluster:
- Before starting the Cluster, make sure that all nodes have the same extensions installed and the same configuration (except for node-specific configurations, if any).
- If every node has enough memory and CPU power to start the Platform by itself, send start-up commands to every application server. When the application servers start, the first among them initializes the system and the other nodes then join.
- If several application servers must start at the same time, configure a minimum initial cluster size:
  - Edit the Hazelcast configuration file `<data-home>/conf/hazelcast.xml`.
  - Add the property `hazelcast.initial.min.cluster.size` with the minimum number of nodes necessary to start the Platform. For example, to start the Platform only after four nodes have been able to connect with each other:

  ```xml
  <properties>
      ...
      <property name="hazelcast.initial.min.cluster.size">4</property>
  </properties>
  ```
- Startup is aborted and the whole cluster is shut down if any cluster data cannot be loaded (e.g., missing Business Objects).
To ignore startup errors during data loading for specific maps, use the configuration property `nm.cluster.persistence.startup.ignore.errors.maps`. You can configure multiple maps separated by commas (e.g., `businessObjectsMap, clusterFilesMap`). Note that this property should only be used for emergency cases and should be removed as soon as the underlying data loading problem is fixed.
Script Execution at Cluster Startup
Starting from FNZ Studio 2025.x.x, it is possible to execute FNZ Studio Scripts at cluster startup. This is especially useful when provisioning a new environment and new Solutions.
To execute scripts at Platform startup, place them directly in the `${data_home}/provisioning` directory. For security reasons, we recommend that the `${data_home}/provisioning` directory be set to read-only.
Note the following:
- Only `.nel` scripts (written in FNZ Studio Script Language) are supported.
- Multiple scripts are executed in alphabetical order based on their file names. Note that nested scripts (placed inside `/provisioning` sub-directories) are not supported.
- Scripts are executed only once: scripts that have already been executed are skipped on subsequent cluster startups.
- If a script fails during execution, cluster startup also fails and the remaining scripts are not executed.
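For example, a provisioning directory could be laid out as follows (the file names are hypothetical; the numeric prefixes make the alphabetical execution order explicit):

```
<data-home>/provisioning/
    010-create-default-roles.nel
    020-import-reference-data.nel
```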
Verify the Platform Started Correctly
After starting the Platform, access FNZ Studio Composition and verify that it started correctly. We recommend accessing FNZ Studio Composition from each node of the installation, as well as from the load balancer, if applicable (see details just below).
After connecting to a node / load balancer, follow these steps to access Studio Composition and verify that the Platform started correctly:
- Log in with a user in the Administrator role. Note: If you cannot log into Studio Composition, check whether your authentication and authorization provider is configured correctly.
- Go to System Maintenance > System Overview > Overview and verify that all nodes appear as expected. Also check the startup time and the Platform version to verify correctness.
Connecting to a Node To open FNZ Studio Composition directly on a node, enter the following URL in your browser:
http://SERVER_NAME:PORT/{ CONTEXT_PATH }/admin
Example URL: http://solution-node1.appway.com/admin
- SERVER_NAME = solution-node1.appway.com
- PORT = 80
- { CONTEXT_PATH } = '' (empty)
Connecting to the Load Balancer To open FNZ Studio Composition from the load balancer, enter the following URL in your browser:
http://LOAD_BALANCER:PORT/{ CONTEXT_PATH }/admin
Example URL: http://solution.appway.com/admin
- LOAD_BALANCER = solution.appway.com
- PORT = 80
- { CONTEXT_PATH } = '' (empty)
Using the File System as Cluster Storage
As mentioned above, there are several options you can use as Cluster Storage, such as the file system. To use the file system as Cluster Storage, the necessary steps differ depending on whether you are performing a single-node or multi-node installation of the Platform:
- For multi-node installations, set up a single Cluster Storage directory shared by all nodes:
  - Create a folder anywhere on a shared file system, for example on a mounted NFS drive or a Windows share.
  - This folder must be accessible from all nodes in the Cluster, with read and write permissions. To estimate the necessary disk space, see Technical Requirements.
- For both single- and multi-node installations, point each node to the Cluster Storage directory using the property `nm.cluster.persistence.filesystem.path`. The path can be absolute or relative to the Data Home.
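For example, assuming a hypothetical NFS mount at `/mnt/appway-cluster-storage` that is shared by all nodes:

```
nm.cluster.persistence.filesystem.path=/mnt/appway-cluster-storage
```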
Marker File
If the Platform is configured to persist information to the file system, the Platform checks for a marker file at startup. This check is used to determine whether one of the following conditions is fulfilled:
- The Platform is starting in a fresh and empty Cluster Storage
- The Platform is starting in a Cluster Storage used by a previous major/feature version (this is the case for the first startup after upgrading)
If either of the above conditions is fulfilled, the Platform creates a marker file for the currently running Platform version in the Cluster Storage. The marker file is empty and named according to the syntax `appway<appwayFeatureVersion>.marker`, e.g. `appway2025.1.marker`.
Using an Apache Cassandra Database as Cluster Storage
An Apache Cassandra Database can be used as a Cluster Storage through the CassandraHazelcastStore extension. Complete information can be found in the CassandraHazelcastStore Extension article.
Using a Relational Database as Cluster Storage
The RelationalDBHazelcastStore extension provides another possible option for Cluster Storage, in this case allowing the use of a Relational Database to store Platform data.
Complete information on using a Relational Database as a Cluster Storage can be found in the RelationalDBHazelcastStore Extension article.
Using Azure Blob Storage as Cluster Storage
The AzureHazelcastStore extension enables the Platform to use Azure Block Blobs as Cluster Storage. Block Blobs are a scalable object storage for documents, videos, pictures, and unstructured text or binary data. Blobs can be stored in the Hot tier.
Complete information on using Azure Block Blobs as Cluster Storage through the AzureHazelcastStore extension can be found in the AzureHazelcastStore Extension article. See also our Products page.
Using Amazon Web Services S3 as Cluster Storage
The AWSHazelcastStore extension enables the Platform to use the AWS S3 service as Cluster Storage. S3 is an Amazon service that allows you to store and manage data in the Amazon cloud.
Complete information on using the AWS S3 service as Cluster Storage through the AWSHazelcastStore extension can be found in the AWSHazelcastStore Extension article. See also our Products page.
System Shutdown
To stop the Platform, send shutdown commands to the application server (or to each application server in a cluster).
If there are several Platform nodes, these communicate with each other to ensure that each one stops in turn.
If you want to stop all nodes at the same time, publish a cluster shutdown command through the REST service: `/rest/cluster/shutdown`.
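As a sketch, such a command could be published with curl (the HTTP method, authentication, host, and context path are assumptions; adapt them to your installation):

```
curl -X POST -u studio-admin http://solution.appway.com/rest/cluster/shutdown
```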
See Shutting Down a Large Cluster for information on the pros and cons of different approaches to cluster shutdown.