System Installation Using Containers (2025)
Introduction
The FNZ Studio Platform is a web application that runs on a Java application server and can be accessed through a web browser. No Java application client is required.
The Platform has one or more nodes at runtime. When using containers, each node runs in a separate container. Each container runs a Java application server and the Platform web application.
Furthermore, the Platform requires a storage solution for the Cluster Storage. Optionally, the Platform can also be configured to store large files in one or more Blob Storage solutions.
- See section Configure Cluster Storage for a list of all available storage solutions for Cluster Storage.
- See section Configure Blob Storage for a list of all available storage solutions for Blob Storage.
Preparation
This System Installation guide is structured as follows:
- Section 3 - Standard Docker Images presents the Standard Docker images: how to use them and what their features are. The necessary prerequisites are listed in the section below.
- Section 4 - Running the Platform in Kubernetes shows how to embed the Standard Docker images in a container orchestration system. We use Kubernetes, but other systems are possible and have similar requirements. The necessary prerequisites are listed in the section below.
Prerequisites
To use the Standard Docker images, you need the following:
- From your side:
- A laptop or any other machine with Docker installed.
- From the Platform:
- Access to a Docker repository which has the Standard Docker images.
- The DefaultUser Extension or another mechanism to create users.
For Running the Platform in Kubernetes, you need the following:
- From your side:
- A Kubernetes cluster (for sizing recommendations, please see section Infrastructure Requirements in the Technical Requirements article).
- The tool kubectl
- The tool Helm version 3.
- One Cluster Storage solution (for supported solutions, see details below).
- (Optionally) One or more Blob Storage solutions (for supported solutions, see details below).
- Access credentials for third-party systems that the Platform needs to integrate with, e.g., LDAP, a relational database server, and so on.
- From the Platform:
- Access to a Docker repository which has the Standard Docker images.
- The Platform License file (specifies the number of nodes allowed in the cluster).
- The KubernetesHazelcastDiscovery Extension (required for node discovery in Kubernetes).
- The DefaultUser Extension or another mechanism to create users.
- Additional Extensions that you have licensed (mandatory Extensions are already included in the Standard Docker image).
- Additional Packages that you have licensed (mandatory Packages are already included in the Standard Docker image).
Contact Support to request the necessary files and access credentials to our Docker repository.
Standard Docker Images
This section explains how to use the Standard Docker images and shows their features.

1. To start the Platform using Docker, execute the following:

$ docker login
$ # enter the credentials that you received from the Platform
$ docker run --name appway appwayag/platform

2. When the startup is finished, you should see the following log:

2020-04-07 13:02:29,939 INFO [localhost-startStop-1] com.nm.bricks.impl.DemiurgeBrick - start-up done.

3. To stop the Platform again, press Ctrl-C.
Access the Platform (Expose Ports)
In the previous example, the Platform itself was not actually accessible. To be able to access the Platform running in a container, you need to expose its ports.
The Standard Docker images allow you to expose the following ports:
- 8080 - HTTP port. This port can be used to access the Platform.
- 5713 - Hazelcast port. This port is used by Hazelcast to connect the different nodes in the Cluster.
- 5556 - JMX port. This port can be used to inspect JMX beans.
Follow these steps:
1. Start the Platform again. To access Studio Composition, you need to expose the 8080 port using the -p parameter of docker run:

$ docker run --name appway \
-p 8080:8080 \
appwayag/platform

2. Point a web browser to http://localhost:8080/admin.
Note: At this point, login does not work yet, since authentication needs to be configured first. Authentication will be configured at a later point.
Configure the Platform (Use Environment Variables)
The Standard Docker images understand a number of Environment Variables which allow configuring the Java virtual machine, the Java application server, and the Platform itself.
Here are some examples:
- JAVA_OPTS - Java runtime options. They are passed to the Java runtime when starting it. Examples: -Xmx2g -Duser.timezone=Europe/Zurich -Dfile.encoding=UTF8
- NM_SYSTEM_ID or APPWAY_CLUSTERTOOLS_EVENTS_LIMIT - Platform configuration properties. They are passed to the Platform and allow overriding the configuration of the Platform (Core Platform and Extensions). See details in our documentation.

When starting the Platform, environment variables can be passed using the -e parameter of docker run like this:
$ docker run --name appway \
-p 8080:8080 \
-e NM_SYSTEM_ID=aw-docker \
appwayag/platform
Note that there are also other options to configure the Platform. Check the Configuration Properties section to understand the different types of configuration and when they should be used.
Provide a License File (Mount a File)
A license file has not been specified yet. To specify one, you need to mount it into the Docker container.
Assuming you have a license file called license.properties in the current directory, you can use the -v parameter of docker run to mount it in the proper location. Note that you need to provide an absolute path when mounting the license file; `$(pwd)/license.properties` builds one from the current directory:
$ docker run --name appway \
-p 8080:8080 \
-e NM_SYSTEM_ID=aw-docker \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appwayag/platform
Add Additional Extensions (Mount a Directory)
To add additional extensions:
1. Copy them to a subdirectory called extensions in your current directory and then mount this directory as follows:

$ docker run --name appway \
-p 8080:8080 \
-e NM_SYSTEM_ID=aw-docker \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
-v $(pwd)/extensions:/appway/additional-extensions \
appwayag/platform

2. The additional extensions are picked up and started automatically during startup.
Add Additional Packages (Mount a Directory)
To add additional Packages, a similar approach as for additional extensions (see section above) can be used.
1. Copy the additional Packages to a subdirectory called packages in your current directory and then mount this directory as follows:

$ docker run --name appway \
-p 8080:8080 \
-e NM_SYSTEM_ID=aw-docker \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
-v $(pwd)/extensions:/appway/additional-extensions \
-v $(pwd)/packages:/appway/additional-packages \
appwayag/platform

2. The additional Packages are picked up and imported during startup.
Configure Authentication
To be able to log into the Platform, some users need to be created first.
If there is no mechanism in place to sync users, such as the one described in the LdapSyncAdapter extension article, the DefaultUser extension can be used to create a default user with the id nm and a provided password.
This extension must be uninstalled after the first user has been created; otherwise, it fails to start.
The DefaultUser extension must be added to the extensions subdirectory as described in the Add Additional Extensions section above.
The password of the default user can be configured using environment variables as described in the Configure the Platform (Use Environment Variables) section above. The name of the environment variable that specifies the default user password is APPWAY_DEFAULTUSER_DEFAULT_USER_PASSWORD.
Example:
$ docker run --name appway \
-p 8080:8080 \
-e NM_SYSTEM_ID=aw-docker \
-e APPWAY_DEFAULTUSER_DEFAULT_USER_PASSWORD=some-secure-password \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
-v $(pwd)/extensions:/appway/additional-extensions \
appwayag/platform
Configure Cluster Storage
Now that you know how to add additional extensions and configure them (see sections Configure the Platform and Add Additional Extensions), you are fully equipped to use any supported Cluster Storage solution.
The Platform supports the following Cluster Storage types:
- File system (shipped with the Platform) - This type of Cluster Storage allows persisting the Platform State (business objects, process data, etc.) on a filesystem. See documentation for more info. Consider the following information:
- This can be a local disk or a mounted network drive.
- Every node in the Cluster has to be able to read and write to this filesystem.
- Relational Database (RelationalDbHazelcastStore extension) - With this extension you can persist the Platform State into a relational database. See System Installation.
- Cassandra (CassandraHazelcastStore extension) - With this extension you can persist the Platform State into a Cassandra installation. See System Installation.
- Amazon Web Services S3 (AWSHazelcastStore extension) - With this extension you can persist the Platform State into an Amazon Web Services S3 bucket. See System Installation.
- Azure Blobs (AzureHazelcastStore extension) - With this extension you can persist the Platform State into Azure Blobs. See System Installation.
Except for the File system Cluster Storage type, you need to add the respective extensions as additional extensions to your Docker container when starting the Platform (see section Add Additional Extensions). Furthermore, all these extensions require additional configuration. To configure an extension, use Environment Variables as shown in section Configure the Platform.
To make this fully clear, here is an example on how to start the Platform and persist Cluster Storage in AWS S3 (see code explanation below):
$ docker run --name appway \
-p 8080:8080 \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
-v $(pwd)/extensions:/appway/additional-extensions \
-e APPWAY_AWSHAZELCASTSTORE_COM_NM_EXTENSIONS_AWSHAZELCASTSTORE_ACCESSKEY=AKIAU5.....UULRWV2FH \
-e APPWAY_AWSHAZELCASTSTORE_COM_NM_EXTENSIONS_AWSHAZELCASTSTORE_SECRETKEY=t4WtoEERMIl....h+e2iVq0mAT1svAWwvRc \
-e APPWAY_AWSHAZELCASTSTORE_COM_NM_EXTENSIONS_AWSHAZELCASTSTORE_REGION=eu-west-3 \
-e APPWAY_AWSHAZELCASTSTORE_COM_NM_EXTENSIONS_AWSHAZELCASTSTORE_BUCKETNAME=appway-test-s3-bucket \
appwayag/platform
- Lines 1-3: illustrated in earlier sections.
- Line 4: all the additional extensions are added in the extensions subdirectory. In this example, you add the AWSHazelcastStore extension.
- Lines 5-8: the AWSHazelcastStore extension is configured. To make it work, replace the configuration values with the information from your AWS account.
Configure Blob Storage
Similarly to the Cluster Storage (see section Configure Cluster Storage), the Blob Storage can also be configured through additional extensions and configuration.
The Platform supports the following Blob Storage types:
- File system (shipped with the Platform) - This type of Blob Storage allows persisting large files on a file system. Consider the following information:
- This can be a local disk or a mounted network drive.
- Every node in the Cluster has to be able to read and write to this file system. See System Installation.
- Amazon Web Services S3 (AWSBlobStorage extension) - With this extension you can persist large files into an Amazon Web Services S3 bucket.
Except for the file system type, you need to add the respective extensions as additional extensions to your Docker container when starting the Platform (see section Add Additional Extensions). Furthermore, all these extensions require additional configuration. To configure an extension, use Environment Variables as shown in section Configure the Platform.
For an example on how to start with an additional extension and configure it, see section Configure Cluster Storage.
Persist Runtime Changes
At this point, your installation is already reasonably functional. After designing a new Process or starting new Process Instances, your Business Objects and process data are persisted in the Cluster Storage and survive a restart of your Docker container. However, changes applied to configuration and extensions at runtime are lost after a restart, because those changes are not automatically persisted in the Cluster Storage.
For most solution-related configuration properties, the Platform provides a mechanism for persisting changes made at runtime: Startup Files. However, consider that for some configuration properties and for permanent extensions (that is, plugins), the Platform does not yet provide a robust solution for persisting such changes (for a subset of them, changes at runtime are not supported at all).
To enable persistence of runtime changes, set the following configuration property to true (see section Configure the Platform to pass it as an Environment Variable to the Platform container):
nm.cluster.startup.file.sync.enabled = true
From now on, configuration and extension changes at runtime are also persisted in the Cluster Storage and they are loaded again from the Cluster Storage after a restart.
Let's see this in action (see code explanation below):
$ docker run --name appway \
-p 8080:8080 \
-e NM_CLUSTER_STARTUP_FILE_SYNC_ENABLED=true \
-e NM_CLUSTER_PERSISTENCE_FILESYSTEM_PATH=/appway/cluster-storage \
-v $(pwd)/cluster-storage:/appway/cluster-storage \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appwayag/platform
- Line 3: Persisting runtime changes is enabled.
- Lines 4 and 5: Cluster Storage is configured on a local directory in ./cluster-storage.
If you want to start additional nodes, we recommend enabling an additional feature of Startup Files using the following configuration property:
nm.cluster.startup.file.sync.client.enabled = true
Thanks to this functionality, joining nodes can download the configuration files from the running cluster earlier during startup.
$ docker run --name appway \
-p 8080:8080 \
-e NM_CLUSTER_STARTUP_FILE_SYNC_CLIENT_ENABLED=true \
-e NM_CLUSTER_STARTUP_FILE_SYNC_ENABLED=true \
-e NM_CLUSTER_PERSISTENCE_FILESYSTEM_PATH=/appway/cluster-storage \
-v $(pwd)/cluster-storage:/appway/cluster-storage \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appwayag/platform
To avoid conflicts between configuration files in the Docker image and configuration files in the Cluster Storage, we recommend putting solution-related configuration properties into Content Configuration. Since the Standard Docker images do not provide such configuration, there will never be a conflict, and Content Configuration will always be restored from the Cluster Storage.
Configure Node Discovery
So far, we have been starting the Platform as a single node in a single container. However, for the following steps (see section Running the Platform in Kubernetes) and when running the Platform as a multi-node Cluster, you need to configure how the Platform nodes discover each other.
Traditionally, this has been configured in conf/hazelcast.xml. When starting the Platform in containers, this could still be done by mounting this file in the proper location inside the Docker container. However, this is better done through the options described below.
The Platform supports the following Node Discovery types:
- Multicast: Use the nm.cluster.network.join.multicast.enabled property to enable Multicast. When starting multiple Standard Docker containers in a single multicast network, they discover each other. Example:
# start node 1
$ docker run --name appway-node1 \
-p 8080:8080 \
-e NM_CLUSTER_NETWORK_JOIN_MULTICAST_ENABLED=true \
-e NM_CLUSTER_PERSISTENCE_FILESYSTEM_PATH=/appway/cluster-storage \
-v $(pwd)/cluster-storage:/appway/cluster-storage \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appwayag/platform
# start node 2
$ docker run --name appway-node-2 \
-p 8081:8080 \
-e NM_CLUSTER_NETWORK_JOIN_MULTICAST_ENABLED=true \
-e NM_CLUSTER_PERSISTENCE_FILESYSTEM_PATH=/appway/cluster-storage \
-v $(pwd)/cluster-storage:/appway/cluster-storage \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appwayag/platform
- Kubernetes (KubernetesHazelcastDiscovery extension) - This is required when using Kubernetes. See extension documentation.
- Amazon Web Services EC2 (AWSHazelcastDiscovery extension) - This is required when running the Platform on AWS EC2 instances. See extension documentation.
- Azure (AzureHazelcastDiscovery extension) - This is required when running on virtual machines in Azure. See extension documentation.
To use a Node Discovery extension:
1. Install it as an additional extension and configure it using Environment Variables (see section Configure the Platform).
2. Check whether your license allows more than one node. In the license.properties file, the value of the nm.license.cluster.maxsize property should be larger than 1.
3. After you start the Platform cluster nodes, check whether the cluster was formed correctly. You should see a log similar to the one below:
2020-05-04 10:03:38,908 INFO [hz.appway.generic-operation.thread-0] com.hazelcast.internal.cluster.ClusterService - [172.17.0.3]:5713 [blade] [3.12.6]
Members {size:2, ver:2} [
Member [172.17.0.2]:5713 - 604a0e4f-1152-4edb-8f79-9b649f972b58
Member [172.17.0.3]:5713 - 131440d0-d65a-4946-a98b-47cbcf8b558e this
]
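For reference, the relevant entry in license.properties looks like the following sketch. The value 4 is purely illustrative; your actual license file is issued by Support and should not be edited by hand.

```properties
# Illustrative fragment; the real license.properties is issued by Support
nm.license.cluster.maxsize=4
```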
Extend the Standard Docker Images
Before proceeding with the next section (Running the Platform in Kubernetes), we will suggest some possible alternatives or enhancements to the procedures we have outlined so far.
Up to now, we have used the plain vanilla Standard Docker images and added different configuration options when starting the Docker container. For example, we mounted a local directory to add additional extensions in section Add Additional Extensions. While this works fine for a single-node installation, it might not be as easy (or even possible) when using a container orchestration system like Kubernetes. This is because Kubernetes has to manage many different containers running on different machines. Therefore, mounting the current, local directory while starting a Docker container does not work.
For this reason, a good alternative is to extend the Standard Docker images and include additional artifacts in the image directly instead of mounting them while starting the Docker container. We recommend this for:
- Additional extensions (instead of mounting them, see section Add Additional Extensions) and
- Additional Packages (instead of mounting them, see section Add Additional Packages (Mount a Directory)).
Moreover, we recommend using configuration objects (e.g. ConfigMap for Kubernetes) (for further details see section Running the Platform in Kubernetes) for:
- Exposing ports (see section Access the Platform),
- Configuring the Platform (see section Configure the Platform),
- Providing a License File (see section Provide a License File).
To extend the Standard Docker images, follow these steps:
1. Create a file named Dockerfile in the current directory with the following content:

ARG BASE_IMAGE_NAME
FROM ${BASE_IMAGE_NAME}
# add additional extensions
COPY --chown=appway:appway ./resources/appway-extensions ${NM_DATA_HOME}/extensions/
# add additional packages
COPY --chown=appway:appway ./resources/appway-packages ${NM_DATA_HOME}/packages/

2. Download additional extensions and Packages and store them in ./resources/appway-extensions and ./resources/appway-packages respectively.

3. Add a file named configuration.xml in ./resources/appway-extensions (or copy an existing configuration.xml file from any ${NM_DATA_HOME}/extensions/ directory) and make sure it contains all the mandatory extensions plus the additional extensions that you want to add. Each extension should be listed with an entry like the following. Note: Currently, this step is needed to ensure the additional extensions are started when the Platform starts:

<Adapter name="ClusterTools" state="2" startup="2" priority="0"/>

4. Execute docker build with the following parameters (Line 2: Docker is instructed to base the new image on the given Standard Docker image. Line 3: the new image is given a new name and tag):

$ docker build --pull \
--build-arg BASE_IMAGE_NAME=appwayag/platform \
-t appway:latest-extended \
-f Dockerfile .

5. To start using the new Docker image, replace appwayag/platform with appway:latest-extended in the commands shown above, e.g.:

$ docker run --name appway \
-p 8080:8080 \
-v $(pwd)/license.properties:/appway/data-home/conf/license.properties \
appway:latest-extended

6. Finally, push this new, extended Docker image into your own Docker repository, so that your container orchestration system can use it.
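For illustration, the configuration.xml entries for the extensions used in this guide might look like the following sketch. The state/startup/priority values simply mirror the example entry shown above; verify them against a real configuration.xml rather than assuming they apply to every extension.

```xml
<!-- Hypothetical entries; copy the attribute values from an existing configuration.xml -->
<Adapter name="ClusterTools" state="2" startup="2" priority="0"/>
<Adapter name="KubernetesHazelcastDiscovery" state="2" startup="2" priority="0"/>
<Adapter name="DefaultUser" state="2" startup="2" priority="0"/>
```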
Running the Platform in Kubernetes
In the previous section, we outlined how to use the Standard Docker images. While doing so, we also experienced how the various Docker-run commands became longer and longer, even for one single container on a single machine. In a real installation, you would not have one single container, but possibly:
- Multiple nodes (for failover or scalability)
- A load balancer
- A service to renew SSL certificates (possibly)
- A relational database (possibly)
- Others
To manage all those containers, you need a container orchestration system. In this document, we use Kubernetes (https://kubernetes.io/) but similar principles apply to other systems.
Prerequisites
To deploy the Platform on Kubernetes, you need a Kubernetes cluster with the kubectl and Helm tools installed (see the Prerequisites section at the beginning of this guide).
For smoke tests on local machines, the simplest way to set up a Kubernetes cluster is by using Minikube. Minikube is a tool that makes it easy to run Kubernetes locally.
1. Install Minikube by following the instructions presented here: https://kubernetes.io/docs/tasks/tools/install-minikube/

2. After installing Minikube, we suggest updating the settings to increase the CPU and memory of the Minikube virtual machine. Power off the machine before performing these changes.
- Increase memory: VirtualBox > System > Motherboard > Base Memory > choose 6 GB
- Increase CPUs: VirtualBox > System > Processor > Processor(s) > choose 4 CPUs

3. At this point, you are ready to start:

$ minikube start

4. Finally, make sure you are using the correct Kubernetes context. The result of the following command should be 'minikube':

$ kubectl config current-context
Helm Chart
Helm is a tool that helps you manage Kubernetes applications. A Helm Chart represents a collection of Kubernetes resources that define how an application should be installed.
You can find a simple Helm Chart for the Platform that can be extended to cover more complex scenarios in the following attachment (zip file): helmchartexamplev.2.0.1.zip
The zip file contains two folders:
- appway - The actual Helm Chart
- example-config - Example resources that show how the Helm Chart can be configured

In the following sections, we refer to the configuration folder as example-config, but you should rename this folder for your installations to better describe the environment for which it is used, e.g., appway-onboarding-dev or appway-onboarding-uat.
Docker Repository Access
Standard Docker images are stored in a private repository, therefore you need to configure the credentials that you received from the Platform to gain access to them.
You also need to configure the tag of the Docker image. All these values have to be set in the example-config/values.yaml file:
image:
tag: 11.0.0
username: your-username
password: 123...
Platform License File
The example Helm chart contains all the resources needed to configure the Platform license file.
All you need to do is update the content of the example-config/license.properties file with the actual license file that you received from the Platform.
Platform Configuration
There are various options to configure the Platform. Check the Configuration Properties document to understand the different types of configuration and when they should be used.
The example Helm chart provided above (see Helm Chart) demonstrates how environment variables can be used to configure the Platform.
Check the example-config/environment-config.yaml file to understand how to pass environment variables to configure the Platform.
Using environment variables comes in handy if you have to set a limited number of configuration properties that are spread across multiple files. However, this approach is not feasible if you have a large number of configuration properties.
In this case, you can extend the Helm chart and define other resources that can be mounted as volumes in the places where the Platform configuration files are defined (e.g., /appway/data-home/conf/content.properties).
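As a sketch of that volume-based approach, a ConfigMap could carry the file content and be mounted at the expected path. The resource names and the surrounding Pod template fields below are hypothetical; adapt them to the actual templates in the Helm Chart.

```yaml
# Hypothetical ConfigMap carrying a Platform configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  name: appway-content-properties
data:
  content.properties: |
    # solution-specific configuration properties go here
---
# Fragment of a hypothetical Pod template mounting the file at the expected location
spec:
  containers:
    - name: appway
      volumeMounts:
        - name: content-properties
          mountPath: /appway/data-home/conf/content.properties
          subPath: content.properties
  volumes:
    - name: content-properties
      configMap:
        name: appway-content-properties
```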
If you build a custom Docker image on top of the Platform image, you can also consider including the configuration in the Docker image.
Other Helm Chart Configurations
You can find the complete list of configurations of the Helm Chart in the appway/values.yaml file.
The most notable ones are:
- replicaCount - The number of nodes that you want to start.
- appway.java_opts - The JAVA_OPTS property passed to the JVM. You can set the amount of memory that the Platform should use.
- terminationGracePeriodSeconds - The time (in seconds) to wait for the Platform to shut down. Check section Stop the Platform for more details.
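As a sketch, an override file setting these values might look as follows. The numbers are illustrative starting points, not sizing recommendations; see the Technical Requirements article for actual sizing.

```yaml
# Illustrative overrides for the most notable Helm Chart settings
replicaCount: 2                       # start two Platform nodes
appway:
  java_opts: "-Xmx4g -Duser.timezone=Europe/Zurich"   # passed to the JVM as JAVA_OPTS
terminationGracePeriodSeconds: 60     # time allowed for a clean shutdown
```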
Configure Additional Extensions and Packages
When you deploy the Platform on Kubernetes, you need at least one additional extension: KubernetesHazelcastDiscovery. This extension allows the Platform nodes to find and communicate with each other.
There are various possibilities to configure additional extensions and Packages: they can be provided by mounting external volumes or you can extend the Standard Docker images and create new Docker images that contain the additional extensions and packages.
Cluster Storage
Check section Configure Cluster Storage for details on how to configure the Cluster Storage.
Access the Platform
For simplicity reasons, the provided Helm Chart uses a Kubernetes Service of type LoadBalancer to expose the Platform to its users. However, depending on your needs, this Chart can be extended and you can configure more flexible ways of exposing the Platform, for example using Ingress resources.
Again, for simplicity reasons, the Helm chart does not include resources to automate the generation and usage of certificates. The official Helm Chart cert-manager can be used for this purpose.
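If you opt for an Ingress instead of the LoadBalancer Service, a minimal resource might look like the following sketch. The host name, ingress class, and Service name are placeholders; TLS settings and annotations depend on your ingress controller.

```yaml
# Hypothetical Ingress routing external traffic to the Platform Service on port 8080
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appway
  namespace: appway-ns
spec:
  ingressClassName: nginx
  rules:
    - host: appway.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: appway
                port:
                  number: 8080
```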
The following ports are configured by default:
- 8080 - HTTP port. This port can be used to access Studio Composition and Studio Runtime.
- 5713 - Hazelcast port. This port is used by Hazelcast to connect the different nodes in the Cluster.
- 5556 - JMX port. This port can be used to inspect JMX beans.
Deploy the Platform
1. Before deploying the Platform, create a Kubernetes namespace for the installation:

$ kubectl create namespace appway-ns

2. Deploy the Platform by running the helm install command as shown in the example below:

$ helm install appway \
--namespace appway-ns \
-f example-config/values.yaml \
--set-file environment_config=example-config/environment-config.yaml,license_properties=example-config/license.properties \
appway

3. Once the helm install command has finished, you can check the status of the deployment and access the Platform once it is ready.

# Check the list of Helm deployments
$ helm ls -n appway-ns
# Check Kubernetes Pods
$ kubectl get pods -n appway-ns -o wide
$ kubectl describe pods appway -n appway-ns
# Check the logs of the Platform Pods
$ kubectl logs -f appway-0 -n appway-ns
# Debug Appway Pods
$ kubectl exec -n appway-ns -ti appway-0 -- /bin/bash

4. Depending on how the Platform was exposed externally, there are different options to retrieve the URL to access the Platform:

- Load Balancer:

# Use the EXTERNAL-IP column together with port 8080
$ kubectl get services -n appway-ns

- Minikube:

$ minikube service appway -n appway-ns --url
Upgrade the Platform
To upgrade the Platform version (Core Platform and Extensions), shut down the Cluster (see section Stop the Platform), update the Docker image coordinates in the Helm chart, and restart.
To update the Docker image tag, edit example-config/values.yaml and change the tag to the new version:
appway:
tag: 11.0.0
Note: When upgrading to a new feature version (e.g. from 10.0.x to 10.1.x; or from 10.x to 11.x), follow the upgrade instructions carefully.
Stop the Platform
To shut down the Platform correctly, be aware of the Kubernetes procedure to terminate a Pod:
- Kubernetes sends a SIGTERM signal to the Platform containers in the Pod. This triggers the Platform shutdown procedure.
- Kubernetes then waits for a grace period that is configurable. It is very important that you configure a sufficiently large value for the terminationGracePeriodSeconds to allow the Platform to save all the data before shutting down.
If the grace period is configured to an excessively low value, data loss can occur.
We recommend setting a grace period of 60s to begin with, but it is very important that you test how long your Solution actually takes to shut down, and increase the grace period accordingly if your Solution needs more time to stop.
If the Platform stops before the grace period has expired, Kubernetes will not wait until the grace period expires, but will stop immediately.
After the Platform has shut down or the grace period has expired, Kubernetes sends a SIGKILL signal to the Pod, and the Pod is removed.
To stop the Platform cluster and clean the Helm resources that were created, you can run the following command:
$ helm delete appway -n appway-ns