Platform Settings for ThingWorx HA
As with all ThingWorx deployments, a ThingWorx clustering deployment requires the platform-settings.json file to exist in the ThingworxPlatform storage location. Each server will have its own ThingworxPlatform folder, and the platform-settings.json file for each server will have minor differences.
For more information on the platform-settings.json settings and to see a generic platform-settings.json file sample, see platform-settings.json Configuration Details.
For a clustered system, the most relevant settings are described below:
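For example, two nodes' files might be identical except for node-specific values such as PlatformId. A minimal sketch of that difference (the IDs and ZooKeeper host names below are placeholders, not values from your installation):
Node 1:
    "ClusteredModeSettings": {
        "PlatformId": "platform1",
        "CoordinatorHosts": "zk1:2181,zk2:2181,zk3:2181"
    }
Node 2:
    "ClusteredModeSettings": {
        "PlatformId": "platform2",
        "CoordinatorHosts": "zk1:2181,zk2:2181,zk3:2181"
    }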
Basic Settings
Basic Setting
Default
Description
EnableClusteredMode
false
Determines whether ThingWorx runs as a cluster (true) or as a standalone server (false).
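For example, a minimal BasicSettings block for a clustered node could look like the following sketch (the storage paths are taken from the example at the end of this topic):
    "BasicSettings": {
        "Storage": "/ThingworxStorage",
        "BackupStorage": "/ThingworxBackupStorage",
        "EnableClusteredMode": true
    }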
Extension Package Import Policy Settings
Extension Package Import Policy Setting
Default
Description
haCompatibilityImportLevel
WARN
When running ThingWorx in cluster mode, you can restrict the import of extensions to only those that have the haCompatibility flag set to true in the extension metadata. The default setting is WARN, which allows the import but generates a warning message. You can change the setting to DENY; in this case, the import fails and an error is generated.
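For example, a sketch of a policy block that rejects extensions without the haCompatibility flag (the other flags mirror the full example at the end of this topic):
    "ExtensionPackageImportPolicy": {
        "importEnabled": true,
        "allowEntities": true,
        "haCompatibilityImportLevel": "DENY"
    }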
Clustered Mode Settings
Settings specific to running ThingWorx in cluster mode. All settings are ignored if the EnableClusteredMode setting above is set to false.
Clustered Mode Setting
Default
Description
PlatformId
none
A unique identifier for each node in the cluster. This ID will be displayed in aggregated logs. It must be alphanumeric and cannot exceed 32 characters. It should match the pattern "^[a-zA-Z0-9]{1,32}$".
CoordinatorHosts
none
A comma-delimited list of the Apache ZooKeeper servers used to coordinate ThingWorx leader election. The string pattern is IP:port (for example, "127.0.0.1:2181,127.0.0.2:2181").
ZKNamespace
ThingWorx
The root node path used to track information in ZooKeeper for the cluster. It is required when running multiple clusters that share the same ZooKeeper. ZooKeeper naming limitations apply; see http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#ch_zkDataModel.
* 
For ThingWorx 9.3.0 and up, the default for this clustered mode setting changes to none.
ModelSyncPollInterval
100
Frequency in milliseconds at which the model is synchronized between servers in the cluster. Each server will check for model changes at this frequency and apply any found.
ModelSyncWaitPeriod
3000
When communicating over WebSockets, traffic is routed round-robin across servers. If a model change is made over WebSockets, the next request waits up to the specified time in milliseconds for the model to sync on whichever server the request lands on. If the sync does not complete before the timeout, the request fails with a timeout error.
ModelSyncTimeout
120000
How long to wait (in milliseconds) for each retry attempt.
ModelSyncMaxDBUnavailableErrors
10
The number of consecutive sync failures from lost database connectivity allowed before the server shuts down. The timeframe in milliseconds is approximately ModelSyncPollInterval * this value.
ModelSyncMaxCacheUnavailableErrors
10
The number of consecutive sync failures from lost cache connectivity allowed before the server shuts down. The timeframe in milliseconds is approximately ModelSyncPollInterval * this value.
CoordinatorMaxRetries
3
If communication with the coordinator fails, ThingWorx retries this many times before failing.
CoordinatorSessionTimeout
90000
How long ThingWorx waits (in milliseconds) without receiving a "heartbeat" from the Apache ZooKeeper service used to coordinate ThingWorx leadership.
CoordinatorConnectionTimeout
10000
The amount of time (in milliseconds) the system will wait for a connection to the coordinator.
MetricsCacheFrequency
60000
Metrics are tracked per server and aggregated for cluster-level metrics. This value is the frequency (in milliseconds) at which the cluster metrics are updated.
IgnoreInactiveInterfaces
true
Optional. When cluster mode is enabled, we attempt to register the ThingWorx server as a service with a service discovery provider. To do this, we look at all available adapters and their addresses and attempt to find a site-local address. If we do not find one, we use the first address we find that is not site-local. This setting impacts this logic. When this setting is true, inactive interfaces are ignored.
IgnoreVirtualInterfaces
true
Optional. When this setting is true, virtual interfaces are ignored. For more information, see the description for IgnoreInactiveInterfaces above.
HostAddressFilter
none
Optional. If specified, we filter addresses based on the regular expression; otherwise, no filter is applied. For example, specify "10\\..*" to filter addresses that start with 10, or "^.*:.*$" to filter addresses that contain a colon. For more information, see the description for IgnoreInactiveInterfaces above.
DCPPort
2551
Akka communication port
* 
Available in ThingWorx 9.4
AkkaNumberOfShards
100
Total number of Akka cluster shards (should be 10 times the number of ThingWorx nodes in the cluster)
* 
Available in ThingWorx 9.4
AkkaEntityTimeout
7m
Idle timeout for Akka actors, specified in minutes (for example, 7m) or seconds (for example, 540s)
* 
Available in ThingWorx 9.4
AkkaSSLEnabled
false
Akka TLS enable/disable
* 
Available in ThingWorx 9.4
AkkaKeyStore
Akka TLS keystore path
* 
Available in ThingWorx 9.4
AkkaTrustStore
Akka TLS truststore path
* 
Available in ThingWorx 9.4
AkkaKeyStorePassword
encrypt.akka.keystore.password
Akka TLS keystore password
* 
Available in ThingWorx 9.4
AkkaTrustStorePassword
encrypt.akka.truststore.password
Akka TLS truststore password
* 
Available in ThingWorx 9.4
AkkaTlsProtocolVersion
TLSv1.2
Akka TLS protocol version
* 
Available in ThingWorx 9.4
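For example, a sketch of a ClusteredModeSettings block for a cluster that shares a ZooKeeper ensemble with another cluster (the host names and the ZKNamespace value are placeholders):
    "ClusteredModeSettings": {
        "PlatformId": "platform1",
        "CoordinatorHosts": "zk1:2181,zk2:2181,zk3:2181",
        "ZKNamespace": "cluster1",
        "ModelSyncPollInterval": 100,
        "DCPPort": 2551
    }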
Cache Settings
There are multiple configurations available in this section, but in general, only the following settings should be changed unless you are tuning for a specific environment:
Setting
Default
Description
basePath
/services
Ignite will create an Ignite folder under basePath that stores the Ignite node entries for service discovery. If using one ZooKeeper for multiple cluster instances, change the default to /clusterXX/services in both the Ignite client configuration and the Ignite server configuration. For more information, see Configuring a Central ZooKeeper Cluster.
* 
The basePath setting is only applicable to ThingWorx 9.3.0 and up.
client-mode
true
Determines if the embedded Ignite runs as a client (default) or server. In server mode, it participates in storing data and will use more memory.
connection
none
For an address-resolver type of zookeeper, a comma-delimited list of the Apache ZooKeeper servers used to coordinate ThingWorx leader election. The string pattern is IP:port (for example, 127.0.0.1:2181, 127.0.0.2:2181).
default-cache-mode
none
It can be set to REPLICATED or PARTITIONED. If set to PARTITIONED, data is spread around the cluster and is replicated to other servers based on the backups setting. If set to REPLICATED, all data from all caches is stored on all servers in the Ignite cluster.
Your settings depend on the HA requirements of the system and the number of Ignite servers being run.
default-atomicity-mode
ATOMIC
Cache atomicity mode determines whether the cache maintains full transactional semantics or more light-weight atomic behavior. ATOMIC mode should be used when transactions and explicit locking are not needed. In ATOMIC mode, the cache will maintain full data consistency across all cache nodes.
default-backups
none
This setting only applies if the cache-mode is set to PARTITIONED. It defines the number of servers that will keep a copy of the data. For an HA environment, it must be set to 1 or more.
default-read-from-backup
false
When you are running in embedded mode (client-mode is set to false), set default-read-from-backup to true so the cache reads locally and performance is increased (see the sketch after this table). When you are running in distributed mode, this setting has no benefit because reads always go across the network to another node; in that case, set it to false.
default-write-sync-mode
PRIMARY_SYNC
You can change this setting to one of the following:
FULL_ASYNC
Ignite does not wait for write or commit responses from participating nodes; therefore, remote nodes may get their state updated after the cache write methods complete or after the Transaction.commit() method completes.
FULL_SYNC
Ignite waits for write or commit replies from all nodes. This eliminates the possibility of data loss on system failure but slows write performance, so it is not recommended.
PRIMARY_SYNC
Ignite waits for the write or commit to complete on the primary node only. This setting applies to CacheMode.PARTITIONED and CacheMode.REPLICATED modes only.
provider-type
none
Must be set to "com.thingworx.cache.ignite.IgniteCacheProvider", which enables the distributed cache.
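For example, based on the settings above, an embedded (non-distributed) Ignite configuration for a two-node cluster might combine the following values (a sketch; the connection value is a placeholder):
    "ignite": {
        "client-mode": false,
        "default-cache-mode": "PARTITIONED",
        "default-backups": 1,
        "default-read-from-backup": true,
        "address-resolver": {
            "type": "zookeeper",
            "connection": "localhost:2181"
        }
    }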
ThingWorx Flow Settings
Copy the OrchestrationSettings section in the platform-settings.json file from the ThingWorx Flow installation machine.
For the QueueHost setting, replace the localhost value with the actual host of the ThingWorx Flow installation on which RabbitMQ was installed, as sketched below. For information on other ThingWorx Flow settings, see platform-settings.json Configuration Details.
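A sketch of the resulting block, assuming RabbitMQ runs on a host named rabbitmq.example.com (that host name is a placeholder; the other values mirror the example at the end of this topic):
    "OrchestrationSettings": {
        "EnableOrchestration": true,
        "QueueHost": "rabbitmq.example.com",
        "QueuePort": 5672,
        "QueueUsername": "flowuser",
        "QueuePassword": "encrypt.queue.password",
        "QueueVHost": "orchestration"
    }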
If installing HA on Ubuntu 22.04 or higher, add the "basePath": "/services" parameter to the platform-settings.json file in the following block:
"cache": {
    "provider-type": "com.thingworx.cache.ignite.IgniteCacheProvider",
    "ignite": {
Example
{
    "PlatformSettingsConfig": {
        "BasicSettings": {
            "BackupStorage": "/ThingworxBackupStorage",
            "EnableBackup": false,
            "EnableClusteredMode": true,
            "EnableSystemLogging": true,
            "Storage": "/ThingworxStorage"
        },
        "ExtensionPackageImportPolicy": {
            "importEnabled": true,
            "allowJarResources": true,
            "allowJavascriptResources": true,
            "allowCSSResources": true,
            "allowJSONResources": true,
            "allowWebAppResources": true,
            "allowEntities": true,
            "allowExtensibleEntities": true,
            "haCompatibilityImportLevel": "WARN"
        },
        "OrchestrationSettings": {
            "EnableOrchestration": true,
            "QueueHost": "<RabbitMQ_Host>",
            "QueuePort": 5672,
            "QueueName": "256mb",
            "QueueUsername": "flowuser",
            "QueuePassword": "encrypt.queue.password",
            "QueueVHost": "orchestration",
            "TurnOffScopesApprovalPopup": true
        },
        "ClusteredModeSettings": {
            "PlatformId": "platform1",
            "CoordinatorHosts": "localhost:2181",
            "ModelSyncPollInterval": 100,
            "DCPPort": 2551,
            "AkkaNumberOfShards": 100,
            "AkkaEntityTimeout": "7m",
            "AkkaSSLEnabled": true,
            "AkkaKeyStore": "C:/tempcerts/akka-keystore.jks",
            "AkkaTrustStore": "C:/tempcerts/akka-keystore.jks",
            "AkkaKeyStorePassword": "encrypt.akka.keystore.password",
            "AkkaTrustStorePassword": "encrypt.akka.truststore.password",
            "AkkaTlsProtocolVersion": "TLSv1.2"
        },
        "AdministratorUserSettings": {
            "InitialPassword": "YOU MUST SET A DEFAULT PASSWORD"
        }
    },
    "PersistenceProviderPackageConfigs": {
        "PostgresPersistenceProviderPackage": {
            "ConnectionInformation": {
                "jdbcUrl": "jdbc:postgresql://localhost:5432/thingworx",
                "password": "thingworx",
                "username": "thingworx"
            }
        }
    },
    "cache": {
        "provider-type": "com.thingworx.cache.ignite.IgniteCacheProvider",
        "ignite": {
            "instance-name": "twx-core-server",
            "default-cache-mode": "PARTITIONED",
            "default-atomicity-mode": "ATOMIC", // never change
            "default-backups": 1, // set based on failure tolerance
            "default-write-sync-mode": "PRIMARY_SYNC", // never change
            "default-read-from-backup": false, // only change to true if running 2 node embedded
            "client-mode": true, // only false for embedded mode
            "metrics-log-frequency": 0,
            "address-resolver": {
                "type": "zookeeper",
                "connection": "localhost:2181"
            },
            "ssl-active": "true",
            "igniteKeyStoreFilePath": "/certs/ignite.pfx",
            "igniteKeyStorePassword": "mykeystorepassword"
        }
    }
}