Manual In Place Upgrade to 9.0.x, 9.1.x, and 9.2.x: Linux
Refer to the upgrade table to determine your upgrade path. The steps below are for in place upgrade only. For a migration upgrade, refer to Manual Migration to ThingWorx 9.x: Linux.
* 
At this time, the big integer/timezone database migration scripts are not supported for H2; they are provided only for the other supported databases. If you have an existing H2 database and require the timezone correction, you must migrate to a supported database such as PostgreSQL or MS SQL. If your application functions without the timezone correction, you can upgrade to the latest ThingWorx version on H2; in that case, skip the Set the ThingWorx Server Timezone section below.
A.) Before You Upgrade 
1. If your OS is RHEL, verify that you have upgraded to the supported version before performing an upgrade of ThingWorx. Refer to System Requirements for more information.
* 
ThingWorx 9.1 is only supported on RHEL 8.2.
2. Before beginning the upgrade, it is recommended that you perform the following backups:
Create a database dump.
Back up all data in the ThingworxStorage and ThingworxPlatform folders.
Back up the Tomcat_home folder, including the bin, conf, lib, temp, webapps, and work folders.
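The backups above can be sketched as shell commands. Everything below is an assumption to adapt: the paths point at throwaway demo directories under /tmp so the sketch can run anywhere, and the pg_dump line (PostgreSQL shown) is commented out because it needs a live database.

```shell
#!/bin/sh
# Pre-upgrade backup sketch. All paths are assumptions (demo directories
# under /tmp); substitute your real ThingworxStorage, ThingworxPlatform,
# and Tomcat locations.
TWX_STORAGE="${TWX_STORAGE:-/tmp/twx-demo/ThingworxStorage}"
TWX_PLATFORM="${TWX_PLATFORM:-/tmp/twx-demo/ThingworxPlatform}"
TOMCAT_HOME="${TOMCAT_HOME:-/tmp/twx-demo/tomcat}"
BACKUP_DIR="${BACKUP_DIR:-/tmp/twx-demo/backup}"
mkdir -p "$TWX_STORAGE" "$TWX_PLATFORM" "$TOMCAT_HOME" "$BACKUP_DIR"

# Database dump (PostgreSQL shown; needs a live database, so left commented):
# pg_dump -Fc -h localhost -U twadmin thingworx > "$BACKUP_DIR/thingworx.dump"

# ThingworxStorage and ThingworxPlatform folders:
tar -czf "$BACKUP_DIR/thingworx-data.tar.gz" "$TWX_STORAGE" "$TWX_PLATFORM" 2>/dev/null

# Tomcat home folder (bin, conf, lib, temp, webapps, and work):
tar -czf "$BACKUP_DIR/tomcat-home.tar.gz" "$TOMCAT_HOME" 2>/dev/null
```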
3. If you are using ThingWorx Apps in addition to ThingWorx platform:
a. Verify that the version of ThingWorx you are upgrading to is supported with the version of ThingWorx Apps. See ThingWorx Apps Upgrade Support Matrix.
b. There are steps that you must take before upgrading the platform. See Upgrading ThingWorx Apps before proceeding to the next step.
4. If you also have Navigate installed, verify compatibility at ThingWorx Navigate Compatibility Matrix.
5. Obtain the latest version of ThingWorx at PTC Software Downloads.
6. Verify that you are running the required versions of Tomcat and Java. Refer to the System Requirements for version requirements.
* 
If you must upgrade your Java version, perform the ThingWorx upgrade before upgrading Java.
7. If you are using MSSQL, Azure SQL, or H2, the upgrade will fail if any custom index field values are missing in the data tables. Verify that all custom index fields have values before starting the upgrade process.
* 
If you do not, the upgrade will fail and you will have to deploy the older version again (if schema updates were made, you must roll back or restore the database), then either add the missing index values or remove the custom indexes from the data table before performing the upgrade again.
8. Add the following to the Apache Tomcat Java Options:
-Dlog4j2.formatMsgNoLookups=true
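On Linux, a common place to add Tomcat Java options is a setenv.sh script read by catalina.sh at startup; the sketch below appends the option idempotently. The setenv.sh path is an assumption (a demo file is used so the sketch can run anywhere), and some packaged Tomcats read options from a service configuration file instead.

```shell
# Append the log4j2 option to Tomcat's Java options via setenv.sh.
# SETENV is an assumption: normally $CATALINA_HOME/bin/setenv.sh.
SETENV="${SETENV:-/tmp/twx-demo-setenv.sh}"
touch "$SETENV"
# Only append if the option is not already present (idempotent):
grep -q 'formatMsgNoLookups' "$SETENV" || \
  echo 'CATALINA_OPTS="$CATALINA_OPTS -Dlog4j2.formatMsgNoLookups=true"' >> "$SETENV"
```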
B.) Export Stream and Value Stream Data (InfluxDB only) 
* 
The steps in this section are only required if you are upgrading ThingWorx with InfluxDB 1.7.x (for ThingWorx 8.5.x or 9.0.x) to InfluxDB 1.8.x (for ThingWorx 9.1.x or 9.2.x).
1. Export data from InfluxDB 1.7.x/MS SQL/PostgreSQL:
a. Log in to ThingWorx as the Administrator.
b. Click Import/Export > Export.
c. Use the following options:
For Export Option, select To File.
For Export Type, select Collection of Data.
For Collection, select Streams.
Click Export.
d. Repeat steps a-c for value stream data.
e. Move the exported folder for the stream and value stream data created from the system repository to a safe location as a backup.
C.) Stop and Delete the ThingWorx Webapp 
1. Stop Tomcat.
2. It is highly recommended to back up the following two folders before continuing:
Apache Software Foundation/Tomcat x.x/webapps/Thingworx
/ThingworxStorage
3. If your current Tomcat version is not supported with the target ThingWorx version, update to a supported Tomcat version.
4. To retain the SSO configurations from the existing installation, back up the web.xml file from the <Tomcat Installation directory>/webapps/Thingworx/WEB-INF folder.
5. Back up and delete the validation.properties file from the /ThingworxStorage/esapi directory.
* 
The validation.properties file is created upon startup of ThingWorx. If you want to retain any changes you have made, save the file outside of the ThingworxStorage directory and then proceed with removing the esapi directory. Upon startup, ThingWorx will recreate the file and you can add your custom regexes back into the validation.properties file that was automatically generated.
6. Go to the Tomcat installation at /Apache Software Foundation/Tomcat x.x/webapps and delete the Thingworx.war file and the Thingworx folder.
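As a sketch, section C reduces to a few commands. The demo below operates on a throwaway directory so it can run safely; on a real system, point WEBAPPS at your actual Tomcat webapps folder and stop Tomcat first (the shutdown line is commented because it needs a live installation).

```shell
# Demo stand-ins so the sketch is safe to run anywhere (assumptions):
WEBAPPS="${WEBAPPS:-/tmp/twx-demo-webapps}"   # normally .../Tomcat x.x/webapps
mkdir -p "$WEBAPPS/Thingworx"
touch "$WEBAPPS/Thingworx.war"

# 1. Stop Tomcat (commented: live systems only):
# "$CATALINA_HOME/bin/shutdown.sh"

# 6. Delete the Thingworx.war file and the Thingworx folder:
rm -rf "$WEBAPPS/Thingworx.war" "$WEBAPPS/Thingworx"
```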
D.) Set the ThingWorx Server Timezone 
Skip this step if you are upgrading on H2. For all other databases, add the following parameter to the Tomcat Java Options to set the ThingWorx server timezone:
-Duser.timezone=UTC
E.) Update Schema and Migrate Data (PostgreSQL only) 
* 
Only step 1 in this section is required for all upgrades.
Perform the steps in the rest of this section if you are upgrading from ThingWorx 8.4.x or 8.5.x to 9.0.x, 9.1.x, or 9.2.x.
Skip the steps in the rest of this section if you are upgrading from ThingWorx 9.0.x or 9.1.x to 9.1.x or 9.2.x.
1. Run the following scripts that are located in the update folder in the ThingWorx software download (starting with the version you are upgrading from):
thingworxPostgresSchemaUpdate8.4-to-8.5.sh
thingworxPostgresSchemaUpdate8.5-to-9.0.sh
thingworxPostgresSchemaUpdate9.1-to-9.2.sh
* 
You do not need to run the thingworxPostgresSchemaUpdate9.0-to-9.1.sh script because there are no schema updates in 9.1. Although the script is included in the update folder for completeness, it is empty and is intended only for users with automated upgrade processes.
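Step 1 can be scripted as a loop that stops on the first failure; adjust the list to start at the version you are upgrading from. UPDATE_DIR is an assumption, and for this demo empty stub scripts are created so the loop can run anywhere; on a real upgrade the scripts come from the ThingWorx software download.

```shell
# Run the schema update scripts in order, stopping on the first failure.
UPDATE_DIR="${UPDATE_DIR:-/tmp/twx-demo-update}"   # assumption: path to the update folder
mkdir -p "$UPDATE_DIR" && cd "$UPDATE_DIR" || exit 1
for s in thingworxPostgresSchemaUpdate8.4-to-8.5.sh \
         thingworxPostgresSchemaUpdate8.5-to-9.0.sh \
         thingworxPostgresSchemaUpdate9.1-to-9.2.sh; do
  [ -f "$s" ] || printf '#!/bin/sh\nexit 0\n' > "$s"   # demo stub only
  sh "$s" || { echo "FAILED: $s"; exit 1; }
  echo "OK: $s"
done
```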
* 
The steps in the rest of this section should only be performed if you are upgrading from ThingWorx 8.4.x or 8.5.x to 9.0.x, 9.1.x, or 9.2.x. Skip the steps in the rest of this section if you are upgrading from ThingWorx 9.0.x or 9.1.x to 9.1.x or 9.2.x.
2. Run the big integer/timezone setup script to prepare the database for migration:
* 
If you are already running ThingWorx in UTC, you still need to run the migration for the big integer changes, but the sourceTZ and targetTZ parameters (available in some of the scripts in the steps below) can both be supplied the value of UTC.
thingworxPostgresDBSetupBigIntTimezoneDataUpdate.sh
3. To find all supported time zones, use the following command:
select pg_timezone_names()
* 
When specifying timezones for the data migration scripts below, the specified timezone names must exactly match one of the formal names displayed by the pg_timezone_names() script.
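As a quick local sanity check before supplying -sourceTZ/-targetTZ values, an IANA name can usually be checked against the system tzdata files. This is a sketch under an assumption (tzdata installed under /usr/share/zoneinfo, which holds on most Linux systems); the authoritative list is still pg_timezone_names() in your database.

```shell
# Check whether a timezone name exists in the local tzdata (assumption:
# tzdata is installed under /usr/share/zoneinfo).
check_tz() {
  if [ -f "/usr/share/zoneinfo/$1" ]; then echo "ok: $1"; else echo "unknown: $1"; fi
}
check_tz "US/Eastern"
check_tz "Etc/UTC"
```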
4. Run the model migration script to migrate all model data:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxPostgresModelTablesDataUpdate.sh
* 
Usage:
thingworxPostgresModelTablesDataUpdate.sh [-h <server>] [-p <port>] [-d <Thingworx database name>] [-u <thingworx database username>] [-r <password>] [-m <azure managed instance name>] [-sourceTZ <source timezone>] [-targetTZ <target timezone>]
Example:
thingworxPostgresModelTablesDataUpdate.sh -sourceTZ US/Eastern -targetTZ Etc/UTC
5. Run each of the following schema migration scripts to create backup tables of all data table, stream, and value stream data:
* 
For performance reasons, these scripts do not actually create a copy of the original data in these tables. Instead, these scripts rename these existing tables from “<original-table-name>” to “<original-table-name>_backup”. This circumvents the potentially time-consuming process of actually copying the data. Once these existing tables are renamed (thus becoming the backup tables), new tables are created with the original names. These new tables are empty and serve the same purpose as the original tables (because they have the same names as the original tables). These new tables will get populated with migrated data in later steps.
thingworxPostgresDataTableSchemaUpdate.sh
thingworxPostgresStreamSchemaUpdate.sh
thingworxPostgresValueStreamSchemaUpdate.sh
6. Open a new command prompt window and run the following data migration script to migrate the data table data within the backup table:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxPostgresDataTableDataUpdate.sh
* 
Usage:
thingworxPostgresDataTableDataUpdate.sh [-h <server>] [-p <port>] [-d <thingworx database name>] [-u <thingworx database username>] [-r <password>] [-m <azure managed instance name>] [-sourceTZ <source timezone>] [-targetTZ <target timezone>] [-chunkSize <chunk size>]
Example:
thingworxPostgresDataTableDataUpdate.sh -sourceTZ US/Eastern -targetTZ Etc/UTC
Once this migration script is started, wait until a message is displayed on the console indicating that it is safe to restart Tomcat. Once that message is displayed, it is safe to proceed to the next step, even if this migration script has not yet finished executing.
7. Open a new command prompt window and run the following data migration scripts to migrate the stream and value stream data from the backup tables:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxPostgresStreamDataUpdate.sh
thingworxPostgresValueStreamDataUpdate.sh
* 
Usages:
thingworxPostgresStreamDataUpdate.sh [-h <server>] [-p <port>] [-d <thingworx database name>] [-u <thingworx database username>] [-r <password>] [-m <azure managed instance name>] [-sourceTZ <source timezone>] [-targetTZ <target timezone>] [-chunkSize <chunk size>]
thingworxPostgresValueStreamDataUpdate.sh [-h <server>] [-p <port>] [-d <thingworx database name>] [-u <thingworx database username>] [-r <password>] [-m <azure managed instance name>] [-sourceTZ <source timezone>] [-targetTZ <target timezone>] [-chunkSize <chunk size>]
Examples:
thingworxPostgresStreamDataUpdate.sh -sourceTZ US/Eastern -targetTZ Etc/UTC -chunkSize 5000
thingworxPostgresValueStreamDataUpdate.sh -sourceTZ US/Eastern -targetTZ Etc/UTC -chunkSize 5000
Once these two migration scripts are started, do not proceed to the next step until these migration scripts, as well as the data table migration script (started in a previous step), have completed successfully.
8. Manually verify that the following scripts have all completed successfully: thingworxPostgresDataTableDataUpdate.sh, thingworxPostgresStreamDataUpdate.sh, and thingworxPostgresValueStreamDataUpdate.sh. Verify that all data table, stream, and value stream data has been successfully migrated.
9. Run the cleanup script to remove any temporary database objects needed during migration:
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, this script does *not* delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, then they must be deleted manually.
thingworxPostgresDBCleanupBigIntTimezoneDataUpdate.sh
Script Troubleshooting
* 
The following commands must be run against the ThingWorx database.
Error code 23505 means there was a duplicate key unique constraint violation on insert. To resolve this issue:
1. Run the following command:
select * from migration_log where status = -1 or status = 0
2. Record the ranges for the fromId and toId for those rows returned.
3. Query the data table for the entry_ids written down from the migration log.
4. If all the records are there for a given range, change that row's status from 0 or -1 to 1. If records are partially missing from a range, retry the migration with a smaller chunkSize, or delete the records already migrated into the table and try the migration again.
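The recovery steps above can be sketched as psql invocations. The connection settings, column names, and the status-update statement are assumptions (verify them against your migration_log schema), and the sketch runs in dry-run mode by default so it only prints the SQL.

```shell
# Dry-run sketch of the error 23505 recovery queries (assumptions: connection
# settings and column names; <fromId>/<toId> are placeholders to fill in).
PSQL="psql -h localhost -U twadmin -d thingworx -c"
DRY_RUN=1   # set to 0 to execute against the ThingWorx database
run_sql() { if [ "$DRY_RUN" = 1 ]; then echo "SQL> $1"; else $PSQL "$1"; fi; }

# Step 1: find failed (-1) or incomplete (0) chunks:
run_sql "select * from migration_log where status = -1 or status = 0"
# Step 4: after confirming all records in a range are present, mark it done:
run_sql "update migration_log set status = 1 where fromId = <fromId> and toId = <toId>"
```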
To find all supported time zones, use the following command:
select pg_timezone_names()
You must use the formal name (listed first in the output) so that PostgreSQL can automatically resolve the offset to shift the date from, based on the timestamp given.
To verify that all entry_ids have been migrated, use a SQL query similar to the following:
SELECT entry_id FROM data_table_backup EXCEPT SELECT entry_id FROM data_table
SELECT entry_id FROM data_table EXCEPT SELECT entry_id FROM data_table_backup
If the PGPASSWORD environment variable is set on the system, you must pass its value in the -r parameter so that the scripts run with the correct password.
F.) Update Schema and Migrate Data (MSSQL only) 
* 
Only step 1 in this section is required for all upgrades.
Perform the steps in the rest of this section if you are upgrading from ThingWorx 8.4.x or 8.5.x to 9.0.x, 9.1.x, or 9.2.x.
Skip the steps in the rest of this section if you are upgrading from ThingWorx 9.0.x or 9.1.x to 9.1.x or 9.2.x.
1. Copy the entire update folder located in the ThingWorx software download to the MS SQL server and run the following scripts that are located in the update folder (starting with the version you are upgrading from):
thingworxMssqlSchemaUpdate8.4-to-8.5.sh
thingworxMssqlSchemaUpdate8.5-to-9.0.sh
thingworxMssqlSchemaUpdate9.1-to-9.2.sh
* 
You do not need to run the thingworxMssqlSchemaUpdate9.0-to-9.1.sh script because there were no schema changes in 9.1. Although the script is included in the update folder for completeness, it is empty and is intended only for users with automated upgrade processes.
* 
The steps in the rest of this section should only be performed if you are upgrading from ThingWorx 8.4.x or 8.5.x to 9.0.x, 9.1.x, or 9.2.x. Skip the steps in the rest of this section if you are upgrading from ThingWorx 9.0.x or 9.1.x to 9.1.x or 9.2.x.
2. Run the big integer/timezone setup script to prepare the database for migration:
* 
If you are already running ThingWorx in UTC, you still need to run the migration for the big integer changes, but the sourceTZ and targetTZ parameters (available in some of the scripts in the steps below) can both be supplied the value of UTC.
thingworxMssqlDBSetupBigIntTimezoneDataUpdate.sh
3. Run the model migration script to migrate all model data:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxMssqlModelTablesDataUpdate.sh
* 
Usage:
thingworxMssqlModelTablesDataUpdate.sh [-h <server>] [-i <server-instance>] [-p <port>] [-r <password>] [-l <login-name>] [-d <thingworx-database-name>] [-sourceTZ <source-timezone>] [-targetTZ <target-timezone>]
Example:
thingworxMssqlModelTablesDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC
4. Run each of the following schema migration scripts to create backup tables of all data table, stream, and value stream data.
* 
For performance reasons, these scripts do not actually create a copy of the original data in these tables. Instead, these scripts rename these existing tables from “<original-table-name>” to “<original-table-name>_backup”. This circumvents the potentially time-consuming process of actually copying the data. Once these existing tables are renamed (thus becoming the backup tables), new tables are created with the original names. These new tables are empty and serve the same purpose as the original tables (because they have the same names as the original tables). These new tables will get populated with migrated data in later steps.
thingworxMssqlDataTableSchemaUpdate.sh
thingworxMssqlStreamSchemaUpdate.sh
thingworxMssqlValueStreamSchemaUpdate.sh
5. Open a new command prompt window and run the following data migration script to migrate the data table data within the backup table:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxMssqlDataTableDataUpdate.sh
* 
Usage:
thingworxMssqlDataTableDataUpdate.sh [-h <server>] [-i <server-instance>] [-p <port>] [-r <password>] [-l <login-name>] [-d <thingworx-database-name>] [-sourceTZ <source-timezone>] [-targetTZ <target-timezone>] [-chunkSize <chunk-size>]
Example:
thingworxMssqlDataTableDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
Once this migration script is started, wait until a message is displayed on the console indicating that it is safe to restart Tomcat. Once that message is displayed, it is safe to proceed to the next step, even if this migration script has not yet finished executing.
6. Open a new command prompt window and run the following data migration scripts to migrate the stream and value stream data from the backup tables:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxMssqlStreamDataUpdate.sh
thingworxMssqlValueStreamDataUpdate.sh
* 
Usages:
thingworxMssqlStreamDataUpdate.sh [-h <server>] [-i <server-instance>] [-p <port>] [-r <password>] [-l <login-name>] [-d <thingworx-database-name>] [-sourceTZ <source-timezone>] [-targetTZ <target-timezone>] [-chunkSize <chunk-size>]
thingworxMssqlValueStreamDataUpdate.sh [-h <server>] [-i <server-instance>] [-p <port>] [-r <password>] [-l <login-name>] [-d <thingworx-database-name>] [-sourceTZ <source-timezone>] [-targetTZ <target-timezone>] [-chunkSize <chunk-size>]
Examples:
thingworxMssqlStreamDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
thingworxMssqlValueStreamDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
Once these two migration scripts are started, do not proceed to the next step until these migration scripts, as well as the data table migration script (started in a previous step), have completed successfully.
7. Manually verify that the following scripts have all completed successfully: thingworxMssqlDataTableDataUpdate.sh, thingworxMssqlStreamDataUpdate.sh, and thingworxMssqlValueStreamDataUpdate.sh. Verify that all data table, stream, and value stream data has been successfully migrated.
8. Run the cleanup script to remove any temporary database objects needed during migration:
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, this script does *not* delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, then they must be deleted manually.
thingworxMssqlDBCleanupBigIntTimezoneDataUpdate.sh
Script Troubleshooting
To find all supported time zones, use the following command:
select * from sys.time_zone_info
To verify that all entry_ids have been migrated, use a SQL query similar to the following:
select dt.entry_id, dtb.entry_id from [thingworx].[twschema].[data_table_backup] dtb full join [thingworx].[twschema].[data_table] dt on dtb.entry_id = dt.entry_id where dtb.entry_id is null or dt.entry_id is null
G.) Update Schema and Migrate Data (Azure SQL only) 
* 
Only step 1 in this section is required for all upgrades.
Perform the steps in the rest of this section if you are upgrading from ThingWorx 8.4.x or 8.5.x to 9.0.x, 9.1.x, or 9.2.x.
Skip the steps in the rest of this section if you are upgrading from ThingWorx 9.0.x or 9.1.x to 9.1.x or 9.2.x.
1. Run the following scripts located in the update folder of the ThingWorx software download (starting with the version you are upgrading from):
* 
The number of permissions that exist in the platform may affect the time needed to complete the upgrade; more permissions can increase the upgrade completion time.
thingworxAzureSchemaUpdate8.4-to-8.5.sh
thingworxAzureSchemaUpdate8.5-to-9.0.sh
thingworxAzureSchemaUpdate9.1-to-9.2.sh
* 
You do not need to run the thingworxAzureSchemaUpdate9.0-to-9.1.sh script because there are no schema updates in 9.1. Although the script is included in the update folder for completeness, it is empty and is intended only for users with automated upgrade processes.
* 
Usage:
./thingworxAzureSchemaUpdate8.4-to-8.5.sh -d <database> -h <server> -l <username> [-i <serverInstance>] [-p <port>] [-o <option>]
Example:
./thingworxAzureSchemaUpdate8.4-to-8.5.sh -d thingworx -h test.sqldatabase.net -l sqlAdmin
2. Run the big integer/timezone setup script to prepare the database for migration:
* 
If you are already running ThingWorx in UTC, you still need to run the migration for the big integer changes, but the sourceTZ and targetTZ parameters (available in some of the scripts in the steps below) can both be supplied the value of UTC.
thingworxAzureDBSetupBigIntTimezoneDataUpdate.sh
3. Run the model migration script to migrate all model data:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxAzureModelTablesDataUpdate.sh
* 
Usage:
thingworxAzureModelTablesDataUpdate.sh [-d database] [-h server] [-l loginname] [-i serverinstance] [-r password] [-p port] [-sourceTZ source-timezone] [-targetTZ target-timezone] [-chunkSize chunk-size]
Example:
thingworxAzureModelTablesDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
4. Run each of the following schema migration scripts to create backup tables of all data table, stream, and value stream data.
* 
For performance reasons, these scripts do not actually create a copy of the original data in these tables. Instead, these scripts rename these existing tables from “<original-table-name>” to “<original-table-name>_backup”. This circumvents the potentially time-consuming process of actually copying the data. Once these existing tables are renamed (thus becoming the backup tables), new tables are created with the original names. These new tables are empty and serve the same purpose as the original tables (because they have the same names as the original tables). These new tables will get populated with migrated data in later steps.
* 
The following expected warning displays when executing the data table script: Warning! The maximum key length for a clustered index is 900 bytes. The index 'data_table_indexes_pkey' has maximum length of 902 bytes. For some combination of large values, the insert/update operation will fail.
thingworxAzureDataTableSchemaUpdate.sh
thingworxAzureStreamSchemaUpdate.sh
thingworxAzureValueStreamSchemaUpdate.sh
5. Open a new command prompt window and run the following data migration script to migrate the data table data within the backup table:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxAzureDataTableDataUpdate.sh
* 
Usage:
thingworxAzureDataTableDataUpdate.sh [-d database] [-h server] [-l loginname] [-i serverinstance] [-r password] [-p port] [-sourceTZ source-timezone] [-targetTZ target-timezone] [-chunkSize chunk-size]
Example:
thingworxAzureDataTableDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
Once this migration script is started, wait until a message is displayed on the console indicating that it is safe to restart Tomcat. Once that message is displayed, it is safe to proceed to the next step, even if this migration script has not yet finished executing.
6. Open a new command prompt window and run the following data migration scripts to migrate the stream and value stream data from the backup tables:
* 
Before running the script, open the script in a text editor to ensure its default environment values (such as server, port, time zones, etc.) are correct and sufficient for your environment. If any default values defined within the script do not seem appropriate for your environment, override the default values when running the script by specifying one or more command line arguments.
thingworxAzureStreamDataUpdate.sh
thingworxAzureValueStreamDataUpdate.sh
* 
Usages:
thingworxAzureStreamDataUpdate.sh [-d database] [-h server] [-l loginname] [-i serverinstance] [-r password] [-p port] [-sourceTZ source-timezone] [-targetTZ target-timezone] [-chunkSize chunk-size]
thingworxAzureValueStreamDataUpdate.sh [-d database] [-h server] [-l loginname] [-i serverinstance] [-r password] [-p port] [-sourceTZ source-timezone] [-targetTZ target-timezone] [-chunkSize chunk-size]
Examples:
thingworxAzureStreamDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
thingworxAzureValueStreamDataUpdate.sh -sourceTZ "Eastern Standard Time" -targetTZ UTC -chunkSize 5000
Once these two migration scripts are started, do not proceed to the next step until these migration scripts, as well as the data table migration script (started in a previous step), have completed successfully.
7. Manually verify that the following scripts have all completed successfully: thingworxAzureDataTableDataUpdate.sh, thingworxAzureStreamDataUpdate.sh, and thingworxAzureValueStreamDataUpdate.sh. Verify that all data table, stream, and value stream data has been successfully migrated.
8. Run the cleanup script to remove any temporary database objects needed during migration:
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, this script does *not* delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, then they must be deleted manually.
thingworxAzureDBCleanupBigIntTimezoneDataUpdate.sh
H.) Upgrade to Java 11 
* 
Java 11 is required for ThingWorx 9.2.0 and later. Refer to the System Requirements for details.
1. If you are upgrading to Java 11, the following steps are required. Skip this section if Java 11 is already installed.
a. Download OpenJDK or Java 11.
b. Install jEnv on Linux:
a. Git clone the jEnv repository:
git clone https://github.com/jenv/jenv.git ~/.jenv
b. Add jEnv to your $PATH:
echo 'export PATH="$HOME/.jenv/bin:$PATH"' >> ~/.bash_profile
c. Initialize jEnv:
echo 'eval "$(jenv init -)"' >> ~/.bash_profile
d. Update the changes made in ~/.bash_profile:
source ~/.bash_profile
e. Set the JAVA_HOME environment variable:
jenv enable-plugin export
f. Restart your current shell session:
exec $SHELL -l
g. Run the following command to verify the setup; jEnv sets the JAVA_HOME variable automatically, based on the currently active Java environment:
jenv doctor
c. Add Java environments:
a. Add your Java installations, which are typically located in /usr/lib/jvm/, using the jenv add command. Examples below:
jenv add /usr/lib/jvm/java-11-amazon-corretto
jenv add /usr/lib/jvm/jdk-11.0.7
b. List the Java versions available to jEnv:
jenv versions
c. Set global Java environment:
jenv global <version>
d. Set shell-specific Java environment:
jenv shell <version>
e. Verify the current version set by jEnv:
jenv versions
f. Update the path in the Tomcat Java settings.
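Updating the Tomcat Java settings typically means exporting JAVA_HOME (and PATH) where Tomcat reads its environment, such as setenv.sh. Both paths below are assumptions: a demo file stands in for setenv.sh, and the Corretto path reuses the example location from the jenv add step.

```shell
# Point Tomcat at the Java 11 install (paths are assumptions).
SETENV="${SETENV:-/tmp/twx-demo-setenv11.sh}"   # normally $CATALINA_HOME/bin/setenv.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto' >> "$SETENV"
echo 'export PATH="$JAVA_HOME/bin:$PATH"' >> "$SETENV"
```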
I.) Deploy the ThingWorx.war File and Restart 
1. Copy the new Thingworx.war file and place it in the following location of your Tomcat installation: /Apache Software Foundation/Tomcat x.x/webapps.
2. Enable extension import. By default, extension import is disabled for all users. To allow extensions to be imported, add or update the following ExtensionPackageImportPolicy parameters in the platform-settings.json file, setting them to true as needed.
"ExtensionPackageImportPolicy": {
"importEnabled": <true or false>,
"allowJarResources": <true or false>,
"allowJavascriptResources": <true or false>,
"allowCSSResources": <true or false>,
"allowJSONResources": <true or false>,
"allowWebAppResources": <true or false>,
"allowEntities": <true or false>,
"allowExtensibleEntities": <true or false>
},
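For illustration, a filled-in policy might look like the following (the values here are assumptions chosen for the example; enable only the resource types your extensions actually need):

```
"ExtensionPackageImportPolicy": {
    "importEnabled": true,
    "allowJarResources": true,
    "allowJavascriptResources": false,
    "allowCSSResources": false,
    "allowJSONResources": false,
    "allowWebAppResources": false,
    "allowEntities": true,
    "allowExtensibleEntities": false
},
```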
3. If you are using H2 as a database with ThingWorx, a username and password must be added to the platform-settings.json file.
"PersistenceProviderPackageConfigs": {
    "H2PersistenceProviderPackage": {
        "ConnectionInformation": {
            "password": "<changeme>",
            "username": "twadmin"
        }
    }
},
4. Start Tomcat.
5. To restore the SSO configurations:
a. Copy the SSOSecurityContextFilter block from the backup web.xml file.
b. In the newly created web.xml file, paste the SSOSecurityContextFilter block after the last AuthenticationFilter block.
6. To launch ThingWorx, go to <servername>/Thingworx in a web browser. Use the following login information:
Login Name: Administrator
Password: <Source server admin password>
J.) Import Stream and Value Stream Data (InfluxDB only) 
* 
The steps in this section are only required if you are upgrading ThingWorx with InfluxDB 1.7.x (for ThingWorx 8.5.x or 9.0.x) to InfluxDB 1.8.x (for ThingWorx 9.1.x or 9.2.x).
1. Create a new persistence provider for InfluxDB 1.8.x or provide new connection information to the existing 1.7.x persistence provider.
2. Import the stream and value stream data to the server. Perform the steps below for stream and value stream data.
a. Log in to ThingWorx 9.x as Administrator.
b. Click Import/Export > Import.
c. Use the following options:
a. For Import Option, select From File.
b. For Import Type, select Data.
c. For Import Source, select File Repository.
d. For File Repository, select System.
e. For Path, provide a valid System Repository path.
K.) Upgrade Additional Components 
If you are using Integration Connectors, you must obtain and install the latest version of the integration runtime. For more information, refer to Initial Setup of Integration Runtime Service for Integration Connectors.
If you are upgrading the MSSQL JDBC driver, verify the System Requirements and see Configuring ThingWorx for MSSQL to find the appropriate driver.
If you upgraded from 8.x to 9.x and have Java extensions, see Migrating Java Extensions from 8.x to 9.x.
If you are using ThingWorx Analytics as part of your solution, two installers are available to handle component upgrades:
Analytics Server – installs or upgrades Analytics Server and Analytics Extension
Platform Analytics – installs or upgrades Descriptive Analytics and Property Transforms
For more information about the upgrade procedures, see ThingWorx Analytics Upgrade, Modify, Repair.
L.) Run the Cleanup Script for 9.2+ 
If you are upgrading to ThingWorx 9.2.x or later, you must run the cleanup script to remove the temporary tables created during the upgrade process.
Run the thingworx<database_name>DBCleanupPermissionTempTableUpdate.sh cleanup script located in <installDir>/schema/update.
The script takes the following parameters:
-h <server>
-p <port>
-d <thingworx database name>
-l <thingworx database username> for MSSQL
-u <thingworx database username> for PostgreSQL
-i <SQL server instance name> (optional, for MSSQL installations only)
You will be prompted to enter the password for the database user that is passed in the -u or -l parameter.
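For illustration, invocations might look like the following. The script names follow the thingworx<database_name>... pattern above and the connection values are assumptions, so the commands are only echoed here rather than executed.

```shell
# Hypothetical cleanup-script invocations (echoed, not executed; adjust
# host, port, database, and user names, and run from <installDir>/schema/update).
CMD_PG="./thingworxPostgresDBCleanupPermissionTempTableUpdate.sh -h localhost -p 5432 -d thingworx -u twadmin"
CMD_MSSQL="./thingworxMssqlDBCleanupPermissionTempTableUpdate.sh -h localhost -p 1433 -d thingworx -l twadmin"
echo "$CMD_PG"
echo "$CMD_MSSQL"
```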
M.) Troubleshooting 
If the upgrade failed due to missing values for custom index fields, you must deploy the older version again (if schema updates were made, you must roll back or restore the database), then either add the missing index values or remove the custom indexes from the data table before performing the upgrade again.
After starting the ThingWorx platform, check the Application log for the platform. If you are using MSSQL, PostgreSQL, or H2, you may see the following property conflict error messages.
Error Troubleshooting
Application Log Error: Error in copying permissions: Problems migrating database
Resolution: This migration error occurs during MSSQL upgrades when any migrated service, property, or event that has run time permissions configured has a name containing more than 256 characters. To fix this error, limit all service, property, and event names to less than 256 characters.
Application Log Error:
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator] [I: ] [U: SuperUser] [S: ]
[T: localhost-startStop-1] Thing: <Name of Thing>, has a property which conflicts
with one of the following system properties: isReporting,reportingLastChange,reportingLastEvaluation.
Please refer to the ThingWorx Platform 8.4 documentation on how to resolve this problem.
Resolution: As part of the Thing Presence feature added to ThingWorx platform 8.4, the following properties were added to the Reportable Thing Shape and are used as part of presence evaluation on the things that implement this shape:
isReporting
reportingLastChange
reportingLastEvaluation
If one of the property names above previously existed on a Thing, Thing Template, or Thing Shape, the following errors will appear in the Application log when the platform starts up. To resolve this problem, the property in conflict on each affected entity must be removed and any associated entities updated to accommodate this change (for example, mashups or services). Without this update, the associated Things cannot display their reporting status properly and cannot be updated/saved. Once these entities are updated properly, the platform-specific reporting properties will be displayed and used in evaluating whether a device is connected and communicating.
Application Log Error:
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator] [I: ] [U: SuperUser]
[S: ] [T: localhost-startStop-1] ThingTempate: <Name of ThingTemplate>, has a
property which conflicts with one of the following system properties:
isReporting,reportingLastChange,reportingLastEvaluation.
Please refer to the ThingWorx Platform 8.4 documentation on how to resolve this problem.
Application Log Error:
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator]
[I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] ThingShape:
<Name of ThingShape>, has a property which conflicts with one of the following system properties:
isReporting,reportingLastChange,reportingLastEvaluation. Please refer to the ThingWorx Platform
8.4 documentation on how to resolve this problem.