Linux Manual Upgrade
Manual In-Place Upgrade to 9.3.x and later: Linux
Refer to the upgrade table to determine your upgrade path. The steps below are for an in-place upgrade only. For a migration upgrade, refer to Manual Migration to ThingWorx 9.x: Linux.
* 
At this time, there is no support for the big integer/timezone database migration scripts for H2; these migration scripts are detailed for the other supported databases. If you have an existing H2 database and require the timezone correction, you must migrate to a supported database such as PostgreSQL or MS SQL. If your application will function without the timezone correction, you can upgrade to the latest ThingWorx version on H2; in that case, skip the Set the ThingWorx Server Timezone section as noted below.
A.) Before You Upgrade 
1. If your OS is RHEL, verify that you have upgraded to the supported version before performing an upgrade of ThingWorx. Refer to System Requirements for more information.
* 
ThingWorx 9.1 is only supported on RHEL 8.2.
2. Before beginning the upgrade, it is recommended that you perform the following:
Create a database dump.
Back up all data in the ThingworxStorage and ThingworxPlatform folders.
Back up the Tomcat_home folder, including the bin, conf, lib, temp, webapps, and work folders.
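The backup steps above can be sketched as a shell script. Every path, the database name, and the pg_dump invocation shown in the comments are assumptions for illustration; substitute the values from your own installation.

```shell
# Hypothetical backup sketch; every path and credential below is a placeholder.
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)/thingworx-backup-$STAMP}"
mkdir -p "$BACKUP_DIR"

# 1. Database dump (PostgreSQL shown; use your database's native tooling):
#    pg_dump -h localhost -U twadmin -d thingworx -Fc -f "$BACKUP_DIR/thingworx.dump"

# 2. Platform data and configuration folders, if present:
for DIR in /ThingworxStorage /ThingworxPlatform; do
  if [ -d "$DIR" ]; then
    tar -czf "$BACKUP_DIR/$(basename "$DIR").tar.gz" -C / "$(basename "$DIR")"
  fi
done

# 3. Tomcat_home (bin, conf, lib, temp, webapps, work); adjust the path:
#    tar -czf "$BACKUP_DIR/tomcat.tar.gz" -C /opt apache-tomcat
echo "Backup location: $BACKUP_DIR"
```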
3. If you are using ThingWorx Apps in addition to ThingWorx platform:
a. Verify that the version of ThingWorx you are upgrading to is supported with the version of ThingWorx Apps. See ThingWorx Apps Upgrade Support Matrix.
b. There are steps that you must take before upgrading the platform. See Upgrading ThingWorx Apps before proceeding to the next step.
4. If you also have Navigate installed, verify compatibility at ThingWorx Navigate Compatibility Matrix.
5. Obtain the latest version of ThingWorx at PTC Software Downloads.
6. Verify that you are running the required versions of Tomcat and Java. Refer to the System Requirements for version requirements.
* 
If you must upgrade your Java version, perform the ThingWorx upgrade before upgrading Java.
7. If you are using MSSQL, Azure SQL, or H2, the upgrade will fail if any custom index field values are missing in the data tables. Verify that all custom index fields have values before starting the upgrade process.
* 
If you fail to do so, the upgrade will fail and you will have to redeploy the older version (if schema updates were made, you must roll back or restore the database), then either add the missing index values or remove the custom indexes from the data tables before performing the upgrade again.
8. Add the following to the Apache Tomcat Java Options:
-Dlog4j2.formatMsgNoLookups=true
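On Linux, Tomcat's Java Options are commonly set through CATALINA_OPTS in bin/setenv.sh. The sketch below appends the flag idempotently; the TOMCAT_HOME default writes to a scratch directory for demonstration and is an assumption to replace with your real install path.

```shell
# Append the log4j mitigation flag to setenv.sh only if it is not already there.
# TOMCAT_HOME defaults to a scratch directory for demonstration; point it at
# your real Tomcat installation before using this for real.
TOMCAT_HOME="${TOMCAT_HOME:-$(mktemp -d)/apache-tomcat}"
mkdir -p "$TOMCAT_HOME/bin"
SETENV="$TOMCAT_HOME/bin/setenv.sh"
FLAG="-Dlog4j2.formatMsgNoLookups=true"

touch "$SETENV"
if ! grep -q -- "$FLAG" "$SETENV"; then
  echo "CATALINA_OPTS=\"\$CATALINA_OPTS $FLAG\"" >> "$SETENV"
fi
```

Because the append is guarded by the grep, running the snippet twice does not duplicate the flag.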
B.) Export Stream and Value Stream Data (InfluxDB only) 
* 
The steps in this section are required only if you are upgrading ThingWorx with InfluxDB 1.7.x (for ThingWorx 8.5.x or 9.0.x) to InfluxDB 1.8.x (for ThingWorx 9.1.x or 9.2.x).
1. Export data from InfluxDB 1.7.x/MS SQL/PostgreSQL:
a. Log in to ThingWorx as the Administrator.
b. Click Import/Export > Export.
c. Use the following options:
For Export Option, select To File.
For Export Type, select Collection of Data.
For Collection, select Streams.
Click Export.
d. Repeat steps a-c for value stream data.
e. Move the exported folder for the stream and value stream data created from the system repository to a safe location as a backup.
C.) Stop and Delete the ThingWorx Webapp 
1. Stop Tomcat.
2. It is highly recommended to back up the following two folders before continuing:
Apache Software Foundation/Tomcat x.x/webapps/Thingworx
/ThingworxStorage
* 
To retain the SSO configurations from the existing installation, add the SSOSecurityContextFilter parameter to the recreated web.xml file after the upgrade is complete.
3. If your current Tomcat version is older and is not supported with the target ThingWorx version, update to the supported Tomcat version.
4. To retain the SSO configurations from the existing installation, back up the web.xml file from the folder <Tomcat Installation directory>/webapps/Thingworx/WEB-INF.
5. Back up and delete the validation.properties file from the /ThingworxStorage/esapi directory.
* 
The validation.properties file is created upon startup of ThingWorx. If you want to retain any changes you have made, save the file outside of the ThingworxStorage directory and then proceed with removing the esapi directory. Upon startup, ThingWorx will recreate the file and you can add your custom regexes back into the validation.properties file that was automatically generated.
Reference this topic for additional information.
6. Go to the Tomcat installation at /Apache Software Foundation/Tomcat x.x/webapps and delete the Thingworx.war file and the Thingworx folder.
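Steps 4 and 6 can be sketched as follows. The demo operates on a scratch layout so nothing real is deleted; point TOMCAT_WEBAPPS and SAFE_DIR at your actual paths before using this against a live installation.

```shell
# Demo layout; replace both variables with your real paths before use.
TOMCAT_WEBAPPS="${TOMCAT_WEBAPPS:-$(mktemp -d)/webapps}"
SAFE_DIR="${SAFE_DIR:-$(mktemp -d)}"
mkdir -p "$TOMCAT_WEBAPPS/Thingworx/WEB-INF"
: > "$TOMCAT_WEBAPPS/Thingworx.war"
: > "$TOMCAT_WEBAPPS/Thingworx/WEB-INF/web.xml"

# Step 4: preserve web.xml (SSO configuration) outside the webapps tree.
cp "$TOMCAT_WEBAPPS/Thingworx/WEB-INF/web.xml" "$SAFE_DIR/web.xml.bak"

# Step 6: delete the deployed WAR and the exploded folder.
rm -f  "$TOMCAT_WEBAPPS/Thingworx.war"
rm -rf "$TOMCAT_WEBAPPS/Thingworx"
```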
D.) Set the ThingWorx Server Timezone 
Skip this section if you are upgrading on H2.
1. For all other databases, check the Tomcat Java Options for the timezone configuration setting. If this setting is configured for UTC (-Duser.timezone=UTC), skip the rest of the steps here and go to the section "Update Schema and Migrate Data" for your database.
2. If this setting is not configured for UTC, determine the current timezone value for later use:
If this setting is configured to a timezone other than UTC, make a note of that timezone value for later use.
If this setting is not configured at all in Tomcat, determine the timezone value of the operating system where Tomcat is installed.
3. Keep the noted timezone value, and its source (Tomcat or the operating system), accessible; you will need it in a later step.
4. Set this configuration setting to UTC:
-Duser.timezone=UTC
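When the timezone is not configured in Tomcat (step 2), the operating system's timezone can usually be read as sketched below. The three lookups cover common Linux setups and are assumptions about your distribution; use whichever one works on your host.

```shell
# Try the usual places a Linux host records its IANA timezone name.
OS_TZ=$(timedatectl show -p Timezone --value 2>/dev/null \
        || cat /etc/timezone 2>/dev/null \
        || readlink /etc/localtime 2>/dev/null | sed 's|.*/zoneinfo/||')
OS_TZ="${OS_TZ:-unknown}"   # "unknown" means none of the lookups worked
echo "Current OS timezone: $OS_TZ"
```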
E.) Update Schema and Migrate Data (PostgreSQL only) 
* 
All scripts referenced below are located within the update folder within the ThingWorx software download.
* 
All scripts referenced below require database access. If the PGPASSWORD environment variable is defined, then the scripts will use its value as the database password. Otherwise, the scripts will prompt you for the database password. See the official Postgres documentation for more information.
1. Run the main update script to perform the ThingWorx schema migration:
update_postgres.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_postgres.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] ( --update_all | [--update_data] [--update_model] [--update_property] [--update_system] ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to update.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--update_all Update all information (i.e. Data, Model, Property, etc). Same as specifying all other "--update_..." flags.
--update_data Update only Data information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_model Update only Model information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_property Update only Property information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_system Update only System information. Can be specified with any other "--update_..." flags, except "--update_all".
-y Suppress all non-required prompts, such as "Are you sure?"
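A representative invocation is shown below; all connection values are placeholders, and the script should be run from the update folder of the download. The command is built as a string here so it can be inspected before running; exporting PGPASSWORD beforehand suppresses the interactive password prompt, as noted above.

```shell
# Placeholder values throughout; adjust host, port, database, schema, and user.
# export PGPASSWORD='your-db-password'   # optional, avoids the prompt
UPDATE_CMD="./update_postgres.sh -h localhost -p 5432 -d thingworx -s thingworx -u twadmin --update_all"
printf '%s\n' "$UPDATE_CMD"
```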
Run ThingWorx Server Timezone Scripts
The steps in this section need to be performed only if -Duser.timezone=UTC was not already set when you checked in step 1 of the “Set the ThingWorx Server Timezone” section, or if you are upgrading from ThingWorx 8.5.x or earlier.
1. Obtain the list of all database-specific timezone names. To do this, manually connect to the database and run this query to get a list of all timezone names currently supported by the database:
SELECT name, utc_offset, is_dst FROM pg_timezone_names ORDER BY name
* 
Keep this list for later reference.
2. Determine the name of the timezone to which all existing ThingWorx data is currently associated (your “From” timezone):
Get the timezone value you noted in the "Set the ThingWorx Server Timezone" section above.
That value is the timezone to which all existing ThingWorx data is currently associated.
That value is specific to either the JVM or the operating system, and may not exactly match any timezone name within the database-specific list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that value from the "Set the ThingWorx Server Timezone" section.
After you find the most appropriate match, that timezone name becomes the “From” timezone (vs. the “To” timezone) that will be needed in later steps.
3. Determine the timezone to which all existing ThingWorx data should be migrated (your “To” timezone):
In the "Set the ThingWorx Server Timezone" section above, you set Tomcat’s -Duser.timezone configuration setting to UTC. This is the timezone that all existing ThingWorx data should be migrated to. However, that value is specific to the JVM, and may not exactly match any timezone name within the queried database list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that UTC value.
After you find the most appropriate match, that timezone name becomes the "To” timezone (vs. the “From” timezone) that will be needed in later steps.
* 
The “From” and “To” timezone names can be the same.
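One way to confirm that a candidate “From” or “To” name is valid is to query the list from step 1 for an exact match; a returned row confirms the database knows that name. The timezone value and the psql connection shown in the comment are placeholders.

```shell
FROM_TZ="America/New_York"   # placeholder candidate from your matching exercise
CHECK_SQL="SELECT name FROM pg_timezone_names WHERE name = '$FROM_TZ';"
# Run it against your database, for example:
#   psql -h <host> -p <port> -U <user> -d <database> -c "$CHECK_SQL"
printf '%s\n' "$CHECK_SQL"
```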
4. Run the BigInt/Timezone schema script to prepare for the migration of all data table, stream, and value stream data:
update_bigint_timezone_schema_postgres.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_bigint_timezone_schema_postgres.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to update.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
-y Suppress all non-required prompts, such as "Are you sure?"
* 
While this script directly migrates some data, it does not migrate any data table, stream, or value stream data. Instead, it prepares a backup of all data table, stream, and value stream data so it can be migrated later. For performance reasons, the script does not copy the data within the existing data table, stream, and value stream tables; it renames the existing tables from "foo" to "foo_backup", which avoids the potentially time-consuming process of copying huge amounts of data. Once the existing tables are renamed (thus becoming their own backup tables), new, empty tables are created with the original names and serve the same purpose as the original tables.
5. After completing the previous step, the platform may be restarted if necessary. However, note that data table, stream, and value stream data has not yet been migrated. Therefore, until that data migration occurs, queries for that data may receive reduced result sets.
6. Run the BigInt/Timezone data script to migrate any data table, stream, or value stream data:
update_bigint_timezone_data_postgres.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_bigint_timezone_data_postgres.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> --chunk_size <chunk_size> ( --update_data_table | --update_stream | --update_value_stream ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to update.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
--chunk_size chunk_size The number of records to update per transaction. Must be greater than 0.
--update_data_table Update "data_table" information. Cannot be specified with any other "--update_..." flag.
--update_stream Update "stream" information. Cannot be specified with any other "--update_..." flag.
--update_value_stream Update "value_stream" information. Cannot be specified with any other "--update_..." flag.
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Per the usage statement above, only one “--update…” option can be specified at a time. Therefore, to migrate all data table, stream, and value stream data, this script must be run three times (once for each dataset). Because these datasets are independent from each other, the migration of one dataset can be done in parallel with the migration of another dataset. For example, if you open three separate command windows, you can concurrently run the data table migration in the first window, the stream migration in the second window, and the value stream migration in the third window. However, do not attempt to use more than one process to concurrently migrate a given dataset. For example, do not attempt to use two concurrent processes to migrate value stream data. Doing so is undefined and will result in data corruption.
The suggested chunk_size for a typical environment is 10000.
Since the platform can be restarted before all data migration has been completed, the migration of data occurs from the newest data to the oldest data. This is intentional, and allows any queries for that data to start receiving the most pertinent data first.
The size of your data sets can have a dramatic impact on how long it takes to migrate all your data. For example, if you have billions of rows to migrate, the migration of that data may take several days to complete.
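The three independent runs described above can be launched concurrently from one shell, as sketched below with placeholder connection and timezone values. Note exactly one process per dataset; the shared arguments are built as a string so the pattern can be inspected first.

```shell
# Shared placeholder arguments for all three runs; adjust to your environment.
BASE="./update_bigint_timezone_data_postgres.sh -h localhost -p 5432 -d thingworx \
-s thingworx -u twadmin --from_timezone America/New_York --to_timezone UTC \
--chunk_size 10000 -y"
# One background process per dataset, then wait for all three to finish:
#   $BASE --update_data_table   &
#   $BASE --update_stream       &
#   $BASE --update_value_stream &
#   wait
printf '%s\n' "$BASE"
```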
7. After all BigInt/Timezone data migration has been completed, and after all migrated BigInt/Timezone data has been manually verified and validated, run the BigInt/Timezone cleanup script to clean up any temporary database artifacts created by the BigInt/Timezone schema script:
cleanup_bigint_timezone_data_update_postgres.sh
* 
Running this script without arguments prints its usage statement:
Usage: cleanup_bigint_timezone_data_update_postgres.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to update.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, it does not delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, you must delete them manually.
8. After all BigInt/Timezone data migration has been completed, verified, and validated, run the main cleanup script to clean up any temporary database artifacts created by the main ThingWorx update script:
cleanup_update_postgres.sh
* 
Running this script with no arguments prints its usage statement:
Usage: cleanup_update_postgres.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to update.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
F.) Update Schema and Migrate Data (MSSQL only) 
* 
All scripts referenced below are located within the update folder within the ThingWorx software download.
* 
All scripts referenced below require database access. If the SQLCMDPASSWORD environment variable is defined, then the scripts will use its value as the database password. Otherwise, the scripts will prompt you for the database password. See the official MSSQL documentation for more information.
1. Run the main update script to perform the ThingWorx schema migration:
update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] ( --update_all | [--update_data] [--update_grants] [--update_model] [--update_property] ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--update_all Update all information (i.e. Data, Model, Property, etc). Same as specifying all other "--update_..." flags.
--update_data Update only Data information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_grants Update only Grants information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_model Update only Model information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_property Update only Property information. Can be specified with any other "--update_..." flags, except "--update_all".
-y Suppress all non-required prompts, such as "Are you sure?"
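A representative invocation for MSSQL is shown below with placeholder values; note that, unlike the PostgreSQL script, there is no schema flag here. Setting SQLCMDPASSWORD beforehand suppresses the interactive password prompt, as noted above.

```shell
# Placeholder values; adjust host, port, database, and user.
# export SQLCMDPASSWORD='your-db-password'   # optional, avoids the prompt
UPDATE_CMD="./update_mssql.sh -h localhost -p 1433 -d thingworx -u twadmin --update_all"
printf '%s\n' "$UPDATE_CMD"
```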
Run ThingWorx Server Timezone Scripts
The steps in this section need to be performed only if -Duser.timezone=UTC was not already set when you checked in step 1 of the “Set the ThingWorx Server Timezone” section, or if you are upgrading from ThingWorx 8.5.x or earlier.
1. Obtain the list of all database-specific timezone names. To do this, manually connect to the database and run this query to get a list of all timezone names currently supported by the database:
SELECT name, current_utc_offset, is_currently_dst FROM sys.time_zone_info ORDER BY name
* 
Keep this list for later reference.
2. Determine the name of the timezone to which all existing ThingWorx data is currently associated (your “From” timezone):
Get the timezone value you noted in the "Set the ThingWorx Server Timezone" section above.
That value is the timezone to which all existing ThingWorx data is currently associated.
That value is specific to either the JVM or the operating system, and may not exactly match any timezone name within the database-specific list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that value from the "Set the ThingWorx Server Timezone" section.
After you find the most appropriate match, that timezone name becomes the “From” timezone (vs. the “To” timezone) that will be needed in later steps.
3. Determine the timezone to which all existing ThingWorx data should be migrated (your “To” timezone):
In the "Set the ThingWorx Server Timezone" section above, you set Tomcat’s -Duser.timezone configuration setting to UTC. This is the timezone that all existing ThingWorx data should be migrated to. However, that value is specific to the JVM, and may not exactly match any timezone name within the queried database list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that UTC value.
After you find the most appropriate match, that timezone name becomes the "To” timezone (vs. the “From” timezone) that will be needed in later steps.
* 
The “From” and “To” timezone names can be the same.
4. Run the BigInt/Timezone schema script to prepare for the migration of all data table, stream, and value stream data:
update_bigint_timezone_schema_mssql.sh
* 
Running this script with no arguments prints its usage statement:
Usage: update_bigint_timezone_schema_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
-y Suppress all non-required prompts, such as "Are you sure?"
* 
While this script directly migrates some data, it does not migrate any data table, stream, or value stream data. Instead, it prepares a backup of all data table, stream, and value stream data so it can be migrated later. For performance reasons, the script does not copy the data within the existing data table, stream, and value stream tables; it renames the existing tables from "foo" to "foo_backup", which avoids the potentially time-consuming process of copying huge amounts of data. Once the existing tables are renamed (thus becoming their own backup tables), new, empty tables are created with the original names and serve the same purpose as the original tables.
5. After completing the previous step, the platform may be restarted if necessary. However, note that data table, stream, and value stream data has not yet been migrated. Therefore, until that data migration occurs, queries for that data may receive reduced result sets.
6. Run the BigInt/Timezone data script to migrate any data table, stream, or value stream data:
update_bigint_timezone_data_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_bigint_timezone_data_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> --chunk_size <chunk_size> ( --update_data_table | --update_stream | --update_value_stream ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
--chunk_size chunk_size The number of records to update per transaction. Must be greater than 0.
--update_data_table Update "data_table" information. Cannot be specified with any other "--update_..." flag.
--update_stream Update "stream" information. Cannot be specified with any other "--update_..." flag.
--update_value_stream Update "value_stream" information. Cannot be specified with any other "--update_..." flag.
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Per the usage statement above, only one “--update…” option can be specified at a time. Therefore, to migrate all data table, stream, and value stream data, this script must be run three times (once for each dataset). Because these datasets are independent from each other, the migration of one dataset can be done in parallel with the migration of another dataset. For example, if you open three separate command windows, you can concurrently run the data table migration in the first window, the stream migration in the second window, and the value stream migration in the third window. However, do not attempt to use more than one process to concurrently migrate a given dataset. For example, do not attempt to use two concurrent processes to migrate value stream data. Doing so is undefined and will result in data corruption.
The suggested chunk_size for a typical environment is 10000.
Since the platform can be restarted before all data migration has been completed, the migration of data occurs from the newest data to the oldest data. This is intentional, and allows any queries for that data to start receiving the most pertinent data first.
The size of your data sets can have a dramatic impact on how long it takes to migrate all your data. For example, if you have billions of rows to migrate, the migration of that data may take several days to complete.
7. After all the BigInt/Timezone data migration has been completed, and after all migrated BigInt/Timezone data has been manually verified and validated, run the BigInt/Timezone cleanup script to clean up any temporary database artifacts created by the BigInt/Timezone schema script:
cleanup_bigint_timezone_data_update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: cleanup_bigint_timezone_data_update_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, it does not delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, you must delete them manually.
8. After all BigInt/Timezone data migration has been completed, verified, and validated, run the main cleanup script to clean up any temporary database artifacts created by the main ThingWorx update script:
cleanup_update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: cleanup_update_mssql.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
G.) Update Schema and Migrate Data (Azure SQL only) 
* 
All scripts referenced below are located within the update folder within the ThingWorx software download.
* 
All scripts referenced below require database access. If the SQLCMDPASSWORD environment variable is defined, then the scripts will use its value as the database password. Otherwise, the scripts will prompt you for the database password. See the official MSSQL documentation for more information.
1. Run the main update script to perform the ThingWorx schema migration:
update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] ( --update_all | [--update_data] [--update_grants] [--update_model] [--update_property] ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--update_all Update all information (i.e. Data, Model, Property, etc). Same as specifying all other "--update_..." flags.
--update_data Update only Data information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_grants Update only Grants information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_model Update only Model information. Can be specified with any other "--update_..." flags, except "--update_all".
--update_property Update only Property information. Can be specified with any other "--update_..." flags, except "--update_all".
-y Suppress all non-required prompts, such as "Are you sure?"
Run ThingWorx Server Timezone Scripts
The steps in this section need to be performed only if -Duser.timezone=UTC was not already set when you checked in step 1 of the “Set the ThingWorx Server Timezone” section, or if you are upgrading from ThingWorx 8.5.x or earlier.
1. Obtain the list of all database-specific timezone names. To do this, manually connect to the database and run this query to get a list of all timezone names currently supported by the database:
SELECT name, current_utc_offset, is_currently_dst FROM sys.time_zone_info ORDER BY name
* 
Keep this list for later reference.
2. Determine the name of the timezone to which all existing ThingWorx data is currently associated (your “From” timezone):
Get the timezone value you noted in the "Set the ThingWorx Server Timezone" section above.
That value is the timezone to which all existing ThingWorx data is currently associated.
That value is specific to either the JVM or the operating system, and may not exactly match any timezone name within the database-specific list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that value from the "Set the ThingWorx Server Timezone" section.
After you find the most appropriate match, that timezone name becomes the “From” timezone (vs. the “To” timezone) that will be needed in later steps.
3. Determine the timezone to which all existing ThingWorx data should be migrated (your “To” timezone):
In the "Set the ThingWorx Server Timezone" section above, you set Tomcat’s -Duser.timezone configuration setting to UTC. This is the timezone that all existing ThingWorx data should be migrated to. However, that value is specific to the JVM, and may not exactly match any timezone name within the queried database list (queried in step 1).
Manually examine the list of timezones queried from the database to determine which timezone name most appropriately matches that UTC value.
After you find the most appropriate match, that timezone name becomes the "To” timezone (vs. the “From” timezone) that will be needed in later steps.
* 
The “From” and “To” timezone names can be the same.
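For example, if your JVM timezone value was America/New_York, you could search the database list for a likely match using sqlcmd, the standard MSSQL command-line client (connection values below are placeholders):

```shell
# Placeholders: substitute your own host, database, and user.
sqlcmd -S dbserver.example.com,1433 -d thingworx -U twadmin \
  -Q "SELECT name, current_utc_offset FROM sys.time_zone_info WHERE name LIKE '%Eastern%'"
# A JVM value of America/New_York typically corresponds to the SQL Server
# timezone name "Eastern Standard Time"; a JVM value of UTC maps to "UTC".
```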
4. Run the BigInt/Timezone schema script to prepare for the migration of all data table, stream, and value stream data:
update_bigint_timezone_schema_mssql.sh
* 
Running this script with no arguments prints its usage statement:
Usage: update_bigint_timezone_schema_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
-y Suppress all non-required prompts, such as "Are you sure?"
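For example, a migration of data recorded in US Eastern time to UTC might be invoked as follows (all connection and timezone values are placeholders; use the “From” and “To” names you determined above):

```shell
# All values are placeholders -- substitute your own.
./update_bigint_timezone_schema_mssql.sh -h dbserver.example.com -p 1433 \
  -d thingworx -u twadmin \
  --from_timezone "Eastern Standard Time" --to_timezone "UTC"
```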
* 
While this script directly migrates some data, it does not migrate any data table, stream, or value stream data. Instead, it prepares that data for later migration. For performance reasons, the script does not copy the data in the existing data table, stream, and value stream tables, which would be a potentially time-consuming process for huge amounts of data. Instead, it renames each existing table from "foo" to "foo_backup", so that each existing table becomes its own backup table. New, empty tables are then created with the original names and serve the same purpose as the original tables.
5. After completing the previous step, the platform may be restarted if necessary. However, note that data table, stream, and value stream data has not yet been migrated. Therefore, until that data migration occurs, queries for that data may receive reduced result sets.
6. Run the BigInt/Timezone data script to migrate any data table, stream, or value stream data:
update_bigint_timezone_data_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: update_bigint_timezone_data_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] --from_timezone <timezone> --to_timezone <timezone> --chunk_size <chunk_size> ( --update_data_table | --update_stream | --update_value_stream ) [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
--system_version_override version Forces the upgrade to assume the database schema is currently of this version (i.e. "n.n.n"), rather than of the actual, persisted version.
--from_timezone timezone The name of the timezone for all existing data.
--to_timezone timezone The name of the timezone to which all existing data will be updated.
--chunk_size chunk_size The number of records to update per transaction. Must be greater than 0.
--update_data_table Update "data_table" information. Cannot be specified with any other "--update_..." flag.
--update_stream Update "stream" information. Cannot be specified with any other "--update_..." flag.
--update_value_stream Update "value_stream" information. Cannot be specified with any other "--update_..." flag.
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Per the usage statement above, only one “--update…” option can be specified at a time. Therefore, to migrate all data table, stream, and value stream data, this script must be run three times (once for each dataset). Because these datasets are independent from each other, the migration of one dataset can be done in parallel with the migration of another dataset. For example, if you open three separate command windows, you can concurrently run the data table migration in the first window, the stream migration in the second window, and the value stream migration in the third window. However, do not attempt to use more than one process to concurrently migrate a given dataset. For example, do not attempt to use two concurrent processes to migrate value stream data. Doing so is undefined and will result in data corruption.
The suggested chunk_size for a typical environment is 10000.
Since the platform can be restarted before all data migration has been completed, the migration of data occurs from the newest data to the oldest data. This is intentional, and allows any queries for that data to start receiving the most pertinent data first.
The size of your data sets can have a dramatic impact on how long it takes to migrate all your data. For example, if you have billions of rows to migrate, the migration of that data may take several days to complete.
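As a sketch of the parallel approach described above, the three independent datasets can be migrated concurrently from a single shell using background jobs (all connection and timezone values are placeholders):

```shell
# Placeholders throughout -- substitute your own connection details.
# One process per dataset; never run two processes against the same dataset.
for target in --update_data_table --update_stream --update_value_stream; do
  ./update_bigint_timezone_data_mssql.sh -h dbserver.example.com -p 1433 \
    -d thingworx -u twadmin \
    --from_timezone "Eastern Standard Time" --to_timezone "UTC" \
    --chunk_size 10000 "$target" -y &
done
wait   # block until all three migrations finish
```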
7. After all BigInt/Timezone data migration has been completed, and after all migrated BigInt/Timezone data has been manually verified and validated, run the BigInt/Timezone cleanup script to clean up any temporary database artifacts created by the BigInt/Timezone schema script:
cleanup_bigint_timezone_data_update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: cleanup_bigint_timezone_data_update_mssql.sh -h <host> -p <port> -d <database> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
* 
Although this script performs some cleanup of temporary database objects created during the upgrade process, it does not delete any of the backup tables created in the previous steps, nor does it modify any data within those backup tables. This is intentional, and ensures that data cannot be accidentally deleted. If you want to delete these backup tables, you must delete them manually.
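If you do decide to drop the backup tables after verifying the migrated data, you can first list them and then remove each one individually. The "_backup" suffix below follows the renaming convention described above, but the exact table names are assumptions; confirm the actual names in your database before dropping anything:

```shell
# Table names are assumptions -- list the actual *_backup tables first.
sqlcmd -S dbserver.example.com,1433 -d thingworx -U twadmin \
  -Q "SELECT name FROM sys.tables WHERE name LIKE '%[_]backup'"
# Then drop each verified backup table individually, for example:
# sqlcmd -S dbserver.example.com,1433 -d thingworx -U twadmin -Q "DROP TABLE stream_backup"
```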
8. Run the main Cleanup script after all data migration has been completed, verified and validated.
After all BigInt/Timezone data migration has been completed, verified and validated, run this script to clean up any temporary database artifacts created by main ThingWorx update script:
cleanup_update_mssql.sh
* 
Running this script without arguments prints its usage statement:
Usage: cleanup_update_mssql.sh -h <host> -p <port> -d <database> -s <schema> -u <user> [--managed_instance <name>] [-y]

Supported Options:
-h host The host name of the machine on which the database is running.
-p port The port on which the database server is listening for connections.
-d database The name of the database to connect to.
-s schema The name of the database schema to connect to.
-u user Connect to the database as this user.
--managed_instance name To be specified only when the database is deployed within a Managed Instance (e.g. Azure, etc).
-y Suppress all non-required prompts, such as "Are you sure?"
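For example (the connection values are placeholders, and "dbo" is assumed as the schema name; substitute your own):

```shell
# All values are placeholders -- substitute your own.
./cleanup_update_mssql.sh -h dbserver.example.com -p 1433 \
  -d thingworx -s dbo -u twadmin -y
```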
H.) Upgrade to Java 11 
* 
Java 11 is required for ThingWorx 9.2.0 and later. Refer to the System Requirements for details.
1. If you are upgrading to Java 11, the following steps are required. Skip this section if Java 11 is already installed.
a. Download OpenJDK or Java 11.
b. Install jEnv on Linux:
a. Git clone the jEnv repository:
git clone https://github.com/jenv/jenv.git ~/.jenv
b. Add jEnv to your $PATH:
echo 'export PATH="$HOME/.jenv/bin:$PATH"' >> ~/.bash_profile
c. Initialize jEnv:
echo 'eval "$(jenv init -)"' >> ~/.bash_profile
d. Update the changes made in ~/.bash_profile:
source ~/.bash_profile
e. Set the JAVA_HOME environment variable:
jenv enable-plugin export
f. Restart your current shell session:
exec $SHELL -l
g. Run the following command to verify your jEnv setup. With the export plugin enabled, jEnv sets the JAVA_HOME variable automatically based on the currently active Java environment:
jenv doctor
c. Add Java environments:
a. Add your Java environments using the jenv add command. Java installations are typically located in /usr/lib/jvm/. Examples below:
jenv add /usr/lib/jvm/java-11-amazon-corretto
jenv add /usr/lib/jvm/jdk-11.0.7
b. List all Java versions available to jEnv:
jenv versions
c. Set global Java environment:
jenv global <version>
d. Set shell-specific Java environment:
jenv shell <version>
e. Verify the current version set by jEnv:
jenv versions
f. Update the path in the Tomcat Java settings.
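A minimal sketch of the Tomcat-side change, assuming a setenv.sh file in your Tomcat bin directory and a hypothetical JDK path (substitute the path of the JDK you registered with jEnv):

```shell
# Hypothetical JDK path -- substitute your own installation.
JAVA_HOME="/usr/lib/jvm/jdk-11.0.7"
export JAVA_HOME
export PATH="$JAVA_HOME/bin:$PATH"
```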
I.) Deploy the ThingWorx.war File and Restart 
1. Copy the new ThingWorx.war file to the webapps folder of your Tomcat installation (for example, /Apache Software Foundation/Tomcat x.x/webapps).
2. Enable extension import. By default, extension import is disabled for all users. To allow extensions to be imported, add or update the following ExtensionPackageImportPolicy parameters in the platform-settings.json file, setting the relevant values to true:
"ExtensionPackageImportPolicy": {
"importEnabled": <true or false>,
"allowJarResources": <true or false>,
"allowJavascriptResources": <true or false>,
"allowCSSResources": <true or false>,
"allowJSONResources": <true or false>,
"allowWebAppResources": <true or false>,
"allowEntities": <true or false>,
"allowExtensibleEntities": <true or false>
},
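For example, a conservative policy that permits only entity imports while keeping JAR, script, and web-app resources blocked might look like this (adjust each flag to your needs):

```json
"ExtensionPackageImportPolicy": {
  "importEnabled": true,
  "allowJarResources": false,
  "allowJavascriptResources": false,
  "allowCSSResources": false,
  "allowJSONResources": false,
  "allowWebAppResources": false,
  "allowEntities": true,
  "allowExtensibleEntities": false
},
```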
3. If you are using H2 as a database with ThingWorx, a username and password must be added to the platform-settings.json file:
"PersistenceProviderPackageConfigs": {
  "H2PersistenceProviderPackage": {
    "ConnectionInformation": {
      "password": "<changeme>",
      "username": "twadmin"
    }
  }
},
4. Start Tomcat.
5. To restore the SSO configurations:
a. Copy the SSOSecurityContextFilter block from the backup web.xml file.
b. In the newly created web.xml file, paste the SSOSecurityContextFilter block after the last AuthenticationFilter block.
6. To launch ThingWorx, go to <servername>/Thingworx in a web browser. Use the following login information:
Login Name: Administrator
Password: <Source server admin password>
J.) Import Stream and Value Stream Data (InfluxDB only) 
* 
The steps in this section are only required if you are upgrading ThingWorx with InfluxDB 1.7.x (for ThingWorx 8.5.x or 9.0.x) to InfluxDB 1.8.x (For ThingWorx 9.1.x or 9.2.x).
1. Create a new persistence provider for InfluxDB 1.8.x or provide new connection information to the existing 1.7.x persistence provider.
2. Import the stream and value stream data to the server. Perform the steps below for stream and value stream data.
a. Log in to ThingWorx 9.x as Administrator.
b. Click Import/Export > Import.
c. Use the following options:
a. For Import Option, select From File.
b. For Import Type, select Data.
c. For Import Source, select File Repository.
d. For File Repository, select System.
e. For Path, provide a valid System Repository path.
K.) Upgrade Additional Components 
If you are using Integration Connectors, you must obtain and install the latest version of the integration runtime. For more information, refer to Initial Setup of Integration Runtime Service for Integration Connectors.
If you are upgrading the MSSQL JDBC driver, verify the System Requirements and see Configuring ThingWorx for MSSQL to find the appropriate driver.
If you upgraded from 8.x to 9.x and have Java extensions, see Migrating Java Extensions from 8.x to 9.x.
If you are using ThingWorx Analytics as part of your solution, two installers are available to handle component upgrades:
Analytics Server – installs or upgrades Analytics Server and Analytics Extension
Platform Analytics – installs or upgrades Descriptive Analytics and Property Transforms
For more information about the upgrade procedures, see ThingWorx Analytics Upgrade, Modify, Repair
L.) Additional Optional Cleanup for 9.3+ 
If you are upgrading from ThingWorx 9.2.x or earlier, and you have enabled single sign-on with access token encryption, there is an optional cleanup step you might want to perform. In releases earlier than 9.3, the KeyCzar tool is used to encrypt access tokens before they are persisted to the database. KeyCzar requires the creation of a symmetric folder in the ThingworxPlatform/ssoSecurityConfig folder of your ThingWorx installation directory.
The KeyCzar tool is now deprecated. In ThingWorx 9.3 and later, it has been replaced by the use of Tink for the encryption of access tokens. Tink does not require the symmetric folder or the keyczarKeyFolderPath parameter in the ThingWorx sso-settings.json file. You can leave these files and settings as they are and ThingWorx 9.3 and later will simply ignore them. But if you decide to remove them, you must wait until the upgrade procedure is complete.
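If you do choose to remove the deprecated KeyCzar artifacts after the upgrade completes, a sketch of the cleanup follows. The ThingworxPlatform path is an assumption; adjust it to your installation:

```shell
# Path is an assumption -- adjust to your ThingworxPlatform location.
rm -rf /ThingworxPlatform/ssoSecurityConfig/symmetric
# Then manually remove the "keyczarKeyFolderPath" entry from sso-settings.json.
```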
M.) Troubleshooting 
If the upgrade failed due to missing values for custom index fields, you must redeploy the older version (if schema updates were made, roll back or restore the database), then either add the missing index values or remove the custom indexes from the data table, and then perform the upgrade again.
After starting the ThingWorx platform, check the Application log for the platform. If you are using MSSQL, PostgreSQL, or H2, you may see the following property conflict error messages.
Error Troubleshooting
Application Log Error
Resolution
Error in copying permissions: Problems migrating database
This migration error is seen for MSSQL upgrades and displays if any migrated service, property, or event that has run time permissions configured has a name longer than 256 characters. To fix this error, limit all service, property, and event names to 256 characters or fewer.
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator] [I: ] [U: SuperUser] [S: ]
[T: localhost-startStop-1] Thing: <Name of Thing>, has a property which conflicts
with one of the following system properties: isReporting,reportingLastChange,reportingLastEvaluation.
Please refer to the ThingWorx Platform 8.4 documentation on how to resolve this problem.
As part of the Thing Presence feature added to ThingWorx platform 8.4, the following properties were added to the Reportable Thing Shape and are used as part of presence evaluation on the things that implement this shape:
isReporting
reportingLastChange
reportingLastEvaluation
If one of the property names above previously existed on a Thing, Thing Template, or Thing Shape, errors like those shown here appear in the Application log when the platform starts up. To resolve this problem, remove the conflicting property from each affected entity and update any associated entities (for example, mashups or services) to accommodate the change. Without this update, the associated Things cannot display their reporting status properly and cannot be updated or saved. Once these entities are updated properly, the platform-specific reporting properties are displayed and used to evaluate whether a device is connected and communicating.
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator] [I: ] [U: SuperUser]
[S: ] [T: localhost-startStop-1] ThingTemplate: <Name of ThingTemplate>, has a
property which conflicts with one of the following system properties:
isReporting,reportingLastChange,reportingLastEvaluation.
Please refer to the ThingWorx Platform 8.4 documentation on how to resolve this problem.
[L: ERROR] [O: c.t.p.m.BaseReportingMigrator]
[I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] ThingShape:
<Name of ThingShape>, has a property which conflicts with one of the following system properties:
isReporting,reportingLastChange,reportingLastEvaluation. Please refer to the ThingWorx Platform
8.4 documentation on how to resolve this problem.