Preparing the Codebeamer 3.0 Schema Upgrade in Codebeamer 2.2.1.0
Refer to this topic if you intend to migrate to Codebeamer 3.0.0.0.
Before you migrate to Codebeamer 3.0, you must first prepare the 3.0 schema to be able to use legacy Working-Sets in Codebeamer 3.0. The migration is performed using Codebeamer version 2.2.1.0, which contains the migration module.
To prepare the 3.0 schema, use the migration module, which prepares the schema as needed for version 3.0.
|
You cannot install Codebeamer 3.0 until you have installed Codebeamer version 2.2.1.0 and have successfully completed the migration.
|
After you prepare the 3.0 schema in version 2.2.1.0, upgrade to version 3.0 to use legacy Working-Sets in version 3.0.
|
You must be a user added to the System Administrator group to use the migration module.
|
Preparing for the Schema Upgrade
The following operations are unavailable after you start the migration:
• Extending a Working-Set.
• Merging new items from Working-Sets to the Default Working-Set.
• Moving items from Working-Sets to the Default Working-Set.
• Changing the Shared flag of a tracker (including Project Configuration Deployment).
• Physical deletion of a project, tracker, or tracker item.
A project, tracker, or tracker item can be deleted during migration (logical deletion moving the item to the trash), but cannot be physically deleted.
PTC recommends updating the Slogan field when the upgrade is in progress. To set a message in the Slogan field, go to > . For more information, see Login and Welcome Texts.
Use the following steps to prepare for the schema upgrade:
1. Identify the trackers that are shared and branched in your environment.
|
A tracker's shared or branched attribute cannot be modified after the schema upgrade. It is important that you identify the trackers that you want as shared or branched before beginning the upgrade. Your current attribute settings will not prevent an upgrade to Codebeamer 3.0.0.0.
|
Trackers of the following tracker types are always shared (not branched): Area, Contact, Team, RPE Report, Test Run, and Time Recording.
PTC recommends that the trackers of the following tracker types are shared: Bug, Change Request, Epic, Issue, Platform, Release, Task, Test Configuration, and User Story.
PTC recommends that the trackers of the following tracker types are branched: Component, Configuration Item, Document, Requirement, Risk, Test Case, and Test Set.
A tracker's shared/branched attribute is determined during migration in the following order of precedence; the first condition that is met applies:
◦ If a tracker has a branch, it is set as branched even if its tracker type is one of those that are always shared (not branched).
◦ If a tracker is shared in even a single Working-Set, it is set as shared.
◦ If the tracker type of a tracker is one of those that are recommended to be shared (not branched), the value as specified on the tracker configuration page (Shared checkbox) is considered. This is the only case where the value as specified on the tracker configuration page is considered.
◦ If the tracker type of a tracker is one of those that are always shared (not branched), the tracker is set as shared.
◦ In all other cases the tracker is set as branched.
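As an illustration, the precedence rules above can be sketched as a small function. This is a hedged sketch only: the function and parameter names are hypothetical and not part of Codebeamer, and the type groupings mirror the lists in this step.

```python
# Hypothetical sketch of the shared/branched precedence rules described above.
ALWAYS_SHARED = {"Area", "Contact", "Team", "RPE Report", "Test Run", "Time Recording"}
RECOMMENDED_SHARED = {"Bug", "Change Request", "Epic", "Issue", "Platform",
                      "Release", "Task", "Test Configuration", "User Story"}

def resolve_shared(tracker_type, has_branch, shared_in_any_working_set, shared_checkbox):
    """Return 'shared' or 'branched'; the first matching rule wins."""
    if has_branch:                          # 1. any branch forces branched
        return "branched"
    if shared_in_any_working_set:           # 2. shared in even a single Working-Set
        return "shared"
    if tracker_type in RECOMMENDED_SHARED:  # 3. only here the Shared checkbox counts
        return "shared" if shared_checkbox else "branched"
    if tracker_type in ALWAYS_SHARED:       # 4. always-shared tracker types
        return "shared"
    return "branched"                       # 5. all other cases
```

For example, a Requirement tracker that already has a branch is set as branched even if its Shared checkbox is selected.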
Use the following scripts to identify the shared trackers that must be branched, and vice versa:
Oracle:
/* List all trackers which are always or are recommended to be Shared, but the attribute is set to Branched in the tracker configuration */
SELECT tt.id, trrev.name, tt.proj_id
FROM task_type tt
INNER JOIN object tr ON tr.id = tt.id
INNER JOIN object_revision trrev ON tr.id = trrev.object_id AND tr.revision = trrev.revision
WHERE tr.type_id = 16
/* trackers that are always or are recommended to be Shared (and whose Shared checkbox is alterable on the UI) */
AND tt.desc_id IN (
/* always "Shared": Area, RPE Report (alterable in pre-2.2.1.1 release) */
151, 110,
/* recommended to be "Shared": Task, Change Request, Epic, Release, User Story */
6, 3, 12, 103, 10
)
/* but are set as Branched */
AND (trrev.description not like '%"sharedInWorkingSet":%' OR JSON_VALUE(trrev.description, '$.sharedInWorkingSet') = 'false')
AND tr.deleted = 0
ORDER BY tt.id;
/* List all trackers which are recommended to be Branched, but the attribute is set to Shared in the tracker configuration */
SELECT tt.id, trrev.name, tt.proj_id
FROM task_type tt
INNER JOIN object tr ON tr.id = tt.id
INNER JOIN object_revision trrev ON tr.id = trrev.object_id AND tr.revision = trrev.revision
WHERE tr.type_id = 16
/* trackers that are recommended to be Branched (and whose Shared checkbox is alterable on the UI): */
/* Component, Configuration Item, Requirement, Risk, Testcase, Testset */
AND tt.desc_id IN (105, 101, 5, 11, 102, 108)
/* but are set as Shared */
AND (trrev.description like '%"sharedInWorkingSet":%' and JSON_VALUE(trrev.description, '$.sharedInWorkingSet') = 'true')
AND tr.deleted = 0
ORDER BY tt.id;
Postgres:
/* List all trackers which are always or are recommended to be Shared, but the attribute is set to Branched in the tracker configuration */
SELECT tt.id, trrev.name, tt.proj_id
FROM task_type tt
INNER JOIN object tr ON tr.id = tt.id
INNER JOIN object_revision trrev ON tr.id = trrev.object_id AND tr.revision = trrev.revision
WHERE tr.type_id = 16
/* trackers that are always or are recommended to be Shared (and whose Shared checkbox is alterable on the UI) */
AND tt.desc_id IN (
/* always "Shared": Area, RPE Report (alterable in pre-2.2.1.1 release) */
151, 110,
/* recommended to be "Shared": Task, Change Request, Epic, Release, User Story */
6, 3, 12, 103, 10
)
/* but are set as Branched */
AND (trrev.description not like '%"sharedInWorkingSet":%' OR trrev.description::jsonb->'sharedInWorkingSet' = 'false')
AND tr.deleted = 0
ORDER BY tt.id;
/* List all trackers which are recommended to be Branched, but the attribute is set to Shared in the tracker configuration */
SELECT tt.id, trrev.name, tt.proj_id
FROM task_type tt
INNER JOIN object tr ON tr.id = tt.id
INNER JOIN object_revision trrev ON tr.id = trrev.object_id AND tr.revision = trrev.revision
WHERE tr.type_id = 16
/* trackers that are recommended to be Branched (and whose Shared checkbox is alterable on the UI): */
/* Component, Configuration Item, Requirement, Risk, Testcase, Testset */
AND tt.desc_id IN (105, 101, 5, 11, 102, 108)
/* but are set as Shared */
AND (trrev.description like '%"sharedInWorkingSet":%' and trrev.description::jsonb->'sharedInWorkingSet' = 'true')
AND tr.deleted = 0
ORDER BY tt.id;
2. Check your database size before beginning the migration to ensure that it has sufficient free space.
|
The migration module copies the data from the existing tables into new tables, without deleting any data. This results in a significant increase in the database size. Ensure that your database has at least 150% free space relative to its current size. For instance, if your database has 500 megabytes of used space, it must have at least 750 megabytes of free space.
The earlier data tables will be deleted in a subsequent release of Codebeamer after version 3.0.
|
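The 150% rule from the previous step can be expressed as a quick sanity check. This is an illustrative sketch; the function name is made up for this example.

```python
def has_enough_free_space(used_mb: float, free_mb: float) -> bool:
    """Check the 150% rule described above: free space must be at
    least 1.5 times the currently used space before migration."""
    return free_mb >= used_mb * 1.5

# Example from this topic: 500 MB used requires at least 750 MB free.
```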
3. Ensure that there are no orphaned branches that do not belong to any Working-Set.
You can use the following query to check for orphaned branches. It should return nothing.
SELECT * FROM object o
LEFT OUTER JOIN object_reference obj_r ON obj_r.to_id = o.id AND obj_r.from_type_id IN (5,3)
LEFT OUTER JOIN object ws ON ws.id = obj_r.from_id AND obj_r.from_type_id = 5
WHERE o.type_id = 36 AND ws.id IS NULL;
Alternatively, once you confirm that the branches are unused, run the following query to delete orphaned branches:
DELETE FROM object WHERE id IN (
SELECT o.id FROM object o
LEFT OUTER JOIN object_reference obj_r ON obj_r.to_id = o.id AND obj_r.from_type_id IN (5,3)
LEFT OUTER JOIN object ws ON ws.id = obj_r.from_id AND obj_r.from_type_id = 5
WHERE o.type_id = 36 AND ws.id IS NULL);
|
If there are orphaned branches, you must delete such branches or assign them to a Working-Set. The migration will stop if the process detects orphaned branches. The exported report will display the related errors. For more information, see Exporting the Report.
|
4. Fix duplicated Working-Set Relation IDs.
Codebeamer contains a fix for duplicated Working-Set Relation IDs. However, Codebeamer can fix the duplicated records only if there are no records where the Relation ID is null. The following script should return 0:
SELECT count(*) FROM task WHERE working_set_relation_id IS NULL;
|
If there are records with a null Relation ID, those records must be fixed before the migration. The migration immediately stops if the process detects such records. Contact PTC Technical Support for help with fixing this issue. The exported report displays the related errors. For more information, see Exporting the Report.
|
5. Fix tasks with working_set_relation_id as null.
If a task exists where the working_set_relation_id is null, the migration cannot be started. This check is performed when the migration module loads, and can result in slower load times if the migration has not started and the database is large. If the working_set_relation_id error is encountered, an error message is displayed on the System Administration page instead of the migration module. The REST API will start the migration, but the migration stops if the working_set_relation_id error is encountered. If you need help with working_set_relation_id null issues, contact PTC Technical Support.
6. If you have cloned instances, you must run the migration on only one of the instances. PTC recommends that you run the migration on the cloned instance and not the primary instance, so that there are no data changes or performance issues on the primary instance. However, the affected features must be unavailable during migration on both instances.
To achieve this, set the following environment variable: CB_mimicRunningScalableWsMigration_migrationStarted.
"mimicRunningScalableWsMigration" : {
"migrationStarted" : true
}
If you set migrationStarted to true, Codebeamer assumes that the migration is in progress and makes certain features unavailable. PTC recommends that you set this to true on the primary instance and start the migration on the cloned instance. This ensures that the features are unavailable on both instances. The migration can run on the cloned instance while the features are unavailable on the primary instance.
Limitations during Migration
If you need to perform operations that are
unavailable during migration, do the following:
1. Stop the migration.
2. Do a Hard Reset.
3. Apply the necessary changes.
4. Restart the migration.
Migration Process
The migration from 2.2.1.0 to 3.0 is a two-phase migration. The first phase is performed in the background by the migration module in Codebeamer version 2.2.1.0. During this phase, Codebeamer can be used normally. The second phase is performed during the first deployment of Codebeamer 3.0. This step finalizes and completes the migration.
Use the following steps to migrate your tasks:
1. Upgrade to the provided Codebeamer version 2.2.1.0, which contains the migration module.
|
This step is mandatory. You must first upgrade to version 2.2.1.0 to prepare the 3.0 schema.
|
2. Start the migration manually using the migration module UI as described in this topic. The migration process operates asynchronously. A background job is scheduled and the migration is considered started, but the job may not start immediately. Pre-migration checks are part of this job. If the job detects check failures, such as the check for orphaned branches, the migration immediately stops in the background without any notification.
3. Check the migration process regularly based on the information in the migration module. The migration runs in the background, and Codebeamer can be used as usual during the migration process. Some operations are unavailable during the migration.
The migration process is as follows:
a. The migration process checks for stuck in-progress items that could be the result of an application crash or other severe error. The state of such items is changed to MODIFIED, so that they are processed again. For more information, see Item Status Values.
b. The migration collects all the data that needs to be migrated. If the migration process has been run earlier, only the changes since the last run are collected.
c. The collected data is migrated into the new tables or marked as faulty if the migration fails for some reason. More details are in subsequent sections in this topic.
d. The successfully migrated data is validated. The data in the new tables is compared with the data in the earlier tables. If a difference is detected, the data is marked as inconsistent. More details are in subsequent sections in this topic.
e. When all the data is migrated, the process sleeps for ten minutes.
f. The process automatically restarts at this step and begins the cycle again to check if new data needs to be migrated.
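The cycle described in steps a. through f. can be sketched as a simplified loop. This is only an illustration of the control flow; the callables are hypothetical stand-ins for the real migration module, and the real job also handles stuck items, retries, and error reporting.

```python
import time

def run_migration(collect_changes, migrate, validate, stop_requested,
                  sleep_seconds=600):
    """Simplified sketch of the background migration cycle:
    collect only the changes since the last run, migrate them,
    validate them, sleep for ten minutes, then start over."""
    while not stop_requested():
        for item in collect_changes():   # b. collect data to migrate
            if migrate(item):            # c. copy into the new tables
                validate(item)           # d. compare new tables with old tables
        time.sleep(sleep_seconds)        # e. sleep, then f. begin the cycle again
```

Because the loop never ends on its own, the migration must be stopped manually, as described in the next step.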
4. To proceed, stop the migration manually by using the migration module. For more information, see Migration Module.
|
The migration process never stops automatically, because new data could emerge if the application is still used during the migration. Do not stop the migration while any item is not yet in one of the terminal states described in subsequent sections in this topic.
|
The steps must be executed in the order specified in the following tables.
Upgrade preparation steps:
1. Ensure you are not trying to deploy Codebeamer 3.0.0.0.
2. Delete 3.0.0.0 schema parts if necessary.
3. Back up the database.
4. Stop Codebeamer.
5.
6. Deploy Codebeamer 2.2.1.0.
Migration steps:
1. Start the migration. The migration can take from a few days to more than a week to complete.
2. Monitor the migration and wait until most of the items are in one of the following terminal states: FINISHED, PARTIALLY_FINISHED, UNRECOVERABLE_ERROR, or VALIDATION_ERROR.
3. Export the migration results for analysis. If there are issues, analyze the exported data to decide whether to proceed with the migration. Reach out to PTC Technical Support if you encounter issues or need guidance.
If you decide to proceed with the migration:
1. Put Codebeamer in maintenance mode.
2. Check if there are changes that are not migrated, and wait until all the data is migrated.
3. Stop the migration.
4. Exclude all items with a status of UNRECOVERABLE_ERROR. For more information, see Migration Module.
5. Stop Codebeamer.
6. Deploy Codebeamer 3.0.0.0 with the updated Working-Sets feature.
7. Run a quick smoke test.
If you are applying a new version:
1. Stop the migration.
2. Stop Codebeamer.
3. Deploy the new version of Codebeamer 2.2.1.0.
4. Migrate and re-validate all items again, and continue from this point.
Additional Information About the Migration Process
• The migration is fully asynchronous, so no errors will be shown on the migration module page.
• There are two types of errors, warnings, and messages:
◦ Process-level: These pertain to the migration process itself. For example, if there are orphaned branches, this is a process-level error and the migration immediately stops.
◦ Item-level: These pertain to the tracker item being migrated. For example, if a task cannot be migrated, the error is an item-level error and pertains to that item.
• The only errors that stop the migration are the errors described in Preparing for the Schema Upgrade. In every other case, the migration keeps trying to migrate the items. Temporary database or network outages, or other errors, do not stop the migration process.
• With a single exception mentioned in the next bullet, the migration module only reads the existing tables and creates new tables that are not being used by Codebeamer 2.2.0.0. It also creates a set of temporary migration tables to keep track of the migration process and to store any error or warning messages.
• If the first revision of a task contains a Modified at field value different from the Submitted at field value, or a Modified by field value different from the Submitted by field value, the data is corrupted, because these values must be the same for the first revision. The new database schema cannot store different values for these fields in the first revision. To address this issue, the inconsistent entries for submittedBy and submittedAt are deleted before migration.
Throttling, Retries, and Revision Skip
• Throttling: The migration supports throttling. If enabled, the process keeps track of the average number of free database connections and the average CPU load. If either of them exceeds a predefined threshold, the migration is scaled down. For instance, the thresholds could be as follows:
◦ The number of free database connections is less than 20% of the pool on average.
◦ The CPU load is more than 80% on average.
Conversely, the migration is scaled up as long as the threshold values are not exceeded. This ensures a balance between migration speed and application database load.
• Automatic retry: If an item migration fails, the migration is automatically retried three times. This ensures that no item is lost because of temporary outages. However, if an item is still not migrated after reaching the maximum number of retries, it is possible to reset the migration or re-validate the failed tasks, which is useful when the outage was longer than expected. For more information, see Migration Module.
• Revision skip: If a revision cannot be migrated due to a programmatic error, the migration module tries to create an empty (dummy) revision, in which there are no field value changes, and proceeds to the next revision. Revision information is retained to avoid broken references in the data. If one or more revisions are skipped, but not all of them, the final item status is PARTIALLY_FINISHED. If all revisions are skipped, the final item status is UNRECOVERABLE_ERROR.
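The throttling thresholds and the revision-skip outcomes described above can be sketched as two small decision functions. This is an illustrative sketch with hypothetical names, using the example threshold values from this section; the outcome of final_item_status additionally assumes that validation of the migrated revisions succeeded.

```python
def throttle_decision(free_connection_ratio, cpu_load,
                      min_free_ratio=0.20, max_cpu_load=0.80):
    """Scale the migration down when either example threshold is
    exceeded, otherwise allow it to scale back up."""
    if free_connection_ratio < min_free_ratio or cpu_load > max_cpu_load:
        return "scale_down"
    return "scale_up"

def final_item_status(total_revisions, skipped_revisions):
    """Terminal item status based on how many revisions were skipped."""
    if skipped_revisions == 0:
        return "FINISHED"
    if skipped_revisions < total_revisions:
        return "PARTIALLY_FINISHED"
    return "UNRECOVERABLE_ERROR"
```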
Process Status Values
The following are the possible process status values based on the order in which the migration module executes them:
• NEW: The upgrade has not yet started.
• MIGRATION_ITERATION_SCHEDULED: The upgrade has started, and a new iteration is already scheduled. (The migration is currently in the ten-minute sleeping period.)
• PREPARE_MIGRATION_PHASE_IN_PROGRESS: The pre-upgrade step is currently in progress. The migration module is bulk loading the tables where possible and preparing the migration tables for later phases.
• TASK_UPGRADE_PHASE_IN_PROGRESS: The tracker item upgrade is currently in progress.
• VALIDATION_PHASE_IN_PROGRESS: The tracker item validation phase is currently in progress. Each revision of each tracker item is checked to ensure that the content loaded from the new tables is the same as the content loaded from the earlier tables.
• MERGE_INFO_MIGRATION_IN_PROGRESS: The merge information migration is currently in progress.
• POST_MIGRATION_PHASE_IN_PROGRESS: A bulk validation that checks that the data does not violate the database constraints that will be enforced later.
• ERROR: The migration failed with an error. This does not stop the migration, only the current iteration. The migration is retried in the next iteration.
• STOPPING: The migration is being stopped.
• STOPPED: The migration was stopped manually or due to an unrecoverable error. (Terminal state)
• UNDER_RESET: The migration tables are undergoing a hard reset. The migration module truncates all target tables and resets the migration state as if it had never been run.
Item Status Values
The following are the possible item status values:
• NEW: Migration of the item has not yet started.
• MIGRATION_IN_PROGRESS: Migration is currently in progress.
• ERROR: Migration failed due to an unexpected error, for example, a network outage. The migration is retried if the maximum number of retry attempts has not yet been reached.
• UNRECOVERABLE_ERROR: The maximum number of retries has been reached and the migration failed with an unrecoverable error, for example, mandatory data is missing in the task. Migration of such items is not retried. (Terminal state)
• VALIDATION_ERROR: Migration failed with a validation error; the generated hashes of the source and target objects are different. Applies only to Task items. (Terminal state)
• READY_FOR_VALIDATION: The item is successfully migrated and ready for validation. Applies only to Task items.
• VALIDATION_IN_PROGRESS: Validation is currently in progress.
• FINISHED: Migration and validation have successfully completed. (Terminal state)
• PARTIALLY_FINISHED: The migration finished with one or more revision errors (skipped revisions). (Terminal state)
• MODIFIED: The item has been modified since the last successful migration, for example, a new revision was created and the modifications need to be migrated. This status is similar to NEW but indicates that previous migration work was already done on this item.
• EXCLUDED: The task is manually excluded from migration. Excluded items are completely deleted from the old and new tables; otherwise, the database constraints cannot be enforced due to foreign key constraint violations.
The migration flowchart is as follows:
Task Migration Flowchart
Status values with a regular border indicate healthy states. Status values with a dashed border indicate intermediate states. Status values with a double-dashed border indicate an error or exclusion, as displayed in the flowchart.
Field Conversion Errors
The following value conversion or value transfer failures are not considered errors, but are logged in the TEMP_FIELD_MIGRATION_ERROR table. Such failures can be fixed or ignored:
• FieldConversion: The old schema stores field values as a string, while the new schema stores them based on data type. This means that during the migration the values need to be converted to the appropriate type. For example the string “123” is migrated as a number, 123. As the old schema allowed the storing of invalid values for a particular type, the conversion may fail. For example, the string “two” in a number field may result in a failure. For field conversion errors, the value is carried over as a string as a fallback mechanism.
• BrokenReference: A reference field points to an item which is missing from the database. Such fields are not migrated.
• InvalidReference: The reference is not in a valid format and cannot be parsed.
• MissingCellIdForTableReference: The cell ID of the field value cannot be determined. The field is skipped as it cannot be migrated.
• Duplicate: If there are multiple references with the same properties except for the ordinal, only the first field is migrated, and all other fields are skipped.
• LegacyReferenceWarning: The reference is in a legacy format, and the migration module cannot ascertain if it is valid.
• AmbigousCreatorUsers: The Submitted by and Modified by field values are different for the first revision which is not a valid case. The Submitted by field is used during migration to fix the error.
• MissingLayout: The field is missing from the tracker's basic layout. As there is no layout where type information can be found, the field is migrated as a string.
• TableLayoutMismatch: Fields were found that don’t align with the table layout. For instance, cells outside existing columns.
• DuplicateChoice: The choice field contains multiple occurrences of the same choice.
• ReferenceTargetNotMigrated: The target of the reference is in an unrecoverable error state. So the reference is skipped from validation.
• ReferenceInChoiceOptionField: A reference value found in a choice field.
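The fallback behavior described for FieldConversion above can be sketched as follows. This is a hedged illustration, not Codebeamer code; the function name is hypothetical, and the real module works on database rows rather than Python types.

```python
def convert_field_value(raw: str, target_type: type):
    """Sketch of the FieldConversion fallback: try to convert the stored
    string to the typed value; if the old data is invalid for the type,
    carry the value over as a string and record a conversion error
    (logged to TEMP_FIELD_MIGRATION_ERROR in the real module)."""
    try:
        return target_type(raw), None
    except (ValueError, TypeError):
        return raw, "FieldConversion"

# The string "123" in a number field converts to 123;
# the string "two" falls back to the original string.
```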
The following temporary tables are used by the migration module:
• temp_baseline_item_status
• temp_common_id_mapper
• temp_field_migration_error
• temp_merge_info
• temp_upgrade_item_status
• temp_upgrade_process_status
• temp_validation_result
• temp_ws_only_item_id_mapper
Migration Module
The migration module can be used to migrate your legacy Working-Sets so that they can be used in Codebeamer 3.0.
Accessing the Migration Module
To access the migration module, in Codebeamer version 2.2.1.0, go to System Administration and click Migration module.
The migration module page is displayed. The following is a sample of a migration that is in progress:
The checkboxes with item status names appear on the migration module page if there is at least one item that has been migrated containing that status.
Starting and Stopping the Migration
Click Start migration to start the migration. If the migration is running, the button changes to Stop migration. To stop the migration, click it.
When migrating the database, keep the following in mind:
• Both start and stop functions are asynchronous.
• Start migration will schedule the migration job, but it may not start immediately.
• Stop migration will notify the migration module to stop, but the migration will not be immediately halted. Depending on the operation in progress when the migration is being stopped, it could take from a few seconds to several minutes. For instance, in the post migration phase, there are large-scale queries running for ten to twenty minutes. The queries are not aborted but between each query, there are checks to determine if the job should abort and exit. The status appears as STOPPING until the job exits.
The migration module will not process any new items but will continue processing the ongoing items. This might take some time based on the number of items currently being processed.
Migration Parameters
Before the migration starts, the following Migration parameters dialog box appears:
The migration parameters are as follows:
Thread pool size: The number of threads used for migration. Higher values improve performance, but also significantly increase CPU usage and database load. The value must be less than 70% of the maximum size of the connection pool. For example, if the maximum number of connections is 35, the thread pool size must not exceed 24. Increasing the thread pool size above the threshold degrades performance instead of improving it. To allow for higher thread pool values, the connection pool size must be increased correspondingly. The default thread pool size is 20.
Throttling: If this checkbox is selected, the thread pool size is dynamically adjusted, and the set size acts as the maximum value. If the migration module detects that the average CPU usage is very high, or the average number of idle connections is too low, it decreases the pool size to lower the load. Conversely, if the average CPU load is too low, or the average number of idle connections is very high, throttling increases the pool size up to the specified maximum value. This checkbox is useful to automatically scale down the migration when Codebeamer is used during the day and to scale it up at night.
Detailed DB log: If this checkbox is selected, a very large amount of log information will be added to the migration module temp tables. Use this checkbox only for development purposes. Do not use it on a large dataset, because it will significantly decrease migration performance.
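The 70% sizing rule for Thread pool size can be expressed as a quick calculation. This is an illustrative sketch; the function name is made up for this example.

```python
def max_thread_pool_size(max_connections: int, ratio: float = 0.7) -> int:
    """Largest safe thread pool size: the thread pool should stay
    below 70% of the maximum connection pool size, per the rule above."""
    return int(max_connections * ratio)

# Example from this topic: a pool of 35 connections allows at most 24 threads.
```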
Hard Reset Migration
Click Hard reset migration to reset the migration job; it is then considered as if it had never been run. Clicking this button truncates all migration-related tables except temp_upgrade_process_status. The migration can then be restarted from the beginning. Click Hard reset migration only when the migration is stopped.
Migration Process Details
Refer to the summary and the table on this page for the migration status. For more information, see Process Status Values and Item Status Values.
Exporting the Report
To export the report, click Export report. This button is visible only when the migration is not running.
You can export the report as a ZIP file.
If you encounter errors or special cases that are not handled by the migration module, reach out to
PTC Technical Support for help.
The exported ZIP file contains the following CSV and TXT files:
• process_messages.txt
This file contains all the process-level messages and errors. It does not contain item-level messages. Post validation, database constraint warnings which cannot be tied to a task are also reported in this file.
• status_summary.csv
The table as visible on the screen, is exported as a CSV file.
• field_migration_errors.csv
This file contains all the field-related errors encountered during the migration. Checkboxes have no effect on the content of this file. For details on these errors, see Field Conversion Errors.
A sample is as follows:
globalUniqueItemId,revision,fieldId,targetReference,type,message
1294,1,13001,,MissingLayout,"The field is missing from the revision's layout, migrating it as string"
• tasks.csv
This file contains the tasks with the selected status values. Status values are included if there is at least one item with that status.
| If you select FINISHED and there are many items with that status, the file size of the CSV file can be large. The same applies if you select VALIDATION_ERROR. |
A sample is as follows:
itemId,status,latestKnownRevisionId,highestProcessedRevisionId,remainingRetryAttempts,messages
23927,EXCLUDED,2,0,0,"
2024-11-11 09:14:58: Task item successfully collected
Error during trying to upgrade task: org.springframework.dao.DataIntegrityViolationException:
### Error updating database. Cause: java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into (""CBTEST"".""BRANCH_ITEM_HISTORY_REVISION"".""ITEM_ID"")"
• validation_errors.csv
This file contains all the tasks with validation errors.
A sample is as follows:
globalUniqueItemId,revisionId,status,oldHash,oldObject,newObject
1270,1,FAILED,39bdf82f543569b1ba813a36f34589debb85fafcb679b707786d48a76f6f2775,"[{""assocId"":null,""biDirect"":false,""biDirectIncomingSuspected"":false,...[TRUNCATED]...}]","[{""assocId"":null,""biDirect"":false,""biDirectIncomingSuspected"":false,...[TRUNCATED]...}]"
Get New Changes Since Last Iteration
While Codebeamer can be used in parallel during the migration, you must create a maintenance window to ensure data integrity. When Maintenance mode is turned on and users are no longer allowed to use Codebeamer, there is a small window during which the very last changes may not have been migrated.
Click Get new changes since last iteration to verify whether all the data is processed, or whether there are still pending tasks or task revisions that need to be migrated. If some data is pending, wait for a final migration iteration to migrate the remaining data. When all numbers appear as 0, the upgrade process can advance to the next step.
Again Migrate Failed Tasks
The Re-migrate failed tasks button is enabled if there is at least one task item with a status of UNRECOVERABLE_ERROR or PARTIALLY_FINISHED. When you click this button, all migrated data is removed and the item status is changed to MODIFIED, so it can be migrated again.
Use this button only if there was a temporary error, such as connection pool exhaustion that lasted a few minutes and caused the migration of some items to fail. Do not use this button in other cases, because it will result in the same output. If you continue to see the same errors, contact PTC Technical Support.
Re-Validate Tasks
The Re-validate failed tasks button is enabled if there is at least one task item with a status of VALIDATION_ERROR. When you click this button, the item status is changed to READY_FOR_VALIDATION.
Use this button only if there was a temporary error, such as connection pool exhaustion that lasted a few minutes and caused the validation of some items to fail. Do not use this button in other cases, because it will result in the same output. If you continue to see the same errors, contact PTC Technical Support.
| Do not use the Re-validate failed tasks button if there are items with a status of PARTIALLY_FINISHED, because skipped revisions are not validated. Attempting to validate such items will always result in a status of VALIDATION_ERROR. |
Exclude Unrecoverable Errors
The Exclude unrecoverable errors button is enabled if there is at least one task with a status of UNRECOVERABLE_ERROR.
Despite all attempts to migrate the data, there might be items that cannot be migrated because of data corruption or other issues. Un-migrated or half-migrated items would prevent applying database restrictions such as foreign keys or not-null constraints.
To ensure every constraint can be applied later you may decide to skip such items from the migration. In such a case, the problematic data needs to be deleted from the old and the new database tables.
| Exercise caution when clicking the Exclude unrecoverable errors button. Excluded items are fully deleted from all tables, except the temporary migration tables to keep a record of such an action. This action cannot be undone, even with a hard reset. |
Changes Post Migration
This section discusses the changes that occur after the migration is complete.
Handling of Shared Trackers and Working-Sets
On the migration module page, the Working-Set and tracker combinations section shows the shared trackers that are no longer shared after the upgrade, because they were branched before the schema upgrade. If a tracker is branched in any Working-Set (including the Default Working-Set), it is set as branched in the configuration and removed from all the Working-Sets where it was shared. This is based on changes introduced in Codebeamer 3.0.0.0.
Migration of Merge Information
On upgrade, all cleared item badge information is lost.
During migration, items are compared between the current and parent Working-Set. If they are different, a merge badge is created for the item on the current Working-Set. This scenario can be challenging if the user had previously marked an item as merged to another Working-Set using the Mark as merged checkbox (without actually merging the content). In this case, the item badge is cleared although the content in the current Working-Set is different from the content in the other Working-Set. When such an item is migrated, the migration module finds a difference between the item in the current Working-Set and the other Working-Set and creates a merge badge for the item, disregarding the Mark as merged checkbox that was selected previously. After data migration, the user has to remove the newly added badges manually by attempting the merge again and selecting the Mark as merged checkbox. This can be challenging if there are a large number of items with newly added merge badges after data migration. There is no way to provide a report to identify such items with merge badges.