Database Migration
This guide describes how to migrate a Codebeamer database to an Oracle or PostgreSQL target using the database migration tool.
Concept
The migration tool creates a source hash and a target hash table for every table. Each hash table stores the row's primary key(s) together with the MD5 hash of the remaining columns. The hash is computed for every row, on both the source and the target database, so that the migrated data can be verified against the original.
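As a rough illustration, the per-table hash generation could be expressed as SQL like the following. The table name task, its primary key id, and its columns are assumptions made for this example; the tool generates the real hash tables automatically and its exact algorithm may differ:
-- Store each row's primary key plus an MD5 hash of the remaining columns.
-- CONCAT_WS skips NULL values; the real tool's NULL handling may differ.
CREATE TABLE task_source_hash AS
SELECT id,
       MD5(CONCAT_WS('|', name, description)) AS row_hash
FROM task;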
Prerequisites
Supported Codebeamer versions:
22.04 or higher SPs
22.10-SP7 or higher SPs
2.0.0.0 or higher SPs
2.1.0.0 or higher SPs
A Linux machine with Docker installed.
An Oracle 19.x or PostgreSQL 16.x target database.
The source database must be accessible only by the migration tool.
An empty schema must be created in the target database.
The Artifact in Database feature is not in use.
Enough free disk space for the new hash tables on both the source and target databases. It is recommended to keep at least half of the allocated disk space free.
* From MySQL 8.0.30, the Generated Invisible Primary Keys feature is available. Before starting the migration script, ensure that sql_generate_invisible_primary_key is OFF.
* Use the IP address of the host machine instead of localhost for the database host (both source and target).
* To edit config.conf on Windows, use Notepad. Do not use a rich text editor.
* If possible, avoid special characters in the target database password.
Limitation
Temporary tables are not dropped from the source database after the migration. Their names end with _temp.
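To review the leftover temporary tables on a MySQL source, a query along these lines can be used (a sketch; adjust for your schema):
-- List tables in the current schema whose names end with _temp
SELECT table_name
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE '%\_temp';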
How to Create the PostgreSQL Database
The following configuration should be used when creating the database:
LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' ENCODING 'UTF8'
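For example, a complete CREATE DATABASE statement could look like this; the database name and owner are illustrative assumptions:
-- template0 is required when the locale differs from the cluster default
CREATE DATABASE codebeamer
    OWNER cb_user
    ENCODING 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TEMPLATE template0;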
How to Create the Oracle User
The following grants should be applied when creating the database user:
GRANT CREATE SESSION, CREATE TYPE, CREATE TABLE, CREATE CLUSTER, CREATE TRIGGER, CREATE OPERATOR, CREATE SEQUENCE, CREATE INDEXTYPE, CREATE PROCEDURE, CREATE VIEW
GRANT EXECUTE ON CTXSYS.CTX_DDL
GRANT EXECUTE ON DBMS_LOB
GRANT EXECUTE ON DBMS_CRYPTO
The DBMS_CRYPTO grant can be revoked after the migration.
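Spelled out as complete statements for a hypothetical migration user named CB_MIGRATION, the grants could look like this:
GRANT CREATE SESSION, CREATE TYPE, CREATE TABLE, CREATE CLUSTER,
      CREATE TRIGGER, CREATE OPERATOR, CREATE SEQUENCE, CREATE INDEXTYPE,
      CREATE PROCEDURE, CREATE VIEW TO CB_MIGRATION;
GRANT EXECUTE ON CTXSYS.CTX_DDL TO CB_MIGRATION;
GRANT EXECUTE ON DBMS_LOB TO CB_MIGRATION;
GRANT EXECUTE ON DBMS_CRYPTO TO CB_MIGRATION;
-- After the migration has finished:
REVOKE EXECUTE ON DBMS_CRYPTO FROM CB_MIGRATION;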
Migration Stages
Pre-Stage - Steps Run on the Source Database
1. Create temporary tables with primary keys for history_search_update, initial_version, locks, session_user_licenses, task_reference_tag, user_key, reference_search_history, task_field_value, field_value_search_history, project_activity_log, computed_field_lookup, object_access_permission, object_notification, object_reference, object_approval_history, object_reference_filter, task_field_status_specific.
2. Clean up invalid data.
3. Generate source hash.
4. Assert source hash tables generation was successful.
Java Migration Stage
Migrates the data from the source to the target database.
Post-Stage - Steps Run on the Target Database
1. Create target database.
2. Create sequences.
3. Create tables.
4. Create source hash tables.
5. Set version number.
6. Generate target hash.
7. Assert target hash.
8. Validate source and target hash (see the sketch after this list).
9. Drop source and target hash tables.
10. Add views and procedures.
11. Remove invalid data.
12. Add indexes.
13. Add PKs.
14. Add constraints and FKs.
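Conceptually, the hash validation in step 8 reduces to a comparison like the following sketch; the table and column names are assumptions, not the tool's actual schema:
-- Rows whose hash differs between source and target indicate a migration
-- error; an empty result means the table migrated correctly.
SELECT s.id
FROM task_source_hash s
JOIN task_target_hash t ON t.id = s.id
WHERE t.row_hash <> s.row_hash;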
How to Run the Migration Tool
Configuration
Create a configuration file using the following template.
config.conf
# Number of threads used for running the hash computation script. Use the number of CPUs of the source database.
db.parallel.sql_job_number=4

# Number of threads used for running the PK creation script. Use the number of CPUs of the source database (max 11).
db.parallel.pk_job_number=2

db.source.database.type=<mysql|postgresql|oracle>
db.target.database.type=<postgresql|oracle>

db.source.host=<database host>
db.source.port=<database port>
db.source.username=<database username>
db.source.password=<database password>
db.source.database=<database name>

db.target.host=<database host>
db.target.port=<database port>
db.target.username=<database username>
db.target.password=<database password>
db.target.database=<database name>

# Default is true
migrator.run.pre_stage=<true|false>

# Default is true
migrator.run.java_stage=<true|false>

# Default is true
migrator.run.post_stage=<true|false>

# Whether to run hash computation and validation
migrator.verify_rows=<true|false>

# Depends on the available memory of the migration server and the database server
migrator.fetch_size=3000

# Depends on the available memory of the migration server and the database server
migrator.batch_size=3000

# Maximum number of threads used for running the migration script. Use the number of CPUs of your migration machine.
migrator.worker.threads=12

# Number of threads used for processing one table; must be smaller than or equal to migrator.total.sections
migrator.table.parallel=12

# Number of sections that a table is cut into
migrator.total.sections=24

# Drop or keep the hash tables; keeping them is helpful for investigating migration issues. Default is true.
migrator.drop_hash_tables=<true|false>

# See the related section below; default is false
migrator.drop_document_cache=<true|false>

# See the related section below; default is true
migrator.update_boolean_values=<true|false>

# See the related section below; default is true
migrator.remove_000_characters=<true|false>

# Major version of the source Codebeamer
cb.major.version=22.10

# Minor version (service pack) of the source Codebeamer
cb.minor.version=-SP7
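For reference, a filled-in configuration for migrating a MySQL 22.10-SP7 source to PostgreSQL could look like the following sketch; all hosts, ports, credentials, and database names are placeholder assumptions:
db.parallel.sql_job_number=8
db.parallel.pk_job_number=2

db.source.database.type=mysql
db.target.database.type=postgresql

db.source.host=10.0.0.10
db.source.port=3306
db.source.username=cb_source
db.source.password=changeme
db.source.database=codebeamer

db.target.host=10.0.0.20
db.target.port=5432
db.target.username=cb_target
db.target.password=changeme
db.target.database=codebeamer

migrator.run.pre_stage=true
migrator.run.java_stage=true
migrator.run.post_stage=true
migrator.verify_rows=true
migrator.fetch_size=3000
migrator.batch_size=3000
migrator.worker.threads=8
migrator.table.parallel=12
migrator.total.sections=24
migrator.drop_hash_tables=false
migrator.drop_document_cache=false
migrator.update_boolean_values=true
migrator.remove_000_characters=true

cb.major.version=22.10
cb.minor.version=-SP7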
How to Download the Migrator Tool
Docker Images
intland/database-migration-tool:22.04-1.2
intland/database-migration-tool:22.10-1.3
intland/database-migration-tool:2.0-1.2
intland/database-migration-tool:2.1-1.3
How to Pull the Docker Image
docker login -u intlandsnapshot -p 89bc690e-bf19-4014-885b-7cd4c6355ca2
docker pull <docker image>
docker logout
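For example, to pull the image matching a 22.10 source:
docker pull intland/database-migration-tool:22.10-1.3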
How to Start the Migration Tool
docker run -v <path of config.conf>:/conf/config -v <path of directory>:/conf/logs -i <docker image> /rootfs/run-migration.sh
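For example, assuming config.conf is stored in /opt/migration and logs should be written to /opt/migration/logs (paths and image tag are illustrative):
docker run -v /opt/migration/config.conf:/conf/config \
           -v /opt/migration/logs:/conf/logs \
           -i intland/database-migration-tool:22.10-1.3 /rootfs/run-migration.sh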
Database Configuration
PostgreSQL Tuning
synchronous_commit = off
* This configuration must be set back to on for production use.
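A minimal sketch of toggling the setting, assuming ALTER SYSTEM is available on the instance:
-- During migration:
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();
-- Before going to production:
ALTER SYSTEM SET synchronous_commit = on;
SELECT pg_reload_conf();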
Oracle
nls_length_semantics = CHAR
* This configuration is required for both migration and production use.
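A minimal sketch of setting it instance-wide (requires an spfile; affects newly created columns only):
ALTER SYSTEM SET nls_length_semantics = CHAR SCOPE = BOTH;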
Execution Times of Test Runs
We used an approximately 200 GB database running on AWS.
Details:
Source RDS is a db.m5.2xlarge (8 vCPU / 32 GB / 4000 IOPS) instance
Target RDS is a db.m5.2xlarge (8 vCPU / 32 GB / 8000 IOPS) instance
Migration server is an m5.2xlarge (8 vCPU / 32 GB / 2000 IOPS) instance
Network bandwidth is up to 10 Gbps
Execution Time for MySQL -> Oracle
With Verification
Add temporary PKs using 2 threads - 2h 30m
Source hash generation using 4 threads - 1h 10m
Migrate Source to Target - 5h 30m
Target hash generation using 4 threads - 22m
Validate source and target hash - 8m
Drop temporary PKs using 4 threads - 1h
Remove invalid data - 7m
Create indexes using 4 threads - 7m
Add PKs using 4 threads - 2m
Without Verification
Add temporary PKs using 2 threads - 2h 30m
Migrate Source to Target - 1h 50m
Drop temporary PKs using 4 threads - 1h
Remove invalid data - 7m
Create indexes using 4 threads - 7m
Add PKs using 4 threads - 2m
Execution Time for MySQL -> PostgreSQL
With Verification
Add temporary PKs using 2 threads - 2h 30m
Source hash generation using 4 threads - 1h 10m
Migrate Source to Target - 5h
Drop temporary PKs using 4 threads - 1m
Target hash generation using 4 threads - 10m
Validate source and target hash - 21m
Remove invalid data - 14m
Create indexes using 4 threads - 12m
Add PKs using 4 threads - 2m
Without Verification
Add temporary PKs using 2 threads - 2h 30m
Migrate Source to Target - 1h 40m
Drop temporary PKs using 4 threads - 1m
Remove invalid data - 14m
Create indexes using 4 threads - 12m
Add PKs using 4 threads - 2m
Recommendation
The verification step accounts for roughly half of the total migration time. To balance safety and speed, the following configurations are recommended:
Dry-run migration with verification and validation:
migrator.verify_rows=true
migrator.drop_hash_tables=false
Real migration with verification and validation:
migrator.verify_rows=true
migrator.drop_hash_tables=true
Real migration without verification and validation:
migrator.verify_rows=false
migrator.drop_hash_tables=false
Remove Data of document_cache
The document_cache, document_cache_data, and document_cache_data_blobs tables contain data used for faster re-indexing of the database. These tables can be truncated before the migration:
SET foreign_key_checks = 0;
TRUNCATE TABLE document_cache;
TRUNCATE TABLE document_cache_data;
TRUNCATE TABLE document_cache_data_blobs;
SET foreign_key_checks = 1;
Data Inconsistency
If migrator.update_boolean_values is set to true, the following SQL statements are executed on MySQL to fix data inconsistencies:
UPDATE object_notification_temp SET only_members = 0 WHERE only_members IS NULL;
UPDATE object_notification_temp SET deleted = 0 WHERE deleted IS NULL;
UPDATE task_type SET service_desk = 0 WHERE service_desk IS NULL;
UPDATE task_type SET template = 0 WHERE template IS NULL;
If migrator.remove_000_characters is set to true, the following SQL statements are executed on MySQL to remove NUL (0x00) characters:
UPDATE task_field_history SET new_value = replace(new_value, 0x00, "") WHERE new_value LIKE CONCAT("%", CHAR(0x00 using utf8), "%");
UPDATE task_field_history SET old_value = replace(old_value, 0x00, "") WHERE old_value LIKE CONCAT("%", CHAR(0x00 using utf8), "%");
UPDATE task_field_value SET field_value = replace(field_value, 0x00, "") WHERE field_value LIKE CONCAT("%", CHAR(0x00 using utf8), "%");