Migration

During migration, I am getting an error caused by an empty line. What should I do?

To migrate data that contains empty lines, set the property ignoreDeletedRecords to true. If the error still persists with this property set, check that the latest version of the data-migrator tool is being used.
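A minimal sketch of the relevant ini fragment, assuming a CSV2DB loading section like the one shown later in this document:

```ini
[CSV2DB - Loading1]
database=test
input.folder=data
# Skip empty lines (deleted records) instead of failing on them
ignoreDeletedRecords=true
```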

How can I execute queries after data insertion?

It is possible to execute queries after data insertion by adding a section to the ini file. Data Migrator reads the sections and performs the corresponding steps sequentially. Therefore, create a new section after the data insertion section to run queries once the data has been inserted.

[CSV2DB - Loading1]
csvQuote="
csvSeparator=;
csvOneLiner=true
database=test
encoding=UTF-8
input.folder=data
multithread=5
temp.folder=dataTemp

[ExecuteSql - Execute extra queries]
database=test
input.folder=database/extra
multithread=1
plainExecution=true
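For illustration, a file placed in the database/extra folder could contain plain SQL statements such as the following (the table and index names here are hypothetical, not part of the tool):

```sql
-- Hypothetical post-load statements; with plainExecution=true they run as-is
CREATE INDEX idx_customer_name ON customer (last_name);
UPDATE migration_status SET loaded = TRUE WHERE step = 'Loading1';
```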

During migration for DB2LUW, I am getting a deadlock error. What should I do?

For a DB2LUW database, creating multiple indexes on the same table concurrently can cause a deadlock. Set multithread=1 if you encounter deadlocks.
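A sketch of the corresponding change, assuming a loading section like the one shown above:

```ini
[CSV2DB - Loading1]
database=test
input.folder=data
# Serialize index creation on DB2 LUW to avoid deadlocks
multithread=1
```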

How does Data migrator work with IMS hierarchy?

Data Migrator does not work with the hierarchy directly. The Blu Age Transformation Engines involved in Transformation Center parse the hierarchy in the IMS DBD and convert it into relations in the target PostgreSQL database, which the dedicated JDHB component in the AWS Blu Age Runtime is able to interpret.

During migration, I am experiencing slow performance. What should I check?

If you are facing performance issues during migration, check the following properties, depending on your configuration.

Data loading mode properties

  • mssqlInsertMode: If you are using an MSSQL database and this value is set to INSERT_INTO, individual INSERT statements are used; this mode is only suitable for a Proof of Concept (POC). Switch to BULK_INSERT for better performance. For details on the available modes, refer to Target Database Support.
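A hedged example of switching the insert mode; the section and other property names are assumptions based on the examples earlier in this document:

```ini
[CSV2DB - Loading1]
database=test
# BULK_INSERT loads rows in bulk; INSERT_INTO (row-by-row) is POC-only
mssqlInsertMode=BULK_INSERT
```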

Debug properties

  • For all databases, check the ebcdicCsvDumpFolder property. When it is set, an additional CSV file is written to disk for every EBCDIC record, with a flush on each write, which can significantly impact performance. Leave it unset outside of debugging.
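For example, in a production run the property should simply be absent; the commented line below shows the debug setting to remove (the folder name is a placeholder):

```ini
[CSV2DB - Loading1]
database=test
# Debug only: dumping a CSV per EBCDIC record with per-write flushes is slow
# ebcdicCsvDumpFolder=debugDump
```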