Stage 2 – Configuration


2.1 BEGIN

Choose/create a Workflow and configure it:

The configuration process will start with the Configure Directories screen.


2.2 DIRECTORIES CONFIGURATION

Loading existing directories on source
To load all source directories, use the “Load” button in the upper-right corner of the screen.

Selecting directories
Choose all needed directories (the Directory for DATAPUMP is required) and define destinations according to the file system on your destination server. In the case of RAC (cluster), the directory should be shared among all nodes.
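For orientation, this is what an Oracle directory object on the destination looks like; the directory name and path below are assumed example values for illustration only, not objects created by the tool:

  -- Illustrative only: an Oracle directory object mapping a logical name
  -- to a destination file system path (name and path are assumed examples).
  CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/app/oracle/admin/DEST/dpdump';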

Switch to Users Configuration Screen.


2.3 USERS CONFIGURATION

Loading existing users on source
To load all non-system users on the source, use the “Load” button in the upper-right corner of the screen.

Selecting users to migrate
Choose users to migrate and, if you need to migrate data from one specific user to another, define a destination user according to your needs.

Switch to Tablespaces Configuration Screen.


2.4 TABLESPACES CONFIGURATION

Loading existing Tablespaces on source
To load all Tablespaces on the source, use the “Load” button in the upper-right corner of the screen.

Configuring Tablespaces
The following columns are available:

Basic View:
Selected – checkbox to select required tablespaces. Only selected tablespaces are handled.
Name – tablespace name. Cannot be changed.

Advanced View:
Block Size – datafile block size. Can be changed to any supported block size.
Contents – Permanent or Temporary.
Bigfile – whether the tablespace is defined using the bigfile feature, meaning only one datafile exists and it is unlimited in size.
File name – location of the datafile in the destination file system. If db_create_file_dest is already set, leave this field empty.
Files quantity – how many datafiles to create. If Bigfile is set, this field is ignored.

All datafiles are created using the same size clause: “SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED”.
The clause is defined in the ab_params table, parameter DEFAULT_DATAFILE_SIZE_CLAUSE.
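As an illustration of how these settings come together, here is a hedged sketch of the kind of statement a tablespace configuration might translate into; the tablespace name, paths, and block size are assumed example values, not DDL generated by the tool:

  -- Illustrative only: a smallfile tablespace with 2 datafiles, 8K block size,
  -- using the default size clause from ab_params (names and paths are assumed).
  CREATE TABLESPACE users_data
    DATAFILE '/u02/oradata/DEST/users_data01.dbf' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED,
             '/u02/oradata/DEST/users_data02.dbf' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED
    BLOCKSIZE 8192;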

To update many rows, use the “Update all” button; it updates all the filtered rows.
To update all file names, use an asterisk (*) in the “Replace this” field.

Switch to Tables Configuration Screen.


2.5 TABLES CONFIGURATION

Loading existing tables on source
To load all non-system tables on the source, use the “Load” button in the upper-right corner of the screen.

Configuring tables
The following columns are available:

Basic View:
Selected – checkbox to select required tables. Only selected tables are handled.
Table – table name in the source database. Cannot be changed. See Destination table name below.

Advanced View:
Destination Owner – new owner for this table. Can be changed. If empty, the Destination username from the user configuration is inherited. If that is also empty, the Destination owner is the same as the Source owner.
Destination table name – new name for this table. Can be changed. If empty, the destination table name is the same as the source table name.
Transform columns – indicates whether the columns on the destination differ from the source; they can be changed.
Transform partitions – indicates whether the partitions on the destination differ from the source; they can be changed.
Verify empty segment – whether to check that the destination table is empty. Can be changed.
Compression – sets the compression level. Can be changed to any supported Oracle compression algorithm (BASIC, OLTP, QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH). Note: the last four compression levels are available only on Exadata.
Read source in parallel – sets the parallel degree for reading a single table from the source.
Destination tablespace – new tablespace for the table. Can be changed. The tablespace must already exist when you run the migration.
Where clause – condition for the table. Can be changed. It is used as a filter so that only a subset of the table is copied (see the sketch after this list).
Is masked – sets masking (scrambling) for fields of the table. Can be changed. Any masking algorithm can be chosen for every field. When the data of the table is copied, it is automatically masked accordingly.

Note: since the data is masked on the fly, no sensitive data is present in the destination database at any stage.
Go to partitions – navigation button that opens the partitions configuration of the table. It is enabled only if the table is partitioned. For every partition, a similar navigation button leads to the sub-partitions of the chosen partition.
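For illustration, here is a hedged, conceptual sketch of how the Where clause and Is masked settings affect the rows that reach the destination. The owners, table, columns, and the masking function name are assumed examples; Accelario’s actual copy mechanism is not shown.

  -- Conceptual only: a Where clause of  created >= DATE '2023-01-01'
  -- plus masking on the email column behaves as if the copy ran:
  INSERT INTO dest_user.customers (id, email, created)
  SELECT id,
         mask_email(email),                    -- hypothetical masking function
         created
  FROM   src_user.customers
  WHERE  created >= DATE '2023-01-01';         -- the configured Where clause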

To update many rows, use the “Update all” button. It updates all the filtered rows.
The dialog box of “Update all” contains several fields. A field is updated only if it contains a value. Use a space ” ” to reset values to their original state.

Masking configuration

The next stage is Indexes Configuration Screen.


2.6 INDEXES CONFIGURATION

Loading existing indexes on source
To load all non-system indexes on the source, use the “Load” button in the upper-right corner of the screen.

Configuring indexes
The following columns are available:

Basic View:
Selected – checkbox to select required indexes. Only selected indexes are handled.
Table – table name in the source database. Cannot be changed.
Index Owner – owner of the index. Can be changed. If changed, it becomes the destination index owner.
Index – index name. Cannot be changed.

Advanced View:
Size – size of the source index. Cannot be changed.
Type – Index or Constraint. Cannot be changed.

Note: constraints here are Primary key or Unique key only. Other constraints (such as check or foreign key) are handled in other stages.
Unique – unique or not unique. Cannot be changed.
Parallel – sets the parallel degree to build the index with. Can be changed. The value 0 means that the parallelism is set automatically using the parameters INDEX_PARALLEL_DEGREE and INDEX_PARALLEL_MIN_IDX_SIZE (see the sketch below).
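For illustration only, the parallel degree corresponds to the PARALLEL clause of the index build; the owner, index, table, and column names below are assumed examples, not DDL generated by the tool:

  -- Illustrative only: an index built with parallel degree 8
  -- (names are assumed; the tool generates its own DDL).
  CREATE INDEX app.orders_customer_ix ON app.orders (customer_id) PARALLEL 8;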

Switch to Options Configuration Screen.


2.7 OPTIONS CONFIGURATION

This is the screen you use to manage workflow properties related to the average load level on your servers:

  1. The level of parallelism on the source machine (can also be changed “on the fly” while the workflow is running).
  2. The level of parallelism on the target machine (can also be changed “on the fly” while the workflow is running).
  3. Parameters for computing statistics after the data has been copied to the target.
  4. A parameter defining data consistency (if your source database is open for changes, Accelario can take all data consistent to the time the workflow started or to any selected time; see the sketch after this list).
  5. Support for rerunning the workflow in case of infrastructure problems during the migration process.
  6. Mail notification.
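As background for item 4, Oracle can read data as of a past point in time with a flashback query. The example below only illustrates the consistency concept (the table name and timestamp are assumed) and does not claim to be how Accelario implements it:

  -- Illustrative only: reading a table consistent to a chosen point in time.
  SELECT COUNT(*)
  FROM   app.orders
  AS OF  TIMESTAMP TO_TIMESTAMP('2024-01-15 08:00:00', 'YYYY-MM-DD HH24:MI:SS');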

Switch to Run type Configuration Screen.


2.8 RUN TYPE CONFIGURATION

This is the screen you use to manage the workflow’s refresh properties and to schedule the workflow runtime.
It applies when the Workflow is executed more than once (for example, a recurring process to refresh a test DB).
In refresh mode, every run deletes the data from the destination and copies it again from the source.

Here are the properties of “refresh”:

  • Delete data – how the data should be deleted from the destination.
      • Drop users – the whole schema is dropped. This is the most commonly used setting.
      • Drop tables – only tables are dropped, while all other objects (procedures, views, etc.) are kept.
  • Tables – how the configuration of tables is changed in every run.
      • No change – the configuration of tables is static. No tables are added, deleted, or changed.
      • Overwrite – the whole configuration of tables is deleted and repopulated from the source. All the tables are filled with defaults, and all special settings are reset.
      • Synchronize – new tables are added and dropped tables are deleted. Special settings are kept. This is the most commonly used option.
  • Indexes – how the configuration of indexes is changed in every run. The options are the same as in “Tables” above.
  • Tablespaces – how the configuration of tablespaces is changed in every run.
      • No change – the configuration is static. Only tablespaces that are currently defined are created in the destination database.
      • Synchronize by tables – new tablespaces that are in use by selected tables are added automatically.
  • Increase tablespace size – whether to increase datafiles automatically. If selected, the following settings are available (see the sketch after this list):
      • Maximum used space – when to increase the datafile size. The value is the percentage of used space after which the automatic increase happens.
      • Increase by – how much space to add when an automatic increase happens. The value is the percentage added to the current tablespace size. The special value -1 means increasing the tablespace up to the Maximum used space defined above.
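As an illustration of the automatic increase (the numbers and file name are assumed examples, and the statement is only conceptual, not the tool’s exact command): with Maximum used space = 90 and Increase by = 20, a 100G tablespace that reaches 90% used would be grown by roughly 20G, similar in effect to:

  -- Conceptual only: growing an existing datafile by about 20% of the
  -- tablespace size (file name and sizes are assumed examples).
  ALTER DATABASE DATAFILE '/u02/oradata/DEST/users_data01.dbf' RESIZE 120G;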

If you need to schedule the Workflow runtime, do it in the ‘Schedule’ part of this screen.
Here are the properties of the schedule:

  • When – sets the first run time.
  • Interval – sets how the next run time is computed. The syntax is the same as the repeat_interval parameter of the dbms_scheduler package (see the sketch after this list).
  • Examples: run every day: “FREQ=DAILY;INTERVAL=1”; run every Saturday: “FREQ=DAILY;BYDAY=SAT”; run on the first day of every month: “FREQ=DAILY;BYMONTHDAY=1”.
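For reference, the same interval strings are accepted by dbms_scheduler directly. The job below is purely illustrative (the job name and action are assumed placeholders, not a job created by Accelario):

  -- Illustrative only: a dbms_scheduler job using the same repeat_interval syntax.
  BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'REFRESH_TEST_DB_DEMO',   -- assumed example name
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN NULL; END;',       -- placeholder action
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY;BYDAY=SAT',   -- run every Saturday
      enabled         => TRUE);
  END;
  /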

Switch to the Complete Configuration Screen to make sure that all configuration steps are completed:


2.9 COMPLETE CONFIGURATION

From this screen, you may Run the Workflow.