
OGG-01031 Output file is not in any allowed output directories

There are many new features and enhancements introduced in Oracle GoldenGate 12.2. Yes, the current version of OGG is 19c, but I still thought of writing this article as this is an important one. In this article, I am going to explain the parameter ALLOWOUTPUTDIR with an example.

The Extract Pump (DataPump) process is abending with the below error,

2019-06-11 21:48:42 ERROR OGG-01031 There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error. (Reply received is Output file /u02/ogg_data/dirdat/et000000 is not in any allowed output directories.).

2019-06-11 21:48:42 ERROR OGG-01668 PROCESS ABENDING.

In the above error message, do not look only at the ERROR code, which is a default or generic one. Check the message mentioned inside the parentheses.

Reply received is Output file /u02/ogg_data/dirdat/et000000 is not in any allowed output directories.

From the above message we can understand that OGG is not allowed to open or create a trail file in the above-mentioned location, and hence the process is failing with the error above.

This is a new security enhancement introduced in OGG 12.2.

ALLOWOUTPUTDIR

This new parameter was introduced in Oracle GoldenGate 12.2. It should be set in the ./GLOBALS parameter file on the target. Failing to use it will cause the pump process at the source to fail with the error below,

ERROR OGG-01031 There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error. (Reply received is Output file is not in any allowed output directories.).

To overcome this issue, please follow the below action plan,

1. Stop the Manager and Replicat processes on the target.

2. Edit the ./GLOBALS parameter file with the below command,

GGSCI> EDIT PARAMS ./GLOBALS

3. Add the parameter ALLOWOUTPUTDIR with the path to the output trail file directory and save the file (a sample entry is shown after these steps).

Syntax: ALLOWOUTPUTDIR [relative_dir_name | absolute_dir_name]

4. Exit from the GGSCI prompt.

5. Log in to GGSCI again.

6. Perform a DBLOGIN.

7. Start the Manager and Replicat processes.
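For example, a minimal ./GLOBALS entry for this scenario might look like the following; the path is taken from the error above, so substitute your own trail directory:

ALLOWOUTPUTDIR /u02/ogg_data/dirdat

The parameter can be repeated on separate lines if remote processes need to write to more than one directory.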

Note: Whenever we make any changes to the ./GLOBALS file, we need to exit and log in to GGSCI again. Without this, the changes made to ./GLOBALS will not take effect.

On Windows, this parameter is case sensitive, and the drive letter should match exactly as is. For example:

If your drive letter is in lower case but you mentioned it as below,

ALLOWOUTPUTDIR D:\ogg_12c

You will again hit the error OGG-01031.

So, it should be as below,

ALLOWOUTPUTDIR d:\ogg_12c

Hope you enjoyed the post. Cheers 🙂

Oracle GoldenGate Logdump

We cannot open a redo or archive log directly in an Oracle Database; we need a utility called LogMiner for that. Similarly, in Oracle GoldenGate, we cannot open a trail file directly to check the data written to it.

LOGDUMP is the tool or utility which is used to open a Trail File in Oracle GoldenGate.

So, what can we see in the trail file? What information is there in the trail file?

In the Logdump output, we can see fields such as IOType and TransInd. What are they?

IOType

Each operation (INSERT, UPDATE, DELETE, DDL, and so on) has its own ID. For example, an INSERT is recorded with IOType 5, a DELETE with IOType 3, and a COMMIT with IOType 2. There are many more IOTypes.

TransInd

It is nothing but the Transaction Indicator, which shows a record's position within its transaction. There are four kinds of transaction indicators: 00 (first record of a transaction), 01 (middle of a transaction), 02 (last record of a transaction), and 03 (the only record in the transaction).

Let’s continue to see what other information can be seen in the Logdump.

The trail file information we have seen above applies to releases prior to Oracle GoldenGate 12.2. From 12.2, more information is captured by the Extract process and written to the trail file. This enhancement of the trail file is called "Self-Describing Trail Files". We will discuss this feature in an upcoming post.

Some of the basic commands and their uses are below,
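A few commonly used Logdump commands and their purposes are sketched below; the trail file path is illustrative, so substitute your own:

OPEN dirdat/et000000   - open a trail file
GHDR ON                - display the record header with each record
DETAIL ON              - display column details
DETAIL DATA            - display column data in hex and ASCII
USERTOKEN ON           - display user tokens stored in the record
POS 0                  - position at the beginning of the trail
NEXT                   - move to the next record (shortcut: N)
COUNT                  - count the records in the trail file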

Hope you enjoyed the post. Keep supporting. Cheers 🙂

Downgrade the Extract (or) Capture from Integrated mode to Classic mode

There are two types of Extract / Capture processes in Oracle GoldenGate.

1. Classic Extract

2. Integrated Extract

Classic Extract is the traditional Extract process; it is single threaded. If you want to know more about the Integrated Extract process, refer to my earlier post on it.

In my earlier post, we saw how to upgrade the Extract/Capture process from Classic to Integrated. Now, let's see how to downgrade the Integrated Capture to Classic.

Sometimes we face issues in the Integrated mode of capture, and we can downgrade it to Classic mode to keep moving. But this will not help in all cases. As I explained in my previous article, before downgrading the Integrated Extract to Classic mode, we need to check whether the existing environment is supported by Classic mode. If you have a RAC environment with XA transactions, or if your database has compression enabled, then it is not possible to use the Classic Extract process, and in that situation it is not advised to downgrade the Extract from Integrated to Classic.

But in some situations this really helps us move forward and avoid latency in replication. Below are the high-level steps to downgrade the Extract process from Integrated to Classic.

1. Stop the Extract process

GGSCI> STOP EXTRACT <group>

2. Issue the following command to determine whether the downgrade command can be issued. Transactions that started before the downgrade command is issued must be written to the trail before you can proceed. You may have to issue this command more than once until it returns a message stating that Extract can be downgraded.

GGSCI> INFO EXTRACT <group> DOWNGRADE

3. Downgrade the Extract process.

GGSCI> ALTER EXTRACT <group>, DOWNGRADE INTEGRATED TRANLOG

4. Unregister the Extract process from the Database.

GGSCI> UNREGISTER EXTRACT <group> DATABASE

5. Start the Extract process.

GGSCI> START EXTRACT <group>

Let me explain this with an example so that it is easier to understand.

I have an Extract process EXT1 which is in Integrated mode. As I explained in my previous article, you can easily find out whether an Extract process is Classic or Integrated from the INFO command output.

In the INFO command output, check the "Log Read Checkpoint" value:

Log Read Checkpoint – Oracle Redo Logs – Classic Extract

Log Read Checkpoint – Oracle Integrated Redo Logs – Integrated Extract

In the output of the INFO command here, you can see the "Log Read Checkpoint" as "Oracle Integrated Redo Logs", which means the Extract EXT1 is an Integrated Extract process.

1. Always perform a DBLOGIN whenever you perform any activity in Oracle GoldenGate.
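A DBLOGIN is as simple as the following (the credentials are illustrative):

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin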

2. Stop the Extract process.

3. Issue the following command to determine whether the downgrade command can be issued. Transactions that started before the downgrade command is issued must be written to the trail before you can proceed. You may have to issue this command more than once until it returns a message stating that Extract can be downgraded.

4. The Extract process is now ready to be downgraded. But before that, we need to edit the parameter file and remove, comment out, or add parameters as necessary for the Classic Extract process. For example, in my parameter file, I have the parameter below,

TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 500)

which is not supported by the Classic Extract process. So, make the necessary changes to the parameter file before downgrading and starting the Extract process.

5. Downgrade the Extract process.

If you see the output of the INFO command, the value of the "Log Read Checkpoint" has now changed to "Oracle Redo Logs". This means the Extract process has been downgraded from Integrated to Classic.

But you can still see that the LogMiner capture, which was created when you registered the Extract process EXT1 with the database, exists. This is because we haven't unregistered the Extract process from the database.
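You can verify this by querying the DBA_CAPTURE view; for an Extract group EXT1 the capture name typically follows the OGG$CAP_<group> naming pattern:

SQL> SELECT capture_name, status FROM dba_capture;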

6. Unregister the Extract process from the database.

Now check the DBA_CAPTURE view again. It will return no rows.

7. Start the Extract process.

See you again. Cheers 🙂

Upgrade Extract from Classic mode to Integrated mode

Sometimes we may face a situation wherein we need to upgrade our Extract process from Classic mode to Integrated mode.

This situation may arise due to the below reasons,

1. When data types in use are not supported by Classic Capture.
2. When you are using Classic Capture in a RAC environment. This could be the main reason; I would always recommend going with Integrated mode in a RAC environment.
3. When you are not satisfied with the performance of Classic Capture, as it is single threaded.
4. When you face an error or bug in Classic mode which you may not face in Integrated mode.

Likewise, we could keep adding reasons to this list.

In this article, we are going to see how to upgrade an Extract / Capture process from Classic mode to Integrated mode. Below are the steps,

1. Stop the Extract process

GGSCI> STOP EXTRACT <group>

2. Register the Extract process

GGSCI> REGISTER EXTRACT <group> DATABASE

3. Issue the following command to determine whether the upgrade command can be issued. Transactions that started before the registration command must be written to the trail before you can proceed with the upgrade. You may have to issue this command more than once until it returns a message stating that Extract can be upgraded.

GGSCI> INFO EXTRACT <group> UPGRADE

4. Once confirmed, upgrade the Extract process.

GGSCI> ALTER EXTRACT <group>, UPGRADE INTEGRATED TRANLOG

5. Before starting the Extract process, edit the parameter file accordingly. You might have used the below parameter in the Extract process parameter file which is not supported by Integrated Extract,

TRANLOGOPTIONS DBLOGREADER

This will make the Integrated Extract process abend when you start it. Likewise, there are many parameters that are not supported in Integrated mode and are used only in Classic mode. Check and remove those parameters before starting the Integrated Extract process after upgrading it.

So, let me explain with an example so that it is easier to understand.

I have an Extract/Capture process EXT1 which is running in Classic mode.

From the INFO output, you can see the "Log Read Checkpoint" as "Oracle Redo Logs". This means that the Extract process is a Classic Extract.

1. When doing any activity in Oracle GoldenGate, it is always recommended to perform a DBLOGIN. So, perform the DBLOGIN,

2. Stop the Extract process. Check if the extract EXT1 has stopped.

3. Register the Extract process with the database. You might be aware of this command; REGISTER creates a LogMiner capture in the database.
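For the example group EXT1, the command is along these lines:

GGSCI> REGISTER EXTRACT EXT1 DATABASE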

Once you register the Extract process with the database, you can query the DBA_CAPTURE view to check whether a LogMiner server has been created.
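For example; the output below is illustrative, with the capture name following the usual OGG$CAP_<group> pattern:

SQL> SELECT capture_name, status FROM dba_capture;

CAPTURE_NAME      STATUS
----------------- --------
OGG$CAP_EXT1      DISABLED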

The status is "DISABLED" as we have not started it yet.

But when issuing the INFO command, you will still see the Extract process in Classic mode. This is because we haven't upgraded the Extract process yet; we have only registered it with the database.

4. Issue the following command to determine whether the upgrade command can be issued. Transactions that started before the registration command must be written to the trail before you can proceed with the upgrade. You may have to issue this command more than once until it returns a message stating that Extract can be upgraded.

5. START and then STOP the Extract process so that it moves past the SCN with which it was registered. It has to reach the SCN 1077042 mentioned in the above message.

6. It has moved past the SCN. Now, stop the Extract process.

7. Check if the Extract process is ready for the upgrade by issuing the below command,

8. Now the Extract process EXT1 is ready for the upgrade. Issue the below command to upgrade the capture from Classic mode to Integrated mode.

9. Before starting the Extract process, edit the parameter file accordingly. You might have used the below parameter in the Extract process parameter file which is not supported by Integrated Extract,

TRANLOGOPTIONS DBLOGREADER

This will make the Integrated Extract process abend when you start it. Likewise, there are many parameters that are not supported in Integrated mode and are used only in Classic mode. Check and remove those parameters before starting the Integrated Extract process after upgrading it.

In my case, I haven't used any such parameters, so I just added one of the parameters used by the Integrated mode of the Extract process. Below is my param file,
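A minimal sketch of such a parameter file; the trail path, schema, and credentials are illustrative, not the exact file used here:

extract EXT1
userid ggadmin, password ggadmin
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 500)
exttrail ./dirdat/et
table scott.*;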

10. Now, go ahead and start the Extract process.

11. Check whether the Extract process has been upgraded from Classic to Integrated. From the output, you can clearly see the "Log Read Checkpoint" has changed from "Oracle Redo Logs" to "Oracle Integrated Redo Logs". Now the Extract process is running in INTEGRATED mode.

From the report file, you can see the below lines,

From the DBA_CAPTURE view, you can now see the capture status as "ENABLED".

Hope you enjoyed the post. Cheers 🙂

Oracle GoldenGate: Time recorded in the Jagent Log File is Not Correct


After installing the Oracle GoldenGate Management Pack and the JAgent 12.2.1 on the OGG target instances, a curious behavior came up. In the ogg_agent.log files, the time is not the local time: the time shown in the log file is 8 hours ahead, although the system time is correct. The latest data in "ogg_agent.log" was as below,

[2016-12-06T08:45:29.442+00:00] [JAGENT] [NOTIFICATION] [OGGMON-20051] [com.goldengate.monitor.jagent.mbeans.impl.Agent] [tid: RMI TCP Connection(56)-146.222.96.163] [ecid: 0000LZEiZuLFKAwtOCq2Td1OHHUM00000A,0] DerivedMBean is getting build... 

It later turned out that the cause of the issue was a missing timezone setting in the Java command used to start the JAgent.

Please follow the below steps to resolve the issue,

1. Edit the JAgent parameter file.

2. Add -Duser.timezone=Hongkong to the COMMAND line.

For example, you will have parameters like the below in the JAgent parameter file,

COMMAND java -Dconfig.dir=/u05/app/goldengate/ggagent/agent1/cfg -Djava.util.logging.config.class=oracle.core.ojdl.logging.LoggingConfiguration
-Doracle.core.ojdl.logging.config.file=/u05/app/goldengate/ggagent/agent1/cfg/logging-config.xml -Doracle.core.ojdl.logging.componentId=JAGENT -jar
-Xms512m -Xmx1024m /u05/app/goldengate/ogg_agent1/oggmon/ogg_agent/dirjar/jagent.jar
Program is 'java', arguments are '-Dconfig.dir=/u05/app/goldengate/ggagent/agent1/cfg
-Djava.util.logging.config.class=oracle.core.ojdl.logging.LoggingConfiguration -Doracle.core.ojdl.logging.config.file=/u05/app/goldengate/ggagent/agent1
/cfg/logging-config.xml -Doracle.core.ojdl.logging.componentId=JAGENT -jar -Xms512m -Xmx1024m /u05/app/goldengate/ogg_agent1/oggmon/ogg_agent
/dirjar/jagent.jar'


So you need to add -Duser.timezone=Hongkong in the first line, right after COMMAND java.

Note: Use the timezone value appropriate to your own environment.
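After the change, the first line of the example above would begin as follows, with the rest of the line unchanged:

COMMAND java -Duser.timezone=Hongkong -Dconfig.dir=/u05/app/goldengate/ggagent/agent1/cfg ...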

3. Restart the JAgent process from GGSCI.

GGSCI> stop JAgent

GGSCI> start JAgent

Remote Capture – OGG-00446 Could Not Find Archived Log


Remote Extract process is failing with the below error,

OGG-00446 Oracle GoldenGate Capture for Oracle, ext1.prm: Could not find archived log

The source database, the target database, and Oracle GoldenGate are each installed on a separate server, so all three run on their own individual servers.

The database is running in archive log mode, and Oracle Client 11.2.0.1.0 is installed on the server where OGG is installed.

When using the Remote capture, we need to use the below parameter in the Extract parameter file.

TRANLOGOPTIONS DBLOGREADER

This parameter is not supported with Oracle Client 11.2.0.1.0, which is the client version installed here.

If one wants to access archive logs from a different Oracle installation using Classic Extract, then the TRANLOGOPTIONS DBLOGREADER parameter is needed. This does not imply that ASM is in use at the remote installation; it is just a means for OGG to read the remote archive logs.

The client version 11.2.0.1.0 is unable to use TRANLOGOPTIONS DBLOGREADER, which is what is needed.

Install Oracle Client 11.2.0.4 and use the parameter TRANLOGOPTIONS DBLOGREADER in the Extract process parameter file. Restart the Extract process.
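A minimal sketch of where the parameter sits in the Extract parameter file; the group name, credentials, trail path, and schema are illustrative:

EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
TRANLOGOPTIONS DBLOGREADER
EXTTRAIL ./dirdat/et
TABLE scott.*;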

OGG-15051 Java or JNI exception: hadoop.hbase.HTableDescriptor.addFamily in Cloudera


The Oracle GoldenGate Replicat process for Cloudera HBase abends with the below error.

                                         Oracle GoldenGate Delivery
                         Version 12.2.0.1.160823 OGGCORE_OGGADP.12.2.0.1.0_PLATFORMS_170324.1532
                             Linux, x64, 64bit (optimized), Generic on Mar 24 2017 17:11:07

2017-10-12 05:01:44 INFO OGG-06511 Using following columns in default
map by name: ROW_ID, LAST_UPD, PAR_ROW_ID, BU_ID, NAME, CUST_STAT_CD,INTEGRATION_ID, OU_NUM,OU_TYPE_CD,
PAR_OU_ID, PR_ADDR_ID, ACCNT_TYPE_CD, PROCESS_TIMESTAMP,X_BILL_TYPE_BT, STAT_REASON_CD.
2017-10-12 05:01:44 INFO OGG-06510 Using the following key columns for
target table SCHEMA.TABLE: ROW_ID.
2017-10-12 05:01:59 INFO OGG-01021 Command received from GGSCI: STATS.
Oct 12, 2017 5:02:56 AM org.apache.hadoop.conf.Configuration
warnOnceIfDeprecated
INFO: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)Lorg/apache/hadoop/hbase/HTableDescriptor;
at oracle.goldengate.handler.hbase.operations.HBase1TableWriter.createTable(HBase1TableWriter.java:254)
at oracle.goldengate.handler.hbase.operations.HBase1TableWriter.open(HBase1TableWriter.java:96)
at oracle.goldengate.handler.hbase.operations.HBase1Writer.getTableWriter(HBase1Writer.java:167)
at oracle.goldengate.handler.hbase.operations.HBase1Writer.put(HBase1Writer.java:110)
at oracle.goldengate.handler.hbase.operations.HBaseInsertDBOperation.processOp(HBaseInsertDBOperation.java:78)
at oracle.goldengate.handler.hbase.HBaseHandler.processOp(HBaseHandler.java:468)
at oracle.goldengate.handler.hbase.HBaseHandler.transactionCommit(HBaseHandler.java:411)
at oracle.goldengate.datasource.DsEventManager$2.send(DsEventManager.java:422)
at oracle.goldengate.datasource.DsEventManager$EventDispatcher.distributeEvent(DsEventManager.java:231)
at oracle.goldengate.datasource.DsEventManager.fireTransactionCommit(DsEventManager.java:430)
at oracle.goldengate.datasource.AbstractDataSource.fireTransactionCommit(AbstractDataSource.java:506)
at oracle.goldengate.datasource.UserExitDataSource.commitActiveTransaction(UserExitDataSource.java:1546)
at oracle.goldengate.datasource.UserExitDataSource.commitTx(UserExitDataSource.java:1469)

2017-10-12 05:02:56 ERROR OGG-15051 Java or JNI exception:
java.lang.NoSuchMethodError:
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)Lorg/apache/hadoop/hbase/HTableDescriptor;.

The root cause is a compatibility issue. I came across a bug filed for it:

Bug 25591766 – HBase 1.2: java.lang.NoSuchMethodError: ….addFamily

The bug mentions setting the below parameter, but this parameter was already set and the issue still occurred.

gg.handler.<handler_name>.hBase98Compatible=true

This is a well-known compatibility problem with Cloudera HBase. The root cause is that the Cloudera HBase client libraries broke binary compatibility with the Apache HBase libraries. This applies if you are on OGGBD 12.3.0.1.x or earlier.

What the problem is and how to work around it, is explained in the documentation:

https://docs.oracle.com/goldengate/bd123010/gg-bd/GADBD/GUID-A6C0DEC9-480F-4782-BD2A-54FEDDE2FDD9.htm#GADBDGUID-8873BA41-35B8-4307-B406-EA9835DAA477

Go to section 1.2.2.2

The OGGBD 12.3.1.1.0 release provided improved support by resolving the offending HBase method using Java reflection. Therefore, if you are ready to move to OGGBD 12.3.1.1.0 or later, this issue is resolved automatically.

See the documentation:

https://docs.oracle.com/goldengate/bd123110/gg-bd/GADBD/using-hbase-handler.htm#GADBD-GUID-1A9BA580-628B-48BD-9DC0-C3DF9722E0FB

Go to section 5.6.5

Cheers 🙂

OGG 12.2 – Extract Abend – OGG-00458 Cannot Find Metadata Property


Extract Process Abends with the below error,

2016-07-05 00:03:34 ERROR OGG-00458 Cannot find metadata property G41. DDL metadata [,G1='alter table "schema"."table" shrink space CHECK ',,C6='GGS.GGS_MARKER',,C5='6399704',,C2='4377220081',,S='99751',,W='schema',,X='table',,Y='TABLE',,Z='ALTER',,A1='104',,A1='104',,..........

The cause of the issue: by default, the "auto space advisor" is enabled in the Oracle Database. This advisor shrinks the space of large tables, and the resulting "shrink space" DDL causes the Oracle GoldenGate Extract process to abend with the error shown above.

This is a Bug,

Bug 23154554 - Extract abends: OGG-00458 Cannot find metadata property G23. DDL metadata

But there is a workaround: you can disable the "auto space advisor" feature, which is what triggers the shrink space operations on large tables.

Steps to disable the “auto space advisor”

1. Check the status of the “auto space advisor”.

set lines 180 pages 1000
col client_name for a40
col attributes for a60
SELECT client_name, status, attributes, service_name FROM dba_autotask_client;

2. Execute the below to disable the “auto space advisor”.

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

3. Check if the “auto space advisor” is disabled.

set lines 180 pages 1000
col client_name for a40
col attributes for a60
SELECT client_name, status, attributes, service_name FROM dba_autotask_client;

Restart the Extract process after disabling the “auto space advisor”.

Note: Please check with your Oracle Core DBA before disabling the "auto space advisor".

Oracle GoldenGate Integrated Replicat slow and does not use BATCHSQL


BATCHSQL is specified, yet in the report file we do not see any BATCHSQL statistics:

BATCHSQL statistics:
Batch operations: 0
Batches: 0
Batches executed: 0
Queues: 0
Batches in error: 0
Normal mode operations: 11055655
Immediate flush operations: 0
PK collisions: 0
UK collisions: 0
FK collisions: 0
Thread batch groups: 0
Commits: 0
Rollbacks: 0
Queue flush calls: 0
Ops per batch: N/A
Ops per batch executed: N/A
Ops per queue: N/A
Parallel batch rate: N/A

The Replicat is also very slow: the lag is 7 minutes when it should be at most 30 seconds.

From the IR healthcheck report we can see that the BATCHSQL is enabled:

BATCHSQL = Y

However, from the AWR report we can see that the GoldenGate operations are processing roughly one row per execution:

Executions   Rows Processed   Rows per Exec   Elapsed Time (s)   SQL Module   SQL Text
==========   ==============   =============   ================   ==========   ================================
565,577      670,971          0.19            1,111.16           GoldenGate   INSERT /*+ restrict_all_ref_c...
365,150      361,452          0.99            586.7872.7         GoldenGate   INSERT /*+ restrict_all_ref_c...
201,967      215,429          0.95            482.7754.6         GoldenGate   INSERT /*+ restrict_all_ref_c...
201,195      213,216          0.87            438.8554.5         GoldenGate   INSERT /*+ restrict_all_ref_c...

This is because BATCHSQL is directly related to the EAGER_SIZE when used with the Integrated Replicat, which is very important to note here. We should not assume that BATCHSQL will always improve performance; sometimes it degrades performance as well. When used with the Integrated Replicat process, we need to be even more careful.

For Integrated Replicat, the OGG BATCHSQL statistics are never maintained. It will always show as normal mode in the OGG report file whether BATCHSQL is enabled or not. The only way to determine if BATCHSQL is in use on the IR inbound server is to use AWR and look at the AWR SQL Executions report. If the Rows per Exec > 1, then BATCHSQL is in operation.

You can tell whether BATCHSQL is configured (for the inbound server) by looking at the Replicat parameters section in the healthcheck summary:

Integrated Replicat key parameters (Details):

APPLY_NAME:               OGG$ROACDW
Replicat Name:            ROACDW
PARALLELISM:              4
MAX_PARALLELISM:          50
COMMIT_SERIALIZATION:     DEPENDENT_TRANSACTIONS
EAGER_SIZE:               9500
BATCHSQL:                 Y
BATCH_SQL_MODE:           (not shown)
MAX_SGA_SIZE:             INFINITE
OPTIMIZE_PROGRESS_TABLE:  N

A bulk load implies that a single large transaction was performed. If this is the case, then the inbound server will apply in EAGER mode, meaning the transaction will begin applying in the inbound server before the entire transaction has been received. Eagerly applied transactions cannot take advantage of BATCHSQL.

BATCHSQL is directly related to the EAGER_SIZE when used in the Integrated Replicat. The default EAGER_SIZE in Oracle GoldenGate 12.1.x is 9500, and from OGG 12.2 it is 15100.

So, if the number of operations in a transaction is greater than the EAGER_SIZE, then BATCHSQL will have no effect. The number of LCRs in a transaction should be less than 9500 (12.1) or 15100 (12.2) for BATCHSQL to be used by the Integrated Replicat process.

If the number of operations in a transaction is greater than 9500 (12.1) or 15100 (12.2), then the Integrated Replicat will not apply the transaction in parallel but in a serialized way, and hence we could see "Rows per Exec = 1" in the "SQL ordered by Executions" section of AWR.

For example, if most of your transactions have close to 100,000 (1 lakh) operations, then it is better to increase the EAGER_SIZE to 100000. But do remember that if you increase the EAGER_SIZE, you need a sufficient amount of memory too, so increase the STREAMS_POOL_SIZE as well.
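As a sketch, the two changes could look like this; the values are illustrative and should be sized for your own workload. EAGER_SIZE is set through DBOPTIONS INTEGRATEDPARAMS in the Replicat parameter file, and STREAMS_POOL_SIZE at the database level:

-- In the Integrated Replicat parameter file
DBOPTIONS INTEGRATEDPARAMS (EAGER_SIZE 100000)

-- In the database
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;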

Once you increase the EAGER_SIZE to 100000, create a table, insert 90000 records, and commit the transaction. Then update 80000 records and commit that transaction. Generate the AWR report and check whether "Rows per Exec > 1" appears in the "SQL ordered by Executions" section.

Hope this clearly explained how to use BATCHSQL with the Integrated Replicat process.

Cheers 🙂

Oracle GoldenGate – Parallel Replicat


When Oracle GoldenGate was initially launched, there were only the EXTRACT and REPLICAT processes. Later, in OGG 11.2.x, Oracle introduced the Integrated Extract, so the existing Extract was renamed to Classic Extract. Similarly, the Replicat process was also renamed to Classic Replicat. Oracle GoldenGate became a powerful tool for replication and migration, and as OGG went through many version upgrades, many new process types were introduced, as you can see in the diagram below.

From the above table, you can see a new type of Replicat process introduced in OGG 12.3, called "PARALLEL REPLICAT". The Integrated Replicat itself is very fast in applying data to the target, as it has parallelism built in. But the Parallel Replicat process is even faster than the Integrated Replicat process.

  • Parallel Replicat has a highly scalable apply engine which achieves an apply rate of up to 1 million+ operations per second.
  • It is 5X faster than Integrated Replicat.
  • It applies a single large transaction in parallel.
  • It parallelizes apply while carefully handling dependent transactions.
  • Dependency calculation and parallelism happen outside the database.
  • It has the ability to parallelize a single large transaction.
  • It processes large transactions faster as they are parallelized.
  • Dependencies are still honored while parallelizing a large transaction.
  • This ability is controlled by the SPLIT_TRANS_RECS parameter, which specifies the transaction split size (in records). The default is 100,000.

There are two types of Parallel Replicat processes available.

  • Integrated Parallel Replicat

  • Non-Integrated Parallel Replicat

Let us now look at the architecture and working concepts of this newly introduced Parallel Replicat process.

Like the Integrated Replicat, the Parallel Replicat process has background processes, and each one of them plays its role in applying the transactions to the target server. Below is the architecture diagram of the Parallel Replicat process.

So, what do these background processes do? Check below,

MAPPERS

The Mappers read the trail file, map records, and forward the mapped records to the Master.

APPLIERS

The Appliers apply the records to the target database.

MASTER

The Master process consists of two separate threads, Collater and Scheduler.

COLLATER

The Collater is responsible for managing and communicating with the Mappers, along with receiving the mapped transactions and reordering them into a single in-order stream.

SCHEDULER

The Scheduler is responsible for managing and communicating with the Appliers, along with reading transactions from the Collater, batching them, and scheduling them to Appliers.

As per the above diagram, we have some special parameters that govern how the Parallel Replicat behaves. Let's see what these parameters do,

MAP_PARALLELISM

Configures the number of Mappers. This controls the number of threads used to read the trail file. The default value is 2.

APPLY_PARALLELISM

Configures the number of Appliers. This controls the number of connections in the target database used to apply the changes. The default value is 4.

MIN_APPLY_PARALLELISM | MAX_APPLY_PARALLELISM

The apply parallelism is auto-tuned. You can set a minimum and maximum value to define the range in which the Replicat automatically adjusts its parallelism. There are no defaults. Do not use these together with APPLY_PARALLELISM at the same time.

SPLIT_TRANS_RECS

Specifies that large transactions should be broken into pieces of specified size and applied in parallel. Dependencies between pieces are still honored. Disabled by default.

COMMIT_SERIALIZATION

Enables commit FULL serialization mode, which forces transactions to be committed in trail order.

The differences between an Integrated Parallel Replicat and a Non-Integrated Parallel Replicat are:

    • The APPLIER process runs outside the database in a Non-Integrated Parallel Replicat, whereas in the Integrated mode of Parallel Replicat, the APPLIER process runs inside the database.
    • The Integrated Parallel Replicat can be configured only when your Oracle RDBMS version is 12.2 or above.

Now, let’s configure a Non-Integrated Parallel Replicat process.

I have installed Oracle GoldenGate version 12.3.0.1.2.

Add the parameters for the Parallel Replicat process.

Add the Parallel Replicat process using the below command.
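A sketch of the command for the example group PREP (the trail path is illustrative):

GGSCI> ADD REPLICAT PREP, PARALLEL, EXTTRAIL ./dirdat/et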

You can see the Parallel Replicat process has been added now.

How do we know whether the Replicat we added is parallel, integrated, or classic? You can see the keyword PARALLEL in the INFO output of the Replicat process.

Let’s start the Parallel Replicat process now.

Now the replicat is running.

Below is the report file of the newly added Parallel Replicat process PREP.

GGSCI (ogg123-2.localdomain as ggadmin@ORCL) 12> view report PREP


***********************************************************************
                 Oracle GoldenGate Delivery for Oracle
    Version 12.3.0.1.2 OGGCORE_12.3.0.1.0_PLATFORMS_171208.0005_FBO
   Linux, x64, 64bit (optimized), Oracle 12c on Dec  9 2017 01:06:26

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2018-11-18 19:29:33
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Wed Feb 2 18:40:23 EST 2011, Release 2.6.32-100.28.5.el6.x86_64
Node: ogg123-2.localdomain
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 9554

Description:

2018-11-18 19:29:33  WARNING OGG-02904  Replication of PARTIAL XML containing NCHAR/NVARCHAR/NCLOB data may cause divergence.

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2018-11-18 19:29:33  INFO    OGG-03059  Operating system character set identified as UTF-8.

2018-11-18 19:29:33  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.

2018-11-18 19:29:34  INFO    OGG-01360  REPLICAT is running in Parallel mode.
replicat PREP
userid ggadmin, password ***
MAP_PARALLELISM 2
MIN_APPLY_PARALLELISM 2
MAX_APPLY_PARALLELISM 8
SPLIT_TRANS_RECS 100
MAP source.t1, TARGET target.t1;

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Applier with pid 9,564.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Applier with pid 9,565.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Applier with pid 9,566.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Applier with pid 9,567.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Applier with pid 9,569.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Mapper with pid 9,568.

2018-11-18 19:29:34  INFO    OGG-06067  Spawned Mapper with pid 9,570.

2018-11-18 19:29:36  INFO    OGG-06451  Triggers will be suppressed by default.

2018-11-18 19:29:36  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /u01/app/ogg/dirtmp.

2018-11-18 19:29:37  INFO    OGG-06604  Database ORCL CPU info: CPU Count 1, CPU Core Count 0, CPU Socket Count 0.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2018-11-18 19:30:09  INFO    OGG-06067  Spawned Applier with pid 9,624.

As we know, the Replicat process has its own checkpoint table, which is mainly used for recovery purposes. The Parallel Replicat also has its own checkpoint table, but it does not use the default checkpoint table mentioned in the ./GLOBALS parameter file.

The checkpoint table here is ggadmin.ggchkpt.

This checkpoint table should have an entry for the Parallel Replicat process PREP, but we will not see one there.

This is because the Parallel Replicat process creates its own checkpoint table. It uses the same schema name mentioned in the ./GLOBALS parameter file, but the table is created with an extension to its name, as shown below,

Querying this checkpoint table gives information about the Parallel Replicat process.

We have multiple Applier processes, but only 3 were used, and hence we can see information for those 3 processes. So, this is something different when it comes to the Parallel Replicat process: do not always look only at the default checkpoint table that gets created.

So, we have created a Non-Integrated Parallel Replicat process. Let's try to create an Integrated Parallel Replicat process now. Add the parameters for the Integrated Parallel Replicat process.

Add the Replicat IPREP now. The keyword used here is "PARALLEL INTEGRATED".
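A sketch of the command (the trail path is illustrative):

GGSCI> ADD REPLICAT IPREP, PARALLEL INTEGRATED, EXTTRAIL ./dirdat/et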

Let’s start the Integrated Parallel Replicat process IPREP.

Oops! The process is stopped. Why? Let us see the report file to check for the error messages.

As I mentioned earlier in this post, the Integrated Parallel Replicat process is supported only when the database version is 12.2 or above. The database I have used here is 12.1.x, and hence it failed.

Finally, below is a comparison of the various Replicat processes available in Oracle GoldenGate.

Hope the post gave you good information about the new Replicat process "Parallel Replicat", introduced in OGG 12.3. Your comments are always welcome.

Cheers 🙂