Monthly Archive: August 2017

Oracle GoldenGate 12.2 – Enhanced Heartbeat Table

Prior to OGG 12.2, configuring the heartbeat table was tedious and error-prone. From OGG 12.2 onwards this has been simplified, avoiding the lengthy manual steps documented in the note below:

Oracle GoldenGate Best Practices: Heartbeat Table for Monitoring Lag times (Doc ID 1299679.1)

Prior to OGG 12.2 you had to create the heartbeat tables manually, whereas from OGG 12.2 the heartbeat table can be configured with a single command, ADD HEARTBEATTABLE.

The command is issued at the GGSCI prompt. It creates all the necessary tables, views, jobs and procedures related to the heartbeat table.
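A minimal GGSCI session sketch (the ggadmin user and password are the credentials used elsewhere in this blog; substitute your own GoldenGate administration user):

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD oracle
GGSCI> ADD HEARTBEATTABLE
```

ADD HEARTBEATTABLE also accepts optional clauses such as FREQUENCY and RETENTION_TIME to control how often heartbeat records are generated and how long the history is kept.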

The details of the objects are below,

There are also a few more related parameters, which are described below.


The HEARTBEATTABLE parameter allows us to use a non-default name for the heartbeat table. GG_HEARTBEAT is the default name. The tables are created under the schema specified by the GGSCHEMA parameter in the ./GLOBALS parameter file.

The syntax is below,

HEARTBEATTABLE [schema_name.]heartbeat_table_name

By default, the tables are created under the schema specified by GGSCHEMA, but you can also override the schema name using this parameter.

In the example below, I have changed the prefix of the heartbeat table name in the ./GLOBALS parameter file.

Now I add the heartbeat table, and you can see that the tables are created with the prefix “GGHBT”.
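As a sketch, the relevant ./GLOBALS entries would look something like this (the ggadmin schema is an assumption carried over from the earlier login; the GGHBT name matches the prefix mentioned above):

```
-- ./GLOBALS
GGSCHEMA ggadmin
HEARTBEATTABLE ggadmin.gghbt
```

After exiting and re-entering GGSCI, ADD HEARTBEATTABLE would then create its objects based on the name GGHBT instead of the default GG_HEARTBEAT.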


The ENABLE_HEARTBEAT_TABLE and DISABLE_HEARTBEAT_TABLE parameters are used to enable or disable heartbeat processing. They are valid in GLOBALS, Extract and Replicat parameter files.

ENABLE_HEARTBEAT_TABLE enables Oracle GoldenGate processes to handle records from a GG_HEARTBEAT table. This is the default.

DISABLE_HEARTBEAT_TABLE prevents Oracle GoldenGate processes from handling records from a GG_HEARTBEAT table.
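As a sketch, to switch heartbeat processing off for a single process, the parameter goes into that process's parameter file (the process name, credentials and trail path here are illustrative):

```
-- dirprm/ext1.prm (sketch)
EXTRACT ext1
USERID ggadmin, PASSWORD oracle
DISABLE_HEARTBEAT_TABLE
EXTTRAIL /vol3/ogg/dirdat/et
TABLE source.t1;
```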

Note: whenever you make changes to the ./GLOBALS parameter file, you need to exit and log back in to the GGSCI prompt; otherwise the modified parameter will not take effect.

To delete the heartbeat table, simply issue the DELETE HEARTBEATTABLE command.
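A minimal sketch, reusing the same login as before:

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD oracle
GGSCI> DELETE HEARTBEATTABLE
```

This removes the heartbeat tables, views, jobs and procedures that ADD HEARTBEATTABLE created.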


Cheers 🙂

OGG 12.2 New Feature – Automated Remote Trail File Recovery

This article explains how to recover from trail file corruption in Oracle GoldenGate 12cR2 (12.2). Prior to OGG 12.2, recovery from a trail file corruption involved many steps and was very tedious, and a single mistake could lead to data integrity issues and broken replication. Below are the steps to recover from trail file corruption prior to OGG 12.2 (OGG 11g, 12cR1):

1. Note down the Last Applied SCN and the Timestamp from the target.
2. On the source, scan the trail file for this Last Applied SCN.
3. Re-extract the data after this SCN by altering the Pump Extract process.
4. Alter the Replicat process to read from the new trail seq#.
5. Start the Pump and Replicat processes.
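The manual procedure above boils down to GGSCI commands along these lines (the sequence number and RBA are placeholders for the values found while scanning; the process names match the configuration shown later):

```
-- On the source: reposition the pump to re-extract from the chosen point
GGSCI> ALTER EXTRACT pmp1, EXTSEQNO 5, EXTRBA 0

-- On the target: point the replicat at the newly written trail sequence
GGSCI> ALTER REPLICAT rep1, EXTSEQNO 5, EXTRBA 0

GGSCI> START EXTRACT pmp1
GGSCI> START REPLICAT rep1
```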

From Oracle GoldenGate 12.2, a new feature called “Automated Remote Trail File Recovery” has been introduced. In this article, I am going to explain, with an example, how simple it is to recover from a trail file corruption in OGG 12.2.

Server and configuration details are below.


Source:

Hostname 	- 	OGGR2-1.localdomain
Database 	-	GGDB1
Schema		-	SOURCE
Table Name 	- 	T1
OGG		-	/vol3/ogg
Extract 	-	EXT1
Pump		-	PMP1


Target:

Hostname 	- 	OGGR2-2.localdomain
Database 	-	GGDB2
Schema		-	TARGET
Table Name 	- 	T1
OGG		-	/vol3/ogg
Replicat	-	REP1

From the output below, you can see that all the Oracle GoldenGate processes are running without any issues.

The parameters of the Oracle GoldenGate processes are below.

Extract (EXT1):

USERID ggadmin, PASSWORD oracle
EXTTRAIL /vol3/ogg/dirdat/et
TABLE source.t1;

Pump (PMP1):

RMTTRAIL /vol3/ogg/dirdat/ft
TABLE source.t1;

Replicat (REP1):

USERID ggadmin, PASSWORD oracle
MAP source.t1, TARGET target.t1;

Check the count of the table T1 in source and target.
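A quick way to compare, run against each database (schema and table names are from the configuration above):

```sql
-- On the source (GGDB1)
SELECT COUNT(*) FROM source.t1;

-- On the target (GGDB2)
SELECT COUNT(*) FROM target.t1;
```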

Now I am going to stop the Replicat process on the target side so that I can corrupt one of the trail files that will be generated on the target server.

Insert some records to the source table SOURCE.T1
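For example (a sketch only; the column list of T1 is an assumption, so use values matching your own table):

```sql
INSERT INTO source.t1 VALUES (100, 'test row');
COMMIT;
```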

In the source, the below trail files are generated.

All this data is pushed / pumped to the target server by the Pump process.

In the target server, I have corrupted the trail file ft000000002. Now I am going to start the Replicat process REP1 on the target side, and it abends with an “incompatible record” error. Let us see it.

The Replicat process abended.

The error below was in the report file.

So the trail file ft000000002 has a corrupted record at RBA 4996369. Also, only a few records were replicated to the target table TARGET.T1.

The Replicat process will not move further since there is a bad record in the trail file. Let’s use the new feature, “Automated Remote Trail File Recovery”, which is available in OGG 12.2. Below are the steps:

1. First and foremost, stop the Pump process on the source side.
2. On the target, delete all trail files starting from the corrupted seq#.
3. Start the Pump process on the source; the missing trail files are automatically rebuilt when the Extract Pump is bounced.
4. Start the Replicat process.
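The steps above can be sketched as follows (process names and trail paths are from the configuration earlier; the rm pattern assumes the corruption starts at seq# 2):

```
-- On the source
GGSCI> STOP EXTRACT pmp1

-- On the target (OS shell): remove trails from the corrupted seq# onwards
$ rm /vol3/ogg/dirdat/ft00000000[2-9]*

-- On the source
GGSCI> START EXTRACT pmp1

-- On the target
GGSCI> START REPLICAT rep1
```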

Below are the detailed steps,

1. Stop the Extract Pump (Datapump) process on the source.

2. Delete all the trail files from the corrupted seq# ft000000002 onwards.

3. Start the Pump process on the source; any missing trails are now automatically rebuilt.

Report file of the Datapump process PMP1.

4. Finally, start the Replicat process REP1.

You can see the Replicat is running fine and the records are applied to the target table TARGET.T1.

See how simple it is to recover from a corrupted trail file at the target. If you compare the steps involved prior to OGG 12.2 with those from OGG 12.2 onwards, the procedure is very much simplified. But there are some requirements for using this new Automated Remote Trail File Recovery feature:

1. At least one valid, complete trail file must be present prior to the corrupted trail file.
2. The corresponding trail files must still exist on the source.

You may wonder: what if the Replicat process, after the restart, reads and applies records that were already applied on the target? You would end up with errors like “ORA-00001: unique constraint violated”. To overcome this, a new parameter, FILTERDUPTRANSACTIONS, has been introduced from OGG 12c.


a) It causes Replicat to ignore transactions that it has already processed.
b) Use it when Extract was repositioned to a new start point (the ATCSN or AFTERCSN option of “START EXTRACT”) and you are confident that there are duplicate transactions in the trail that could cause Replicat to abend.
c) This option requires the use of a checkpoint table.
d) If the database is Oracle, this option is valid only for Replicat in nonintegrated mode.
e) To override this behaviour, use NOFILTERDUPTRANSACTIONS.
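A sketch of where the parameter sits in a Replicat parameter file (names reused from the example above):

```
-- dirprm/rep1.prm (sketch)
REPLICAT rep1
USERID ggadmin, PASSWORD oracle
-- Requires a checkpoint table; skips transactions already applied
FILTERDUPTRANSACTIONS
MAP source.t1, TARGET target.t1;
```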

Hope you found this article useful.

Cheers 🙂