Category: GoldenGate

Oracle GoldenGate Logdump

We cannot open a redo log or archive log directly in Oracle Database; we need the LogMiner utility for that. Similarly, in Oracle GoldenGate, we cannot open a trail file directly to check the data written to it.

Logdump is the utility used to open and examine a trail file in Oracle GoldenGate.

So, what can we see in the trail file? What information does it hold?

In the above image, we can see IOType and TransInd. What are they?

IOType

Each operation type (INSERT, DELETE, UPDATE, DDL, and so on) has its own ID: an INSERT is recorded with IOType 5, a DELETE with IOType 3, and a COMMIT with IOType 2. There are many more IO types.
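As a quick reference, a few commonly seen IOType values are listed below. Insert, Delete, and Commit match the values mentioned above; the remaining entries are typical Logdump output, so verify them against your own trail files:

```
IOType   2  - Commit
IOType   3  - Delete
IOType   5  - Insert
IOType  15  - FieldComp (compressed Update)
IOType 134  - UnifiedUpdate (seen in newer trails)
```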

TransInd

It is short for Transaction Indicator. There are four kinds of transaction indicator, which show where a record sits within its transaction.
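In place of the image, the four TransInd values and their meanings are:

```
TransInd 00 (x00) - first record of a multi-record transaction
TransInd 01 (x01) - middle record of a transaction
TransInd 02 (x02) - last record of a transaction
TransInd 03 (x03) - only record (the whole transaction is a single record)
```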

Let’s continue to see what other information can be seen in the Logdump.

The trail file information we have seen above applies to releases prior to Oracle GoldenGate 12.2. From 12.2 onward, the Extract process captures more information and writes it to the trail file. This enhancement is called "Self-Describing Trail Files"; we will discuss this feature in an upcoming post.

Some of the basic Logdump commands and their uses are below.
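A typical Logdump session looks like the sketch below. The trail file name is illustrative; the commands themselves are standard Logdump commands, annotated with what each one does:

```
Logdump 1> OPEN ./dirdat/lt000000000    (open a trail file)
Logdump 2> GHDR ON                      (show the record header: IOType, TransInd, etc.)
Logdump 3> DETAIL ON                    (show column details)
Logdump 4> DETAIL DATA                  (include column values in hex/ASCII)
Logdump 5> USERTOKEN ON                 (show user tokens)
Logdump 6> POS 0                        (position at the beginning of the file)
Logdump 7> NEXT                         (or simply N: step to the next record)
Logdump 8> COUNT                        (count the records in the trail)
```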

Hope you enjoyed the post. Keep supporting. Cheers 🙂

Downgrade the Extract (or) Capture from Integrated mode to Classic mode

There are two types of Extract / Capture processes in Oracle GoldenGate.

1. Classic Extract

2. Integrated Extract

Classic Extract is the traditional Extract process; it is single threaded. If you want to know more about the Integrated Extract process, click the link below,

In my earlier post, we saw how to upgrade the Extract/Capture process from Classic to Integrated. Now, let's see how to downgrade an Integrated Capture to Classic.

Sometimes we face issues with the Integrated mode of capture and can downgrade it to Classic mode to keep moving, but this does not help in all cases. As I explained in my previous article, before downgrading an Integrated Extract to Classic mode, we need to check whether the existing environment is supported by Classic mode. If you have a RAC environment, XA transactions, or a compression-enabled database, then the Classic Extract process cannot be used, and in that situation it is not advised to downgrade the Extract process from Integrated to Classic.

But in some situations this really helps to move forward and avoid replication latency. Below are the high-level steps to downgrade the Extract process from Integrated to Classic.

1. Stop the Extract process

GGSCI> STOP EXTRACT <group>

2. Issue the following command to determine whether the downgrade command can be issued. Transactions that started before the downgrade command is issued must be written to the trail before you can proceed. You may have to issue this command more than once until it returns a message stating that Extract can be downgraded.

GGSCI> INFO EXTRACT <group> DOWNGRADE

3. Downgrade the Extract process.

GGSCI> ALTER EXTRACT <group> DOWNGRADE INTEGRATED TRANLOG

4. Unregister the Extract process from the Database.

GGSCI> UNREGISTER EXTRACT <group> DATABASE

5. Start the Extract process.

GGSCI> START EXTRACT <group>

Let me explain this with an example to make it easier to understand.

I have an Extract process EXT1 which is in Integrated mode. As I explained in my previous article, you can easily find out whether an Extract process is Classic or Integrated from the INFO command output.

In the INFO command output, the "Log Read Checkpoint" line tells you the mode:

Log Read Checkpoint – Oracle Redo Logs – Classic Extract

Log Read Checkpoint – Oracle Integrated Redo Logs – Integrated Extract

If you look at the INFO command output here, the "Log Read Checkpoint" shows "Oracle Integrated Redo Logs", which means the Extract EXT1 is an Integrated Extract process.
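As an illustration (timestamps and other details are made-up placeholders; only the "Log Read Checkpoint" line matters here), an Integrated Extract shows up like this:

```
GGSCI> INFO EXT1

EXTRACT    EXT1      Last Started 2018-09-29 11:00   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2018-09-29 11:05:10
```

A Classic Extract would show "Oracle Redo Logs" on the same line instead.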

1. Always perform a DBLOGIN whenever you perform any activity in Oracle GoldenGate.

2. Stop the Extract process.

3. Issue the following command to determine whether the downgrade command can be issued. Transactions that started before the downgrade command is issued must be written to the trail before you can proceed. You may have to issue this command more than once until it returns a message stating that Extract can be downgraded.

4. The Extract process is ready to be downgraded. But before that, we need to edit the parameter file and remove, comment out, or add parameters as required for a Classic Extract process. For example, my parameter file has the below parameter,

TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 500)

which is not supported by the Classic Extract process. So, make the necessary changes to the parameter file before downgrading and starting the Extract process.

5. Downgrade the Extract process.

If you look at the INFO command output now, the value of "Log Read Checkpoint" has changed to "Oracle Redo Logs". This means the Extract process has been downgraded from Integrated to Classic.

But the logminer capture that was created when you registered the Extract process EXT1 still exists in the database.

This is because we haven't unregistered the Extract process from the database.

6. Unregister the Extract process from the database.

Now check the DBA_CAPTURE view again. It will return no rows.
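The check can be sketched as below, assuming a user with access to DBA_CAPTURE (capture names for an Extract normally follow the OGG$CAP_<group> convention):

```sql
-- Before UNREGISTER: one row for the logminer capture of EXT1
-- After  UNREGISTER: no rows selected
SELECT capture_name, status
FROM   dba_capture
WHERE  capture_name LIKE 'OGG$%';
```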

7. Start the Extract process.

See you again. Cheers 🙂

Upgrade Extract from Classic mode to Integrated mode

Sometimes we may face a situation where we need to upgrade our Extract process from Classic mode to Integrated mode.

This situation may arise due to the below reasons,

1. When you have data types that are not supported by Classic Capture.
2. When you are using Classic Capture in a RAC environment. This could be the main reason; I would always recommend going with Integrated mode in a RAC environment.
3. When you are not satisfied with the performance of Classic Capture, since it is single threaded.
4. When you face an error or bug in Classic mode which you would not face in Integrated mode.

Likewise, more reasons can be added to this list.

In this article, we are going to see how to upgrade an Extract / Capture process from Classic mode to Integrated mode. Below are the steps,

1. Stop the Extract process

GGSCI> STOP EXTRACT <group>

2. Register the Extract process

GGSCI> REGISTER EXTRACT <group> DATABASE

3. Issue the following command to determine whether the upgrade command can be issued. Transactions that started before the registration command must be written to the trail before you can proceed with the upgrade. You may have to issue this command more than once until it returns a message stating that Extract can be upgraded.

GGSCI> INFO EXTRACT <group> UPGRADE

4. Once confirmed, Upgrade the Extract process.

GGSCI> ALTER EXTRACT <group> UPGRADE INTEGRATED TRANLOG

5. Before starting the Extract process, edit the parameter file accordingly. You might have used the below parameter, which is not supported by Integrated Extract, in the Extract parameter file:

TRANLOGOPTIONS DBLOGREADER

This will make the Integrated Extract process abend when you start it. Likewise, there are many parameters that are supported only in Classic mode. Check and remove those parameters before starting the Integrated Extract process after upgrading it.

Let me walk through an example to make this easier to understand.

I have an Extract/Capture process EXT1 which is running in Classic mode.

From the below INFO output, you can see "Log Read Checkpoint" as "Oracle Redo Logs". This means the Extract process is a Classic Extract.

1. It is always recommended to perform a DBLOGIN before doing any activity in Oracle GoldenGate. So, perform the DBLOGIN,

2. Stop the Extract process. Check if the extract EXT1 has stopped.

3. Register the Extract process with the database. You might be aware of this command; REGISTER creates a logminer capture in the database.

Once you register the Extract process to the database, you can query the DBA_CAPTURE view to check if a Logminer server has been created or not.

The status is in “DISABLED” as we did not start it yet.
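Such a check can be sketched as below (a minimal query, with the column list trimmed to the interesting ones):

```sql
-- Right after REGISTER EXTRACT, the capture exists but has not started yet
SELECT capture_name, status, start_scn
FROM   dba_capture;

-- Expected at this point: STATUS = 'DISABLED'
```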

But when issuing the INFO command, you will still see the Extract process in Classic mode. This is because we haven't upgraded it yet; we have only registered it with the database.

4. Issue the following command to determine whether the upgrade command can be issued. Transactions that started before the registration command must be written to the trail before you can proceed with the upgrade. You may have to issue this command more than once until it returns a message stating that Extract can be upgraded.

5. START and STOP the Extract process so that it moves past the SCN at which it was registered. It has to reach SCN 1077042, as mentioned in the above message.

6. Once it has passed that SCN, stop the Extract process.

7. Check if the Extract process is ready for the upgrade by issuing the below command,

8. Now the Extract process EXT1 is ready for the upgrade. Issue the below command to upgrade the capture from Classic mode to Integrated mode,

9. Before starting the Extract process, edit the parameter file accordingly. You might have used the below parameter, which is not supported by Integrated Extract, in the Extract parameter file:

TRANLOGOPTIONS DBLOGREADER

This will make the Integrated Extract process abend when you start it. Likewise, there are many parameters that are supported only in Classic mode. Check and remove those parameters before starting the Integrated Extract process after upgrading it.

In my case, I hadn't used any such parameters, so I just added one of the parameters used by the Integrated mode of the Extract process. Below is my parameter file,
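For reference, the parameter file looked roughly like this. The Integrated-only parameter is TRANLOGOPTIONS INTEGRATEDPARAMS; the schema, trail, and credential names are illustrative:

```
EXTRACT ext1
USERID ggadmin, PASSWORD oracle
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 200)
EXTTRAIL ./dirdat/lt
TABLE source.*;
```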

10. Now, go ahead and start the Extract process.

11. Check whether the Extract process has been upgraded from Classic to Integrated. From the below output, you can clearly see that "Log Read Checkpoint" has changed from "Oracle Redo Logs" to "Oracle Integrated Redo Logs". The Extract process is now running in INTEGRATED mode.

From the report file, you can see the below lines,

From the DBA_CAPTURE view, you can see the capture status is now "ENABLED".

Hope you enjoyed the post. Cheers 🙂

Oracle GoldenGate Integrated Replicat slow and does not use BATCHSQL


BATCHSQL is specified, but in the report file we do not see any BATCHSQL activity:
BATCHSQL statistics:
Batch operations: 0
Batches: 0
Batches executed: 0
Queues: 0
Batches in error: 0
Normal mode operations: 11055655
Immediate flush operations: 0
PK collisions: 0
UK collisions: 0
FK collisions: 0
Thread batch groups: 0
Commits: 0
Rollbacks: 0
Queue flush calls: 0
Ops per batch: N/A
Ops per batch executed: N/A
Ops per queue: N/A
Parallel batch rate: N/A

and the Replicat is very slow: the lag is 7 minutes when it should be at most 30 seconds.

From the IR healthcheck report we can see that the BATCHSQL is enabled:

BATCHSQL = Y

However, from the AWR report we can see that for the GoldenGate operations the rows per execution is around 1 (or less):

Executions 	Rows Processed 	Rows per Exec 	Elapsed Time (s) 	SQL Module 		SQL  Text
============== ============== 	============= 	================ 	==========     	=================================
565,577         670,971 	0.19    	1,111.16 	 	GoldenGate 	INSERT /*+ restrict_all_ref_c...
365,150 	361,452 	0.99 		586.7872.7 	 	GoldenGate 	INSERT /*+ restrict_all_ref_c...
201,967 	215,429 	0.95 		482.7754.6 	 	GoldenGate 	INSERT /*+ restrict_all_ref_c...
201,195 	213,216 	0.87 		438.8554.5 	 	GoldenGate 	INSERT /*+ restrict_all_ref_c...

This is because BATCHSQL is directly related to EAGER_SIZE when used with the Integrated Replicat. This is very important to note. We should not assume that BATCHSQL always improves performance; sometimes it degrades it. When used with an Integrated Replicat process, we need to be even more careful.

For Integrated Replicat, the OGG BATCHSQL statistics are never maintained. It will always show as normal mode in the OGG report file whether BATCHSQL is enabled or not. The only way to determine if BATCHSQL is in use on the IR inbound server is to use AWR and look at the AWR SQL Executions report. If the Rows per Exec > 1, then BATCHSQL is in operation.

You can tell whether BATCHSQL is configured (for the inbound server) by looking at the Replicat parameters section in the healthcheck summary:

Integrated Replicat key parameters (Details):

APPLY_NAME              OGG$ROACDW
Replicat Name           ROACDW
PARALLELISM             4
MAX_PARALLELISM         50
COMMIT_SERIALIZATION    DEPENDENT_TRANSACTIONS
EAGER_SIZE              9500
BATCHSQL                Y
MAX_SGA_SIZE            INFINITE
OPTIMIZE_PROGRESS_TABLE N

A bulk load implies that a single large transaction was performed. If this is the case, the inbound server will apply it in EAGER mode, meaning the transaction begins applying in the inbound server before the entire transaction has been received. Eagerly applied transactions cannot take advantage of BATCHSQL.

BATCHSQL is directly related to EAGER_SIZE when used with the Integrated Replicat. The default EAGER_SIZE in Oracle GoldenGate 12.1.x is 9500, and from OGG 12.2 it is 15100.

So, if the number of operations in a transaction is greater than the EAGER_SIZE, BATCHSQL has no effect. The number of LCRs per transaction should stay below 9500 (12.1) or 15100 (12.2) for BATCHSQL to be used by the Integrated Replicat process.

If the number of operations in a transaction is greater than 9500 (12.1) or 15100 (12.2), the Integrated Replicat will not apply the transaction in parallel but in a serialized way, and hence we see "Rows per Exec = 1" in the "SQL ordered by Executions" section of AWR.

For example, if most of your transactions have close to 100,000 operations, it is better to increase the EAGER_SIZE to 100000. But remember: if you increase the EAGER_SIZE, you also need a sufficient amount of memory, so increase the STREAMS_POOL_SIZE as well.
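EAGER_SIZE for an Integrated Replicat is set through DBOPTIONS INTEGRATEDPARAMS in the Replicat parameter file. A sketch (the group, schema, and values are examples; size them for your own workload):

```
REPLICAT rep1
USERID ggadmin, PASSWORD oracle
BATCHSQL
DBOPTIONS INTEGRATEDPARAMS (EAGER_SIZE 100000, PARALLELISM 4)
MAP source.*, TARGET target.*;
```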

Once you have increased the EAGER_SIZE to 100000, test it: create a table, insert 90,000 records, and commit the transaction; then update 80,000 records and commit. Generate an AWR report and check whether "Rows per Exec > 1" in "SQL ordered by Executions".
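A quick way to generate such test transactions on the source (the table name is illustrative; run this against a test schema):

```sql
-- One transaction of 90,000 inserts, then one of 80,000 updates
CREATE TABLE batch_test (id NUMBER PRIMARY KEY, val VARCHAR2(10));

INSERT INTO batch_test
SELECT level, 'x' FROM dual CONNECT BY level <= 90000;
COMMIT;

UPDATE batch_test SET val = 'y' WHERE id <= 80000;
COMMIT;
```

Both transactions stay under the raised EAGER_SIZE, so the apply side should show batched execution in AWR.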

Hope this clearly explains how BATCHSQL behaves with the Integrated Replicat process.

Cheers 🙂

Oracle GoldenGate 18c


The much-awaited Oracle GoldenGate 18c has been released and is available for download. You can download it using the below links,

https://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html

https://edelivery.oracle.com/osdc/faces/SoftwareDelivery

Overview of Oracle GoldenGate 18c

As usual, it comes in two types of binaries:

    • Classic or Traditional Architecture

    • Microservices Architecture

You can see as below when you visit the above mentioned links,

Download: Oracle GoldenGate 18.1.0.0.0 for Oracle on Linux x86-64 (375 MB)
Download: Oracle GoldenGate 18.1.0.0.0 Microservices for Oracle on Linux x86-64 (549 MB)

Below are the Oracle Versions it supports,

      • Oracle 11g – 11.2.0.4

      • Oracle 12c – 12.1.0.2+, 12.2.0.1+

      • Oracle 18c – 18.1.0 (Cloud), 18.3.0 (On-Premise)

From this release (OGG 18c), Classic Extract is deprecated for Oracle databases. It is still available, but no new features will be added to it; it remains the same as in earlier versions.

Below is the Certification Matrix for OGG 18c,

      • Oracle

      • Linux x86-64 (Classic & Microservices Architecture)
      • Solaris SPARC
      • Solaris x86-64
      • Windows x64

      Currently it is available for Linux; the Windows and Solaris releases will follow.

    • MySQL – Linux x86-64

    • DB2 LUW – Linux x86-64

    • DB2 for zOS – Linux x86-64

    • Teradata – Linux x86-64

  Some of the features of Oracle GoldenGate 18c for the Oracle database are below.

    Autonomous Databases

    OGG 18c supports only delivery (Replicat) for the below:

    • Autonomous Data Warehouse Cloud

    • Autonomous Transaction Processing

    To replicate to ADWC, you need to use non-integrated Replicat processes. That means you need to use one of the below,

    • Classic Replicat

    • Coordinated Replicat

    • Non-Integrated Parallel Replicat

    Identity Columns

    OGG 18c supports ID columns (Identity Columns) with some conditions,

    • The Database should be Oracle 18c (18.1.0 or 18.3.0).

    • Only DML can be replicated and DDL is not supported.

    • You need to have either Integrated Extract and Integrated Replicat or Integrated Parallel Replicat to use this feature.

    • Classic Extract or Classic Replicat will not support this feature.

    Auto CDR

    From OGG 18c, you can configure Auto CDR for tables with a Unique Key (UK) as well. But this can be done only when your database is Oracle 18c.

    Sharding

    Composite sharding is supported by OGG 18c when your database version is Oracle 18c. Sharding using HASH partitions is also allowed.

    In Database Row Archival

    OGG 18c enables replication of compressed and invisible rows. But this feature too is supported only when you use an Oracle 18c database.

    There are many more features. I have started testing them one by one and will cover them in detail in my upcoming posts.

    Cheers 🙂

Oracle GoldenGate – Integrated Replicat


Integrated Replicat was introduced in Oracle GoldenGate 12c. Like Integrated Extract, Integrated Replicat is tightly coupled with the Oracle database.

Integrated Replicat operates as below,

  • Reads the OGG Trail files which are pushed by the Data Pump process from the Source.

  • Performs filtering, mapping and conversion of data.

  • Constructs logical change records (LCRs) of the Source DML transactions in committed order. DDL operations are applied directly by Replicat.

  • Attaches the Database Inbound Server to the database by means of a Lightweight Streaming API.

  • Transfers the constructed LCRs to the Inbound Server which then applies the data to the Target Database.

Like Integrated Extract, you can also configure parallelism in Integrated Replicat process. Multiple Inbound Server child processes which are known as APPLY SERVERS applies data in parallel to the target database maintaining the original transaction atomicity. Below is the architecture diagram of the Integrated Replicat,

By default, Integrated Replicat applies transactions asynchronously. Transactions with no dependencies are safely executed and committed out of order to achieve fast throughput, while dependent transactions are applied in the same order as on the source.

The Integrated Replicat process also has its own sub-processes, which are as below,

    • REPLICAT – Reads the trail files, constructs the LCRs, and transfers them to the database using the lightweight streaming API.

    • RECEIVER – Reads the LCRs.

    • PREPARER – Computes the dependencies among the transactions and sorts them in order.

    • COORDINATOR – Coordinates the transactions and maintains the order between the Applier processes.

    • APPLIER – Applies the transactions to the database and performs the changes for the transactions received from the coordinator, including CDR and error handling.

If the Inbound Server does not support a column data type or some other feature, it automatically applies those transactions to the database in DIRECT APPLY (Classic mode) using OCI.

The following features are applied in DIRECT MODE by the Integrated Replicat:

      • DDL operations

      • Sequence operations

      • SQLEXEC parameter within a TABLE or MAP parameter

      • EVENTACTIONS processing

      • UDTs. Note: if the Extract uses USENATIVEOBJSUPPORT to capture a UDT, then Integrated Replicat will apply it with the inbound server; otherwise it is handled by Replicat directly.

For example, if you have many DDL or sequence operations, the Integrated Replicat will switch to direct apply for those operations and then switch back to integrated mode. This degrades the performance of the Integrated Replicat process even if you have set sufficient parallelism and memory.

Below are the advantages of the Integrated Replicat over the Classic (non-integrated) Replicat process:

      • Integrated Replicat supports parallelism, applying transactions in parallel for a good performance rate while preserving the integrity and atomicity of the source transactions. The parallelism can be set using the PARALLELISM and MAX_PARALLELISM parameters. The number of parallel apply servers increases and decreases automatically according to the workload.

      • There is no need for range-split Replicat processes for huge tables. Integrated Replicat is very easy to configure, and one IR is enough since it supports parallelism.

      • High-performance apply streaming is enabled for integrated Replicat by means of a lightweight application programming interface (API) between Replicat and the inbound server.

      • Integrated Replicat process coordinates and handles the barrier transactions among multiple server apply processes.

      • When a DDL operation is seen, Integrated Replicat processes it as a direct transaction that forces a barrier, waiting for server processing to complete before the DDL executes.

      • The Integrated Replicat process easily handles transient duplicate primary key updates.

      • Integrated Replicat can be configured in a standalone database or in pluggable databases (Multitenant).

To configure an Integrated Replicat process, there are some requirements:

      • The target database should be Oracle 11.2.0.4 or later.

      • Supplemental Logging should be enabled on the Source.

      • When using Integrated Replicat, you need to enable trandata at the OGG level using ADD SCHEMATRANDATA, not ADD TRANDATA. Also, you need to use the LOGALLSUPCOLS parameter in the source Extract parameter file. Otherwise, you will hit "ORA-26688: missing key in LCR".

      • The Integrated Replicat process can be used only with an Oracle database; it does not support non-Oracle databases.

      • Need to set STREAMS_POOL_SIZE

This plays a vital role in the integrated feature. You need to explicitly configure STREAMS_POOL_SIZE when you configure the integrated mode of capture or apply, since integrated capture consumes memory from the streams pool. There are some calculations to size the streams pool; please check the link below on how to size it.

      • Need to apply the RECOMMENDED PATCHES

There are some recommended patches to be applied to the Oracle database when using Integrated Capture/Replicat. Please check the below KM notes:

Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)

Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)
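Setting the streams pool itself is a plain initialization parameter change; for example (the 2G figure is only a placeholder; use the sizing guidance from the notes above):

```sql
-- Requires an SPFILE; takes effect online when memory is available
ALTER SYSTEM SET STREAMS_POOL_SIZE = 2G SCOPE = BOTH;

-- Verify the current setting
SELECT value/1024/1024 AS streams_pool_mb
FROM   v$parameter
WHERE  name = 'streams_pool_size';
```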

Let’s go in to the configuration now,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

Add the parameters for the Integrated Replicat process “INREP”

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 3> edit params INREP

REPLICAT inrep
USERID ggadmin, PASSWORD oracle
DBOPTIONS INTEGRATEDPARAMS(parallelism 6)
AssumeTargetDefs
MAP source.t1, TARGET target.t1;

Save the file and check the parameters once.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 4> view params INREP

REPLICAT inrep
USERID ggadmin, PASSWORD oracle
DBOPTIONS INTEGRATEDPARAMS(parallelism 6)
AssumeTargetDefs
MAP source.t1, TARGET target.t1;

DBOPTIONS INTEGRATEDPARAMS(parallelism 6) – specifies that this Integrated Replicat starts with 6 parallel apply processes.

To know more about these options, please refer to the below link,

https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/addl_integrated_params.htm#GIORA245

All the prerequisites are already taken care of. In my last post we saw that the Integrated Extract process must be registered with the database. The Integrated Replicat process, however, is registered automatically, so we don't need to register it explicitly. Let's just add the Integrated Replicat process,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 5> add replicat inrep INTEGRATED exttrail ./dirdat/lt
REPLICAT (Integrated) added.

You can see that an Integrated Replicat process has been created.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 10> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    STOPPED     INREP       00:00:00      00:01:38

Now Start the IR process,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 14> start INREP

Sending START request to MANAGER ...
REPLICAT INREP starting

Now the process is in RUNNING state.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 15> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     INREP       00:00:00      00:02:20

Below are the Entries in the Alert log file,

GoldenGate APPLY AP02 for OGG$INREP started with pid=25, OS id=3023
APPLY OGG$INREP: Apply User: GGADMIN
APPLY OGG$INREP: Apply Tag: 00
APPLY OGG$INREP: Parameter Set by User: PARALLELISM Value: 6
APPLY OGG$INREP: Parameter Set by User: ALLOW_DUPLICATE_ROWS Value: Y
APPLY OGG$INREP: Parameter Set by User: PRESERVE_ENCRYPTION Value: N
APPLY OGG$INREP: Parameter Set by User: EAGER_SIZE Value: 15100
APPLY OGG$INREP: Parameter Set by User: CDGRANULARITY Value: ROW
APPLY OGG$INREP: Parameter Set by User: SUPPRESSTRIGGERS Value: Y
Sat Sep 29 11:42:29 2018
GoldenGate Apply Reader for OGG$INREP started AS01 with pid=27 OS id=3025
Sat Sep 29 11:42:29 2018
Sat Sep 29 11:42:29 2018
GoldenGate Apply Server for OGG$INREP started AS02 with pid=28 OS id=3027
Sat Sep 29 11:42:29 2018
GoldenGate Apply Server for OGG$INREP started AS04 with pid=30 OS id=3031
GoldenGate Apply Server for OGG$INREP started AS06 with pid=32 OS id=3035
Sat Sep 29 11:42:29 2018
Sat Sep 29 11:42:29 2018
GoldenGate Apply Server for OGG$INREP started AS03 with pid=29 OS id=3029
GoldenGate Apply Server for OGG$INREP started AS05 with pid=31 OS id=3033
Sat Sep 29 11:42:29 2018
GoldenGate Apply Server for OGG$INREP started AS07 with pid=33 OS id=3037
APPLY OGG$INREP: Source Database: NA
APPLY OGG$INREP: Applied Message Number: NA
APPLY OGG$INREP: Message Create Time: NA
Sat Sep 29 11:42:30 2018
Propagation Receiver (CCA) for GoldenGate replicat and Apply OGG$INREP with pid=34, OS id=3021, objnum=0 started.

Below is the information in the report file of the Integrated Replicat process,

***********************************************************************
                 Oracle GoldenGate Delivery for Oracle
    Version 12.3.0.1.0 OGGCORE_12.3.0.1.0_PLATFORMS_170721.0154_FBO
   Linux, x64, 64bit (optimized), Oracle 11g on Jul 21 2017 20:36:55

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2018-09-29 11:42:22
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Wed Feb 2 18:40:23 EST 2011, Release 2.6.32-100.28.5.el6.x86_64
Node: OGGR2-1.localdomain
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 3016

Description:

2018-09-29 11:42:22  WARNING OGG-02904  Replication of PARTIAL XML containing NCHAR/NVARCHAR/NCLOB data may cause divergence.

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2018-09-29 11:42:22  INFO    OGG-03059  Operating system character set identified as UTF-8.

2018-09-29 11:42:22  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.

2018-09-29 11:42:22  INFO    OGG-01360  REPLICAT is running in Integrated mode.
REPLICAT inrep
USERID ggadmin, PASSWORD ***
DBOPTIONS INTEGRATEDPARAMS(parallelism 6)
AssumeTargetDefs
MAP source.t1, TARGET target.t1;

2018-09-29 11:42:23  INFO    OGG-06451  Triggers will be suppressed by default.

2018-09-29 11:42:23  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /ogg/dirtmp.

Database Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

Database Language and Character Set:
NLS_LANGUAGE     = "AMERICAN"
NLS_TERRITORY    = "AMERICA"
NLS_CHARACTERSET = "WE8MSWIN1252"

2018-09-29 11:42:23  INFO    OGG-06604  Database GGDB1 CPU info: CPU Count 1, CPU Core Count 0, CPU Socket Count 0.

2018-09-29 11:42:23  WARNING OGG-05673  CSN-based duplicate suppression is disabled because there is no checkpoint table for this Replicat.

2018-09-29 11:42:23  INFO    OGG-02545  Parameter GROUPTRANSOPS is ignored by Integrated Replicat when parallelism is greater than 1.

2018-09-29 11:42:23  INFO    OGG-02527  Integrated Replicat does not populate a trace table.

2018-09-29 11:42:30  INFO    OGG-02530  Integrated replicat successfully attached to inbound server OGG$INREP.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2018-09-29 11:42:30  INFO    OGG-02243  Opened trail file /ogg/dirdat/lt000000000 at 2018-09-29 11:42:30.874662.

2018-09-29 11:42:30  WARNING OGG-02760  ASSUMETARGETDEFS is ignored because trail file /ogg/dirdat/lt000000000 contains table definitions.

2018-09-29 11:42:30  INFO    OGG-03522  Setting session time zone to source database time zone 'GMT'.

2018-09-29 11:42:30  INFO    OGG-02232  Switching to next trail file /ogg/dirdat/lt000000001 at 2018-09-29 11:42:30.874662 due to EOF. with current RBA 1,401.

2018-09-29 11:42:30  INFO    OGG-03506  The source database character set, as determined from the trail file, is we8mswin1252.
Processed extract process graceful restart record at seq 1, rba 1401.

How do you check whether the Replicat process you created or are running is Classic or Integrated? It is simple: you can see the keyword INTEGRATED in the below command output,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 16> info INREP

REPLICAT   INREP     Last Started 2018-09-29 11:42   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Process ID           3016
Log Read Checkpoint  File /ogg/dirdat/lt000000001
                     2018-09-29 11:17:08.839373  RBA 1461

Below is the detailed information of the Integrated Replicat process,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 17> info INREP, detail

REPLICAT   INREP     Last Started 2018-09-29 11:42   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           3016
Log Read Checkpoint  File /ogg/dirdat/lt000000001
                     2018-09-29 11:17:08.839373  RBA 1461

INTEGRATED Replicat
DBLOGIN Provided, inbound server name is OGG$INREP in ATTACHED state


Current Log BSN value: (no data)

Low Watermark CSN value: (no data)

High Watermark CSN value: (no data)

  Extract Source                          Begin             End

  /ogg/dirdat/lt000000001                 * Initialized *   2018-09-29 11:17
  /ogg/dirdat/lt000000000                 * Initialized *   First Record
  ./dirdat/lt000000000                    * Initialized *   First Record


Current directory    /ogg

Report file          /ogg/dirrpt/INREP.rpt
Parameter file       /ogg/dirprm/inrep.prm
Checkpoint file      /ogg/dirchk/INREP.cpr
Process file         /ogg/dirpcs/INREP.pcr
Error log            /ogg/ggserr.log

One of the differences between a Classic Replicat and an Integrated Replicat is that, unlike the Classic Replicat, the Integrated Replicat does not use a checkpoint table. In the above output, you can clearly see there are no checkpoint table details. So where does the Integrated Replicat store its checkpoint information, apart from the checkpoint file? The Integrated Replicat is integrated with the database, which means the Oracle database knows about it. There are several Oracle GoldenGate views defined in the database data dictionary which can be queried to display information about Oracle GoldenGate queues and processes.

The checkpoint information can be queried from the view below:

SQL> desc ggadmin.DBA_GG_INBOUND_PROGRESS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SERVER_NAME                               NOT NULL VARCHAR2(30)
 PROCESSED_LOW_POSITION                             VARCHAR2(4000)
 APPLIED_LOW_POSITION                               VARCHAR2(4000)
 APPLIED_HIGH_POSITION                              VARCHAR2(4000)
 SPILL_POSITION                                     VARCHAR2(4000)
 OLDEST_POSITION                                    VARCHAR2(4000)
 APPLIED_LOW_SCN                           NOT NULL NUMBER
 APPLIED_TIME                                       DATE
 APPLIED_MESSAGE_CREATE_TIME                        DATE
 SOURCE_DATABASE                           NOT NULL VARCHAR2(128)
 LOGBSN                                             VARCHAR2(4000)
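Following the describe-then-query pattern used for the other views in this post, a quick progress check against this view might look like the following (a sketch; it uses only the columns from the DESCRIBE output above). The APPLIED_LOW_POSITION should correspond to the Low Watermark CSN reported by INFO REPLICAT, detail:

```sql
-- Sketch: current progress of each inbound server
-- (column names taken from the DESCRIBE output above)
SELECT server_name,
       applied_low_position,
       applied_time
FROM   ggadmin.DBA_GG_INBOUND_PROGRESS;
```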

To see the details of the Integrated Applier (the inbound server):

 
SQL> desc ggadmin.DBA_GOLDENGATE_INBOUND
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 REPLICAT_NAME                                      VARCHAR2(4000)
 SERVER_NAME                               NOT NULL VARCHAR2(30)
 APPLY_USER                                         VARCHAR2(30)
 USER_COMMENT                                       VARCHAR2(4000)
 CREATE_DATE                                        TIMESTAMP(6)
 STATUS                                             VARCHAR2(8)
SQL> select replicat_name,server_name from ggadmin.DBA_GOLDENGATE_INBOUND;

REPLICAT_NAME                  SERVER_NAME
------------------------------ ------------------------------
INREP                          OGG$INREP
SQL> desc ggadmin.DBA_APPLY
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 APPLY_NAME                                NOT NULL VARCHAR2(30)
 QUEUE_NAME                                NOT NULL VARCHAR2(30)
 QUEUE_OWNER                               NOT NULL VARCHAR2(30)
 APPLY_CAPTURED                                     VARCHAR2(3)
 RULE_SET_NAME                                      VARCHAR2(30)
 RULE_SET_OWNER                                     VARCHAR2(30)
 APPLY_USER                                         VARCHAR2(30)
 APPLY_DATABASE_LINK                                VARCHAR2(128)
 APPLY_TAG                                          RAW(2000)
 DDL_HANDLER                                        VARCHAR2(98)
 PRECOMMIT_HANDLER                                  VARCHAR2(98)
 MESSAGE_HANDLER                                    VARCHAR2(98)
 STATUS                                             VARCHAR2(8)
 MAX_APPLIED_MESSAGE_NUMBER                         NUMBER
 NEGATIVE_RULE_SET_NAME                             VARCHAR2(30)
 NEGATIVE_RULE_SET_OWNER                            VARCHAR2(30)
 STATUS_CHANGE_TIME                                 DATE
 ERROR_NUMBER                                       NUMBER
 ERROR_MESSAGE                                      VARCHAR2(4000)
 MESSAGE_DELIVERY_MODE                              VARCHAR2(10)
 PURPOSE                                            VARCHAR2(19)
SQL> select apply_name,queue_name from ggadmin.DBA_APPLY;

APPLY_NAME                     QUEUE_NAME
------------------------------ ------------------------------
OGG$INEXT                      OGG$Q_INEXT
OGG$INREP                      OGGQ$INREP
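Putting the two views together, a small join (a sketch, built only from the columns shown above) maps each Replicat to its inbound server, apply process and queue:

```sql
-- Sketch: map a Replicat to its apply process and queue
SELECT i.replicat_name,
       a.apply_name,
       a.queue_name,
       a.status
FROM   ggadmin.DBA_GOLDENGATE_INBOUND i
JOIN   ggadmin.DBA_APPLY a
  ON   a.apply_name = i.server_name;
```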

Hope this post has given you a clear view of the Integrated Replicat process. I will explain the Coordinated Replicat in my next post.

Cheers 🙂

Oracle GoldenGate – Integrated Capture

Oracle introduced Integrated Extract, or Integrated Capture, in OGG 11.2.x. This capture mode is bound to the Oracle RDBMS, hence the name Integrated Capture. Classic Capture is decoupled from the Oracle Database, whereas Integrated Capture is tightly coupled with it. Instead of the Extract process directly capturing transactions from the database, a LogMining server plays that role: it is attached to the Integrated Capture process in the database, mines the redo or archive logs, and captures the transactions. This mechanism is largely built on Oracle Streams concepts. So, Integrated Capture has the below sub-processes:

  • READER – Reads logfiles and splits in to regions

  • PREPARER – Scans regions of logfiles and pre-filters based on extract parameters

  • BUILDER – Merges prepared records in SCN order

  • CAPTURE – Formats Logical Change Records (LCRs) and passes to Oracle GoldenGate Extract

 

Below is the architecture diagram of LogMining Server.

 

Once the Capture passes the LCRs to the Extract, the Extract process performs the mappings and transformations of the transactions according to the parameters which we had mentioned in the Extract process parameter file and finally writes them to the trail files called Local Trail Files.

So, the work of the Extract process in here is as below,

  • Requests LCRs from Logmining Server

  • Performs Mapping and Transformations

  • Writes the data to the Trail Files

Oracle GoldenGate supports all Oracle data and storage types, but for this the source database must be at least 11.2.0.3 and a specific database bundle patch must be applied. Please refer to the doc below,

Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)

You can use Integrated Capture on an 11.2.0.3 downstream mining database for a source database with a compatibility setting lower than 11.2.0.3, but in this mode SECUREFILE LOBs, XML columns, Transparent Data Encryption, and UDTs have limited support, depending on the database version and compatibility. Here too, you need to apply the database-specific bundle patch as per the doc below,

Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)

Since Integrated Capture is tightly coupled with the Oracle Database, it has many advantages compared with Classic Capture. They are as below,

  • No additional setup is required when configured with Oracle RAC, ASM or TDE.

  • Integrated Capture uses the LogMining server. This gives it the additional advantage of being able to switch between different copies of the redo and archive logs when a log file is missing, for example due to disk corruption.

  • Faster filtering of tables

  • Supports all Oracle data and storage types

  • Efficiently handles point-in-time recovery and RAC integration. Whenever there is an inter-node failure, Integrated Capture handles it efficiently, which is not the case with Classic Capture.

  • It has integrated log management, which lets RMAN automatically retain the archive logs needed by the Integrated Extract process.

  • Oracle introduced the Multitenant architecture (Container Database concepts) in Oracle 12c. Integrated Extract is the only mode that supports capturing data from a multitenant container database. A single Integrated Capture is enough to mine from multiple pluggable databases (PDBs).

  • For an Integrated Extract, no additional scripts need to be executed for DDL capture; this is called Native DDL. For this, the source database must be 11.2.0.4 with compatibility set to 11.2.0.4 or higher. If the source database is earlier than 11.2.0.4, the DDL trigger must be used.

Integrated Capture vs Classic Capture

    • Tightly coupled with Oracle Database.

    • Can be used only with Oracle Database.

    • Supports parallelism: multiple processes capture the data in parallel.

    • No need to set ASMUSER or DBLOGREADER, since the LogMining server mines the redo or archive logs.

    • Whenever a node fails or a node is added in a RAC, no manual intervention is required; it is handled automatically.

    • Works based on the STREAMS concepts.

    • Can be configured only when your OGG is 11.2.0.x or higher.

 

Pre-requisites for Integrated Extract

1. STREAMS_POOL_SIZE

This plays a vital role in the Integrated feature. You need to explicitly configure STREAMS_POOL_SIZE when you configure the Integrated mode of capture, because Integrated Capture consumes its memory from the Streams pool. There are some calculations for sizing the Streams pool. Please check the below link on
how to size this STREAMS_POOL
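As a rough illustration only (the exact sizing should come from the sizing note above; treat these numbers as assumptions to validate for your environment), a commonly cited rule of thumb is the sum of MAX_SGA_SIZE across all Integrated processes plus about 25% headroom:

```sql
-- One Integrated Extract with MAX_SGA_SIZE 100 (MB):
-- 100 * 1 * 1.25 = 125M minimum; rounded up generously here
ALTER SYSTEM SET streams_pool_size = 200M SCOPE=BOTH;
```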

2. RECOMMENDED PATCHES

There are some recommended patches to be applied to the Oracle Database when using the Integrated Capture/Replicat. Please check the below KM Note.,

Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)

Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)

3. ARCHIVE LOG MODE

The database must be in archive log mode when Oracle GoldenGate is configured, since Oracle GoldenGate captures changes from either the redo logs or the archive logs. Please switch the database to archive log mode.
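Checking and, if needed, enabling archive log mode follows the standard SQL*Plus steps (a clean restart of the database is required):

```sql
-- Check the current log mode
SELECT log_mode FROM v$database;

-- If it returns NOARCHIVELOG, enable it
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```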

Now, let’s go to the configuration of Integrated Extract process.

GGSCI (OGGR2-1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

Add the parameters for the Integrated Extract process “INEXT”

GGSCI (OGGR2-1.localdomain) 2> edit params inext

EXTRACT inext
USERID ggadmin, PASSWORD oracle
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 100, PARALLELISM 2)
EXTTRAIL /ogg/dirdat/lt
TABLE source.T1;

Save the file and check the parameters once.

GGSCI (OGGR2-1.localdomain) 3> view params inext

EXTRACT inext
USERID ggadmin, PASSWORD oracle
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 100, PARALLELISM 2)
EXTTRAIL /ogg/dirdat/lt
TABLE source.T1;

TRANLOGOPTIONS [INTEGRATEDPARAMS (parameter[, …])] – This is an optional parameter. The parameters set with this option are passed on to the Oracle database LogMining server.

 

To know more about these options, please refer to the below link,

https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/addl_integrated_params.htm#GIORA233

As per the pre-requisites, I have already set streams_pool_size, and all the other pre-requisites have been taken care of.

SQL> show parameter streams_pool_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 208M

Register the extract process to the database with the below command.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 8> register extract INEXT, database

2018-09-24 11:53:24 INFO OGG-02003 Extract INEXT successfully registered with database at SCN 4043243.

This creates a LogMiner build, and the capture begins from the time REGISTER EXTRACT is issued. A capture is created in the database with the name OGG$CAP_INEXT; the naming format is OGG$CAP_<EXTRACT-NAME>.

Below are the entries in the alert log file during the time of register.

 

ALERT LOG FILE

Logminer Bld: Build started
ALTER SYSTEM SWITCH ALL LOGFILE start (GGDB1)
Thread 1 advanced to log sequence 277 (LGWR switch)
Current log# 1 seq# 277 mem# 0: /vol1/GGDB1/redo01.log
ALTER SYSTEM SWITCH ALL LOGFILE complete (GGDB1)
Mon Sep 24 11:53:15 2018
Archived Log entry 273 added for thread 1 sequence 276 ID 0xd6ddca38 dest 1:
Mon Sep 24 11:53:15 2018
Logminer Bld: Lockdown Complete. DB_TXN_SCN is 0 4043243 LockdownSCN is 4043243
ALTER SYSTEM ARCHIVE LOG
Thread 1 cannot allocate new log, sequence 278
Checkpoint not complete
Current log# 1 seq# 277 mem# 0: /vol1/GGDB1/redo01.log
Thread 1 advanced to log sequence 278 (LGWR switch)
Current log# 2 seq# 278 mem# 0: /vol2/GGDB1/redo02.log
Archived Log entry 274 added for thread 1 sequence 277 ID 0xd6ddca38 dest 1:
Mon Sep 24 11:53:21 2018
Logminer Bld: Done
GoldenGate Capture: OGG$CAP_INEXT CAPTURE Created
CAPTURE OGG$CAP_INEXT: Start SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: First SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: Required Checkpoint SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Captured SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Applied SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Capture Type: LOCAL
CAPTURE OGG$CAP_INEXT: Logminer Id: 3
CAPTURE OGG$CAP_INEXT: Source Database: GGDB1
GoldenGate Apply: OGG$INEXT APPLY Created
APPLY OGG$INEXT: Apply User: GGADMIN
APPLY OGG$INEXT: Apply Tag: 00
Setting XOUT_CLIENT_EXISTS to Y for Capture: OGG$CAP_INEXT
Mon Sep 24 11:53:23 2018
GoldenGate CAPTURE CP01 for OGG$CAP_INEXT started with pid=43, OS id=3064
CAPTURE OGG$CAP_INEXT: Start SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: First SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: Required Checkpoint SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Captured SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Applied SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Capture Type: LOCAL
CAPTURE OGG$CAP_INEXT: Logminer Id: 3
CAPTURE OGG$CAP_INEXT: Source Database: GGDB1
First applied SCN of the GoldenGate path from capture (OGG$CAP_INEXT) to propagation () to apply (OGG$INEXT) is set to SCN: 4043243 (0x3db1eb.00000000)
GoldenGate CAPTURE CP01 for OGG$CAP_INEXT with pid=43, OS id=3064 is in combined capture and apply mode.
Capture OGG$CAP_INEXT is handling 1 applies.
-- capture is running in apply-state checkpoint mode.
Starting persistent Logminer Session with sid = 3 for GoldenGate Capture OGG$CAP_INEXT
LOGMINER: Parameters summary for session# = 3
LOGMINER: Number of processes = 0, Transaction Chunk Size = 1
LOGMINER: Memory Size = 60M, Checkpoint interval = 1000M
LOGMINER: SpillScn 0, ResetLogScn 925702
LOGMINER: summary for session# = 3
LOGMINER: StartScn: 4044055 (0x0000.003db517)
LOGMINER: EndScn: 0
LOGMINER: HighConsumedScn: 4043243 (0x0000.003db1eb)
LOGMINER: session_flag: 0xb0
LOGMINER: DDL CKPT is on.
LOGMINER: Read buffers: 64
LOGMINER: Memory LWM: limit 10M, LWM 50M, 83%
LOGMINER: Memory Release Limit: 1M
LOGMINER: LowCkptScn: 0 (0x0000.00000000)
LOGMINER: HighCkptScn: 0 (0x0000.00000000)
LOGMINER: SkipScn: 4043243 (0x0000.003db1eb)
Mon Sep 24 11:53:34 2018
LOGMINER: Begin mining logfile during dictionary load for session 3 thread 1 sequence 277, /arch/1_277_874177400.dbf
Mon Sep 24 11:54:04 2018
Archived Log entry 277 added for thread 1 sequence 280 ID 0xd6ddca38 dest 1:
Mon Sep 24 11:54:14 2018
LOGMINER: Gathering statistics on logminer dictionary. (102)
LOGMINER: End mining logfiles during dictionary load for session 3

Now, add the Integrated Extract process. The command for adding a Classic and an Integrated Extract is the same; the only difference is the keyword “INTEGRATED”.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 4> add extract inext INTEGRATED tranlog begin now
EXTRACT (Integrated) added.

Add the Local trail file to the Integrated Extract process.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 5> add exttrail /ogg/dirdat/lt extract inext
EXTTRAIL added.
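To confirm the trail is registered with the Extract, it can be inspected with the INFO EXTTRAIL command (shown here with a generic prompt as a sketch; the output lists the owning Extract, the current seqno/RBA and the trail file size):

```
GGSCI> info exttrail /ogg/dirdat/lt
```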

You can see that an Integrated Extract process has been created.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     INEXT       00:00:00      00:02:46

Start the Integrated Process.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 7> start inext

Sending START request to MANAGER ...
EXTRACT INEXT starting

Now the process is in RUNNING state.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 8> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     INEXT       00:00:04      00:00:03

Once you start the Integrated Capture, the capture process in the database starts mining the redo or archive log files. Below are the entries we can see in the alert log file,

ALERT LOG FILE

Mon Sep 24 12:13:42 2018
CAPTURE OGG$CAP_INEXT: Start SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: First SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: Required Checkpoint SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Captured SCN: 4043243 (0x3db1eb.00000000)
CAPTURE OGG$CAP_INEXT: Applied SCN: 0 (0x0000.00000000)
CAPTURE OGG$CAP_INEXT: Capture Type: LOCAL
CAPTURE OGG$CAP_INEXT: Logminer Id: 3
CAPTURE OGG$CAP_INEXT: Source Database: GGDB1
CAPTURE OGG$CAP_INEXT: Bytes of Redo Mined: 18508
CAPTURE OGG$CAP_INEXT: SGA Used: 4843080
CAPTURE OGG$CAP_INEXT: SGA Allocated: 63460120
GoldenGate CAPTURE CP01 for OGG$CAP_INEXT with pid=43, OS id=3064 stopped
Mon Sep 24 12:13:42 2018
GoldenGate Capture:OGG$CAP_INEXT cleared _SKIP_LCR_FOR_ASSERT
GoldenGate Capture: OGG$CAP_INEXT
setting IGNORE_UNSUPPORTED_TABLE for table (*)
GoldenGate Capture:OGG$CAP_INEXT setting _FILTER_PARTIAL_ROLLBACK:
Setting XOUT_CLIENT_EXISTS to Y for Capture: OGG$CAP_INEXT
Mon Sep 24 12:13:43 2018
Starting persistent Logminer Session with sid = 3 for GoldenGate Capture OGG$CAP_INEXT (OS id=4100).
LOGMINER: Parameters summary for session# = 3
LOGMINER: Number of processes = 4, Transaction Chunk Size = 1
LOGMINER: Memory Size = 99M, Checkpoint interval = 1000M
LOGMINER: SpillScn 0, ResetLogScn 925702
LOGMINER: summary for session# = 3
LOGMINER: StartScn: 4043243 (0x0000.003db1eb)
LOGMINER: EndScn: 0
LOGMINER: HighConsumedScn: 4043243 (0x0000.003db1eb)
LOGMINER: session_flag: 0x4f0
LOGMINER: DDL CKPT is on.
LOGMINER: Read buffers: 64
LOGMINER: Memory LWM: limit 10M, LWM 89M, 89%
LOGMINER: Memory Release Limit: 1M
LOGMINER: LowCkptScn: 0 (0x0000.00000000)
LOGMINER: HighCkptScn: 0 (0x0000.00000000)
LOGMINER: SkipScn: 4043243 (0x0000.003db1eb)
Mon Sep 24 12:13:43 2018
LOGMINER: session#=3 (OGG$CAP_INEXT), reader MS00 pid=45 OS id=4108 sid=50 started
Mon Sep 24 12:13:43 2018
LOGMINER: session#=3 (OGG$CAP_INEXT), builder MS01 pid=46 OS id=4111 sid=78 started
Mon Sep 24 12:13:43 2018
LOGMINER: session#=3 (OGG$CAP_INEXT), preparer MS02 pid=47 OS id=4113 sid=71 started
Mon Sep 24 12:13:43 2018
LOGMINER: session#=3 (OGG$CAP_INEXT), preparer MS03 pid=48 OS id=4115 sid=51 started
OGG Capture client successfully attached to GoldenGate Capture OGG$CAP_INEXT to receive uncommitted changes with pid=43 OS id=4100.
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 277, /arch/1_277_874177400.dbf
LOGMINER: End mining logfile for session 3 thread 1 sequence 277, /arch/1_277_874177400.dbf
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 278, /arch/1_278_874177400.dbf
LOGMINER: End mining logfile for session 3 thread 1 sequence 278, /arch/1_278_874177400.dbf
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 279, /arch/1_279_874177400.dbf
LOGMINER: End mining logfile for session 3 thread 1 sequence 279, /arch/1_279_874177400.dbf
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 280, /vol1/GGDB1/redo01.log
LOGMINER: End mining logfile for session 3 thread 1 sequence 280, /vol1/GGDB1/redo01.log
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 281, /vol2/GGDB1/redo02.log
LOGMINER: End mining logfile for session 3 thread 1 sequence 281, /vol2/GGDB1/redo02.log
LOGMINER: Begin mining logfile for session 3 thread 1 sequence 282, /vol3/GGDB1/redo03.log

How do you check whether an Extract process is Classic or Integrated? It is very simple: for an Integrated Capture, the ‘Log Read Checkpoint’ line shows “Oracle Integrated Redo Logs”.

For a Classic Capture it shows “Oracle Redo Logs”.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 9> info extract INEXT

EXTRACT    INEXT     Last Started 2018-09-24 12:13   Status RUNNING
Checkpoint Lag       00:00:09 (updated 00:00:02 ago)
Process ID           3792
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2018-09-24 12:14:27
                     SCN 0.4066962 (4066962)
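The SCN in this output is printed in wrap.base form; the number in parentheses is wrap * 2^32 + base. With a wrap of 0 the two values are equal, which a trivial query confirms:

```sql
-- SCN 0.4066962  =>  0 * 4294967296 + 4066962 = 4066962
SELECT 0 * 4294967296 + 4066962 AS scn FROM dual;
```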

Below is the detailed information of the Integrated Capture.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 10> info extract INEXT, detail

EXTRACT    INEXT     Last Started 2018-09-24 12:13   Status RUNNING
Checkpoint Lag       00:00:04 (updated 00:00:02 ago)
Process ID           3792
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2018-09-24 12:14:42
                     SCN 0.4066971 (4066971)

Target Extract Trails:

  Trail Name                        Seqno      RBA     Max MB  Trail Type

  /ogg/dirdat/lt                        0     1401        500  EXTTRAIL

Integrated Extract outbound server first scn: 0.4043243 (4043243)

Integrated Extract outbound server filtering start scn: 0.4043243 (4043243)

  Extract Source                          Begin             End

  Not Available                           2018-09-24 12:10  2018-09-24 12:14
  Not Available                           * Initialized *   2018-09-24 12:10
  Not Available                           * Initialized *   2018-09-24 12:10
  Not Available                           * Initialized *   2018-09-24 12:10


Current directory    /ogg

Report file          /ogg/dirrpt/INEXT.rpt
Parameter file       /ogg/dirprm/inext.prm
Checkpoint file      /ogg/dirchk/INEXT.cpe
Process file         /ogg/dirpcs/INEXT.pce
Error log            /ogg/ggserr.log

Below are the entries in the report file of the Integrated Extract INEXT

***********************************************************************
                 Oracle GoldenGate Capture for Oracle
    Version 12.3.0.1.0 OGGCORE_12.3.0.1.0_PLATFORMS_170721.0154_FBO
   Linux, x64, 64bit (optimized), Oracle 11g on Jul 21 2017 20:34:23

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2018-09-29 11:16:38
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Wed Feb 2 18:40:23 EST 2011, Release 2.6.32-100.28.5.el6.x86_64
Node: OGGR2-1.localdomain
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 2543

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2018-09-29 11:16:38  INFO    OGG-03059  Operating system character set identified as UTF-8.

2018-09-29 11:16:38  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.

2018-09-29 11:16:40  INFO    OGG-01360  EXTRACT is running in Integrated mode.
EXTRACT inext
USERID ggadmin, PASSWORD ***

2018-09-29 11:16:45  INFO    OGG-03522  Setting session time zone to source database time zone 'GMT'.
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 100, PARALLELISM 2)
EXTTRAIL /ogg/dirdat/lt
LOGALLSUPCOLS

2018-09-29 11:16:45  WARNING OGG-04033   LOGALLSUPCOLS has set the NOCOMPRESSDELETES and GETUPDATEBEFORES parameters on.
TABLE source.T1;

2018-09-29 11:16:45  INFO    OGG-01635  BOUNDED RECOVERY: reset to initial or altered checkpoint.

2018-09-29 11:16:45  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /ogg/BR/INEXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR      = /ogg/

2018-09-29 11:16:46  INFO    OGG-01851  filecaching started: thread ID: 139768570574608.

2018-09-29 11:16:46  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /ogg/dirtmp.

Database Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

Database Language and Character Set:
NLS_LANGUAGE     = "AMERICAN"
NLS_TERRITORY    = "AMERICA"
NLS_CHARACTERSET = "WE8MSWIN1252"

2018-09-29 11:16:46  INFO    OGG-03522  Setting session time zone to source database time zone 'GMT'.

2018-09-29 11:16:51  INFO    OGG-06604  Database GGDB1 CPU info: CPU Count 1, CPU Core Count 0, CPU Socket Count 0.

2018-09-29 11:17:02  INFO    OGG-02248  Logmining server DDL filtering enabled.

2018-09-29 11:17:04  INFO    OGG-02068  Integrated capture successfully attached to logmining server OGG$CAP_INEXT using OGGCapture API.

2018-09-29 11:17:04  INFO    OGG-02089  Source redo compatibility version is: 11.2.0.4.0.

2018-09-29 11:17:04  INFO    OGG-02086  Integrated Dictionary will be used.

2018-09-29 11:17:08  WARNING OGG-02901  Replication of UDT and ANYDATA from redo logs is not supported with the Oracle compatible parameter setting. Using fetch instead.

2018-09-29 11:17:08  INFO    OGG-01055  Recovery initialization completed for target file /ogg/dirdat/lt000000000, at RBA 1401.

2018-09-29 11:17:08  INFO    OGG-01478  Output file /ogg/dirdat/lt is using format RELEASE 12.3.

2018-09-29 11:17:09  INFO    OGG-01026  Rolling over remote file /ogg/dirdat/lt000000000.

2018-09-29 11:17:09  INFO    OGG-01053  Recovery completed for target file /ogg/dirdat/lt000000001, at RBA 1461.

2018-09-29 11:17:09  INFO    OGG-01057  Recovery completed for all targets.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2018-09-29 11:17:10  INFO    OGG-03522  Setting session time zone to source database time zone 'GMT'.

As I already mentioned, the Integrated Capture is tightly coupled with the Oracle Database, so there are some built-in database views to monitor the progress of the Integrated Capture at the database level. One of the most frequently used is DBA_CAPTURE.

SQL> desc DBA_CAPTURE
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CAPTURE_NAME                              NOT NULL VARCHAR2(30)
 QUEUE_NAME                                NOT NULL VARCHAR2(30)
 QUEUE_OWNER                               NOT NULL VARCHAR2(30)
 RULE_SET_NAME                                      VARCHAR2(30)
 RULE_SET_OWNER                                     VARCHAR2(30)
 CAPTURE_USER                                       VARCHAR2(30)
 START_SCN                                          NUMBER
 STATUS                                             VARCHAR2(8)
 CAPTURED_SCN                                       NUMBER
 APPLIED_SCN                                        NUMBER
 USE_DATABASE_LINK                                  VARCHAR2(3)
 FIRST_SCN                                          NUMBER
 SOURCE_DATABASE                                    VARCHAR2(128)
 SOURCE_DBID                                        NUMBER
 SOURCE_RESETLOGS_SCN                               NUMBER
 SOURCE_RESETLOGS_TIME                              NUMBER
 LOGMINER_ID                                        NUMBER
 NEGATIVE_RULE_SET_NAME                             VARCHAR2(30)
 NEGATIVE_RULE_SET_OWNER                            VARCHAR2(30)
 MAX_CHECKPOINT_SCN                                 NUMBER
 REQUIRED_CHECKPOINT_SCN                            NUMBER
 LOGFILE_ASSIGNMENT                                 VARCHAR2(8)
 STATUS_CHANGE_TIME                                 DATE
 ERROR_NUMBER                                       NUMBER
 ERROR_MESSAGE                                      VARCHAR2(4000)
 VERSION                                            VARCHAR2(64)
 CAPTURE_TYPE                                       VARCHAR2(10)
 LAST_ENQUEUED_SCN                                  NUMBER
 CHECKPOINT_RETENTION_TIME                          NUMBER
 START_TIME                                         TIMESTAMP(6)
 PURPOSE                                            VARCHAR2(19)
 CLIENT_NAME                                        VARCHAR2(4000)
 CLIENT_STATUS                                      VARCHAR2(8)
 OLDEST_SCN                                         NUMBER
 FILTERED_SCN                                       NUMBER

Query this view to check the status and progress of the Integrated Capture as below

SQL> select capture_name, start_scn, first_scn, logminer_id, source_database from dba_capture;

CAPTURE_NAME               START_SCN  FIRST_SCN LOGMINER_ID SOURCE_DATABASE
------------------------- ---------- ---------- ----------- -------------------------
OGG$CAP_INEXT                4043243    4043243           3 GGDB1
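A rough progress check (a sketch; the SCN gap is only a coarse indicator, not a time-based lag) compares the capture's applied SCN with the database's current SCN:

```sql
-- Sketch: how far behind the current SCN is the capture?
SELECT c.capture_name,
       c.status,
       c.applied_scn,
       (SELECT current_scn FROM v$database) - c.applied_scn AS scn_gap
FROM   dba_capture c;
```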

A detailed explanation of DBA_CAPTURE is available at the link below,

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/ALL_CAPTURE.html#GUID-2B5CED0A-7CCB-479E-BFF8-028BD8BFBEFC

Hope you enjoyed the post. I will cover the Integrated Replicat process in my next post.

Cheers 🙂

Approach to Troubleshoot an Abended OGG Process

Oracle GoldenGate is a heterogeneous replication tool. It is very easy to install and configure. The real challenge comes when the processes abend. Sometimes it is easy to detect the problem, but sometimes we really do not know how to proceed or how to approach troubleshooting the issue.

This article explains,

1. Levels of failure in Oracle GoldenGate.
2. The approach to troubleshooting Oracle GoldenGate.
3. How to identify the issue.
4. Which files to look at when troubleshooting Oracle GoldenGate.
5. Tools to monitor and troubleshoot Oracle GoldenGate.

1. LEVELS OF FAILURE

Oracle GoldenGate can abend or fail at different levels, and there can be many reasons for process failures. The different levels at which Oracle GoldenGate processes fail or abend are:

1. Database Level
2. Network Level
3. Storage Level
4. User Level

a. DATABASE LEVEL OF FAILURE

Oracle GoldenGate fails if there are issues at the database level. Some of these issues are listed below:

Tablespace filled
Redo log corruption
Archive log destination filled
No Primary Key or Unique Index on the tables
Archive log Mode not enabled
Reset Log performed
Memory Problem with Streams_Pool_Size 
Database Hung
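A few of these database-level causes can be checked with simple queries (sketches only; the schema name SOURCE is taken from the demo earlier in this post, and the key check ignores unique indexes that have no constraint defined):

```sql
-- Archive log mode enabled?
SELECT log_mode FROM v$database;

-- Free space left per tablespace (MB)
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name;

-- Tables without a primary key or unique constraint
SELECT owner, table_name
FROM   dba_tables t
WHERE  owner = 'SOURCE'
AND    NOT EXISTS (SELECT 1
                   FROM   dba_constraints c
                   WHERE  c.owner = t.owner
                   AND    c.table_name = t.table_name
                   AND    c.constraint_type IN ('P', 'U'));
```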

b. NETWORK LEVEL OF FAILURE

The network plays a vital role in Oracle GoldenGate replication. For every command you execute at the GGSCI prompt, the Manager process opens a port, and there should be a reliable, fast network between the source and target sides. Some of the network-level failures are listed below:

Network Fails
Network slow
Ports Unavailability
Firewall Enabled

c. STORAGE LEVEL OF FAILURE

There should be sufficient storage space available for Oracle GoldenGate to keep the trail files. At the Oracle Database level too, there should be sufficient space to retain the archive log files and for the tablespaces. Proper privileges should be granted on the file system so that the Oracle GoldenGate processes can create the trail files in that location.

File System fills
File System corruption
No Proper privileges given to the File System
Connection Problem between Database Server and Storage
No Free Disks Available in Storage

d. USER LEVEL OF FAILURE

Of course, we users make mistakes too. Some of the user-level failures are below:

Mistakenly Delete the GoldenGate Admin User at Database level.
Manually Performing Operations like Insert, Delete and Update at Target Side.
Manually deleting / removing the Trail Files either from Source server or Target server.
Forcefully Stopping any Oracle GoldenGate Processes like Manager, Extract, Pump, Collector or Replicat.
Killing the Oracle GoldenGate Processes at OS level.
Performing an ETROLLOVER at Extract / Pump / Replicat Processes.

So we have seen the different levels of failure in Oracle GoldenGate. How should you proceed if you face these failures in your day-to-day work? What is the approach to identify and solve the issue?

2. HOW TO APPROACH?

Below are the steps for approaching the problem. If the environment is a known one, you can skip some of the steps.

Learn and Understand the Environment
Operating System Provider and Version
Database Provider and Database Version
Is it a Cluster, Active / Passive?
Oracle GoldenGate UniDirectional or Bi-Directional
If Oracle, then is it a Single Instance or RAC – Real Application Clusters
Is it a Homogeneous or Heterogeneous Environment Replication
Network Flow, Ports Used and Firewalls configured
Components used in Oracle GoldenGate like Extract, Pump, Replicat processes and Trails files.

After studying the environment, check whether the processes are up and running. INFO ALL is the command to check the status of the processes. A process can be in one of the following statuses.
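A quick status check might look like this (the group names and exact output layout here are illustrative, not copied from a real system):

```
GGSCI> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:00      00:00:07
REPLICAT    ABENDED     REP1        00:00:05      01:12:33
```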

RUNNING
The Process has started and running normally.

STOPPED
The Process has stopped either normally (Controlled Manner) or due to an error.

STARTING
The Process is starting.

ABENDED
The Process has been stopped in an uncontrolled manner. Abnormal End is known as ABEND.

Of the above process statuses, RUNNING, STOPPED and ABENDED are common. But what is STARTING? What actually happens when an Oracle GoldenGate process is in this state?

Whenever you start an abended Extract process, it takes some time to get started, because the process is recovering from its last abend point. To recover its processing state, the Extract process searches back through the online redo log files or archive log files to find the first log record of the transactions that were open when it crashed. The further back the Extract process has to search, the longer it takes to recover and start. So, the startup time depends on how far back in the redo logs or archive logs the oldest open transaction is.

To check the status of the Extract process, and also to check if it is recovering properly, issue the following command.
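Assuming an extract named EXT1, the usual check is SEND with the STATUS option; the "in recovery" line in the output below is illustrative of what to look for:

```
GGSCI> SEND EXTRACT EXT1, STATUS

Sending STATUS request to EXTRACT EXT1 ...
EXTRACT EXT1 (PID 3106)
  Current status: In recovery[1]: Processing data
```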

THREE BASIC FILES

There are many files which need to be checked whenever you face an issue in Oracle GoldenGate. Oracle GoldenGate logs its activity in the following files:

1. Error Log File – ggserr.log
2. Report File
3. Discard File

4. Trace File
5. Alert Log File
6. DDL Trace File

The first three are the basic, or major, files to look into whenever there are problems in Oracle GoldenGate. Below is the explanation of these three files.

What is Error Log file – ggserr.log?

This file is created during the installation of Oracle GoldenGate, in the Oracle GoldenGate home directory, with the name ggserr.log. For each installation of Oracle GoldenGate, a ggserr.log file is created in the respective Oracle GoldenGate directory. This file is updated by all the Oracle GoldenGate processes, and the following information is logged in it:

 Start and Stop of the Oracle GoldenGate Processes.
 Processing Information like Bounded Recovery operations.
 Error Messages.
 Informational messages like normal operations happening in Oracle GoldenGate.
 WARNING Messages like Long Running Transactions.
 Commands executed in GGSCI Prompt.

The format in which the Oracle GoldenGate processes log information into the ggserr.log file is shown below.
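Each entry follows a fixed pattern: timestamp, severity, message code, originating component with its parameter file, and message text. A sketch of the layout (illustrative, not verbatim output):

```
YYYY-MM-DD HH:MI:SS  SEVERITY  OGG-nnnnn  component, parameter_file:  message text.
```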

You can view this file from the GGSCI prompt itself by using the command VIEW GGSEVT. But it is usually better to view it using an OS tool, as this file can grow very large. Below is an example.
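Since ggserr.log is plain text, OS tools are the practical way to dig through it. The entries below are illustrative (the message codes and text are assumptions, not verbatim GoldenGate output); the grep and tail show the typical filtering:

```shell
# Illustrative ggserr.log entries, written to a temp file for the demo.
cat > /tmp/ggserr_sample.log <<'EOF'
2017-05-25 18:51:51  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT EXT1 starting.
2017-05-25 18:52:03  ERROR   OGG-00664  Oracle GoldenGate Capture for Oracle, ext1.prm:  OCI Error ORA-01403.
EOF
grep 'ERROR' /tmp/ggserr_sample.log    # show only the error entries
tail -n 1 /tmp/ggserr_sample.log       # show the most recent entry
```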

So with the ggserr.log file, you can basically identify the below:

 What is the error?
 When did the error occur?
 How frequently did it occur?
 What operations were performed before the error occurred?

What is Report File?

A Report file is a process-specific log file. Each process has its own report file, which is created during the instantiation of the process. This file is stored in the dirrpt directory under the Oracle GoldenGate home, and its name has the format <process name>.rpt. The file is automatically renamed on the next instantiation of the process. Once a process starts, all the log entries for that process are written to its respective report file.

Let’s consider a process called EXT; the report file created during instantiation of this process is called EXT.rpt. If this process is stopped and started again, the existing file EXT.rpt is automatically renamed to EXT0.rpt and a new file is generated with the name EXT.rpt. This occurs recursively until the sequence value reaches 9. Once EXT9.rpt exists, on the next rollover EXT9.rpt is removed, EXT8.rpt is renamed to EXT9.rpt, and so on down the chain. So, a report file with a lower sequence value is always more recent than one with a higher sequence value.

The REPORTROLLOVER parameter is used to manually or forcefully create a new report file for a process. To view the current report of a process, the below command is used.

To get the runtime statistics report of a process, use the below command,
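Assuming a process named EXT1, the two commands referred to above are:

```
GGSCI> VIEW REPORT EXT1
GGSCI> STATS EXTRACT EXT1
```

VIEW REPORT shows the current report file of the process; STATS prints the runtime operation counts per table.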

The below information can be seen in the report file of a particular process.,

	Oracle GoldenGate Product Version and Release
	Operating System Version, Release, Machine Type, Hostname and Ulimit settings of the respective process
	Memory utilized by the respective process
	Configured parameters of the respective Oracle GoldenGate process
	Database Provider, Version and Release
	Trail file information
	Mapping of tables
	Informational messages with respect to the particular process
	Warning messages with respect to the particular process
	Error messages with respect to the particular process
	All DDL operations performed
	All the discarded errors and ignored operations
	Crash dumps
	Any commands which were performed on that particular process

Below is an example of the Report file, which I have split into many parts so that you get a clear understanding.

1. Oracle GoldenGate Product Version and Release. Operating System Version, Release, Machine Type, Hostname and Ulimit settings of the respective process.

2. Configured Parameters of the respective Oracle GoldenGate Process

3. Database Provider, Version, Release and Trail File information.

4. Mapping of tables and Informational messages with respect to the Process.

5. Crash dump and Error messages of the respective process.

The above examples clearly show the contents of a Report file. So with the help of a Report file, the following can be known:

 In which trail file the process got abended.
 Whether the trail file is moving forward.
 Whether the process keeps failing on the same trail file.
 What operations were performed before the process abended.
 Whether there are any errors in the parameter configuration.
 Whether the MAP statements have the correct table names.

What is Discard File?

A Discard file is a log file for logging the failed operations of the Oracle GoldenGate processes. It is mainly used for data errors. In Oracle GoldenGate 11g, this file is not created by default; we have to specify the DISCARDFILE parameter to enable discard file logging. But from Oracle GoldenGate 12c onwards, this file is generated by default during the instantiation of the process.

The naming format of the discard file is <process name>.dsc, but the file can be named manually when enabling it. The extension of this file is .dsc, and it is located in the dirrpt directory under the Oracle GoldenGate home.

The PURGE and APPEND keywords are used in the process parameter files to manually maintain the discard file. Similar to the report file, the discard file can also be rolled over, using the DISCARDFILEROLLOVER parameter. The syntax of the DISCARDFILE parameter is as below.
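Pieced together from the options described in this section, the DISCARDFILE syntax looks roughly like this (square brackets denote optional clauses):

```
DISCARDFILE file_name
[, APPEND | PURGE]
[, MAXBYTES n | MEGABYTES n]
```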

file_name
The relative or fully qualified name of the discard file, including the actual file name.

APPEND
Adds new content to existing content if the file already exists.

PURGE
Purges the file before writing new content.

MAXBYTES n | MEGABYTES n
The file size in bytes or megabytes. For a size in bytes, the valid range is 1 to 2147483646, with a default of 50000000. For a size in megabytes, the valid range is 1 to 2147, with a default of 50 MB. If the specified size is exceeded, the process abends.

NODISCARDFILE
Prevents the generation of a discard file; when this parameter is used, no discard file is created.

The below is the example for the Discard file parameter used in the Replicat process parameter file.,
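As an illustrative sketch (the process, schema and file names here are assumptions), a Replicat parameter file with a discard file entry might look like:

```
REPLICAT REP1
USERID ggadmin, PASSWORD oracle
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 100
MAP source.t1, TARGET target.t1;
```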

The discard file is mainly used on the target side. Each and every Replicat process should have its own discard file; treat this as mandatory.

Below is an example showing the contents of the discard file. The Replicat process got abended due to the error OCI Error ORA-01403: no data found. The discard file is as below.

So, we have seen the three basic and important files where the Oracle GoldenGate processes log information. There is also a tool which is used to troubleshoot Oracle GoldenGate during data corruption or trail file corruption. It is mainly used when data errors occur in Oracle GoldenGate.

The tool is called LOGDUMP. It is a very useful tool which allows a user to navigate through a trail file and compare the information in the trail file with the data extracted and replicated by the processes. The below can be seen in a trail file using the LOGDUMP utility:

 Transaction information.
 Operation type and the time when the record was written.
 Source object name.
 Image type, whether it is a before image or an after image.
 Column information with data and sequence information.
 Record length and record data in ASCII format.
 RBA information.

The below is the example of the contents of the Trail File.,

Some of the Logdump commands with the description are below., To get in to the logdump prompt, just run the logdump program from the Oracle GoldenGate Home directory.

Logdump 1> GHDR ON – To view the Record Header.

Logdump 2> DETAIL ON – To view the column information.

Logdump 3> DETAIL DATA – To view the Hex and ASCII values of the Column.

Logdump 4> USERTOKEN ON – To view user-defined information specified in the TABLE or MAP statements. This information is stored in the trail file.

Logdump 5> GGSTOKEN ON – To view the Oracle GoldenGate generated tokens. These tokens contain the transaction ID, row ID, etc.

Logdump 6> RECLEN length – To manually control the length of the record.

Logdump 7> OPEN file_name – To open a trail file.

Logdump 8> NEXT – To move to the next file record. In short, you can use the letter N.

Logdump 9> POS rba – To position at a particular RBA.

Logdump 10> POS FIRST – To go to the position of the first record in the file.

Logdump 11> POS 0 – An alternative to POS FIRST. Either can be used.

Logdump 12> SCANFORENDTRANS – To go to the end of the transaction.

Logdump 13> HELP – To get the online help.

Logdump 14> EXIT – To exit from the Logdump prompt. You can also use QUIT.
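Putting a few of these together, a typical session (the trail file path is assumed) might run:

```
[oracle@OGGR2-1 ogg]$ ./logdump

Logdump 1> GHDR ON
Logdump 2> DETAIL DATA
Logdump 3> OPEN ./dirdat/et000000
Logdump 4> POS 0
Logdump 5> NEXT
```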

Hope you got a clear view on how to approach an Oracle GoldenGate problem, and also on how to find who stopped an Oracle GoldenGate process and the reason behind it.

Oracle GoldenGate Extract Trail Formats FORMATASCII | FORMATSQL | FORMATXML

By default, Oracle GoldenGate writes trail files in its own canonical format. But other than this default format, we can make the Extract process write the trail files in the below formats, which are offered by Oracle GoldenGate itself:

ASCII
SQL
XML

In this article, we will see how to make the Extract process write its files in the above formats.

ASCII Format

To make the Extract process write data in ASCII format, use the below parameter in the Extract process parameter file:

FORMATASCII

You can format output that is compatible with most database load utilities and other programs that require ASCII input. This parameter is required by the file-to-database-utility initial load method.

A FORMATASCII statement affects all extract files or trails that are listed after it in the parameter file, so the relative order of the statements in the parameter file is important. If listed after a file or trail specification, FORMATASCII will not take effect on that file or trail.

There are certain limitations that need to be noted when this parameter is used.

Do not use FORMATASCII

if the data will be processed by the Replicat process. Replicat expects the default canonical format.
if FORMATSQL or FORMATXML is being used.
if the data contains LOBs.
if Extract is connected to a multi-byte DB2 subsystem.
if Oracle GoldenGate DDL support is active.
in pass-through mode in a data pump, because there is no table metadata available to generate trail output in the specified form.

Now, let us see how to use the FORMATASCII parameter and how the output is generated.

I have a table as below in the source schema; it contains no rows.

SQL> desc source.t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMP_ID                                    NOT NULL NUMBER
 EMP_NAME                                           VARCHAR2(20)
 SALARY                                             NUMBER(6)

SQL> select * from source.t1;

no rows selected

Add the parameter to the Extract process parameter file as below,

GGSCI (OGGR2-1.localdomain) 9> view params ext1

EXTRACT EXT1
USERID ggadmin, PASSWORD oracle
FORMATASCII
EXTTRAIL /vol3/ogg/dirdat/et
TABLE source.*;

Let us start the extract now.

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 52> info ext1

EXTRACT    EXT1      Last Started 2017-05-25 18:51   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           3106
Log Read Checkpoint  Oracle Redo Logs
                     2017-05-25 18:51:51  Seqno 255, RBA 4478464
                     SCN 0.3646969 (3646969)

Now the Extract EXT1 is started and it is in the RUNNING state. Let us insert a record into the table SOURCE.T1 and then update it, and check how the data is written to the extract file (trail file).

SQL> insert into source.t1 values(1,'VEERA',1000);

1 row created.

SQL> update source.t1 set salary=2000 where emp_id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from source.t1;

    EMP_ID EMP_NAME                 SALARY
---------- -------------------- ----------
         1 VEERA                      2000

Let us open the extract file et000000 and check how the record is written.

[oracle@OGGR2-1 dirdat]$ cat et000000
B,2017-05-28:16:05:37.000000,1495967737,256,
I,A,SOURCE.T1,EMP_ID,'1',EMP_NAME,'VEERA',SALARY,1000
V,B,SOURCE.T1,EMP_ID,'1',SALARY,1000
V,A,SOURCE.T1,EMP_ID,'1',SALARY,2000
C,
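Because FORMATASCII output is plain delimited text, it can be post-processed with standard OS tools. A minimal sketch (the sample lines reproduce the output above; the awk tally of operation-type indicators is purely illustrative):

```shell
# Reproduce the FORMATASCII excerpt from above in a temp file.
cat > /tmp/et_sample.txt <<'EOF'
B,2017-05-28:16:05:37.000000,1495967737,256,
I,A,SOURCE.T1,EMP_ID,'1',EMP_NAME,'VEERA',SALARY,1000
V,B,SOURCE.T1,EMP_ID,'1',SALARY,1000
V,A,SOURCE.T1,EMP_ID,'1',SALARY,2000
C,
EOF
# Count records by their leading indicator (B=begin, I=insert,
# V=compressed update, C=commit).
awk -F',' '{count[$1]++} END {for (t in count) print t, count[t]}' /tmp/et_sample.txt
```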

In the above output, both before and after images are captured by default. So by default the output has the below characteristics:

1. Database object names, such as table and column names, and CHAR and VARCHAR data are written in the default character set of the operating system.
2. Without any parameter options, FORMATASCII generates records in the following format.

Line 1 contains the following tab-delimited begin-transaction record:

The begin-transaction indicator, B.
The timestamp at which the transaction was committed.
The sequence number of the transaction log in which the commit was found.
The relative byte address (RBA) of the commit record within the transaction log.
A newline character (starts a new line).

Each subsequent line contains a tab-delimited operation record:

The operation-type indicator: I, D, U, V (insert, delete, update, compressed update).
A before or after image indicator: B or A.
The table name in the character set of the operating system.
A column name, column value, column name, column value, and so forth.
A newline character.

The last line contains the following tab-delimited commit record:

The commit character, C.
A newline character.

There are a lot more options which can be used with the FORMATASCII parameter. For complete details of these options, please check the “Oracle GoldenGate Reference Guide”.

SQL Format

If you want the output of the extract file in SQL format, then use the below parameter in the Extract process parameter file:

FORMATSQL

This parameter also has some limitations, as below.

Do not use FORMATSQL

if the data will be processed by the Replicat process. Replicat expects the default canonical format.
if FORMATASCII or FORMATXML is being used.
if Oracle GoldenGate DDL support is active.
in pass-through mode in a data pump because there is no table metadata available to generate trail output in the specified form.

Now, I have edited the parameter file of the Extract process EXT1 as below,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 67> view params ext1

EXTRACT EXT1
USERID ggadmin, PASSWORD oracle
FORMATSQL
EXTTRAIL /vol3/ogg/dirdat/et
TABLE source.*;

Now the Extract EXT1 is started and it is in the RUNNING state. Let us insert a record into the table SOURCE.T1 and then update it, and check how the data is written to the extract file (trail file).

SQL> select * from source.t1;

no rows selected

SQL> insert into source.t1 values(1,'VEERA',1000);

1 row created.

SQL> update source.t1 set salary=2000 where emp_id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from source.t1;

    EMP_ID EMP_NAME                 SALARY
---------- -------------------- ----------
         1 VEERA                      2000

Let us open the extract file et000000 and check how the record is written.

[oracle@OGGR2-1 dirdat]$ cat et000000
B,2017-05-28:16:12:43.000000,1495968163,259,
INSERT INTO SOURCE.T1 (EMP_ID,EMP_NAME,SALARY) VALUES ('1','VEERA',1000);
UPDATE SOURCE.T1 SET SALARY=1000 WHERE EMP_ID='1';
UPDATE SOURCE.T1 SET SALARY=2000 WHERE EMP_ID='1';
C,
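Because the FORMATSQL output is ordinary SQL text wrapped in B (begin) and C (commit) records, the SQL statements can be pulled out with standard OS tools, for example to replay them via SQL*Plus. A sketch (sample lines adapted from the output above; the filtering approach is illustrative):

```shell
# Reproduce a FORMATSQL excerpt in a temp file.
cat > /tmp/et_sql_sample.txt <<'EOF'
B,2017-05-28:16:12:43.000000,1495968163,259,
INSERT INTO SOURCE.T1 (EMP_ID,EMP_NAME,SALARY) VALUES ('1','VEERA',1000);
UPDATE SOURCE.T1 SET SALARY=2000 WHERE EMP_ID='1';
C,
EOF
# Drop the begin/commit records, keeping only the SQL statements.
grep -Ev '^(B,|C,)' /tmp/et_sql_sample.txt > /tmp/replay.sql
cat /tmp/replay.sql
```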

The default output is as below:

Database object names, such as table and column names, and CHAR and VARCHAR data are written in the default character set of the operating system.

Without options, FORMATSQL transactions are output as follows, in comma-delimited format:

The begin-transaction indicator, B.
The timestamp at which the transaction was committed.
The sequence number of the transaction log in which the commit was found.
The relative byte address (RBA) of the commit record within the transaction log.
The SQL statements.
The commit indicator, C.
A newline indicator.

The FORMATSQL parameter has three options, which are below.

NONAMES

Omits column names for insert operations, because inserts contain all column names. This option conserves file size.

NOPKUPDATES

Converts UPDATE operations that affect columns in the target primary key to a DELETE followed by an INSERT. By default (without NOPKUPDATES), the output is a standard UPDATE operation.

ORACLE

Formats records for compatibility with Oracle databases by converting date and time columns to a format accepted by SQL*Plus.

For complete details of these options, please check the “Oracle GoldenGate Reference Guide”.

XML Format

If you want the output of the extract file in XML format, then use the below parameter in the Extract process parameter file:

FORMATXML

This parameter also has some limitations, as below.

Do not use FORMATXML

if the data will be processed by the Replicat process. Replicat expects the default canonical format.
if FORMATASCII or FORMATSQL is being used.
if Extract is connected to a multi-byte DB2 subsystem.
if Oracle GoldenGate DDL support is active.
in pass-through mode in a data pump because there is no table metadata available to generate trail output in the specified form.

Now, I have edited the parameter file of the Extract process EXT1 as below,

GGSCI (OGGR2-1.localdomain as ggadmin@GGDB1) 86> view params ext1

EXTRACT EXT1
USERID ggadmin, PASSWORD oracle
FORMATXML
EXTTRAIL /vol3/ogg/dirdat/et
TABLE source.*;

Now the Extract EXT1 is started and it is in the RUNNING state. Let us insert a record into the table SOURCE.T1 and then update it, and check how the data is written to the extract file (trail file).

SQL> select * from source.t1;

no rows selected

SQL> insert into source.t1 values(1,'VEERA',1000);

1 row created.

SQL> update source.t1 set salary=2000 where emp_id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from source.t1;

    EMP_ID EMP_NAME                 SALARY
---------- -------------------- ----------
         1 VEERA                      2000

Let us open the extract file et000000 and check how the record is written.

The FORMATXML parameter has the below options, which can be used along with it.

ENCODING UTF-8

Outputs the full sized XML to the XML file in UTF-8, but does not output headers.
The XML header tag and root node are included in the XML output. The root node is output as OracleGoldenGateFormatXML.
Regardless of their size, XML stored as CLOB is output in a CDATA section and binary data including BLOB is output to Base64 encoding.

INLINEPROPERTIES | NOINLINEPROPERTIES

Controls whether or not properties are included within the XML tag or written separately. INLINEPROPERTIES is the default.

TRANS | NOTRANS

Controls whether or not transaction boundaries and commit timestamps should be included in the XML output. TRANS is the default.

For complete details of these options, please check the “Oracle GoldenGate Reference Guide”.

Hope this was helpful. Your comments and suggestions are always welcome.

Cheers,
Veera 🙂

Oracle GoldenGate Veridata 12.2 Step by Step Installation

Oracle GoldenGate Veridata is a high-speed data-comparison and repair solution that identifies, reports on, and fixes data discrepancies between heterogeneous databases without interrupting ongoing business processes.

This post explains the step by step installation and configuration of Oracle GoldenGate Veridata 12.2 in Linux.

The first and primary point to be considered before installing any binaries is the prerequisites. For installing Oracle GoldenGate Veridata, below are the prerequisites which need to be taken care of.

Certification and System Requirements - Check if the hardware and software configuration supports the product which you are going to install.
Install JDK on the Target Machine - JDK 1.8.0_101 or higher is required.
Install Oracle WebLogic Server - Use the Oracle Fusion Middleware 12.2.1.2.0 Infrastructure installer to install Oracle WebLogic Server and the Oracle Java Required Files (JRF) Infrastructure services.

You can download it from either of the below links:

http://www.oracle.com/technetwork/developer-tools/adf/downloads/index.html
or 
https://edelivery.oracle.com/

Install a Certified Database for Configuring Veridata Repository - Oracle GoldenGate Veridata supports the following databases:
Oracle
Microsoft SQL Server

You can refer to the Installation Guide below for more information on the prerequisites:

https://docs.oracle.com/goldengate/v12212/gg-veridata/GVDIS/GVDIS.pdf

So, now let us get into the installation part.

INFRASTRUCTURE INSTALLATION

Firstly, we need to install both WebLogic Server (WLS) and the Oracle Fusion Middleware (FMW) Infrastructure.

1. Open a terminal and run the below command,

java -jar fmw_12.2.1.0.0_infrastructure.jar


2. A welcome screen appears. Just click Next button.

3. Check the option Skip Auto Updates and proceed.

4. Enter the location for Oracle Home to store the binary files.

5. Select either installation type, with or without examples. Notice at the bottom that WLS is going to be installed.

6. Wait for the progress bar to reach 100%. The minimum Java version it looks for is greater than 1.8. Here, in my installation, it threw a warning, which I ignored as I already had JDK 1.8 installed.

7. If you want to enter your email, you may. Otherwise, accept the warning that it is blank.

8. An Installation summary is displayed. Simply press the Install button below.

9. Wait for the progress bar to reach 100%. You can optionally view the logs.

10. Click the Finish Button.

Now we have successfully installed the Infrastructure binaries. The second step is to install Oracle GoldenGate Veridata.

INSTALLATION OF ORACLE GOLDENGATE VERIDATA

1. Open a terminal session and execute the below command to invoke the Veridata installation screen:

java -jar fmw_12.2.1.0.0_ogg.jar

2. A welcome screen appears. Just click Next button.

3. Check the option Skip Auto Updates and proceed.

4. Enter the location for Oracle Home to store the binary files.

5. Select the installation type. Here we are going to install the Veridata Server. You can install it in two ways:

1. Oracle GoldenGate Veridata Server and Agent together, or
2. Oracle GoldenGate Veridata Server and Oracle GoldenGate Veridata Agent separately.

Here I am selecting “Oracle GoldenGate Veridata Server and Agent” as my installation type.
Click the Next button.

6. Wait for the progress bar to reach 100%. The minimum Java version JDK 1.8.0_101 or higher is required.

7. A summary page is displayed. Just click Next Button.

8. Wait for the progress bar to reach 100%. You can view the Logs by clicking the “View Log” button in the right bottom of the screen.

9. Installation is complete. Click the Finish button.

The beauty here is that it tells you the next step you need to perform. You can see this at the bottom of the “Installation Complete” screen.

REPOSITORY CREATION (RCU)

The Repository Creation Utility (RCU) presumes that you have already installed a compatible database to house the repository.

1. Open a Terminal and Invoke the RCU by issuing the below command.
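The command itself was not shown above. As an assumption based on a standard Fusion Middleware layout, RCU lives under oracle_common/bin of the Oracle home (adjust the path to your environment):

```
cd /vol3/oracle/oracle_common/bin    # path assumed
./rcu
```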

2. As usual a Welcome Screen appears. Click Next.

3. By default “System Load and Product Load” will be checked. Click the Next button

4. Enter the appropriate information to log in to your database with full privileges.

5. If this is the first time you have run RCU, then the only option is to create a new prefix. If you have run RCU in the past, then it will increment the new prefix to be one more than the last existing one, for example, DEV2 or DEV3. You can override the suggestion.

6. Select Oracle GoldenGate, Veridata Repository.

7. Enter the same password twice. Many of the sample VM databases use oracle (all lower case) as the default password for all schemas.

8. The respective Tablespaces will be created. Accept the default tablespace allocations.

9. A Summary Page will be displayed. Simply click on the Create button.

10. The Repository is created and you can see the details in the screen.

11. Click on the rcu.log to check the logs else you can click the Close button.

DOMAIN CREATION

The Configuration Wizard can either create a new domain or extend an existing domain. Here, I am creating a new Domain.

1. Open a terminal session. Invoke the Configuration wizard by entering
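The command was not shown above. Assuming a standard Fusion Middleware layout, the Configuration Wizard is config.sh under oracle_common/common/bin of the Oracle home (adjust the path to your environment):

```
cd /vol3/oracle/oracle_common/common/bin    # path assumed
sh config.sh
```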

2. It will ask whether you need to create a new domain or extend an existing one. Select the 1st option “Create a new domain”. Provide the domain location and click on Next button.

3. Check if the Basic WebLogic Server Domain is pre-selected and then select Veridata Standard Server Domain. This will cause additional options to be selected automatically. If Oracle JRF is not auto-selected, then you did not install the Infrastructure correctly. Click on Next to continue.

4. Enter the same password twice for the WebLogic Administrator Account. Complexity rules require a number or special character in the password. Click on Next to continue.

5. If you plan to use localhost/127.0.0.1 as the host name/address, you must select Development. In production you would not use localhost, but would use the real host name/address. Click on Next to continue.

6. Fill in the Service, Host Name, Port, Schema, and Password. Click Get RCU Configuration to continue.

7. You must receive the message “Successfully Done.” Click Next to continue.

8. Verify that the host names and passwords are all filled in. Click on Next.

9. The “Test Selected Connections” happens automatically. Looking in the result log, make sure all six are successful. Click on the Next Button.

10. Enter the Password and Click on the Next button.

11. Select the Administration Server and click the Next button to continue.

12. Provide the port details and click on Next to continue.

13. You will see the Configuration Summary page. Just click the Create button.

14. When the progress bar reaches 100%, it should say, “Domain Created Successfully!”. Click the Next to continue.

15. Note the URLs to use from a web browser to reach the WebLogic Administration Console. Often there are two: an http URL which is insecure, and an https URL which uses SSL for increased security. Click on the Finish button.

START WEBLOGIC SERVER

Start the WebLogic Server by issuing the below command.

cd /vol3/oracle/user_projects/domains/base_domain/bin

sh startWebLogic.sh

Open the below URL.

http://oggr2-1.localdomain:7001/login/LoginForm.jsp

Enter the username and password. Click the Login button.

You will be entered into the WebLogic Server.

START VERIDATA SERVER

Start the Veridata Server by issuing the below command.

cd /vol3/oracle/user_projects/domains/base_domain/veridata/bin

sh veridataServer.sh start

Provide the username and password.

You could see the below message in the screen.

Server state changed to RUNNING.

VERIDATA SERVER USER CREATION

Now we need to create a user for accessing the Veridata GUI. Navigate to the below page.

Home > Summary of Security Realms > myrealm > Users and Groups

1. Click on the New Tab

2. Create the new user. Enter the username and password. Click OK button.

3. You could see the message “User created successfully” and the user will be displayed in the list.

4. Now click on the veridata user. Click on the Groups tab, select all the Veridata-related groups and move them to the chosen list.

5. Click on the Save button. You could see the message “Settings updated successfully”.

CONFIGURATION OF AGENT

1. Deploy the Agent as below.

1a. Under the Middleware home, navigate to the directory “veridata/agent”. You will see the file “agent_config.sh”. Execute it as below:

sh agent_config.sh AGENT_HOME

You will see the message as “Successfully deployed the agent”.

2. Start the Agent as below.

2a. Under the agent home where you deployed the agent, there will be a file named “agent.properties.sample”. Take a backup of it and rename it to “agent.properties”.

2b. Edit the “agent.properties” file and give the port and jdbc url as below.

2c. Start the Agent as below. Navigate to the Agent Home and issue the below command.

cd /vol3/oracle/Agent
./agent.sh start agent.properties

Now login to the Veridata Server using the below URL

http://oggr2-1.localdomain:8830/veridata/login.jsf

Now you can perform your tasks like Comparison, Repair etc., using Oracle GoldenGate Veridata

TO BE NOTED IN WINDOWS

When installing on Windows, there is a slight change in starting the agent after deploying it: you need to create the agent as a Windows service. I have already performed the installation on Windows, and this is how it should be done.

Go to the below location and edit the agent_services.properties file.

C:\oracle\Middleware\Oracle_Home\Agent\service

_VERIDATA_SERVICE_NAME=

Give a name to the agent as below,

_VERIDATA_SERVICE_NAME=VAgent

Install the Agent as below.

Now start the Veridata Agent using the below command.

Start_Veridata_Agent.bat agent_services.properties

Hope you enjoyed this article. Your comments and suggestions are always welcome.

Cheers
Veera