How to use GoldenGate HANDLECOLLISIONS Parameter correctly?

When is the Oracle Goldengate HANDLECOLLISIONS parameter useful?

The GoldenGate HANDLECOLLISIONS parameter is configured in the Replicat process on the target database to allow processing to continue when applying the trail data would cause data integrity errors (duplicate or missing rows) in the destination database.

There are a number of reasons that can cause this condition. Some of them include the following.

    • The table data was instantiated at a particular CSN in the destination database, but the Replicat process was started at a CSN prior to the table load CSN. This causes an overlap.
    • Duplicate data exists in the source table.
    • Misconfiguration of the Extract or Replicat process.
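For the first scenario, the overlap is normally avoided by starting the Replicat only for transactions committed after the instantiation CSN. A sketch of the GGSCI command, assuming a Replicat group named rep02 and an instantiation CSN of 6488359 (both names and values are illustrative):

START REPLICAT rep02, AFTERCSN 6488359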

The HANDLECOLLISIONS parameter is utilized when there is a possibility of an overlap of the trail data being applied by the Replicat process to the destination database.

Without this parameter, the Replicat will ABEND when it tries to process inserts from the trail into a table which already contains those rows (a primary key or unique constraint violation).

It will also ABEND when the Replicat tries to update or delete rows which are not present in the destination tables. To overcome this, the RBA of the trail normally has to be moved forward one transaction before the Replicat can be restarted and will stay running.
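Skipping a single failing transaction can be sketched in GGSCI as follows (the group name rep02 is illustrative; SKIPTRANSACTION tells the Replicat to skip the first transaction in the trail when it starts):

STOP REPLICAT rep02
START REPLICAT rep02, SKIPTRANSACTION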

The following is the behavior of the Replicat process when the GoldenGate HANDLECOLLISIONS parameter is enabled.

On Source | On Target | Condition | Action
INSERT | INSERT | Inserted in source but the row is already present in target | The existing row is overwritten (the insert is applied as an update)
UPDATE | UPDATE | Updated in source but the row is not present in target | Converted to an insert if the full row image is available in the trail; otherwise ignored
DELETE | DELETE | Deleted in source but the row is not present in target | Ignored

How do you apply rows which would normally fail on target?

To capture rows which are either duplicate INSERTs, or which do not exist in the destination to be updated or deleted, the REPERROR parameter can be used to record these rows in a discard file.

In the example below, REPERROR (1403, DISCARD) handles the condition where the row the Replicat is looking for is not present in the destination database.

Similarly, REPERROR (0001, DISCARD) handles the error raised when a duplicate INSERT violates a primary key or unique key constraint because the row is already present in the table.

REPLICAT rep02
DISCARDFILE /u01/app/ha/ggs/dirrpt/rep02.dsc, APPEND, MEGABYTES 1024
REPERROR (0001, DISCARD)
REPERROR (1403, DISCARD)

This is how the Replicat will behave in the different scenarios.

Scenario | Replicat action | Error message
Duplicate insert at target | Sent to the discard file – REPERROR (0001, DISCARD) | Unique constraint violation (ORA-00001)
Updated in source but row not present at target | Sent to the discard file – REPERROR (1403, DISCARD) | No data found (ORA-01403)
Deleted in source but row not present at target | Sent to the discard file – REPERROR (1403, DISCARD) | No data found (ORA-01403)

How can we enable HANDLECOLLISIONS for only one table?

Firstly, as discussed above, the GoldenGate HANDLECOLLISIONS parameter should be used only when and where necessary.

It should be removed from the Oracle GoldenGate replication configuration as soon as possible.

Secondly, if it has to be enabled, it should be done so ONLY for the tables requiring it.

This can be achieved by placing HANDLECOLLISIONS before the MAP statements for the specific tables, and then turning it off using the NOHANDLECOLLISIONS parameter for the remaining tables, as shown below.


Set Globally

Enable global HANDLECOLLISIONS for ALL MAP statements:

HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
MAP vst.trans, TARGET vst.trans;
MAP vst.orders, TARGET vst.orders;

Set for Group of MAP Statements

Enable HANDLECOLLISIONS for the first group of MAP statements and turn it off for the remaining ones:

HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
NOHANDLECOLLISIONS
MAP vst.trans, TARGET vst.trans;
MAP vst.orders, TARGET vst.orders;

Set for Specific Tables

Enable global HANDLECOLLISIONS but disable it for specific tables:

HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
MAP vst.orders, TARGET vst.orders, NOHANDLECOLLISIONS;

Don't forget to remove the HANDLECOLLISIONS parameter after the Replicat has moved past the CSN where it was previously abending.

Also make sure to restart the Replicat after removing this parameter.
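Before removing the parameter, the current read position of the Replicat can be checked in GGSCI to confirm it has moved past the problem CSN (the group name rep02 is assumed):

INFO REPLICAT rep02, DETAIL
SEND REPLICAT rep02, STATUS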

GoldenGate Document

You can read more on the HANDLECOLLISIONS parameter in the Oracle GoldenGate Reference documentation.

Troubleshooting Series

GoldenGate: Skipping Transactions
GoldenGate: Finding Open Database Transactions
GoldenGate: How to use HANDLECOLLISIONS correctly?


  1. Baqar Ali Khan says

    We’ve found some duplicate data in TARGET, and those tables don’t have any constraints.
    Please advise me how to avoid this duplication, given that the source table has fewer records than the target table.

    • Natik Ameen says

      Hi Baqar,

      Thanks for reaching out.

      You will need to determine at what stage the data duplication happened. Usually it takes place during the initial load and Replicat setup steps due to an overlap.

      This issue often goes undetected when HANDLECOLLISIONS was left turned on during initial data instantiation and was not removed. Over time, any issue which could cause data duplication will go silently undetected due to the HANDLECOLLISIONS parameter.

      To avoid this, there should also be a periodic check on the integrity of the data between source and destination. A poor man’s check would be to write some SQL scripts to do a MINUS between the source and target data. The best method would normally be to use Veridata to do these comparisons; however, there is an additional licensing cost to it.
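      As a rough sketch of the poor man’s check mentioned above, assuming the vst.orders table from the article and a database link named source_db_link pointing at the source (both illustrative), the comparison could look like this when run on the target:

      -- Rows present on source but missing on target
      SELECT * FROM vst.orders@source_db_link
      MINUS
      SELECT * FROM vst.orders;

      -- Rows present on target but missing on source (possible duplicates)
      SELECT * FROM vst.orders
      MINUS
      SELECT * FROM vst.orders@source_db_link;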

      Hopefully this helps.

      – Natik Ameen

  2. Nico says


    Thanks for these explanations.
    In our case, we are going to have a small subset of our OLTP database into a target database meant for data warehouse (they will use it to feed the data warehouse rather than hit OLTP directly).

    Since this is going to be a subset, just like you describe above, there will be updated rows in OLTP which might be deleted already on the target system.

    Is it possible to change the update into an insert rather than ignore or log it in a discard file?
    In our case, we would still need to obtain the updated row because it has become relevant again much after it was deleted on the target database.



    • Natik Ameen says

      Hi Nico,

      Yes, it can be done by setting NOCOMPRESSDELETES on the source. This will place the full image of the deleted row into the trail file. Also set INSERTALLRECORDS in the Replicat parameter file. This will insert all rows deleted on the source database into the target table.
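      A minimal sketch of that configuration, using one of the table names from the article (the group names and table are illustrative):

      -- Extract parameter file: capture the full image of deleted rows
      NOCOMPRESSDELETES

      -- Replicat parameter file: apply every operation, including deletes, as an insert
      INSERTALLRECORDS
      MAP vst.orders, TARGET vst.orders;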

      – Natik

  3. Augusto says

    I have a lot of problems with GoldenGate. All the time, the Replicat process remains “running” but no data is replicated. I try to change the RBA position; it starts, jumps 2-3 RBA positions, and “locks” again.

    I try to look at the logs, but GoldenGate says nothing. I use HANDLECOLLISIONS, but it doesn’t change anything.

    I have noticed a network change, and some locks on the Oracle database. I have really tried everything. Do you have some “light” to give me?

    Thanks so much.


    • Natik Ameen says

      Hi Augusto,

      There could be several reasons for this. I would start out by first seeing if there are any performance bottlenecks in the database. You can run the following query to see if there are any system events (waits) that could point to a problem in the database.

      SELECT username, machine, program, last_call_et, event, blocking_session
      FROM v$session
      WHERE username='';

      In this query, replace the '' with your GoldenGate database username, and pay attention to the "event" and "blocking_session" columns.

      If there is nothing that needs attention in the database I would try to enable tracing at the GoldenGate level for the REPLICAT process.

      If this does not provide enough information, I would also consider turning tracing on at the OS level. Depending on the type of operating system, you could use truss if you are on Solaris, strace if you are on Linux, and so on.

      There is another possibility: there may be a large SQL DML operation in the trail file, and you will not see any progress until it finishes.

      I hope that was helpful.

      – Natik

  4. carlos says


    I am new to GoldenGate. I am trying to sync up the data between the source and the target database using GoldenGate, and I am having a real issue with the HANDLECOLLISIONS parameter. I am starting my Replicat at a particular CSN because I am receiving an error with an update statement on a table, and the Replicat stops. When I use the HANDLECOLLISIONS parameter it runs fine without any issues. But after the Replicat has crossed the CSN, I remove the HANDLECOLLISIONS parameter, restart the Replicat, and I am again seeing the same issue with the update statement. Can you please let me know how to fix this error?

    Thank you

    • Natik Ameen says

      Hi Carlos,

      Thanks for checking with me here. This is what I would suggest.

      Try to find the transaction in the trail file to see when it started. It’s possible that the transaction you are having a problem with was not committed until much later (but has an older SCN, since the transaction was started much earlier). If the commit happened much later, it will appear in the trail file after the cutoff CSN you are expecting, which could be one of the reasons your Replicat is abending.

      Again, you need to get more information on this transaction, which can be obtained by looking at the details in the trail file.

      Hopefully this helps. Let me know if I can be of further help.

      – Natik Ameen

  5. carlos says

    Thank you very much for your quick reply. As per your suggestions, I verified and fixed the issue. I have one last question for you: can you please let me know what parameters I need to use in my Replicat parameter file to increase the speed of replication? Currently it is taking forever to sync up, and I have a huge database that needs to be synced up.

    Thank you


    • Natik Ameen says

      Hi Carlos,

      I am glad you were able to solve the issue.

      To find out what needs to be done to speed up the rate of processing in the database, you would first need to find what wait events you are seeing on the target database.

      To do that there are a number of things that can be done.

      You can start out by querying V$SESSION_WAIT.

      You can also query the EVENT column in V$SESSION for the GoldenGate session to determine what the session is waiting for.

      There is also the option of breaking up the Replicat process into multiple Replicats, but you would need information on which tables can be grouped in each of the Replicats.

      Let me know what the waits are so that I can help.

      – Natik Ameen

  6. Aziz Yousfi says

    Thanks for all the replies.
    I got this error in the Replicat process for update statements:
    SQL error 1403 mapping from s.table_name to t.table_name, OCI error no data found

    • Natik Ameen says

      Aziz, always glad to help.

      You may want to look at these areas which may help identify the issue.
      1. Verify that the table structure between source and target is the same.
      2. Ensure trandata is on for the source table.
      3. Check whether this row exists, or ever existed, in the target table.

      So based on your discovery above you will be able to take the appropriate action.

      If the issue is with the data being missing for some reason, to keep the Replicat moving you may need to use HANDLECOLLISIONS for the Replicat process. The HANDLECOLLISIONS parameter will insert a record on the target when it does not exist in the target table.
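      Using the generic table names from the error message as placeholders, enabling it for just that one mapping would look like this in the Replicat parameter file:

      MAP s.table_name, TARGET t.table_name, HANDLECOLLISIONS;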

      Make sure to remove HANDLECOLLISIONS from the parameter file after moving past this transaction, and then restart the Replicat.

      For more information on HANDLECOLLISIONS, see the article above.

  7. Ajay says

    Hi Natik

    I'm new to GG. Can you please tell me where to declare this HANDLECOLLISIONS parameter, I mean in which location/parameter file?

    • Natik Ameen says

      Hi Ajay,

      Sure, you can use this parameter by placing HANDLECOLLISIONS in the Replicat parameter file, before the MAP statements for the tables you want to turn this feature on for.
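      As a sketch, with an illustrative group name, credential alias, and the table names used in the article, the placement looks like this:

      REPLICAT rep01
      USERIDALIAS ggadmin
      HANDLECOLLISIONS
      MAP vst.trans, TARGET vst.trans;
      MAP vst.orders, TARGET vst.orders;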

      Let me know if you need further help.

  8. Aziz Yousfi says

    Hi Natik

    I hope you are OK, and thanks a lot for your helpful replies.
    I have a problem with the Replicat process. It shows that it is running, but when I send it a status or stats command it gives me this error:
    ERROR: sending message to REPLICAT (Timeout waiting for message).
    I then killed the process, or stopped it manually and started it again, but it gives me the same error.
    I waited for about 45 hours but nothing changed.
    What can I do to troubleshoot that?


  9. Karthi Sundar says

    Hi Natik,

    I have gone through your postings. I have a question on something similar to “Nico Says” asked for.

    Our source database has a huge number of records covering years' worth of data, but our target database table does not really require all those old records and needs only 60 days of data. So we exported a subset of data from the source, imported it into the target database, and set up the GG replication by ignoring the NO_DATA_FOUND condition. This works for us 99% of the time.

    But due to the nature of our application, it can update some old records (which we have not taken into our subset), and the moment a record is updated, it becomes relevant for the target database (we need to get that updated record into the target). You have mentioned using NOCOMPRESSDELETES and INSERTALLRECORDS. By reading the usage of these two parameters, I am not able to understand how they can help.

    Can you please provide little more details on how to insert the record with new values in target, which was actually updated in source ?

    Thanks a lot for your time.

  10. Karthi Sundar says

    Hi Natik,

    Can you please provide a little more detail on how to insert a missing record into the target database? The record was actually updated in the source database but was not copied to the target during the initial load.

    Karthi Sundar.
