Oracle database CDC (Change Data Capture)
### LogMiner Connector
- New parameters: `a2.pseudocolumn.ora_rowscn`, `a2.pseudocolumn.ora_commitscn`, `a2.pseudocolumn.ora_rowts`, and `a2.pseudocolumn.ora_operation`. For more information please read KAFKA-CONNECT.md
- New parameters: `a2.pseudocolumn.ora_username`, `a2.pseudocolumn.ora_osusername`, `a2.pseudocolumn.ora_hostname`, `a2.pseudocolumn.ora_audit_session_id`, `a2.pseudocolumn.ora_session_info`, and `a2.pseudocolumn.ora_client_id`. For more information please read KAFKA-CONNECT.md
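As a sketch, the pseudocolumn parameters above could appear in a source connector properties file like this. The values shown are placeholders only; whether each parameter takes a flag or a pseudocolumn name is documented in KAFKA-CONNECT.md:

```properties
# Hypothetical oracdc source connector fragment (illustrative values only).
# Consult KAFKA-CONNECT.md for the exact accepted value format.
a2.pseudocolumn.ora_rowscn=ORA_ROWSCN
a2.pseudocolumn.ora_commitscn=ORA_COMMITSCN
a2.pseudocolumn.ora_operation=ORA_OPERATION
a2.pseudocolumn.ora_username=ORA_USERNAME
```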
### Sink Connector
- New parameters: `a2.table.mapper`, `a2.table.name.prefix`, and `a2.table.name.suffix`
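A minimal sink configuration sketch, assuming the prefix/suffix parameters take plain strings that are prepended and appended to the target table name (the names below are invented for illustration):

```properties
# Illustrative sink connector fragment; prefix/suffix values are examples only.
a2.table.name.prefix=STG_
a2.table.name.suffix=_CDC
# a2.table.mapper selects how target table names are derived;
# see KAFKA-CONNECT.md for its actual value format.
```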
- Simplified configuration for Oracle Active Data Guard: the same configuration is now used for Oracle Active Data Guard as for a primary database
### LogMiner Connector
- `a2.stop.on.ora.1284` to manage the connector behavior on ORA-1284. For more information please read KAFKA-CONNECT.md
- `a2.print.unable.to.delete.warning` to manage the connector's log output for DELETE operations over tables without a PK. For more information please read KAFKA-CONNECT.md
- `a2.schema.name.mapper` to manage schema name generation. For more information please read KAFKA-CONNECT.md
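Assuming the two flag-like parameters above take boolean values, as their names suggest (KAFKA-CONNECT.md is authoritative), a configuration fragment could look like:

```properties
# Stop the connector when ORA-1284 is raised (assumed boolean).
a2.stop.on.ora.1284=true
# Log a warning for DELETEs against tables without a PK (assumed boolean).
a2.print.unable.to.delete.warning=true
```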
### Docker image

- Rehost Confluent Schema Registry clients (Avro/Protobuf/JSON Schema) and bump version to 7.5.3
- `a2.topic.mapper` to manage the name of the Kafka topic to which data will be sent. For more information please read KAFKA-CONNECT.md
- `a2.table.mapper` to manage the table in which to sink the data
- ServiceLoader manifest files; for more information please read KIP-898: Modernize Connect plugin discovery
- `a2.incomplete.redo.tolerance` to manage connector behavior when processing an incomplete redo record. For more information please read KAFKA-CONNECT.md
- `a2.print.all.online.scn.ranges` to control output when processing online redo logs. For more information please read KAFKA-CONNECT.md
- `a2.log.miner.reconnect.ms` to manage the LogMiner reconnect interval on Unix/Linux. For more information please read KAFKA-CONNECT.md
- `a2.pk.type` to manage how key fields are chosen in a table's schema. For more information please read KAFKA-CONNECT.md
- `a2.use.rowid.as.key` to manage behavior when the table does not have appropriate PK/unique columns for key fields. For more information please read KAFKA-CONNECT.md
- `a2.use.all.columns.on.delete` to manage behavior when reading and processing a redo record for a DELETE. For more information please read KAFKA-CONNECT.md
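Under the assumption that the flag-like parameters above are booleans and that the reconnect interval is in milliseconds, a fragment combining them might look like this (values are illustrative; consult KAFKA-CONNECT.md):

```properties
# Fall back to ROWID as the key when no suitable PK/unique columns exist
# (assumed boolean).
a2.use.rowid.as.key=true
# Include all columns when processing DELETE redo records (assumed boolean).
a2.use.all.columns.on.delete=true
# Suppress per-range output for online redo logs (assumed boolean).
a2.print.all.online.scn.ranges=false
# Reconnect LogMiner every 30 s on Unix/Linux (illustrative value).
a2.log.miner.reconnect.ms=30000
```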
Online redo log processing:
Online redo logs are processed when the parameter `a2.process.online.redo.logs` is set to `true` (default: `false`). To control the lag behind data processing in Oracle, the parameter `a2.scn.query.interval.ms` sets the lag in milliseconds for processing data in online logs. This expands the range of connector tasks and makes oracdc usable where minimal, managed latency is required.
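The two online redo log settings described above can be combined as follows; the 500 ms lag is an arbitrary example value:

```properties
# Enable processing of online redo logs (default is false).
a2.process.online.redo.logs=true
# Process data in online logs with a 500 ms lag (example value).
a2.scn.query.interval.ms=500
```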
- Default values: column default values are now part of the table schema
- 19c enhancements, including the `a2.print.invalid.hex.value.warning` parameter: set its value to `true` to print a warning when an invalid hex value is encountered
- Solution for incomplete redo information: a fix for the problems described in "LogMiner REDO_SQL missing WHERE clause" and "LogMiner Redo SQL w/o WHERE-clause"
- Support for INTERVALYM/INTERVALDS
- TIMESTAMP enhancements
- SDU hint in log
- `a2.pk.string.length` parameter for the Sink Connector, and other Sink Connector enhancements
- `a2.transaction.implementation` parameter for the LogMiner Source Connector: when set to `ChronicleQueue` (the default), oracdc uses Chronicle Queue to store information about the SQL statements in an Oracle transaction; this uses off-heap memory and needs disk space for memory-mapped files. When set to `ArrayList`, oracdc uses an ArrayList on the JVM heap (no disk space needed)
- Fix for unhandled ORA-17410 when running 12c on Windows, and strict checks for supplemental logging settings
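For example, to trade disk usage for JVM heap, the transaction store can be switched away from the default:

```properties
# Default is ChronicleQueue (off-heap memory + memory-mapped files on disk).
# ArrayList keeps transaction data on the JVM heap; no disk space needed.
a2.transaction.implementation=ArrayList
```

ArrayList suits modest transaction sizes, since everything is held on the heap; the Chronicle Queue default is safer for large transactions at the cost of disk space.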
- New `a2.schema.type=single`: a schema type that stores all columns from a database row in one message with just a value schema
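Enabling the single-schema mode is a one-line change, using the exact value given above:

```properties
# Store all columns of a row in one message with a value schema only.
a2.schema.type=single
```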