SeaTunnel is a next-generation, high-performance, distributed data integration tool for massive data.
- `file` spell errors (#6606)
- `username` to `user` (#6627)
- `ReadonlyConfig::toConfig` (#6353)
- `ResourceManger` and `EventReport` module (#6620)
- `OptionUtilTest.test` (#5894)
- `SeaTunnelRow::getBytesSize` does not support the map interface (#5990)
- `FileUtils::createNewFile` does not create a new file (#5943)
- `Object.class` option value cannot return a normal value (#6247)
- `isPartitionFieldWriteInFile` throws an exception when no columns are given (#5508)
- `name` method (#5988)
- `ConnectorPackageServiceContainer` missing implementation of `getSavePointCommand`/`getRestoreCommand` (#5780)
- `JdbcHiveIT` and `SparkSinkTest` (#5798)
- `SeaTunnelSource::getProducedCatalogTables` (#5562)
- `SeaTunnelPluginLifeCycle` as deprecated (#5625)
- `setTypeInfo` (#5647)
- `SeaTunnelSource::getProducedType` (#5670)
- `SeaTunnelSink::setTypeInfo` (#5682)
- `Factory` option to avoid useless info (#5754)
- `DataTypeConvertor` to improve error message (#5782)
- `SeaTunnelSink::getConsumedType` method and mark it as deprecated (#5755)
- `FILE_OPERATION_FAILED` to `CommonError` (#5928)
- `serialVersionUID` to `Column`
- `SupportResourceShare` to spark/flink (#5847)
- `ignoreParseErrors` (#6065)
- `seatunnel-format-compatible-debezium-json` (#5803)
- `CommonErrorCodeDeprecated.JSON_OPERATION_FAILED` (#5948)
- `amazonsqs` to `AmazonSqs` as connector identifier (#5742)
- `getCountSql` to `getExistDataSql` (#5838)
- `JsonWriteStrategy` & `ExcelWriteStrategy` (#5925)
- `exactly_once` is turned off (#6017)
- `int identity` type in SQL Server (#6186)
- `seatunnel-hadoop3-3.1.4-uber.jar` into the release binary package (#5743)
- `CheckpointTimeOutTest.testJobLevelCheckpointTimeOut` (#5403)
- `RestJobExecutionEnvironment` implementation (#5671)
- `RestJobExecutionEnvironment` to the rest package (#5764)
- `result_table_name` from action name (checkpoint state key) (#5779)
- `init` and `restoreCommit` methods in `SinkAggregatedCommitter` (#5598)
- `prepare`, `getProducedType` methods (#5741)
- `table-names` from FakeSource/Assert to produce/assert multi-table (#5604)
- `LZO` compress on file read (#5083)
- `uuid` in Postgres JDBC (#6185)
- `DOING_SAVEPOINT` and `SAVEPOINT_DONE` (#5917)
- `connector/seatunnel` directory (#5489)
- `.scalafmt.conf` file (#5616)
- `job.mode` (#4826)
- `nodeUrls` property name fix (#4951)
- `incubating` keyword in document (#5257)
- `notifyTaskStatusToMaster` failed when the job is not running or failed before running (#4847)
- `Committer` (#3803)
- `path` (#3804)
- `seatunnel-api` from engine storage (#3834)

In this version, we have fixed numerous bugs in the Zeta engine, improving its stability and reliability. We have also improved the stability of the CI/CD process on the engineering side, optimizing the contributor experience. On the connector side, we have added several new connectors, fixed hidden bugs in commonly used connectors, and refactored some of them to improve the stability of data transmission and enhance the user experience. This release also adds support for MySQL CDC full-table synchronization to StarRocks, and automatic table creation is now possible on the StarRocks end.
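The MySQL CDC full-table synchronization to StarRocks described above can be sketched as a job config along these lines. This is a hypothetical, minimal example: hosts, credentials, database, and table names are placeholders, and the exact option names should be checked against the MySQL-CDC and StarRocks connector documentation for your SeaTunnel version.

```hocon
env {
  # CDC jobs run in streaming mode
  parallelism = 1
  job.mode = "STREAMING"
}

source {
  MySQL-CDC {
    # Placeholder connection details
    base-url = "jdbc:mysql://mysql-host:3306/inventory"
    username = "seatunnel"
    password = "secret"
    # Tables to capture for full-table synchronization
    table-names = ["inventory.orders", "inventory.customers"]
  }
}

sink {
  StarRocks {
    nodeUrls = ["starrocks-fe:8030"]
    base-url = "jdbc:mysql://starrocks-fe:9030"
    username = "root"
    password = ""
    database = "inventory"
    # ${table_name} lets one sink write to a per-source-table target;
    # with the connector's save-mode options, missing target tables
    # can be created automatically on the StarRocks end
    table = "${table_name}"
  }
}
```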
- `fileNameExpression`: it will throw NullPointerException (#3706)

[Source] [Fake]
[Source] [Clickhouse]
[Source] [FtpFile]
[Source] [HDFSFile]
[Source] [LocalFile]
[Source] [OSSFile]
[Source] [IoTDB]
[Source] [JDBC]
[Sink] [Assert]
[Sink] [Clickhouse]
[Sink] [Console]
[Sink] [Enterprise-WeChat]
[Sink] [FtpFile]
[Sink] [HDFSFile]
[Sink] [LocalFile]
[Sink] [OSSFile]
[Sink] [IoTDB]
[Sink] [JDBC]
[Sink] [Kudu]
[Sink] [Hive]
[Connector][Flink][Fake] Supported BigInteger Type (#2118)
[Connector][Spark][TiDB] Refactored config parameters (#1983)
[Connector][Flink] Add AssertSink connector (#2022)
[Connector][Spark][ClickHouse] Support Rsync to transfer ClickHouse data file (#2074)
[Connector & e2e][Flink] add IT for Assert Sink in e2e module (#2036)
[Transform][Spark] data quality for null data rate (#1978)
[Transform][Spark] Add a module to set default value for null field #1958
[Chore] More understandable code; code warnings will disappear (#2005)
[Spark] Use higher version of the libthrift dependency (#1994)
[Core][Starter] Change jar connector load logic (#2193)
[Core] Add plugin discovery module (#1881)
[Connector][Hudi] Source loads the data twice
[Connector][Doris] Fix the bug "Unrecognized field \"TwoPhaseCommit\"" after Doris 0.15 (#2054)
[Connector][Jdbc] Fix the data output exception when accessing Hive using Spark JDBC (#2085)
[Connector][Jdbc] Fix JDBC data loss when partition_column (partition mode) is set (#2033)
[Connector][Kafka] KafkaTableStream schema JSON parse (#2168)
[seatunnel-core] Failed to get APP_DIR path bug fixed (#2165)
[seatunnel-api-flink] Connectors dependencies repeat additions (#2207)
[seatunnel-core-flink] Updated FlinkRunMode enum to get the proper help message for run modes. (#2008)
[seatunnel-core-flink] Fix same source and sink registerPlugin LibraryCache error (#2015)
[Command] Fix commandArgs -t (--check) conflict with Flink deployment target (#2174)
[Core][Jackson] Fix Jackson type conversion error (#2031)
[Core][Starter] In cluster mode, the starter app root dir should be the same as in client mode. (#2141)
Socket source connector docs update (#1995)
Add uuid, udf, replace transform to doc (#2016)
Update Flink engine version requirements (#2220)
Add Flink SQL module to website. (#2021)
[Kubernetes] Update SeaTunnel doc on Kubernetes (#2035)
Upgrade commons-collections4 to 4.4
Upgrade commons-codec to 1.13