type:LOAD_RUN_FAIL; msg:Failed to commit txn 1067004. Tablet [858988] success replica num 1 is less then quorum replica num 2 while error backends 852399,10008

【Details】Problem description: broker load of roughly 500 million rows
【Load method】broker load
【Background】What operations were performed?
fe.conf
max_broker_concurrency = 5
max_bytes_per_broker_scanner=32212254720
be.conf
#1. Number of base compaction threads per disk on BE nodes
base_compaction_num_threads_per_disk=4
#2. Disk write rate limit for base compaction, in MB/s
base_compaction_write_mbytes_per_sec=20
#3. Number of cumulative compaction threads per disk on BE nodes
cumulative_compaction_num_threads_per_disk=8
#4. Disk write rate limit for cumulative compaction, in MB/s
cumulative_compaction_write_mbytes_per_sec=300
#5. Polling interval of cumulative compaction threads, in seconds
cumulative_compaction_check_interval_seconds=2

【Business impact】
【StarRocks version】e.g. 1.19.5
【Cluster size】e.g. 3 FE (1 follower + 2 observers) + 5 BE (FE and BE co-located)
【Machine info】CPU vcores / memory / NIC, e.g. 48C/256G/10 GbE
【Attachment】be.WARNING
W0424 09:00:43.232565 49415 fragment_mgr.cpp:300] Retrying ReportExecStatus: No more data to read.
W0424 09:01:51.845860 49288 memtable.cpp:207] Too many segment files in one load. tablet=858996, segment_count=1001, limit=1000
W0424 09:01:52.400563 49653 internal_service.cpp:205] tablet writer add chunk failed, message=tablet_id = 858996 flush_status error , id=a4240e3c-3996-48be-92cf-d1d7c55fd52b, index_id=632630, sender_id=2
W0424 09:01:52.400704 49677 tablet_sink.cpp:199] NodeChannel[632630-10008] add batch req success but status isn't ok, load_id=a4240e3c-3996-48be-92cf-d1d7c55fd52b, txn_id=1067004, node=10.20.52.196:8060, errmsg=tablet_id = 858996 flush_status error
W0424 09:16:17.161778 49410 tablet_sink.cpp:991] close channel failed. channel_name=NodeChannel[632630-10008], load_info=load_id=a4240e3c-3996-48be-92cf-d1d7c55fd52b, txn_id=1067004, errror_msg=already stopped, skip waiting for close. cancelled/!eos: : 1/1

  • fe.warn.log / be.warn.log / relevant screenshots

fe.warn.log
2022-04-24 09:16:44,942 WARN (loading_load_task_scheduler_pool-0|165) [DatabaseTransactionMgr.commitTransaction():493] Failed to commit txn [1067004]. Tablet [858988] success replica num is 1 < quorum replica num 2 while error backends 852399,10008
2022-04-24 09:16:44,942 WARN (loading_load_task_scheduler_pool-0|165) [BrokerLoadJob.onLoadingTaskFinished():315] LOAD_JOB=903003, database_id={11056}, error_msg={Failed to commit txn with error:Failed to commit txn 1067004. Tablet [858988] success replica num 1 is less then quorum replica num 2 while error backends 852399,10008}
com.starrocks.transaction.TabletQuorumFailedException: Failed to commit txn 1067004. Tablet [858988] success replica num 1 is less then quorum replica num 2 while error backends 852399,10008
at com.starrocks.transaction.DatabaseTransactionMgr.commitTransaction(DatabaseTransactionMgr.java:498) ~[starrocks-fe.jar:?]
at com.starrocks.transaction.GlobalTransactionMgr.commitTransaction(GlobalTransactionMgr.java:339) ~[starrocks-fe.jar:?]
at com.starrocks.load.loadv2.BrokerLoadJob.onLoadingTaskFinished(BrokerLoadJob.java:293) [starrocks-fe.jar:?]
at com.starrocks.load.loadv2.BrokerLoadJob.onTaskFinished(BrokerLoadJob.java:118) [starrocks-fe.jar:?]
at com.starrocks.load.loadv2.LoadTask.exec(LoadTask.java:61) [starrocks-fe.jar:?]
at com.starrocks.task.MasterTask.run(MasterTask.java:35) [starrocks-fe.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2022-04-24 09:16:44,949 WARN (loading_load_task_scheduler_pool-0|165) [LoadJob.unprotectedExecuteCancel():623] LOAD_JOB=903003, transaction_id={1067004}, error_msg={Failed to execute load with error: Failed to commit txn 1067004. Tablet [858988] success replica num 1 is less then quorum replica num 2 while error backends 852399,10008}

Are all the BE nodes healthy? How many broker load jobs are loading the data in total, and do they all load into the same table?

All BEs are healthy; 5 brokers are deployed, and the data is loaded into a single date partition.

@dongquan What is the cause, and which parameters can be tuned?

Please post the CREATE TABLE statement.

CREATE TABLE xxxxxxxxxxxx (
prod_id varchar(2048) NULL COMMENT "",
lot_id varchar(2048) NULL COMMENT "",
sort_id varchar(2048) NULL COMMENT "",
end_time varchar(2048) NULL COMMENT "",
testdate int(11) NULL COMMENT "",
fablottype varchar(2048) NULL COMMENT "",
shiplottype varchar(2048) NULL COMMENT "",
wafer_id varchar(2048) NULL COMMENT "",
tester_id varchar(2048) NULL COMMENT "",
prober_id varchar(2048) NULL COMMENT "",
proberecipe varchar(2048) NULL COMMENT "",
probecard_id varchar(2048) NULL COMMENT "",
production_ind varchar(2048) NULL COMMENT "",
interface varchar(2048) NULL COMMENT "",
program varchar(2048) NULL COMMENT "",
programdeal varchar(2048) NULL COMMENT "",
prodeng varchar(2048) NULL COMMENT "",
version varchar(2048) NULL COMMENT "",
program_rev varchar(2048) NULL COMMENT "",
bindef_file varchar(2048) NULL COMMENT "",
testtemp varchar(2048) NULL COMMENT "",
wafernotch varchar(2048) NULL COMMENT "",
fabsite varchar(2048) NULL COMMENT "",
testsite varchar(2048) NULL COMMENT "",
operator varchar(2048) NULL COMMENT "",
test_time varchar(2048) NULL COMMENT "",
site varchar(2048) NULL COMMENT "",
diex varchar(2048) NULL COMMENT "",
diey varchar(2048) NULL COMMENT "",
dut varchar(2048) NULL COMMENT "",
teststarttime varchar(2048) NULL COMMENT "",
testendtime varchar(2048) NULL COMMENT "",
testnumber varchar(2048) NULL COMMENT "",
testblock varchar(2048) NULL COMMENT "",
testname varchar(2048) NULL COMMENT "",
value varchar(2048) NULL COMMENT "",
extract_time varchar(2048) NULL COMMENT "",
grade varchar(2048) NULL COMMENT "",
sortid varchar(2048) NULL COMMENT ""
) ENGINE=OLAP
DUPLICATE KEY(prod_id, lot_id, sort_id, end_time, testdate, fablottype, shiplottype, wafer_id, tester_id)
COMMENT "OLAP"
PARTITION BY RANGE(testdate)
(
PARTITION p20220517 VALUES [("20220517"), ("20220518")),
PARTITION p20220518 VALUES [("20220518"), ("20220519")),
PARTITION p20220519 VALUES [("20220519"), ("20220520")),
PARTITION p20220520 VALUES [("20220520"), ("20220521")),
PARTITION p20220521 VALUES [("20220521"), ("20220522")),
PARTITION p20220522 VALUES [("20220522"), ("20220523")),
PARTITION p20220523 VALUES [("20220523"), ("20220524")),
PARTITION p20220524 VALUES [("20220524"), ("20220525")),
PARTITION p20220525 VALUES [("20220525"), ("20220526")),
PARTITION p20220526 VALUES [("20220526"), ("20220527")),
PARTITION p20220527 VALUES [("20220527"), ("20220528")),
PARTITION p20220528 VALUES [("20220528"), ("20220529")),
PARTITION p20220529 VALUES [("20220529"), ("20220530")),
PARTITION p20220530 VALUES [("20220530"), ("20220531")),
PARTITION p20220531 VALUES [("20220531"), ("20220601")))
DISTRIBUTED BY HASH(prod_id) BUCKETS 3
PROPERTIES (
"replication_num" = "3",
"in_memory" = "false",
"storage_format" = "DEFAULT"
);
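An aside on the quorum arithmetic in the error above: with "replication_num" = "3", committing a load transaction requires a majority of replicas to succeed. A minimal sketch of that majority rule (an assumption on my part, but consistent with both errors in this thread: 3 replicas → quorum 2, 1 replica → quorum 1):

```python
def quorum_replica_num(replication_num: int) -> int:
    # Majority quorum: strictly more than half of the replicas must succeed.
    return replication_num // 2 + 1

# With replication_num = 3, a commit needs 2 successful replicas, which is
# why "success replica num 1 is less then quorum replica num 2" fails.
print(quorum_replica_num(3))  # 2
```

So with 2 of 3 replicas failing (error backends 852399 and 10008), the transaction cannot reach quorum and the whole load is cancelled.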

Is write_buffer_size in be.conf still at its default? You can try tuning the following two parameters.

  • write_buffer_size

During a load, data is first written to an in-memory block on the BE and flushed to disk only once that block reaches a threshold. The default is 100 MB. A threshold that is too small can leave a large number of small files on the BE; raising it reduces the file count. A threshold that is too large, however, can cause RPC timeouts; see the next parameter.

  • tablet_writer_rpc_timeout_sec

The timeout, in seconds, for the RPC that sends one batch (1024 rows) during a load. The default is 600 seconds. Because this RPC may involve flushing the memory blocks of multiple tablets to disk, slow disk writes can time it out; increasing the timeout can reduce timeout errors (such as "send batch fail"). If you increase write_buffer_size, increase this parameter accordingly as well.
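Putting the two suggestions together, a be.conf sketch (the values here are illustrative starting points, not tuned recommendations; write_buffer_size is in bytes):

```
# be.conf
# Flush the load memtable at 200 MB instead of the default 100 MB,
# so a single load produces fewer, larger segment files.
write_buffer_size = 209715200

# Give each per-batch RPC more headroom, since larger buffers mean
# longer flushes. Default is 600 seconds.
tablet_writer_rpc_timeout_sec = 900
```

Changes to be.conf generally take effect after a BE restart.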

Did you ever solve this problem? How?

Hi, this error can have several different root causes. Are you also seeing the error "Too many segment files in one load"?

I am loading data with CloudCanal. Error message:

java.lang.RuntimeException: Failed to flush data to StarRocks, Error response:
{"Status":"Fail","BeginTxnTimeMs":0,"Message":"Failed to commit txn 43478770. Tablet [3309280] success replica num 0 is less then quorum replica num 1 while error backends 10007","NumberUnselectedRows":0,"CommitAndPublishTimeMs":0,"Label":"63cd8d7b-32c6-4cae-b85d-0d02e7bac0ac","LoadBytes":311136,"StreamLoadPutTimeMs":1,"NumberTotalRows":1024,"WriteDataTimeMs":44,"TxnId":43478770,"LoadTimeMs":47,"ReadDataTimeMs":0,"NumberLoadedRows":1024,"NumberFilteredRows":0}
{}
The load now fails with this error once it reaches a certain point.

fe.warn.log errors:

2022-05-10 10:05:46,888 WARN (replayer|64) [Catalog.replayJournal():2468] replay journal cost too much time: 1001 replayedJournalId: 135688127
2022-05-10 10:10:13,969 WARN (replayer|64) [BDBJournalCursor.next():148] Catch an exception when get next JournalEntity. key:135697478
com.sleepycat.je.LockTimeoutException: (JE 7.3.7) Lock expired. Locker 1113094669 -1_replayer_ReplicaThreadLocker: waited for lock on database=135675396 LockAddr:795291322 LSN=0x20f5/0x2c2473 type=READ grant=WAIT_NEW timeoutMillis=1000 startTime=1652177412969 endTime=1652177413969
Owners: [<LockInfo locker="404067135 -136379707_ReplayThread_ReplayTxn" type="WRITE"/>]
Waiters: []

at com.sleepycat.je.txn.LockManager.makeTimeoutException(LockManager.java:1117) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.txn.LockManager.waitForLock(LockManager.java:606) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.txn.LockManager.lock(LockManager.java:345) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.txn.BasicLocker.lockInternal(BasicLocker.java:124) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.rep.txn.ReplicaThreadLocker.lockInternal(ReplicaThreadLocker.java:63) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.txn.Locker.lock(Locker.java:499) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:3585) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:3316) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.dbi.CursorImpl.lockLNAndCheckDefunct(CursorImpl.java:2138) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.dbi.CursorImpl.searchExact(CursorImpl.java:1950) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Cursor.searchExact(Cursor.java:4194) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Cursor.searchNoDups(Cursor.java:4055) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Cursor.search(Cursor.java:3857) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Cursor.getInternal(Cursor.java:1284) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Database.get(Database.java:1271) ~[je-7.3.7.jar:7.3.7]
at com.sleepycat.je.Database.get(Database.java:1330) ~[je-7.3.7.jar:7.3.7]
at com.starrocks.journal.bdbje.CloseSafeDatabase.get(CloseSafeDatabase.java:47) ~[starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBJournalCursor.next(BDBJournalCursor.java:108) [starrocks-fe.jar:?]
at com.starrocks.catalog.Catalog.replayJournal(Catalog.java:2450) [starrocks-fe.jar:?]
at com.starrocks.catalog.Catalog$3.runOneCycle(Catalog.java:2239) [starrocks-fe.jar:?]
at com.starrocks.common.util.Daemon.run(Daemon.java:119) [starrocks-fe.jar:?]

Is this a single-replica table? You can search for 43478770 in the log of BE node 10007.

It originally had 3 replicas and failed the same way; the error started once the sync reached 29,696 rows. I then changed it to 1 replica to see whether that helps.

be.WARNING:2860005:W0510 10:11:25.261531 28886 tablets_channel.cpp:311] Fail to close tablet writer, tablet_id=3309280 transaction_id=43478770 err=Cancelled: primary key size exceed the limit.
be.WARNING.log.20220509-160739:2860005:W0510 10:11:25.261531 28886 tablets_channel.cpp:311] Fail to close tablet writer, tablet_id=3309280 transaction_id=43478770 err=Cancelled: primary key size exceed the limit.

That is the search result. How should I fix this?


The primary key is currently limited to 127 bytes; there are probably too many (or too wide) primary key columns. Consider reducing the primary key columns.
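To estimate whether a table's key stays under that 127-byte limit before creating it, here is a rough back-of-the-envelope check (a hypothetical helper; per-column sizes are approximations with VARCHAR counted at its declared length, and the exact encoded size is version-dependent):

```python
# Approximate per-column byte widths (assumptions for illustration only).
TYPE_BYTES = {"tinyint": 1, "smallint": 2, "int": 4,
              "bigint": 8, "largeint": 16}

def estimated_key_bytes(columns):
    """columns: list of (type_name, varchar_length_or_None) for the key."""
    total = 0
    for type_name, length in columns:
        total += length if type_name == "varchar" else TYPE_BYTES[type_name]
    return total

# A key of (bigint, int, varchar(64)) is roughly 8 + 4 + 64 = 76 bytes,
# which fits; a single varchar(2048) key column clearly would not.
key = [("bigint", None), ("int", None), ("varchar", 64)]
print(estimated_key_bytes(key))  # 76
```

In practice this means keeping primary keys to a few narrow columns (integers, short strings) rather than wide VARCHARs.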

Thanks, that was the cause.