==> Synchronizing chroot copy [/home/alhp/workspace/chroot/root] -> [build_9cf44918-a9a9-4e82-aca6-76fbe4411177]...done
==> Making package: seaweedfs 3.87-1.1 (Wed May 7 02:07:27 2025)
==> Retrieving sources...
  -> Downloading seaweedfs-3.87.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 2561k    0 2561k    0     0  2445k      0 --:--:--  0:00:01 --:--:-- 2445k
100 7937k    0 7937k    0     0  3890k      0 --:--:--  0:00:02 --:--:-- 5413k
100 13.1M    0 13.1M    0     0  4421k      0 --:--:--  0:00:03 --:--:-- 5461k
100 19.2M    0 19.2M    0     0  4870k      0 --:--:--  0:00:04 --:--:-- 5718k
100 27.0M    0 27.0M    0     0  5503k      0 --:--:--  0:00:05 --:--:-- 6306k
100 28.5M    0 28.5M    0     0  5614k      0 --:--:--  0:00:05 --:--:-- 6412k
==> Validating source files with sha256sums...
    seaweedfs-3.87.tar.gz ... Passed
==> Making package: seaweedfs 3.87-1.1 (Wed May 7 00:07:35 2025)
==> Checking runtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...

Package (1)    New Version  Net Change

extra/mailcap  2.1.54-2       0.11 MiB

Total Installed Size:  0.11 MiB

:: Proceed with installation? [Y/n]
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing mailcap...
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
==> Checking buildtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...

Package (1)  New Version  Net Change

extra/go     2:1.24.2-1   237.76 MiB

Total Installed Size:  237.76 MiB

:: Proceed with installation? [Y/n]
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing go...
:: Running post-transaction hooks...
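The "Validating source files with sha256sums" step above amounts to hashing the downloaded tarball and comparing the hex digest against the one pinned in the PKGBUILD. A minimal Go sketch of that comparison (the inputs and digests here are illustrative, not the real checksum of seaweedfs-3.87.tar.gz):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify reports whether data hashes to the expected hex digest,
// mirroring what makepkg's sha256sums check does per source file.
func verify(data []byte, expectedHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == expectedHex
}

func main() {
	// Empty input has the well-known SHA-256 digest:
	empty := "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	fmt.Println(verify(nil, empty))                  // true
	fmt.Println(verify([]byte("tampered"), empty))   // false
}
```

A mismatch at this stage aborts the build before `prepare()` ever runs, which is why a tampered or truncated download fails fast.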
(1/1) Arming ConditionNeedsUpdate...
==> Retrieving sources...
  -> Found seaweedfs-3.87.tar.gz
==> WARNING: Skipping all source file integrity checks.
==> Extracting sources...
  -> Extracting seaweedfs-3.87.tar.gz with bsdtar
==> Starting prepare()...
==> Starting build()...
==> Starting check()...
?       github.com/seaweedfs/seaweedfs/weed     [no test files]
=== RUN   TestConcurrentAddRemoveNodes
--- PASS: TestConcurrentAddRemoveNodes (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/cluster     0.004s
=== RUN   TestAddServer
I0507 00:08:58.355997 lock_ring.go:43 add server localhost:8080
I0507 00:08:58.356188 lock_ring.go:43 add server localhost:8081
I0507 00:08:58.356195 lock_ring.go:43 add server localhost:8082
I0507 00:08:58.356197 lock_ring.go:43 add server localhost:8083
I0507 00:08:58.356199 lock_ring.go:43 add server localhost:8084
I0507 00:08:58.356201 lock_ring.go:59 remove server localhost:8084
I0507 00:08:58.356203 lock_ring.go:59 remove server localhost:8082
I0507 00:08:58.356205 lock_ring.go:59 remove server localhost:8080
--- PASS: TestAddServer (0.11s)
=== RUN   TestLockRing
--- PASS: TestLockRing (0.22s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/cluster/lock_manager        0.337s
=== RUN   TestReadingTomlConfiguration
database is map[connection_max:5000 enabled:true ports:[8001 8001 8002] server:192.168.1.1]
servers is map[alpha:map[dc:eqdc10 ip:10.0.0.1] beta:map[dc:eqdc10 ip:10.0.0.2]]
alpha ip is 10.0.0.1
--- PASS: TestReadingTomlConfiguration (0.00s)
=== RUN   TestXYZ
I0507 00:09:00.193269 volume_test.go:12 Last-Modified Mon, 08 Jul 2013 08:53:16 GMT
--- PASS: TestXYZ (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/command     0.025s
?       github.com/seaweedfs/seaweedfs/weed/command/scaffold    [no test files]
=== RUN   TestChunkGroup_doSearchChunks
--- PASS: TestChunkGroup_doSearchChunks (0.00s)
=== RUN   TestDoMaybeManifestize
test 0
test 1
test 2
test 3
--- PASS: TestDoMaybeManifestize (0.00s)
=== RUN   Test_removeGarbageChunks
--- PASS: Test_removeGarbageChunks (0.00s)
=== RUN   TestDoMinusChunks
2025/05/07 00:09:00 first deleted chunks: [file_id:"1" size:3 modified_ts_ns:100 source_file_id:"11" file_id:"2" offset:3 size:3 modified_ts_ns:200 file_id:"3" offset:6 size:3 modified_ts_ns:300 source_file_id:"33"]
2025/05/07 00:09:00 clusterA synced empty chunks event result: []
--- PASS: TestDoMinusChunks (0.00s)
=== RUN   TestCompactFileChunksRealCase
I0507 00:09:00.180734 filechunks2_test.go:83 before chunk 2,512f31f2c0700a [ 0, 25)
I0507 00:09:00.180821 filechunks2_test.go:83 before chunk 6,512f2c2e24e9e8 [ 868352, 917585)
I0507 00:09:00.180825 filechunks2_test.go:83 before chunk 7,514468dd5954ca [ 884736, 901120)
I0507 00:09:00.180827 filechunks2_test.go:83 before chunk 5,5144463173fe77 [ 917504, 2297856)
I0507 00:09:00.180828 filechunks2_test.go:83 before chunk 4,51444c7ab54e2d [ 2301952, 2367488)
I0507 00:09:00.180829 filechunks2_test.go:83 before chunk 4,514450e643ad22 [ 2371584, 2420736)
I0507 00:09:00.180830 filechunks2_test.go:83 before chunk 6,514456a5e9e4d7 [ 2449408, 2490368)
I0507 00:09:00.180832 filechunks2_test.go:83 before chunk 3,51444f8d53eebe [ 2494464, 2555904)
I0507 00:09:00.180833 filechunks2_test.go:83 before chunk 4,5144578b097c7e [ 2560000, 2596864)
I0507 00:09:00.180834 filechunks2_test.go:83 before chunk 3,51445500b6b4ac [ 2637824, 2678784)
I0507 00:09:00.180835 filechunks2_test.go:83 before chunk 1,51446285e52a61 [ 2695168, 2715648)
I0507 00:09:00.180846 filechunks2_test.go:83 compacted chunk 2,512f31f2c0700a [ 0, 25)
I0507 00:09:00.180847 filechunks2_test.go:83 compacted chunk 6,512f2c2e24e9e8 [ 868352, 917585)
I0507 00:09:00.180848 filechunks2_test.go:83 compacted chunk 7,514468dd5954ca [ 884736, 901120)
I0507 00:09:00.180849 filechunks2_test.go:83 compacted chunk 5,5144463173fe77 [ 917504, 2297856)
I0507 00:09:00.180850 filechunks2_test.go:83 compacted chunk 4,51444c7ab54e2d [ 2301952, 2367488)
I0507 00:09:00.180851 filechunks2_test.go:83 compacted chunk 4,514450e643ad22 [ 2371584, 2420736)
I0507 00:09:00.180853 filechunks2_test.go:83 compacted chunk 6,514456a5e9e4d7 [ 2449408, 2490368)
I0507 00:09:00.180854 filechunks2_test.go:83 compacted chunk 3,51444f8d53eebe [ 2494464, 2555904)
I0507 00:09:00.180855 filechunks2_test.go:83 compacted chunk 4,5144578b097c7e [ 2560000, 2596864)
I0507 00:09:00.180856 filechunks2_test.go:83 compacted chunk 3,51445500b6b4ac [ 2637824, 2678784)
I0507 00:09:00.180857 filechunks2_test.go:83 compacted chunk 1,51446285e52a61 [ 2695168, 2715648)
--- PASS: TestCompactFileChunksRealCase (0.00s)
=== RUN   TestReadResolvedChunks
resolved to 4 visible intervales
[0,50) a 1
[50,150) b 2
[175,275) e 5
[275,300) d 4
--- PASS: TestReadResolvedChunks (0.00s)
=== RUN   TestReadResolvedChunks2
resolved to 2 visible intervales
[200,225) e 5
[225,250) c 3
--- PASS: TestReadResolvedChunks2 (0.00s)
=== RUN   TestRandomizedReadResolvedChunks
--- PASS: TestRandomizedReadResolvedChunks (0.00s)
=== RUN   TestSequentialReadResolvedChunks
visibles 13
--- PASS: TestSequentialReadResolvedChunks (0.00s)
=== RUN   TestActualReadResolvedChunks
[0,2097152) 5,e7b96fef48 1634447487595823000
[2097152,4194304) 5,e5562640b9 1634447487595826000
[4194304,6291456) 5,df033e0fe4 1634447487595827000
[6291456,8388608) 7,eb08148a9b 1634447487595827000
[8388608,10485760) 7,e0f92d1604 1634447487595828000
[10485760,12582912) 7,e33cb63262 1634447487595828000
[12582912,14680064) 5,ea98e40e93 1634447487595829000
[14680064,16777216) 5,e165661172 1634447487595829000
[16777216,18874368) 3,e692097486 1634447487595830000
[18874368,20971520) 3,e28e2e3cbd 1634447487595830000
[20971520,23068672) 3,e443974d4e 1634447487595830000
[23068672,25165824) 2,e815bed597 1634447487595831000
[25165824,27140560) 5,e94715199e 1634447487595832000
--- PASS: TestActualReadResolvedChunks (0.00s)
=== RUN   TestActualReadResolvedChunks2
[0,184320) 1,e7b96fef48 1
[184320,188416) 2,33562640b9 4
[188416,2285568) 4,df033e0fe4 3
--- PASS: TestActualReadResolvedChunks2 (0.00s)
=== RUN   TestCompactFileChunks
--- PASS: TestCompactFileChunks (0.00s)
=== RUN   TestCompactFileChunks2
--- PASS: TestCompactFileChunks2 (0.00s)
=== RUN   TestRandomFileChunksCompact
--- PASS: TestRandomFileChunksCompact (0.00s)
=== RUN   TestIntervalMerging
2025/05/07 00:09:00 ++++++++++ merged test case 0 ++++++++++++++++++++
2025/05/07 00:09:00 test case 0, interval start=0, stop=100, fileId=abc
2025/05/07 00:09:00 test case 0, interval start=100, stop=200, fileId=asdf
2025/05/07 00:09:00 test case 0, interval start=200, stop=300, fileId=fsad
2025/05/07 00:09:00 ++++++++++ merged test case 1 ++++++++++++++++++++
2025/05/07 00:09:00 test case 1, interval start=0, stop=200, fileId=asdf
2025/05/07 00:09:00 ++++++++++ merged test case 2 ++++++++++++++++++++
2025/05/07 00:09:00 test case 2, interval start=0, stop=70, fileId=b
2025/05/07 00:09:00 test case 2, interval start=70, stop=100, fileId=a
2025/05/07 00:09:00 ++++++++++ merged test case 3 ++++++++++++++++++++
2025/05/07 00:09:00 test case 3, interval start=0, stop=50, fileId=asdf
2025/05/07 00:09:00 test case 3, interval start=50, stop=300, fileId=xxxx
2025/05/07 00:09:00 ++++++++++ merged test case 4 ++++++++++++++++++++
2025/05/07 00:09:00 test case 4, interval start=0, stop=200, fileId=asdf
2025/05/07 00:09:00 test case 4, interval start=250, stop=500, fileId=xxxx
2025/05/07 00:09:00 ++++++++++ merged test case 5 ++++++++++++++++++++
2025/05/07 00:09:00 test case 5, interval start=0, stop=200, fileId=d
2025/05/07 00:09:00 test case 5, interval start=200, stop=220, fileId=c
2025/05/07 00:09:00 ++++++++++ merged test case 6 ++++++++++++++++++++
2025/05/07 00:09:00 test case 6, interval start=0, stop=100, fileId=xyz
2025/05/07 00:09:00 ++++++++++ merged test case 7 ++++++++++++++++++++
2025/05/07 00:09:00 test case 7, interval start=0, stop=2097152, fileId=3,029565bf3092
2025/05/07 00:09:00 test case 7, interval start=2097152, stop=5242880, fileId=6,029632f47ae2
2025/05/07 00:09:00 test case 7, interval start=5242880, stop=8388608, fileId=2,029734c5aa10
2025/05/07 00:09:00 test case 7, interval start=8388608, stop=11534336, fileId=5,02982f80de50
2025/05/07 00:09:00 test case 7, interval start=11534336, stop=14376529, fileId=7,0299ad723803
2025/05/07 00:09:00 ++++++++++ merged test case 8 ++++++++++++++++++++
2025/05/07 00:09:00 test case 8, interval start=0, stop=77824, fileId=4,0b3df938e301
2025/05/07 00:09:00 test case 8, interval start=77824, stop=208896, fileId=4,0b3f0c7202f0
2025/05/07 00:09:00 test case 8, interval start=208896, stop=339968, fileId=2,0b4031a72689
2025/05/07 00:09:00 test case 8, interval start=339968, stop=471040, fileId=3,0b416a557362
2025/05/07 00:09:00 test case 8, interval start=471040, stop=472225, fileId=6,0b3e0650019c
--- PASS: TestIntervalMerging (0.00s)
=== RUN   TestChunksReading
2025/05/07 00:09:00 ++++++++++ read test case 0 ++++++++++++++++++++
2025/05/07 00:09:00 read case 0, chunk 0, offset=0, size=100, fileId=abc
2025/05/07 00:09:00 read case 0, chunk 1, offset=0, size=100, fileId=asdf
2025/05/07 00:09:00 read case 0, chunk 2, offset=0, size=50, fileId=fsad
2025/05/07 00:09:00 ++++++++++ read test case 1 ++++++++++++++++++++
2025/05/07 00:09:00 read case 1, chunk 0, offset=50, size=100, fileId=asdf
2025/05/07 00:09:00 ++++++++++ read test case 2 ++++++++++++++++++++
2025/05/07 00:09:00 read case 2, chunk 0, offset=20, size=30, fileId=b
2025/05/07 00:09:00 read case 2, chunk 1, offset=57, size=10, fileId=a
2025/05/07 00:09:00 ++++++++++ read test case 3 ++++++++++++++++++++
2025/05/07 00:09:00 read case 3, chunk 0, offset=0, size=50, fileId=asdf
2025/05/07 00:09:00 read case 3, chunk 1, offset=0, size=150, fileId=xxxx
2025/05/07 00:09:00 ++++++++++ read test case 4 ++++++++++++++++++++
2025/05/07 00:09:00 read case 4, chunk 0, offset=0, size=200, fileId=asdf
2025/05/07 00:09:00 read case 4, chunk 1, offset=0, size=150, fileId=xxxx
2025/05/07 00:09:00 ++++++++++ read test case 5 ++++++++++++++++++++
2025/05/07 00:09:00 read case 5, chunk 0, offset=0, size=200, fileId=c
2025/05/07 00:09:00 read case 5, chunk 1, offset=130, size=20, fileId=b
2025/05/07 00:09:00 ++++++++++ read test case 6 ++++++++++++++++++++
2025/05/07 00:09:00 read case 6, chunk 0, offset=0, size=100, fileId=xyz
2025/05/07 00:09:00 ++++++++++ read test case 7 ++++++++++++++++++++
2025/05/07 00:09:00 read case 7, chunk 0, offset=0, size=100, fileId=abc
2025/05/07 00:09:00 read case 7, chunk 1, offset=0, size=100, fileId=asdf
2025/05/07 00:09:00 ++++++++++ read test case 8 ++++++++++++++++++++
2025/05/07 00:09:00 read case 8, chunk 0, offset=0, size=90, fileId=abc
2025/05/07 00:09:00 read case 8, chunk 1, offset=0, size=100, fileId=asdf
2025/05/07 00:09:00 read case 8, chunk 2, offset=0, size=110, fileId=fsad
2025/05/07 00:09:00 ++++++++++ read test case 9 ++++++++++++++++++++
2025/05/07 00:09:00 read case 9, chunk 0, offset=0, size=43175936, fileId=2,111fc2cbfac1
2025/05/07 00:09:00 read case 9, chunk 1, offset=0, size=9805824, fileId=2,112a36ea7f85
2025/05/07 00:09:00 read case 9, chunk 2, offset=0, size=19582976, fileId=4,112d5f31c5e7
2025/05/07 00:09:00 read case 9, chunk 3, offset=0, size=60690432, fileId=1,113245f0cdb6
2025/05/07 00:09:00 read case 9, chunk 4, offset=0, size=4014080, fileId=3,1141a70733b5
2025/05/07 00:09:00 read case 9, chunk 5, offset=0, size=16309588, fileId=1,114201d5bbdb
--- PASS: TestChunksReading (0.00s)
=== RUN   TestViewFromVisibleIntervals
--- PASS: TestViewFromVisibleIntervals (0.00s)
=== RUN   TestViewFromVisibleIntervals2
--- PASS: TestViewFromVisibleIntervals2 (0.00s)
=== RUN   TestViewFromVisibleIntervals3
--- PASS: TestViewFromVisibleIntervals3 (0.00s)
=== RUN   TestCompactFileChunks3
--- PASS: TestCompactFileChunks3 (0.00s)
=== RUN   TestFilerConf
--- PASS: TestFilerConf (0.00s)
=== RUN   TestProtoMarshal
e to: 234,2423423422 * 2342342354223234,2342342342"# 0Ø: text/jsonP
--- PASS: TestProtoMarshal (0.00s)
=== RUN   TestIntervalList_Overlay
[0,25) 6 6
[25,50) 1 1
[50,150) 2 2
[175,210) 5 5
[210,225) 3 3
[225,250) 4 4
[0,25) 6 6
[25,50) 1 1
[50,150) 7 7
[175,210) 5 5
[210,225) 3 3
[225,250) 4 4
--- PASS: TestIntervalList_Overlay (0.00s)
=== RUN   TestIntervalList_Overlay2
[0,50) 2 2
[50,100) 1 1
--- PASS: TestIntervalList_Overlay2 (0.00s)
=== RUN   TestIntervalList_Overlay3
[0,60) 2 2
[60,100) 1 1
--- PASS: TestIntervalList_Overlay3 (0.00s)
=== RUN   TestIntervalList_Overlay4
[0,100) 2 2
--- PASS: TestIntervalList_Overlay4 (0.00s)
=== RUN   TestIntervalList_Overlay5
[0,110) 2 2
--- PASS: TestIntervalList_Overlay5 (0.00s)
=== RUN   TestIntervalList_Overlay6
[50,110) 2 2
--- PASS: TestIntervalList_Overlay6 (0.00s)
=== RUN   TestIntervalList_Overlay7
[50,90) 2 2
[90,100) 1 1
--- PASS: TestIntervalList_Overlay7 (0.00s)
=== RUN   TestIntervalList_Overlay8
[50,60) 1 1
[60,90) 2 2
[90,100) 1 1
--- PASS: TestIntervalList_Overlay8 (0.00s)
=== RUN   TestIntervalList_Overlay9
[50,60) 1 1
[60,100) 2 2
--- PASS: TestIntervalList_Overlay9 (0.00s)
=== RUN   TestIntervalList_Overlay10
[50,60) 1 1
[60,110) 2 2
--- PASS: TestIntervalList_Overlay10 (0.00s)
=== RUN   TestIntervalList_Overlay11
[0,90) 5 5
[90,100) 1 1
[100,110) 2 2
--- PASS: TestIntervalList_Overlay11 (0.00s)
=== RUN   TestIntervalList_insertInterval1
[50,150) 2 2
[200,250) 3 3
--- PASS: TestIntervalList_insertInterval1 (0.00s)
=== RUN   TestIntervalList_insertInterval2
[0,25) 3 3
[50,150) 2 2
--- PASS: TestIntervalList_insertInterval2 (0.00s)
=== RUN   TestIntervalList_insertInterval3
[0,75) 3 3
[75,150) 2 2
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval3 (0.00s)
=== RUN   TestIntervalList_insertInterval4
[0,200) 3 3
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval4 (0.00s)
=== RUN   TestIntervalList_insertInterval5
[0,225) 5 5
[225,250) 4 4
--- PASS: TestIntervalList_insertInterval5 (0.00s)
=== RUN   TestIntervalList_insertInterval6
[0,50) 1 1
[50,150) 2 2
[150,200) 1 1
[200,250) 4 4
[250,275) 1 1
--- PASS: TestIntervalList_insertInterval6 (0.00s)
=== RUN   TestIntervalList_insertInterval7
[50,150) 2 2
[150,200) 1 1
[200,250) 4 4
[250,275) 1 1
--- PASS: TestIntervalList_insertInterval7 (0.00s)
=== RUN   TestIntervalList_insertInterval8
[50,75) 2 2
[75,200) 3 3
[200,250) 4 4
[250,275) 3 3
--- PASS: TestIntervalList_insertInterval8 (0.00s)
=== RUN   TestIntervalList_insertInterval9
[50,150) 3 3
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval9 (0.00s)
=== RUN   TestIntervalList_insertInterval10
[50,100) 2 2
[100,200) 5 5
[200,300) 4 4
--- PASS: TestIntervalList_insertInterval10 (0.00s)
=== RUN   TestIntervalList_insertInterval11
[0,64) 1 1
[64,68) 2 2
[68,72) 4 4
[72,136) 3 3
--- PASS: TestIntervalList_insertInterval11 (0.00s)
=== RUN   TestIntervalList_insertIntervalStruct
[0,64) 1 {1 0 0}
[64,68) 4 {4 0 0}
[68,72) 2 {2 0 0}
[72,136) 3 {3 0 0}
--- PASS: TestIntervalList_insertIntervalStruct (0.00s)
=== RUN   TestReaderAt
--- PASS: TestReaderAt (0.00s)
=== RUN   TestReaderAt0
--- PASS: TestReaderAt0 (0.00s)
=== RUN   TestReaderAt1
--- PASS: TestReaderAt1 (0.00s)
=== RUN   TestReaderAtGappedChunksDoNotLeak
--- PASS: TestReaderAtGappedChunksDoNotLeak (0.00s)
=== RUN   TestReaderAtSparseFileDoesNotLeak
--- PASS: TestReaderAtSparseFileDoesNotLeak (0.00s)
=== RUN   TestFilerRemoteStorage_FindRemoteStorageClient
--- PASS: TestFilerRemoteStorage_FindRemoteStorageClient (0.00s)
=== RUN   TestS3Conf
--- PASS: TestS3Conf (0.00s)
=== RUN   TestCheckDuplicateAccessKey
--- PASS: TestCheckDuplicateAccessKey (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer       0.017s
?       github.com/seaweedfs/seaweedfs/weed/filer/abstract_sql  [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/arangodb      [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/cassandra     [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/cassandra2    [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/elastic/v7    [no test files]
=== RUN   TestStore
--- PASS: TestStore (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer/etcd  0.013s
?       github.com/seaweedfs/seaweedfs/weed/filer/hbase [no test files]
=== RUN   TestCreateAndFind
I0507 00:09:00.186792 leveldb_store.go:47 filer store dir: /tmp/TestCreateAndFind3401275163/001
I0507 00:09:00.186913 file_util.go:27 Folder /tmp/TestCreateAndFind3401275163/001 Permission: -rwxr-xr-x
I0507 00:09:00.187518 filer.go:155 create filer.store.id to 1988733383
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0507 00:09:00.190168 leveldb_store.go:47 filer store dir: /tmp/TestEmptyRoot1195537286/001
I0507 00:09:00.190188 file_util.go:27 Folder /tmp/TestEmptyRoot1195537286/001 Permission: -rwxr-xr-x
I0507 00:09:00.190649 filer.go:155 create filer.store.id to -844323995
--- PASS: TestEmptyRoot (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer/leveldb       0.021s
=== RUN   TestCreateAndFind
I0507 00:09:00.186309 leveldb2_store.go:43 filer store leveldb2 dir: /tmp/TestCreateAndFind965985501/001
I0507 00:09:00.186463 file_util.go:27 Folder /tmp/TestCreateAndFind965985501/001 Permission: -rwxr-xr-x
I0507 00:09:00.187483 filer.go:155 create filer.store.id to -909105283
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0507 00:09:00.189961 leveldb2_store.go:43 filer store leveldb2 dir: /tmp/TestEmptyRoot1663490548/001
I0507 00:09:00.189974 file_util.go:27 Folder /tmp/TestEmptyRoot1663490548/001 Permission: -rwxr-xr-x
I0507 00:09:00.190636 filer.go:155 create filer.store.id to -3914623
--- PASS: TestEmptyRoot (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer/leveldb2      0.021s
=== RUN   TestCreateAndFind
I0507 00:09:00.187711 leveldb3_store.go:50 filer store leveldb3 dir: /tmp/TestCreateAndFind2713983055/001
I0507 00:09:00.187858 file_util.go:27 Folder /tmp/TestCreateAndFind2713983055/001 Permission: -rwxr-xr-x
I0507 00:09:00.188990 filer.go:155 create filer.store.id to 1945176
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0507 00:09:00.190412 leveldb3_store.go:50 filer store leveldb3 dir: /tmp/TestEmptyRoot3498794763/001
I0507 00:09:00.190423 file_util.go:27 Folder /tmp/TestEmptyRoot3498794763/001 Permission: -rwxr-xr-x
I0507 00:09:00.190907 filer.go:155 create filer.store.id to 1506910640
--- PASS: TestEmptyRoot (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer/leveldb3      0.021s
?       github.com/seaweedfs/seaweedfs/weed/filer/mongodb       [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/mysql [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/mysql2        [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/postgres      [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/postgres2     [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/redis [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/redis2        [no test files]
testing: warning: no tests to run
PASS
ok      github.com/seaweedfs/seaweedfs/weed/filer/redis3        0.007s [no tests to run]
?       github.com/seaweedfs/seaweedfs/weed/filer/redis_lua     [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/redis_lua/stored_procedure    [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/sqlite        [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/store_test    [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/tarantool     [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/tikv  [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer/ydb   [no test files]
?       github.com/seaweedfs/seaweedfs/weed/filer_client        [no test files]
?
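The filer tests above (TestIntervalMerging, TestReadResolvedChunks) exercise the core idea behind SeaweedFS file chunking: writes land as half-open chunk intervals, and a later chunk shadows whatever older chunks it overlaps, leaving a list of non-overlapping "visible" intervals. A simplified sketch of that resolution (not the actual weed/filer implementation; types and names here are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// chunk is a write covering the half-open byte range [start, stop).
type chunk struct {
	start, stop int64
	fileID      string
	ts          int64 // modification time; later chunks shadow earlier ones
}

type visible struct {
	start, stop int64
	fileID      string
	ts          int64
}

// resolve applies chunks in ts order; each newer chunk overlays the
// intervals built so far, clipping whatever it overlaps.
func resolve(chunks []chunk) []visible {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].ts < chunks[j].ts })
	var vis []visible
	for _, c := range chunks {
		var next []visible
		for _, v := range vis {
			if v.start < c.start { // left remainder survives
				next = append(next, visible{v.start, min(v.stop, c.start), v.fileID, v.ts})
			}
			if v.stop > c.stop { // right remainder survives
				next = append(next, visible{max(v.start, c.stop), v.stop, v.fileID, v.ts})
			}
		}
		next = append(next, visible{c.start, c.stop, c.fileID, c.ts})
		sort.Slice(next, func(i, j int) bool { return next[i].start < next[j].start })
		vis = next
	}
	return vis
}

func main() {
	// Mirrors the shape of the logged cases: chunk "a" at [0,100) written
	// first, then chunk "b" at [50,150) shadowing its tail.
	for _, v := range resolve([]chunk{
		{0, 100, "a", 1},
		{50, 150, "b", 2},
	}) {
		fmt.Printf("[%d,%d) %s %d\n", v.start, v.stop, v.fileID, v.ts)
	}
	// prints:
	// [0,50) a 1
	// [50,150) b 2
}
```

This is why the log lines read "[0,50) a 1 / [50,150) b 2": the newer chunk wins over the overlapped region, and only the non-overlapped prefix of the older chunk stays visible.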
github.com/seaweedfs/seaweedfs/weed/ftpd        [no test files]
=== RUN   TestShortHostname
--- PASS: TestShortHostname (0.00s)
=== RUN   TestInfo
I0507 00:09:01.050071 glog_test.go:92 test
--- PASS: TestInfo (0.00s)
=== RUN   TestInfoDepth
I0507 00:09:01.050125 glog_test.go:109 depth-test0
I0507 00:09:01.050128 glog_test.go:110 depth-test1
--- PASS: TestInfoDepth (0.00s)
=== RUN   TestCopyStandardLogToPanic
--- PASS: TestCopyStandardLogToPanic (0.00s)
=== RUN   TestStandardLog
I0507 00:09:01.050149 glog_test.go:163 test
--- PASS: TestStandardLog (0.00s)
=== RUN   TestHeader
I0102 15:04:05.067890 glog_test.go:181 test
--- PASS: TestHeader (0.00s)
=== RUN   TestError
E0507 00:09:01.050186 glog_test.go:202 test
--- PASS: TestError (0.00s)
=== RUN   TestWarning
W0507 00:09:01.050195 glog_test.go:224 test
--- PASS: TestWarning (0.00s)
=== RUN   TestV
I0507 00:09:01.050201 glog_test.go:243 test
--- PASS: TestV (0.00s)
=== RUN   TestVmoduleOn
I0507 00:09:01.050214 glog_test.go:267 test
--- PASS: TestVmoduleOn (0.00s)
=== RUN   TestVmoduleOff
--- PASS: TestVmoduleOff (0.00s)
=== RUN   TestVmoduleGlob
--- PASS: TestVmoduleGlob (0.00s)
=== RUN   TestRollover
I0507 00:09:01.050250 glog_test.go:339 x
I0507 00:09:01.050440 glog_test.go:348 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
I0507 00:09:02.053738 glog_test.go:361 x
--- PASS: TestRollover (1.00s)
=== RUN   TestLogBacktraceAt
I0507 00:09:02.053869 glog_test.go:395 we want a stack trace here
goroutine 21 [running]:
github.com/seaweedfs/seaweedfs/weed/glog.stacks(0x0)
        /startdir/src/seaweedfs-3.87/weed/glog/glog.go:768 +0x85
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).output(0x6ed860, 0x0, 0xc0000de1c0, {0x60bc6e?, 0x1?}, 0x0?, 0x0)
        /startdir/src/seaweedfs-3.87/weed/glog/glog.go:677 +0xe5
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).printDepth(0x6ed860, 0x0, 0xc000092e90?, {0xc000092e30, 0x1, 0x1})
        /startdir/src/seaweedfs-3.87/weed/glog/glog.go:648 +0xea
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).print(...)
        /startdir/src/seaweedfs-3.87/weed/glog/glog.go:639
github.com/seaweedfs/seaweedfs/weed/glog.Info(...)
        /startdir/src/seaweedfs-3.87/weed/glog/glog.go:1061
github.com/seaweedfs/seaweedfs/weed/glog.TestLogBacktraceAt(0xc0000e7c00)
        /startdir/src/seaweedfs-3.87/weed/glog/glog_test.go:395 +0x438
testing.tRunner(0xc0000e7c00, 0x5aade8)
        /usr/lib/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
        /usr/lib/go/src/testing/testing.go:1851 +0x413
--- PASS: TestLogBacktraceAt (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/glog        1.006s
=== RUN   TestGetActionsUserPath
--- PASS: TestGetActionsUserPath (0.00s)
=== RUN   TestGetActionsWildcardPath
--- PASS: TestGetActionsWildcardPath (0.00s)
=== RUN   TestGetActionsInvalidAction
--- PASS: TestGetActionsInvalidAction (0.00s)
=== RUN   TestCreateUser
--- PASS: TestCreateUser (0.00s)
=== RUN   TestListUsers
--- PASS: TestListUsers (0.00s)
=== RUN   TestListAccessKeys
--- PASS: TestListAccessKeys (0.00s)
=== RUN   TestGetUser
--- PASS: TestGetUser (0.00s)
=== RUN   TestCreatePolicy
--- PASS: TestCreatePolicy (0.00s)
=== RUN   TestPutUserPolicy
--- PASS: TestPutUserPolicy (0.00s)
=== RUN   TestPutUserPolicyError
E0507 00:09:01.620754 iamapi_management_handlers.go:508 PutUserPolicy: the user with name InvalidUser cannot be found
E0507 00:09:01.620914 iamapi_handlers.go:29 Response the user with name InvalidUser cannot be found
--- PASS: TestPutUserPolicyError (0.00s)
=== RUN   TestGetUserPolicy
--- PASS: TestGetUserPolicy (0.00s)
=== RUN   TestUpdateUser
--- PASS: TestUpdateUser (0.00s)
=== RUN   TestDeleteUser
--- PASS: TestDeleteUser (0.00s)
=== RUN   TestHandleImplicitUsername
--- PASS: TestHandleImplicitUsername (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/iamapi      0.017s
=== RUN   TestCropping
--- PASS: TestCropping (0.06s)
=== RUN   TestXYZ
--- PASS: TestXYZ (0.32s)
=== RUN   TestResizing
--- PASS: TestResizing (0.02s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/images      0.411s
=== RUN   TestInodeEntry_removeOnePath
=== RUN   TestInodeEntry_removeOnePath/actual_case
=== RUN   TestInodeEntry_removeOnePath/empty
=== RUN   TestInodeEntry_removeOnePath/single
=== RUN   TestInodeEntry_removeOnePath/first
=== RUN   TestInodeEntry_removeOnePath/middle
=== RUN   TestInodeEntry_removeOnePath/last
=== RUN   TestInodeEntry_removeOnePath/not_found
--- PASS: TestInodeEntry_removeOnePath (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/actual_case (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/empty (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/single (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/first (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/middle (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/last (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/not_found (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/mount       0.009s
?       github.com/seaweedfs/seaweedfs/weed/mount/meta_cache    [no test files]
=== RUN   Test_PageChunkWrittenIntervalList
--- PASS: Test_PageChunkWrittenIntervalList (0.00s)
=== RUN   Test_PageChunkWrittenIntervalList1
--- PASS: Test_PageChunkWrittenIntervalList1 (0.00s)
=== RUN   TestUploadPipeline
--- PASS: TestUploadPipeline (18.51s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/mount/page_writer   18.519s
?       github.com/seaweedfs/seaweedfs/weed/mount/unmount       [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/agent    [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/broker   [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/client/agent_client      [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/client/pub_client        [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/client/sub_client        [no test files]
?       github.com/seaweedfs/seaweedfs/weed/mq/logstore [no test files]
=== RUN   Test_allocateOneBroker
=== RUN   Test_allocateOneBroker/test_only_one_broker
I0507 00:09:01.833718 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 1, assignments: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1746576541833700708}]
I0507 00:09:01.834215 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 1, assignments: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1746576541833700708} leader_broker:"localhost:17777"] hasChanges: true
I0507 00:09:01.834247 allocate.go:33 allocate topic partitions 1: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1746576541833700708} leader_broker:"localhost:17777"]
--- PASS: Test_allocateOneBroker (0.00s)
    --- PASS: Test_allocateOneBroker/test_only_one_broker (0.00s)
=== RUN   TestEnsureAssignmentsToActiveBrokersX
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_empty_leader
test empty leader before [partition:{} follower_broker:"localhost:2"]
I0507 00:09:01.834346 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} follower_broker:"localhost:2"]
I0507 00:09:01.834444 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:4" follower_broker:"localhost:2"] hasChanges: true
test empty leader after [partition:{} leader_broker:"localhost:4" follower_broker:"localhost:2"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_empty_follower
test empty follower before [partition:{} leader_broker:"localhost:1"]
I0507 00:09:01.834463 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1"]
I0507 00:09:01.834567 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:6"] hasChanges: true
test empty follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:6"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_dead_follower
test dead follower before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:200"]
I0507 00:09:01.834652 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:200"]
I0507 00:09:01.834722 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:3"] hasChanges: true
test dead follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:3"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_dead_leader_and_follower
test dead leader and follower before [partition:{} leader_broker:"localhost:100" follower_broker:"localhost:200"]
I0507 00:09:01.834750 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:100" follower_broker:"localhost:200"]
I0507 00:09:01.834807 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:6" follower_broker:"localhost:2"] hasChanges: true
test dead leader and follower after [partition:{} leader_broker:"localhost:6" follower_broker:"localhost:2"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers
test low active brokers before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0507 00:09:01.834831 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0507 00:09:01.834889 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"] hasChanges: false
test low active brokers after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers_with_one_follower
test low active brokers with one follower before [partition:{} leader_broker:"localhost:1"]
I0507 00:09:01.834942 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1"]
I0507 00:09:01.835004 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"] hasChanges: true
test low active brokers with one follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_single_active_broker
test single active broker before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0507 00:09:01.835060 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0507 00:09:01.835125 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"] hasChanges: true
test single active broker after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"]
--- PASS: TestEnsureAssignmentsToActiveBrokersX (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_empty_leader (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_empty_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_dead_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_dead_leader_and_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers_with_one_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_single_active_broker (0.00s)
=== RUN   TestBalanceTopicPartitionOnBrokers
=== RUN   TestBalanceTopicPartitionOnBrokers/test
--- PASS: TestBalanceTopicPartitionOnBrokers (0.00s)
    --- PASS: TestBalanceTopicPartitionOnBrokers/test (0.00s)
=== RUN   Test_findMissingPartitions
=== RUN   Test_findMissingPartitions/one_partition
=== RUN   Test_findMissingPartitions/two_partitions
=== RUN   Test_findMissingPartitions/four_partitions,_missing_last_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_first_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_middle_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_three
--- PASS: Test_findMissingPartitions (0.00s)
    --- PASS: Test_findMissingPartitions/one_partition (0.00s)
    --- PASS: Test_findMissingPartitions/two_partitions (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_last_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_first_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_middle_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_three (0.00s)
PASS
ok      github.com/seaweedfs/seaweedfs/weed/mq/pub_balancer     0.010s
=== RUN   TestEnumScalarType
=== RUN   TestEnumScalarType/Boolean
=== RUN   TestEnumScalarType/Integer
=== RUN   TestEnumScalarType/Long
=== RUN   TestEnumScalarType/Float
=== RUN   TestEnumScalarType/Double
=== RUN   TestEnumScalarType/Bytes
=== RUN   TestEnumScalarType/String
--- PASS: TestEnumScalarType (0.00s)
    --- PASS: TestEnumScalarType/Boolean (0.00s)
    --- PASS: TestEnumScalarType/Integer (0.00s)
    --- PASS: TestEnumScalarType/Long (0.00s)
    --- PASS: TestEnumScalarType/Float (0.00s)
    --- PASS: TestEnumScalarType/Double (0.00s)
    --- PASS: TestEnumScalarType/Bytes (0.00s)
    --- PASS: TestEnumScalarType/String (0.00s)
=== RUN   TestField
--- PASS: TestField (0.00s)
=== RUN   TestRecordType
fields: < name: "field_key" field_index: 1 type: < scalar_type: INT32 > > fields: < name: "field_record" field_index: 2 type: < record_type: < fields: < name: "field_1" field_index: 1 type: < scalar_type: INT32 > > fields: < name: "field_2" field_index: 2 type: < scalar_type: STRING > > > > >
{"fields":[{"name":"field_key","field_index":1,"type":{"Kind":{"ScalarType":1}}},{"name":"field_record","field_index":2,"type":{"Kind":{"RecordType":{"fields":[{"name":"field_1","field_index":1,"type":{"Kind":{"ScalarType":1}}},{"name":"field_2","field_index":2,"type":{"Kind":{"ScalarType":7}}}]}}}}]}
--- PASS: TestRecordType (0.00s)
=== RUN   TestStructToSchema
=== RUN   TestStructToSchema/scalar_type
=== RUN   TestStructToSchema/simple_struct_type
=== RUN   TestStructToSchema/simple_list
=== RUN   TestStructToSchema/simple_[]byte
=== RUN   TestStructToSchema/nested_simpe_structs
=== RUN   TestStructToSchema/nested_struct_type
--- PASS: TestStructToSchema (0.00s)
    --- PASS: TestStructToSchema/scalar_type (0.00s)
    --- PASS: TestStructToSchema/simple_struct_type (0.00s)
    --- PASS: TestStructToSchema/simple_list (0.00s)
    --- PASS: TestStructToSchema/simple_[]byte (0.00s)
    --- PASS: TestStructToSchema/nested_simpe_structs (0.00s)
    --- PASS: TestStructToSchema/nested_struct_type (0.00s)
=== RUN   TestToParquetLevels
=== RUN   TestToParquetLevels/nested_type
--- PASS: TestToParquetLevels (0.00s)
    --- PASS: TestToParquetLevels/nested_type (0.00s)
=== RUN   TestWriteReadParquet
RecordType: fields:{name:"Address" type:{record_type:{fields:{name:"City" type:{scalar_type:STRING}} fields:{name:"Street" type:{scalar_type:STRING}}}}} fields:{name:"Company" type:{scalar_type:STRING}} fields:{name:"CreatedAt" type:{scalar_type:INT64}} fields:{name:"ID"
type:{scalar_type:INT64}} fields:{name:"Person" type:{record_type:{fields:{name:"emails" type:{list_type:{element_type:{scalar_type:STRING}}}} fields:{name:"zName" type:{scalar_type:STRING}}}}} ParquetSchema: message example { optional group Address { optional binary City; optional binary Street; } optional binary Company; optional int64 CreatedAt; optional int64 ID; optional group Person { repeated binary emails; optional binary zName; } } Go Type: struct { Address *struct { City *[]uint8; Street *[]uint8 }; Company *[]uint8; CreatedAt *int64; ID *int64; Person *struct { Emails []*[]uint8; ZName *[]uint8 } } Write RecordValue: fields:{key:"Company" value:{string_value:"company_0"}} fields:{key:"CreatedAt" value:{int64_value:2}} fields:{key:"ID" value:{int64_value:1}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_0@a.com"} values:{string_value:"john_0@b.com"} values:{string_value:"john_0@c.com"} values:{string_value:"john_0@d.com"} values:{string_value:"john_0@e.com"}}}} fields:{key:"zName" value:{string_value:"john_0"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_0 C:3 D:1 R:0 V:2 C:4 D:1 R:0 V:1 C:5 D:2 R:0 V:john_0@a.com C:5 D:2 R:1 V:john_0@b.com C:5 D:2 R:1 V:john_0@c.com C:5 D:2 R:1 V:john_0@d.com C:5 D:2 R:1 V:john_0@e.com C:6 D:2 R:0 V:john_0] Write RecordValue: fields:{key:"Company" value:{string_value:"company_1"}} fields:{key:"CreatedAt" value:{int64_value:4}} fields:{key:"ID" value:{int64_value:2}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_1@a.com"} values:{string_value:"john_1@b.com"} values:{string_value:"john_1@c.com"} values:{string_value:"john_1@d.com"} values:{string_value:"john_1@e.com"}}}} fields:{key:"zName" value:{string_value:"john_1"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_1 C:3 D:1 R:0 V:4 C:4 D:1 R:0 V:2 C:5 D:2 R:0 V:john_1@a.com C:5 D:2 R:1 V:john_1@b.com C:5 
D:2 R:1 V:john_1@c.com C:5 D:2 R:1 V:john_1@d.com C:5 D:2 R:1 V:john_1@e.com C:6 D:2 R:0 V:john_1] Write RecordValue: fields:{key:"Company" value:{string_value:"company_2"}} fields:{key:"CreatedAt" value:{int64_value:6}} fields:{key:"ID" value:{int64_value:3}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_2@a.com"} values:{string_value:"john_2@b.com"} values:{string_value:"john_2@c.com"} values:{string_value:"john_2@d.com"} values:{string_value:"john_2@e.com"}}}} fields:{key:"zName" value:{string_value:"john_2"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_2 C:3 D:1 R:0 V:6 C:4 D:1 R:0 V:3 C:5 D:2 R:0 V:john_2@a.com C:5 D:2 R:1 V:john_2@b.com C:5 D:2 R:1 V:john_2@c.com C:5 D:2 R:1 V:john_2@d.com C:5 D:2 R:1 V:john_2@e.com C:6 D:2 R:0 V:john_2] Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_0"}} fields:{key:"CreatedAt" value:{int64_value:2}} fields:{key:"ID" value:{int64_value:1}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_0@a.com"} values:{string_value:"john_0@b.com"} values:{string_value:"john_0@c.com"} values:{string_value:"john_0@d.com"} values:{string_value:"john_0@e.com"}}}} fields:{key:"zName" value:{string_value:"john_0"}}}}} Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_1"}} fields:{key:"CreatedAt" value:{int64_value:4}} fields:{key:"ID" value:{int64_value:2}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_1@a.com"} values:{string_value:"john_1@b.com"} values:{string_value:"john_1@c.com"} values:{string_value:"john_1@d.com"} 
values:{string_value:"john_1@e.com"}}}} fields:{key:"zName" value:{string_value:"john_1"}}}}} Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_2"}} fields:{key:"CreatedAt" value:{int64_value:6}} fields:{key:"ID" value:{int64_value:3}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_2@a.com"} values:{string_value:"john_2@b.com"} values:{string_value:"john_2@c.com"} values:{string_value:"john_2@d.com"} values:{string_value:"john_2@e.com"}}}} fields:{key:"zName" value:{string_value:"john_2"}}}}} total: 3 --- PASS: TestWriteReadParquet (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/schema 0.007s === RUN TestMessageSerde serialized size 368 --- PASS: TestMessageSerde (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/segment 0.002s === RUN TestRingBuffer --- PASS: TestRingBuffer (0.00s) === RUN TestInflightMessageTracker --- PASS: TestInflightMessageTracker (0.00s) === RUN TestInflightMessageTracker2 --- PASS: TestInflightMessageTracker2 (0.00s) === RUN TestInflightMessageTracker3 --- PASS: TestInflightMessageTracker3 (0.00s) === RUN TestInflightMessageTracker4 --- PASS: TestInflightMessageTracker4 (0.00s) === RUN TestAddConsumerInstance &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13978829128807958568 ext:505983635 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13978829128807961604 ext:505986671 loc:0x1ab86c0}} --- PASS: TestAddConsumerInstance (1.00s) === RUN TestMultipleConsumerInstances &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13978829129883731909 ext:1508015162 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} 
consumer:second ts:{wall:13978829129883734964 ext:1508018218 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:2 RangeStop:3 RingSize:3 UnixTimeNs:0} consumer:third ts:{wall:13978829129883735906 ext:1508019149 loc:0x1ab86c0}} --- PASS: TestMultipleConsumerInstances (1.00s) === RUN TestConfirmAdjustment &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} consumer:second ts:{wall:13978829130959445535 ext:2509986964 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13978829130959449492 ext:2509990922 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:2 RangeStop:3 RingSize:3 UnixTimeNs:0} consumer:third ts:{wall:13978829130959450735 ext:2509992154 loc:0x1ab86c0}} &{isAssign:true partition:{RangeStart:2 RangeStop:3 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13978829133109482790 ext:4512540571 loc:0x1ab86c0}} --- PASS: TestConfirmAdjustment (5.00s) === RUN Test_doBalanceSticky === RUN Test_doBalanceSticky/1_consumer_instance,_1_partition === RUN Test_doBalanceSticky/2_consumer_instances,_1_partition === RUN Test_doBalanceSticky/1_consumer_instance,_2_partitions === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_deleted_consumer_instance === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_consumer_instance === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition,_1_new_consumer_instance --- PASS: Test_doBalanceSticky (0.00s) --- PASS: Test_doBalanceSticky/1_consumer_instance,_1_partition (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_1_partition (0.00s) --- PASS: Test_doBalanceSticky/1_consumer_instance,_2_partitions (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions (0.00s) --- PASS: 
Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_deleted_consumer_instance (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_consumer_instance (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition,_1_new_consumer_instance (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/sub_coordinator 7.012s ? github.com/seaweedfs/seaweedfs/weed/mq/topic [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/aws_sqs [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/gocdk_pub_sub [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/google_pub_sub [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/kafka [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/log [no test files] === RUN TestCaching vid 123 locations = [{a.com:8080 0}] --- PASS: TestCaching (2.01s) === RUN TestCreateNeedleFromRequest needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0507 00:09:04.282361 upload_content.go:190 uploading 0 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0507 00:09:04.756769 upload_content.go:190 uploading 1 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0507 00:09:05.470593 upload_content.go:190 uploading 2 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to 
http://localhost:8080/389,0f084d17353afda0: EOF err: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF uploadResult: needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0507 00:09:05.470757 upload_content.go:190 uploading 0 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0507 00:09:05.947182 upload_content.go:190 uploading 1 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0507 00:09:06.660602 upload_content.go:190 uploading 2 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF --- PASS: TestCreateNeedleFromRequest (2.38s) PASS ok github.com/seaweedfs/seaweedfs/weed/operation 4.393s === RUN TestJsonpMarshalUnmarshal marshalled: { "backendType": "aws", "backendId": "", "key": "", "offset": "0", "fileSize": "12", "modifiedTime": "0", "extension": "" } unmarshalled: backend_type:"aws" backend_id:"temp" file_size:12 --- PASS: TestJsonpMarshalUnmarshal (0.00s) === RUN TestServerAddresses_ToAddressMapOrSrv_shouldRemovePrefix --- PASS: TestServerAddresses_ToAddressMapOrSrv_shouldRemovePrefix (0.00s) === RUN TestServerAddresses_ToAddressMapOrSrv_shouldHandleIPPortList --- PASS: TestServerAddresses_ToAddressMapOrSrv_shouldHandleIPPortList (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/pb 0.004s === RUN TestFileIdSize 24 14 --- PASS: TestFileIdSize (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/pb/filer_pb 0.007s ? github.com/seaweedfs/seaweedfs/weed/pb/iam_pb [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/pb/master_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/message_fbs [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mount_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mq_agent_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mq_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/remote_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/s3_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/schema_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb [no test files] === RUN TestGjson { "quiz": { "sport": { "q1": { "question": "Which one is correct team name in NBA?", "options": [ "New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket" ], "answer": "Huston Rocket" } }, "maths": { "q1": { "question": "5 + 7 = ?", "options": [ "10", "11", "12", "13" ], "answer": "12" }, "q2": { "question": "12 - 8 = ?", "options": [ "1", "2", "3", "4" ], "answer": "4" } } } } +++++++++++ 12 5 { "sport": { "q1": { "question": "Which one is correct team name in NBA?", "options": [ "New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket" ], "answer": "Huston Rocket" } }, "maths": { "q1": { "question": "5 + 7 = ?", "options": [ "10", "11", "12", "13" ], "answer": "12" }, "q2": { "question": "12 - 8 = ?", "options": [ "1", "2", "3", "4" ], "answer": "4" } } } 0 0 ----------- { "fruit": "Apple", "size": "Large", "quiz": "Red" } +++++++++++ 51 3 Red 13 3 Apple ----------- --- PASS: TestGjson (0.00s) === RUN TestJsonQueryRow {fruit:"Bl\"ue",size:6} --- PASS: TestJsonQueryRow (0.00s) === RUN TestJsonQueryNumber {fruit:"Bl\"ue",quiz:"green"} --- PASS: TestJsonQueryNumber (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/query/json 0.001s ? github.com/seaweedfs/seaweedfs/weed/query/sqltypes [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/remote_storage/azure [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage/gcs [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage/s3 [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/repl_util [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/azuresink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/b2sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/filersink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/gcssink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/localsink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/s3sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/source [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sub [no test files] === RUN TestIdentityListFileFormat { "identities": [ { "name": "some_name", "credentials": [ { "accessKey": "some_access_key1", "secretKey": "some_secret_key2" } ], "actions": [ "Admin", "Read", "Write" ], "account": null }, { "name": "some_read_only_user", "credentials": [ { "accessKey": "some_access_key1", "secretKey": "some_secret_key1" } ], "actions": [ "Read" ], "account": null }, { "name": "some_normal_user", "credentials": [ { "accessKey": "some_access_key2", "secretKey": "some_secret_key2" } ], "actions": [ "Read", "Write" ], "account": null } ], "accounts": [] } --- PASS: TestIdentityListFileFormat (0.00s) === RUN TestCanDo --- PASS: TestCanDo (0.00s) === RUN TestLoadS3ApiConfiguration --- PASS: TestLoadS3ApiConfiguration (0.00s) === RUN TestIsRequestPresignedSignatureV4 --- PASS: TestIsRequestPresignedSignatureV4 (0.00s) === RUN TestIsReqAuthenticated --- PASS: TestIsReqAuthenticated (0.00s) === RUN 
TestCheckaAnonymousRequestAuthType --- PASS: TestCheckaAnonymousRequestAuthType (0.00s) === RUN TestCheckAdminRequestAuthType --- PASS: TestCheckAdminRequestAuthType (0.00s) === RUN TestGetStringToSignPUT --- PASS: TestGetStringToSignPUT (0.00s) === RUN TestGetStringToSignGETEmptyStringHash --- PASS: TestGetStringToSignGETEmptyStringHash (0.00s) === RUN TestBuildBucketMetadata W0507 00:09:04.348373 bucket_metadata.go:105 Invalid ownership: , bucket: ownershipEmptyStr W0507 00:09:04.348714 bucket_metadata.go:116 owner[id=xxxxx] is invalid, bucket: acpEmptyObject --- PASS: TestBuildBucketMetadata (0.00s) === RUN TestGetBucketMetadata --- PASS: TestGetBucketMetadata (1.00s) === RUN TestNewSignV4ChunkedReaderstreamingAws4HmacSha256Payload --- PASS: TestNewSignV4ChunkedReaderstreamingAws4HmacSha256Payload (0.00s) === RUN TestNewSignV4ChunkedReaderStreamingUnsignedPayloadTrailer --- PASS: TestNewSignV4ChunkedReaderStreamingUnsignedPayloadTrailer (0.00s) === RUN TestInitiateMultipartUploadResult --- PASS: TestInitiateMultipartUploadResult (0.00s) === RUN TestListPartsResult --- PASS: TestListPartsResult (0.00s) === RUN Test_parsePartNumber === RUN Test_parsePartNumber/first === RUN Test_parsePartNumber/second --- PASS: Test_parsePartNumber (0.00s) --- PASS: Test_parsePartNumber/first (0.00s) --- PASS: Test_parsePartNumber/second (0.00s) === RUN TestGetAccountId --- PASS: TestGetAccountId (0.00s) === RUN TestExtractAcl --- PASS: TestExtractAcl (0.00s) === RUN TestParseAndValidateAclHeaders W0507 00:09:05.352045 s3api_acl_helper.go:292 invalid canonical grantee! account id[notExistsAccount] is not exists W0507 00:09:05.352050 s3api_acl_helper.go:281 invalid group grantee! 
group name[http:sfasf] is not valid --- PASS: TestParseAndValidateAclHeaders (0.00s) === RUN TestDetermineReqGrants --- PASS: TestDetermineReqGrants (0.00s) === RUN TestAssembleEntryWithAcp --- PASS: TestAssembleEntryWithAcp (0.00s) === RUN TestGrantEquals --- PASS: TestGrantEquals (0.00s) === RUN TestSetAcpOwnerHeader --- PASS: TestSetAcpOwnerHeader (0.00s) === RUN TestSetAcpGrantsHeader --- PASS: TestSetAcpGrantsHeader (0.00s) === RUN TestListBucketsHandler --- PASS: TestListBucketsHandler (0.00s) === RUN TestLimit --- PASS: TestLimit (0.00s) === RUN TestProcessMetadata --- PASS: TestProcessMetadata (0.00s) === RUN TestProcessMetadataBytes --- PASS: TestProcessMetadataBytes (0.00s) === RUN TestListObjectsHandler --- PASS: TestListObjectsHandler (0.00s) === RUN Test_normalizePrefixMarker === RUN Test_normalizePrefixMarker/prefix_is_a_directory === RUN Test_normalizePrefixMarker/normal_case === RUN Test_normalizePrefixMarker/empty_prefix === RUN Test_normalizePrefixMarker/empty_directory --- PASS: Test_normalizePrefixMarker (0.00s) --- PASS: Test_normalizePrefixMarker/prefix_is_a_directory (0.00s) --- PASS: Test_normalizePrefixMarker/normal_case (0.00s) --- PASS: Test_normalizePrefixMarker/empty_prefix (0.00s) --- PASS: Test_normalizePrefixMarker/empty_directory (0.00s) === RUN TestRemoveDuplicateSlashes === RUN TestRemoveDuplicateSlashes/empty === RUN TestRemoveDuplicateSlashes/slash === RUN TestRemoveDuplicateSlashes/object === RUN TestRemoveDuplicateSlashes/correct_path === RUN TestRemoveDuplicateSlashes/path_with_duplicates --- PASS: TestRemoveDuplicateSlashes (0.00s) --- PASS: TestRemoveDuplicateSlashes/empty (0.00s) --- PASS: TestRemoveDuplicateSlashes/slash (0.00s) --- PASS: TestRemoveDuplicateSlashes/object (0.00s) --- PASS: TestRemoveDuplicateSlashes/correct_path (0.00s) --- PASS: TestRemoveDuplicateSlashes/path_with_duplicates (0.00s) === RUN TestS3ApiServer_toFilerUrl === RUN TestS3ApiServer_toFilerUrl/simple === RUN 
TestS3ApiServer_toFilerUrl/double_prefix === RUN TestS3ApiServer_toFilerUrl/triple_prefix === RUN TestS3ApiServer_toFilerUrl/empty_prefix --- PASS: TestS3ApiServer_toFilerUrl (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/simple (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/double_prefix (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/triple_prefix (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/empty_prefix (0.00s) === RUN TestCopyObjectResponse 2025-05-07T00:09:05.353752639Z12345678 --- PASS: TestCopyObjectResponse (0.00s) === RUN TestCopyPartResponse 2025-05-07T00:09:05.353764601Z12345678 --- PASS: TestCopyPartResponse (0.00s) === RUN TestXMLUnmarshall --- PASS: TestXMLUnmarshall (0.00s) === RUN TestXMLMarshall --- PASS: TestXMLMarshall (0.00s) === RUN TestValidateTags --- PASS: TestValidateTags (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api 1.029s === RUN TestPostPolicyForm --- PASS: TestPostPolicyForm (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api/policy 0.007s ? github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants [no test files] === RUN Test_verifyBucketName --- PASS: Test_verifyBucketName (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api/s3bucket 0.003s ? github.com/seaweedfs/seaweedfs/weed/s3api/s3err [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/security [no test files] === RUN TestSequencer I0507 00:09:04.333949 snowflake_sequencer.go:21 use snowflake seq id generator, nodeid:for_test hex_of_nodeid: 1 1aa4e183a3401000 1aa4e183a3401001 1aa4e183a3401002 1aa4e183a3401003 1aa4e183a3401004 1aa4e183a3401005 1aa4e183a3401006 1aa4e183a3401007 1aa4e183a3401008 1aa4e183a3401009 1aa4e183a340100a 1aa4e183a340100b 1aa4e183a340100c 1aa4e183a340100d 1aa4e183a340100e 1aa4e183a340100f 1aa4e183a3401010 1aa4e183a3401011 1aa4e183a3401012 1aa4e183a3401013 1aa4e183a3401014 1aa4e183a3401015 1aa4e183a3401016 1aa4e183a3401017 1aa4e183a3401018 1aa4e183a3401019 1aa4e183a340101a 1aa4e183a340101b 1aa4e183a340101c 1aa4e183a340101d 1aa4e183a340101e 1aa4e183a340101f 1aa4e183a3401020 1aa4e183a3401021 1aa4e183a3401022 1aa4e183a3401023 1aa4e183a3401024 1aa4e183a3401025 1aa4e183a3401026 1aa4e183a3401027 1aa4e183a3401028 1aa4e183a3401029 1aa4e183a340102a 1aa4e183a340102b 1aa4e183a340102c 1aa4e183a340102d 1aa4e183a340102e 1aa4e183a340102f 1aa4e183a3401030 1aa4e183a3401031 1aa4e183a3401032 1aa4e183a3401033 1aa4e183a3401034 1aa4e183a3401035 1aa4e183a3401036 1aa4e183a3401037 1aa4e183a3401038 1aa4e183a3401039 1aa4e183a340103a 1aa4e183a340103b 1aa4e183a340103c 1aa4e183a340103d 1aa4e183a340103e 1aa4e183a340103f 1aa4e183a3401040 1aa4e183a3401041 1aa4e183a3401042 1aa4e183a3401043 1aa4e183a3401044 1aa4e183a3401045 1aa4e183a3401046 1aa4e183a3401047 1aa4e183a3401048 1aa4e183a3401049 1aa4e183a340104a 1aa4e183a340104b 1aa4e183a340104c 1aa4e183a340104d 1aa4e183a340104e 1aa4e183a340104f 1aa4e183a3401050 1aa4e183a3401051 1aa4e183a3401052 1aa4e183a3401053 1aa4e183a3401054 1aa4e183a3401055 1aa4e183a3401056 1aa4e183a3401057 1aa4e183a3401058 1aa4e183a3401059 1aa4e183a340105a 1aa4e183a340105b 1aa4e183a340105c 1aa4e183a340105d 1aa4e183a340105e 1aa4e183a340105f 1aa4e183a3401060 1aa4e183a3401061 1aa4e183a3401062 1aa4e183a3401063 --- PASS: TestSequencer (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/sequence 0.008s === RUN 
TestParseURL --- PASS: TestParseURL (0.00s) === RUN TestPtrie matched1 /topics/abc matched1 /topics/abc/d matched2 /topics/abc matched2 /topics/abc/d --- PASS: TestPtrie (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/server 0.022s ? github.com/seaweedfs/seaweedfs/weed/server/constants [no test files] === RUN TestToBreadcrumb === RUN TestToBreadcrumb/empty === RUN TestToBreadcrumb/test1 === RUN TestToBreadcrumb/test2 === RUN TestToBreadcrumb/test3 --- PASS: TestToBreadcrumb (0.00s) --- PASS: TestToBreadcrumb/empty (0.00s) --- PASS: TestToBreadcrumb/test1 (0.00s) --- PASS: TestToBreadcrumb/test2 (0.00s) --- PASS: TestToBreadcrumb/test3 (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/server/filer_ui 0.008s ? github.com/seaweedfs/seaweedfs/weed/server/master_ui [no test files] ? github.com/seaweedfs/seaweedfs/weed/server/volume_server_ui [no test files] === RUN TestCollectCollectionsForVolumeIds --- PASS: TestCollectCollectionsForVolumeIds (0.00s) === RUN TestParseReplicaPlacementArg using master default replica placement "123" for EC volumes using replica placement "021" for EC volumes --- PASS: TestParseReplicaPlacementArg (0.00s) === RUN TestEcDistribution => 192.168.1.5:8080 27010 => 192.168.1.6:8080 17420 => 192.168.1.1:8080 17330 => 192.168.1.4:8080 1900 => 192.168.1.2:8080 1540 --- PASS: TestEcDistribution (0.00s) === RUN TestPickRackToBalanceShardsInto --- PASS: TestPickRackToBalanceShardsInto (0.00s) === RUN TestPickEcNodeToBalanceShardsInto --- PASS: TestPickEcNodeToBalanceShardsInto (0.00s) === RUN TestCountFreeShardSlots === RUN TestCountFreeShardSlots/topology_#1,_free_HDD_shards === RUN TestCountFreeShardSlots/topology_#1,_no_free_SSD_shards_available === RUN TestCountFreeShardSlots/topology_#2,_no_negative_free_HDD_shards === RUN TestCountFreeShardSlots/topology_#2,_no_free_SSD_shards_available --- PASS: TestCountFreeShardSlots (0.00s) --- PASS: TestCountFreeShardSlots/topology_#1,_free_HDD_shards (0.00s) --- PASS: 
TestCountFreeShardSlots/topology_#1,_no_free_SSD_shards_available (0.00s) --- PASS: TestCountFreeShardSlots/topology_#2,_no_negative_free_HDD_shards (0.00s) --- PASS: TestCountFreeShardSlots/topology_#2,_no_free_SSD_shards_available (0.00s) === RUN TestCommandEcBalanceSmall balanceEcVolumes c1 dn1 moves ec shard 1.5 to dn2 dn1 moves ec shard 1.6 to dn2 dn1 moves ec shard 1.0 to dn2 dn1 moves ec shard 1.1 to dn2 dn1 moves ec shard 1.2 to dn2 dn1 moves ec shard 1.3 to dn2 dn1 moves ec shard 1.4 to dn2 dn2 moves ec shard 2.1 to dn1 dn2 moves ec shard 2.2 to dn1 dn2 moves ec shard 2.3 to dn1 dn2 moves ec shard 2.4 to dn1 dn2 moves ec shard 2.5 to dn1 dn2 moves ec shard 2.6 to dn1 dn2 moves ec shard 2.0 to dn1 --- PASS: TestCommandEcBalanceSmall (0.00s) === RUN TestCommandEcBalanceNothingToMove balanceEcVolumes c1 --- PASS: TestCommandEcBalanceNothingToMove (0.00s) === RUN TestCommandEcBalanceAddNewServers balanceEcVolumes c1 --- PASS: TestCommandEcBalanceAddNewServers (0.00s) === RUN TestCommandEcBalanceAddNewRacks balanceEcVolumes c1 dn2 moves ec shard 1.9 to dn3 dn2 moves ec shard 1.10 to dn4 dn1 moves ec shard 1.0 to dn3 dn2 moves ec shard 1.7 to dn4 dn2 moves ec shard 1.8 to dn4 dn1 moves ec shard 1.1 to dn3 dn1 moves ec shard 1.2 to dn4 dn1 moves ec shard 2.8 to dn3 dn1 moves ec shard 2.9 to dn4 dn2 moves ec shard 2.2 to dn3 dn2 moves ec shard 2.3 to dn4 dn1 moves ec shard 2.7 to dn4 dn2 moves ec shard 2.0 to dn3 dn2 moves ec shard 2.1 to dn3 --- PASS: TestCommandEcBalanceAddNewRacks (0.00s) === RUN TestCommandEcBalanceVolumeEvenButRackUneven balanceEcVolumes c1 dn_shared moves ec shards 1.0 to dn3 --- PASS: TestCommandEcBalanceVolumeEvenButRackUneven (0.00s) === RUN TestCircuitBreakerShell --- PASS: TestCircuitBreakerShell (0.00s) === RUN TestIsGoodMove replication: 100 expected false name: test 100 move to wrong data centers replication: 100 expected true name: test 100 move to spread into proper data centers replication: 001 expected false name: test move to 
the same node replication: 001 expected false name: test move to the same rack, but existing node replication: 001 expected true name: test move to the same rack, a new node replication: 010 expected false name: test 010 move all to the same rack replication: 010 expected true name: test 010 move to spread racks replication: 010 expected true name: test 010 move to spread racks replication: 011 expected true name: test 011 switch which rack has more replicas replication: 011 expected true name: test 011 move the lonely replica to another racks replication: 011 expected false name: test 011 move to wrong racks replication: 011 expected false name: test 011 move all to the same rack --- PASS: TestIsGoodMove (0.00s) === RUN TestBalance hdd 0.10 0.21:0.06 moving volume 31 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.20:0.06 moving volume 29 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.20:0.06 moving volume 30 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.20:0.06 moving volume 27 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.19:0.06 moving volume 28 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.19:0.06 moving volume collection4_7 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.19:0.06 moving volume collection0_25 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.18:0.06 moving volume collection3_9 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.18:0.06 moving volume collection1_80 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.18:0.06 moving volume collection1_69 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.18:0.06 moving volume 4 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.17:0.06 moving volume collection1_84 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.17:0.07 moving volume 2 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.17:0.07 moving volume collection1_63 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.17:0.07 moving volume 6 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.17:0.07 moving volume collection1_74 192.168.1.4:8080 => 
192.168.1.6:8080 hdd 0.10 0.16:0.07 moving volume 3 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.16:0.07 moving volume collection1_85 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.16:0.07 moving volume collection1_54 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.16:0.07 moving volume collection1_81 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.15:0.07 moving volume collection1_97 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.15:0.07 moving volume collection1_56 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.15:0.07 moving volume collection1_174 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.15:0.07 moving volume collection2_380 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.15:0.07 moving volume collection1_105 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.14:0.07 moving volume collection1_215 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.14:0.07 moving volume collection0_24 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.14:0.07 moving volume collection1_173 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.14:0.07 moving volume collection1_107 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.07 moving volume 5 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_136 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_238 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_240 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection0_26 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_167 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_66 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_65 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_57 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_62 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume 
collection1_67 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_138 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_70 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_90 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_72 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_71 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_75 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_58 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_177 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.08 moving volume collection1_87 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.13:0.09 moving volume collection1_73 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_77 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_116 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_83 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_91 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_79 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_64 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_61 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_76 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_59 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_139 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_96 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_144 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_95 192.168.1.1:8080 => 
192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_92 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_86 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_60 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.09 moving volume collection1_55 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection2_379 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_94 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_82 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_128 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_89 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_53 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection2_357 192.168.1.2:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_99 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.12:0.10 moving volume collection1_111 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection1_176 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection4_7 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection3_9 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_169 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume 1 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_197 192.168.1.4:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume 4 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume 2 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_126 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection2_381 192.168.1.2:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_165 
192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume 6 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume 3 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_232 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection0_25 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection2_345 192.168.1.4:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_135 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection1_68 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_117 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection1_74 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection2_378 192.168.1.1:8080 => 192.168.1.5:8080 hdd 0.10 0.11:0.10 moving volume collection1_194 192.168.1.1:8080 => 192.168.1.6:8080 hdd 0.10 0.11:0.10 moving volume collection1_179 192.168.1.2:8080 => 192.168.1.5:8080 --- PASS: TestBalance (0.00s) === RUN TestVolumeSelection collect volumes quiet for: 0 seconds --- PASS: TestVolumeSelection (0.00s) === RUN TestDeleteEmptySelection --- PASS: TestDeleteEmptySelection (0.00s) === RUN TestShouldSkipVolume --- PASS: TestShouldSkipVolume (0.00s) === RUN TestSatisfyReplicaPlacementComplicated replication: 100 expected false name: test 100 negative replication: 100 expected true name: test 100 positive replication: 022 expected true name: test 022 positive replication: 022 expected false name: test 022 negative replication: 210 expected true name: test 210 moved from 200 positive replication: 210 expected false name: test 210 moved from 200 negative extra dc replication: 210 expected false name: test 210 moved from 200 negative extra data node --- PASS: TestSatisfyReplicaPlacementComplicated (0.00s) === RUN TestSatisfyReplicaPlacement01x replication: 011 expected true name: test 011 same existing rack replication: 011 
expected false name: test 011 negative replication: 011 expected true name: test 011 different existing racks replication: 011 expected false name: test 011 different existing racks negative --- PASS: TestSatisfyReplicaPlacement01x (0.00s) === RUN TestSatisfyReplicaPlacement00x replication: 001 expected true name: test 001 replication: 002 expected true name: test 002 positive replication: 002 expected false name: test 002 negative, repeat the same node replication: 002 expected false name: test 002 negative, enough node already --- PASS: TestSatisfyReplicaPlacement00x (0.00s) === RUN TestSatisfyReplicaPlacement100 replication: 100 expected true name: test 100 --- PASS: TestSatisfyReplicaPlacement100 (0.00s) === RUN TestMisplacedChecking replication: 001 expected true name: test 001 replication: 010 expected false name: test 010 replication: 011 expected false name: test 011 replication: 110 expected true name: test 110 replication: 100 expected true name: test 100 --- PASS: TestMisplacedChecking (0.00s) === RUN TestPickingMisplacedVolumeToDelete replication: 001 name: test 001 command_volume_fix_replication_test.go:435: test 001: picked dn2 001 replication: 100 name: test 100 command_volume_fix_replication_test.go:435: test 100: picked dn2 100 --- PASS: TestPickingMisplacedVolumeToDelete (0.00s) === RUN TestSatisfyReplicaCurrentLocation === RUN TestSatisfyReplicaCurrentLocation/test_001 === RUN TestSatisfyReplicaCurrentLocation/test_010 === RUN TestSatisfyReplicaCurrentLocation/test_011 === RUN TestSatisfyReplicaCurrentLocation/test_110 === RUN TestSatisfyReplicaCurrentLocation/test_100 --- PASS: TestSatisfyReplicaCurrentLocation (0.00s) --- PASS: TestSatisfyReplicaCurrentLocation/test_001 (0.00s) --- PASS: TestSatisfyReplicaCurrentLocation/test_010 (0.00s) --- PASS: TestSatisfyReplicaCurrentLocation/test_011 (0.00s) --- PASS: TestSatisfyReplicaCurrentLocation/test_110 (0.00s) --- PASS: TestSatisfyReplicaCurrentLocation/test_100 (0.00s) === RUN TestParsing --- 
PASS: TestParsing (0.06s) === RUN TestVolumeServerEvacuate moving volume collection0_15 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection0_21 192.168.1.4:8080 => 192.168.1.6:8080 moving volume collection0_22 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection0_23 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection0_24 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection0_25 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 27 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 28 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 29 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 30 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 31 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_33 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_38 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_51 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_52 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_54 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_63 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_69 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_74 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_80 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_84 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_85 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_97 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_98 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_105 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_106 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_112 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_116 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_119 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_128 192.168.1.4:8080 => 192.168.1.2:8080 moving volume 
collection1_133 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_136 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_138 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_140 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_144 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_161 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_173 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_174 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_197 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection1_219 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_263 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_272 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_291 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_299 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_301 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_302 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_339 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_345 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_355 192.168.1.4:8080 => 192.168.1.2:8080 moving volume collection2_373 192.168.1.4:8080 => 192.168.1.2:8080 --- PASS: TestVolumeServerEvacuate (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/shell 0.158s === RUN TestRobinCounter --- PASS: TestRobinCounter (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/stats 0.007s === RUN TestUnUsedSpace --- PASS: TestUnUsedSpace (0.00s) === RUN TestFirstInvalidIndex I0507 00:09:04.339628 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.339787 volume_loading.go:157 loading memory index /tmp/TestFirstInvalidIndex2495070558/001/1.idx to memory --- PASS: TestFirstInvalidIndex (0.00s) === RUN TestFastLoadingNeedleMapMetrics I0507 00:09:04.349818 needle_map_metric_test.go:26 FileCount expected 10000 
actual 11983 I0507 00:09:04.349835 needle_map_metric_test.go:27 DeletedSize expected 1684 actual 1684 I0507 00:09:04.349838 needle_map_metric_test.go:28 ContentSize expected 10000 actual 10000 I0507 00:09:04.349841 needle_map_metric_test.go:29 DeletedCount expected 1684 actual 3667 I0507 00:09:04.349843 needle_map_metric_test.go:30 MaxFileKey expected 10000 actual 10000 --- PASS: TestFastLoadingNeedleMapMetrics (0.01s) === RUN TestBinarySearch --- PASS: TestBinarySearch (0.00s) === RUN TestSortVolumeInfos --- PASS: TestSortVolumeInfos (0.00s) === RUN TestReadNeedMetaWithWritesAndUpdates I0507 00:09:04.350003 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.350012 volume_loading.go:157 loading memory index /tmp/TestReadNeedMetaWithWritesAndUpdates4170369089/001/1.idx to memory --- PASS: TestReadNeedMetaWithWritesAndUpdates (0.00s) === RUN TestReadNeedMetaWithDeletesThenWrites I0507 00:09:04.350438 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.350446 volume_loading.go:157 loading memory index /tmp/TestReadNeedMetaWithDeletesThenWrites2436271090/001/1.idx to memory --- PASS: TestReadNeedMetaWithDeletesThenWrites (0.00s) === RUN TestMakeDiff --- PASS: TestMakeDiff (0.00s) === RUN TestMemIndexCompaction I0507 00:09:04.353987 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.353996 volume_loading.go:157 loading memory index /tmp/TestMemIndexCompaction3270433457/001/1.idx to memory I0507 00:09:04.438401 needle_map_memory.go:111 loading idx from offset 0 for file: /tmp/TestMemIndexCompaction3270433457/001/1.cpx volume_vacuum_test.go:92: compaction speed: 101428542.39 bytes/s I0507 00:09:04.506594 volume_vacuum.go:114 Committing volume 1 vacuuming... 
I0507 00:09:04.585935 needle_map_memory.go:111 loading idx from offset 9700 for file: /tmp/TestMemIndexCompaction3270433457/001/1.cpx I0507 00:09:04.612325 volume_loading.go:98 readSuperBlock volume 1 version 3 I0507 00:09:04.612346 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.612354 volume_loading.go:154 updating memory compact index /tmp/TestMemIndexCompaction3270433457/001/1.idx volume_vacuum_test.go:110: realRecordCount:29700, v.FileCount():29700 mm.DeletedCount():9806 I0507 00:09:04.612400 volume_loading.go:98 readSuperBlock volume 1 version 3 I0507 00:09:04.612407 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.612411 volume_loading.go:157 loading memory index /tmp/TestMemIndexCompaction3270433457/001/1.idx to memory --- PASS: TestMemIndexCompaction (0.31s) === RUN TestLDBIndexCompaction I0507 00:09:04.665110 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:04.665123 volume_loading.go:172 loading leveldb index /tmp/TestLDBIndexCompaction1394748754/001/1.ldb I0507 00:09:04.665654 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestLDBIndexCompaction1394748754/001/1.ldb, watermark 0, num of entries:0 I0507 00:09:04.667712 needle_map_leveldb.go:66 Loading /tmp/TestLDBIndexCompaction1394748754/001/1.ldb... , watermark: 0 I0507 00:09:04.784135 needle_map_leveldb.go:338 loading idx to leveldb from offset 0 for file: /tmp/TestLDBIndexCompaction1394748754/001/1.cpx volume_vacuum_test.go:92: compaction speed: 93304290.05 bytes/s I0507 00:09:04.998890 volume_vacuum.go:114 Committing volume 1 vacuuming... 
I0507 00:09:05.066770 needle_map_leveldb.go:338 loading idx to leveldb from offset 9719 for file: /tmp/TestLDBIndexCompaction1394748754/001/1.cpx I0507 00:09:05.124151 volume_loading.go:98 readSuperBlock volume 1 version 3 I0507 00:09:05.124166 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.124174 volume_loading.go:169 updating leveldb index /tmp/TestLDBIndexCompaction1394748754/001/1.ldb volume_vacuum_test.go:105: watermark from levelDB: 20000, realWatermark: 20000, nm.recordCount: 29719, realRecordCount:29719, fileCount=29719, deletedcount:9702 I0507 00:09:05.134591 volume_loading.go:98 readSuperBlock volume 1 version 3 I0507 00:09:05.134604 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.134613 volume_loading.go:172 loading leveldb index /tmp/TestLDBIndexCompaction1394748754/001/1.ldb I0507 00:09:05.135447 needle_map_leveldb.go:66 Loading /tmp/TestLDBIndexCompaction1394748754/001/1.ldb... , watermark: 20000 --- PASS: TestLDBIndexCompaction (0.52s) === RUN TestSearchVolumesWithDeletedNeedles I0507 00:09:05.180680 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.180690 volume_loading.go:157 loading memory index /tmp/TestSearchVolumesWithDeletedNeedles2352872702/001/1.idx to memory offset: 9648, isLast: false --- PASS: TestSearchVolumesWithDeletedNeedles (0.00s) === RUN TestDestroyEmptyVolumeWithOnlyEmpty I0507 00:09:05.180886 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.180891 volume_loading.go:157 loading memory index /tmp/TestDestroyEmptyVolumeWithOnlyEmpty3520596832/001/1.idx to memory --- PASS: TestDestroyEmptyVolumeWithOnlyEmpty (0.00s) === RUN TestDestroyEmptyVolumeWithoutOnlyEmpty I0507 00:09:05.181037 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.181041 volume_loading.go:157 loading memory index /tmp/TestDestroyEmptyVolumeWithoutOnlyEmpty3402961762/001/1.idx to memory --- PASS: 
TestDestroyEmptyVolumeWithoutOnlyEmpty (0.00s) === RUN TestDestroyNonemptyVolumeWithOnlyEmpty I0507 00:09:05.181169 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.181174 volume_loading.go:157 loading memory index /tmp/TestDestroyNonemptyVolumeWithOnlyEmpty1434424657/001/1.idx to memory --- PASS: TestDestroyNonemptyVolumeWithOnlyEmpty (0.00s) === RUN TestDestroyNonemptyVolumeWithoutOnlyEmpty I0507 00:09:05.181284 volume_loading.go:139 checking volume data integrity for volume 1 I0507 00:09:05.181287 volume_loading.go:157 loading memory index /tmp/TestDestroyNonemptyVolumeWithoutOnlyEmpty1973475461/001/1.idx to memory --- PASS: TestDestroyNonemptyVolumeWithoutOnlyEmpty (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage 0.857s ? github.com/seaweedfs/seaweedfs/weed/storage/backend [no test files] === RUN TestMemoryMapMaxSizeReadWrite --- PASS: TestMemoryMapMaxSizeReadWrite (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage/backend/memory_map 0.002s ? github.com/seaweedfs/seaweedfs/weed/storage/backend/rclone_backend [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/storage/backend/s3_backend [no test files] === RUN TestEncodingDecoding I0507 00:09:04.339310 ec_encoder.go:81 encodeDatFile 1.dat size:2590912 --- PASS: TestEncodingDecoding (0.23s) === RUN TestLocateData [{BlockIndex:5 InnerBlockOffset:100 Size:9900 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:6 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:7 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:8 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:9 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:0 InnerBlockOffset:0 Size:1 IsLargeBlock:false LargeBlockRowsCount:1}] --- PASS: TestLocateData (0.00s) === RUN TestLocateData2 --- PASS: TestLocateData2 (0.00s) === RUN TestLocateData3 {BlockIndex:8876 InnerBlockOffset:912752 Size:112568 IsLargeBlock:false LargeBlockRowsCount:2} --- PASS: TestLocateData3 (0.00s) === RUN TestPositioning offset: 31300679656 size: 1167 offset: 11513014944 size: 66044 offset: 26311863528 size: 26823 interval: {BlockIndex:14852 InnerBlockOffset:994536 Size:26856 IsLargeBlock:false LargeBlockRowsCount:1}, shardId: 2, shardOffset: 2631871720 --- PASS: TestPositioning (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding 0.246s ? 
github.com/seaweedfs/seaweedfs/weed/storage/idx [no test files] === RUN TestParseFileIdFromString --- PASS: TestParseFileIdFromString (0.00s) === RUN TestParseKeyHash --- PASS: TestParseKeyHash (0.00s) === RUN TestAppend --- PASS: TestAppend (0.00s) === RUN TestNewVolumeId volume_id_test.go:11: a is not legal volume id, strconv.ParseUint: parsing "a": invalid syntax --- PASS: TestNewVolumeId (0.00s) === RUN TestVolumeId_String --- PASS: TestVolumeId_String (0.00s) === RUN TestVolumeId_Next --- PASS: TestVolumeId_Next (0.00s) === RUN TestTTLReadWrite --- PASS: TestTTLReadWrite (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage/needle 0.005s === RUN TestMemoryUsage Each 15.69 Bytes Alloc = 19 MiB TotalAlloc = 25 MiB Sys = 29 MiB NumGC = 5 Taken = 690.333283ms Each 15.37 Bytes Alloc = 37 MiB TotalAlloc = 49 MiB Sys = 49 MiB NumGC = 7 Taken = 726.486718ms Each 15.27 Bytes Alloc = 55 MiB TotalAlloc = 74 MiB Sys = 69 MiB NumGC = 8 Taken = 674.296047ms Each 15.21 Bytes Alloc = 74 MiB TotalAlloc = 98 MiB Sys = 89 MiB NumGC = 9 Taken = 659.189417ms Each 15.18 Bytes Alloc = 92 MiB TotalAlloc = 122 MiB Sys = 109 MiB NumGC = 10 Taken = 656.475506ms Each 15.16 Bytes Alloc = 110 MiB TotalAlloc = 147 MiB Sys = 125 MiB NumGC = 11 Taken = 653.468627ms Each 15.15 Bytes Alloc = 129 MiB TotalAlloc = 171 MiB Sys = 145 MiB NumGC = 12 Taken = 648.527537ms Each 15.14 Bytes Alloc = 147 MiB TotalAlloc = 195 MiB Sys = 161 MiB NumGC = 13 Taken = 651.175223ms Each 15.13 Bytes Alloc = 165 MiB TotalAlloc = 220 MiB Sys = 181 MiB NumGC = 14 Taken = 649.846741ms Each 15.12 Bytes Alloc = 184 MiB TotalAlloc = 244 MiB Sys = 201 MiB NumGC = 15 Taken = 653.961493ms --- PASS: TestMemoryUsage (6.66s) === RUN TestSnowflakeSequencer I0507 00:09:11.687441 snowflake_sequencer.go:21 use snowflake seq id generator, nodeid:for_test hex_of_nodeid: 1 --- PASS: TestSnowflakeSequencer (0.05s) === RUN TestOverflow2 needle key: 150073 needle key: 150076 needle key: 150088 needle key: 150089 needle key: 150124 
needle key: 150137 needle key: 150145 needle key: 150147 needle key: 150158 needle key: 150162 --- PASS: TestOverflow2 (0.00s) === RUN TestIssue52 key 10002 ok true 10002, 1250, 10002 key 10002 ok true 10002, 1250, 10002 --- PASS: TestIssue52 (0.00s) === RUN TestCompactMap --- PASS: TestCompactMap (0.05s) === RUN TestOverflow overflow[ 0 ]: 1 overflow[ 1 ]: 2 overflow[ 2 ]: 3 overflow[ 3 ]: 4 overflow[ 4 ]: 5 overflow[ 0 ]: 1 size -12 overflow[ 1 ]: 2 size 12 overflow[ 2 ]: 3 size 24 overflow[ 3 ]: 4 size -12 overflow[ 4 ]: 5 size 12 overflow[ 0 ]: 1 overflow[ 1 ]: 2 overflow[ 2 ]: 3 overflow[ 3 ]: 4 overflow[ 4 ]: 5 overflow[ 0 ]: 1 overflow[ 1 ]: 2 overflow[ 2 ]: 3 overflow[ 3 ]: 4 overflow[ 4 ]: 5 --- PASS: TestOverflow (0.00s) === RUN TestCompactSection_Get compact_map_test.go:201: 1574318345753513987 compact_map_test.go:212: 1574318350048481283 --- PASS: TestCompactSection_Get (0.65s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage/needle_map 7.414s === RUN TestReplicaPlacementSerialDeserial --- PASS: TestReplicaPlacementSerialDeserial (0.00s) === RUN TestReplicaPlacementHasReplication === RUN TestReplicaPlacementHasReplication/empty_replica_placement === RUN TestReplicaPlacementHasReplication/no_replication === RUN TestReplicaPlacementHasReplication/same_rack_replication === RUN TestReplicaPlacementHasReplication/diff_rack_replication === RUN TestReplicaPlacementHasReplication/DC_replication === RUN TestReplicaPlacementHasReplication/full_replication --- PASS: TestReplicaPlacementHasReplication (0.00s) --- PASS: TestReplicaPlacementHasReplication/empty_replica_placement (0.00s) --- PASS: TestReplicaPlacementHasReplication/no_replication (0.00s) --- PASS: TestReplicaPlacementHasReplication/same_rack_replication (0.00s) --- PASS: TestReplicaPlacementHasReplication/diff_rack_replication (0.00s) --- PASS: TestReplicaPlacementHasReplication/DC_replication (0.00s) --- PASS: TestReplicaPlacementHasReplication/full_replication (0.00s) === RUN 
TestSuperBlockReadWrite --- PASS: TestSuperBlockReadWrite (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/storage/super_block 0.005s ? github.com/seaweedfs/seaweedfs/weed/storage/types [no test files] ? github.com/seaweedfs/seaweedfs/weed/storage/volume_info [no test files] === RUN TestRemoveDataCenter data: map[dc1:map[rack1:map[server111:map[limit:3 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:10 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]]] rack2:map[server121:map[limit:4 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:4 volumes:[]] server123:map[limit:5 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]]]] dc2:map[] dc3:map[rack2:map[server321:map[limit:4 volumes:[map[id:1 size:12312] map[id:3 size:12312] map[id:5 size:12312]]]]]] I0507 00:09:05.415057 node.go:250 weedfs adds child dc3 I0507 00:09:05.415173 node.go:250 weedfs:dc3 adds child rack2 I0507 00:09:05.415178 node.go:250 weedfs:dc3:rack2 adds child server321 I0507 00:09:05.415181 node.go:250 weedfs:dc3:rack2:server321 adds child I0507 00:09:05.415186 node.go:250 weedfs adds child dc1 I0507 00:09:05.415187 node.go:250 weedfs:dc1 adds child rack1 I0507 00:09:05.415189 node.go:250 weedfs:dc1:rack1 adds child server111 I0507 00:09:05.415190 node.go:250 weedfs:dc1:rack1:server111 adds child I0507 00:09:05.415194 node.go:250 weedfs:dc1:rack1 adds child server112 I0507 00:09:05.415195 node.go:250 weedfs:dc1:rack1:server112 adds child I0507 00:09:05.415198 node.go:250 weedfs:dc1 adds child rack2 I0507 00:09:05.415199 node.go:250 weedfs:dc1:rack2 adds child server121 I0507 00:09:05.415201 node.go:250 weedfs:dc1:rack2:server121 adds child I0507 00:09:05.415206 node.go:250 weedfs:dc1:rack2 adds child server122 I0507 00:09:05.415208 node.go:250 weedfs:dc1:rack2:server122 adds child I0507 00:09:05.415210 node.go:250 weedfs:dc1:rack2 adds child server123 I0507 
00:09:05.415211 node.go:250 weedfs:dc1:rack2:server123 adds child I0507 00:09:05.415214 node.go:250 weedfs adds child dc2 I0507 00:09:05.415217 node.go:264 weedfs removes dc2 I0507 00:09:05.415219 node.go:264 weedfs removes dc3 --- PASS: TestRemoveDataCenter (0.00s) === RUN TestHandlingVolumeServerHeartbeat I0507 00:09:05.415247 node.go:250 weedfs adds child dc1 I0507 00:09:05.415251 node.go:250 weedfs:dc1 adds child rack1 I0507 00:09:05.415254 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534 I0507 00:09:05.415261 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child I0507 00:09:05.415264 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child ssd I0507 00:09:05.415289 volume_layout.go:417 Volume 1 becomes writable I0507 00:09:05.415296 volume_layout.go:417 Volume 2 becomes writable I0507 00:09:05.415298 volume_layout.go:417 Volume 3 becomes writable I0507 00:09:05.415299 volume_layout.go:417 Volume 4 becomes writable I0507 00:09:05.415301 volume_layout.go:417 Volume 5 becomes writable I0507 00:09:05.415303 volume_layout.go:417 Volume 6 becomes writable I0507 00:09:05.415304 volume_layout.go:417 Volume 7 becomes writable I0507 00:09:05.415306 volume_layout.go:417 Volume 8 becomes writable I0507 00:09:05.415308 volume_layout.go:417 Volume 9 becomes writable I0507 00:09:05.415310 volume_layout.go:417 Volume 10 becomes writable I0507 00:09:05.415312 volume_layout.go:417 Volume 11 becomes writable I0507 00:09:05.415313 volume_layout.go:417 Volume 12 becomes writable I0507 00:09:05.415315 volume_layout.go:417 Volume 13 becomes writable I0507 00:09:05.415317 volume_layout.go:417 Volume 14 becomes writable I0507 00:09:05.415324 data_node.go:81 Deleting volume id: 7 I0507 00:09:05.415327 data_node.go:81 Deleting volume id: 13 I0507 00:09:05.415328 data_node.go:81 Deleting volume id: 14 I0507 00:09:05.415329 data_node.go:81 Deleting volume id: 8 I0507 00:09:05.415331 data_node.go:81 Deleting volume id: 9 I0507 00:09:05.415332 data_node.go:81 Deleting volume id: 10 
I0507 00:09:05.415333 data_node.go:81 Deleting volume id: 11
I0507 00:09:05.415334 data_node.go:81 Deleting volume id: 12
I0507 00:09:05.415338 topology.go:329 removing volume info: Id:7, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415347 volume_layout.go:229 volume 7 does not have enough copies
I0507 00:09:05.415349 volume_layout.go:237 volume 7 remove from writable
I0507 00:09:05.415351 volume_layout.go:405 Volume 7 becomes unwritable
I0507 00:09:05.415353 topology.go:329 removing volume info: Id:13, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415356 volume_layout.go:229 volume 13 does not have enough copies
I0507 00:09:05.415357 volume_layout.go:237 volume 13 remove from writable
I0507 00:09:05.415359 volume_layout.go:405 Volume 13 becomes unwritable
I0507 00:09:05.415360 topology.go:329 removing volume info: Id:14, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415362 volume_layout.go:229 volume 14 does not have enough copies
I0507 00:09:05.415364 volume_layout.go:237 volume 14 remove from writable
I0507 00:09:05.415365 volume_layout.go:405 Volume 14 becomes unwritable
I0507 00:09:05.415366 topology.go:329 removing volume info: Id:8, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415369 volume_layout.go:229 volume 8 does not have enough copies
I0507 00:09:05.415370 volume_layout.go:237 volume 8 remove from writable
I0507 00:09:05.415371 volume_layout.go:405 Volume 8 becomes unwritable
I0507 00:09:05.415373 topology.go:329 removing volume info: Id:9, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415375 volume_layout.go:229 volume 9 does not have enough copies
I0507 00:09:05.415376 volume_layout.go:237 volume 9 remove from writable
I0507 00:09:05.415377 volume_layout.go:405 Volume 9 becomes unwritable
I0507 00:09:05.415378 topology.go:329 removing volume info: Id:10, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415381 volume_layout.go:229 volume 10 does not have enough copies
I0507 00:09:05.415382 volume_layout.go:237 volume 10 remove from writable
I0507 00:09:05.415383 volume_layout.go:405 Volume 10 becomes unwritable
I0507 00:09:05.415385 topology.go:329 removing volume info: Id:11, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415387 volume_layout.go:229 volume 11 does not have enough copies
I0507 00:09:05.415388 volume_layout.go:237 volume 11 remove from writable
I0507 00:09:05.415389 volume_layout.go:405 Volume 11 becomes unwritable
I0507 00:09:05.415391 topology.go:329 removing volume info: Id:12, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415393 volume_layout.go:229 volume 12 does not have enough copies
I0507 00:09:05.415394 volume_layout.go:237 volume 12 remove from writable
I0507 00:09:05.415395 volume_layout.go:405 Volume 12 becomes unwritable
I0507 00:09:05.415400 topology.go:329 removing volume info: Id:3, Size:0, ReplicaPlacement:000, Collection:, Version:3, FileCount:0, DeleteCount:0, DeletedByteCount:0, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415402 volume_layout.go:229 volume 3 does not have enough copies
I0507 00:09:05.415403 volume_layout.go:237 volume 3 remove from writable
I0507 00:09:05.415405 volume_layout.go:405 Volume 3 becomes unwritable
I0507 00:09:05.415407 volume_layout.go:417 Volume 3 becomes writable
after add volume id 2
after add volume id 3
after add volume id 4
after add volume id 5
after add volume id 6
after add volume id 1
after add writable volume id 1
after add writable volume id 2
after add writable volume id 4
after add writable volume id 5
after add writable volume id 6
after add writable volume id 3
I0507 00:09:05.415438 topology_event_handling.go:86 Removing Volume 6 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415441 volume_layout.go:456 Volume 6 has 0 replica, less than required 1
I0507 00:09:05.415443 volume_layout.go:405 Volume 6 becomes unwritable
I0507 00:09:05.415445 topology_event_handling.go:86 Removing Volume 1 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415447 volume_layout.go:456 Volume 1 has 0 replica, less than required 1
I0507 00:09:05.415449 volume_layout.go:405 Volume 1 becomes unwritable
I0507 00:09:05.415451 topology_event_handling.go:86 Removing Volume 2 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415453 volume_layout.go:456 Volume 2 has 0 replica, less than required 1
I0507 00:09:05.415455 volume_layout.go:405 Volume 2 becomes unwritable
I0507 00:09:05.415457 topology_event_handling.go:86 Removing Volume 3 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415459 volume_layout.go:456 Volume 3 has 0 replica, less than required 1
I0507 00:09:05.415460 volume_layout.go:405 Volume 3 becomes unwritable
I0507 00:09:05.415462 topology_event_handling.go:86 Removing Volume 4 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415464 volume_layout.go:456 Volume 4 has 0 replica, less than required 1
I0507 00:09:05.415466 volume_layout.go:405 Volume 4 becomes unwritable
I0507 00:09:05.415467 topology_event_handling.go:86 Removing Volume 5 from the dead volume server 127.0.0.1:34534
I0507 00:09:05.415469 volume_layout.go:456 Volume 5 has 0 replica, less than required 1
I0507 00:09:05.415471 volume_layout.go:405 Volume 5 becomes unwritable
I0507 00:09:05.415477 node.go:264 weedfs:dc1:rack1 removes 127.0.0.1:34534
--- PASS: TestHandlingVolumeServerHeartbeat (0.00s)
=== RUN TestAddRemoveVolume
I0507 00:09:05.415497 node.go:250 weedfs adds child dc1
I0507 00:09:05.415499 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.415501 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534
I0507 00:09:05.415503 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child
I0507 00:09:05.415506 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child ssd
I0507 00:09:05.415529 volume_layout.go:417 Volume 1 becomes writable
I0507 00:09:05.415535 topology.go:329 removing volume info: Id:1, Size:100, ReplicaPlacement:000, Collection:xcollection, Version:3, FileCount:123, DeleteCount:23, DeletedByteCount:45, ReadOnly:false from 127.0.0.1:34534
I0507 00:09:05.415540 volume_layout.go:229 volume 1 does not have enough copies
I0507 00:09:05.415543 volume_layout.go:237 volume 1 remove from writable
I0507 00:09:05.415544 volume_layout.go:405 Volume 1 becomes unwritable
--- PASS: TestAddRemoveVolume (0.00s)
=== RUN TestListCollections
I0507 00:09:05.415566 node.go:250 weedfs adds child dc1
I0507 00:09:05.415570 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.415572 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534
I0507 00:09:05.415575 volume_layout.go:229 volume 1111 does not have enough copies
I0507 00:09:05.415580 volume_layout.go:237 volume 1111 remove from writable
I0507 00:09:05.415582 volume_layout.go:229 volume 2222 does not have enough copies
I0507 00:09:05.415584 volume_layout.go:237 volume 2222 remove from writable
I0507 00:09:05.415588 volume_layout.go:229 volume 3333 does not have enough copies
I0507 00:09:05.415589 volume_layout.go:237 volume 3333 remove from writable
=== RUN TestListCollections/no_volume_types_selected
=== RUN
TestListCollections/normal_volumes
=== RUN TestListCollections/EC_volumes
=== RUN TestListCollections/normal_+_EC_volumes
--- PASS: TestListCollections (0.00s)
--- PASS: TestListCollections/no_volume_types_selected (0.00s)
--- PASS: TestListCollections/normal_volumes (0.00s)
--- PASS: TestListCollections/EC_volumes (0.00s)
--- PASS: TestListCollections/normal_+_EC_volumes (0.00s)
=== RUN TestFindEmptySlotsForOneVolume
data: map[dc1:map[rack1:map[server111:map[limit:3 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:10 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]]] rack2:map[server121:map[limit:4 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:4 volumes:[]] server123:map[limit:5 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]]]] dc2:map[] dc3:map[rack2:map[server321:map[limit:4 volumes:[map[id:1 size:12312] map[id:3 size:12312] map[id:5 size:12312]]]]]]
I0507 00:09:05.415695 node.go:250 weedfs adds child dc1
I0507 00:09:05.415697 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.415702 node.go:250 weedfs:dc1:rack1 adds child server111
I0507 00:09:05.415705 node.go:250 weedfs:dc1:rack1:server111 adds child
I0507 00:09:05.415709 node.go:250 weedfs:dc1:rack1 adds child server112
I0507 00:09:05.415710 node.go:250 weedfs:dc1:rack1:server112 adds child
I0507 00:09:05.415713 node.go:250 weedfs:dc1 adds child rack2
I0507 00:09:05.415716 node.go:250 weedfs:dc1:rack2 adds child server121
I0507 00:09:05.415718 node.go:250 weedfs:dc1:rack2:server121 adds child
I0507 00:09:05.415721 node.go:250 weedfs:dc1:rack2 adds child server122
I0507 00:09:05.415722 node.go:250 weedfs:dc1:rack2:server122 adds child
I0507 00:09:05.415724 node.go:250 weedfs:dc1:rack2 adds child server123
I0507 00:09:05.415726 node.go:250 weedfs:dc1:rack2:server123 adds child
I0507 00:09:05.415728 node.go:250 weedfs adds child dc2
I0507 00:09:05.415730 node.go:250 weedfs adds child dc3
I0507 00:09:05.415731 node.go:250 weedfs:dc3 adds child rack2
I0507 00:09:05.415733 node.go:250 weedfs:dc3:rack2 adds child server321
I0507 00:09:05.415734 node.go:250 weedfs:dc3:rack2:server321 adds child
assigned node : server123
assigned node : server122
assigned node : server121
--- PASS: TestFindEmptySlotsForOneVolume (0.00s)
=== RUN TestReplication011
data: map[dc1:map[rack1:map[server111:map[limit:300 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:300 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server113:map[limit:300 volumes:[]] server114:map[limit:300 volumes:[]] server115:map[limit:300 volumes:[]] server116:map[limit:300 volumes:[]]] rack2:map[server121:map[limit:300 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:300 volumes:[]] server123:map[limit:300 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]] server124:map[limit:300 volumes:[]] server125:map[limit:300 volumes:[]] server126:map[limit:300 volumes:[]]] rack3:map[server131:map[limit:300 volumes:[]] server132:map[limit:300 volumes:[]] server133:map[limit:300 volumes:[]] server134:map[limit:300 volumes:[]] server135:map[limit:300 volumes:[]] server136:map[limit:300 volumes:[]]]]]
I0507 00:09:05.415856 node.go:250 weedfs adds child dc1
I0507 00:09:05.415863 node.go:250 weedfs:dc1 adds child rack2
I0507 00:09:05.415866 node.go:250 weedfs:dc1:rack2 adds child server124
I0507 00:09:05.415869 node.go:250 weedfs:dc1:rack2:server124 adds child
I0507 00:09:05.415872 node.go:250 weedfs:dc1:rack2 adds child server125
I0507 00:09:05.415874 node.go:250 weedfs:dc1:rack2:server125 adds child
I0507 00:09:05.415877 node.go:250 weedfs:dc1:rack2 adds child server126
I0507 00:09:05.415880 node.go:250 weedfs:dc1:rack2:server126 adds child
I0507 00:09:05.415883 node.go:250 weedfs:dc1:rack2 adds child server121
I0507 00:09:05.415886 node.go:250 weedfs:dc1:rack2:server121 adds child
I0507 00:09:05.415890 node.go:250 weedfs:dc1:rack2 adds child server122
I0507 00:09:05.415895 node.go:250 weedfs:dc1:rack2:server122 adds child
I0507 00:09:05.415897 node.go:250 weedfs:dc1:rack2 adds child server123
I0507 00:09:05.415899 node.go:250 weedfs:dc1:rack2:server123 adds child
I0507 00:09:05.415902 node.go:250 weedfs:dc1 adds child rack3
I0507 00:09:05.415904 node.go:250 weedfs:dc1:rack3 adds child server132
I0507 00:09:05.415906 node.go:250 weedfs:dc1:rack3:server132 adds child
I0507 00:09:05.415908 node.go:250 weedfs:dc1:rack3 adds child server133
I0507 00:09:05.415910 node.go:250 weedfs:dc1:rack3:server133 adds child
I0507 00:09:05.415912 node.go:250 weedfs:dc1:rack3 adds child server134
I0507 00:09:05.415914 node.go:250 weedfs:dc1:rack3:server134 adds child
I0507 00:09:05.415916 node.go:250 weedfs:dc1:rack3 adds child server135
I0507 00:09:05.415918 node.go:250 weedfs:dc1:rack3:server135 adds child
I0507 00:09:05.415921 node.go:250 weedfs:dc1:rack3 adds child server136
I0507 00:09:05.415923 node.go:250 weedfs:dc1:rack3:server136 adds child
I0507 00:09:05.415925 node.go:250 weedfs:dc1:rack3 adds child server131
I0507 00:09:05.415926 node.go:250 weedfs:dc1:rack3:server131 adds child
I0507 00:09:05.415929 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.415930 node.go:250 weedfs:dc1:rack1 adds child server111
I0507 00:09:05.415932 node.go:250 weedfs:dc1:rack1:server111 adds child
I0507 00:09:05.415936 node.go:250 weedfs:dc1:rack1 adds child server112
I0507 00:09:05.415941 node.go:250 weedfs:dc1:rack1:server112 adds child
I0507 00:09:05.415944 node.go:250 weedfs:dc1:rack1 adds child server113
I0507 00:09:05.415946 node.go:250 weedfs:dc1:rack1:server113 adds child
I0507 00:09:05.415949 node.go:250 weedfs:dc1:rack1 adds child server114
I0507 00:09:05.415953 node.go:250 weedfs:dc1:rack1:server114 adds child
I0507 00:09:05.415955 node.go:250 weedfs:dc1:rack1 adds child server115
I0507 00:09:05.415957 node.go:250 weedfs:dc1:rack1:server115 adds child
I0507 00:09:05.415965 node.go:250 weedfs:dc1:rack1 adds child server116
I0507 00:09:05.415968 node.go:250 weedfs:dc1:rack1:server116 adds child
assigned node : server122
assigned node : server124
assigned node : server136
--- PASS: TestReplication011 (0.00s)
=== RUN TestFindEmptySlotsForOneVolumeScheduleByWeight
data: map[dc1:map[rack1:map[server111:map[limit:2000 volumes:[]]]] dc2:map[rack2:map[server222:map[limit:2000 volumes:[]]]] dc3:map[rack3:map[server333:map[limit:1000 volumes:[]]]] dc4:map[rack4:map[server444:map[limit:1000 volumes:[]]]] dc5:map[rack5:map[server555:map[limit:500 volumes:[]]]] dc6:map[rack6:map[server666:map[limit:500 volumes:[]]]]]
I0507 00:09:05.416043 node.go:250 weedfs adds child dc1
I0507 00:09:05.416045 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.416046 node.go:250 weedfs:dc1:rack1 adds child server111
I0507 00:09:05.416049 node.go:250 weedfs:dc1:rack1:server111 adds child
I0507 00:09:05.416052 node.go:250 weedfs adds child dc2
I0507 00:09:05.416054 node.go:250 weedfs:dc2 adds child rack2
I0507 00:09:05.416056 node.go:250 weedfs:dc2:rack2 adds child server222
I0507 00:09:05.416059 node.go:250 weedfs:dc2:rack2:server222 adds child
I0507 00:09:05.416062 node.go:250 weedfs adds child dc3
I0507 00:09:05.416064 node.go:250 weedfs:dc3 adds child rack3
I0507 00:09:05.416072 node.go:250 weedfs:dc3:rack3 adds child server333
I0507 00:09:05.416075 node.go:250 weedfs:dc3:rack3:server333 adds child
I0507 00:09:05.416077 node.go:250 weedfs adds child dc4
I0507 00:09:05.416078 node.go:250 weedfs:dc4 adds child rack4
I0507 00:09:05.416080 node.go:250 weedfs:dc4:rack4 adds child server444
I0507 00:09:05.416082 node.go:250 weedfs:dc4:rack4:server444 adds child
I0507 00:09:05.416085 node.go:250 weedfs adds child dc5
I0507 00:09:05.416087 node.go:250 weedfs:dc5 adds child rack5
I0507 00:09:05.416089 node.go:250 weedfs:dc5:rack5 adds child server555
I0507 00:09:05.416091 node.go:250 weedfs:dc5:rack5:server555 adds child
I0507 00:09:05.416094 node.go:250 weedfs adds child dc6
I0507 00:09:05.416097 node.go:250 weedfs:dc6 adds child rack6
I0507 00:09:05.416098 node.go:250 weedfs:dc6:rack6 adds child server666
I0507 00:09:05.416101 node.go:250 weedfs:dc6:rack6:server666 adds child
server444 : 295
server333 : 315
server111 : 534
server222 : 521
server555 : 169
server666 : 166
--- PASS: TestFindEmptySlotsForOneVolumeScheduleByWeight (0.00s)
=== RUN TestPickForWrite
data: map[dc1:map[rack1:map[serverdc111:map[ip:127.0.0.1 limit:100 volumes:[map[collection:test id:1 replication:001 size:12312] map[collection:test id:2 replication:100 size:12312] map[collection:test id:4 replication:100 size:12312] map[collection:test id:6 replication:010 size:12312]]]]] dc2:map[rack1:map[serverdc211:map[ip:127.0.0.2 limit:100 volumes:[map[collection:test id:2 replication:100 size:12312] map[collection:test id:3 replication:010 size:12312] map[collection:test id:5 replication:001 size:12312] map[collection:test id:6 replication:010 size:12312]]]]] dc3:map[rack1:map[serverdc311:map[ip:127.0.0.3 limit:100 volumes:[map[collection:test id:1 replication:001 size:12312] map[collection:test id:3 replication:010 size:12312] map[collection:test id:4 replication:100 size:12312] map[collection:test id:5 replication:001 size:12312]]]]]]
I0507 00:09:05.417610 node.go:250 weedfs adds child dc1
I0507 00:09:05.417612 node.go:250 weedfs:dc1 adds child rack1
I0507 00:09:05.417615 node.go:250 weedfs:dc1:rack1 adds child serverdc111
I0507 00:09:05.417621 volume_layout.go:417 Volume 1 becomes writable
I0507 00:09:05.417624 node.go:250 weedfs:dc1:rack1:serverdc111 adds child
I0507 00:09:05.417631 volume_layout.go:417 Volume 2 becomes writable
I0507 00:09:05.417634 volume_layout.go:417 Volume 4 becomes writable
I0507 00:09:05.417640 volume_layout.go:417 Volume 6 becomes writable
I0507 00:09:05.417642 node.go:250 weedfs adds child dc2
I0507 00:09:05.417643 node.go:250 weedfs:dc2 adds child rack1
I0507 00:09:05.417645 node.go:250 weedfs:dc2:rack1 adds child serverdc211
I0507 00:09:05.417648 volume_layout.go:405 Volume 2 becomes unwritable
I0507 00:09:05.417650 volume_layout.go:417 Volume 2 becomes writable
I0507 00:09:05.417651 node.go:250 weedfs:dc2:rack1:serverdc211 adds child
I0507 00:09:05.417655 volume_layout.go:417 Volume 3 becomes writable
I0507 00:09:05.417657 volume_layout.go:417 Volume 5 becomes writable
I0507 00:09:05.417660 volume_layout.go:405 Volume 6 becomes unwritable
I0507 00:09:05.417661 volume_layout.go:417 Volume 6 becomes writable
I0507 00:09:05.417666 node.go:250 weedfs adds child dc3
I0507 00:09:05.417668 node.go:250 weedfs:dc3 adds child rack1
I0507 00:09:05.417670 node.go:250 weedfs:dc3:rack1 adds child serverdc311
I0507 00:09:05.417672 volume_layout.go:405 Volume 1 becomes unwritable
I0507 00:09:05.417673 volume_layout.go:417 Volume 1 becomes writable
I0507 00:09:05.417675 node.go:250 weedfs:dc3:rack1:serverdc311 adds child
I0507 00:09:05.417678 volume_layout.go:405 Volume 3 becomes unwritable
I0507 00:09:05.417679 volume_layout.go:417 Volume 3 becomes writable
I0507 00:09:05.417682 volume_layout.go:405 Volume 4 becomes unwritable
I0507 00:09:05.417684 volume_layout.go:417 Volume 4 becomes writable
I0507 00:09:05.417690 volume_layout.go:405 Volume 5 becomes unwritable
I0507 00:09:05.417691 volume_layout.go:417 Volume 5 becomes writable
--- PASS: TestPickForWrite (0.00s)
=== RUN TestVolumesBinaryState
=== RUN TestVolumesBinaryState/mark_true_when_copies_exist
=== RUN TestVolumesBinaryState/mark_true_when_no_copies_exist
--- PASS: TestVolumesBinaryState (0.00s)
--- PASS: TestVolumesBinaryState/mark_true_when_copies_exist (0.00s)
--- PASS: TestVolumesBinaryState/mark_true_when_no_copies_exist (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/topology 0.010s
=== RUN TestByteParsing
--- PASS: TestByteParsing (0.00s)
=== RUN TestSameAsJavaImplementation
Now we need to generate a 256-bit key for AES 256 GCM
--- PASS:
TestSameAsJavaImplementation (0.00s)
=== RUN TestToShortFileName
--- PASS: TestToShortFileName (0.00s)
=== RUN TestHumanReadableIntsMax
--- PASS: TestHumanReadableIntsMax (0.00s)
=== RUN TestHumanReadableInts
--- PASS: TestHumanReadableInts (0.00s)
=== RUN TestAsyncPool
-- Executing third function --
-- Executing first function --
-- Executing second function --
-- Third Function finished --
-- Executing fourth function --
-- Second Function finished --
-- Executing fifth function --
-- First Function finished --
1 2 3
-- Fourth fifth finished --
-- Fourth Function finished --
4 5
--- PASS: TestAsyncPool (0.12s)
=== RUN TestOrderedLock
ActiveLock 1 acquired lock 1
ActiveLock 1 released lock 1
ActiveLock 5 acquired lock 0
ActiveLock 2 acquired lock 0
ActiveLock 3 acquired lock 0
ActiveLock 6 acquired lock 0
ActiveLock 4 acquired lock 0
ActiveLock 5 released lock 0
ActiveLock 4 released lock 0
ActiveLock 2 released lock 0
ActiveLock 6 released lock 0
ActiveLock 3 released lock 0
ActiveLock 7 acquired lock 1
ActiveLock 7 released lock 1
ActiveLock 8 acquired lock 0
ActiveLock 8 released lock 0
ActiveLock 9 acquired lock 0
ActiveLock 10 acquired lock 0
ActiveLock 10 released lock 0
ActiveLock 12 acquired lock 0
ActiveLock 13 acquired lock 0
ActiveLock 14 acquired lock 0
ActiveLock 15 acquired lock 0
ActiveLock 16 acquired lock 0
ActiveLock 13 released lock 0
ActiveLock 15 released lock 0
ActiveLock 9 released lock 0
ActiveLock 16 released lock 0
ActiveLock 14 released lock 0
ActiveLock 12 released lock 0
ActiveLock 17 acquired lock 1
ActiveLock 17 released lock 1
ActiveLock 19 acquired lock 0
ActiveLock 18 acquired lock 0
ActiveLock 18 released lock 0
ActiveLock 19 released lock 0
ActiveLock 20 acquired lock 1
ActiveLock 20 released lock 1
ActiveLock 21 acquired lock 0
ActiveLock 22 acquired lock 0
ActiveLock 23 acquired lock 0
ActiveLock 24 acquired lock 0
ActiveLock 24 released lock 0
ActiveLock 22 released lock 0
ActiveLock 23 released lock 0
ActiveLock 21 released lock 0
ActiveLock 25 acquired lock 1
ActiveLock 25 released lock 1
ActiveLock 26 acquired lock 0
ActiveLock 27 acquired lock 0
ActiveLock 28 acquired lock 0
ActiveLock 29 acquired lock 0
ActiveLock 29 released lock 0
ActiveLock 26 released lock 0
ActiveLock 27 released lock 0
ActiveLock 28 released lock 0
ActiveLock 30 acquired lock 1
ActiveLock 30 released lock 1
ActiveLock 31 acquired lock 0
ActiveLock 32 acquired lock 0
ActiveLock 33 acquired lock 0
ActiveLock 34 acquired lock 0
ActiveLock 34 released lock 0
ActiveLock 32 released lock 0
ActiveLock 33 released lock 0
ActiveLock 31 released lock 0
ActiveLock 35 acquired lock 1
ActiveLock 35 released lock 1
ActiveLock 36 acquired lock 0
ActiveLock 37 acquired lock 0
ActiveLock 38 acquired lock 0
ActiveLock 39 acquired lock 0
ActiveLock 37 released lock 0
ActiveLock 36 released lock 0
ActiveLock 39 released lock 0
ActiveLock 38 released lock 0
ActiveLock 40 acquired lock 1
ActiveLock 40 released lock 1
ActiveLock 41 acquired lock 0
ActiveLock 42 acquired lock 0
ActiveLock 44 acquired lock 0
ActiveLock 44 released lock 0
ActiveLock 43 acquired lock 0
ActiveLock 41 released lock 0
ActiveLock 42 released lock 0
ActiveLock 43 released lock 0
ActiveLock 45 acquired lock 1
ActiveLock 45 released lock 1
ActiveLock 46 acquired lock 0
ActiveLock 47 acquired lock 0
ActiveLock 47 released lock 0
ActiveLock 48 acquired lock 0
ActiveLock 49 acquired lock 0
ActiveLock 49 released lock 0
ActiveLock 46 released lock 0
ActiveLock 48 released lock 0
ActiveLock 11 acquired lock 1
ActiveLock 11 released lock 1
ActiveLock 50 acquired lock 0
ActiveLock 50 released lock 0
--- PASS: TestOrderedLock (1.19s)
=== RUN TestParseMinFreeSpace
--- PASS: TestParseMinFreeSpace (0.00s)
=== RUN TestNewQueue
--- PASS: TestNewQueue (0.00s)
=== RUN TestEnqueueAndConsume
1 2 3
-----------------------
4 5 6 7
-----------------------
--- PASS: TestEnqueueAndConsume (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util 1.317s
?
github.com/seaweedfs/seaweedfs/weed/util/buffer_pool [no test files]
=== RUN TestJobQueue
enqueued 5 items
dequeue 1
dequeue 2
enqueue 6
enqueue 7
dequeue ...
dequeued 3
dequeue ...
dequeued 4
dequeue ...
dequeued 5
dequeue ...
dequeued 6
dequeue ...
dequeued 7
enqueue 8
enqueue 9
enqueue 10
enqueue 11
enqueue 12
dequeued 8
dequeued 9
dequeued 10
dequeued 11
dequeued 12
--- PASS: TestJobQueue (0.00s)
=== RUN TestJobQueueClose
dequeued 1
dequeued 2
dequeued 3
dequeued 4
dequeued 5
dequeued 6
dequeued 7
--- PASS: TestJobQueueClose (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/buffered_queue 0.001s
? github.com/seaweedfs/seaweedfs/weed/util/buffered_writer [no test files]
=== RUN TestOnDisk
I0507 00:09:05.416676 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_0.ldb, watermark 0, num of entries:0
I0507 00:09:05.417105 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_0.ldb... , watermark: 0
I0507 00:09:05.418042 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_1.ldb, watermark 0, num of entries:0
I0507 00:09:05.418904 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_1.ldb... , watermark: 0
I0507 00:09:05.419106 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c1_3_0.ldb, watermark 0, num of entries:0
I0507 00:09:05.419781 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_0.ldb... , watermark: 0
I0507 00:09:05.420275 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c1_3_1.ldb, watermark 0, num of entries:0
I0507 00:09:05.421573 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_1.ldb... , watermark: 0
I0507 00:09:05.422201 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c1_3_2.ldb, watermark 0, num of entries:0
I0507 00:09:05.422883 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_2.ldb... , watermark: 0
I0507 00:09:05.423570 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c2_2_0.ldb, watermark 0, num of entries:0
I0507 00:09:05.424233 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c2_2_0.ldb... , watermark: 0
I0507 00:09:05.424442 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c2_2_1.ldb, watermark 0, num of entries:0
I0507 00:09:05.424776 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c2_2_1.ldb... , watermark: 0
I0507 00:09:05.425775 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_0.ldb, watermark 0, num of entries:0
I0507 00:09:05.426185 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_0.ldb... , watermark: 0
I0507 00:09:05.426575 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_1.ldb, watermark 0, num of entries:0
I0507 00:09:05.426846 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_1.ldb... , watermark: 0
I0507 00:09:05.428309 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_0.ldb, watermark 0, num of entries:2
I0507 00:09:05.428845 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_0.ldb... , watermark: 0
I0507 00:09:05.429885 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c0_2_1.ldb, watermark 0, num of entries:1
I0507 00:09:05.430275 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c0_2_1.ldb... , watermark: 0
I0507 00:09:05.430610 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_0.ldb... , watermark: 0
I0507 00:09:05.430974 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_1.ldb... , watermark: 0
I0507 00:09:05.432155 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c1_3_2.ldb... , watermark: 0
I0507 00:09:05.432830 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c2_2_0.ldb...
, watermark: 0 I0507 00:09:05.434049 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk738839255/001/c2_2_1.ldb, watermark 0, num of entries:0 I0507 00:09:05.434468 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk738839255/001/c2_2_1.ldb... , watermark: 0 chunk_cache_on_disk_test.go:98: failed to write to and read from cache: 2 --- FAIL: TestOnDisk (0.02s) FAIL FAIL github.com/seaweedfs/seaweedfs/weed/util/chunk_cache 0.028s ? github.com/seaweedfs/seaweedfs/weed/util/fla9 [no test files] ? github.com/seaweedfs/seaweedfs/weed/util/grace [no test files] ? github.com/seaweedfs/seaweedfs/weed/util/http [no test files] ? github.com/seaweedfs/seaweedfs/weed/util/http/client [no test files] ? github.com/seaweedfs/seaweedfs/weed/util/httpdown [no test files] === RUN TestNewLogBufferFirstBuffer processed all messages E0507 00:09:05.447196 log_read.go:115 LoopProcessLogData: test process log entry 1 ts_ns:1746576545447175677 partition_key_hash:-736260903 data:"\x9f\xa9\xb3\xd22\xbf6C\xcc[d\xe4\xfej\x1e{!\x90\x1b\xe0J*\x8e$\x13ua\xb0\xefކ\x90\xc1J\xaf\t\xd4\xeb\x80l9\xbd\x87X\x1e,\x95\x0b(5\xc0\xbdo\xeeP\xbe\x02\x11U\xf4\x7f7V(\xf8\xceP\xbee\xee\x1d\xf4\xbf\xf9~\x0f\xe3\xe5l\x08S\x86_SuP\x88\xa7˃o\x0b\x1a\xbd\x8ct\xea\xcd-\x9b\xbfݛ\x9f1x/\xf2\xd5\xe6_e|\xd5ф\xefݯ{_\x7f\x19\xbd\x83\x80\xbe\xbf\xdb\x18\xae\xae\xce\x01v!S\xc8\x1al~\xf1\x18]2J\x02\xd7QC\xa2\x18\x05\x1f؈_\xf8D\xaa\xf2Ml\x19\x02\x85\x84}\xbd\xe0\x1a/\xbbq\x96l\xcftjb\xb5vúK[K?\xef/9N/\x83Uk~M\xe3;ѻ\xf2H\xc8\x01\x1ad\xfb\xc0\t\x1e\xa9\x1b9\xe5\x0f\x165\x94I!k\xecY\x83c\xbc\x0f\xecؕ\x9c\xfc`|Cs\x02\x86b\xce\x11\x13m\xcd\x07l\x17\x18\x0e:q\x04\xbfDn\x1aO\xa3w\xfe\xa8\xfe\xe5\xf3%\xf7\xc1!\xf2*G\x1b)\x06U\xef\xc1\x07q\x9a`\xd2\xde\x02f{ka\xbb\x85z\x8c瀠\x9c7\x81X\xb6#9\xc9\x06\x99.׭\xeb\xd0\xce˸\xb9\x89\xb3#8\x11\xbe\xe6Isq\xcb\xffF\x92\xce\\\xea\x06$ 
\x0f\x9e\x0e\xda\xc5bRh\xd2\xfd\xcbf\x8d\x9a\xe6\xcesx\x9a\xef>'I\xd0\x1d3å\x00\xe7\x80\x15\xb5M\xc9'\r\x85\x02\xac~\x86\xc2d6׫\x96\x84\x0f\xac\x7f\r\x98\xa2յ\x1f\x13m\xc6;\xf9\xed\xe7;\x87\x15\xf1\x16u?\xe5\xa43\xf4>l\xed\xeb\xc5\xe0\x1a%}\xaf\x16Ė\xeaw\xa1r\xd0\xf6\xae\xabfV\xdeà\x81\xe7\xe3u:\x9e6\\\xc65\xd3\x02M\x9d\xf8+\xbd\xc3r\x9f\x9f3\xe7]\xc1\xc7\xe5\x8e\x1a\xda-\"lx\x904)H\xca\xdb\x0f\xa4\x04\xc20\xb6\x8a\xf3T\xfc\xf1,vԍ{-n\x1b\xe9dצ\xcc\xc2g\xb5J\xa8\x0c\xc08\x89\xd6\x18\x0b9P\x84\x1a\x07\xf8\xc9'b\xce\xc8.\xb1s\x96\xab\x1a\x81y\xaf\xf4\x8e\x01\x13g:hS\x01\xd2\xcaq\xf0Y\x9b\xbf\xf6\xdcgWN\xc0\x1dj\x9dtF\xa6\xd0(z\xfd\xa6\xab\xc0\xef.\xe8\x155:p&F\x7f\xeb7\x0c+K2g\xa5\xfe\\\xc12\xf0\xcd\xefplJñ+p\xb3hcfl\x11\xf5\xafb\xe9P\t\x98\xab\x83\x19$[\xe4\xb7䛫\xf9\xc0\x89i&^\xdb1Q\x01\xfb\xfe\xa3_0\x81<\x19\xbec\xe3\x14\xdb㜾\xd8(-B\xc1jh/xQ\xfe\x06\x90h\xbd)\x05ܵ~m}k\x96Y<\xe6\x94=\xb8-\xb5G\x99\x08\xf9\x12\xf9\x08\xda\xdb\xf0 ):A\"\x9a\xaa\x9d\xad]\x18\xe6NF|\x14\xc6\xc3b\x0bL\x07۳\x96\xf1\xccׁ\x11pI̻\xec$\x1c_\xa7Q\xe6r\xe5w<$\x15\x0c\xb3uI#\xd3.\x8d\x80\xe6\x05\x10D\x996\xf0\xa8\xd4\x030R\xf5\x8f\x93\x9f\xcfD\xe6\xf2ą\xb9\xed\xb3\xc9N~\xfe\xab\xf2\xd9X<.\r\xdf\xc8V;V\xde:.\x99H\xcb\x00\xaa\x01\xa7R91\x08\xc4\xf93@\x92o\x1f\x95=[\x9f\xf7\x83\xaa\xd10\x916\xfa\x80\x9ed\xb2\x92AI\xd7\xe3sIT\xb4_;Ʉ{\xffc\x90޳֛\xa5\x9a\xa7\xbf-\x94\x80c>K\x8f\xb7O\xdf?Ȃ\x82\x02\xabԕ\xb8\xc6\x06@}\xd0\xc8}\x9a!\x00\x14\xf4\x0e]#\xc1\x8c\xb6<\xca\xc8\xec\xe2\xd2\xf8|Sj\xd9`\xab\x9e\n\xbf\xab\xf1\xa7\xea\x93@\xcc)\xa3\xb0\x0e\xf9;\xa7e\xbe\xa6\xea\xcb(\x02pN\xb2\x1f\xdej\x0e\xdcH\x9e0\xe5\xfdښ\x93+Ҥ\xdb\\\xab\x9e\xbeX.d\x10\xf3\xc1\xb5\x06\xee\xe0\xabц\x19\xd8ၞ\xef6\xbe\x821\xae\x83\xc5\xde<\xb8\x93\xd6N\xc8\xf9\xa4X2I\xf9\x05\x9f>q\x16\xe1\x00R\xa5\x9f\xa2&>\x99#\x81\xbdK\xaeD\x84\\8Q\xdc\xd9`G\xe5]": EOF before flush: sent 5000 received 5000 lastProcessedTime 2025-05-07 00:09:05.447175677 +0000 UTC isDone true err: EOF --- PASS: TestNewLogBufferFirstBuffer (0.03s) PASS ok 
github.com/seaweedfs/seaweedfs/weed/util/log_buffer 0.040s
=== RUN TestAllocateFree
--- PASS: TestAllocateFree (0.00s)
=== RUN TestAllocateFreeEdgeCases
--- PASS: TestAllocateFreeEdgeCases (0.00s)
=== RUN TestBitCount
--- PASS: TestBitCount (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/mem 0.002s
=== RUN TestNameList
0 1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 28 29 3 30 31 32 33 34 35 36 37 38 39 4 40 41 42 43 44 45 46 47 48 49 5 50 51 52 53 54 55 56 57 58 59 6 60 61 62 63 64 65 66 67 68 69 7 70 71 72 73 74 75 76 77 78 79 8 80 81 82 83 84 85 86 87 88 89 9 90 91 92 93 94 95 96 97 98 99
--- PASS: TestNameList (0.06s)
=== RUN TestReverseInsert
--- PASS: TestReverseInsert (0.00s)
=== RUN TestInsertAndFind
--- PASS: TestInsertAndFind (0.04s)
=== RUN TestDelete
--- PASS: TestDelete (0.04s)
=== RUN TestNext
--- PASS: TestNext (0.01s)
=== RUN TestPrev
--- PASS: TestPrev (0.02s)
=== RUN TestFindGreaterOrEqual
--- PASS: TestFindGreaterOrEqual (0.02s)
=== RUN TestChangeValue
--- PASS: TestChangeValue (0.01s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/skiplist 0.195s
=== RUN TestLocationIndex
--- PASS: TestLocationIndex (0.00s)
=== RUN TestLookupFileId
--- PASS: TestLookupFileId (0.00s)
=== RUN TestConcurrentGetLocations
--- PASS: TestConcurrentGetLocations (0.99s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/wdclient 0.997s
? github.com/seaweedfs/seaweedfs/weed/wdclient/exclusive_locks [no test files]
? github.com/seaweedfs/seaweedfs/weed/wdclient/net2 [no test files]
? github.com/seaweedfs/seaweedfs/weed/wdclient/resource_pool [no test files]
FAIL
==> ERROR: A failure occurred in check(). Aborting...
==> ERROR: Build failed, check /home/alhp/workspace/chroot/build_9cf44918-a9a9-4e82-aca6-76fbe4411177/build