BTRFS on Debian Stable, your experiences?

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#61 Post by LE_746F6D617A7A69 »

pylkko wrote:
LE_746F6D617A7A69 wrote:That benchmark on Phoronix is worth crap, because Mr Larabel has no idea how to set up md RAID10 and how to configure ext4 for use with the RAID array.
Yes, if someone knew how to design a better, more fair and meaningful benchmark, people would not ignore it, for sure as crap! Right now, this is the best we have.
Yes, such benchmarks do exist today - one of them is iozone ;) How You got the impression that Mr Larabel's benchmarks are "fair" is another issue - perhaps You should learn more about "sponsored" benchmarks ...
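To give an idea, a minimal iozone run of the kind I mean could look like this (a sketch only - the mount point and sizes are placeholders, adjust them to the array being tested):

Code:

# write/rewrite (-i 0) and read/reread (-i 1) of an 8 GiB file in 1 MiB records, with direct I/O
iozone -s 8g -r 1m -i 0 -i 1 -I -f /mnt/array/iozone.tmp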

User avatar
pylkko
Posts: 1802
Joined: 2014-11-06 19:02

Re: BTRFS on Debian Stable, your experiences?

#62 Post by pylkko »

If you say that such benchmarks exist, please provide a link describing the results and what machine/setup they were run on. Also clearly explain why this other result is more fair or better.

Or perhaps, if such a thing has not been done but you suspect it could be, what kind of results would you expect? Which one of the charts on Phoronix would you expect to be different, and why?

No, I do not know about "sponsored benchmarks by Mr. Larabel". Please elaborate so I can learn.

Please notice that I did not say that that benchmark is necessarily fair. I only said that if someone knew another, even more fair way to do it, it would be a big thing and people would be genuinely interested.

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#63 Post by LE_746F6D617A7A69 »

I've already explained what I wanted to explain, and I'm not going to waste more time on this topic.

User avatar
pylkko
Posts: 1802
Joined: 2014-11-06 19:02

phoronix on btrfs on 5.10

#64 Post by pylkko »

I noticed that Phoronix discussed future improvements to btrfs performance announced for the 5.10 kernel.
https://www.phoronix.com/scan.php?page= ... ync-Faster

This is what David Sterba writes on the list:
https://lore.kernel.org/lkml/cover.1602 ... @suse.com/
Highlights:

- fsync performance improvements:
  - less contention of log mutex (throughput +4%, latency -14%, dbench with 32 clients)
  - skip unnecessary commits for link and rename (throughput +6%, latency -30%, rename latency -75%, dbench with 16 clients)
  - make fast fsync wait only for writeback (throughput +10..40%, runtime -1..-20%, dbench with 1 to 64 clients on various file/block sizes)
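For anyone who wants to see where numbers like these come from, dbench itself is simple to run; a minimal sketch (the mount point is a placeholder, 32 is the client count used in the quoted results):

Code:

# run dbench for 60 seconds with 32 clients against a test mount (placeholder path)
dbench -t 60 -D /mnt/btrfs-test 32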

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#65 Post by LE_746F6D617A7A69 »

pylkko wrote:If you say that such benchmarks exist, please provide a link describing the results and what machine/setup they were run on.
I have no time for this. I didn't even read Your post in full...

Here is my cheap array - sequential read:

Code:

fio --filename=/dev/md/SYS_1TB_R10 --direct=1 --rw=read --bs=2M --ioengine=libaio --iodepth=64 --runtime=600 --numjobs=1 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 --readonly
throughput-test-job: (g=0): rw=read, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=libaio, iodepth=64
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][0.5%][r=534MiB/s][r=267 IOPS][eta 09m:57s]
Jobs: 1 (f=1): [R(1)][0.8%][r=518MiB/s][r=259 IOPS][eta 09m:55s]
Jobs: 1 (f=1): [R(1)][1.2%][r=500MiB/s][r=250 IOPS][eta 09m:53s]
Jobs: 1 (f=1): [R(1)][1.5%][r=486MiB/s][r=243 IOPS][eta 09m:51s]
[... ~290 similar per-interval progress lines snipped; the read rate stays between ~404 and 537 MiB/s for the whole 10-minute run ...]
Jobs: 1 (f=1): [R(1)][99.5%][r=519MiB/s][r=259 IOPS][eta 00m:03s]
Jobs: 1 (f=1): [R(1)][99.8%][r=514MiB/s][r=257 IOPS][eta 00m:01s]
Jobs: 1 (f=1): [R(1)][100.0%][r=519MiB/s][r=259 IOPS][eta 00m:00s]
throughput-test-job: (groupid=0, jobs=1): err= 0: pid=2857793: Tue Oct 13 19:27:12 2020
  read: IOPS=248, BW=497MiB/s (521MB/s)(291GiB/600259msec)
    slat (usec): min=13, max=235539, avg=33.19, stdev=609.84
    clat (msec): min=15, max=2737, avg=257.32, stdev=55.58
     lat (msec): min=15, max=2737, avg=257.36, stdev=55.58
    clat percentiles (msec):
     |  1.00th=[  209],  5.00th=[  232], 10.00th=[  232], 20.00th=[  236],
     | 30.00th=[  243], 40.00th=[  245], 50.00th=[  247], 60.00th=[  251],
     | 70.00th=[  255], 80.00th=[  275], 90.00th=[  300], 95.00th=[  317],
     | 99.00th=[  351], 99.50th=[  363], 99.90th=[  927], 99.95th=[ 1284],
     | 99.99th=[ 2366]
   bw (  KiB/s): min=217088, max=577536, per=99.99%, avg=509033.04, stdev=44409.97, samples=1200
   iops        : min=  106, max=  282, avg=248.52, stdev=21.69, samples=1200
  lat (msec)   : 20=0.01%, 50=0.01%, 100=0.01%, 250=57.84%, 500=41.95%
  lat (msec)   : 750=0.05%, 1000=0.05%
  cpu          : usr=0.07%, sys=0.81%, ctx=78725, majf=0, minf=585
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=149211,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=497MiB/s (521MB/s), 497MiB/s-497MiB/s (521MB/s-521MB/s), io=291GiB (313GB), run=600259-600259msec
IOPS:

Code:

fio --filename=/dev/md/SYS_1TB_R10 --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=64 --runtime=10 --numjobs=1 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 --readonly
throughput-test-job: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [r(1)][30.0%][r=2400KiB/s][r=600 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [r(1)][50.0%][r=2468KiB/s][r=617 IOPS][eta 00m:05s] 
Jobs: 1 (f=1): [r(1)][70.0%][r=2334KiB/s][r=583 IOPS][eta 00m:03s] 
Jobs: 1 (f=1): [r(1)][90.0%][r=2446KiB/s][r=611 IOPS][eta 00m:01s] 
Jobs: 1 (f=1): [r(1)][100.0%][r=2360KiB/s][r=590 IOPS][eta 00m:00s]
throughput-test-job: (groupid=0, jobs=1): err= 0: pid=2820140: Tue Oct 13 17:10:39 2020
  read: IOPS=587, BW=2351KiB/s (2408kB/s)(23.4MiB/10210msec)
    slat (usec): min=2, max=154, avg= 5.44, stdev= 3.68
    clat (msec): min=4, max=982, avg=106.77, stdev=102.56
     lat (msec): min=4, max=982, avg=106.78, stdev=102.56
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   14], 10.00th=[   18], 20.00th=[   27],
     | 30.00th=[   40], 40.00th=[   55], 50.00th=[   77], 60.00th=[   99],
     | 70.00th=[  130], 80.00th=[  171], 90.00th=[  239], 95.00th=[  305],
     | 99.00th=[  493], 99.50th=[  550], 99.90th=[  701], 99.95th=[  802],
     | 99.99th=[  986]
   bw (  KiB/s): min= 1720, max= 2648, per=100.00%, avg=2374.95, stdev=203.43, samples=20
   iops        : min=  430, max=  662, avg=593.70, stdev=50.87, samples=20
  lat (msec)   : 10=2.13%, 20=11.06%, 50=23.88%, 100=23.63%, 250=30.52%
  lat (msec)   : 500=7.81%, 750=0.87%, 1000=0.10%
  cpu          : usr=0.31%, sys=0.33%, ctx=5974, majf=0, minf=73
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=6002,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=2351KiB/s (2408kB/s), 2351KiB/s-2351KiB/s (2408kB/s-2408kB/s), io=23.4MiB (24.6MB), run=10210-10210msec
Read it carefully: 497MiB/s sequential read using much cheaper and slower HDDs vs the 195MB/s reported by Phoronix.
IOPS=587 vs the 65210 reported by Phoronix -> that result would be plausible for an SSD, but it's impossible for an HDD.
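A rough sanity check (my assumptions: ~12 ms average access time per random 4k read on these 7200 rpm drives, reads spread over all 4 spindles of the RAID10):

Code:

# back-of-envelope ceiling for random read IOPS on 4 spinning disks
echo '4 * (1 / 0.012)' | bc -l    # ~333 IOPS - roughly 200x below the 65210 reported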

Mr Larabel is measuring something - but clearly he has no idea what he is measuring.
And even funnier is that there's quite a big group of morons who use his benchmark results as a reference for choosing which hardware to buy ... :lol:

EDIT: I forgot to mention the HDD type used - my array is based on 4x WD5003ABYX-18WERA0.

EDIT2: I've updated the sequential read test with a much longer run to show one more important aspect: MD RAID10 is the only array in the world which offers constant performance independent of where the data sits on the platters ;)
(the average dropped from 504 to 497MiB/s - probably because Transmission started uploading some Debian ISO images in the meantime)

User avatar
pylkko
Posts: 1802
Joined: 2014-11-06 19:02

Re: BTRFS on Debian Stable, your experiences?

#66 Post by pylkko »

If it were true (though it is debatable) that there is a flaw in the benchmark and that people were making the wrong decision when buying hardware, and it also turned out that you knew this, but instead of contacting Phoronix to arrange other benchmarks you spent your time calling these other people "morons" and laughing at them, what would that make you look like? I mean, what kind of person does that kind of thing?

You do not explicate your point clearly, but I gather you take issue with the fact that if the Phoronix RAID benchmark had had the ext4 condition tuned (stripe-size optimization, something that is not currently available in btrfs, as you mention in your earlier post), it would have performed better in the sequential read task in a way that would not benefit the btrfs condition.
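For concreteness, the tuning in question would look roughly like this - a sketch only, assuming a 4-disk md RAID10 with the default 512 KiB chunk and 4 KiB filesystem blocks (the device name is taken from the post above):

Code:

# stride = chunk/block = 512KiB/4KiB = 128; stripe width = stride * data-bearing disks = 128 * 2
mkfs.ext4 -b 4096 -E stride=128,stripe_width=256 /dev/md/SYS_1TB_R10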

I believe that it is really likely that you are correct. However, as is discussed in that article and in its discussion section, there is a near-infinite number of small, hair-splitting changes that could be made to any benchmark to test in alternative ways. For example, why is btrfs tested using its own RAID implementation and not put on md RAID like all the other filesystems (discussed in the discussion section)? According to some people this would make a more apples-to-apples comparison of the filesystems, whereas other people think that would not reflect real-life usage and thus would be artificial. Similarly, one could invent btrfs settings that are not available on ext4 to improve performance in a selected usage area and then split hairs over whether it is fair to compare with those settings or the default ones. Phoronix typically uses the default settings for this reason.

While the kind of things you speak of are interesting and certainly relevant in some particular use cases, I don't feel that they can be used to evaluate an entire filesystem universally. Remember that many people don't use HDDs at all anymore, and that many people also don't use RAID at all. Many people don't care about sequential reads or performance differences. Sure, if you are going to use RAID10 and are constantly moving massive files, then you might get similar (or better) performance out of ext4 than btrfs by optimizing stripe size. Your file will move in 7 seconds instead of 15. We don't know because there is no test of it, but we can suspect it would be so, as you pointed out. But even then you might do better still by using XFS (it seems), and as was pointed out, nothing prevents people from using a combination of filesystems and mount settings to tweak every directory in their system for their own use cases. Thus, basically all that you said seems unreasonable. But I did write to Michael Larabel to ask if he'd have time to do such a benchmark; let's see what he says.
EDIT: typo "stride"->"stripe"
Last edited by pylkko on 2020-10-16 13:16, edited 1 time in total.

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#67 Post by LE_746F6D617A7A69 »

The main problem with the Phoronix file system benchmarks is the IOPS: the reported values are unrealistic even for SSD arrays.
There's only one explanation for this phenomenon: the benchmarks are performed in RAM (in the I/O caches) - i.e. the tests are too small to show real fs performance.
Since the same methodology is used for all the Phoronix FS benchmarks, it means that all of them are flawed and unrealistic.
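For the record, the usual ways to keep the caches out of the measurement are direct I/O (as in my fio runs above, --direct=1) or flushing the page cache between passes:

Code:

# flush dirty data and drop the page cache, dentries and inodes before a cold-cache pass (root required)
sync
echo 3 > /proc/sys/vm/drop_caches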

----------------------------
pylkko wrote:Many people don't care about sequential reads or performance differences. Sure, if you are going to use RAID10 and are constantly moving massive files, then you might get similar (or better) performance out of ext4 than btrfs by optimizing stride size. Your file will move in 7 seconds instead of 15.
That's not the point. Sequential read and write performance tests are made for finding the absolute maximum bandwidth for a given filesystem/hardware combination - nothing more. I've posted my seq. read test only to show that the quality of the Phoronix benchmarks is questionable.

deolsunny533
Posts: 1
Joined: 2020-10-14 13:30

Re: BTRFS on Debian Stable, your experiences?

#68 Post by deolsunny533 »

I am wondering if there is a way to set up trash bins like a virtual shared drive across every volume on the system, so that if a drive gets nearly full it moves trash to another drive.
Last edited by deolsunny533 on 2020-10-15 09:06, edited 1 time in total.

User avatar
Head_on_a_Stick
Posts: 14114
Joined: 2014-06-01 17:46
Location: London, England
Has thanked: 81 times
Been thanked: 133 times

Re: BTRFS on Debian Stable, your experiences?

#69 Post by Head_on_a_Stick »

deadbang

User avatar
pylkko
Posts: 1802
Joined: 2014-11-06 19:02

Re: BTRFS on Debian Stable, your experiences?

#70 Post by pylkko »

LE_746F6D617A7A69 wrote:The main problem with the Phoronix file system benchmarks is the IOPS: the reported values are unrealistic even for SSD arrays.
There's only one explanation for this phenomenon: the benchmarks are performed in RAM (in the I/O caches) - i.e. the tests are too small to show real fs performance.
Since the same methodology is used for all the Phoronix FS benchmarks, it means that all of them are flawed and unrealistic.
The article clearly states that the writes are buffered and that this is by design. Even if it all happens entirely in RAM, there are still statistically significant differences, whatever the reason. The setup is what you would get by default in real life.

LE_746F6D617A7A69 wrote:That's not the point. Sequential read and write performance tests are made for finding the absolute maximum bandwidth for a given filesystem/hardware combination - nothing more. I've posted my seq. read test only to show that the quality of the Phoronix benchmarks is questionable.
Since caches would apply to all the conditions of the experiment, it is hard to understand why they would invalidate the results as you claim. Certainly the results would not do a 180 with smaller or no caches. Maximum bandwidth is not the same as performance. Performance is only marginally interesting. And so on again.

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#71 Post by LE_746F6D617A7A69 »

pylkko wrote:The article clearly states that the writes are buffered and that this is by design. Even if it is all in RAM entirely, there are still statistically significant differences whatever the reason. The setup is what you would get by default in real life.
Here's the problem with most of the benchmarks on the internet: the people making the benchmarks don't know what they are doing, and the people looking at the results don't understand what all those coloured charts mean - especially in the case of file systems.
pylkko wrote:The setup is what you would get by default in real life.
No it's not, e.g. because one CPU has 8MB of L3 cache and another has 32MB of L3, one system can have 4GB of DDR3 RAM and another 64GB of DDR4 RAM -> file system benchmarks performed in caches/RAM are not reproducible on different machines, which means that they are useless.

But the most fundamental problem is the "real life" term.
If Your small database entirely fits into RAM, You'll get the performance peak caused by the caches. But if the DB doesn't fit in the caches, then You'll see the real performance of a file system, which depends only on how good it is at squeezing every single KB/s out of the storage hardware.
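One simple way to force the second case is to make the working set clearly bigger than the installed RAM - a sketch, with a hypothetical target directory:

Code:

# size the fio data set to roughly 2x RAM so page-cache hits cannot dominate the result
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
SIZE_G=$(( MEM_KB * 2 / 1024 / 1024 ))
fio --name=uncached-randread --directory=/mnt/test --size=${SIZE_G}G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=64 --runtime=300 --time_based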

A very "good" example is NTFS: it has no chances work efficiently on any HDD/SSD because of crazy fragmentation levels (shitty block allocator among other problems) -> it kills the performance of HDDs and it destroys the SSDs physically, by increasing number of page erase cycles.

Another thing is that differing cache performance can be completely misleading: a file system with more frequent cache flushes can show lower maximum cache performance peaks - but it can have higher average cache performance because of a lower probability of situations where mandatory time-dependent flushes overlap with cache misses (so the dependency on max IOPS is lower).

User avatar
pylkko
Posts: 1802
Joined: 2014-11-06 19:02

Re: BTRFS on Debian Stable, your experiences?

#72 Post by pylkko »

Here is the problem with arrogance: people who have nothing better to offer try to bring others down in order to feel better about themselves.

"Everything is entirely wrong", "it is so totally flawed", "others are all morons", "Larabel is bribed by sponsors" and guess what? "People don't even understand charts"!

"I made benchmarks but nobody believes it"

...it turns out that he thinks one fringe-case benchmark by Phoronix should have been done differently, although it would only marginally change the end results. Believe me, there is a considerable degree of autocorrelation in benchmarks on different computers, so not having the exact same processor/whatever will change the numbers a bit, but the trend often remains.

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 68 times

Re: BTRFS on Debian Stable, your experiences?

#73 Post by LE_746F6D617A7A69 »

pylkko wrote:Here is the problem with arrogance: people that have nothing better to offer try to bring others down in order to feel better themselves.

"Everything is entirely wrong", "it is so totally flawed", "others are all morons", "Larabel is bribed by sponsors" and guess what? "People don't even understand charts"!

"I made benchmarks but nobody believes it"

...turns out to be that he thinks that one fringe case benchmark by phoronix should have been done differently, although it would only marginally change the end results. Believe me, there is a considerable degree of autocorrelation in benchmarks, so that not having the exact same processor will change the numbers a bit but the trend remains.
:lol:
... and the above is proof of what? ... a lack of arguments? :lol:

But seriously: file systems are extremely complex. There are many cases where there's no single answer as to which of them is better.
For a laptop/home PC, in most cases all You need is to have enough RAM.

The problem arises when You need to decide which FS is best for Your business, or when You need extreme performance just to beat the records ;) - then every KB/s counts.
In such cases, You have to tune Your systems for a particular I/O traffic profile, and of course *ALL* the benchmarks available on the Internet are completely useless for that purpose.
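As an illustration of what tuning for a traffic profile means: approximate the workload with a synthetic mix and re-run it after every change - the 70/30 split, block size and paths below are made-up placeholders, not recommendations:

Code:

# hypothetical 70/30 random read/write mix, roughly OLTP-like
fio --name=oltp-like --directory=/mnt/test --size=32G --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 --numjobs=4 \
    --runtime=300 --time_based --group_reporting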

Regards ;)

User avatar
bester69
Posts: 2072
Joined: 2015-04-02 13:15
Has thanked: 24 times
Been thanked: 14 times

Re: BTRFS on Debian Stable, your experiences?

#74 Post by bester69 »

LE_746F6D617A7A69 wrote:
....
The problem arises when You need to decide what FS is best for Your business, ...

Regards ;)
There is no problem, btrfs gives you thousands of snapshots and 100% reliability, and many more features, like compression on the fly and copy-on-write.

I've been using btrfs for 5 or 7 years and it has never failed me. The snapshots brought back my system whenever I needed them, and I didn't get any kind of lost data or corruption. That filesystem is wonderful, and as for speed it is very similar to ext4.
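For anyone who wants to try those features, the basic commands look like this (a sketch - the device, mount point and subvolume names are just examples):

Code:

# mount with transparent zstd compression (example device and mount point)
mount -o compress=zstd /dev/sdb1 /mnt/data
# take a read-only snapshot of a subvolume
btrfs subvolume snapshot -r /mnt/data/@home /mnt/data/.snapshots/@home-$(date +%F)
# list subvolumes and snapshots
btrfs subvolume list /mnt/data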
