Btrfs Compression To Enhance Performance?
https://www.phoronix.com/scan.php?page= ... 2635&num=1
https://www.phoronix.com/scan.php?page= ... 2638&num=1
So, will using btrfs with compression always give better performance than not using compression?
My system is a single core at 2 GHz; should I enable btrfs compression?
What about installing a kernel > 4.14 and using Zstd to get the maximum speed with btrfs?
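For context, compression on btrfs is just a mount option, so trying it out is cheap. A minimal sketch, assuming a btrfs filesystem on /dev/sda2 (device names and mount points are placeholders, adjust for your system):

```shell
# Mount an existing btrfs filesystem with transparent compression.
# lzo is fast with a modest ratio; zlib compresses better but is slower.
mount -o compress=lzo /dev/sda2 /mnt

# With kernel >= 4.14 you can use zstd instead:
#   mount -o compress=zstd /dev/sda2 /mnt

# To make it permanent, add the option in /etc/fstab, e.g.:
#   /dev/sda2  /  btrfs  defaults,compress=lzo  0  0
```

Note that only data written after mounting with the option is compressed; existing files can be recompressed with `btrfs filesystem defragment -r -clzo /mnt`.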
bester69 wrote:STOP 2030 globalists demons, keep the fight for humanity freedom against NWO...
- Head_on_a_Stick
- Posts: 14114
- Joined: 2014-06-01 17:46
- Location: London, England
- Has thanked: 81 times
- Been thanked: 133 times
Re: Btrfs Compression To Enhance Performance?
bester69 wrote:should i enable btrfs compression?
Run an objective benchmark on your system now, then apply the compression and run the benchmark again. Asking here is just silly.
bester69 wrote:What about installing kernel > 4.14 and using Zstd to get the maximum speed with btrfs?
I refer the right honourable gentleman to the answer I gave a moment ago.
deadbang
Re: Btrfs Compression To Enhance Performance?
It is important to note that there are several different kinds of 'performance'. For example, if instead of just reading and writing files your CPU is also running compression algorithms on them, there will obviously be an overhead associated with that, and therefore fewer CPU cycles left for other things.
But for most computers the situation is probably that the CPU is underused: it sits idle most of the time, and users rarely run multiple tasks simultaneously. Since moving data to and from disk is often the slowest link in the chain, sacrificing that CPU time can be a good trade. The reason it works: in the time the disk would need to write the extra blocks of the uncompressed version, even a slow CPU can do a great many compression-related calculations.
Also, there is the question of throughput vs latency. When benchmarking disks for real-life use, random read/write at 4k block size is often used, because it resembles what an operating system does in normal use better than writing large single files does, and disk latency has a significant impact on it. So it is possible for a disk to write large files sequentially at amazing speed yet still be worse at 4k random read/write than a slower disk.
Like someone comments in the link you posted, it 'depends on which kind of work load is more important for you'.
Then there is also this to consider: on a battery-powered device, compressing everything all the time might mean more energy use and shorter battery life.
But all of this is untested theoretical speculation; the only way to get reliable data is to run real experiments on real devices.
From what I read, Zstd is supposed to be really good. But it is also a new, less tested feature...
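To make the disk-vs-CPU trade concrete, here is a back-of-envelope sketch with made-up round numbers (a slow HDD at 50 MiB/s, a modest CPU compressing at 200 MiB/s with a 2:1 ratio); real figures vary widely, so measure on your own hardware:

```shell
# Hypothetical figures -- replace with measurements from your machine.
data_mib=100    # amount of data to write
disk_mibs=50    # sequential write speed of a slow HDD
cpu_mibs=200    # compression throughput of a modest CPU
ratio=2         # compression ratio (compressed size = data / ratio)

# Without compression: time is pure disk write (milliseconds).
t_plain=$(( data_mib * 1000 / disk_mibs ))

# With compression, worst case (no overlap of CPU and IO):
# one CPU pass plus writing the smaller output.
t_comp=$(( data_mib * 1000 / cpu_mibs + data_mib * 1000 / (ratio * disk_mibs) ))

echo "plain: ${t_plain} ms  compressed: ${t_comp} ms"
```

Even in this pessimistic no-overlap model the compressed path wins on a slow disk (1500 ms vs 2000 ms here); swap in a fast SSD (say `disk_mibs=500`) and it loses, which is exactly why the answer depends on your hardware and workload.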
Re: Btrfs Compression To Enhance Performance?
pylkko wrote:.... Since moving data from disk is often the slowest link in the chain when you are doing that one task on the computer, sacrificing that CPU time might be a good thing. ...
Like someone comments in the link you posted, 'depends on which kind of work load is more important for you'
....
From what I read, the Zstd is supposed to be really good. ...
I had already forgotten that IO is the bottleneck in these operations, so now I understand why it enhances performance: the CPU waits idle for the disk to respond, so the compression overhead is very small compared with that idle wait, and compression is supposed to win against the overhead.
What matters to me are workloads like playing HD movies; my system already struggles playing 1080p.
By the way, I've just lost my btrfs partition by testing zstd and then uninstalling kernel 4.14 without first defragmenting back to lzo or no compression. You can imagine: now I have to install a new Debian on another partition with kernel 4.14 in order to recover the lost partition.
The recovery would be something like this:
With a kernel 4.14:
Code: Select all
mount -t btrfs -o subvolid=0 /dev/sda2 /mnt
btrfs filesystem defragment -r -v -cno /mnt
Re: Btrfs Compression To Enhance Performance?
Head_on_a_Stick wrote:Run an objective benchmark on your system now, then apply the compression and run the benchmark again.
This sounds somewhat difficult to me; I would have to study benchmark tools. I think there is no need at the moment.
thanks
Re: Btrfs Compression To Enhance Performance?
Unfortunately, for HD video this will probably not help that much because I believe most video storage formats already use compression.
So you created or forced compression with the latest kernel on a btrfs subvolume and then tried to use it with older kernel tools? I think I even told you in a previous thread that you might have problems if you mix older kernels with newer btrfs user-space tools.
Well, did you manage to repair it?
Re: Btrfs Compression To Enhance Performance?
pylkko wrote: Also, there is the question of throughput vs latency. Often when benchmarking disks or setups for real life use random read/write at 4k size is used, because this is more like what an operating system will be doing in normal use (than writing large single files). Disk latency has a significant impact on this. So it is possible to have a disk that can write large files sequentially at amazing speeds but still be worse at 4k random read/write than a slower disk...
To me, this sounds as if it would be a good idea to use compression with a low-latency kernel to compensate.
Re: Btrfs Compression To Enhance Performance?
pylkko wrote:Unfortunately, for HD video this will probably not help that much because I believe most video storage formats already use compression.
So you created or forced compression with the latest kernel on a btrfs subvolume and then tried to use it with older kernel tools? I think I even told you in a previous thread that you might have problems if you mix older kernels with newer btrfs user-space tools.
Well, did you manage to repair it?
I'm on it now. I think it's because kernels below 4.14 can't mount dirty disks with zstd, so you can imagine... I have to install a new Debian and then install kernel 4.14 in order to defragment with no compression, so the partition becomes accessible again from GRUB for the older kernels.
Re: Btrfs Compression To Enhance Performance?
pylkko wrote:Unfortunately, for HD video this will probably not help that much because I believe most video storage formats already use compression.
So you created or forced compression with the latest kernel on a btrfs subvolume and then tried to use it with older kernel tools? I think I even told you in a previous thread that you might have problems if you mix older kernels with newer btrfs user-space tools.
Well, did you manage to repair it?
I think you warned me. It seems newer kernels alter the filesystem in a way that older ones can't work with again.
I ran into this trouble:
https://unix.stackexchange.com/question ... ree-failed
dmesg output:
Code: Select all
[ 119.698406] BTRFS info (device sdc2): disk space caching is enabled
[ 119.698409] BTRFS: couldn't mount because of unsupported optional features (10).
[ 119.744887] BTRFS: open_ctree failed
Now I can only boot the installation with kernel 4.14.
It seems it is not possible to go back to a lower kernel without regenerating the filesystem, so there appears to be no solution but recreating it from a copy of the data.
I guess I could use btrfs send/receive to replicate the installation onto a new btrfs partition with clean metadata, without dragging the issues along. I'm not sure this method will create a truly clean filesystem... What do you think about using send/receive to regenerate the system?
Failing that, I could use fsarchiver.
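For reference, a send/receive replication along those lines might look like the following sketch. The device name, mount points, and the subvolume name `@` are all hypothetical placeholders; since receive rewrites the data into freshly created metadata on the target, it should not carry over the old filesystem's on-disk state:

```shell
# Sketch: replicate a btrfs subvolume onto a freshly made filesystem.
# /dev/sdb1, /mnt/old, /mnt/new and the subvolume "@" are placeholders.
mkfs.btrfs /dev/sdb1                # new, clean target filesystem
mount /dev/sdb1 /mnt/new

# send/receive works on read-only snapshots, so make one first:
btrfs subvolume snapshot -r /mnt/old/@ /mnt/old/@_ro

# Stream the snapshot into the new filesystem:
btrfs send /mnt/old/@_ro | btrfs receive /mnt/new
```

These commands need root and a spare device, so treat this as an outline to adapt rather than something to paste as-is.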
Re: Btrfs Compression To Enhance Performance?
bester69 wrote:This sounds somewhat difficult to me; I would have to study benchmark tools.
Code: Select all
empty@Xanadu:~ $ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Xanadu 15808M 788 99 300466 11 166272 12 6000 98 687480 27 7348 89
Latency 18933us 22783us 184ms 2656us 221ms 448ms
Version 1.97 ------Sequential Create------ --------Random Create--------
Xanadu -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 217us 370us 232us 219us 19us 1884us
1.97,1.97,Xanadu,1,1512580698,15808M,,788,99,300466,11,166272,12,6000,98,687480,27,7348,89,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,18933us,22783us,184ms,2656us,221ms,448ms,217us,370us,232us,219us,19us,1884us
empty@Xanadu:~ $
Lather, rinse and repeat
Isn't learning fun?
Re: Btrfs Compression To Enhance Performance?
I finally removed btrfs compression. Though some operations were quicker, I was feeling some tiny lag in the desktop experience.
I also lost the whole system by using 4.14 with compression; fortunately I had a 5 GB fsarchiver full-system backup in the cloud and could restore the installation.