Dimensioning of the SWAP space

Help with issues regarding installation of Debian

Re: Dimensioning of the SWAP space

Postby trinidad » 2021-02-23 13:53

Basically the three tools for optimizing a given (bare-metal) machine's performance with regard to swap have been mentioned, and in fact when dealing with any process that carries a heavy RAM tax all of them may be employed for performance optimization. They are somewhat interrelated, and optimization during heavy RAM usage may require tweaking all three: cache pressure settings, swappiness, and earlyOOM. Swap has also proved convenient for some processes where slower, linked-and-staged or timed data processing is necessary. Theoretically speaking, journaling file systems should have a swap partition for stability (this matters particularly for scientific machines), and as we discovered with the Intel CPU vulnerabilities, a swap partition is no less secure than encrypted chip memory. More than anything else, the specific software design and integration should be the determining factor for swap sizing. So you see, it really depends on what software you are using and what you are doing with it. In the end, at the consumer baseline, modern hardware design trades away some stability in favour of cheaper performance anyway, so for a home user there is no real guarantee of trouble-free computing at the hardware level. As far as software goes... Debian stable is the way to go.
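For reference, the first two of those knobs are plain sysctls that live under /proc/sys/vm (earlyoom is a separate userspace daemon with no /proc knob). A minimal sketch that just reads the current values:

```python
# Inspect the cache-pressure and swappiness knobs mentioned above (Linux only).
# Setting them requires root (e.g. via sysctl or /etc/sysctl.d/); this only reads.

from pathlib import Path

def read_vm_knob(name):
    """Return the integer value of /proc/sys/vm/<name>, or None if absent."""
    path = Path("/proc/sys/vm") / name
    try:
        return int(path.read_text().split()[0])
    except (OSError, ValueError, IndexError):
        return None

if __name__ == "__main__":
    for knob in ("swappiness", "vfs_cache_pressure"):
        print(f"vm.{knob} = {read_vm_knob(knob)}")
```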

TC
You can't believe your eyes if your imagination is out of focus.
trinidad
 
Posts: 167
Joined: 2016-08-04 14:58

Re: Dimensioning of the SWAP space

Postby LE_746F6D617A7A69 » 2021-02-23 20:43

p.H wrote:Nonsense. vm.swappiness is not a threshold.

It's better to call it a "threshold" (while it is in fact "aggressiveness") when explaining the basic rules to people who don't want to read the kernel code - this hides the fuzzy logic behind swappiness.
For the same reason I used the term "frozen by the kernel" instead of "the MMU raises a page-fault interrupt, which stops the execution of the process".

@TC
+1, great post.
Bill Gates: "(...) In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."
The_full_story and Nothing_have_changed
LE_746F6D617A7A69
 
Posts: 470
Joined: 2020-05-03 14:16

Re: Dimensioning of the SWAP space

Postby MicroScreen » 2021-02-24 06:36

trinidad wrote:Theoretically speaking, journaling file systems should have a swap partition for stability (this applies importantly to scientific machines), and as we have discovered with the Intel CPU vulnerabilities, a swap partition is no less secure than encrypted chip memory. More than any other, specific software design and integration should be the determining factor for specific swap sizing. So you see, it really depends what software you are using, and what you are doing with it.

This is largely in line with what I have already said. It always depends on how software was written and what amounts of data are moved in the ongoing process. It is therefore completely pointless to discuss purely theoretical scenarios when each user works under completely different conditions. Generalizations, as presented here by a single discussant, completely miss the topic and should therefore be ignored if possible!
"Move forward and do what you think is best. If you make a mistake, you’ll learn something. But don't make the same mistake twice." - Akio Morita
MicroScreen
 
Posts: 23
Joined: 2019-01-10 17:20

Re: Dimensioning of the SWAP space

Postby LE_746F6D617A7A69 » 2021-02-24 21:19

MicroScreen wrote:(...) It always depends on how software was written and what amounts of data are moved in the ongoing process. It is therefore completely pointless to discuss purely theoretical scenarios when each user works under completely different conditions. Generalizations, as presented here by a single discussant, completely miss the topic and should therefore be ignored if possible!

"Purely theoretical scenarios" are used to explain the basic rules/typical cases.
A typical Linux system runs dozens of services (processes), and excluding real-time projects (rare cases), all other applications create *regular* processes - which are exposed to the problems that come with insufficient RAM and swap usage.
Only a privileged process can call mlockall(), so only special applications can prevent their RAM areas from being swapped out - in all other cases, typical applications become victims of memory swapping.
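The mlockall() point can be demonstrated directly. Strictly speaking, since Linux 2.6.9 an unprivileged process may lock memory up to RLIMIT_MEMLOCK; locking a whole large process typically needs CAP_IPC_LOCK. A minimal ctypes sketch that just attempts the call and reports the outcome:

```python
# Attempt mlockall(MCL_CURRENT | MCL_FUTURE) via ctypes (Linux values).
# Unprivileged processes may still succeed for small amounts of memory
# (up to RLIMIT_MEMLOCK); otherwise the call fails with EPERM or ENOMEM.

import ctypes
import os

MCL_CURRENT = 1
MCL_FUTURE = 2

def try_mlockall():
    """Return (True, None) on success, or (False, error string) on failure."""
    libc = ctypes.CDLL(None, use_errno=True)
    rc = libc.mlockall(MCL_CURRENT | MCL_FUTURE)
    if rc == 0:
        return (True, None)
    return (False, os.strerror(ctypes.get_errno()))

if __name__ == "__main__":
    ok, err = try_mlockall()
    print("mlockall succeeded" if ok else f"mlockall failed: {err}")
```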

It is obvious that You have no idea of "how software is written", starting from how a process' virtual memory is managed and ending with how to write software in general - I think this is the main reason behind Your aggressiveness.
Bill Gates: "(...) In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."
The_full_story and Nothing_have_changed
LE_746F6D617A7A69
 
Posts: 470
Joined: 2020-05-03 14:16

Re: Dimensioning of the SWAP space

Postby CwF » 2021-02-24 22:41

Well LE, I did give real examples and did say it's 'observational'. There's nothing theoretical about it. I've tested zramswap to the point of locking up; it works, and manages better than nothing. I don't notice its operation...

So, a question: in my prior post both machines have a small amount of swap occupied, both with >60% memory free. What is in the swap, when did it happen, and why is it there?

Ya see, I don't care why. But your suggestion is that the existence of that swap means a negative performance impact occurred at some point? I didn't notice, because no - it doesn't hurt performance; IMO it increases performance.

The first machine now has 340.8MB in swap and 38GB free memory. The second machine's swap also dropped: 293.8MB with 16GB free. Neither has rebooted. I hardly think it's there only to prevent OOM; that view simply has no merit in a case like this. Something wants to swap out, so let it...
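Observations like these can be pulled from /proc/meminfo rather than eyeballed. A small sketch; the parsing is split out so it can be exercised on sample data, and the field names are the standard kernel ones:

```python
# Report swap used vs. memory available from /proc/meminfo (Linux only).
# parse_meminfo() is separated from the I/O so it can be tested on samples.

def parse_meminfo(text):
    """Map /proc/meminfo field names to sizes in kB."""
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[name] = int(fields[0])
    return info

def swap_summary(info):
    """Return (swap used in kB, MemAvailable in kB)."""
    swap_used_kb = info["SwapTotal"] - info["SwapFree"]
    return swap_used_kb, info.get("MemAvailable", 0)

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        used, avail = swap_summary(parse_meminfo(f.read()))
    print(f"swap used: {used} kB, memory available: {avail} kB")
```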

Perhaps if swap were on an SSD I'd notice? No, I don't. Maybe on a spinner, but I'd notice other things before swap even started... I now prefer memory-only swap because it works and can be designed in, specifically to eliminate a physical device - my #1 reason. Going without swap is not a valid option, since my testing shows performance is lower. Oh, and I 'dimension' that allocation to 10%.

Note: /etc/default/zramswap changes for Bullseye. New option names, and the default is now 50%!
CwF
 
Posts: 948
Joined: 2018-06-20 15:16

Re: Dimensioning of the SWAP space

Postby MicroScreen » 2021-02-25 04:49

CwF wrote:I hardly think it's there only to prevent OOM, that view simply has no merit in such a case as this. Something wants to swap out, so let it.

I now prefer only memory as swap because it works and can be designed in, specifically to eliminate a physical device, my #1 reason. To go without swap is not a valid option since my testing shows performance is lower. Oh, and I 'dimension' that allocation to 10%.

That's the way it is, and 99% of that is consistent with the experience of all those users who are more deeply involved in the subject. The very idea that swapping is something negative is completely absurd and only proves that this troll does not have the slightest knowledge of the matter. The memory management of individual applications is often fundamentally different from what one expects, and should therefore always be observed first. There are a lot of programs that start paging out their process data early in any case, long before RAM even threatens to become full, simply because the program was written that way. But this dilettante will probably never understand that, because he simply has too little sense for it.
It's better to just ignore him and his nonsense!
"Move forward and do what you think is best. If you make a mistake, you’ll learn something. But don't make the same mistake twice." - Akio Morita
MicroScreen
 
Posts: 23
Joined: 2019-01-10 17:20

Re: Dimensioning of the SWAP space

Postby trinidad » 2021-02-25 13:42

On servers that run many virtual machines or containers, zramswap allows you to optimise memory usage by swapping out data that's not often accessed, but when a user needs to access it, it will be available fast. It allows you to overcommit on memory with a negligible hit on your application performance (and often an improvement in performance where you can use more main memory for filesystem cache).


https://manpages.debian.org/buster/zram ... .1.en.html

https://fedoraproject.org/wiki/Changes/SwapOnZRAM
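The overcommit the manpage excerpt describes can be put into rough numbers. A back-of-envelope sketch; the 2:1 default compression ratio is purely an assumption for illustration, since real ratios depend entirely on the data being swapped:

```python
# Back-of-envelope zram sizing: how much logical swap a given RAM budget
# can hold at an assumed compression ratio, and what swapped-out pages
# actually cost in RAM. The 2:1 ratio is an illustrative assumption only.

def zram_logical_capacity_mb(ram_budget_mb, compression_ratio=2.0):
    """Logical swap capacity backed by ram_budget_mb of RAM."""
    return ram_budget_mb * compression_ratio

def zram_ram_cost_mb(swapped_out_mb, compression_ratio=2.0):
    """RAM actually consumed when swapped_out_mb of pages sit in zram."""
    return swapped_out_mb / compression_ratio

if __name__ == "__main__":
    print(zram_logical_capacity_mb(1024))  # a 1 GiB RAM budget backs 2 GiB of swap
    print(zram_ram_cost_mb(300))           # 300 MB swapped out costs ~150 MB of RAM
```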

Are there any comparative figures on data loss/errors with zramswap versus using specifically targeted disk swap partitions?

Compression caching... is still compression caching.

TC
You can't believe your eyes if your imagination is out of focus.
trinidad
 
Posts: 167
Joined: 2016-08-04 14:58

Re: Dimensioning of the SWAP space

Postby CwF » 2021-02-25 14:46

trinidad wrote: data loss/errors with zramswap to alternatively using specifically targeted disk swap partitions?

If this translates to "Is data recovered from swap on next reboot after a crash?"
I think the answer is NO. Could it be? Sure.

Good quote!
CwF
 
Posts: 948
Joined: 2018-06-20 15:16

Re: Dimensioning of the SWAP space

Postby trinidad » 2021-02-25 15:12

@CwF I know that the obvious trade-off with multi-voltage SSDs, or any multi-voltage chip storage, involves wear levelling, and certainly zramswap can alleviate that issue to some degree. But I always wonder about performance versus accuracy, and I remain a little reticent over our headlong rush into hardware chip performance these days, almost fearing an event horizon that could leave us with no older, actually working technology, it having been pushed out of the market by newer, faster stuff. I've read a few papers on VM memory management with some promising new ideas that make deployments and cloning/testing much more efficient, and some of the ideas are creative and amazing compared to what I used to have to slog through. I don't think software in general was quite prepared for these new possibilities, and I think as things go along there are going to be many improvements in how software uses swap. Swap is definitely an interesting subject for discussion, and it is currently cutting-edge territory for sysadmins.

@MicroScreen
There are a lot of programs that start paging out their process data early in any case, long before RAM even threatens to become full, simply because the program was written that way


Yes indeed, and there are new ways of handling swap more efficiently in such cases, such as "lazy" paging.

TC
You can't believe your eyes if your imagination is out of focus.
trinidad
 
Posts: 167
Joined: 2016-08-04 14:58

Re: Dimensioning of the SWAP space

Postby CwF » 2021-02-25 15:44

trinidad wrote:wear levelling and certainly zramswap can alleviate that issue to some degree

No alleviation, zramswap totally eliminates the issue the same as no swap.

trinidad wrote:I always wonder about performance versus accuracy

I'm not sure I get the point. Accuracy where?

If memory is the concern, maybe, but I do use ECC. I've had VMs and nested layers entirely in memory for weeks at a time. I do move their images to disk and back, and am more concerned with integrity while on storage. Never have had issue with cosmic rays!
CwF
 
Posts: 948
Joined: 2018-06-20 15:16

Re: Dimensioning of the SWAP space

Postby trinidad » 2021-02-25 15:48

Never have had issue with cosmic rays!


God I hate those damn things!

TC
You can't believe your eyes if your imagination is out of focus.
trinidad
 
Posts: 167
Joined: 2016-08-04 14:58

Re: Dimensioning of the SWAP space

Postby MicroScreen » 2021-02-26 06:48

CwF wrote:Never have had issue with cosmic rays!

Just wonderful, that's one of the few things that I hadn't considered until now. :lol:
I just hope that you won't experience any unpleasant surprises with neutrinos either. :D

@trinidad
Are there any comparative figures on data loss/errors with zramswap to alternatively using specifically targeted disk swap partitions?
Compression caching ... is still compression caching.

Well, nobody should claim that this is the ideal solution for every case; I would even vehemently reject it on old machines with very little RAM. The conventional setup of swap partitions on hard drives certainly still has its justification, but not always. However, the real issue here is still the dimensioning of an adequate solution, whatever it looks like in the end. That means finding the optimum of performance and data security through the most suitable individual solution.

Accordingly, the essential procedure is:
    1. Installation according to recommended empirical values* as a starting point,
    2. Measurement and observation of the swapping behaviour,
    3. Adjustment and re-dimensioning of the swap space as required.
* David Both is of course only one of many who distil their experience into simple rules of thumb :!:
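Step 1 needs a number to start from. A minimal sketch of one widely-quoted rule of thumb (the exact brackets vary between authors, David Both's among them; the hibernation rule here is an additional common assumption, not anyone's official recommendation):

```python
# Step 1 of the procedure above: a starting-point swap size, to be
# corrected by the measurement and re-dimensioning of steps 2 and 3.
# Brackets are one common rule of thumb, not a universal standard.

def suggested_swap_gb(ram_gb, hibernation=False):
    """Suggested initial swap size in GB for a machine with ram_gb of RAM."""
    if hibernation:
        # Hibernation needs room for a full RAM image plus some slack.
        return ram_gb + max(ram_gb // 2, 1)
    if ram_gb <= 2:
        return 2 * ram_gb
    if ram_gb <= 8:
        return ram_gb
    return 8  # cap; large-RAM machines rarely benefit from more

if __name__ == "__main__":
    for ram in (1, 4, 16, 64):
        print(ram, "GB RAM ->", suggested_swap_gb(ram), "GB swap")
```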
"Move forward and do what you think is best. If you make a mistake, you’ll learn something. But don't make the same mistake twice." - Akio Morita
MicroScreen
 
Posts: 23
Joined: 2019-01-10 17:20

Re: Dimensioning of the SWAP space

Postby LE_746F6D617A7A69 » 2021-02-26 20:52

CwF wrote:Well LE, I did give real examples and did say it's 'observational'. There's nothing theoretical about it. I've tested zramswap to lock, it works and manages better than nothing. I don't notice its operation (...)
What I said is that having multicore CPUs *DOES NOT* improve performance in the case of compressed swap memory - this is technically impossible, as explained earlier - so we have a misunderstanding here.
Definitely, in most (but not all) cases compressed swap memory is faster than HDD-based swap, and in the case of an SSD it depends on the quality and age of that SSD (BTW, I just can't wait for PLC SSDs - they will be slowing down dramatically within a month).

Another thing is that Your 'observations' are subjective - e.g. what is the delay when switching between two desktop applications - is it 200ms or 10ms? Have You measured this?
Of course You didn't - but such apparently "small" delays are what kill servers and time-critical applications.

On desktop:
Try launching 2 huge applications as a test - the easiest (but not very accurate) way is to create 2 virtual machines which together use 2 times more memory than the host can offer - then You will at least start to see the problem.
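On the measurement point: delays of this kind can be timed rather than guessed. A minimal sketch that times the first touch of every page of a fresh anonymous mapping (which takes page faults; under memory pressure this would also include swap-in time) against a second pass over the now-resident pages:

```python
# Time page-fault cost: first touch of a demand-zero anonymous mapping
# vs. a second pass over the same, now-resident, pages. Under memory
# pressure the first pass would additionally include swap-in latency.

import mmap
import time

PAGE = mmap.PAGESIZE

def touch_all_pages(m):
    """Write one byte per page of mapping m; return elapsed seconds."""
    t0 = time.perf_counter()
    for i in range(0, len(m), PAGE):
        m[i] = 1  # forces the page resident on first touch
    return time.perf_counter() - t0

if __name__ == "__main__":
    m = mmap.mmap(-1, 64 * 1024 * 1024)  # 64 MiB anonymous mapping
    first = touch_all_pages(m)   # includes fault/zero-fill cost
    second = touch_all_pages(m)  # pages already resident
    print(f"first pass {first*1e3:.2f} ms, second pass {second*1e3:.2f} ms")
    m.close()
```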
Bill Gates: "(...) In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."
The_full_story and Nothing_have_changed
LE_746F6D617A7A69
 
Posts: 470
Joined: 2020-05-03 14:16

Re: Dimensioning of the SWAP space

Postby MicroScreen » 2021-02-27 07:27

@ all of you:
Ignore LE_746F6D617A7A69; he isn't able to stick to the given topic or contribute anything constructive here!
His rubbish isn't worth looking into. The topic here is: Dimensioning of the SWAP space, not which solution is the "best" in each case. :roll:

There is enough reading material, both in the literature and online, that covers instruction-level parallelism and task parallelism. Hardware-specific facts are also taken into account there, such as the L1 and L2 CPU caches, whose number and size play an extremely important role here. So how efficiently these options are used always depends on the respective kernel architecture and the software in question.

Nothing is impossible :!:
"Move forward and do what you think is best. If you make a mistake, you’ll learn something. But don't make the same mistake twice." - Akio Morita
MicroScreen
 
Posts: 23
Joined: 2019-01-10 17:20

Re: Dimensioning of the SWAP space

Postby LE_746F6D617A7A69 » 2021-02-27 08:31

MicroScreen wrote:@ all of you:
ignore LE_746F6D617A7A69, he isn't able to adhere to the given topic or to contribute something constructive here!
His rubbish isn't worth looking into. The topic here is: Dimensioning of the SWAP space and not which solution is the "best" in each case. :roll:
It's not me who started a sub-thread about compressed swap - apparently You have a serious problem with identifying who You are talking to... :lol:

MicroScreen wrote:There is enough reading material in the form of literature and also online that covers instruction-level parallelism and task parallelism. The hardware-specific facts are also taken into account here, such as the L1 and L2 CPU cashes, the number and size of which play an extremely important role here. So it always depends on the respective kernel architecture and the software used, how efficiently these options are used.

Better read that material instead of polluting this forum with your worthless posts.
Bill Gates: "(...) In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."
The_full_story and Nothing_have_changed
LE_746F6D617A7A69
 
Posts: 470
Joined: 2020-05-03 14:16
