Basically, the three tools for optimizing a bare-metal machine's swap behaviour have already been mentioned, and all three can be used to tune performance for any process that carries a heavy RAM tax. They are somewhat interrelated, and optimizing under heavy RAM usage may mean tweaking all of them: cache pressure settings, swappiness, and earlyoom.

Swap has also proved convenient for workloads where slower, linked-and-staged or timed data processing is necessary. In theory, machines with journaling file systems should have a swap partition for stability (this matters especially for scientific machines), and as the Intel CPU vulnerabilities showed, a swap partition is no less secure than memory on the chip. More than anything else, the design and integration of the specific software you run should be the determining factor for swap sizing. So you see, it really depends on what software you are using and what you are doing with it.

In the end, at the consumer baseline, modern hardware design trades away some stability in favour of cheaper performance anyway, so for a home user there is really no guarantee of trouble-free computing at the hardware level. As far as software goes... Debian stable is the way to go.
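For concreteness: the first two knobs are plain kernel sysctls (vm.swappiness and vm.vfs_cache_pressure), while earlyoom is a userspace daemon packaged in Debian. A minimal sketch of a persistent tuning fragment might look like this; the numeric values below are illustrative examples only, not recommendations, and the right values depend entirely on your workload:

```shell
# /etc/sysctl.d/99-swap-tuning.conf -- example values, tune for your workload

# vm.swappiness: how eagerly the kernel swaps out anonymous pages
# (default 60; lower values keep process memory resident longer)
vm.swappiness = 10

# vm.vfs_cache_pressure: the kernel's tendency to reclaim dentry/inode
# caches (default 100; lower values retain filesystem metadata longer)
vm.vfs_cache_pressure = 50
```

Apply without rebooting via `sudo sysctl --system`. On Debian, earlyoom can then be installed and enabled with `sudo apt install earlyoom` followed by `sudo systemctl enable --now earlyoom`.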
TC