MemSpeed needs to allocate enough memory to ensure that the measurement is not served from the L3 cache. By default, it allocates 1/4 of one NUMA node's worth of memory. There is an option to allocate even more memory so that inter-NUMA memory speed can also be measured.
If you run one or two instances of MemSpeed simultaneously, there should be no swapping. I do this regularly on my laptop.
It should be possible to limit this further to e.g. at most 10x the last-level cache size.
I have made a simple attempt at this and limited the amount of memory used (per rank) to 1GB, which should remain much larger than the largest cache level for a while still. The largest cache-like memory (see https://en.wikipedia.org/wiki/CPU_cache#MULTILEVEL) would be the eDRAM on Haswell CPUs with integrated graphics, which is apparently 128 MB.
Note that this renders the skip_largemem_benchmarks option somewhat redundant, though not fully: skip_largemem_benchmarks would still trigger on nodes with less than 4GB of memory per MPI rank used. It does become redundant on "typical" clusters, though, since we use only a small number of MPI ranks per node.
The test still takes approximately 2 min on my workstation, though at least 1 min of that is not the main memory test. This long runtime is probably a measure of how slow the CPUs in my workstation are by now (certainly compared to the amount of memory in it).