There have been some good examples of system configurations and recommendations already. I will add my thoughts on the subject.
I am currently building a new PC very similar to yours. I have it nearly assembled but am still waiting on the DRAM modules before I can boot it, install the OS and applications, and start my performance tuning. I am using a Ryzen 9 3950X, 64 GB of 3600 MHz DRAM, and two 1 TB PCIe 4.0 x4 M.2 NVMe SSDs, all on an ASUS X570 motherboard. I also have a 6 TB slow (5400 RPM) backing HDD in the configuration.
I am planning to use one M.2 SSD (the one driven directly from the CPU) as the C: boot drive. It will contain the Windows 10 Pro OS and all key programs and applications. The second M.2 SSD (driven from the X570 chipset) will contain my most frequently used data as well as sundry seldom-used programs and utility applications. The slower, larger HDD will be used for "online backups" of data; it will contain image data from older or completed imaging projects. Finally, a similarly sized external HDD will be used for "offline backups" of older data and for periodic backups.
In my working career before retirement, I worked on the design, bring-up, test, and debug of large, very high performance computer systems (think supercomputer building blocks). The one take-away from those years is that when it comes to performance, the key is getting data latency as low as possible. In many if not most cases, a computer system that can get data into the processor's cache fastest will win, even against systems with faster CPUs but longer latency. I am assembling this system with those things and more in mind.
In configuring the system for PixInsight, we need to make a distinction between PixInsight "swap directories" and the OS paging / swap files. PI uses its swap directories for temporary data storage during processing operations. With multiple directories defined, different execution threads can better overlap their storage I/O operations. These "swap files" are different from the Windows (or Linux or Mac) OS swap files. Those OS-controlled swap files serve as "virtual memory" when applications need more RAM than is physically present in the hardware. As physical RAM fills up and applications require still more, the OS writes some of the least recently used pages out to the storage system (SSDs and HDDs) and allocates the freed RAM to whatever thread is requesting more.
These OS swap files can be fixed in size or can be managed by the OS, which grows or frees storage automatically as needed for virtual memory operation. Those size choices / limits can be configured from the Control Panel in Windows. (PS: if you have ever gotten the message "Out of Memory" inside PI, it often means that not only was all physical RAM completely filled, but Windows had also run out of storage space on your drives and could not satisfy the RAM request even by moving data from RAM to virtual memory. The out-of-memory condition can be helped by freeing up storage space on your drives if they are running low, or by allowing the OS to use more available drive space for virtual memory. The added space can be allocated across multiple drives.)
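Before starting a big run, it is worth a quick check of how much free space each drive actually has left for virtual memory. Here is a minimal sketch using only Python's standard library; the drive paths passed in are just examples, not anything PI-specific:

```python
import shutil

GB = 1024 ** 3

def report_free_space(paths):
    """Report total and free space (in GB) for each given drive or path."""
    free = {}
    for path in paths:
        usage = shutil.disk_usage(path)
        free[path] = usage.free / GB
        print(f"{path}: {usage.total / GB:.0f} GB total, "
              f"{usage.free / GB:.0f} GB free")
    return free

if __name__ == "__main__":
    # On Windows you would pass drive roots such as "C:\\" and "D:\\".
    report_free_space(["/"])
```

If any drive that hosts an OS-managed page file is nearly full, that is the first thing to fix before blaming PI for "Out of Memory" errors.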
I will also note something about systems specifically intended to run PixInsight. Many advocate RAM disks for improving PixInsight Benchmark scores. They do help, and they work well. However, those swap files / directories are only used by PI. When they fill up, as can happen on large Image Integrations, Local Normalizations, and Drizzle Integrations, PI loses the advantage of parallelism for its normal temp files on your storage devices. At the same time, PI is likely allocating very large amounts of virtual memory to perform its processing functions. Once PI's need for RAM exceeds the available physical RAM, the OS will begin to page data out to storage and back in as needed. This adds load to the storage bandwidth and negates some of the advantage of using RAM as temporary storage.
On large integrations of some of my imaging data, I found PI ran through my current system's 32 GB of RAM very quickly. Had I used a RAM disk, it would have filled up even faster and begun swapping virtual memory even sooner. Once swapping of virtual memory starts, storage bandwidth to the PI swap directories drops. For small integrations this is not a concern, and RAM disks can help a lot. As a case in point, I once used the Windows Task Manager to watch virtual memory allocations as I integrated ~850 images of 134 MB each. As ImageIntegration worked, total allocated memory grew to over 150 GB. All of that was being paged in and out of the Windows OS page space. A RAM disk can actually hinder processing speed in such cases if it consumes too much of the physical RAM. (In fact, the PI swap directory files placed on a RAM disk could be swapped out of RAM to storage anyway as virtual memory needs increase.)
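To put rough numbers on that example: even the naive lower bound of holding every input image in memory once already dwarfs 32 GB of physical RAM. This little back-of-envelope calculation (the "every image resident once" assumption is mine and understates PI's real working set, which I observed at over 150 GB) shows why paging is unavoidable:

```python
def estimated_working_set_gb(n_images, mb_per_image):
    """Naive lower bound: every input image held in RAM once."""
    return n_images * mb_per_image / 1024

# My ~850-image integration of 134 MB frames:
raw_gb = estimated_working_set_gb(850, 134)
print(f"Raw input data alone: {raw_gb:.1f} GB")  # ~111 GB

# With 32 GB of physical RAM, most of that must live in virtual memory
# (ignoring the OS and all other overhead, so this is optimistic).
physical_ram_gb = 32
print(f"Spillover to page space: at least {raw_gb - physical_ram_gb:.1f} GB")
```

Any RAM carved out for a RAM disk comes straight out of that 32 GB, so it only moves the point at which paging starts earlier.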
Back to how I will configure my new system -- here are my plans:
- SSD #1 (CPU) = Windows OS, key programs, Windows-managed swap file.
- SSD #2 (X570) = Lesser-used programs, Windows-managed swap file, data storage.
- HDD (SATA-3) = Accessible backup data and / or seldom-used data files.
- SSD #1 = 8 to 12 swap directory entries
- SSD #2 = 8 to 12 swap directory entries
I will run both the PI Benchmark and a few medium-sized ImageIntegration runs to tune the PI swap directory counts. From past use of my older i7-3930K (6-core) system, my instinct tells me that 12 may be close to the optimum number of PI swap directories on each SSD for this system, but I will likely try as many as 16. It would be nice if we could allocate one per core (or thread), but even an SSD probably cannot handle that many concurrent I/O operations, even if it had enough transfer bandwidth.
If I were building the system solely for PixInsight, I might be tempted to put the boot-drive SSD on the X570 PCIe connections and run the data SSD from the CPU's PCIe lanes. The thought there is that once the programs are loaded, lower latency to the data might squeeze a little more performance out of the system. There are probably many configuration tricks that could be tried when tuning a system for a single program. I may be able to report initial results later next week.
Edited by jdupton, 23 January 2020 - 05:19 PM.