As stated, 600+ files were easily processed in less than 8 hours.
This is a dependable Samsung 970 EVO Plus 2TB NVMe M.2 SSD. I bought it on Amazon and have never had a lick of trouble with it. And I do realize my PC is aging... unfortunately the motherboard is maxed out with respect to the processor, but I could still max out the memory at 64GB DDR4 3600 (double what I have now).
CPU load is only about 38%, and memory use ranges from 50% to 75% of the RAM. Interesting.
WBPP is not something I like to use. I much prefer to do the steps manually in PI, which was easy when I was semi-retired, but since I've gone back to work it helps to let things run during the day while I'm out of the house. I've done a couple of side-by-side comparison runs but always get inconsistent results; manual is very repeatable. This was one of those situations where I just threw everything into the WBPP blender, hit "smoothie", and left the house. I never expected this to be a week-long commitment!
The second reason I ran WBPP was that NSG has supposedly been supplanted by WBPP's Local Normalization. I wanted to compare results, and NSG literally gave up and crashed after about 525 frames.
And this truly isn't about the target; the short exposures are warranted due to the potential to blow out the core of this particular object. Longer exposures don't always lead to improved images.
(Pardon my frank feedback. I don't know what you do and don't know.)
During the CPU-intensive portions of processing, it should be using as many of the cores as you allow it to, basically running them at full power. It shifts down during integration.
If your SSD has been solid, I'd be pretty surprised if it were giving you issues now. You can try one of the various benchmarking tools out there to make sure it is operating properly. The "fakes" I referenced include forgeries of popular drives, like that Samsung and others, built with crap internal electronics. It is easy to run into rubbish products on shared marketplaces like Amazon and eBay, so it seemed worth noting.
You do run into steeper limitations as you process deeper stacks of files. Absent details of how high-resolution these files are, and considering that you can stack 500 in a much more reasonable amount of time, the "right" approach here sounds like splitting the project into, say, three or more integrations, and then integrating those integrations. You could do it manually and still be quite efficient (a rough sketch of the batching idea is below). It is mainly the stacking stage that becomes really intensive, due to the volume of data being managed; other steps, such as star alignment, just work iteratively through the dataset you have provided. There would be greater slowdowns on some bulk steps, such as analyzing the full collection of files for the local normalization master and for weighting. If you broke that part of the process down into separate integrations, I'd suggest using NSG instead. It is less prone to creating artifacts or issues across multiple applications than LN, and can also be used for file weighting with great results.
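Not PixInsight code, just a hedged illustration of the batching idea: a short Python sketch (the folder name, file extension, and batch count are made-up placeholders) that splits a set of registered frames into a few file lists, one per sub-integration, which you would then integrate separately and combine in a final pass.

```python
from pathlib import Path

# Hypothetical locations and settings; adjust to your own layout.
FRAME_DIR = Path("calibrated_registered")   # registered frames ready to integrate
BATCH_COUNT = 3                             # e.g. ~200 frames per batch for ~600 files

frames = sorted(FRAME_DIR.glob("*.xisf"))   # or *.fits, depending on your workflow

# Split the frame list into roughly equal, contiguous batches.
batch_size = -(-len(frames) // BATCH_COUNT)  # ceiling division
batches = [frames[i:i + batch_size] for i in range(0, len(frames), batch_size)]

# Write one plain-text list per batch; load each list into ImageIntegration,
# then integrate the resulting masters together in a final pass.
for n, batch in enumerate(batches, start=1):
    list_file = Path(f"integration_batch_{n:02d}.txt")
    list_file.write_text("\n".join(str(p) for p in batch))
    print(f"{list_file}: {len(batch)} frames")
```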
WBPP is just running the regular processes via scripting. So if something is off, it can usually be resolved by figuring out which setting needs to be adjusted for the corresponding process in WBPP. That said, with such a processing commitment, the manual process can be a lot easier to work through, since you can check at each step that things are being done as desired. It's a drag to find out that something was set undesirably in WBPP after hours of processing.
In my opinion, NSG is still superior to local normalization in every way, other than local normalization being comfortably integrated into WBPP for automation. And local normalization frequently does a very good job. This may be another case where you are RAM-limited and could get things done by working in smaller project bites. PixInsight gets crashy when it runs into RAM issues.
Regarding exposures, this is just feedback to side-step headaches like this, from someone who has ended up in exactly this position with data from a RASA. You can generally reduce your ISO/gain to take longer exposures without blowing out the core of your target. On the modern Sony-sensor astronomy cameras, this would be a good opportunity to use gain 0 instead of the dual-gain stage, for example. About the only solid deep-sky exception I can think of is an extremely fast optic shooting M42, where avoiding a blown core can still force extremely short exposure times. Others like M45 tend to afford more headroom despite having very bright stars. In that case, or similar cases, you can consider two rounds of exposures: a shorter series for the highlights (e.g. the core of Orion) and a longer series for the deeper details. Combine with HDRComposition, or use masks, LinearFit, and PixelMath, or blend in Photoshop; whatever works (a simple sketch of the mask-and-blend idea is below). You end up with a small fraction of the files to work with, and you will also get materially better signal on the fainter details relative to time spent exposing.
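For what it's worth, here is a minimal sketch of the mask-and-blend idea in Python/NumPy, not HDRComposition itself. It assumes the short and long integrations are still linear, registered to the same geometry, and already matched to each other (e.g. with LinearFit); the file names and mask thresholds are made up for illustration.

```python
import numpy as np
from astropy.io import fits

# Hypothetical file names for the two matched, registered, linear integrations.
long_stack = fits.getdata("integration_long.fits").astype(np.float64)
short_stack = fits.getdata("integration_short.fits").astype(np.float64)

# Build a smooth highlight mask from the long stack: 0 in the faint regions,
# ramping to 1 where the long exposure approaches saturation.
lo, hi = 0.80, 0.95                      # assumed thresholds on normalized data
norm = long_stack / long_stack.max()
mask = np.clip((norm - lo) / (hi - lo), 0.0, 1.0)

# Blend: keep the long exposure's faint signal, and swap in the short
# exposure where the long exposure's core is blown out.
hdr = long_stack * (1.0 - mask) + short_stack * mask

fits.writeto("integration_hdr.fits", hdr.astype(np.float32), overwrite=True)
```

In PixInsight terms, the same thing is done with a highlight mask on the long integration and a PixelMath blend (or just HDRComposition), but the arithmetic above is the whole trick.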