Come, all ye computer-savvy and software-savvy folk! Help solve this time-consuming challenge!
Computer-related questions with respect to solar system imaging & processing... For a while now, I've been watching CPU load, temperatures, and overall processing time in software such as PIPP and AS!3 during the initial stages of processing. I took some screenshots and timings on a rather large file of a size that is common for me now (the IMX183 sensor generates lots of data), and it really is taking a long time, which is annoying. So maybe there's a solution out there?

I didn't have any issue at all with time when I was doing several thousand frames from my IMX174 or 290MM sensors, and even the IMX183 is fine with ROI captures and planets; everything was fast. But full-pixel-array IMX183 20 GB files are a bear to process because of the time. The software is so slow, and uses so few physical cores of the CPU, that I wonder what it's doing. Because of that, maybe I would be better off running two instances of AS!3 and assigning each to its own set of cores, to do twice the work in the same time? Or maybe my CPU and architecture are the bottleneck?

I noticed that PIPP always uses a single core no matter what I tell it to do, so that one seems obvious to me: it will only speed up with more clock speed per core (which isn't happening, it seems; most clock speeds have stayed about the same for a decade while core counts keep expanding). So I'm always looking for ways to speed things up. I normally don't have an issue with smaller files in the 2-5 GB range, but doing three 20 GB files takes an hour or more. Or maybe the software is just limited and cannot be sped up? That's the question, and that's what I'd love to hear thoughts on from everyone with experience with this stuff!
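On the two-instance idea: a minimal sketch of launching jobs side by side from Python, assuming hypothetical command lines (the `pipp.exe` and `.ser` file names below are placeholders, not real paths). On Windows, each instance can also be pinned to specific cores from a console with `start /affinity <hexmask> <program>`.

```python
import concurrent.futures
import subprocess

def run_all(commands, max_workers=2):
    """Launch several command lines concurrently and collect exit codes.

    Two single-threaded jobs running side by side let the OS scheduler
    spread the work across otherwise idle cores."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda cmd: subprocess.run(cmd).returncode, commands))

# Hypothetical usage: one PIPP batch job per capture file (placeholder names):
# print(run_all([["pipp.exe", "moon_a.ser"], ["pipp.exe", "moon_b.ser"]]))
```

This only pays off when each job is mostly single-threaded, which matches what I see PIPP doing.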
AMD Bulldozer (yes, old, not a great architecture) chipset
AMD FX 8350, 4 GHz, 8 physical cores (I upgraded a while back from my old 4-core Phenom and saw a moderate increase in overall speed)
16 GB RAM (memory only saturates if I drizzle; otherwise it rarely even comes close to being fully used)
Samsung EVO SSD
I realize there are newer, better platforms and that mine is rather dated relative to what's been released over the past few years. That's why I'm asking these questions: I'm curious how much better a newer platform would be. I'm looking to save time, but only if the gain is really reasonable.

When I went from my 3.4 GHz 4-core Phenom to this 4 GHz 8-core Bulldozer, I saw roughly a 20-30% decrease in processing time, but I can't say whether the architecture and clock speed difference or the doubling of physical cores mattered more. Not all software uses the hardware the same way. When the software loads all 8 cores fully, it's obvious, and it's much faster at its work than the 4-core regardless of clock speed. But I've noticed a lot of my common software (PIPP, AS!3) does not use all cores all the time. PIPP never does; AS!3 will load all cores fully when it does Drizzle (I rarely use Drizzle, though some do, and I won't use it for this test), but in the other sequences it rarely uses all cores, and rarely at any significant load.

It would of course be wonderful to hear that another platform uses all cores and is much faster; I'd know what to do then. But if not, if this is common to everyone's platform out there, then it's the software, and that's fine. I'd rather know that, accept that it likely will never be faster, and then just explore running several instances of the software on dedicated cores to do more work over the same time at the same speed (if I can).
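As a back-of-the-envelope check on the 4-core vs. 8-core question, Amdahl's law gives the ideal speedup when only part of a job parallelizes. This is my own illustration, not anything measured from PIPP or AS!3:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup over one core when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Relative gain from doubling 4 cores to 8 at various parallel fractions:
for f in (0.25, 0.50, 0.75, 0.95):
    gain = amdahl_speedup(f, 8) / amdahl_speedup(f, 4)
    print(f"{f:.0%} parallel: 4 -> 8 cores is {gain:.2f}x faster")
```

Even a 95%-parallel job only gets about 1.7x from doubling 4 cores to 8, and a 50%-parallel job gets about 1.1x, so a roughly 25% time reduction from the Phenom-to-FX upgrade (cores plus the clock bump) is consistent with much of the pipeline being single-threaded.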
I would love to see anyone else crunch similar-class data and post times with their (hopefully better) platforms, to compare times and resource utilization! Especially someone with 16, 32, or even 64 cores, since those are available these days! Or even just someone with 4 to 16 cores but 5 GHz or faster clock speeds?
My Camera (the data producer):
Camera: ASI183MM (20 MP array)
Test file: full pixel array of the moon, 1,000 frames at 8-bit, a 20 GB file (this is newer to me; much bigger files than my IMX174 & 290MM make with 1,000 frames)
Screenshots of each phase in AS!3 (6,000 alignment points at size 72, set via auto) and PIPP
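For reference, the 20 GB figure checks out for an uncompressed 8-bit capture, assuming one byte per pixel and ignoring container header overhead (5496 x 3672 is the ASI183's full-frame resolution):

```python
# Rough size of an uncompressed 8-bit full-frame capture (assumption:
# one byte per pixel, header overhead ignored).
pixels_per_frame = 5496 * 3672   # ASI183 full resolution, ~20.2 MP
frames = 1000
size_gb = pixels_per_frame * frames / 1e9
print(f"~{size_gb:.1f} GB")      # -> ~20.2 GB
```

So file size scales linearly with frame count and pixel count, which is why ROI planetary captures stay small while full-array lunar runs balloon.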
Software I use:
I will separate out each piece of software to show what it's doing at each step of the processing, with associated CPU load, number of cores used, memory used, etc. The time is what I'm interested in; I couldn't care less if all the work were done on one or two cores, if it meant less time. Some processes use the full CPU and load up memory; some barely touch the resources. Clearly something is going on software-wise. Then again, it could also be the hardware to a degree, if the software isn't optimized for it but runs better on a different architecture or platform.
Approach one is all AS!3. All the processing of the entire file is done in AS!3 from start to finish. The time is captured. Sequence will be below.
Approach two is PIPP first (to limit the frames by quality only and reorder them down to the same stack size I use in AS!3), then AS!3 on the smaller, already quality-sorted file to reduce time. This method is frankly much faster.
The output is pretty much the same as far as I can tell, looking at the result of each approach after my normal display processing. So software efficiency seems to come into play with PIPP, and AS!3 behaves a lot more nicely with smaller chunks of data (and it's the same data in this case, just 25% of it).
I will put each approach and results in separate posts for clarity.
Edited by MalVeauX, 07 April 2020 - 11:55 AM.