
PixInsight WBPP has been running for 60 hours


#1 KTAZ

KTAZ

    Gemini

  • *****
  • topic starter
  • Posts: 3,450
  • Joined: 09 Apr 2020
  • Loc: Scottsdale, AZ

Posted 04 November 2022 - 11:17 PM

One integration...yep...60 hours.

 

[screenshot attachment: ORxJRgv.jpg]

 

I upgraded my second SSD...wondering if I have a swap file issue or what. Seems crazy to terminate the process after this much time, but it's only integrated channel 2 of 3 right now. At this pace I'll be waiting another 24 hours to finish.

 

Thinking about shutting it down and starting the integration over. Yeah, it's 1599 frames, but I have done 600 or 700 in less than a day. Something has changed.

 

Any thoughts?


Edited by KTAZ, 04 November 2022 - 11:17 PM.

  • Skywatchr likes this

#2 Forward Scatter

Forward Scatter

    Surveyor 1

  • -----
  • Posts: 1,755
  • Joined: 22 Jul 2018
  • Loc: Wandering the PNW

Posted 04 November 2022 - 11:54 PM

I just have to ask....What object did you acquire 1599 subs of?

 

Of course if you stop it now, you do have the registered and normalized frames to restart the integration at a later time.



#3 Marcelofig

Marcelofig

    Viking 1

  • -----
  • Posts: 886
  • Joined: 21 Jan 2015

Posted 05 November 2022 - 12:09 AM

1599 is a lot of frames, and they are also very large, so that much processing time is not entirely unexpected.

 

What machine are you using: PC, Mac, Linux? What specs?

 

And by the way, ask in the PI forum as well; you will probably get a more accurate answer there.



#4 Jim Waters

Jim Waters

    Fly Me to the Moon

  • *****
  • Posts: 7,439
  • Joined: 21 Oct 2007
  • Loc: Phoenix, AZ USA

Posted 05 November 2022 - 01:21 AM

If you can, add a second SSD and configure the PI swap space there as well.

 

How many CPUs/cores and how much memory do you have?



#5 arbit

arbit

    Viking 1

  • -----
  • Posts: 774
  • Joined: 19 Feb 2012

Posted 05 November 2022 - 01:28 AM

You can always try Siril :-)

I do that for large runs. Can't say I've seen much of a difference in the integration. Post is a different matter.

Sent from my SM-S908E using Tapatalk

#6 maxsid

maxsid

    Viking 1

  • *****
  • Posts: 994
  • Joined: 11 Sep 2018
  • Loc: Sunnyvale, CA

Posted 05 November 2022 - 01:39 AM

I only used PI/WBPP a handful of times. Not much experience there.

I integrate my frames with APP.

With an increasing number of frames, the integration time gets progressively longer.

 

For a large number of frames I process my data in 100-frame chunks and then integrate the resulting masters.

Much faster this way.
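A rough sketch of that chunked approach, assuming the calibrated and registered frames are already on disk. The folder name, file extension, and group size below are placeholders, and the actual integrations would still be run in APP or PixInsight on each list:

from pathlib import Path

GROUP_SIZE = 100  # frames per intermediate master (tune to your RAM)

# Collect the registered light frames (placeholder folder and extension).
frames = sorted(Path("registered").glob("*.xisf"))
groups = [frames[i:i + GROUP_SIZE] for i in range(0, len(frames), GROUP_SIZE)]

# Write one file list per chunk; integrate each list, then integrate the masters.
for n, group in enumerate(groups, start=1):
    Path(f"chunk_{n:02d}.txt").write_text("\n".join(str(f) for f in group))

print(f"{len(frames)} frames split into {len(groups)} groups of up to {GROUP_SIZE}")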



#7 Rasfahan

Rasfahan

    Vanguard

  • -----
  • Posts: 2,275
  • Joined: 12 May 2020
  • Loc: Hessen, Germany

Posted 05 November 2022 - 02:17 AM

With the huge number of subs you have serious memory consumption. PI will try to optimize for the physical RAM in your machine, but that's not always successful. Once swapping starts, performance will tank, so adding another SSD to the swap will probably not make much difference. Let it run to completion. Max's suggestion to process in chunks is also a good one: pixel rejection will do its thing as well with 100 subs as with 1000, and memory consumption will be far less.
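To put rough numbers on that memory pressure: assuming roughly 16 MP frames (the OP's ASI071MC is about 4944 x 3284) held as 32-bit floats, a quick estimate looks like this; the real footprint depends on how ImageIntegration buffers and rejects pixels.

# Back-of-envelope memory estimate for a single 1599-frame integration.
# Frame size and pixel format are assumptions, not measured values.
width, height = 4944, 3284        # approx. ASI071MC resolution
bytes_per_pixel = 4               # 32-bit float per channel
n_frames = 1599

per_frame_gb = width * height * bytes_per_pixel / 1024**3
total_gb = per_frame_gb * n_frames
print(f"~{per_frame_gb * 1024:.0f} MB per frame, ~{total_gb:.0f} GB for all frames at once")
# Roughly 60 MB per frame and on the order of 100 GB total, far beyond
# 32 GB of RAM, so the OS page file ends up doing the heavy lifting.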


  • jdupton and Psychlist1972 like this

#8 astroboyabdi

astroboyabdi

    Ranger 4

  • -----
  • Posts: 315
  • Joined: 17 Apr 2019

Posted 05 November 2022 - 02:59 AM

I've noticed long integration times even for short runs (18 lights with master darks and flats). This is on the Mac, and it has been like this since I updated to the latest PI/WBPP.

It must be something with the LN reference and local normalization parts; it seems quicker on my older Windows machine. Before the update, my M1 Max with 64GB of RAM was doing the same in under 15 minutes, but it takes at least double that now.

#9 R Botero

R Botero

    Skylab

  • -----
  • Posts: 4,274
  • Joined: 02 Jan 2009
  • Loc: Kent, England

Posted 05 November 2022 - 03:02 AM

1599 images? Seriously? Those are my stacks for planetary but with software that handles them  in small regions of interest. 

As Torben says, diminishing returns for pixel rejection after about 100 frames. Take longer exposures. What are you imaging at 20s?

Maximum I’ve stacked is 9 panels of OSC APS-C sized frames (265 in total, 5 mins each) and that was excessive I think now. 

 

Roberto


Edited by R Botero, 05 November 2022 - 03:04 AM.


#10 arbit

arbit

    Viking 1

  • -----
  • Posts: 774
  • Joined: 19 Feb 2012

Posted 05 November 2022 - 05:44 AM

I had a smiley on my earlier comment, but seriously, if you do a large number of subs (for whatever reason), try Siril for pre.

If you prefer a WBPP-type batch processing UI, also download its companion app, Sirilic.

I'd recommend the dev version if you are comfortable with that.

Sent from my SM-S908E using Tapatalk

#11 jdupton

jdupton

    Aurora

  • *****
  • Posts: 4,897
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 05 November 2022 - 08:00 AM

KTAZ,

 

   I would just let it go. If you stop it now, you would need to completely recreate that last integration step, and you are already as much as halfway through.

 

   You are almost certainly memory limited for such a large single integration. That means that the VM / OS Page Space is working like crazy. There is no way around that. I hope you have plenty of free space on your primary drive and the OS doesn't run out. It would be a shame to get almost there and have the task crash due to an "out of memory error" (which is really an out of disk space issue). Keep in mind that PI Swap space is not used at all for preprocessing tasks and adding more will do nothing. Creating and dedicating memory to PI Swap Space in the form of RamDisk would make things much worse.
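One low-effort way to keep an eye on that while it runs (this is not from the thread, just a generic Windows check using the third-party psutil package; the drive letter is an assumption):

import shutil
import psutil  # pip install psutil

# Free space on the drive that holds the OS page file (assumed C: here).
free_gb = shutil.disk_usage("C:\\").free / 1024**3
vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"C: free space   : {free_gb:.1f} GB")
print(f"RAM in use      : {vm.percent:.0f}% of {vm.total / 1024**3:.0f} GB")
print(f"Page file in use: {sw.used / 1024**3:.1f} GB of {sw.total / 1024**3:.1f} GB")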

 

   Just keep on trucking... 

 

 

John



#12 James Peirce

James Peirce

    Viking 1

  • *****
  • Posts: 645
  • Joined: 21 Aug 2020
  • Loc: Salt Lake City, Utah, United States

Posted 05 November 2022 - 08:15 AM

Look at the console. It should be giving you feedback along the way, so you know the process is continuing. There is one stage at the end of image integration where it doesn’t update, but at that stage you are close to finished and it would be a shame to terminate the operation. Be patient.

The amount of time integration takes will be very significantly affected by some of the settings you choose in WBPP.

Modestly fast storage helps somewhat (e.g. working from a 1000 MB/s USB-C SanDisk Extreme Pro SSD), but file I/O only goes so far. It certainly helps if the program is RAM-starved, though nowhere near as much as having enough RAM does. Usually the main factor that speeds PixInsight up is more/better CPU cores.

I assume you are stacking unguided and untracked shots from a tripod? If these are tracked shots, it’s time to re-evaluate your exposures (e.g. adjust gain if necessary to capture longer exposures).

Another option if you *are* working with untracked, unguided exposures, is to stack groups of, say, 100, 200, 300—whatever works best for the system—and then stack those stacks. It would be ideal to work on them all for the initial steps up through StarAlignment, or even LocalNormalization, and just do the integrations on selected groups.
  • psandelle and jdupton like this

#13 Skywatchr

Skywatchr

    Soyuz

  • *****
  • Posts: 3,574
  • Joined: 03 Jun 2006
  • Loc: North-Central Pa.

Posted 05 November 2022 - 08:32 AM

KTAZ said:

One integration...yep...60 hours.

 

[screenshot attachment: ORxJRgv.jpg]

 

I upgraded my second SSD...wondering if I have a swap file issue or what. Seems crazy to terminate the process after this much time, but it's only integrated channel 2 of 3 right now. At this pace I'll be waiting another 24 hours to finish.

 

Thinking about shutting it down and starting the integration over. Yeah, it's 1599 frames, but I have done 600 or 700 in less than a day. Something has changed.

 

Any thoughts?

Possibly the "new" SSD is misbehaving too and is having a "blast" at error correction.  Something you won't see on a GUI unless you specifically monitor for it.  I would just let it run to completion for now.  Then go back and check for any hardware performance anomalies.  That's a lot of frames to process, and different regions of SSD memory are being utilized.  Mind you, it's just a "possibility" in addition to what was already mentioned.

There have been reports from some "unlucky" ones of SSD problems in certain models, and production runs.

I've called myself "Lucky Eddy" many times because I seem to pick the only "defective" item out of a thousand. lol.gif



#14 8472

8472

    Vostok 1

  • -----
  • Posts: 196
  • Joined: 20 Jan 2018

Posted 05 November 2022 - 08:38 AM

arbit said:

I had a smiley on my earlier comment, but seriously, if you do a large number of subs (for whatever reason), try Siril for pre.

If you prefer a WBPP type batch processing UI, also download its Companion app Sirilic.

I'd recommend the dev version if you are comfortable with that.

Sent from my SM-S908E using Tapatalk

This.

 

It makes most of the other options seem painfully slow.


Edited by 8472, 05 November 2022 - 08:40 AM.


#15 KTAZ

KTAZ

    Gemini

  • *****
  • topic starter
  • Posts: 3,450
  • Joined: 09 Apr 2020
  • Loc: Scottsdale, AZ

Posted 05 November 2022 - 08:56 AM

Win 10, i7-7700K (4 cores, 8 threads), 32GB RAM, 84GB page file space, 2TB SSD for operating system and PI, 2TB SSD for processing space.

 

Was looking at a few other sessions and my Pinwheel Galaxy was around 600 x 30". I know that didn't take anywhere near this much time. Even the largest projects I've processed were done in less than 8 hours. Something is amiss. I'm going to let this finish but then check that new drive and the way the swap files are setup. It has to be something with that.



#16 James Peirce

James Peirce

    Viking 1

  • *****
  • Posts: 645
  • Joined: 21 Aug 2020
  • Loc: Salt Lake City, Utah, United States

Posted 05 November 2022 - 09:45 AM

KTAZ said:

Win 10, i7-7700K (4 cores, 8 threads), 32GB RAM, 84GB page file space, 2TB SSD for operating system and PI, 2TB SSD for processing space.

 

Was looking at a few other sessions and my Pinwheel Galaxy was around 600 x 30". I know that didn't take anywhere near this much time. Even the largest projects I've processed were done in less than 8 hours. Something is amiss. I'm going to let this finish but then check that new drive and the way the swap files are setup. It has to be something with that.

How many files were in those latest projects?

 

As an aside, why 30 second exposures instead of longer when that would massively reduce your post-processing overhead and also lend improvement to the final image?

 

P.S. There are fake storage drives on the market. Which one did you pick up? Your specifications look reasonable here, but they would lend themselves to long processing times: RAM is low for this sort of overhead (depending on the resolution of those files), and the processing power is limited but not poor. If you could upgrade the RAM, that would probably ease the reliance on swap storage, but, at first blush, you could massively simplify this by capturing longer and thus fewer exposures.


Edited by James Peirce, 05 November 2022 - 09:48 AM.


#17 KTAZ

KTAZ

    Gemini

  • *****
  • topic starter
  • Posts: 3,450
  • Joined: 09 Apr 2020
  • Loc: Scottsdale, AZ

Posted 05 November 2022 - 10:31 AM

As stated, 600+ files were easily processed in less than 8 hours.

 

This is a dependable Samsung 970 EVO Plus SSD 2TB NVMe M.2. Bought on Amazon and I've never had a lick of trouble with them. And I do realize that my PC is aging...unfortunately the motherboard is maxed out with respect to processor, but I could still max out the memory to 64GB DDR4 3600 (that would be double).

 

CPU loading is only about 38% and Memory use is ranging from 50% to 75% of the RAM. Interesting.

 

WBPP is not something I like to use. I much prefer to do the steps manually in PI, which was easy to do when I was semi-retired, but since I've been working again it helps to let things run during the day when I'm out of the house. I've done a couple of side by side comparison runs but always get inconsistent results. Manual is very repeatable. This was one of those situations where I just threw everything in the WBPP blender, hit "smoothie", and left the house. Never expected this to be a weeklong commitment!

 

The second reason I ran WBPP was that NSG supposedly has been supplanted by WBPP's Local Normalization. I wanted to see the results and NSG literally gave up and crashed after about 525 frames.

 

And this truly isn't about the target; the short exposures are warranted due to the potential to blow out the core on this particular object. Longer exposures don't always lead to improved images.


Edited by KTAZ, 05 November 2022 - 10:34 AM.

  • jdupton likes this

#18 James Peirce

James Peirce

    Viking 1

  • *****
  • Posts: 645
  • Joined: 21 Aug 2020
  • Loc: Salt Lake City, Utah, United States

Posted 05 November 2022 - 11:10 AM

KTAZ said:

As stated, 600+ files were easily processed in less than 8 hours.

 

This is a dependable Samsung 970 EVO Plus SSD 2TB NVMe M.2. Bought on Amazon and I've never had a lick of trouble with them. And I do realize that my PC is aging...unfortunately the motherboard is maxed out with respect to processor, but I could still max out the memory to 64GB DDR4 3600 (that would be double).

 

CPU loading is only about 38% and Memory use is ranging from 50% to 75% of the RAM. Interesting.

 

WBPP is not something I like to use. I much prefer to do the steps manually in PI, which was easy to do when I was semi-retired, but since I've been working again it helps to let things run during the day when I'm out of the house. I've done a couple of side by side comparison runs but always get inconsistent results. Manual is very repeatable. This was one of those situations where I just threw everything in the WBPP blender, hit "smoothie", and left the house. Never expected this to be a weeklong commitment!

 

The second reason I ran WBPP was that NSG supposedly has been supplanted by WBPP's Local Normalization. I wanted to see the results and NSG literally gave up and crashed after about 525 frames.

 

And this truly isn't about the target; the short exposures are warranted due to the potential to blow out the core on this particular object. Longer exposures don't always lead to improved images.

(Pardon my frank feedback. I don't know what you do and don't know.)

 

During the CPU intensive portions of processing it should be using as much of all the cores as you allow it to. It should basically be running them at full power. It shifts down during integration.

 

If your SSD has been solid, I'd be pretty surprised if it is giving issues now. You can try some of the various benchmarking tools out there to ensure it is operating properly. The "fakes" I referenced include forgeries of popular drives like that Samsung or others with crap internal electronics. It can be easy to run into rubbish products on shared marketplaces like Amazon and eBay, so seemed worth noting.

 

You do run into some ramped up limitations as you process deeper stacks of files. Absent details of how high resolution these files are, and taking into account that you can stack 500 in a much more reasonable amount of time, it sounds like the "right" approach to take here would be to split the project into, say, three or more integrations. And then integrate those integrations. You could do it manually and still be quite efficient. It is mainly the stacking stage which becomes really intensive due to the volume of data that is being managed. The other steps, such as star alignment, just work iteratively through the dataset you have provided. There would be greater slowdowns on some bulk steps, such as analyzing the full collection of files available for the local normalization master and weighting. If you broke that part of the process down into separate integrations, I'd suggest using NSG instead. It is less prone to creating artifacts or issues with multiple applications than LN, and can also be used for the file weighting with great results.

 

WBPP is just running the regular processes via scripting. So if something is off, it can usually be resolved by figuring out what setting needs to be adjusted for the process in WBPP. That said, with such a processing commitment, the manual process can be a lot easier to work through as you can check at each step that things are being done as desired. It's a drag to find out that something was set undesirably in WBPP after hours of processing.

 

In my opinion, NSG is still superior to local normalization in every way, other than local normalization being comfortably integrated into WBPP for automation. And local normalization frequently does a very good job. This may be another case where you are RAM limited, and could get things done by working in some smaller project bites. PixInsight gets crashy when it runs into RAM issues.

 

Regarding exposures, this is just feedback to side-step headaches like this, from someone who has ended up in just this position with data from a RASA. You can generally reduce your ISO/gain to take longer exposures without blowing the core in your target. On those modern Sony sensor astronomy cameras this would be a good opportunity to use gain 0 instead of the dual gain stage, for example. About the only solid deep space object exception I can think of is an extremely fast optic shooting M42 without blowing the core, where you can still end up with extremely short exposure times. Others like M45 tend to afford more headroom despite having very bright stars. In that case, or similar cases, you can consider two rounds of exposures. A shorter series of exposures for the highlights (e.g. core of Orion) and a longer series of exposures for the deeper details. Combine with HDRComposition, or use masks, Linear Fit, and PixelMath, or blend in Photoshop. Whatever works. And you get to end up with a small fraction of the files to work with, and you will also get materially better signal on those fainter details relative to time spent on the exposure.
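A minimal sketch of that masks / Linear Fit / PixelMath style blend, done here in plain NumPy on linear, background-subtracted stacks. The array names, clip point, and feather width are illustrative only, and HDRComposition in PixInsight handles the same job automatically:

import numpy as np

def blend_hdr(long_stack, short_stack, clip=0.95, feather=0.05):
    # Scale the short-exposure stack to the long one using pixels that are
    # well exposed in both (a crude stand-in for LinearFit).
    ok = (long_stack > 0.1) & (long_stack < clip)
    scale = np.median(long_stack[ok] / np.clip(short_stack[ok], 1e-6, None))
    short_scaled = short_stack * scale

    # Feathered mask: 0 where the long stack is safe, 1 where it is clipped.
    mask = np.clip((long_stack - (clip - feather)) / feather, 0.0, 1.0)
    return long_stack * (1.0 - mask) + short_scaled * mask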


Edited by James Peirce, 05 November 2022 - 11:14 AM.

  • KTAZ likes this

#19 KTAZ

KTAZ

    Gemini

  • *****
  • topic starter
  • Posts: 3,450
  • Joined: 09 Apr 2020
  • Loc: Scottsdale, AZ

Posted 05 November 2022 - 12:31 PM

All good feedback and appreciated. Heck, I don't know what I know. grin.gif

 

I'll reach out to whoever created NSG to see if I can figure out why it crashes for me at a certain point.

 

Regarding HDR Comp, that is a process that I'm working up to. I have a couple of targets that it would suit (you know who you are, NGC 6543), but this experiment with shorter exposures is just one more in the continuous sequence of learning that is the enjoyable part of this hobby.

 

Long wait times, on the other hand, not so much...



#20 KTAZ

KTAZ

    Gemini

  • *****
  • topic starter
  • Posts: 3,450
  • Joined: 09 Apr 2020
  • Loc: Scottsdale, AZ

Posted 06 November 2022 - 07:02 PM

Ok, it's done and after nearly 110 hours of chugging the results are less than spectacular. No biggie; I know I had some crap frames in there. Lesson learned. What I have may be of good use in an HDR Comp.

 

However, as soon as it finished up, I immediately started working on the swap files. When I updated the OS hard drive, I must not have done a very good job because I stopped at 4 swap files. This time I kept adding 2 swap files at a time and carefully checked, and rechecked, every incremental improvement. Here is what I started out with:

 

[screenshot attachment: TqY0N7Y.jpg]

 

And here is what I ended up with after going up to 17 swap files:

[screenshot attachment: 10hDdb6.jpg]

 

So to summarize:

 

Total score: +3%

CPU: Virtually unchanged

Swap: +37%

Transfer: +37%

 

We'll see how things go next time I stack, but I am not going to use WBPP with LocalNorm for large stacks any longer. It ain't worth the wait!


  • Skywatchr and psandelle like this

#21 jdupton

jdupton

    Aurora

  • *****
  • Posts: 4,897
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 06 November 2022 - 07:46 PM

KTAZ,

 

KTAZ said:

We'll see how things go next time I stack, but I am not going to use WBPP with LocalNorm for large stacks any longer. It ain't worth the wait!

   Unfortunately, optimizing the PI Swap Folders won't help your slow WBPP performance. Adding or optimizing PI Swap Folders cannot help preprocessing tasks since the folders are not used by the types of tasks that PI performs when calibrating and stacking images. You will not see any difference in your WBPP runs after making the changes. This is a common area of confusion regarding PI Swap Folders.

 

   PI only uses the Swap Folders when it changes an existing image. If you stretch an image, color calibrate an image, or run deconvolution on an image, then the Swap Folders are used to save the "Before and After" copies of the image so that you can do an Undo or Redo very quickly. Having an optimized Swap Folder configuration helps these tasks.

 

   If the process you are running creates a brand new image file from an existing image file as is done during Image Calibration, Cosmetic Correction, DeBayering, Registration, and Integration, PI never uses the Swap Folders. You cannot UnDo any of those processes so the Swap Folders do nothing. You can only delete the newly created file and run the process again.

 

   For details and a longer explanation, see the following thread.

https://www.cloudynights.com/topic/747617-thoughts-on-pixinsight-use-of-ramdisk/

While the thread discusses putting Swap Folders on RamDisks, it also applies to the usual case of adding and optimizing Swap Folders in general.

 

 

John


Edited by jdupton, 06 November 2022 - 07:49 PM.

  • KTAZ and Rasfahan like this

#22 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 31,183
  • Joined: 27 Oct 2014

Posted 25 November 2022 - 03:24 PM

 

 

James Peirce said:

As an aside, why 30 second exposures instead of longer when that would massively reduce your post-processing overhead and also lend improvement to the final image?

 

P.S. There are fake storage drives on the market. Which one did you pick up? Your specifications look reasonable, here, but would lend themselves to long processing time. RAM is low for this sort of overhead (depending on the resolution of those files) and limited but not poor processing power. If you could upgrade the RAM that would probably ease reliance on swap storage, but, at first blush, you could massively simplify this by capturing longer and thus fewer exposures.

Here's why.  Longer exposures can have drawbacks; I've seen that happen all the time here, because people's intuition tells them longer must be better.

 

Sub-exposure length fairly rapidly runs into a wall.  Pixels saturate.  That causes loss of star color and clips (blurs) highlights.

 

Total imaging time never runs into a wall (although there are diminishing returns).  So more subs are _always_ better.

 

When I bought my F2 RASA, I knew I'd be using short subs to reduce pixel saturation.  So I built a new processing computer to handle them.  I routinely shoot hundreds of 10-30 second subs, even at Gain zero.  Even then some pixels inevitably saturate; I try to limit the number to a few hundred in each frame.
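To illustrate that saturation wall with made-up numbers (none of these are the OP's or bobzeq25's actual values): the time to clip a pixel is roughly the full-well depth divided by the photon rate landing on it.

# Illustrative saturation-time estimate; all values are placeholders.
full_well_e = 50000      # full-well depth in electrons at gain 0
headroom_e = 500         # offset/bias headroom reserved
flux_e_per_s = 2000      # electrons per second on a bright core or star pixel

t_clip = (full_well_e - headroom_e) / flux_e_per_s
print(f"That pixel clips after about {t_clip:.0f} s")   # ~25 s with these numbers
# Total integration time has no such ceiling: stacking more short subs keeps
# improving SNR without clipping those pixels.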
 


Edited by bobzeq25, 25 November 2022 - 03:27 PM.


#23 Rasfahan

Rasfahan

    Vanguard

  • -----
  • Posts: 2,275
  • Joined: 12 May 2020
  • Loc: Hessen, Germany

Posted 25 November 2022 - 06:44 PM

Amazing work.

 


bobzeq25 said:

Here's why.  Longer exposures can have drawbacks; I've seen that happen all the time here, because people's intuition tells them longer must be better.

 

Subexposure fairly rapidly runs into a wall.  Pixels saturate.  Causes loss of star color, clips (blurs) highlights.

 

Total imaging time never runs into a wall (although there are diminishing returns).  So more subs are _always_ better.

 

When I bought my F2 RASA, I knew I'd be using short subs to reduce pixel saturation.  So I built a new processing computer to handle them.  I routinely shoot hundreds of 10-30 second subs, even at Gain zero.  Even then some pixels inevitably saturate, I try to limit the number to a few hundred in each frame.
 

The OP is using an ASI071MC on an F/10 or F/7 scope. Unless he's on Bortle 9, I don't think he'll be sky limited. He also states the object is very bright - which would mean the long total integration time isn't warranted. The thing to do is to have some short and some long exposures for the bright and the dim parts if the dynamic range of the sensor isn't sufficient. Same with the RASA. If you're saturating, just take a few dozen subs with short exposures and take longer subs for the faint stuff.  Best of both worlds, but a bit more involved in postprocessing, of course (… says the guy who is too lazy to do that himself with his Epsilon images and bought a workstation).


Edited by Rasfahan, 25 November 2022 - 06:45 PM.


#24 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 31,183
  • Joined: 27 Oct 2014

Posted 25 November 2022 - 08:12 PM

 

Rasfahan said:

If you're saturating, just take a few dozen subs with short exposures and take longer subs for the faint stuff.

 

Why would I want to do that?  I can capture dim stuff just fine with short subs stacked.  It's why we stack.

 

Here's an example.  The dim stuff is very dim.  And that's 10 second subs.  Count'em ten seconds.

 

662 of them <smile>

 

The CN jpg doesn't show the true quality.  Here's the real deal.

 

https://www.astrobin.com/t5173s/

 

Choosing your data acquisition technique because of your computer limitations is a truly rotten idea.

 

[attached image: Pleadies 2019 V3_smaller.jpg]
 


Edited by bobzeq25, 25 November 2022 - 08:14 PM.


#25 Rasfahan

Rasfahan

    Vanguard

  • -----
  • Posts: 2,275
  • Joined: 12 May 2020
  • Loc: Hessen, Germany

Posted 25 November 2022 - 09:48 PM

bobzeq25 said:

Why would I want to do that?  I can capture dim stuff just fine with short subs stacked.  It's why we stack.

 

Here's an example.  The dim stuff is very dim.  And that's 10 second subs.  Count'em ten seconds.

 

662 of them <smile>

 

The CN jpg doesn't show the true quality.  Here's the real deal.

 

https://www.astrobin.com/t5173s/

 

Choosing your data acquisition technique because of your computer limitations is a truly rotten idea.

 

[attached image: Pleadies 2019 V3_smaller.jpg]

But your Pleiades are actually a good example of the point I was making: you can clearly see the patterned read noise from your camera because your short exposures weren't sky limited (horizontal streaking). If you have that at f/2 with a more modern chip, taking 30s exposures at f/7 might not be the best strategy for the OP - unless the target is bright, in which case you don't need 1600 subs.
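For reference, one common rule of thumb for a "sky-limited" sub is to expose until the sky background contributes several times the read-noise variance; the factor and the example numbers below are illustrative, not measurements from either image.

# Sky-limited sub-length rule of thumb: sky_rate * t >= k * read_noise**2.
read_noise_e = 3.3       # camera read noise in electrons RMS (illustrative)
sky_rate_e_per_s = 0.8   # sky electrons per pixel per second (illustrative)
k = 10                   # how thoroughly to swamp the read noise (conventions vary)

t_min = k * read_noise_e ** 2 / sky_rate_e_per_s
print(f"Subs shorter than ~{t_min:.0f} s leave read noise visible in the stack")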



