CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


PixInsight - adding more data to a project at a later date

17 replies to this topic

#1 gerdastro

gerdastro

    Vostok 1

  • -----
  • topic starter
  • Posts: 198
  • Joined: 26 Jul 2020

Posted 25 May 2024 - 02:10 AM

Frequently people here mention how they added additional nights of exposures to their project, and it sounds as if it is a very casual/easy thing to do. For me - doing all my processing in PI - adding another night sounds like a rather big task, especially since stacking takes so long. Wondering if I am missing out on a more elegant solution.

 

I am aware on how to add multiple nights to PI's WBPP. But how about adding future data when you have already run WBPP? If e.g. I have data of 2 nights, I stack and process in PI and come to the conclusion more data is needed - then go out and get another night: do I need to throw away all old processed data and start from scratch feeding WBPP the 3 nights? Or is there a part I can skip?

Obviously the actual processing part later on has to be redone completely. But is there anything I can do with the original master files that WBPP has created?

 



#2 Navy Chief

Navy Chief

    Mariner 2

  • *****
  • Posts: 218
  • Joined: 15 Dec 2014

Posted 25 May 2024 - 06:25 AM

You can reuse the master dark, bias, and flat for the data you already processed. WBPP will recognize them as masters. You add them to WBPP exactly as you would any other bias or dark, and the script will show them as masters. If you used the same exposure times in the new data you can calibrate with the same darks; you will need new flats for every night of imaging.

When I add new data to an old project I restack everything in WBPP.
  • dswtan likes this

#3 imtl

imtl

    Cosmos

  • *****
  • Posts: 8,992
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 25 May 2024 - 06:28 AM

You can reuse the master dark, bias, and flat for the data you already processed. WBPP will recognize them as masters. You add them to WBPP exactly as you would any other bias or dark, and the script will show them as masters. If you used the same exposure times in the new data you can calibrate with the same darks; you will need new flats for every night of imaging.

When I add new data to an old project I restack everything in WBPP.


You need new flats whenever there is a change in your optical system. It has nothing to do with different nights of imaging.
  • dswtan likes this

#4 OrionSword

OrionSword

    Viking 1

  • *****
  • Posts: 607
  • Joined: 14 Aug 2010
  • Loc: Colorado

Posted 25 May 2024 - 06:46 AM

Also run "local normalization" to help improve the balance between sky values from different sessions.

 

As recommended by Juan Conejero:

 

Local normalization is not required if:

- There are no gradients.
- All of the images have the same signal levels.

 

Since the above conditions are virtually impossible with ground-based images, we can conclude that local normalization is necessary for optimal results, as the best way to achieve statistically compatible data for integration. This is why we consider LN an essential component of our standard preprocessing pipeline, and why we have included it in the WBPP script and are investing significant development resources in its implementation. Of course, the definition of 'optimal' varies largely, and other practical considerations (such as execution times and available hardware resources, among others) may impose conditions that can lead to disabling LN.
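To make the idea concrete, here is a toy sketch of why normalization matters before integration: it matches the location and spread of each frame's statistics so pixel rejection compares like with like. This is a deliberately simplified *global* normalization using numpy, not PixInsight's actual LocalNormalization algorithm (which computes spatially varying corrections); all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "frames" of the same scene: same signal, but frame B
# has a different sky offset and a different transparency/gain scale.
signal = rng.normal(100.0, 5.0, size=(64, 64))
frame_a = signal + rng.normal(0.0, 2.0, size=signal.shape)
frame_b = 1.3 * signal + 250.0 + rng.normal(0.0, 2.0, size=signal.shape)

def mad(x):
    """Median absolute deviation -- a robust estimate of spread."""
    return np.median(np.abs(x - np.median(x)))

# Match frame B's median (location) and MAD (scale) to frame A's so
# the two frames are statistically compatible for rejection.
scale = mad(frame_a) / mad(frame_b)
frame_b_norm = (frame_b - np.median(frame_b)) * scale + np.median(frame_a)

# After normalization the medians and spreads agree.
print(round(float(np.median(frame_a) - np.median(frame_b_norm)), 3))
```

The same reasoning applied per local region, rather than globally, is what lets LN also flatten session-to-session gradient differences.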



#5 Navy Chief

Navy Chief

    Mariner 2

  • *****
  • Posts: 218
  • Joined: 15 Dec 2014

Posted 25 May 2024 - 06:55 AM

You need new flats whenever there is a change in your optical system. It has nothing to do with different nights of imaging.


And even if nothing optically changes night to night you need new flats, as there is no way to guarantee that you don't have new dust motes, or that existing ones haven't moved.

Edited by Navy Chief, 25 May 2024 - 06:55 AM.


#6 imtl

imtl

    Cosmos

  • *****
  • Posts: 8,992
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 25 May 2024 - 07:58 AM

And even if nothing optically changes night to night you need new flats, as there is no way to guarantee that you don't have new dust motes, or that existing ones haven't moved.

Then according to that you should take new flats every time you finish a sub. The point is, new flats have nothing to do with night-to-night.

The general concept is that if the optics were not altered, the old flats should work.

If one is worried about new dust motes, then they should look at their calibrated lights and see what is going on. Also, they should check why their system is not a closed one.


  • dswtan and acrh2 like this

#7 gerdastro

gerdastro

    Vostok 1

  • -----
  • topic starter
  • Posts: 198
  • Joined: 26 Jul 2020

Posted 25 May 2024 - 08:49 AM

Let me summarize to ensure I understand this correctly:

 

- keep the masters (lights, darks, bias, flats) from session 1

- go out for second session (I always take flats per session anyway just to be on the safe side)

- now in WBPP, you pull in the masters from session 1, and the subs from session 2

- let WBPP run - I would keep most boxes checked as below

[Screenshot attachment: WBPP settings with most boxes checked]

 

 

This should give the same result as if I just did sessions 1 & 2 combined from scratch (at the sub level) - but it should be significantly faster because WBPP does not need to consider the individual subs from session 1 again and just uses their respective masters as the starting point instead, correct?



#8 PIEJr

PIEJr

    Vanguard

  • ***--
  • Posts: 2,477
  • Joined: 18 Jan 2023
  • Loc: Northern Los Angeles County, Southern California

Posted 25 May 2024 - 10:26 AM

Don't know because I consider myself a rank amateur with PixInsight.

But Visible Dark seems to have a good video on multiple files in Weighted Batch Preprocessing. (WBPP)

He talks about multiple nights and WBPP in THIS VIDEO.

 

Seems like if you retain your originals as separate files, maybe you could easily just tell PI to use the first, second, and subsequent files combined.

If you don't retain, then I sure don't know ... yet.

 

I tend to keep my originals, and let the processed files go into the dated folder. So I have a processed batch, and all the originals in one folder.

 

But PixInsight is a fragging mystery to me still.



#9 gerdastro

gerdastro

    Vostok 1

  • -----
  • topic starter
  • Posts: 198
  • Joined: 26 Jul 2020

Posted 25 May 2024 - 10:53 AM

Don't know because I consider myself a rank amateur with PixInsight.

But Visible Dark seems to have a good video on multiple files in Weighted Batch Preprocessing. (WBPP)

He talks about multiple nights and WBPP in THIS VIDEO.

 

Seems like if you retain your originals as separate files, maybe you could easily just tell PI to use the first, second, and subsequent files combined.

If you don't retain, then I sure don't know ... yet.

 

I tend to keep my originals, and let the processed files go into the dated folder. So I have a processed batch, and all the originals in one folder.

 

But PixInsight is a fragging mystery to me still.

Thanks. This is the video from which I learned how to process multiple nights in WBPP. But my point is how to add another night _later_, when you have already been through WBPP and got your master files ... and skip some of the work that you have already done, if possible.



#10 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 25 May 2024 - 10:57 AM

It is far better to have WBPP rerun from the beginning. You do need frames that were centered on the same or nearly the same celestial coordinates and at the same rotation angle. If not, they may not integrate well. 

 

The main trick in WBPP is to get it to apply the flats to the corresponding nights of imaging. The default is to make a single Master Flat from all the flats, which we do not want. We can add a Grouping keyword on the preprocessing side to fix this. 

 

In my light frames I have a date in the file name. As such, if I enter 2024 as a preprocessing grouping string, it will group them by the next string, which is month and day. We can alternatively put each session in a folder named Session 1, Session 2, etc. and use Session as a keyword. 
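The filename-based session grouping described here can be sketched in a few lines. WBPP does this internally via its grouping keywords; the filenames and date pattern below are hypothetical, purely to show the idea of bucketing frames by night so each night calibrates against its own flats.

```python
import re
from collections import defaultdict

# Hypothetical light-frame filenames with the capture date embedded,
# similar to how capture software such as NINA names files.
files = [
    "M51_2024-05-20_0001.fits", "M51_2024-05-20_0002.fits",
    "M51_2024-05-21_0001.fits", "M51_2024-05-25_0001.fits",
]

# Group by the date token -- the same idea as a WBPP grouping keyword:
# frames from the same night get matched with that night's flats.
sessions = defaultdict(list)
for name in files:
    date = re.search(r"\d{4}-\d{2}-\d{2}", name).group()
    sessions[date].append(name)

for date, frames in sorted(sessions.items()):
    print(date, len(frames))  # three sessions: 2, 1 and 1 frames
```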

 

What gets tricky is if you change gain or need to use different darks in each session. This may be the case if you have data taken with two different cameras over multiple years. You then set up a WBPP session that only calibrates and does not star align, then enter the calibrated data for WBPP to integrate.

 

Adam Block recently showed something like this on his YouTube channel with data taken through four different filters (LRGB) to create what he calls a Super Lum. 


  • dswtan and gerdastro like this

#11 gerdastro

gerdastro

    Vostok 1

  • -----
  • topic starter
  • Posts: 198
  • Joined: 26 Jul 2020

Posted 25 May 2024 - 11:32 AM

It is far better to have WBPP rerun from the beginning. You do need frames that were centered on the same or nearly the same celestial coordinates and at the same rotation angle. If not, they may not integrate well. 

 

The main trick in WBPP is to get it to apply the flats to the corresponding nights of imaging. The default is to make a single Master Flat from all the flats, which we do not want. We can add a Grouping keyword on the preprocessing side to fix this. 

 

In my light frames I have a date in the file name. As such, if I enter 2024 as a preprocessing grouping string, it will group them by the next string, which is month and day. We can alternatively put each session in a folder named Session 1, Session 2, etc. and use Session as a keyword. 

 

What gets tricky is if you change gain or need to use different darks in each session. This may be the case if you have data taken with two different cameras over multiple years. You then set up a WBPP session that only calibrates and does not star align, then enter the calibrated data for WBPP to integrate.

 

Adam Block recently showed something like this on his YouTube channel with data taken through four different filters (LRGB) to create what he calls a Super Lum. 

So there is a difference after all between using the master files from session 1 with subs from session 2 vs restarting from scratch.

In that case I will not follow up on looking for a shortcut and will just do it from scratch, as compromising on image quality at such an early stage does not seem worthwhile.

 

What I am wondering though moving forward - what would be a good way to quickly assess if adding additional data is still making a difference?

Let's say there is a session 1, a session 2, ... and each time, I start the whole process from scratch: WBPP (overnight, as it is slow), then several hours of processing. Can I only judge at the end of processing whether adding additional data has still improved the image significantly?

 

Or put differently: how do the more experienced imagers handle this? If you have collected a certain amount of data but feel you need more, would you still go through the whole process just to check how the data you have already obtained looks ... even though you already know you will collect more in the near future and thus have to start right at the beginning again?
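One way to put rough numbers on "is more data still worth it": under the common assumption that the stack is sky-noise limited, SNR grows with the square root of total integration time, so the fractional gain from a new session shrinks as the total grows. A small sketch (the function name is mine, not a PixInsight API):

```python
import math

def snr_gain(hours_existing, hours_added):
    """Relative SNR improvement from adding exposure, assuming the
    stack is sky-noise limited so SNR scales as sqrt(total time)."""
    return math.sqrt((hours_existing + hours_added) / hours_existing) - 1.0

# Adding 2 hours to an existing 2 hours: ~41% better SNR.
print(f"{snr_gain(2, 2):.0%}")   # prints "41%"
# Adding the same 2 hours to 20 hours: only ~5% better.
print(f"{snr_gain(20, 2):.0%}")  # prints "5%"
```

A rule of thumb that follows: to get a clearly visible improvement you roughly need to double the total integration, which is why judging a stack that you know is only half finished is often not worth a full reprocess.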


Edited by gerdastro, 25 May 2024 - 11:34 AM.


#12 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 25 May 2024 - 12:08 PM

So there is a difference after all between using the master files from session 1 with subs from session 2 vs restarting from scratch.

In that case I will not follow up on looking for a shortcut and will just do it from scratch, as compromising on image quality at such an early stage does not seem worthwhile.

 

What I am wondering though moving forward - what would be a good way to quickly assess if adding additional data is still making a difference?

Let's say there is a session 1, a session 2, ... and each time, I start the whole process from scratch: WBPP (overnight, as it is slow), then several hours of processing. Can I only judge at the end of processing whether adding additional data has still improved the image significantly?

 

Or put differently: how do the more experienced imagers handle this? If you have collected a certain amount of data but feel you need more, would you still go through the whole process just to check how the data you have already obtained looks ... even though you already know you will collect more in the near future and thus have to start right at the beginning again?

The only difference is pixel rejection and weighting. Otherwise, if both sets have the same number of frames, the math is the same. After all, integration is fundamentally an averaging process. 
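That averaging behaviour is easy to verify numerically: stacking N equally noisy frames reduces the random noise by about sqrt(N). A synthetic sketch with numpy (not real image data):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_frames = 10.0, 16

# Simulate n_frames noisy subs of a flat "scene" and average them,
# which is what integration does (before rejection and weighting).
frames = rng.normal(0.0, sigma, size=(n_frames, 100_000))
stack = frames.mean(axis=0)

# Averaging 16 frames cuts the noise by about sqrt(16) = 4,
# so the stack's standard deviation comes out near 10 / 4 = 2.5.
print(round(float(stack.std()), 2))
```

Rejection and weighting only change *which* pixels enter that average and with what weight, which is why statistically compatible (normalized) inputs matter so much.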



#13 pedxing

pedxing

    Gemini

  • *****
  • Posts: 3,195
  • Joined: 03 Nov 2009
  • Loc: SE Alaska

Posted 25 May 2024 - 12:33 PM

I run WBPP once to do the calibration of just the new session, then run it again, loading only the calibrated lights from all sessions to do the measurements, sub rejection, registration, local normalization, integration, drizzling (if desired) and autocrop.

When you do this you get a warning about not having calibration frames which is fine because you're using pre-calibrated lights. You're just using wbpp for the rest of the steps beyond calibration on all of the data.
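The bookkeeping for that second run amounts to collecting every calibrated light across all session folders. A hypothetical sketch of such a layout (the folder names and the `_c.xisf` suffix are illustrative, not something WBPP guarantees):

```python
import tempfile
from pathlib import Path

# Build a hypothetical project tree: calibrated lights saved per
# session under a "calibrated" directory. Names are illustrative.
root = Path(tempfile.mkdtemp()) / "M51"
for session, n_subs in [("session_1", 2), ("session_2", 3)]:
    folder = root / "calibrated" / session
    folder.mkdir(parents=True)
    for i in range(n_subs):
        (folder / f"light_{i:03d}_c.xisf").touch()

# For the second WBPP run, gather every calibrated light from every
# session; WBPP then only registers, normalizes and integrates them.
all_calibrated = sorted(root.glob("calibrated/*/*_c.xisf"))
print(len(all_calibrated))  # 5 calibrated subs across both sessions
```

Keeping the calibrated lights on disk per session is what makes this incremental workflow possible: each new night only adds its own calibration cost.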

#14 dswtan

dswtan

    Mercury-Atlas

  • *****
  • Posts: 2,538
  • Joined: 29 Oct 2006
  • Loc: Morgan Hill, CA

Posted 25 May 2024 - 01:28 PM

What I am wondering though moving forward - what would be a good way to quickly assess if adding additional data is still making a difference?

...

Or put differently: how do the more experienced imagers handle this? If you have collected a certain amount of data but feel you need more, would you still go through the whole process just to check how the data you have already obtained looks ... even though you already know you will collect more in the near future and thus have to start right at the beginning again?

More experienced imagers: we use our experience.

 

I've got a feel for what another hour will add with my set-ups. Also, I image a lot, so I can go long. Basically, I image for as long as I have patience for that object, or the trees/twilight get in the way. I am usually getting 1-3hrs per object per night, then over a few weeks, factoring in the moon and occasional California clouds. On my long f/l rig, I have to take into account seeing, so may skip there if that's really bad. 

 

I preview the stacks (over multiple nights) in NINA. That gives me a general idea, but not full processing (and mono only, for me). 

 

Then in WBPP, on my processing days (used to be weekends only, when I was working), I will generally stack selectively (as in, whenever I feel like it, not every night) the whole image series to that date, automatically preserving the master calibrations in WBPP. I am impatient to see what I'm capturing, so I do generally stack 2-3 times before the final over the weeks, but as I build experience, I don't need to stack as often. I know to typically look for 10-20hr per target if possible in my Bortle 4-5 skies, though circumstances may vary that -- I've done over 40 for some targets and less than 4 for others. This depends on my experience, the object type, my interest in the object, and my local conditions. 

 

I'm semi-permanent, so I don't re-take flats unless I change the imaging train (which is every few months). If you have to set up every night, then sure it's best to re-take flats, and then use the keyword/session technique in WBPP as Dave mentioned. 



#15 Navy Chief

Navy Chief

    Mariner 2

  • *****
  • Posts: 218
  • Joined: 15 Dec 2014

Posted 25 May 2024 - 05:54 PM

Then according to that you should take new flats every time you finish a sub. The point is, new flats have nothing to do with night-to-night.
The general concept is that if the optics were not altered, the old flats should work.
If one is worried about new dust motes, then they should look at their calibrated lights and see what is going on. Also, they should check why their system is not a closed one.


If you want to push it to the point of absurdity, then yes.

Back in reality we both know that the odds of a dust mote moving during a session are extremely low. The odds of a dust mote moving while moving the assembled imaging train are high enough to be concerned about and to warrant spending the time to take new flats.

#16 gerdastro

gerdastro

    Vostok 1

  • -----
  • topic starter
  • Posts: 198
  • Joined: 26 Jul 2020

Posted 26 May 2024 - 03:05 AM

I run WBPP once to do the calibration of just the new session, then run it again, loading only the calibrated lights from all sessions to do the measurements, sub rejection, registration, local normalization, integration, drizzling (if desired) and autocrop.

When you do this you get a warning about not having calibration frames which is fine because you're using pre-calibrated lights. You're just using wbpp for the rest of the steps beyond calibration on all of the data.

Very helpful! Have you tested how much time you save this way vs. doing WBPP with all sessions from the beginning?



#17 pedxing

pedxing

    Gemini

  • *****
  • Posts: 3,195
  • Joined: 03 Nov 2009
  • Loc: SE Alaska

Posted 26 May 2024 - 03:00 PM

It's just my habit to calibrate sessions as I go, not so much about saving time.

I live in a rain forest so imaging sessions may be weeks, months or even years apart. Going back and calibrating old sessions from scratch is undesirable and unnecessary.

Also, since it doesn't get truly dark here 4 months out of the year, I often go back and reprocess old data sets when I can't image.

The takeaway here, though, is that you can have WBPP run any subset of its processes (for instance, just registration and integration) without having to go through the whole calibration from scratch.

Once your lights have been calibrated, they are calibrated, no need to ever calibrate them again. If you keep the calibrated lights you can reintegrate later with calibrated lights from new additional sessions without going back to square one.

I'm sure this saves time, but time saving isn't my main objective. Due to conditions where I live, I necessarily have to play the long game.
  • gerdastro likes this

#18 powerslide

powerslide

    Mariner 2

  • -----
  • Posts: 250
  • Joined: 24 Aug 2023
  • Loc: Sydney, Australia

Posted 26 May 2024 - 08:08 PM

I keep my output files on a 2 TB M.2 drive just for WBPP processing. I also use NINA folders per session underneath the target name. So all I need to do is add the appropriate Flats to each night's folder; then I just add the directory and it will match the Lights with the right Flats via grouping.

 

When I process, because I already have the previous nights cached, it will skip calibration, cosmetic correction, debayering, measurements and registration, and go straight to LN for the original subs. It still needs to do those steps for the new subs, but this can save significant time.


  • gerdastro likes this

