Post your Pixinsight Processing Flow!


#1 cfosterstars

    Mercury-Atlas
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 03 May 2018 - 08:56 PM

There is another thread on Cloudy Nights called: Post a picture of your Mount. I suggest we start a thread on processing flows in Pixinsight. My hope is that we can all learn from each other and discuss how we do things differently.

 

Over the past year, I have been transitioning my imaging from a DSLR (a full-spectrum Canon 6D) to a mono camera with a filter wheel, first an ASI1600MM-C with ZWO filters and more recently an ASI1600MM-PRO with Astrodon filters. I struggled with filter issues and imaging artifacts, but those are covered in other threads.
 

At the same time as I was hiking up this learning curve, I migrated my image processing from separate programs (Images Plus 6.0 for calibration and stacking and Photoshop CC for post-processing) to PixInsight for all aspects of image processing. Of the two tasks, the change to PixInsight has been the more difficult. This is mainly due to the very different nomenclature, the program's feel, and frankly the myriad of options for how to do anything. I once saw a statement that said: if there are two different ways to do something, PixInsight will offer all 20! Progress was slow. First, here are the reference materials I have used to develop my flow. I have been using the tutorials on YouTube by Richard Block:

 

https://www.youtube....UOe4R5Hng&t=13s

 

https://www.youtube....h?v=zU5jJgjKuQQ

 

https://www.youtube....h?v=ZLef9GlHLrs

 

I have also been using the book Inside PixInsight by Warren Keller and the tutorials from Light Vortex Astronomy:

 

http://www.lightvort.../tutorials.html

 

These resources have been quite valuable. I have also had invaluable help directly from other CN members (Jon Rista and others) whom I would like to acknowledge. However, in many of the published references the newer CMOS cameras are not the focus; most were written with CCD cameras specifically in mind. I have a working flow for my ASI1600MM cameras that I would like to share. Just to be clear, this is not my own work, but is compiled from various sources.

 

At this point, I am comfortable with image pre-processing to reach master light frames and with the early processing prior to combining into RGB images. This flow is for mono images specifically. I am not really comfortable with the process flow beyond this point, so this is not a complete process. Also, I am sure that other settings and flows will work. I do not claim that this is the best flow; it just works for me at my level of proficiency. This is my pre-processing flow for the ASI1600MM-C and -PRO cameras:

 

1) I generated a dark frame library for three different camera temperatures (-10C, -15C and -20C), with two different gain settings for each temperature: 76 gain, 40 offset, 90 USB for LRGB frames and 200 gain, 40 offset, 90 USB for narrowband filters, with exposure times of 10s, 30s, 60s, 90s, 120s, 150s, 180s, 240s and 300s for my ASI1600MM-COOL. I also generated an analogous dark library for the ASI1600MM-PRO for the same gain settings, temperatures and exposures; however, that camera does not have USB or offset settings. For each set of dark frames, I used PI to integrate the frames into MasterDark frames. These frames are reused.

 

2) For the MasterDark generation I used these settings in PixInsight (differences from default):
     a. Image Integration
              i. Combination: Average
              ii. Normalization: No Normalization
              iii. Weights: Don't Care (All = 1)
     b. Pixel Rejection (1)
              i. Rejection Algorithm: Winsorized Sigma Clipping
              ii. Normalization: No Normalization
     c. Pixel Rejection (2)
              i. High and Low Sigma = 3.0

 

3) All sets of dark frames were integrated into MasterDarkFrames.

 

4) Neither the ASI1600MM-COOL nor the -PRO works well with bias frame calibration. I use DarkFlat frames for calibrating my flat frames and use only MasterFlat and MasterDark frames for calibrating my light frames. For my Canon 6D, I used bias frames to calibrate my flat frames and light frames and did not use DarkFlat frames.

5) For generating flat frames for either of the ZWO ASI1600MM cameras, I used the SGP flats calibration wizard to determine flat frame exposures for the seven filters with my Flatman EM panel. I point the OTA to the zenith and park the scope. I do the flat frame capture in the early evening after dark, but before full astronomical dark. I use an illumination setting of 200 on the panel and a single layer of tee-shirt fabric over the OTA for all seven filters. I use a gain of 76 for the LRGB filters and 200 for the NB filters. This determines the exposure conditions for both the flats and the dark flats. The gain setting by filter can be set in SGP by event so that all flat frames for all filter/gain combinations can be taken sequentially. I collect dark flats at night with the scope OTA capped to eliminate any possibility of light leakage. I then collected a dark flat library for each camera and filter at the three different camera temperatures: -10C, -15C and -20C. For each set of dark flat frames, I used PI to integrate the frames into MasterDarkFlat frames with the same settings as for the MasterDark frames above. The MasterDarkFlat frames are reused just like the MasterDark frames.

 

6) I collect flat frames using the Flatman EM as stated in 5). I collect new flats if there is any change to my optical train or if I have not used the scope for an extended period, such as through bad weather. The scope is semi-permanently set up in my backyard and is covered with a TeleGizmos 360-day cover during bad weather. The process of covering and uncovering the rig can displace the camera slightly or move the dust around, so I take new flat frames. The flats are taken at the same gain and temperature settings as the light frames (76 gain for the LRGB filters and 200 gain for the NB filters) with the exposure times determined by the SGP flat calibration wizard.

 

7) For Flat Frame Calibration – to calibrate Flat frames for sensor bias and dark current
         a. Use MasterDarkFlats ONLY
         b. Only Check Optimize on the MasterDark options

 

8) For Flat frame integration and MasterFlat Frame generation, I used these settings in PixInsight (differences from default):
        a. Image Integration
              i. Combination: Average
              ii. Normalization: Multiplicative
              iii. Weights: Don’t Care (All = 1)
        b. Pixel Rejection (1)
              i. Rejection Algorithm: Winsorized Sigma Clipping
              ii. Normalization: Equalize Fluxes
        c. Pixel Rejection (2)
               i. High and low Sigma = 3.0

 

9) I then collect all the light frames and start image processing. Depending on the target, I may have multiple sets of flats to process for different sets of light frames. The images are collected in folders by date, and the flat frames are tracked by date as well.

 

10) For light frame calibration, I use the MasterDark frames from the appropriate dark frame library for the combination of camera, temperature and gain used for the light frames. I also use the appropriate MasterFlat frame for the combination of camera, temperature, gain and filter. I do not use a MasterBias frame for the ZWO cameras. I output the calibrated light frames to appropriate folders by filter.

 

11) I then do CosmeticCorrection in PixInsight on all the calibrated light frames. I use the MasterDark and Auto Detect methods. I use a single light frame per filter to test the sigma settings using the real-time preview. I adjust the sigma for hot and cold pixel corrections for both the MasterDark and Auto Detect methods by eye. I try to get roughly >10K and 1K pixels for MasterDark and about 300 and 100 pixels for Auto Detect. I check this for each set of light frames by filter. This could probably be improved upon as a method of setting the hot/cold pixel sigma values. I output the cosmetically corrected light frames to appropriate folders by filter.

 

12) After CosmeticCorrection of the light frames, I use Blink to inspect them and discard any obviously bad frames that I did not catch while imaging. I use the option to set the histogram screen stretch per frame even though this takes more time. This is the first pass of grading the light frames.

 

13) I then use the SubframeSelector script in PixInsight to grade and weight my light frames for further culling of less-than-optimal subframes. After running the measurement of the subframes, I look at the graphs for SNRWeight, eccentricity and FWHM and mark any obvious outliers to discard. I then use the table to find the min/max values for FWHM, Eccentricity and SNRWeight. I feed these into the following expression for weighting the light frames:

 

10*(1-(FWHM-Min(FWHM))/(Max(FWHM)-Min(FWHM))) + 10*(1-(Eccentricity-Min(Eccentricity))/(Max(Eccentricity)-Min(Eccentricity))) + 30*(SNRWeight-Min(SNRWeight))/(Max(SNRWeight)-Min(SNRWeight)) + 50

 

to set the weighting for each light subframe based on 10% FWHM, 10% Eccentricity and 30% SNRWeight with a baseline of 50% weight for the worst non-discarded light frame. I then output the weighted and discarded light frames to appropriate folders by filter. I set the weight keyword for the FITS header to SSWEIGHT for later use in the image integration process. I also save the scoring table from the script to the folder with the scored subframes for later reference, since I use the highest weighted subframe as the reference frame for StarAlignment and ImageIntegration in subsequent preprocessing steps.

 

14) The light frames are registered by filter using the StarAlignment process with the highest weighted light frame from the SubframeSelector as the reference frame. I check the distortion correction option at default settings and check the generate drizzle data box. I use basically the default star detection settings since they do not seem to fail. The registered light frames are output to appropriate folders by filter, and the drizzle data files go in the same folders by default.

 

15) LocalNormalization is run on the calibrated, cosmetically corrected, weighted and registered light frames by filter using the default settings except for the scale factor, which is increased to 256. Outlier rejection is checked with hot pixel set to 2. This step is time consuming but seems to work well.

 

16) The light frames are then integrated without drizzle expansion to update the drizzle data. Again, the highest weighted light frame from the SubframeSelector is used as the reference frame. The light frames are loaded along with the LocalNormalization and drizzle data files. For this ImageIntegration, I used these settings in PixInsight (differences from default):
            a. Image Integration
                  i. Combination: Average
                  ii. Normalization: LocalNormalization
                  iii. Weights: FITS Keyword
                  iv. Weight Keyword: SSWEIGHT
                  v. Scale Estimator: (MAD)
                 vi. Generate Drizzle Data: Checked
            b. Pixel Rejection (1)
                  i. Rejection Algorithm: Winsorized Sigma Clipping
                 ii. Normalization: LocalNormalization
            c. Pixel Rejection (2)
                  i. High and low Sigma = 3.0
            d. Large Scale Pixel Rejection
                  i. Reject low large-scale structures: checked
                  ii. Reject High large-scale structures: checked

 

 

17) When the ImageIntegration routine is completed, the integrated MasterLight is not drizzled to a different pixel scale. This integrated image stack can be saved but is not used to generate the final drizzled MasterLight frame. The DrizzleIntegration process is used to generate the final MasterLight frames: the drizzle and LocalNormalization files are loaded into DrizzleIntegration and the default settings are used with 2X drizzle. This process generates the final MasterLight frames by filter.

 

18) At this point the linear, calibrated, cosmetically corrected, registered, integrated, normalized and drizzled MasterLight frames have been created by filter, and this is the end of the image preprocessing flow.

 

19) Each of the MasterLight frames must now be registered to a single reference MasterLight frame. I register the MasterLight frames to each other using either the LUM or the Ha as the reference image for LRGB or NB imaging, respectively. I use the default settings for StarAlignment, but I do check the distortion correction option. This produces a set of registered MasterLight frames.

 

20) To clean up the edge artifacts caused by the registration process and dithering, I then do a DynamicCrop using the MasterLight frame with the most restricted FOV as the template, making a process icon for DynamicCrop. I then apply DynamicCrop via the process icon to all MasterLight frames so that they have the same size and FOV. I try to err on the side of preserving FOV rather than cropping excessively. I save the process icon so that the exact same crop can be applied to all the MasterLight frames with identical results.

 

 

21) To flatten the background and remove gradients in the MasterLight frames, I repeatedly run DynamicBackgroundExtraction on the registered and cropped MasterLight frames until the images are clean. This has been the only way I have been able to remove the residual artifacts from the leakage of light around my filters, which leads to false color in the image corners. Even with the Astrodon filters, there is residual amp glow from the camera, especially with the SII and Ha filters. I start setting up DBE on the MasterLight frame that has the greatest amount of nebulosity or DSO, typically the Luminance or Ha frame. For sample generation, I start with a tolerance of 1.0 and a sample radius of 20 with 20 samples per row, and see how many samples are generated and how many are active. I may need to raise the tolerance to as high as 3.0 or 4.0 to get them all to generate, but typically a tolerance of 2.0 generates all the samples. I then use the inspection tab to verify that the sample points are free of stars and move them as needed to avoid stars. I also delete any sample points that were generated in areas of nebulosity. After all the pre-generated sample points are inspected and those in nebulosity/DSO areas are deleted, I go back and manually add sample points in areas of high gradient or areas that were missed in the automatic sample generation. I set the target image correction to Subtraction. I then create a process icon with these baseline DBE settings and close the DBE process.

For each MasterLight frame, I first start the DBE process icon and see how many of the sample points are rejected by DBE for sample weighting and tolerance. If all points are accepted, I lower the tolerance until sample points begin to be rejected and find the minimum value at which all points are accepted. On the other hand, if many sample points are rejected due to high gradient right after starting the DBE process, I increase the tolerance until all sample points are accepted. This usually happens in the areas of narrowband MasterLight frames with residual amp glow and may require the tolerance to be increased to as high as 4-5. In these areas, I add more sample points because of the high gradient. I then run the DBE process icon on the MasterLight frame for the first pass of background extraction. For each MasterLight frame, I run the DBE icon two, three or more times until the background is flat, with each successive pass of DBE using lower and lower tolerance values. The better this step is done, the fewer artifacts will need to be removed later in processing. Since all the frames have been registered and identically cropped, this DBE process icon can also be used later on the RGB or LRGB color composites to further deal with gradients. The residual amp glow is the main issue I have seen later in processing, so getting rid of it early is the best solution. After this processing is done, the output frames are saved to a new folder.

 

22) After completing background and gradient removal with DBE, I have used the LinearFit process on and off. I am not entirely sold on it, but for now it is in my processing flow. The idea is that when we have calibrated and stacked images that are meant to be color-combined, we have to consider how well their histograms really match up. Due to the varying conditions of the night sky throughout a night of imaging (or across several nights), as well as the filter being used, the average brightness of the background and signal may not match well between the images we need to color-combine. This is above and beyond the DynamicBackgroundExtraction step, although that step should have definitely helped match the histogram peaks through subtraction of background gradients in each monochrome image. The later color calibration will also correct for this, but it is generally good practice to match the average background and signal brightness between the images you are going to color-combine. It is easy to do: choose the brightest of the MasterLight frames by looking at their histograms with HistogramTransformation. The brightest is usually the Luminance or the Ha MasterLight frame, but not always, so it is good to check each image. This frame is chosen as the reference in the dialog, and the process is then applied to the other MasterLight frames. The registered, cropped, DBE-corrected and LinearFit-adjusted MasterLight frames complete the processing prior to color combining for either narrowband or RGB images. The Luminance is usually processed separately from this point onward.

 

 

I hope to write down my post-processing flow at some point once I figure out what works best for me. I think that is where most of the artistry comes in. However, there is no substitute for good data, and the more data you collect the better - MORE PHOTONS.

 

Comments and questions welcome.


  • jrs, okiedrifter, Jim Waters and 21 others like this

#2 cyclops12321

    Vostok 1
  • Posts: 199
  • Joined: 14 Dec 2016

Posted 03 May 2018 - 09:27 PM

This needs to be a sticky :)

 

Sunil


  • Seanem44 likes this

#3 rockstarbill

    Fly Me to the Moon
  • Posts: 6320
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 03 May 2018 - 10:13 PM

Here is mine:

 

https://1drv.ms/u/s!...ZpFD3wLG5VO583A

 

Note that this is for CCD Cameras, and not CMOS.


  • gnagy001 likes this

#4 iwannabswiss

    Viking 1
  • Posts: 812
  • Joined: 14 Feb 2014
  • Loc: Charleston SC

Posted 03 May 2018 - 11:18 PM

There's this thread with people's PixInsight Workflow.


Edited by iwannabswiss, 03 May 2018 - 11:19 PM.


#5 KBALLZZ

    Vostok 1
  • Posts: 136
  • Joined: 28 Dec 2015

Posted 15 May 2018 - 02:05 AM

I write my PI workflow in detail on every image, check it out! :D

Check the descriptions on the photos here: https://www.flickr.c...hotos/ak_astro/


  • jrs, NMCN, Ballyhoo and 5 others like this

#6 cfosterstars

    Mercury-Atlas
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 21 May 2018 - 08:11 PM

I am working on a cleaned-up version of my process flow. I noticed a bunch of typos and things that were not clear. I am also working on the post-processing flow. I hope to post it in the next few weeks. I have been working on deconvolution for LUM and NB data and that has been working great. I will try to include that as well.


  • GaPrunella likes this

#7 GaPrunella

    Lift Off
  • Posts: 9
  • Joined: 03 Jul 2017
  • Loc: Augusta, Georgia USA

Posted 01 June 2018 - 09:07 AM

I'm curious how many dark frames you take at each exposure. I'm asking because I see your setting is Winsorized Sigma Clipping. I'm typically at Linear Fit Clipping with about 25 frames.



#8 cfosterstars

    Mercury-Atlas
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 01 June 2018 - 11:13 AM

I do between 10 and 50 depending on the length of exposure. For flats and dark flats, I do 40 frames. I could probably use Linear Fit Clipping just as easily, but I have been using Winsorized Sigma Clipping; it was recommended in several YouTube videos and the Light Vortex tutorials. I am not really sure how much of a difference it makes one way or the other. My guess is that with a decent camera and many exposures for calibration, the degree of pixel rejection is low no matter which you use. I think it would matter with older cameras with lots of hot pixels and such.

 

For cosmetic correction, I see so few hot pixels when I look at previews that it seems CC is not really doing that much. I now try to correct only a low number of pixels by count, since the preview method is not showing much.

 

At the end of the day, I think getting more exposures for light frames is always the #1 way to improve your images. 



#9 cfosterstars

    Mercury-Atlas
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 03 July 2018 - 11:23 PM

As promised, I have updated and expanded my PixInsight process flow. It now covers the preprocessing and the linear-state processing up to creating non-linear RGB and LUM images prior to LRGB combination, as well as the use of NB data. I will update this further when I add my non-linear-state processing flow and NB color manipulation process. I also corrected a fair number of typos and made some updates to the preprocessing flow that I put at the start of the thread, so the beginning repeats the first part from above:

 

Over the past year, I have been transitioning my imaging from a DSLR (a full-spectrum Canon 6D) to a mono camera with a filter wheel, first an ASI1600MM-C with ZWO filters and more recently an ASI1600MM-PRO with Astrodon filters. I struggled with issues due to filter reflections and imaging artifacts, but that subject was covered in other threads:

https://www.cloudyni...ing-wrong-here/

https://www.cloudyni...ring-artifacts/

and here are some others of value from other posters with the same issues:

https://www.cloudyni...gs-in-my-flats/

https://www.cloudyni...se-reflections/

At the same time as I was hiking up this learning curve, I migrated my image processing from separate programs (Images Plus 6.0 for calibration and stacking and Photoshop CC for post-processing) to PixInsight for all aspects of image processing. Of the two tasks, the change to PixInsight has been the more difficult, mainly due to the very different nomenclature, the program's feel, and frankly the myriad of options for how to do anything. I once saw a statement that said: if there are two different ways to do something, PixInsight will offer all 20! Progress was slow. I have been using the tutorials on YouTube by Richard Block:

 

https://www.youtube....UOe4R5Hng&t=13s

 

https://www.youtube....h?v=zU5jJgjKuQQ

 

https://www.youtube....h?v=ZLef9GlHLrs

 

I have also been using the book Inside PixInsight by Warren Keller and the tutorials from Light Vortex Astronomy:

 

http://www.lightvort.../tutorials.html

 

These resources have been quite valuable. However, the newer CMOS cameras are not the focus of these tutorials and references, as they were written with CCD cameras specifically in mind. I have also received extremely valuable insights and processing tips from members of Cloudy Nights. At this point, I am comfortable with image pre-processing to reach master light frames and with preparing images through to full LRGB or NB color images from my monochrome MasterLight frames. This is my process flow:

 

1) I generated a dark frame library for three different camera temperatures (-10C, -15C and -20C), with two different gain settings for each temperature: 76 gain, 40 offset, 90 USB for LRGB frames and 200 gain, 40 offset, 90 USB for narrowband filters, with exposure times of 10s, 30s, 60s, 90s, 120s, 150s, 180s, 240s and 300s for my ASI1600MM-COOL. I also generated an analogous dark library for the ASI1600MM-PRO for the same gain settings, temperatures and exposures; however, the -PRO camera does not have USB or offset settings. For each set of dark frames, I used PI to integrate the frames into MasterDark frames. These frames are reused.
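
If it helps to see the scope of the library, here is a rough Python sketch of the combinations it covers (the folder naming below is just an example for illustration, not an actual SGP or PI convention):

from itertools import product

temperatures = [-10, -15, -20]                         # sensor setpoints in C
gains = {76: "LRGB", 200: "NB"}                        # gain -> filter group
exposures = [10, 30, 60, 90, 120, 150, 180, 240, 300]  # seconds

# 3 temperatures x 2 gains x 9 exposures = 54 MasterDarks per camera
for temp, gain, exp in product(temperatures, gains, exposures):
    print(f"Darks/T{temp}C_G{gain}_{gains[gain]}/{exp}s")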

 

2) For the MasterDark generation I used these settings in PixInsight (differences from default); a simplified sketch of what the rejection step does follows the list:
     a. Image Integration
              i. Combination: Average
              ii. Normalization: No Normalization
              iii. Weights: Don’t Care (All = 1)
    b. Pixel Rejection (1)
              i. Rejection Algorithm: Winsorized Sigma Clipping
              ii. Normalization: No Normalization
    c. Pixel Rejection (2)
              i. High and low Sigma = 3.0
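
For anyone curious what the rejection is actually doing, here is a simplified numpy sketch of a sigma-clipped average. It is a plain iterative sigma clip, not PixInsight's exact Winsorized implementation (which also winsorizes outliers before estimating the scale), but the idea is the same:

import numpy as np

def sigma_clipped_average(stack, k_low=3.0, k_high=3.0, iterations=3):
    # Average a (frames, H, W) stack, rejecting pixels more than k sigma
    # from the per-pixel median. Simplified stand-in for PI's Winsorized
    # sigma clipping.
    data = np.asarray(stack, dtype=np.float64)
    mask = np.ones(data.shape, dtype=bool)          # True = pixel kept
    for _ in range(iterations):
        med = np.nanmedian(np.where(mask, data, np.nan), axis=0)
        sig = np.nanstd(np.where(mask, data, np.nan), axis=0)
        mask = (data >= med - k_low * sig) & (data <= med + k_high * sig)
    return np.nanmean(np.where(mask, data, np.nan), axis=0)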

 

3) All sets of dark frames were integrated into MasterDarkFrames.

 

4) I have found that neither the ASI1600MM-COOL nor the -PRO works well with bias frame calibration. I use DarkFlat frames for calibrating my flat frames and use only MasterFlat and MasterDark frames for calibrating my light frames. For my Canon 6D, I used bias frames to calibrate my flat frames and light frames and did not use DarkFlat frames.
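
The arithmetic behind this and the next few steps is standard calibration math; here is a numpy sketch (pixel rejection omitted for brevity) showing why the MasterDarkFlat stands in for a bias frame, assuming the darks match the lights in gain, temperature and exposure so no scaling is needed:

import numpy as np

def make_master_flat(flats, master_dark_flat):
    # Each flat is calibrated with the matching MasterDarkFlat, which already
    # contains the bias signal, so no separate MasterBias is needed.
    calibrated = [f - master_dark_flat for f in flats]
    master = np.mean(calibrated, axis=0)        # rejection omitted here
    return master / np.mean(master)             # normalize to unit mean

def calibrate_light(light, master_dark, master_flat):
    # MasterDark (same gain/temp/exposure, no optimization/scaling) removes
    # bias + dark current + amp glow; the normalized MasterFlat removes
    # vignetting and dust shadows.
    return (light - master_dark) / master_flat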

 

5) For generating flat frames for either of the ZWO ASI1600MM cameras, I used the SGP flats calibration wizard to determine flat frame exposures for the seven filters with my Flatman EM panel. I have also generated good flat frames with both a homemade lightbox and with tee-shirt sky flats, with equal success. For the Flatman, I point the OTA to the zenith and park the scope. I do the flat frame capture in the early evening after dark, but before full astronomical dark, so that there is no light leakage to affect my flats. I use an illumination setting of 200 on the panel and a single layer of tee-shirt fabric over the OTA for all seven filters. I use a gain of 76 for the LRGB filters and 200 for the NB filters. This determines the exposure conditions for both the flats and the dark flats. The gain setting by filter can be set in SGP by event so that all flat frames for all filter/gain combinations can be taken sequentially. I also collect dark flats after full dark with the scope OTA capped to eliminate any possibility of light leakage. I then collected a dark flat library for each camera and filter at the three different camera temperatures: -10C, -15C and -20C. For each set of dark flat frames, I used PI to integrate the frames into MasterDarkFlat frames with the same settings as for the MasterDark frames above. The MasterDarkFlat frames are reused just like the MasterDark frames.

 

6) I collect flat frames using the Flatman EM as stated in 5). I collect new flats if there is any change to my optical train or if I have not used the scope for an extended period, such as through bad weather. The scope is semi-permanently set up in my backyard and is covered with a TeleGizmos 360-day cover during bad weather. The process of covering and uncovering the rig can displace the camera slightly or move the dust around, so I take new flat frames. The flats are taken at the same gain and temperature settings as the light frames (76 gain for the LRGB filters and 200 gain for the NB filters) with the exposure times determined by the SGP flat calibration wizard.

 

7) For Flat Frame Calibration – to calibrate Flat frames for sensor bias and dark current

      a. Use MasterDarkFlats ONLY for calibration at the same gain, offset, USB and temperature as the flat frames

      b. I uncheck Optimize in the MasterDark options. Although less important for short exposures, this eliminates the effects of amp glow.

 

8) For Flat frame integration and MasterFlat Frame generation, I used these settings in PixInsight (differences from default):
    a. Image Integration
             i. Combination: Average
             ii. Normalization: Multiplicative
             iii. Weights: Don’t Care (All = 1)
    b. Pixel Rejection (1)
             i. Rejection Algorithm: Winsorized Sigma Clipping
             ii. Normalization: Equalize Fluxes
    c. Pixel Rejection (2)
             i. High and low Sigma = 3.0

 

9) I then collect all the light frames and start image processing. Depending on the target, I may have multiple sets of flats to process for different sets of light frames. The images are collected in folders by date, and the flat frames are tracked by date as well. I try to do the calibration and cosmetic correction of the raw light frames in batches as I collect them, so that the flat frames corresponding to the light frames don't get mixed up or unclear as time passes.

 

10) For light frame calibration, I use the MasterDark frames from the appropriate dark frame library for the combination of camera, temperature and gain used for the light frames. I also use the appropriate MasterFlat frame for the combination of camera, temperature, gain and filter. I do not use a MasterBias frame for the ZWO cameras. I output the calibrated light frames to appropriate folders by filter. Since many of my targets have taken 15 or more sessions, I have gotten in the habit of calibrating my light frames as I go, so as not to lose track of which set of flat frames goes with which set of light frames. This also lets me figure out how many bad frames I need to replace as I continue to collect more data. I recently stopped using the optimize option for my dark frame calibration. Since my dark frames are taken under exactly the same conditions as my light frames, the optimize option will try to scale the darks unnecessarily, causing incorrect calibration. This is most noticeable in the correction for amp glow; I have found that amp glow is calibrated out much better if the optimize option is unchecked. This hint came from Jon Rista on Cloudy Nights.

 

11) I then do CosmeticCorrection in PixInsight on all the calibrated light frames. I use the MasterDark and Auto Detect methods. I use a single light frame per filter to test the sigma settings using the real-time preview. I adjust the sigma for hot and cold pixel corrections for both the MasterDark and Auto Detect methods by eye. Since my ASI1600MM-C and -PRO cameras have really good sensors, I just try to get roughly >10K and 1K pixels for MasterDark and about 300 and 100 pixels for Auto Detect. I check this for each set of light frames by filter. This could probably be improved upon as a method of setting the hot/cold pixel sigma values, but it works. I output the cosmetically corrected light frames to appropriate folders by filter.

 

12) After CosmeticCorrection of the light frames, I use Blink to inspect them and discard any further obviously bad frames that I did not catch while imaging. I use the option to set the histogram screen stretch per frame even though this takes more time. This is the first pass of grading the light frames.

 

13) I then use the SubframeSelector script in PixInsight to grade and weight my light frames for further culling of less-than-optimal subframes. When loading the light frames into the script, make sure that you choose the CosmeticCorrection frames and not the calibrated frames; I have wasted a lot of time by making this error. Hit Measure and wait for the script to complete. After measuring the subframes, I look at the graphs for SNRWeight, eccentricity and FWHM and mark any obvious outliers to discard, particularly for eccentricity. I then use the table tab to find the min/max values for FWHM, Eccentricity and SNRWeight. I feed these into the following expression for weighting the light frames:

 

10*(1-(FWHM-Min(FWHM))/(Max(FWHM)-Min(FWHM))) + 10*(1-(Eccentricity-Min(Eccentricity))/(Max(Eccentricity)-Min(Eccentricity))) + 30*(SNRWeight-Min(SNRWeight))/(Max(SNRWeight)-Min(SNRWeight)) + 50

 

to set the weighting for each light subframe based on 10% FWHM, 10% Eccentricity and 30% SNRWeight with a baseline of 50% weight for the worst non-discarded light frame. The final constant of 50 is adjusted up or down so that the weight for the best image comes out to 100.0. This rates images from roughly 50 to 100 and does not completely discount even the poorer images. I set the weight keyword for the FITS header to SSWEIGHT for later use in the image integration process to weight the contribution of each subframe to the final integrated MasterLight frame. I then output the weighted and discarded light frames to appropriate folders by filter. I also save the scoring table from the script to the folder with the scored subframes for later reference, since I use the highest weighted subframe as the reference frame for StarAlignment and ImageIntegration in subsequent preprocessing steps.
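
Written out as a small Python function (the numbers in the example call are made up), the expression and the baseline adjustment work like this:

def subframe_weight(fwhm, ecc, snrw, fwhm_rng, ecc_rng, snrw_rng, base=50.0):
    # 10% FWHM + 10% eccentricity + 30% SNRWeight on top of a 50% baseline.
    # *_rng are the (min, max) values read off the SubframeSelector table.
    scale = lambda x, lo, hi: (x - lo) / (hi - lo)
    return (10.0 * (1.0 - scale(fwhm, *fwhm_rng))
            + 10.0 * (1.0 - scale(ecc, *ecc_rng))
            + 30.0 * scale(snrw, *snrw_rng)
            + base)

# A frame that is best on all three metrics scores 10 + 10 + 30 + 50 = 100;
# the worst kept frame scores about 50. Example values below are made up.
print(subframe_weight(2.1, 0.42, 9.8, (2.1, 3.4), (0.42, 0.61), (5.2, 9.8)))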

 

14) The light frames are registered by filter using the StarAlignment process with the highest weighted light frame from the SubframeSelector as the reference frame. I check the distortion correction option at default settings and check the generate drizzle data box. I have found that I can sometimes suppress some color artifacts in my stars with the distortion model active, but you can disable it at this point if you want, since I also use it later on for registering the MasterLight frames. I use basically the default star detection settings since they do not seem to fail. The registered light frames are output to appropriate folders by filter, and the drizzle data files go in the same folders by default.

 

15) LocalNormalization is next run on the calibrated, cosmetically corrected, weighted and registered light frames by filter using the default settings except for the scale factor, which is increased to 256. Outlier rejection is checked with hot pixel set to 2. This step is time consuming but seems to work well. The LocalNormalization data files are output to the same folders as the registered light frames and the drizzle data files.

 

16) The light frames are then integrated without drizzle expansion to update the drizzle data using the ImageIntegration process. Again, the highest weighted light frame from the SubframeSelector is selected as the reference frame. The light frames are loaded along with the LocalNormalization and drizzle data files. For this ImageIntegration, I used these settings in PixInsight (differences from default):
    a. Image Integration
           i. Combination: Average
           ii. Normalization: LocalNormalization
           iii. Weights: FITS Keyword
           iv. Weight Keyword: SSWEIGHT
           v. Scale Estimator: (MAD)
           vi. Generate Drizzle Data: Checked
    b. Pixel Rejection (1)
          i. Rejection Algorithm: Winsorized Sigma Clipping
          ii. Normalization: LocalNormalization
    c. Pixel Rejection (2)
          i. High and low Sigma = 3.0
    d. Large Scale Pixel Rejection
          i. Reject low large-scale structures: checked
          ii. Reject High large-scale structures: checked

 

17) When the ImageIntegration routine is completed, the integrated MasterLight that is generated is not drizzled to a different pixel scale. Check it: if this image has issues, it is not worth running DrizzleIntegration, which takes considerably longer. This integrated image stack can be saved but is not used to generate the final drizzled MasterLight frame. The DrizzleIntegration process is used to generate the final MasterLight frames: the drizzle and LocalNormalization files are loaded into DrizzleIntegration and the default settings are used with 2X drizzle. This process generates the final MasterLight frames by filter.

 

18) The above process steps are repeated for subframes from each filter used for the image. When complete, the calibrated, cosmetically-corrected, registered, integrated, normalized and drizzled MasterLight Frames have been created in linear form for each filter. This is the end of the image preprocessing flow.

 

19) After preprocessing and generating a set of MasterLight frames for each filter, these MasterLight frames must now be registered to a single reference MasterLight frame. I register the MasterLight frames to each other using either the LUM or the Ha as the reference image for LRGB or NB imaging, respectively. I use the default settings for StarAlignment, but I do check the distortion correction option. This produces a set of registered MasterLight frames by filter. I found that by enabling distortion correction, you can reduce the amount of what is called "lateral chromatic aberration" that can turn up in some images due to limits in PixInsight's StarAlignment process. This is the effect where, when combining channels (with either RGB or NB data), stars show uneven colors across their disks even after registration, particularly towards the edges of the image: blue, green and red can appear brighter around certain edges due to slight, normally uncorrected shifts in registration between the channels. This effect can be mitigated by checking "Distortion correction" and using the "2-D Surface Splines" registration model.

 

20) To clean up the edge artifacts caused by the registration process and dithering, I then do a DynamicCrop using the MasterLight frame with the most restricted FOV as the template, making a process icon for DynamicCrop. I then apply DynamicCrop via the process icon to all MasterLight frames so that they have the same size and FOV. I try to err on the side of preserving FOV rather than cropping excessively. I save the process icon so that the exact same crop can be applied to all the MasterLight frames with identical results. The cropping process is necessary for the next step in the flow to be successful.

 

21) To flatten the background and remove gradients in the MasterLight frames, I repeatedly run DynamicBackgroundExtraction on the registered and cropped MasterLight frames until the images are clean and free of gradients or amp glow. This was the only way I have been able to remove the residual artifacts from the leakage of light around my ZWO filters, which leads to false color in the image corners. Even with the Astrodon filters, which are free of that issue, there may still be residual amp glow from the camera, especially with the SII and Ha filters. This issue is improved by not using the dark frame optimization option in ImageCalibration, but even that is not perfect and there will be residual amp glow. This is perhaps the most important step in the post-processing work flow. I start setting up DBE on the MasterLight frame that has the greatest amount of nebulosity or DSO, typically the Luminance or Ha frame. For sample generation, I start with a tolerance of 1.0 and a sample radius of 20 with 20 samples per row, and see how many samples are generated and how many are active. I may need to raise the tolerance to as high as 3.0 or 4.0 to get them all to generate, but typically a tolerance of 2.0 should generate all the samples across the image with these settings. I then use the inspection tab to verify that the sample points are free of stars and move them as needed to avoid stars. I also delete any sample points that the algorithm generated in areas of nebulosity or the DSO. After all the pre-generated sample points are inspected and those in nebulosity/DSO areas are deleted, I go back and manually add sample points in areas of high gradient, such as the amp glow areas, or in background sky that was missed in the automatic sample generation. For Ha, it is sometimes hard to differentiate between the amp glow and real nebulosity, which makes the dark frame calibration without scaling very helpful. I set the target image correction to Subtraction. I then create a process icon with these baseline DBE settings and close the DBE process. For each MasterLight frame, I start from the DBE process icon by making the target image active (clicking on it) and then double-clicking the DBE process icon. I do not just drag and drop the process icon onto the target MasterLight frame, since the DBE settings for each image should be customized, with the process icon settings only the starting point. I first see how many of the sample points are rejected by DBE for sample weighting and tolerance. If all points are accepted, I lower the tolerance setting until sample points begin to be rejected, and find the minimum value at which all points are accepted. On the other hand, if many sample points are rejected due to high gradient right after starting the DBE process, I increase the tolerance until all sample points are accepted. This usually happens in the areas of narrowband MasterLight frames with residual amp glow; these areas may require the tolerance to be increased to as high as 4-5, which I have found acceptable for the first pass of DBE. In these high-gradient areas, I add many more sample points or the areas will not be flattened. At this point, I run the DBE process icon on the MasterLight frame for the first pass of background extraction.
For each MasterLight frame, I repeat the above tuning of the DBE settings from the DBE icon two, three or more times until the background is flat. With each successive pass of DBE you should see lower and lower tolerance values working with all sample points active. The better this step is done, the fewer artifacts will need to be removed at later steps in processing, which tend to be less effective at addressing these issues. However, since all the MasterLight frames have been registered and identically cropped, the baseline DBE process icon can also be used later on the RGB or LRGB color composites to further deal with gradients if necessary. The residual amp glow is the main issue I have seen later in processing, so getting rid of it early is the best solution; otherwise you may be forced to further crop your final image if you do not want the amp glow in it. After this processing is done, the now cropped and flattened MasterLight frames are saved to a new folder.

 

22) After completing background and gradient removal with DBE, I have used the LinearFit process on and off. I was not entirely sold on it, and I have changed my processing flow to the method in step 23, but I am including it here for those who prefer it. The idea is that when we have calibrated and stacked images that are meant to be color-combined, we have to consider how well their histograms really match up. Due to the varying conditions of the night sky throughout a night of imaging (or across several nights), as well as the filter being used, the average brightness of the background and signal may not match well between the images we need to color-combine. This is above and beyond the DynamicBackgroundExtraction step, although that step should have definitely helped match the histogram peaks through subtraction of background gradients in each monochrome image. The later color calibration will also correct for this, but it is generally good practice to match the average background and signal brightness between the images you are going to color-combine. It is easy to do: choose the brightest of the MasterLight frames by looking at their histograms with HistogramTransformation. The brightest is usually the Luminance or the Ha MasterLight frame, but not always, so it is good to check each image. This frame is chosen as the reference in the dialog, and the process is then applied to the other MasterLight frames.

 

23) Another tip from Jon Rista (much of this step is from his explanation): there is an alternative method for normalizing the background levels between the R, G and B channels or the Ha, OIII and SII data. Instead of doing a linear fit, which will often bloat stars (particularly in one channel), you can also try a linear alignment with PixelMath in PI: K: $T + (median (<refImage>) - median($T)). I call this median-adjustment normalization. Pick a reference image, say the LUM, green or Ha channel, and apply the above PixelMath to the blue and red channels. When you combine them later, you should find the color is better right off the bat compared to not balancing the channels; without this balancing, you may find your color images have a very strong color cast, such as reddish or blueish. From there, you can further calibrate the color as you need, using BN/CC or PCC (photometric calibration). You should like how the color looks just with the channel alignment, though, maybe with a bit of SCNR or just a BN to remove any remnant color cast. Which image you choose as the reference depends on the distribution of the signals. You can clip at either end, and depending on the data, you may clip at both with a fit or alignment regardless; it is a matter of deciding what you need to preserve most. If you want star color, then finding the darkest frame and "pushing down" might be best. On the other hand, if you want to preserve faint details, the opposite may be better: find the brightest frame and lift the rest up. Or find the middle frame and go for the happy medium.
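
In plain numpy terms, that PixelMath line is just an additive offset that matches background medians. A small sketch (the variable names are hypothetical):

import numpy as np

def median_align(target, reference):
    # numpy equivalent of  $T + (median(ref) - median($T)):
    # shift the target so its median matches the reference median.
    return target + (np.median(reference) - np.median(target))

# e.g. align the red and blue masters to the green master before combining:
# red_aligned  = median_align(red_master,  green_master)
# blue_aligned = median_align(blue_master, green_master)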

 

24) The registered, cropped, DBE-corrected and LinearFit/median-adjusted MasterLight frames have now completed processing prior to color combining for either narrowband or RGB images. For LRGB or NbLRGB images, the RGB and Nb data supplies all the color (chrominance) information for the final image. It also contains a significant amount of luminance (lightness) data that can contribute to the image. From this point on, the RGB data processing is focused on color balance, white balance and color saturation enhancements, which will eventually "paint" the final processed luminance data. The luminance (or lightness) processing will focus on enhancing the sharpness, contrast and detail of the image. The (Nb)RGB and (Nb)LUM are processed separately from this point onward until combined to form the LRGB image later.

 

25) For RGB images, I use the LRGBCombination process to create the composite color image from the individual monochrome MasterLight frames for the RED, GREEN and BLUE filters. I previously believed that this step was really basic and foolproof; this could not be farther from the truth. First, I open the RED, GREEN and BLUE MasterLight frames from the above flow. I then assign them appropriately to the channels in the LRGBCombination process window with the L channel unchecked, default lightness, typically default saturation and equal channel weights. I do not do chrominance noise reduction at this point. When complete, I do an STF stretch of the output image and look at the color balance. It is easy to get too much total saturation and a washed-out image. If the image is not showing basically the right color balance, it will take trial and error to find the best lightness and saturation settings. I usually decrease the saturation slider (which actually increases the color saturation) and perhaps, but not always, increase the lightness slider (which actually reduces the luminance component of the image). I start with lightness = 0.5 (default) and saturation at 0.35 (increased). I will try several settings to see which gives the best result. Once I am satisfied, I rename the identifier and save the image, still in the linear state, as the starting point for further processing. For RGB images, an autostretch of the LRGBCombination output at this point should show reasonably good color, due to the LinearFit/median-adjustment of the images in the previous step and the optimization of the lightness and saturation settings. If the color is not reasonably close to what you expect or want, continue to optimize at this step. It is much better to get close to the right color and saturation balance at this early stage than to try to recover the color balance later in the flow, which may be next to impossible.

 

26) For narrowband images, I use either the LRGBCombination or PixelMath processes to create the composite color image from the individual monochrome MasterLight frames for the Ha, OIII and SII filters. I open the Ha, OIII and SII MasterLight frames from the above flow. I then assign them appropriately to the channels in the LRGBCombination process window with the L channel unchecked, default lightness, default saturation and equal channel weights. As with RGB images, I do an STF stretch of the output image and look at the color balance. It is easy to get too much saturation and a washed-out image. If the image is not showing basically the right lightness-to-color balance, it will take trial and error to find the best lightness and saturation settings. These false color images will look very off in color (palettes that map Ha to green, like SHO, look very green, and palettes that map Ha to red look very red), so concentrate on the lightness-to-color balance instead. I usually decrease the saturation slider (which actually increases the color saturation) and perhaps, but not always, increase the lightness slider (which actually reduces the luminance component of the image). I start with lightness = 0.5 (default) and saturation at 0.35 (increased). I will try several settings to see which gives the best result. These adjustments are usually smaller than those used for RGB, and often lightness = 0.5 and saturation = 0.5 are fine. However, for very strong Ha emission targets, the total lightness may be too strong and less lightness/more saturation may look better. For the Hubble palette, the channel assignments are SII for R, Ha for G and OIII for B; once I am satisfied, I apply the process. For more complex channel assignments, I typically use PixelMath to enter the combination expression for each channel. For example, for a false color image, I would assign R = (0.5*SII) + (0.5*Ha), G = (0.4*Ha) + (0.6*OIII) and B = (0.15*Ha) + (0.85*OIII) and apply the process. For PixelMath, the output frame type must be specified as RGB Color from the Color space drop-down list and the create new image box must be checked. I then rename the identifier and save the image, still in the linear state, as the starting point for further processing. For narrowband images, an autostretch with the channels linked of the LRGBCombination or PixelMath output will likely show a very unnatural color; with the channels unlinked, the color should be better but still off. The bulk of the initial non-linear-state processing for narrowband images is focused on fixing the false color into a more artistically pleasing palette. The balance between the luminance and chrominance is just as important for narrowband false color images as for RGB, since otherwise the balance of tones will be too far off to recover later in the processing.
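
For reference, the example mix above is just weighted sums of the three masters; here it is as plain array math (a numpy sketch with the same coefficients you would type into PixelMath's R/G/B expressions):

import numpy as np

def blended_palette(sii, ha, oiii):
    # The example false-colour mix from step 26.
    r = 0.50 * sii + 0.50 * ha
    g = 0.40 * ha + 0.60 * oiii
    b = 0.15 * ha + 0.85 * oiii
    return np.stack([r, g, b], axis=-1)   # H x W x 3 RGB image

def hubble_palette(sii, ha, oiii):
    # Classic SHO mapping: SII -> R, Ha -> G, OIII -> B.
    return np.stack([sii, ha, oiii], axis=-1)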

 

27) Depending on the image, I may choose at this point to apply additional runs of DBE, using the process icon as described in step 21, to the RGB or NB composite color images if there is residual gradient or amp glow still present. Ideally, with the correct MasterDark calibration and the individual MasterLight frame DBE processing, there should be minimal to no amp glow at this point.

 

28) For RGB images, after completing DBE, I now work on color corrections. First, I do BackgroundNeutralization. I find as large a region of blank background sky, completely free of stars, as I can. It does not need to be huge, but I like a reasonably sized sample region. I create a preview there and rename it "background"; this preview will be used a couple of times. I had been using the default settings for BackgroundNeutralization because I had not been able to get PixInsight to bring up the readout mode to check pixel values. I now have that fixed, so I set the background upper limit to something close to the maximum value seen in the readout mode, and then execute. For narrowband composite images, I do a similar BackgroundNeutralization but no further color correction, since the colors will be set later in the processing.

 

29) The combination of LinearFit/median-adjustment, optimized lightness and saturation settings in LRGBCombination, and BackgroundNeutralization should have the color of the RGB images fairly close to accurate. At this point, any narrowband filter data should be added to the RGB data using the NBRGBCombination script. I have tried many times to use PixelMath for this step, but the NBRGBCombination script just works best. However, I have found that it is better to add any NB data, such as Ha, to the RGB image just after background neutralization and before color calibration. I have read several tutorials stating that doing the color calibration after adding the NB data defeats the effect; I find this to be the case only if you use PhotometricColorCalibration and not ColorCalibration. At my experience level with PixInsight, achieving the "right" color balance is the biggest challenge. Through lots of trial and error, I have found the most difficult step is adding the Ha data to RGB images: when this is done, it is very easy for the image to gain a far too reddish background with the blue star-forming regions drowned out. For the actual NBRGBCombination, I just set the narrowband filter width and run with the default settings. This works with many types of narrowband data. I will make combined color-enhancement MasterLight frames from the narrowband filter data, such as RED: (0.5*SII) + (0.5*Ha), GREEN: (0.4*Ha) + (0.6*OIII) and BLUE: (0.15*Ha) + (0.85*OIII), and add that color enhancement to the R, G and B channels with this script. However, I have mostly added Ha data alone to the R channel.

 

30) Once the narrowband data is added, I run the ColorCalibration process. I use the same preview that I used for BackgroundNeutralization as the background reference, and either the entire image as the white reference or, if there is a prominent galaxy, a preview around the galaxy. Running the ColorCalibration process should correct the overall color balance nicely. This step is not done for narrowband composite images since they use a false color palette anyway.

 

31) If I do not wish to add narrowband data to my RGB image, then I prefer to use the PhotometricColorCalibration process for the white balance. However, prior to running PhotometricColorCalibration, I run the ImageSolver script to plate-solve the image. For a drizzled image, the pixel scale will be reduced from the captured pixel scale by the same factor that you drizzled the integration, typically 2X. Be sure to save the image at this point to capture the RA and DEC of the image in the FITS header.
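
The pixel scale to give the solver is easy to work out. A quick Python example (the focal length here is a placeholder, substitute your own; 3.8 um is the ASI1600 pixel size):

pixel_um = 3.8        # ASI1600MM pixel size in microns
focal_mm = 1000.0     # placeholder focal length, substitute your own
drizzle = 2           # DrizzleIntegration scale factor used in step 17

native_scale = 206.265 * pixel_um / focal_mm      # arcsec per pixel
drizzled_scale = native_scale / drizzle
print(f'{native_scale:.2f}"/px native, {drizzled_scale:.2f}"/px after {drizzle}x drizzle')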

 

32) I then run the PhotometricColorCalibration process. I have directly compared PhotometricColorCalibration to ColorCalibration and have found them to be mostly equivalent. For PhotometricColorCalibration, the image center RA and DEC should be read from the FITS header that was set by the ImageSolver script. For galaxies, I use either the average spiral galaxy or the elliptical galaxy white reference, depending on the target. For nebulae, I use either the average spiral galaxy or the G2V star as the white reference. All other settings are left at defaults. This step is not done for narrowband composite images since they use a false color palette anyway.

 

33) At this point, the RGB or NBRGB image should have a good color balance with correct star color and reasonable saturation. The image should look mostly as you would expect for the target. NB false color images will look very saturated in whichever channel you used for the Ha data; this is expected, since the color palette will require a significant amount of manipulation later in the process.

 

34) After completing the color balance correction of the NbRGB, I extract the lightness from the NbRGB while still in the linear state. This image contains a significant fraction of the lightness data of the image and can be added to the actual LUM or NbLUM data later in the process to further improve the signal to noise and to add further sharpness and detail. For NB, I also do the lightness extraction for any false color image, to add back to the image after color manipulation or to use as a mask for further processing such as noise reduction and deconvolution for sharpness.

 

35) For RGB/NBRGB images, I will sometimes use either a CurvesTransformation to increase the overall color saturation or the ColorSaturation process to selectively increase the saturation of a specific color prior to stretching the image to non-linear. It may be helpful to generate and apply a star mask to the image to help preserve the star color balance while you are trying to enhance the color of a DSO (a sketch of the masked blend idea follows below). For the StarMask, I use Small-scale = 2, Large-scale = 2 and Compensation = 3. I use the readout mode to see what the average background intensity of the image is and set the noise threshold to be just above this level. I then do a HistogramTransformation of the StarMask to clip the black point, remove any residual background structure, and isolate the stars in the mask. I try not to overdo this saturation change in the linear state and will look at the STF of the linear image carefully. Since the RGB/NbRGB is mainly for the chrominance information, some degree of oversaturation is often desirable for combination with the luminance data for the LRGB image. I have also used the DCONprotectionMask that I detail later in the flow for color saturation when I only want to touch the DSO and strongly protect the stars and background.
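
As a rough illustration of how a star mask holds back a saturation boost on the stars, here is a numpy/matplotlib sketch under hypothetical data; the real CurvesTransformation/ColorSaturation tools work differently internally, but the masked-blend idea is the same:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    rgb = np.random.rand(100, 100, 3)        # hypothetical RGB data in [0, 1]
    star_mask = np.random.rand(100, 100)     # ~1 on stars, ~0 elsewhere

    # Crude global saturation boost standing in for CurvesTransformation.
    hsv = rgb_to_hsv(rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] * 1.3, 0.0, 1.0)
    boosted = hsv_to_rgb(hsv)

    # Blend through the mask: stars keep their original color, the DSO and
    # background get the boost.  In PI you would invert the star mask before
    # applying it so that the stars end up protected (black in the mask).
    k = star_mask[..., None]
    result = k * rgb + (1.0 - k) * boosted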

 

36) Prior to stretching the RGB/NBRGB image, I do an initial noise reduction while still in the linear state. For a protection mask, I clone the image, apply a HistogramTransformation using the default STF screen stretch settings, and invert the result for the mask (a sketch of this mask follows the settings list below). This protects the bright parts of the image and exposes the background effectively. For this noise reduction, I use MultiscaleLinearTransform. The settings I use are:
     a. Algorithm: Starlet Transformation
     b. Layers: 4
     c. Noise Reduction: Enabled
            i.   Layer 1: Threshold: 3.0; Amount: 0.5; Iterations: 3
            ii.  Layer 2: Threshold: 2.0; Amount: 0.5; Iterations: 2
            iii. Layer 3: Threshold: 1.0; Amount: 0.5; Iterations: 2
            iv. Layer 4: Threshold: 0.5; Amount: 0.5; Iterations: 1
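
The inverted screen-stretch mask mentioned above can be pictured with PixInsight's midtones transfer function; this is only a sketch with made-up data and an example midtones value, not the exact STF math (which also applies shadow and highlight clipping):

    import numpy as np

    def mtf(x, m):
        # Midtones transfer function: mtf(0) = 0, mtf(m) = 0.5, mtf(1) = 1.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    lum = np.clip(np.random.rand(100, 100), 1e-6, 1.0)   # hypothetical linear data
    m = 0.01                                             # example midtones balance from an auto STF

    stretched  = mtf(lum, m)        # roughly what the STF-based HistogramTransformation does
    noise_mask = 1.0 - stretched    # inverted: background ~1 (noise reduction acts),
                                    # bright structures ~0 (protected)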

 

37) I now stretch the RGB/NBRGB image to non-linear. Previously, I preferred to use the MaskedStretch process for this step rather than an STF-specified HistogramTransformation, because I felt that MaskedStretch does a better job of preserving the color balance in the image. For MaskedStretch, I use the same preview for the background as used for the BackgroundNeutralization and set the background target to 0.2, a bit lower than default. I have now switched to mostly using ArcsinhStretch. For this process, you set the stretch factor using a real-time preview. I find that ArcsinhStretch is extremely good at preserving the color of the image and not washing it out (the rough math is sketched below). This gives you more flexibility for tweaking the color, rather than having to work to recover the color saturation in the image. With either method, I tend to under-stretch the image to leave room for a second, less aggressive stretch using HistogramTransformation. For that step, I reset the black point to just before it starts to clip and move the midtones down until the background just starts to become visible. Once I am satisfied with the non-linear stretch, this completes the linear processing of the RGB or NBRGB image.
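
The reason an arcsinh-type stretch holds color so well is that all three channels are multiplied by the same factor, so the R:G:B ratios survive the stretch. A rough sketch of that idea (an assumed form of the curve; the real ArcsinhStretch process also handles the black point) looks like this:

    import numpy as np

    def arcsinh_stretch(rgb, s):
        lum = rgb.mean(axis=-1, keepdims=True)               # simple luminance estimate
        k = np.arcsinh(s * lum) / (np.arcsinh(s) * np.maximum(lum, 1e-12))
        return np.clip(rgb * k, 0.0, 1.0)                    # same factor k for R, G and B

    rgb_linear = np.random.rand(100, 100, 3) * 0.05          # hypothetical faint linear data
    rgb_nonlinear = arcsinh_stretch(rgb_linear, s=50.0)      # s plays the role of the stretch slider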

 

38) For the LUM, the first step is to add any NB data to enhance the contrast. Since the LUM MasterLight frame is monochrome and the NBRGBCombination script requires an RGB file as input, we first need to convert the LUM MasterLight frame from grayscale to RGB (the ConvertToRGBColor process), which assigns the grayscale data equally to the R, G, and B channels. I then add the desired narrowband data to this RGB LUM with the NBRGBCombination script. The resulting image is in RGB format, so we convert the new NbLUM image back to grayscale (the ConvertToGrayscale process).

 

39) The next step I credit to John Hayes from Cloudy Nights. Here we also add the lightness extracted from the RGB, RGB_L, to the NbLUM image to create a SuperLuminance (SLUM) image holding all of the lightness information that was collected. Depending on the number of subframes collected, these two images will have different signal-to-noise levels and different background intensities. I first equalize the background levels by applying a median adjustment to the RGB_L with PixelMath in PI – K: $T + (median(<NbLUM>) - median($T)) – to create an RGB_L_LF image. I then create a clone of both the NbLUM and the RGB_L_LF so that I have a total of four images: NbLUM, NbLUM_Clone, RGB_L_LF and RGB_L_LF_clone. We want to average the NbLUM and the RGB_L_LF using noise weighting to account for the differences in signal-to-noise with the ImageIntegration process, but that process will only do noise-evaluation weighting with a minimum of four frames; the clone frames solve this problem (a sketch of the arithmetic follows the settings list below). In the ImageIntegration process we do not want to do any pixel rejection, since that was already done on the MasterLight frames that generated the input files. I set the Normalization to Additive with scale, although this is probably unnecessary since the background levels were already normalized with PixelMath; I have not determined whether it can be set to None. For this integration, I use these settings in PixInsight (Differences from Default):
         a. Image Integration
                 i. Combination: Average
                 ii. Normalization: Additive with scale
                 iii. Weights: Noise Evaluation
                 iv. Scale Estimator: (MAD)
                 v. Generate Drizzle Data: Unchecked
         b. Pixel Rejection (1)
                i. Rejection Algorithm: None
                ii. Normalization: None
        c. Pixel Rejection (2)
                i. High and low Sigma = N/A
        d. Large Scale Pixel Rejection
                i. Reject low large-scale structures: Unchecked
                ii. Reject High large-scale structures: Unchecked
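
The arithmetic being approximated here is just a background match followed by a noise-weighted average; a minimal numpy sketch with hypothetical arrays (PixInsight's noise evaluation is more sophisticated than the crude estimate below):

    import numpy as np

    NbLUM = np.random.rand(100, 100)          # hypothetical NB-enhanced luminance
    RGB_L = np.random.rand(100, 100) * 0.8    # hypothetical lightness extracted from the RGB

    # The PixelMath median match: $T + (median(NbLUM) - median($T)) applied to RGB_L.
    RGB_L_LF = RGB_L + (np.median(NbLUM) - np.median(RGB_L))

    # Noise-weighted average, roughly what ImageIntegration's noise evaluation
    # weighting produces from the two images plus their clones.
    n1 = NbLUM[:20, :20].std()                # crude background-patch noise stand-ins
    n2 = RGB_L_LF[:20, :20].std()
    w1, w2 = 1.0 / n1**2, 1.0 / n2**2
    SLUM = (w1 * NbLUM + w2 * RGB_L_LF) / (w1 + w2)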

 

40) The next step is to run Deconvolution on the lightness data in whatever form is final: LUM, NbLUM or SLUM. For an NbLRGB, it is the SLUM generated in the previous step. For NB false color images, it would be the lightness extracted from the NbRGB, and deconvolution is applied to that data. I have found deconvolution to be the trickiest and most time-consuming part of the post processing. It requires many preparation steps before you can even start testing the Deconvolution process, and many rounds of trial and error on preview regions to get the settings optimal. It is very easy to overdo this step and create ugly artifacts in the image; it is always a case of less is more. Deconvolution does not do magic and you should not expect incredible results. The effects are subtle but quite nice if done right. If done carefully, the image sharpness can be improved very early, in the linear state, and that improvement will carry through the further post processing in the non-linear state.

 

41) There are four preparation steps needed for the Deconvolution process. The first, second and fourth steps are rather straightforward; I find the third step tricky to get the best results:
             a. Generate a PSF (point spread function) for the stars in the image
             b. Create a StarMask to protect the stars from ringing artifacts
             c. Create a lightness mask, a RangeMask, or a DCONprotectionMask built from a RangeMask/StarMask combo to isolate the bright features of the image that are targeted for sharpening, and to further protect the background from sharpening and the stars from ringing
             d. Create a set of preview areas in the image, covering star regions, bright nebula or galaxy regions and fainter areas, to test settings of the Deconvolution process and the effectiveness of the masking protection for the stars and background

 

42) The point spread function for the image is generated with the DynamicPSF process. After opening the process, I enlarge the image enough to clearly see individual stars. I then systematically go over the whole image clicking on stars, making sure that the stars are not saturated, not very small, and not so close to adjacent stars that the selection box cannot isolate the selected star. If I click on a star that I don't like, I simply hit delete to remove it from the list. Each selected star shows up in a list on the DynamicPSF screen. I try to get as many stars as I can to build a good statistical sample across the entire image. Once I have sampled the whole image, the list is sorted and reduced using the generated statistics for A (amplitude), r (aspect ratio) and MAD (mean absolute difference). The list can be ordered by each of these parameters, and I clip both the high and low tails of the distribution for each of them, in the order MAD, A, and then r. I will go from maybe 200-300 stars down to about 20-30 (the sketch below shows the idea). Once the list is clipped, the PSF image is generated by clicking on the icon that looks like a camera. I then rename and save the PSF image.
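
The idea behind the clipping is simply to keep the central part of each distribution; a toy sketch with made-up measurements (in DynamicPSF this is done by sorting the list and deleting rows by eye, and the cut points below are arbitrary examples):

    import numpy as np

    A   = np.random.rand(250)      # amplitude
    r   = np.random.rand(250)      # aspect ratio
    MAD = np.random.rand(250)      # mean absolute difference of the fit

    keep = np.ones(250, dtype=bool)
    for values in (MAD, A, r):                       # clip in the order MAD, A, then r
        lo, hi = np.percentile(values[keep], [20, 80])
        keep &= (values >= lo) & (values <= hi)

    print(keep.sum(), "stars kept for the PSF")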

 

43) To create the StarMask that provides ringing protection for the stars in the image, I use the StarMask process with the following settings:
             a. Set the scale = 7 or 8 in order to capture the largest stars. You may need to try both settings and see which captures the largest stars best.
             b. Look at the background intensity to determine a good value for the noise threshold – it needs to be just a bit above the background maximum.
             c. Structure Growth: Large scale = 2, Small scale = 2, Compensation = 3
             d. Mask Generation: Smoothness = 16
             e. This should generate a good star mask with most of the bright stars, but it may miss the smaller stars. To capture more of the fainter stars, lower the noise threshold, but not too far or a lot of noise will be captured as stars.

 

44) A protection mask is needed to isolate the bright features of the image that are targeted for sharpening, and to further protect the background from sharpening and the stars from ringing during the deconvolution process, but there are various flavors of mask to choose from. The simplest to create is a basic lightness mask: just clone the NbLUM/SLUM image and do a HistogramTransformation using the STF settings. This is one of the most useful masks for many purposes, particularly for noise reduction if you invert it. More complex is a RangeMask or a RangeMask/StarMask combo mask. Mask creation in general has not been that straightforward for me and seems to require a lot of trial and error to get the best results. The RangeMask process has sliders for the lower limit and upper limit of the brightness values to include, plus sliders for feathering the mask with both Fuzziness and Smoothness. Creating the right RangeMask for deconvolution is not an exact science: you need to protect the background strongly to prevent sharpening of the noise, and the halos of the stars to protect against ringing, and this is a compromise with a pure RangeMask. The RangeMask should select only the brightest portions of the image and protect the background, but it cannot, by itself, exclude the stars as well. For this, a RangeMask combined with a StarMask is a better approach, since the StarMask is applied independently for the deringing settings and the combined RangeMask/StarMask will isolate the halos of the stars for further protection. You create the RangeMask and StarMask the same way as above, but for the RangeMask in a combo you do not want to feather it very much – use a Smoothness of ~2 – since the combo mask will be blurred later on. The StarMask also needs to be modified: stretch it a bit with HistogramTransformation to bring out the fainter stars, but not so much as to start pulling noise out of the background. Furthermore, the stars in the star mask need to be dilated using the MorphologicalTransformation process so that they cover more of the star halos in the RangeMask when combined. For the MorphologicalTransformation, we increase the size of the structuring element and make it more star-shaped: select 5 (25 elements) for Size and click the top, bottom, left and right three black squares to make a more star-shaped element. The Selection setting must also be adjusted. A setting of 0.50 does nothing to the star sizes; a setting lower than 0.50 decreases star sizes and a setting higher than 0.50 increases them. I set Selection = 0.75 and increase Iterations to 3. Amount can be tweaked to something like 0.50 if you would like the end result to be a 50% blend between the original image and the modified image (0.30 would blend 30% of the modified image with 70% of the original). Since this is only a mask image, we keep Amount = 1. Now, from the RangeMask and the modified StarMask, I use PixelMath to combine them with RGB/K set to: RangeMask - StarMask, with "Create new image" checked and Grayscale selected under Color space, to create the combo DCONprotectionMask (see the sketch below). You then need to inspect the mask closely, in both the background and the bright nebulosity/galaxy, for stars where only the core has been removed rather than the whole star and its surrounding halo.

These stars need to be cleaned up using the CloneStamp to cut segments out of the combined mask image, producing a larger black mark where the pronounced stars are for better protection. There should not be many of these that need to be manually fixed. Be sure to click the Execute button on CloneStamp to apply the operation to your image before you close CloneStamp, or all your work will disappear - DAMHIK. We then blur the entire combined DCONprotectionMask with the ATrousWaveletTransform process to smooth out any harsh transitions introduced in the previous steps before applying the mask in the Deconvolution process. For this process, leave Layers set to the default of 4 and disable the detail layers numbered 1, 2, 3 and 4, but keep the R (residual) layer enabled, as it represents all the remaining wavelet layers. Applying this process blurs the image by deleting the detail in the smaller scale structures while keeping the largest scale structures intact. I will typically apply the ATrousWaveletTransform process twice to the image. This completes the DCONprotectionMask generation. This mask is not only critical for the Deconvolution process but is also useful for many other post-processing steps where we wish to work only on the nebulosity or galaxy and protect both the stars and background, whether for sharpening, color manipulation, color saturation, or contrast enhancement. In some cases, however, such as globular clusters, even the simple lightness mask will work well enough as the deconvolution mask. It simply takes some trial and error to determine how sophisticated this protection mask needs to be.
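
A compressed sketch of the whole DCONprotectionMask recipe (dilate the star mask, subtract it from the range mask, then blur), using hypothetical arrays and scipy stand-ins for the MorphologicalTransformation and ATrousWaveletTransform steps:

    import numpy as np
    from scipy.ndimage import grey_dilation, gaussian_filter

    range_mask = np.random.rand(100, 100)    # bright structures ~1, background ~0
    star_mask  = np.random.rand(100, 100)    # stars ~1, elsewhere ~0

    star_big = grey_dilation(star_mask, size=(5, 5))        # stand-in for the dilation step
    dcon_protect = np.clip(range_mask - star_big, 0.0, 1.0) # the PixelMath: RangeMask - StarMask
    dcon_protect = gaussian_filter(dcon_protect, sigma=4.0) # stand-in for the residual-only blur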

 

45) The fourth preparation step is to set up several preview boxes on the image. These previews should cover the bright areas that you want to sharpen, background areas that you want to leave alone, transition areas in between, and areas of stars. I may create five or six preview boxes for this purpose, and they are very useful. Deconvolution is processor and memory intensive and can take a long time to run on the full image. The settings for the best results can take many iterations to optimize, and you have to check multiple types of structures with those settings, since what looks great on the galaxy may totally trash the stars or background. This is why the protection mask and StarMask protection are so important. Running multiple settings on smaller previews takes much less time, so more optimization can be done before running the settings on the entire image.

 

46) With the star protection mask, point spread function, lightness selection mask and previews created, we can now start the Deconvolution process itself. First, click on External PSF and select your generated PSF image. For settings, I use:
            a. Algorithm: Regularized Richardson-Lucy
            b. Target: Luminance (CIE-Y)
            c. Deringing: Enabled
                   i. Global Dark: 0.0100
                   ii. Global Light: 0.0020
                   iii. Local Deringing: Enabled
                   iv. Local Support: choose your StarMask
                   v. Local Amount: 0.7
             d. Wavelet Regularization: Enabled
                   i. Settings at default

 

47) The tuning of deconvolution is the trickiest issue in processing, in my opinion, and it takes a lot of trial and error. Using the different preview regions, test-run the deconvolution starting with your area of highest interest, such as the main galaxy or the structural areas of a nebula like elephant-trunk features. You will be tweaking Global Dark to remove any dark halos and Global Light to suppress bright string-like artifacts. Also start with a low number of iterations; I start with 30-40. You may also have to try different protection masks to see which works best for both the local deringing support and the background protection. You tweak Global Dark in small amounts (~0.005) and Global Light in even smaller ones (~0.0005). Work on Global Dark first. Once you have something that works for your area of highest interest, try the same settings on the other previews. You may find that what works best on the main feature makes the other areas look terrible; it takes a lot of back and forth. Once you have settings that work for all previews, raise the iterations to 50-60 and repeat. You may not even want to go this high on iterations; I have used as few as 30 on some targets. The key is to remember that deconvolution is not a fix-all and it is important to have a light hand with this tool. However, getting a good deconvolution is well worth the time (which can be considerable), since it can make a huge difference in the final image. Also remember that further sharpening can be done later in the processing with other tools.

 

48) Prior to stretching the deconvolved LUM or SLUM, I do an initial noise reduction while still in the linear state, in the same manner as for the RGB/NbRGB. For a protection mask, I clone the image, apply a HistogramTransformation using the default STF screen stretch settings, and invert the result for the mask. This protects the bright parts of the image and exposes the background effectively. For this noise reduction, I use MultiscaleLinearTransform. The settings I use are:

            a. Algorithm: Starlet Transformation
            b. Layers: 4
            c. Noise Reduction: Enabled
                        i.   Layer 1: Threshold: 3.0; Amount: 0.5; Iterations: 3
                        ii.  Layer 2: Threshold: 2.0; Amount: 0.5; Iterations: 2
                        iii. Layer 3: Threshold: 1.0; Amount: 0.5; Iterations: 2
                        iv. Layer 4: Threshold: 0.5; Amount: 0.5; Iterations: 1

 

49) With the LUM/SLUM noise reduced, it is ready to stretch to non-linear. As with the RGB, I previously preferred to use MaskedStretch for this rather than an STF-specified HistogramTransformation. For MaskedStretch, I use a preview for the background and set the background target to 0.2, a bit lower than default. I have also started to mostly use ArcsinhStretch for the LUM/SLUM; you set the stretch factor using a real-time preview. I find that ArcsinhStretch also seems to help preserve the dynamic range of the LUM/SLUM, especially for HDR-type targets like M16 or M42. This also gives you more flexibility for tweaking the contrast of the LUM later in the process flow. With either method, I tend to under-stretch the image, which gives me more room to play with a second stretch using HistogramTransformation. For that step, I reset the black point to just before it starts to clip and move the midtones down until the background just starts to become visible. Once I am satisfied with the non-linear stretch, this completes the processing of the LUM/SLUM image.



#10 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1100
  • Joined: 05 Nov 2015
  • Loc: Edison, NJ

Posted 24 July 2018 - 11:09 AM

Wow! That is quite an impressive writeup.

Hopefully you save all of the process icons with defaulted settings so that you don't have to regenerate them each time.

 

John



#11 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1100
  • Joined: 05 Nov 2015
  • Loc: Edison, NJ

Posted 24 July 2018 - 12:43 PM

4) I have found that the ASI1600MM-COOL and -PRO both do not work well with bias frame calibration. I use DarkFlat frames for calibrating my flat frames and use only MasterFlat and MasterDark Frames for calibrating my light frames. 

 

I really appreciate your writeup and am going through it incorporating into my process flow things that are different from my process flow, if I find that it makes sense to me.

DarkFlat/FlatDark has always kind of confused me. 

 

When you say a DarkFlat I am assuming that you are calibrating the flats with a specific dark that matches the exposure time of the flat and also NOT adding a bias frame into the calibration of the flats.

This is because the DarkFlat should actually also have the bias info in it as well as the added benefit of modeling any amp glow that might be present.

Does that sound correct?

 

Now also, when you calibrate the light frames you add in the MasterFlat and the MasterDark but do not add in the bias frame, correct?

 

thanks 

 

John



#12 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 24 July 2018 - 08:34 PM

Wow! That is quite an impressive writeup.

Hopefully you save all of the process icons with defaulted settings so that you don't have to regenerate them each time.

 

John

John,

 

I did not at first, because I simply did not know how. Pixinsight is such a deep program and I sort of just plunged into it. As I have gone along, I am getting much more disciplined in my approach and methodology. For instance, right now I am reprocessing some SHO Hubble Palette images and finding that each time through, even though I am trying to use the same process, I get a different final result. I am trying to use the process history and the process icons generated from the history to see what differences I have between iterations. I am also not closing any views and am using workspaces to organize the work. It takes so much practice to get a good flow and method, and I am still learning. However, each pass gets better, and that is what I am after. I just subscribed to Adam Block's tutorial series and am going through it methodically, but it is such a time investment. I am seriously considering taking a workshop to learn more.




#13 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 24 July 2018 - 08:38 PM

Also, my flow is always a work in progress. I added a big section to this post and will likely add more as time goes on; I have several updates from even the last version. I am adding a section right now on color manipulation for the Hubble Palette with NB filters, on HDR imaging and how to add NB data to 64-bit HDR files, and on all my sharpening, curves and contrast enhancements. I am now trying to learn star removal, star size reduction and RGB star color replacement. It's a never-ending process, which is why I am writing it all down. I have to go back to my processing notes all the time since I can't remember all the details...



#14 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 24 July 2018 - 08:43 PM

4) I have found that the ASI1600MM-COOL and -PRO both do not work well with bias frame calibration. I use DarkFlat frames for calibrating my flat frames and use only MasterFlat and MasterDark Frames for calibrating my light frames. 

 

I really appreciate your writeup and am going through it incorporating into my process flow things that are different from my process flow, if I find that it makes sense to me.

DarkFlat/FlatDark has always kind of confused me. 

 

When you say a DarkFlat I am assuming that you are calibrating the flats with a specific dark that matches the exposure time of the flat and also NOT adding a bias frame into the calibration of the flats.

This is because the DarkFlat should actually also have the bias info in it as well as the added benefit of modeling any amp glow that might be present.

Does that sound correct?

 

Now also, when you calibrate the light frames you add in the MasterFlat and the MasterDark but do not add in the bias frame, correct?

 

thanks 

 

John

Yes, I am only using a dark frame taken at exactly the same gain, temperature and exposure time as my flats, and not using any bias frames. You are correct, the dark flat also has the bias information in it, so by using a dark flat you incorporate the camera bias at the same time.

 

For the ASI1600MM-PRO/COOL it really helps to NOT use the calibrate option in the MasterDark and MasterDarkFlat calibration. If you leave this unchecked, the amp glow is significantly reduced and much better calibrated out. I got this tip from Jon Rista and have found that it works very well. Between this calibration method and learning to use DBE both correctly and more aggressively, I have eliminated all the amp glow issues I used to have.



#15 starhunter50

starhunter50

    Viking 1

  • *****
  • Posts: 564
  • Joined: 07 Sep 2010
  • Loc: Tilbury,Ontario 42N , 82W

Posted 20 September 2018 - 05:01 PM

Hi everyone, for all those who are absolute beginners in Pixinsight 1.8, here is my approach to a simple workflow that will have your friends drooling.

 

https://www.youtube....JJqew6rQ&t=470s

 

12 Part series and great tips and tricks to get you started in the right directions.

 

Follow these steps and you will be amazed at your progress, no need to buy books and search the Net anymore.

 

Enjoy!

 

Astrodude.

 

 

Trifid Nebula  M20 -  V-RC6 Altair Astro 183C/M combo L-RGB ( no filters )

30 minutes of integration only...

 

https://www.flickr.c...eposted-public/




#16 rlsarma

rlsarma

    Ranger 4

  • -----
  • Posts: 310
  • Joined: 24 Aug 2015
  • Loc: Digboi, Assam, India

Posted 27 September 2018 - 06:35 AM

Hi everyone, for all those who are absolute beginners in Pixinsight 1.8, here is my approach to a simple workflow that will have your friends drooling.

 

https://www.youtube....JJqew6rQ&t=470s

 

12 Part series and great tips and tricks to get you started in the right directions.

 

Follow these steps and you will be amazed at your progress, no need to buy books and search the Net anymore.

 

Enjoy!

 

Astrodude.

 

 

Trifid Nebula  M20 -  V-RC6 Altair Astro 183C/M combo L-RGB ( no filters )

30 minutes of integration only...

 

https://www.flickr.c...eposted-public/

Hi Astrodude,

 

I have not only thoroughly enjoyed but also learnt a lot from your 12-part YouTube series on PixInsight (for absolute beginners) after downloading it to my hard drive. I am looking forward to learning from your Intermediate series as well (I have downloaded all the intermediate videos you have uploaded to YouTube). I have also learnt how to take flats using an LED panel in Sequence Generator Pro from the video you uploaded to YouTube. I must say these are absolutely helpful videos for learners like us.

 

I would like to request that you upload a few video tutorials on Sequence Generator Pro as well.

 

Best regards.

 

Rajib




#17 starhunter50

starhunter50

    Viking 1

  • *****
  • Posts: 564
  • Joined: 07 Sep 2010
  • Loc: Tilbury,Ontario 42N , 82W

Posted 27 September 2018 - 09:30 AM

Thank you, sir,

 

what would you like to see in a Video concerning SGP ??

 

Mitch / Starhunter / Astrodude ...



#18 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 25 November 2018 - 12:45 PM

I have been watching Adam Block's tutorial videos and have made several adjustments to my workflow as a result. I transitioned my imaging from DSLR with a full spectrum Canon 6D to a mono camera with filter wheel using an ASI1600MM-COOL with ZWO filters, and more recently to an ASI1600MM-PRO camera with Astrodon filters. I have also just purchased an ASI071MC-PRO OSC camera as a replacement for my Canon 6D. I struggled with issues due to filter reflections and imaging artifacts, but that subject was covered in other threads:

https://www.cloudyni...ing-wrong-here/

https://www.cloudyni...ring-artifacts/

and here are some others of value from other posters with the same issues:

https://www.cloudyni...gs-in-my-flats/

https://www.cloudyni...se-reflections/

At the same time, I was hiking up this learning curve, I simultaneously migrated my image processing from separate programs: Images Plus 6.0 for calibration and stacking and Photoshop CC for image post processing – and moving to pixinsight for all aspects of image processing. Of the two tasks, the change to pixinsight has been more difficult. This is mainly due to the very different nomenclature, program feel and the frankly myriad of so many options for how to do anything. I once saw a statement that said – if you can do something two different ways, pixinsight will offer all 20! The progress was slow. I have been using the tutorials on YouTube by Richard Block:

 

https://www.youtube....UOe4R5Hng&t=13s

 

https://www.youtube....h?v=zU5jJgjKuQQ

 

https://www.youtube....h?v=ZLef9GlHLrs

 

I have also been using the book: Inside Pixinsight by Warren Keller and the tutorials from Light Vortex Astronomy:

 

http://www.lightvort.../tutorials.html

 

I have also subscribed to two different video tutorial libraries: Adam Block’s Pixinsight fundamentals and Pixinsight Horizons (these are absolutely great):

 

https://www.adamblockstudios.com/

 

I also subscribed to IP4AP.com

 

https://www.ip4ap.com

 

These resources have been quite valuable. However, the newer CMOS cameras are not the focus of these tutorials and references, as most were written with CCD cameras specifically in mind, even the newer ones. This issue really only affects the image calibration process and not the bulk of the post-calibration techniques and processes. I have also received extremely valuable insights and processing tips from members of Cloudy Nights. My intent is to continue to update this flow as I learn new or improved methods from all sources; any issues or mistakes in this flow are my own and no one else's. At this point, I am quite comfortable with image pre-processing to reach master light frames, and with preparing images through to full LRGB or NB color images from my monochrome MasterLight frames, and RGB with my OSC, all in Pixinsight. However, this is very much an art and not a science. There is no right or wrong way to process the data you collect; if you like the way your picture looks, then that is all that matters. This is my process flow:

 

Calibration Frame Collection:

 

1) For my ASI1600MM-COOL, I generated a dark frame library for three different camera temperatures: -10C, -15C and -20C, with two different gain settings for each temperature: 76 gain, 40 offset, 90 USB for LRGB frames and 200 gain, 40 offset, 90 USB for narrowband filters, with exposure times of 10s, 30s, 60s, 90s, 120s, 150s, 180s, 240s, and 300s. I also generated an analogous dark library for the ASI1600MM-PRO for the same gain settings, temperatures and exposures. The ASCOM driver for the ASI1600MM-PRO and ASI071MC-PRO cameras does not, by default, show that the camera's USB setting or offset can be modified; these settings are accessible by clicking the advanced settings tab and can then be adjusted if you choose to do so. I have left the USB and offsets at their default values. For my ASI071MC-PRO, the TEC cooler is unable to reach as low a temperature as my ASI1600MM cameras – at least in the heat of summer – so I currently have libraries at -5C and -10C. The dark current for the ASI071MC-PRO is basically flat below about 0C, so there is not much to be gained by going lower than -10C anyway. For the ASI071MC-PRO, I use two gain settings: gain of 90 (similar to ISO800) and gain of 150 (similar to ISO1600), with exposure times of 10s, 30s, 60s, 90s, 120s, 150s, 180s, 240s, 300s and 600s. For each set of dark frames, I used PI to integrate the frames to generate MasterDark frames. These frames are reused.

 

2) For the MasterDark generation I used these settings in PixInsight (Differences from Default):
     a. Image Integration
            i. Combination: Average
           ii. Normalization: No Normalization
          iii. Weights: Don’t Care (All = 1)
      b. Pixel Rejection (1)
           i. Rejection Algorithm: Winsorized Sigma Clipping
          ii. Normalization: No Normalization
      c. Pixel Rejection (2)
           i. High Sigma = 2.5
          ii. Low Sigma = 4.0

 

3) All sets of dark frames were integrated into MasterDark Frames to form the DarkFrame Library that is reused since the dark current of the sensor is very stable and should not change much over time. I then archived all the individual subframes and keep only the integrated MasterDark Frames on my laptop to save disk space.

 

4) I have found that the ASI1600MM-COOL and -PRO and the ASI071MC-PRO all do not work well with bias frame calibration and when I tried to use bias frames, both my Flat Frames and Light Frames showed fixed pattern noise artifacts. I now use DarkFlat frames for calibrating my flat frames and use only MasterFlat and MasterDark Frames for calibrating my light frames. For my Canon 6D, I still used bias frames to calibrate my Flat Frames and Light frames and do not use DarkFlat Frames.

 

5) For generating flat frames for either of the ZWO ASI1600MM cameras or my ASI071MC camera, I use the Sequence Generator Pro flats calibration wizard to determine flat frame exposures for the seven filters on the mono cameras or the five filters for the OSC, using my Flatman EM panel. I have also generated good flat frames with both a homemade lightbox and with tee-shirt sky flats, with equal success. For my Flatman, I point the OTA to the zenith and park the scope. I do the flat frame capture in the early evening after dark, but before full astronomical dark, so that there is no light leakage to affect my flats. I use an illumination setting of 200 on the panel and a single layer of tee-shirt fabric over the OTA for all filters. For the ASI1600MM-C/PRO cameras, I use 76 gain for the LRGB filters and 200 gain for the NB filters; this determines the exposure conditions for both the flats and the dark flats. For the LRGB flats with my Astrodon filters, the exposure times are typically 0.16s for LUM, ~0.45s for BLUE and GREEN, and ~0.7s for RED. For the narrowband filters, the exposure times are significantly longer even with the high gain: 1.7s for OIII, 6.7s for Ha and 11.1s for SII. For my OSC camera, I use two gain settings, 90 or 150. Since these two settings are basically a factor of two apart in gain (see the quick check below), I use 0.46s at 90 gain and 0.23s at 150 gain with a UVIR cut filter. The gain setting by filter needs to be set in SGP by event so that all flat frames for all filter/gain combinations can be taken sequentially. I also collect dark flats after full dark with the scope OTA capped to eliminate any possibility of light leakage. I collected a dark flat library for each camera and filter at the three different temperatures for the ASI1600MM-C/PRO cameras (-10C, -15C and -20C) and at -5C and -10C for the ASI071MC-PRO. For each set of dark flat frames, I used PI to integrate the frames to generate MasterDarkFlat frames with the same method as the MasterDark frames above. The MasterDarkFlat frames are reused in the same way as the MasterDark frames.
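
As a quick sanity check of that factor-of-two claim, and assuming the camera gain values are in the usual ZWO units of 0.1 dB (an assumption worth verifying against the driver documentation), the step from gain 90 to gain 150 works out to roughly 2x:

    # Assuming ZWO-style gain units of 0.1 dB, treated as a voltage-style gain
    # where 20*log10(factor) gives the dB change.
    gain_step_db = (150 - 90) * 0.1          # 6.0 dB
    factor = 10 ** (gain_step_db / 20.0)     # ~2.0
    print(round(factor, 2))                  # -> 2.0
    print(round(0.46 / factor, 2))           # flat exposure at gain 150: ~0.23 s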

 

6) I collect flat frames using the Flatman EM as described in 5). I collect new flats if there is any change to my optical train or if I have not used the scope for an extended period, such as through bad weather. The scope is semi-permanently set up in my backyard and is covered with a Telegizmos 360-day cover for bad weather. The process of covering and uncovering the rig can displace the camera slightly or move the dust around, so I take new flat frames. Both flat frames and flat dark frames also depend on the optical path and optics; the exposures for both need to be recalculated and redone if the reducer or flattener is changed or removed. The flats are taken at the same gain (and offset/USB for the -COOL) and temperature settings as the light frames: 76 gain for the LRGB filters and 200 gain for the NB filters, with the exposure times determined by the SGP flat calibration wizard. Similarly, for my OSC, flats are taken with the same gain and temperature settings as the light frames, 90 or 150 gain for the UVIR filter, with exposure times as determined by the SGP flat calibration wizard.

 

7) For flat frame calibration – to calibrate flat frames for sensor bias and dark current:
           a. Use MasterDarkFlats ONLY for calibration, at the same exposure, gain, offset, USB and temperature as the flat frames
           b. Uncheck the Optimize option for the MasterDark. Although less important for short exposures, this significantly reduces the effects of amp glow in the integrated MasterLight frames.

 

8) For flat frame integration and MasterFlat frame generation, I used these settings in PixInsight (Differences from Default). Since I do not use sky flats, I do not use the large-scale pixel rejection options:
          a. Image Integration
               i. Combination: Average
              ii. Normalization: Multiplicative
             iii. Weights: Don’t Care (All = 1)
          b. Pixel Rejection (1)
               i. Rejection Algorithm: Winsorized Sigma Clipping
              ii. Normalization: Equalize Fluxes
           c. Pixel Rejection (2)
               i. High Sigma = 2.5
              ii. Low Sigma = 4.0

 

9) This completes the calibration frame collection and calibration MasterFrame generation process.

 

LightFrame Calibration, Pre-Processing and Integration

 

10) I then collect all light frames. I use Sequence Generator Pro (SGP) for image acquisition, PHD2 for autoguiding, ASCOM and EQMOD for mount control and Cartes du Ciel as my planetarium software. I have routinely used data collected over as many as 10-15 nights on a given target and will soon be using data from different years to complete a project. The framing and mosaic wizard and the platesolve-based target centering in SGP work very well for multi-night data collection. However, as a result of collecting frames on multiple nights or in different years, and depending on the target, I will often have multiple sets of flats to process for different sets of light frames. The images are collected in folders by date and the flat frames are tracked by date also. As I am collecting data, I try to start the image processing while I am still collecting; I try to do the calibration and cosmetic correction of the raw light frames in batches as I collect them, so that the flat frames corresponding to the light frames don't get mixed up or become unclear with the passage of time. This also allows me to determine how many good frames I have collected so that makeup frames can be collected as necessary.

 

11) For light frame calibration, I use the MasterDark frames from the appropriate dark frame library for the combination of camera, temperature and gain used for the light frames. I also use the appropriate MasterFlat frame for the combination of camera, reducer, temperature, gain, filter and imaging session. I do not use a MasterBias frame for the ZWO cameras. I output the calibrated light frames to appropriate folders by filter. As I stated above, I recently stopped using the optimize option for my dark frame calibration. Since my dark frames are taken under exactly the same conditions as my light frames, the optimize option will unnecessarily try to scale the darks, causing incorrect calibration. By not using the optimize option, fixed pattern noise (FPN) is better calibrated out. Specifically, for the ASI1600MM-COOL/-PRO or the ASI071MC-PRO cameras, this method calibrates out the amp glow much more effectively than when the optimize option is checked. This hint came from Jon Rista on Cloudy Nights.

 

12) I then do CosmeticCorrection in Pixinsight on all the calibrated light frames. Many of the issues with hot pixels and other artifacts should already have been calibrated out with the MasterDark and MasterDarkFlat image calibration. Adam Block's video tutorial on the CosmeticCorrection process was very enlightening on how this works. I use the MasterDark and Auto Detect methods. I use a single light frame per filter to test settings for the sigma value using the real-time preview, adjusting the sigma for hot and cold pixel corrections for both the MasterDark and Auto Detect methods by eye while watching the elimination of hot pixels. Since my ASI1600MM-C and -PRO cameras have really good sensors and so far show no bad columns or rows, only a few hot pixels, it typically flags only a total of ~5K and <1K pixels for MasterDark and about 300 and 100 pixels for Auto Detect. I check this for each set of light frames by filter. I output the CosmeticCorrection light frames to appropriate folders by filter.

 

13) After CosmeticCorrection of the light frames, I use the Blink process to inspect my individual light frames and discard any obviously bad frames that I did not catch while imaging. At this stage, I am only really looking for bad tracking or very severe passing clouds that clearly ruin the subframe. I use the option to set the histogram screen stretch by frame, even though this takes more time. This is the first pass of grading the light frames. Issues like minor tracking blips, slightly out-of-focus frames, satellite trails, aircraft trails, asteroid tracks, and even thin passing clouds are actually OK: the processes of pixel rejection, large scale pixel rejection and LocalNormalization will reject those issues, and any difference in signal to noise will be handled in the weighting process. Therefore, the data from most frames can be used and will add to the image in a constructive way. This is one of the powers of Pixinsight. Mostly I am looking for obvious, really bad tracking errors or images severely degraded by clouds. As long as the subframes are otherwise reasonable, regardless of signal-to-noise, I do not discard them.

 

14) After image inspection with Blink, the frames from my OSC ASI071MC-PRO or my Canon 6D need to be converted to color with the DeBayer process. The only setting is the Bayer/mosaic pattern, which for my cameras is RGGB; I use the VNG demosaicing method. I output the color subframes to a new folder since there is typically only one filter. With my ASI071MC-PRO, I have recently been using an Astronomik 7nm Ha filter, and I have an STC dual narrowband filter coming that combines Ha and OIII bandpasses. With these filters, I extract the individual channels later in the processing flow: Ha from the RED channel and OIII from both the GREEN and BLUE channels of the RGB images. However, to prepare for doing this later, you should use the Superpixel demosaicing method instead of VNG to avoid color interpolation between pixels.

 

15) I then use the new SubFrameSelector process (which replaces the SubFrameSelector script) in PixInsight to grade and weight my light frames, generating a weight factor for each subframe based on the measured statistics. When loading the light frames into the process, make sure that you choose the CosmeticCorrection frames and not the calibrated frames - DAMHIK - I have wasted a lot of time by making this error. Select Measure Subframes from the dropdown menu, click execute and wait for the process to complete. After running the measurement, I look at the graphs for SNRWeight, Eccentricity and FWHM and mark any obvious outliers to discard, particularly for Eccentricity. I then use the table of image statistics to find the Min/Max values for FWHM, Eccentricity and SNRWeight and feed them, as numbers, into the following weighting expression: 10*(1 - (FWHM - Min(FWHM))/(Max(FWHM) - Min(FWHM))) + 10*(1 - (Eccentricity - Min(Eccentricity))/(Max(Eccentricity) - Min(Eccentricity))) + 30*(SNRWeight - Min(SNRWeight))/(Max(SNRWeight) - Min(SNRWeight)) + 50, where Min() and Max() stand for the minimum and maximum values read from the table. This scores each light subframe on 10% FWHM, 10% Eccentricity and 30% SNRWeight on top of a baseline of 50 for the worst non-discarded light frame. The final constant of 50 is adjusted up or down so that the weight for the best image comes out to 100.0; the expression thus rates images from 50 to 100 and does not completely discount even the poorer images (a sketch of the arithmetic follows below). The process defaults the FITS header weight keyword to SSWEIGHT, which is used later in the image integration process to weight the contribution of each subframe to the final integrated MasterLight frame. I then set the dropdown to Output Subframes, choose the output folders and execute the process a second time; this outputs the subframes to the new folder, skipping the discarded ones. I use the highest weighted subframe for each filter as the reference frame for StarAlignment and ImageIntegration in the subsequent preprocessing steps, and I also identify the 5-10 best-scored subframes to integrate into the reference frame for LocalNormalization. With the new SubframeSelector process, you do not lose the scoring information in the table if you close the process, only if you run the process again, so you can go back to the scoring to identify the highest-scored frame for later processes as long as you have not re-run the process on a different set of frames in the meantime. For each set of subframes, I generate a process icon of the SubframeSelector process after outputting the scored frames; this saves the scoring for that set so that you can determine which frames have the highest scores at any time. This is better than the SubframeSelector script - DAMHIK.
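
The same weighting, written out as arithmetic over hypothetical measurements (the real values come from the SubframeSelector table, and the trailing constant is tweaked so the best frame lands on 100):

    import numpy as np

    # Hypothetical per-frame measurements from the SubframeSelector table.
    fwhm = np.array([2.1, 2.4, 2.9, 3.3])
    ecc  = np.array([0.42, 0.45, 0.55, 0.60])
    snrw = np.array([12.0, 10.5, 9.0, 7.5])

    def unit(x):
        return (x - x.min()) / (x.max() - x.min())   # 0..1 across the accepted frames

    # 10% FWHM + 10% eccentricity + 30% SNRWeight on top of a 50-point floor;
    # smaller FWHM/eccentricity and larger SNRWeight score higher.
    weight = 10 * (1 - unit(fwhm)) + 10 * (1 - unit(ecc)) + 30 * unit(snrw) + 50
    print(weight)   # the first frame here happens to be best in all three, so it hits 100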

 

16) Now the light frames need to be registered to each other using the StarAlignment process. I have typically done this separately for each filter, using the highest weighted light frame from the SubframeSelector as the reference frame. You can also use a single reference frame from your brightest filter, typically LUM or Ha, to register all the subframes to the same reference; that way, after integration, all the MasterLight frames are already registered to each other. If you register by filter, the MasterLight frames will have to be registered to each other again later. Both methods work, but some have stated that the second registration step adds noise unnecessarily; I may try the single-reference approach at a later date. In either case, I check the distortion correction option, using the 2-D Surface Splines registration model with default settings, and check the Generate drizzle data box. I have found that I can sometimes suppress some color artifacts in my stars with the distortion model active, but you can disable it at this point if you want. I also use the distortion model option later on for registering the MasterLight frames to each other. The decision on whether or not to drizzle your image data depends on several factors: it works best on data that is undersampled and that has been dithered during acquisition. Since both my refractor systems are undersampled to some degree, I usually use the drizzle methodology; for my SCT, which is significantly oversampled, drizzling the data makes no sense. I use the default star detection settings since they do not seem to fail unless the frame is really bad, in which case it should be discarded anyway. The registered light frames are output to a new set of folders by filter, and the drizzle data files are also output to the same folders by default. To this point, I have always used the Auto setting for pixel interpolation, but with the large number of subframes that I typically collect with my CMOS camera, forcing the method to Nearest Neighbor may give better results.

 

17) LocalNormalization is run next on the calibrated, cosmetically-corrected, weighted and registered light frames by filter. LocalNormalization is a powerful method for removing large-scale non-uniformities such as thin clouds or light reflections in your frames by comparing them to a reference image generated from clean frames. Having some clean frames, and the generation of the reference frame, is the most important part of this process. If your data is very clean, with no variation in background sky or clouds in any of your subframes, this process can be skipped, but there is really no downside to using LocalNormalization other than the processing time. The simplest way to generate the reference is to use the best single frame from your scoring of the subframes with the SubframeSelector process. However, you can also take a subset of your best frames and integrate them with ImageIntegration to create an even cleaner reference. You could select the top 10 or so frames from the SubframeSelector scoring, or use Blink to choose frames manually, but you want to pick only frames that show true astronomical structures. Previously, I mainly used the single frame method with success, but I began using the integrated frame method and it worked very well for removing some light passing clouds from some subframes so that they did not contaminate the subsequent stacked MasterLight frame. When LocalNormalization is done to best effect, there will be much less background gradient left for removal with processes like DynamicBackgroundExtraction. To generate an integrated reference frame, use the ImageIntegration process with the following settings:
           a. Image Integration
                  i. Combination: Average
                 ii. Normalization: Additive with scaling
                iii. Weights: Noise Evaluation
                iv. Scale Estimator: (MAD)
                 v. Evaluate Noise: Checked
           b. Pixel Rejection (1)
                 i. Rejection Algorithm: Sigma Clipping (depending on the number of images you are using)
                ii. Normalization: Scale + Zero Offset
           c. Pixel Rejection (2)
                 i. High Sigma = 2.5
                ii. Low Sigma = 4.0
           d. Large Scale Pixel Rejection
                i. Reject low large-scale structures: unchecked
               ii. Reject High large-scale structures: unchecked
When loading the files for the light frames into either the ImageIntegration or LocalNormalization processes, make sure that you choose the registered light frames and not the graded or cosmetically corrected frames - DAMHIK - I have wasted a lot of time by making this error. After creating the reference image, open the LocalNormalization process and select that image as your reference image. I have typically used the default settings except for the scale factor, which is increased from the default of 128 to 256 pixels. Then select all of your light frames and execute the process. By creating a high-quality reference image, you should not see any large-scale blotchiness in the background of your final integrated MasterLight frame in the next step; if you do, first try increasing the scale factor further. I have found that a setting of 256 pixels works well for most cases where the default scale of 128 pixels has issues. Initially, I had found that no setting for the scale factor was free of blotchiness and had to use a different normalization method (scale with zero offset), but this was an early exception, likely the result of my inexperience with the tool. Outlier rejection is checked with Hot pixel set to 2, although the image should already be free of hot pixels due to the previous ImageCalibration and CosmeticCorrection processes. This step can be time consuming, especially on an older computer (i5) and with lots of subframes, but it can do wonders for removing artifacts from your work. The LocalNormalization data files are output to the same folders as the registered light frames and the drizzle data files.

 

18) The light frames are then integrated without drizzle expansion to update the drizzle data using the ImageIntegration process. Again, the highest weighted light frame from the SubframeSelector is selected as the reference frame. The light frames are loaded along with the LocalNormalization and drizzle data files. If you made an error and ran the LocalNormalization on the wrong set of subframes, the ImageIntegration will complain with an error - DAMHIK. For this ImageIntegration, I use these settings in PixInsight (Differences from Default). NOTE: The large scale pixel rejection can cause issues with very high dynamic range images such as M42, and if you are seeing artifacts around bright stars, you may need to disable either high or low rejection. For dim images, both can be checked, but I normally use only reject high:
          a. Image Integration
                 i. Combination: Average
                ii. Normalization: LocalNormalization
               iii. Weights: FITS Keyword
               iv. Weight Keyword: SSWEIGHT
                v. Scale Estimator: (MAD)
               vi. Generate Drizzle Data: Checked
          b. Pixel Rejection (1)
                i. Rejection Algorithm: Winsorized Sigma Clipping
               ii. Normalization: LocalNormalization
          c. Pixel Rejection (2)
               i. High Sigma = 2.5
              ii. Low Sigma = 4.0
          d. Large Scale Pixel Rejection
              i. Reject low large-scale structures: unchecked
             ii. Reject High large-scale structures: checked

 

19) When the ImageIntegration routine is completed, the integrated MasterLight that is generated is not drizzled to a different pixel scale. For oversampled data from my SCT, drizzle integration is of no value and the non-drizzled MasterLight frame is the final MasterLight. For my refractors, where I use drizzle due to undersampling, the non-drizzled MasterLight frame is inspected for any issues; if it has a problem, it is not worth running the DrizzleIntegration, which takes considerably longer. The likely cause is a subframe that should have been discarded but was missed; however, all the above processing steps should have been robust enough to generate a clean MasterLight frame at this point. If you do have an issue, you will have to go back to the StarAlignment step and re-run the steps to this point. If you intend to drizzle, the integrated image stack from the ImageIntegration routine can be saved, but it is not used to generate the final drizzled MasterLight frame; the DrizzleIntegration process is used to generate the final MasterLight frames. In DrizzleIntegration, the drizzle and LocalNormalization files are loaded into the process, and the default settings are used with a 2X drizzle. This process generates the final MasterLight frames by filter. FYI, for a large number of subframes, the DrizzleIntegration can take a very long time and will consume most of the processing capacity of your computer - I often start it overnight or before going on some errand and let it run.
20) The above processing steps are repeated for the subframes from each filter used for the image. When complete, the calibrated, cosmetically-corrected, registered, integrated, normalized and drizzled MasterLight frames have been created in linear form for each filter. These MasterLight frames should be saved and archived so that you can go back and reprocess them as you get better with your processing technique. If you do the pre-processing correctly, the MasterLight frames are as good as the data can generate. This is the end of the image preprocessing flow.

 

MasterLight Frame Preparation Processing:

 

21) With my ASI071MC-PRO, I have recently been using an Astronomik 7nm Ha filter, and I have an STC dual narrowband filter coming that combines Ha and OIII bandpasses. With these filters, I extract the individual narrowband channels from the RGB color MasterLight frame. Use the ChannelExtraction process with the RGB color space selected and apply it; this produces three grayscale images for the R, G and B channels. For Ha, the R image is retained as the Ha MasterLight frame and the G and B images are discarded. For the STC dual narrowband filter, the R image contains the Ha data and is retained as the Ha MasterLight frame. Both the G and B images contain OIII data due to the overlap of the GREEN and BLUE filters in the Bayer matrix. To create the OIII MasterLight frame, these two images, OIII_G and OIII_B, need to be combined. Depending on the exact overlap of the bandpasses of the Bayer matrix G and B filters, these two images will likely have different signal-to-noise levels and different background intensities. I first equalize the background levels by applying a median adjustment with PixelMath in PI: K: $T + (median(<OIII_G>) - median($T)) to the OIII_B, creating an OIII_B_LF image. I then create a clone of both the OIII_G and the OIII_B_LF so that I have a total of four images: OIII_G, OIII_G_Clone, OIII_B_LF and OIII_B_LF_clone. We want to average the OIII_G and the OIII_B_LF using noise weighting to adjust for the differences in signal-to-noise with the ImageIntegration process, but that process will only do noise-evaluation weighting with a minimum of four frames; the clone frames solve this problem. In the ImageIntegration process we do not want to do any pixel rejection, since that was already done when the MasterLight frames that form the input files were generated. I set the Normalization to additive with scale; this is probably unnecessary since I normalized the background levels with PixelMath, but I have not confirmed that it can be set to none. For this integration, I use these settings in PixInsight (Differences from Default):
            a. Image Integration
                 i. Combination: Average
                ii. Normalization: Additive with scale
               iii. Weights: Noise Evaluation
               iv. Scale Estimator: (MAD)
                v. Generate Drizzle Data: Unchecked
           b. Pixel Rejection (1)
                 i. Rejection Algorithm: None
                ii. Normalization: None
           c. Pixel Rejection (2)
                 i. High and low Sigma = N/A
           d. Large Scale Pixel Rejection
                i. Reject low large-scale structures: Unchecked
                ii. Reject High large-scale structures: Unchecked
This creates a new monochrome image that is the OIII MasterLight frame built from the data in the G and B color channels; a minimal numeric sketch of this median-alignment-plus-noise-weighted-average idea follows below. These extracted narrowband MasterLight frames can now be combined with data from a UVIR or CLS filter RGB image using the same methods that are used to combine narrowband and LRGB filter data from monochrome cameras, which will be discussed later.
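If it helps to see the arithmetic behind the clone trick, here is what the median alignment plus noise-weighted average works out to for two frames, written in numpy. PixInsight's noise evaluation is multiscale and far more careful than the MAD-based estimate used here, and the function names are mine; this is only the gist.

    import numpy as np

    def mad_noise(img):
        # crude noise estimate: scaled median absolute deviation (real structure inflates it)
        med = np.median(img)
        return 1.4826 * np.median(np.abs(img - med))

    def combine_oiii(oiii_g, oiii_b):
        # 1) align the background of B to G (the same PixelMath median adjustment as above)
        oiii_b_lf = oiii_b + (np.median(oiii_g) - np.median(oiii_b))
        # 2) inverse-variance weights from the per-image noise estimates
        w_g = 1.0 / mad_noise(oiii_g) ** 2
        w_b = 1.0 / mad_noise(oiii_b_lf) ** 2
        return (w_g * oiii_g + w_b * oiii_b_lf) / (w_g + w_b)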

 

22) If, after calibration, you chose to register the light frames for each filter to their own per-filter reference, then the resulting MasterLight frames must now be registered to a single reference MasterLight frame. If you instead registered all your light frames to a single reference, then all your MasterLight frames are already registered to each other and this step is unnecessary. To register the MasterLight frames to each other, use either the LUM or the Ha as the reference image for LRGB or NB imaging, respectively. I use the default settings for StarAlignment, but I do check the distortion option. This produces a set of registered MasterLight frames by filter. I found that by allowing the distortion correction, you can reduce the amount of what is called "lateral chromatic aberration" that can turn up in some images due to limits in PixInsight's StarAlignment process. This is the effect where, even after registration, stars in a combined RGB channel image (with either RGB or NB data), particularly towards the edges of the frame, can show uneven colors across their disks: blue, green and red can appear brighter around certain edges due to slight, normally uncorrected shifts in registration between the channels. This effect can be mitigated by checking "Distortion correction" and using the "2-D Surface Splines" registration model. Since there are few frames, I use the Auto setting for the interpolation model. I save the registered MasterLight frames to a separate folder.

 

23) If you are using a premium mount, you may not see much in the way of dark or distorted edges on your MasterLight frames, but for my mount, my subframes tend to have edges that need to be cropped. To clean up the edge artifacts caused by the registration process and dithering, I set up the DynamicCrop process using the MasterLight frame with the most restricted FOV as the template and make a process icon for DynamicCrop. From this process icon, I can then apply DynamicCrop to all MasterLight frames so that they are the same size and FOV. I try to err on the side of preserving FOV rather than cropping excessively, but that is my preference. I save this process icon so that the exact same crop can be applied to all the MasterLight frames with identical results. The cropping is necessary to clean up the edges of the images in order for the next step in the flow to be successful.

 

24) To flatten the background and remove gradients in the MasterLight frames, I run DynamicBackgroundExtraction on the registered and cropped MasterLight frames repeatedly until the images are clean and free of gradients or amp glow; a small sketch of the underlying fit-and-divide idea appears after the list below. This was the only way I was able to remove the residual artifacts from the leakage of light around my 1st-generation ZWO filters that leads to false color in the image corners. Even with the Astrodon filters, which are free of that issue, there may still be residual amp glow from the camera, especially with the SII and Ha filters. This issue is improved by not using the dark frame optimization option in ImageCalibration, but even that is not perfect and there will be residual amp glow. This is perhaps the most important step in the post-processing work flow.

  • I start to set up the DBE by focusing on the MasterLight frame that has the greatest amount of nebulosity or DSO. This is typically the Luminance or Ha filter.
  • For sample generation, I start with a tolerance of 2.0 and a sample radius of 20 with 12 samples per row, and see how many samples are generated and how many are active. For images that are dominated by nebulosity and have very little background, I will just put in sample points manually, since most of the automatically generated samples would need to be removed anyway. For galaxy images with lots of background sky, I will usually use the automatically generated samples since it is easier. I may need to raise the tolerance to as high as 3.0 or 4.0 to get them all to generate, but typically a tolerance of 2.0 should generate all the samples from these settings across the image. Tolerance just controls how stringently the algorithm rejects pixels to avoid stars that might be in the samples. If you are careful to avoid stars, a high tolerance will not have any negative effects.
  • I use the inspection tab to verify that the sample points are free of stars and move them as needed to avoid stars. When using the automatic sample point generation, I also delete any sample points in areas of nebulosity or the DSO that were generated by the algorithm.
  • After inspecting all sample points for stars and deleting any sample points in areas of nebulosity/DSO, I go back and manually add sample points in areas of high gradient, such as the amp glow areas, or in areas of background sky that were missed in the automatic sample generation. It may be necessary to increase the tolerance setting during this process, since the gradients lead to a high standard deviation of intensity values within a sample point. If you have avoided all stars in your samples, you can set the tolerance rather high with no issues.
  • For Ha, it is sometimes hard to differentiate between amp glow and real nebulosity, which makes dark frame calibration without scaling very helpful.
  • The number of sample points does not need to be large, but they need to be placed in areas of actual background, and it is better if samples are placed in each quadrant of the image if possible. Basically, if there is an area of sky background, it should have some samples placed in it.
  • I set the Target Image Correction to division and not subtraction for removing gradients and flat fielding artifacts.
  • I then create a process icon for this baseline DBE process settings, give it a descriptive label and close the DBE process. This process icon will allow the sample points from the Ha or LUM to be automatically created in the other MasterLight frames directly.
  • For each MasterLight frame, I start from the resulting samples of the DBE process icon by making the target image active (clicking on it) and then double-clicking on the DBE process icon. I do not just drag and drop the process icon onto the target MasterLight frame, since the DBE settings for each image should be customized, with the process icon settings as only the starting point.
  • I first see how many of the sample points are rejected by DBE based on sample weighting and tolerance. I may need to add further sample points specific to each filter. Typically, OIII will have more areas of background and less nebulosity, so more samples can be added.
  • If all points are accepted, I lower the tolerance setting until sample points begin to be rejected and find the minimum value at which all points are accepted. On the other hand, if, after just starting the DBE process, many sample points are rejected due to high gradient, I increase the tolerance until all sample points are accepted. This usually happens in the areas of narrowband MasterLight frames where there is residual amp glow. These areas of the image may require the tolerance to be increased to as high as 4-5, and I have found this acceptable for the first pass of DBE.
  • In these high gradient areas, I may need to add many more sample points or these areas will not be flattened adequately.
  • At this point, I run the DBE process icon on the MasterLight frame for the first pass of background extraction. For each MasterLight frame, I repeat the above tuning of the DBE process settings from the DBE icon. It may be necessary to apply DBE two, three or more times until the background is flat. The better the job you do with image calibration with dark frames, flat frames and LocalNormalization in the previous steps, the less needs to be cleaned up at this point. With each successive pass of DBE you should see lower and lower tolerance values working with all sample points active. As you clean up the background, you may be able to add sample points from one iteration to the next as areas of background become clearer and areas of residual gradient or amp glow need to be addressed.
  • The better the DynamicBackgroundExtraction step is done, the fewer the artifacts that will need to be removed at later steps in processing, where they tend to be more difficult and less effective to address.
  • However, since all the MasterLight frames have been registered and identically cropped, the baseline DBE process icon can also be used later on RGB or LRGB color composite images to further deal with gradients if necessary. Residual amp glow is the main issue I have seen persisting into later processing, so getting rid of it early, ideally with image calibration rather than DBE, but with DBE if necessary, is the best solution rather than trying to fix it on non-linear images after further processing. Otherwise, you may be forced to further crop your final image if you do not want the amp glow in it.
  • After the DBE processing is completed, the now registered, cropped and flattened MasterLight frames are saved to a new folder.
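As a rough illustration of what DBE is doing with those sample points, here is a numpy sketch that fits a smooth surface to the medians of background samples and divides it out. DBE proper uses surface splines with per-sample tolerance and rejection rather than the low-order polynomial used here, and the sample format and function names are my own.

    import numpy as np

    def fit_background(img, samples, order=2):
        # samples: list of (x, y, radius) points placed on true background sky
        h, w = img.shape
        xs, ys, zs = [], [], []
        for x, y, r in samples:
            patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
            xs.append(x / w); ys.append(y / h); zs.append(np.median(patch))
        xs, ys, zs = map(np.asarray, (xs, ys, zs))
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1) if i + j <= order]
        A = np.column_stack([xs ** i * ys ** j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, zs, rcond=None)       # least-squares surface fit
        yy, xx = np.mgrid[0:h, 0:w]
        xx, yy = xx / w, yy / h
        return sum(c * xx ** i * yy ** j for c, (i, j) in zip(coef, terms))

    # "division" correction, matching the Target Image Correction choice above:
    # flattened = img / np.maximum(fit_background(img, samples), 1e-6)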

25) After completing background and gradient removal with DBE, I previously used the LinearFit process to normalize the background brightness between filters. I was not entirely sold on this process and had changed my processing flow to the median-adjustment method described in step 26 below. Adam Block has a whole set of five tutorial videos on this process alone that I just finished watching, and now that I understand more, I will likely go back to it, but not for the use that I previously thought was critical. The idea is that when we have calibrated and stacked images that are meant to be color-combined, we have to consider how well the histograms and background brightness really match up to one another. Generally, due to the varying conditions of the night sky throughout a night of imaging, or across several nights of imaging, as well as the filter being used, the average brightness of the background and signal may not match up well between the images we need to color-combine. This is above and beyond the DynamicBackgroundExtraction process, though that step should definitely have helped to match the histogram peaks through subtraction of background gradients in each monochrome image. For NB masters, this process is just fine, but from an understanding of the PhotometricColorCalibration process, it really does not make sense to use it for R, G and B masters. The later color calibration processes, PhotometricColorCalibration or ColorCalibration with BackgroundNeutralization, will correct for this anyway, so it is debatable whether it is even necessary to match up the average background and signal brightness between the images you are going to color-combine and color calibrate later. Based on this, I will probably not use LinearFit on RGB data but will continue to use it for narrowband data. Regardless, LinearFit is an easy process to use, and a small numeric sketch of the idea follows. You choose the brightest of the MasterLight frames by comparing their histograms using the HistogramTransformation process. The brightest should be the Luminance or the Ha MasterLight frame, but not necessarily, so it is good to check each image. This frame is chosen as the reference image in the dialog and then the process is applied to the other MasterLight frames.
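For reference, the idea behind LinearFit is simply a least-squares straight-line mapping of one frame onto the reference. The sketch below is not PixInsight's implementation (the real process iterates with rejection limits); it just shows the gist, with my own function name.

    import numpy as np

    def linear_fit_to_reference(target, reference):
        # find a, b so that a + b*target best matches the reference, then apply it
        b, a = np.polyfit(target.ravel(), reference.ravel(), 1)
        return a + b * target

    # fit the dimmer masters to the brightest one, e.g.:
    # red_fit  = linear_fit_to_reference(red,  lum)
    # blue_fit = linear_fit_to_reference(blue, lum)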

 

26) There is an alternative method for normalizing the background levels between the R, G and B channels, or between the Ha, OIII and SII data, if you desire to do so. This method came from another tip from Jon Rista, and much of this step is from his explanation. Instead of doing a linear fit, which will often bloat stars (particularly in one channel), you can also try a linear alignment with PixelMath in PI: K: $T + (median(<refImage>) - median($T)), which I call median-adjustment normalization; a one-line numpy rendering of it follows below. Pick a reference image, say the LUM, green or Ha channel, and apply the above PixelMath to the blue and red channels. When you combine them later, you should find the color is better right off the bat compared to not balancing the channels; without this balancing, you may find your color images have a very strong color cast, such as reddish or blueish. From there, you can further calibrate the color as you need, using BN/CC or PCC (photometric calibration). You should like how the color looks just with the channel alignment, though, maybe with a bit of SCNR or perhaps just a BN to remove any remnant color cast. Which image you choose as the reference depends on the distribution of the signals. You can clip at either end, and depending on the data you may clip at both with a fit or an alignment regardless; it is a matter of deciding what you need to preserve most. If you want star color, then finding the darkest frame and "pushing down" might be best. On the other hand, if you want to preserve faint details, the opposite may be better: find the brightest frame and lift the rest up. Or find the middle frame and go for the happy medium. One issue I had with this method is for NB images: you really need to adjust upward to the brightest image, since you will likely clip the Ha severely if you try to adjust downward.
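The PixelMath expression above translates almost one-for-one into numpy, if it helps to see that it is just an additive offset (the function name is mine):

    import numpy as np

    def median_align(target, reference):
        # K: $T + (median(<refImage>) - median($T))
        return target + (np.median(reference) - np.median(target))

    # e.g. align red and blue to the green channel:
    # red_aligned  = median_align(red,  green)
    # blue_aligned = median_align(blue, green)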

 

27) For OSC cameras, most of the previous steps are applicable even though a full-color RGB image is obtained as the output of the drizzle integration. You still need to do a DynamicCrop and DBE to remove ragged edges and background gradients, but the LinearFit/normalization step is not applicable.

MasterLight Frame Combination Processing

 

28) The registered, cropped, DBE-processed and LinearFit/median-adjusted (if desired) MasterLight frames have now completed processing prior to color combining for either narrowband or RGB images. For OSC cameras, the RGB image is already produced naturally by debayering the raw images as part of calibration. However, with the advent of dual-wavelength NB filters such as the STC or TRIAD, OSC cameras are now better able to take NB data, though still not nearly as efficiently as a mono camera. This type of narrowband data can be split into channels and combined with RGB data from the same sensor in a similar way to how Nb data is combined with LRGB data from a mono camera. Consequently, the following discussion is increasingly applicable even to OSC cameras.

 

29) One significant difference between CMOS and CCD cameras is that while CCD cameras have hardware binning, CMOS cameras currently only bin in software. With CCD cameras, it makes sense to take RGB color data at 2x2 binning versus 1x1 binning for LUM; for CMOS, there is no advantage to binning the RGB data. For CMOS cameras and for LRGB or NbLRGB images, the RGB and Nb data still supply all the color (chrominance) information for the final image. However, the RGB data also contains a significant amount of luminance (lightness) data that can be extracted from the RGB composite image, processed separately and recombined with the RGB to contribute luminance to the final image. This statement is also applicable to OSC camera data with an extracted luminance. In either case, from this point on, the RGB data processing is focused on color balance, white balance and color saturation enhancements, which will eventually "paint" the final processed luminance data. The luminance (or lightness) processing will focus on enhancing the sharpness, contrast and detail of the image. Narrowband (Nb) data straddles both areas since it can be used to enhance both the chrominance and luminance components of the image. The (Nb)RGB and (Nb)LUM are processed separately from this point onward until they are combined to form the LRGB image later.

 

30) For RGB images, I use the LRGBCombination process to create the composite color image from the individual monochrome MasterLight frames for the RED, GREEN and BLUE filters. I previously believed that this step was really basic and foolproof; this could not be farther from the truth. First, I open the RED, GREEN and BLUE MasterLight frames from the above flow. I then assign them appropriately to the channels in the LRGBCombination process window with the L channel unchecked, default lightness, typically default saturation and equal channel weights. I don't do chrominance noise reduction at this point. When complete, I do an STF stretch of the output image and look at the color balance. It is easy to get too much total lightness and a washed-out image. If the image is not showing basically the right color balance, then I use trial and error to find the best lightness and saturation settings. I usually decrease the saturation slider – which actually increases the color saturation – and perhaps, but not always, increase the lightness slider – which actually reduces the luminance component of the image. I start with lightness = 0.5 (default) and saturation at 0.35 (increased) and will try several settings to see which gives the best result. Once I am satisfied, I rename the identifier and save the image as generated, in the linear state, as the starting point for further processing. For RGB images, an autostretch of the LRGBCombination output with channels unlinked should show reasonably good color at this point, thanks to the optimization of the lightness and saturation settings. If the color is not reasonably close to what you expect or want, continue to optimize at this step. It is much better to get close to the right color saturation/lightness balance at this early stage of the process than to have to try to recover the color balance later in the flow, which may be next to impossible.

 

31) For narrowband images, I use either the LRGBCombination or PixelMath processes to create the composite color image from the individual monochrome MasterLight frames for the Ha, OIII and SII filters. I open the Ha, OIII and SII MasterLight frames from the above flow. I then assign them appropriately to the channels in the LRGBCombination process window with the L channel unchecked, default lightness, default saturation and equal channel weights. As with RGB images, I do an STF stretch with channels unlinked of the output image and look at the color/lightness balance. It is easy to get too much lightness and a washed-out image. If the image is not showing basically the right color-to-lightness balance, then it will take trial and error to find the best lightness and saturation settings. These false-color images will look very off in color, with SHO looking very green and Ha-as-red palettes looking very red, so concentrate on the lightness-to-color balance instead. I usually decrease the saturation slider – which actually increases the color saturation – and perhaps, but not always, increase the lightness slider – which actually reduces the luminance component of the image. I start with lightness = 0.5 (default) and saturation at 0.35 (increased) and will try several settings to see which gives the best result. These adjustments are usually smaller than those used for RGB, and often lightness = 0.5 and saturation = 0.5 are fine. However, for very strong Ha emission targets, the total lightness may be too strong and less lightness/more saturation may look better. With narrowband images, the channel assignments for the filters can vary; for the Hubble palette these assignments are SII for R, Ha for G and OIII for B. For more complex channel assignments, I typically use PixelMath to enter the combination expression for each channel. For example, for a false-color, more-RGB-like image, I would assign R = (0.5*SII) + (0.5*Ha), G = (0.4*Ha) + (0.6*OIII) and B = (0.15*Ha) + (0.85*OIII); a small numeric sketch of this channel mixing follows below. For PixelMath, the output frame type must be specified as RGB Color in the Color space drop-down list and the Create new image box must be checked. I then rename the identifier and save the image as generated, in the linear state, as the starting point for further processing. For narrowband images, an autostretch with the channels linked of the LRGBCombination or PixelMath output image will likely show very unnatural color at this point; with the channels unlinked, the color should be better but still off. The bulk of the initial non-linear-state processing work for narrowband images is focused on remapping the false color to a more artistically pleasing palette. The balance between luminance and chrominance is just as important for narrowband false-color images as for RGB, since an imbalance of tones is very hard to correct later in the processing.
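The channel assignments above are just weighted sums, so they can be written out directly. The sketch below mirrors the PixelMath expressions from this step (in PixInsight you would enter them in the R, G and B expression fields with Create new image checked and the color space set to RGB Color); the numpy form and function names are mine.

    import numpy as np

    def sho_palette(ha, oiii, sii):
        # Hubble palette: SII -> R, Ha -> G, OIII -> B
        return np.stack([sii, ha, oiii], axis=-1)

    def blended_palette(ha, oiii, sii):
        # the "more RGB-like" false-color mix from the text
        r = 0.5 * sii + 0.5 * ha
        g = 0.4 * ha + 0.6 * oiii
        b = 0.15 * ha + 0.85 * oiii
        return np.stack([r, g, b], axis=-1)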

Color Correction for Color Images

 

32) Depending on the image, I may choose at this point to apply additional runs of DBE, using the same process icon generated and described in step 24, to the RGB or NB-composite color images if there is residual gradient or amp glow still present. In addition, the color RGB image will sometimes highlight residual gradients that are difficult to see in the individual-filter monochrome MasterLight frames. Ideally, with the correct MasterDark calibration and the individual MasterLight frame DBE processing, there should be minimal to no amp glow at this point.

 

33) For RGB images, after completing the additional DBE if required, I now work on the color correction of the image. There are two methods for doing this: the combination of the BackgroundNeutralization and ColorCalibration processes, or the PhotometricColorCalibration process by itself, since background neutralization is built into PhotometricColorCalibration and does not need to be run as a separate step.

 

34) For the combination of BackgroundNeutralization and ColorCalibration, I do BackgroundNeutralization first. I find as large a region of blank background sky, completely free of stars, as I can. It is not critical that it be huge, but I like to get a reasonably sized sample region. I create a preview of this area that I label <background>; this preview will be used a couple of times, so I rename it. Since the preview has no stars, I can leave the background upper limit at the default, as there should be no stars that need to be excluded. If it is difficult to find a star-free preview area in your image, find the dimmest star in the preview and set the upper limit to less than the brightness of that star. I then hit execute. For narrowband composite images, I will do a similar BackgroundNeutralization but not ColorCalibration or PhotometricColorCalibration, since NB images use an entirely different set of tonal remapping methods in the non-linear state to do color correction, and this will be described separately.

 

35) Once the sky background is neutralized, I run the ColorCalibration process. I use the same background preview that I used for BackgroundNeutralization, and use either the entire image as the white reference or, if there is a prominent galaxy, a preview around the galaxy as the white reference. If using the entire image as the white reference, the ColorCalibration process uses structure detection to isolate the stars in the image and assumes that the average star color is a good white reference. Running the ColorCalibration process should correct the overall color balance nicely. This step is not done for narrowband composite images since they use a false-color palette.

 

36) After watching the Adam Block tutorials, I now prefer to use the PhotometricColorCalibration process for the white balance. However, prior to running PhotometricColorCalibration, I run the ImageSolver script to plate-solve the image. For drizzled images, the pixel scale entered should be the capture pixel scale reduced by the same factor that you drizzled the integration – typically 2X. Be sure to save the image at this point to capture the RA and DEC of the image in the FITS header.

 

37) I will then run the PhotometricColorCalibration process. I have directly compared PhotometricColorCalibration to ColorCalibration and have found them to be mostly equivalent. For PhotometricColorCalibration, the image center RA and DEC should be read from the FITS header that was set by the ImageSolver script. For galaxies, I use either the average spiral galaxy or the elliptical galaxy white reference, depending on the target. For nebulae, I use either the average spiral galaxy or the G2V star white reference. I enable the Background Neutralization option and the Region of Interest option, click on the From Preview button, and choose the same background preview that would have been used for BackgroundNeutralization. All other settings are left at default. After executing this process, a graph of the fit of the image star colors to the online reference database will pop up; there should be a rather nice linear fit between the image and the database star colors if the process is successful. Again, this step is not done for narrowband composite images since they use a false-color palette.

 

38) The combination of optimized lightness and saturation settings in LRGBCombination, and BackgroundNeutralization/ColorCalibration or PhotometricColorCalibration (PCC), should have the color of the RGB image fairly close to accurate. At this point any narrowband filter data should be added to the RGB data using the NBRGBCombination script. I have tried many times to use PixelMath for this step, but the NBRGBCombination script just seems to work the best for me. Adding the data now, in the linear state, is one method, but it can also be added later in the non-linear state with the Blend script, which is more similar to the layer method used in Photoshop. When adding Ha data, it is very easy for the image to gain a far too reddish background with the blue star-forming regions drowned out. At my experience level with PixInsight, achieving the "right" color balance is still one of the biggest challenges, particularly when adding Nb data to the RGB. In this case, getting the color "right" will likely require trial and error and trying different manipulations of the Ha data. To prepare the narrowband data for mixing, I clip the black point of the Nb monochrome image using the HistogramTransformation process. This helps prevent the narrowband data from strongly influencing the overall color tone of the image. For the NBRGBCombination process itself, select the RGB target image and the narrowband monochrome images for the appropriate channels. I set the appropriate narrowband filter width (5nm for my Astrodon filters or 12nm for my Astronomik Ha), set the RGB bandwidth to 200nm and start by running with the default settings. The degree of addition is controlled by the scale setting for each channel; I find the default scale setting of 1.2 to be very strong and normally end up with a much lower value. This type of process works with many types of narrowband data. I will make combination color-enhancement MasterLight frames from the narrowband filter data, such as RED: (0.5*SII) + (0.5*Ha), GREEN: (0.4*Ha) + (0.6*OIII) and BLUE: (0.15*Ha) + (0.85*OIII), and add that color enhancement to the R, G and B channels with this script; mostly, though, I have added Ha data alone to the R channel. A heavily hedged sketch of the general blending idea follows below. If the image has too red a cast, either reduce the red channel scale amount or go back to the PCC step and choose a redder white reference.
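I have not reproduced the exact formula the NBRGBCombination script uses, so treat the sketch below only as the general shape of the operation: the narrowband image is background-subtracted so that only signal above the sky level is pushed into the matching broadband channel, scaled by a user factor (the real script additionally folds the filter bandwidths into the weighting). All names here are mine.

    import numpy as np

    def add_nb_to_channel(channel, nb, scale=0.6):
        # keep only narrowband signal above its own background, then blend it in
        nb_excess = np.clip(nb - np.median(nb), 0.0, None)
        return np.clip(channel + scale * nb_excess, 0.0, 1.0)

    # e.g. boost the red channel with black-clipped Ha data:
    # r_enhanced = add_nb_to_channel(r, ha, scale=0.4)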

 

39) Even after black-clipping the Nb data, you may still have an undesired hue shift in the final image after adding in the narrowband data; I find this to be a recurring issue when adding strong Ha data to RGB images. The method described above using PCC gives the most natural color balance and white point for the RGB data as the starting point. However, you will be adding a significant amount of red to the image with the Ha data, and this leads to an overall reddish image. I have found two ways of addressing this problem. One is to use CurvesTransformation in the non-linear state to remove red and add blue, in a similar way to the tonal mapping for true narrowband SHO images discussed later on. This can work well, and you can use various masking techniques to target specific parts of the image. The second method is applied while the image is still in the linear state: you simply choose a different white reference than Average Spiral Galaxy in the PCC process prior to adding the Ha data. If you choose a K or M star as the white reference, you will shift the hue of the initial RGB image toward the blue, so when you then add in the Ha data with NBRGBCombination, you will have already compensated for the additional red channel data. Both of these methods are trial and error and purely artistic in nature; as with much of astrophotography, there is no "right" or "wrong" answer here.

 

40) At this point, the RGB or NBRGB image should have a good color balance with correct star color and reasonable saturation; the image should look mostly as you would expect for the target. NB false-color images will look very saturated in whichever channel you used for the Ha data. This is expected, since the color palette will require a significant amount of manipulation in the non-linear state at a later point in the process.

Extracted Luminance, Final Linear Processing, and Non-Linear Stretching for RGB Data

 

41) After completing the color balance correction of the NbRGB, I extract the lightness from the NbRGB in the linear state. Prior to using the ChannelExtraction process, I first run the RGBWorkingSpace process and set all color channels to equal weight so that all channels contribute equally to the extracted luminance. For the ChannelExtraction process, I set the color space to CIE L*a*b*, uncheck the a and b channels and apply the process to the RGB image. For CMOS cameras without binning, the resulting image contains a significant fraction of the lightness data from the image, and this can be added to the actual LUM or NbLUM data later in the process to further improve the signal-to-noise ratio and to add sharpness and detail. For NB, I also set the channels to equal weight with the RGBWorkingSpace process and do the lightness extraction for any false-color image, either to add back after color manipulation or to use as a mask for further processing such as noise reduction and deconvolution.

 

42) For RGB/NBRGB images, I will occasionally apply either a CurvesTransformation to increase the overall color saturation or the ColorSaturation process to selectively increase the saturation of a specific color prior to stretching the image to non-linear. It may be helpful to generate and apply a star mask to the image to help preserve the star color balance while you are trying to enhance the color of a DSO. For the StarMask, I use Small-scale = 2, Large-scale = 2 and Compensation = 3. I use the readout mode to see what the average background intensity of the image is and set the noise threshold to be just above this level. I then do a HistogramTransformation of the StarMask to clip the black point, remove any openings in the background and isolate the stars in the mask. I try not to overdo this saturation change in the linear state and will look at the STF of the linear image carefully. Since the RGB/NbRGB is mainly for the chrominance information, some degree of oversaturation is often desirable for combination with the luminance data in the LRGB image. I have also used the DCONprotectionMask that I detail later in the flow for color saturation when I only want to touch the DSO and strongly protect the stars and background.

 

43) Prior to stretching the RGB/NBRGB image, I do an initial noise reduction while still in the linear state. For true narrowband images that will go into a tonal mapping process such as the Hubble palette, which I will document later, I do not do any noise reduction in the linear state; the tonal mapping process will itself introduce a fair amount of chromatic noise into the image, so it pays to wait until after that process is complete. For RGB images, you use a protection mask, which is typically the simple lightness mask. Typically, after using the RGBWorkingSpace process to set all color channels to equal weight so that all channels contribute equally to the extracted luminance, I extract the lightness component from the (Nb)RGB image, convert it to non-linear using a HistogramTransformation with the default STF screen stretch settings, and invert the image for the mask. This will protect the bright parts of the image and expose the background effectively. This noise reduction completes the linear portion of the processing for the (Nb)RGB image. For the noise reduction itself, I use MultiscaleLinearTransform; a small sketch of the multiscale thresholding idea follows the settings list. The settings I use are:
          a. Algorithm: Starlet Transformation
          b. Layers: 4
          c. Noise Reduction: Enabled
                   i. Layer 1: Threshold: 3.0; Amount: 0.5; Iterations: 3
                  ii. Layer 2: Threshold: 2.0; Amount: 0.5; Iterations: 2
                 iii. Layer 3: Threshold: 1.0; Amount: 0.5; Iterations: 2
                 iv. Layer 4: Threshold: 0.5; Amount: 0.5; Iterations: 1
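For the curious, the sketch below shows the idea behind those settings: an à trous B3-spline (starlet) decomposition splits the image into detail layers of increasing scale, the noise in each layer is estimated, coefficients below the layer threshold (in sigma units) are attenuated by the Amount, and the layers are summed back. MultiscaleLinearTransform is considerably more refined (iterations, linear masks, better noise estimation); this is only a toy version with my own function names.

    import numpy as np
    from scipy.ndimage import convolve

    def _b3_kernel(step):
        k1 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
        k = np.zeros((len(k1) - 1) * step + 1)
        k[::step] = k1                              # "a trous": insert holes between taps
        return np.outer(k, k)

    def starlet_layers(img, n_layers=4):
        layers, smooth = [], img.astype(float)
        for j in range(n_layers):
            smoother = convolve(smooth, _b3_kernel(2 ** j), mode='nearest')
            layers.append(smooth - smoother)        # detail at scale 2^j
            smooth = smoother
        return layers, smooth                       # detail layers + residual (R) layer

    def multiscale_denoise(img, thresholds=(3.0, 2.0, 1.0, 0.5), amount=0.5):
        layers, residual = starlet_layers(img, n_layers=len(thresholds))
        out = residual.copy()
        for d, k in zip(layers, thresholds):
            sigma = 1.4826 * np.median(np.abs(d - np.median(d)))   # per-layer noise estimate
            d = d.copy()
            d[np.abs(d) < k * sigma] *= (1.0 - amount)             # attenuate noisy coefficients
            out += d
        return out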

 

44) I now stretch the RGB/NBRGB image to non-linear. I believe that this is one area that really takes practice to improve your technique, since it is easy to get off track at this point in processing. It is very important to understand that you are not necessarily trying to get an image that looks perfect at this stage, but are striving for an image that will paint the luminance data most effectively. This is often oversaturated and does not require high resolution or sharpness, since those characteristics will come from the luminance data. Initially, I just used a HistogramTransformation with the default STF settings. I then migrated to the MaskedStretch process for this step, which I felt does a better job of preserving the color balance in the image. For the MaskedStretch, I use the same preview for the background as used for the BackgroundNeutralization and set the background target to 0.2, a bit lower than default. I have now switched to mostly using ArcsinhStretch, either alone or in combination with other stretching methods. For the ArcsinhStretch process, you set the stretch factor using a real-time preview; I set the stretch factor low and apply the stretch iteratively instead of using a single, more aggressive stretch. I find that ArcsinhStretch is extremely good at preserving the color of the image and not washing the color out (a minimal sketch of the underlying function follows below). This gives you more flexibility for tweaking the color rather than having to work to recover the color saturation in the image. For both methods, I tend to under-stretch the image to allow me more room to play with a second, less aggressive stretch using HistogramTransformation. With the HistogramTransformation, I reset the black point to just before it starts to clip and move the midtones down until the background just starts to become visible. Once I am satisfied with the non-linear stretch, this completes this stage of processing for the RGB or NBRGB image.
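The reason an arcsinh-style stretch holds color so well is that it applies one gain per pixel, derived from a luminance estimate, to all three channels, so the R:G:B ratios are preserved. Below is a minimal numpy sketch of that idea; PixInsight's implementation handles the black point and luminance weighting more carefully, and the function names are mine.

    import numpy as np

    def arcsinh_stretch_mono(x, stretch=50.0):
        # y = asinh(s*x) / asinh(s): strong lift of faint signal, gentle on highlights
        return np.arcsinh(stretch * x) / np.arcsinh(stretch)

    def arcsinh_stretch_rgb(rgb, stretch=50.0):
        # same per-pixel gain on R, G and B so hue and saturation are preserved
        lum = rgb.mean(axis=-1)
        gain = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * np.maximum(lum, 1e-9))
        return np.clip(rgb * gain[..., None], 0.0, 1.0)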


Linear Processing of Luminance Data

 

45) For the LUM, the first step is to add any Nb data to enhance the contrast. Since the LUM MasterLight frame is monochrome and the NBRGBCombination script requires an RGB file as input, we first need to convert the LUM MasterLight frame from grayscale to RGB using the ConvertToRGBColor process, which assigns the LUM data equally to the R, G and B channels. I then add the narrowband data that I wish to add to this RGB LUM with the NBRGBCombination script. The resulting image is in RGB format, so we convert the new NbLUM image back to grayscale using the ConvertToGrayscale process.

 

46) For narrowband data, the extracted lightness from the combined NbRGB is used in the same manner as the LUM or NbLUM.

 

47) The next step I credit to John Hayes from Cloudy Nights. Here we also add the extracted lightness image from the RGB, RGB_L, to the NbLUM image to create a SuperLuminance (SLUM) image containing all the lightness information that was collected. This does not apply to narrowband data, since the extracted LUM already has all the lightness data. Depending on the number of subframes collected, these two images will have different signal-to-noise levels and different background intensities. I first equalize the background levels by applying a median adjustment with PixelMath in PI: K: $T + (median(<NBLum>) - median($T)) to the RGB_L, creating an RGB_L_LF image. I then create a clone of both the NbLUM and the RGB_L_LF so that I have a total of four images: NbLUM, NbLUM_Clone, RGB_L_LF and RGB_L_LF_clone. We want to average the NbLUM and the RGB_L_LF using noise weighting to adjust for the differences in signal-to-noise with the ImageIntegration process, but that process will only do noise-evaluation weighting with a minimum of four frames; the clone frames solve this problem. In the ImageIntegration process we do not want to do any pixel rejection, since that was already done when the MasterLight frames that form the input files were generated. I set the Normalization to additive with scale; this is probably unnecessary since I normalized the background levels with PixelMath, but I have not confirmed that it can be set to none. For this integration, I use these settings in PixInsight (Differences from Default):
              a. Image Integration
                      i. Combination: Average
                     ii. Normalization: Additive with scale
                    iii. Weights: Noise Evaluation
                    iv. Scale Estimator: (MAD)
                     v. Generate Drizzle Data: Unchecked
              b. Pixel Rejection (1)
                      i. Rejection Algorithm: None
                     ii. Normalization: None
               c. Pixel Rejection (2)
                     i. High and low Sigma = N/A
               d. Large Scale Pixel Rejection
                     i. Reject low large-scale structures: Unchecked
                    ii. Reject High large-scale structures: Unchecked

 

48) The next step is to run Deconvolution on the lightness data in whichever form is final: LUM, NbLUM or SLUM. For NbLRGB, it is the SLUM generated in the previous step. For Nb images, it would be the extracted lightness from the false-color NbRGB image, and deconvolution is applied to that data. I have found deconvolution to be the trickiest and most time-consuming part of the post-processing. It requires many preparation steps before you can even start testing the Deconvolution process, and many rounds of trial and error on preview regions to get the settings optimal. It is very easy to overdo this step and create ugly artifacts in the image; with this step, it is always a case of less is more. Deconvolution does not do magic and you should not expect jaw-dropping results; the effect should be somewhat subtle yet noticeable. Even so, the sharpening effect of Deconvolution with the image still in the linear state is quite nice if done right and will trickle down through the further post-processing in the non-linear state. There can be further sharpening in the non-linear state with other processes such as MultiscaleLinearTransform or UnsharpMask, so it is neither necessary nor wise to be too aggressive with Deconvolution.

 

49) There are four preparation steps needed for the Deconvolution process. The first, second and fourth steps are rather straightforward; I find the third step tricky to get the best results:

  • Generate a PSF (point spread function) for the stars in the image
  • Create a StarMask to provide protection for the stars from ringing artifacts
  • Create a lightness mask, a RangeMask, or a DCONprotectionMask (a RangeMask/StarMask combo mask) to isolate the bright features of the image that are targeted for sharpening, and to further protect the background from sharpening and the stars from ringing
  • Create a set of preview areas in the image (star regions, bright nebula or galaxy regions, and fainter areas) to test the settings of the deconvolution process and the effectiveness of the masking protections for the stars and background regions

 

50) The point spread function (PSF) for the image is generated with the DynamicPSF process. After opening the process, I enlarge the image enough to clearly see individual stars. I then systematically click on stars across as much of the image as possible, choosing stars that are clear and not sitting on heavy nebulosity. Make sure that the stars are not saturated, very small, or so close to adjacent stars that the selection box cannot isolate the selected star. As you click on stars, look at the statistics generated in the table. The selected stars should all show Moffat4 as the modeling function; if a selected star shows a different modeling function, it should be excluded. Depending on the image, this may not always be possible and other modeling functions may be used. The amplitude should be between 0.1 and 0.5. If I click on a star that I don't like, I simply hit delete to remove it from the list. Each selected star shows up in a list on the DynamicPSF screen. I try to get as many stars as I can for a good statistical sample across the entire image. If the image has distortion of the stars at the edge of the field, those stars should not be included. Once I have sampled the whole image, the list is sorted and reduced using the generated statistics for A (amplitude), r (aspect ratio) and MAD (mean absolute difference). For MAD, the lower the number the better, since this is a measure of how well the algorithm was able to fit the star; exclude stars with MAD > 5e-3. For A and r, I clip the distribution on both the high and low sides, in the order A and then r. I will go from maybe 100-120 stars down to about 20-30. Once the list is clipped, the PSF image is generated by clicking on the icon that looks like a camera. I then rename and save the PSF image.
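If you want to see what DynamicPSF is fitting when it reports A, r and MAD, here is a stripped-down scipy sketch that fits a circular Moffat profile to a single star cutout and reports a MAD-style residual. DynamicPSF fits elliptical models (Moffat4 fixes beta = 4) and then synthesizes the PSF from many such fits; the function names and the fixed starting guesses here are mine.

    import numpy as np
    from scipy.optimize import curve_fit

    def moffat2d(coords, amp, x0, y0, alpha, beta, bkg):
        x, y = coords
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return (bkg + amp * (1.0 + r2 / alpha ** 2) ** (-beta)).ravel()

    def fit_star(cutout):
        # cutout: small 2-D array centred on an isolated, unsaturated star
        ny, nx = cutout.shape
        y, x = np.mgrid[0:ny, 0:nx]
        p0 = (cutout.max() - np.median(cutout), nx / 2, ny / 2, 2.0, 4.0, np.median(cutout))
        popt, _ = curve_fit(moffat2d, (x, y), cutout.ravel(), p0=p0)
        model = moffat2d((x, y), *popt).reshape(cutout.shape)
        mad = np.mean(np.abs(model - cutout))     # roughly the MAD figure reported per star
        return popt, mad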

 

51) Now we create the StarMask to provide ringing protection for the largest stars in the image. The purpose of this mask is not as a mask per se, but to provide a map for the local deringing protection during the deconvolution process. The mask works together with the global deringing settings to prevent dark ring artifacts from being generated around the stars in the image. This is mostly a problem around bright stars, so it is not necessary for the mask to protect every star in the image; the dimmer stars are inherently affected less by the regularized Richardson-Lucy algorithm of the deconvolution. This mask is generated directly from the LUM, NbLUM or SLUM while still in the linear state. The settings below should generate a StarMask capturing the brighter stars in the image and will both enlarge and blur the mask directly. I use the StarMask process with the following settings:

  • Set the scale to 6 or 7 in order to capture the largest stars. You may need to try both settings and see which captures the largest stars best.
  • look at the background intensity to determine a good value for the Noise threshold – it needs to be just a bit above the background maximum
  • Structure Growth: Large scale = 2, Small scale = 2, compensation = 3
  • Mask Generation: Smoothness = 16.
  • Check the Binarize and Aggregate options to increase the mask protection since the mask will be generated from a linear image.
  • Mask Preprocessing: set the Midtones slider to 0.16.

These settings should generate a good star mask with most of the bright stars, but it may miss the smaller stars. To capture more of the fainter stars, you need to lower the noise threshold, but not too low or you will see a lot of noise being captured as stars. Sample the background levels and the fainter stars with the cursor and set the noise threshold high enough so that it distinguishes between the fainter stars and the background. It is not necessary to capture the faintest stars since the lightness/Range masking protection will exclude these areas from the effects of the deconvolution anyway. 

 

52) A protection mask is necessary to isolate the bright features of the image that are targeted for sharpening, to protect the background, and to further protect the stars from sharpening and ringing from the deconvolution process. Creating the best protection mask is not that straightforward and is subject to trial and error. Furthermore, modification of the mask during the deconvolution process is part of the optimization and can address specific issues and artifacts. There are also several different approaches to choose from for generating this mask, with varying degrees of complexity.

  • The simplest to create is a basic lightness mask. Just clone the NbLUM/SLUM image and apply a HistogramTransformation stretch using the STF settings. This is one of the most useful masks for many purposes, particularly for noise reduction if you invert it. The next type of mask is derived by manipulating the as-generated lightness mask: the object mask. This is similar to, but generated differently than, the next type of mask: a RangeMask. Even more complex is an ObjectMask/StarMask or RangeMask/StarMask combination mask. Mask creation in general has not been that straightforward for me, and it seems to require a lot of trial and error to get the best results. I have tried all variants, but now typically use the object mask or RangeMask for most targets.
  • To create the object mask, start with the lightness mask. Then, using HistogramTransformation, strongly clip the black point so the sky background is completely dark and, at the same time, clip the white point to increase the contrast and lighten the bright areas of the image. This should generate a high-contrast mask with the background strongly protected and the bright portions of the image open to the effects of deconvolution, which is what you are striving for with the protection mask.
  • To create a RangeMask, you also start with the stretched lightness mask described in the first bullet above. The RangeMask process has sliders for the Lower limit and Upper limit of the brightness values to include. Creating the right RangeMask for deconvolution is not an exact science; you need to sample the image with the cursor in both the brighter areas and the background to get an idea of the levels to choose. The process also has Fuzziness and Smoothness sliders to feather the mask and prevent sharp transitions. You need to protect the background strongly to prevent sharpening of the noise, and to protect the dimmer stars that were not covered by the StarMask from ringing. You can also use the same contrast enhancement with HistogramTransformation as described for the object mask. In the end, both the RangeMask and the object mask should get to similar places using different techniques.
  • The object mask and RangeMask should be created to select only the brightest portions of the image and protect the background, but they often cannot, by themselves, exclude all the stars as well. This is particularly true for stars that are embedded in the bright areas of the image. In this sense, the protection of an object mask or RangeMask is a compromise for ease of generation and relies more heavily on the local deringing support of the StarMask. To improve the protection, an object mask or RangeMask numerically combined with a StarMask can be a better approach for difficult cases, since the StarMask used for local deringing is applied independently and the combined RangeMask/StarMask will isolate the star halos for further protection.
  • You create the RangeMask and StarMask the same way as above, but for a combo mask you do not want to feather the RangeMask very much; use a smoothness of ~2, since the combo mask will be blurred later on. The StarMask also needs to be modified by stretching it a bit with the HistogramTransformation process to bring out the fainter stars, but not so much as to start to bring up noise in the background.
  • Furthermore, the stars in this StarMask need to be dilated using the MorphologicalTransformation process so that they will cover more of the star halos when combined with the RangeMask. For the MorphologicalTransformation, we increase the size of the structuring element and make it more star-shaped. To do this, select 5 (25 elements) from Size and click the top, bottom, left and right three black squares to make a more star-shaped element. The Selection setting must also be adjusted: a setting of 0.50 does nothing to the star sizes, a setting lower than 0.50 decreases star sizes and a setting higher than 0.50 increases star sizes. I set Selection = 0.75 and increase Iterations to 3. Amount can be tweaked to something like 0.50 if you would like the end result to be a 50% blend between the original image and the modified image; 0.30 would blend 30% of the modified image with 70% of the original image. Since this is only a mask image, we keep Amount = 1.
  • To create the combo DCONprotectionMask from the object mask or RangeMask and the modified StarMask, I use PixelMath to combine them with RGB/K set to: RangeMask (or ObjectMask) – StarMask, with Create new image checked and Grayscale selected under Color space. Unlike the RangeMask or object mask, this combo mask should protect stars in the bright areas of the image; however, it is not clean at this point.
  • You then need to closely inspect the mask, in both the background and the bright nebulosity/galaxy, for stars where only the core has been removed and not the whole star with its surrounding halo. These stars need to be cleaned up using the CloneStamp to cut segments out of the combined mask image, producing a larger black mark where the pronounced stars are for better protection. There should not be many of these to fix manually. Be sure to click the Execute button on CloneStamp to apply the operation to your image before you close CloneStamp or all your work will disappear - DAMHIK.
  • We then blur the entire combined DCONprotectionMask with the ATrousWaveletTransform to smooth out any harsh transitions introduced in the previous steps before we can apply the mask in the Deconvolution process. For this process, leave Layers set to the default of 4 and disable the Detail Layer for all four numbered layers 1, 2, 3 and 4, but keep the R (residual) layer enabled, as it represents all the rest of the wavelet layers. Applying this process blurs the image by deleting the detail in the smaller-scale structures while keeping the largest-scale structures intact. I will typically apply the ATrousWaveletTransform process twice to the image.
  • This completes the DCONprotectionMask generation. This mask is not only critical for the Deconvolution process but is also useful for many other post-processing steps where we wish to attack only the nebulosity or galaxy and protect both the stars and background, whether for sharpening, color manipulation, color saturation or contrast enhancement. In some cases, however, such as globular clusters, even the basic lightness mask will work well enough as the deconvolution mask. It simply takes some trial and error to determine how sophisticated this protection mask needs to be. A small numeric sketch of the combine-and-blur idea follows this list.
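Numerically, the combo mask boils down to "grow the stars, subtract them from the range mask, then blur the result". The sketch below stands in a square dilation for the star-shaped MorphologicalTransformation element and a Gaussian blur for the ATrousWaveletTransform smoothing passes, so it is only the gist; the function names are mine.

    import numpy as np
    from scipy.ndimage import grey_dilation, gaussian_filter

    def dcon_protection_mask(range_mask, star_mask, dilate=5, blur_sigma=4.0):
        # grow the stars a little so their halos are covered too
        grown_stars = grey_dilation(star_mask, size=(dilate, dilate))
        # the "RangeMask - StarMask" PixelMath from the text
        combo = np.clip(range_mask - grown_stars, 0.0, 1.0)
        # soften harsh transitions before the mask is used for deconvolution
        return gaussian_filter(combo, sigma=blur_sigma)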

53) The fourth preparation step is to set up several preview boxes on the image. These previews should cover the bright areas that you want to sharpen, background areas that you want to leave alone, transition areas between them, and areas of stars. I may have five or six preview boxes created for this purpose. These previews are very useful: Deconvolution is processor- and memory-intensive and can take a long time to run on the full image. The settings for the best results can take many iterations to optimize, and you have to check multiple types of structures with those settings, since what looks great on the galaxy may totally trash the stars or background. This is why the protection masking and StarMask protection are so important. Running multiple settings on smaller previews takes much less time, so more optimization can be done before running the settings on the entire image.

 

54) With the star protection mask, point spread function, lightness selection mask and previews created, we can now start the deconvolution process itself. First, select External PSF and choose your generated PSF image. A bare-bones sketch of the underlying Richardson-Lucy iteration follows the settings list below. For settings, I use:
              a. Algorithm: Regularized Richardson-Lucy
              b. Target: Luminance (CIE-Y)
              c. Deringing: Enabled
                     i. Global Dark: 0.0150
                    ii. Global Light: 0.0015
                   iii. Local Deringing: Enabled
                   iv. Local Support: choose your StarMask
                    v. Local Amount: 0.7
              d. Wavelet Regularization: Enabled
                    i. Set Layers to 3
                             Layer 1: Threshold: 3.0; Amount: 0.7
                             Layer 2: Threshold: 3.0; Amount: 0.5
                             Layer 3: Threshold: 1.0; Amount: 0.35
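As promised above, here is a bare-bones sketch of the Richardson-Lucy iteration that sits underneath the Deconvolution process. It has none of PixInsight's wavelet regularization or deringing; the optional protection argument simply blends the result back through a 0..1 mask, standing in loosely for the protection mask, and all names are mine.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=20, protection=None):
        img = np.clip(image.astype(float), 1e-9, None)
        est = img.copy()
        psf = psf / psf.sum()
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            conv = fftconvolve(est, psf, mode='same')
            ratio = img / np.maximum(conv, 1e-9)          # how far the blurred estimate is off
            est *= fftconvolve(ratio, psf_flip, mode='same')
        if protection is not None:                        # 1 = fully deconvolved, 0 = original
            est = protection * est + (1.0 - protection) * image
        return est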

 

55) The tuning of deconvolution is one of the trickiest issues in processing, in my opinion. It takes a lot of practice and trial and error. Using the different preview regions, test-run the deconvolution starting with your area of highest interest, such as the main galaxy or structural areas of a nebula like elephant-trunk features.

  • You will first tweak Global Dark higher to remove any dark halos that are visible. You may find that a screen stretch on the preview box gives higher contrast on the dark halos, making it easier to see when you have raised Global Dark high enough.
  • As you do this, you will start to see stringy artifacts appearing in both the background and the bright areas. To remove these, very slowly increase Global Light to suppress these bright string-like effects. Make very small increments when increasing Global Light since the response is very sensitive to this setting.
  • Also start with a low number of iterations; I start with ~20. You may also have to try different protection masks to see which works best for both local deringing support and the background protection. I typically tweak Global Dark in small steps (~0.005) and Global Light in even smaller ones (~0.0005). Remember to work on Global Dark first.
  • Once you have settings for both Global Dark and Global Light that work for your area of highest interest, try the same settings on the other previews. You may find that what works best on the main feature makes the other areas look terrible. It takes a lot of back and forth – hence the previews. Once you have settings that work for all previews, raise the iterations to 30-40 and repeat the process through all your previews.
  • You may not even want to go this high on iterations; I have used as few as 20 on some targets. The key is to remember that deconvolution is not a fix-all and it is important to have a light hand with this tool. However, getting a good deconvolution is well worth the time (which can be considerable) since it can make a huge difference in the final image. Also remember that further sharpening can be done later in the processing with other tools. Once you have the final settings determined from your previews, execute the process on the full image and sit back and wait…
  • After the Deconvolution process has completed on your full image, inspect it carefully at 1:1 or 1:2 resolution to look for any artifacts, especially at bright stars in the brighter nebula regions. If there are artifacts, you may have to modify either your Global Dark or Global Light, but most likely your protection mask, so that the areas with artifacts are better protected from the effects of the algorithm. Since each iteration on the full image takes a long time, it is better to spend more time up front tuning the settings on preview regions.

56) Prior to stretching the deconvolved LUM or SLUM, I do an initial noise reduction while still in the linear state, in the same manner as for the RGB/NbRGB. For a protection mask, I clone the image, apply a HistogramTransformation using the default STF screen-stretch settings, and invert the result (a PixelMath shortcut for this mask is sketched after the settings list). This protects the bright parts of the image and exposes the background effectively. For this noise reduction, I use MultiscaleLinearTransform. The settings I use are:
               a. Algorithm: Starlet Transformation
               b. Layers: 4
               c. Noise Reduction: Enabled
                        i. Layer 1: Threshold: 3.0; Amount: 0.5; Iterations: 3
                       ii. Layer 2: Threshold: 2.0; Amount: 0.5; Iterations: 2
                      iii. Layer 3: Threshold: 1.0; Amount: 0.5; Iterations: 2
                      iv. Layer 4: Threshold: 0.5; Amount: 0.5; Iterations: 1
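The inverted screen-stretch mask above can also be built in a single PixelMath pass instead of the clone/stretch/invert sequence. This is only a minimal sketch: it assumes the STF shadows clipping is already near zero and uses a hypothetical midtones value of 0.25 read off your ScreenTransferFunction; substitute your own value and leave Rescale result unchecked.

               RGB/K expression:   1 - mtf(0.25, $T)

Here mtf() applies the same midtones transfer function used by the screen stretch, and subtracting the result from 1 inverts it, so the output can be used directly as the protection mask on the linear LUM/SLUM.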

 

57) With the LUM/SLUM noise reduced, it is ready to stretch to non-linear. It is important to understand that the stretched LUM/SLUM should not have areas brighter than ~0.8 or they will not be colored effectively by the chrominance data. Therefore, the image may not yet look like what you want the final image to look like. As with the RGB, I had previously preferred to use MaskedStretch for this rather than an STF-specified HistogramTransformation. For the MaskedStretch, I use a preview for the background and set the background target to 0.2, a bit lower than the default. More recently I have mostly used ArcsinhStretch for the LUM/SLUM; for this, you increase the stretch factor while watching a real-time preview. I find that ArcsinhStretch also helps preserve the dynamic range of the LUM/SLUM, especially for HDR-type targets like M16 or M42, and it gives more flexibility for tweaking the contrast of the LUM later in the process flow. With both methods, I tend to under-stretch the image, which leaves me room for a second stretch using HistogramTransformation. For that step, I reset the black point to where I am just starting to clip and move the midtones down until the background just starts to become visible, while making sure that the brightest intensities stay below ~0.8. Once I am satisfied with the non-linear stretch, this completes the basic processing of the LUM/SLUM image.
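For intuition about why ArcsinhStretch behaves this way, the basic curve can be written out explicitly. The following is only an illustrative PixelMath equivalent, not the real process: it assumes a hypothetical stretch factor k = 50, ignores the black-point and highlight-protection options, and applies the curve per channel (the actual process scales the channels by a common luminance-based factor to protect color). It uses asinh(x) = ln(x + sqrt(x^2 + 1)):

               Symbols:            k = 50
               RGB/K expression:   ln(k*$T + sqrt(k*k*$T*$T + 1)) / ln(k + sqrt(k*k + 1))

Because the stretch grows only logarithmically at the bright end, faint signal is lifted strongly while bright cores are compressed gently, which is why the dynamic range of HDR targets survives better than with an aggressive histogram stretch.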

 

58) With the LUM/NbLUM/SLUM and RGB/NbRGB images both stretched to non-linear, many imagers choose to combine them at this point to form the corresponding type of LRGB image. I do not form the LRGB image yet. I prefer to continue processing the LUM/NbLUM/SLUM and RGB/NbRGB separately: I apply all the contrast enhancement, HDR transforms, sharpening and other non-linear processing to the LUM/NbLUM/SLUM, since this image transmits the detail to our eye, and apply just color saturation enhancements to the RGB/NbRGB image, since this image carries the chrominance information.

 

Tonal Mapping for Narrowband Images – The Hubble Palette

 

59) For narrowband images such as SHO Hubble Palette images, significant color manipulation is first required to create the classic color schemes from the original color palette produced by the ChannelCombination process. These processes change the color palette from a predominantly greenish cast to the familiar gold/cyan of the Hubble palette and operate on the image in the non-linear state. To this point, I have done no noise reduction on the Nb image. The tonal mapping process will itself introduce an additional component of chromatic noise, so it is better to wait until it is complete before doing noise reduction. In addition, the LUM data extracted from the Nb image, when re-applied as a luminance layer, will also reduce the total noise in the image. A similar process of color manipulation would also be required for HOO, HSO and other Nb combinations; however, I am not yet experienced with tonal mapping for those palettes, so I will focus on Hubble palette SHO images. This process follows the method from Christopher Gomez, though I have modified it to some extent. It is well worth watching the YouTube video at this link:
https://www.youtube....h?v=RZZ4UlIQk0s.
This process is broken into six parts:

  • Removing the magenta cast to stars and restoring blue color to the stars
  • Removing the green cast and converting to a more gold color for the nebulosity
  • Converting the bright emission areas from greenish to a blueish/cyan cast
  • Removing any remaining greenish cast
  • Creating new Cyan and Yellow ColorMasks to further manipulate the color palette
  • Enhancing the saturation and restoring the overall brightness of the image.

 

60) It is assumed that an extracted LUM was obtained from the data in the linear state and that the image has been stretched to non-linear without any noise reduction, as discussed above. The process of changing the color palette is called tonal mapping. The first step is to create four color-specific masks using the ColorMask script applied to the final, stretched narrowband false-color RGB image. This script is now part of the latest version of PixInsight, whereas previously it had to be added separately; it is under the Utilities submenu for scripts. Select the Nb RGB image from the dropdown. The color masks to create are Magenta, Green, Yellow and Cyan. The settings to use are Chrominance mask enabled and Mask blur: layers to remove set to 3. The four masks allow you to attack the color of specific parts of the image independently. Look at each mask to see what portions of the image it isolates; they will often overlap, particularly the Green and Cyan ColorMasks. For some images, there is almost no data in the Yellow ColorMask and it can be omitted from the initial color masks. During all the following steps, I find it essential to use the real-time preview and to make the changes in stages. This process is not an exact science and takes a lot of trial and error as well as practice. I have also found that for the same data set I will get a different result each time I apply these methods, even when I am trying to recreate the same effects. There is no predetermined set of curves that will work for a given image, so the settings have to be tuned to each set of data.

 

61) I will describe two methods for eliminating the magenta stars in the image and changing them to a bluer cast. In the first method, apply the Magenta ColorMask to the image. Then open CurvesTransformation and select the red channel. Select the center point of the curve and drag down and to the right to create a negative curve. Using the real-time preview and focusing on the brightest magenta star, you will see that by reducing the red tone, the magenta cast begins to shift first to a whitish and then to a bluer tone. I don't try to make the complete change in a single curve adjustment; a less aggressive curve can be applied repeatedly until the stars gain a blue cast. When the color has changed to the bluer tone that you want, you will notice that the stars may appear dimmer and have lost saturation. With the Magenta ColorMask still applied, select the RGB channel, select the center point of the curve and drag up and to the left to create a positive curve. Then select the saturation channel and do the same. The amount of brightening and saturation adjustment required is not large, so be careful not to overdo these curves. At this point the magenta stars should appear blue and at a similar brightness and saturation as before the adjustment. I have found that this process can even restore a bluish color to what were initially fully saturated cores, since you are removing the predominant color from the saturated star. You should also inspect the whole image, since the Magenta ColorMask will be partially open in other areas of the image, which will also be affected by these curves adjustments. At this point, I change the identifier and save the image so that I can return to this point in the tonal mapping if needed.

 

62) The magenta stars can also be removed using a different and arguably simpler (cruder) method. With no mask applied, invert the image; magenta will now appear as green in the inverted image and should appear mostly in just the areas of magenta stars. Run the SCNR process for green removal on the entire inverted image and then invert the image back to normal color. This very effectively eliminates the magenta star color, but the stars will likely look rather unsaturated. Some find the invert-SCNR-invert method both faster and more effective for removing the magenta cast from the stars, but I find that it does not enhance the blue effectively. So even with this method, I use the Magenta ColorMask to isolate just the previously magenta stars so that I can increase the color saturation using the CurvesTransformation and ColorSaturation processes as described in the previous section. For the CurvesTransformation, I select the saturation curve, click on the midpoint of the curve and drag diagonally toward the upper left. This increases the saturation for all colors, but only in the areas not protected by the Magenta ColorMask. Alternatively, using the ColorSaturation process, start with the saturation curve flat for all colors except the blue region; for this region, just drag the curve upward to increase the saturation of blue only, again in the areas not protected by the Magenta ColorMask. With either of these processes applied under the Magenta mask, the bright stars should now look much bluer as desired. Using the Magenta ColorMask has the advantage of mostly attacking only the stars that should be blue and leaving the more reddish stars alone, which gives a more natural star color even in a false-color image. Getting real star color is a more involved process that requires taking RGB data of the same imaging target, extracting just the stars from that image, and replacing the stars in the false-color image. I have done this as well, but I will not describe it here. Either of the above methods is relatively simple and gives a good enough result that the star color does not detract from the final image, especially if star size reduction is used to shrink the stars – also discussed later.
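For those who like to see what the invert-SCNR-invert trick is doing numerically: assuming SCNR's average-neutral protection with amount 1 (G' = min(G, (R + B)/2)), running it on the inverted image and inverting back works out to lifting green up to the mean of red and blue wherever the pixel is magenta. A rough single-pass PixelMath sketch of that equivalence (just an illustration, with the single RGB/K expression option unchecked so each channel gets its own expression) is:

               R:   $T[0]
               G:   max($T[1], 0.5*($T[0] + $T[2]))
               B:   $T[2]

Because green is only ever raised, stars and nebulosity where green already exceeds the red/blue mean are untouched, which is why this method desaturates the magenta stars rather than recoloring them, and why the follow-up saturation work described above is still needed.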

 

63) The next step is to apply the Green ColorMask and, with CurvesTransformation, remove the green color and add red to shift the nebulosity toward the desired gold color. As with the Magenta ColorMask above, there are two approaches: 1) using the Green ColorMask directly with curves and saturation processes to remove the green and add red, or 2) using SCNR globally on the image and then curves and saturation with the Green ColorMask to add the red. I will describe both, but I have found that the first approach works more consistently.

 

64) In the first method, first remove the Magenta ColorMask and apply the Green ColorMask to the image. There are two steps I do prior to trying to change the tonal map of the image.

  • Observe specifically where this ColorMask is most open by looking at the gray scale image of the mask directly.
  • Open the CurvesTransformation and select the Green channel. Using the real-time preview, select the center point of the curve and drag it up and to the left aggressively to exaggerate the areas of the mask that are affected by the curve. Then reset the CurvesTransformation process without applying the change.

These two pre-steps will help identify which areas of the image to focus on in the tonal mapping. Using the real-time preview and focusing on the brightest areas identified above, select the center point of the green curve and drag down and to the right to create a negative curve to remove green. Also select the red channel and create a positive curve to add red. You should begin to see the color tone of the areas of nebulosity shifting from a strong green cast toward the desired reddish-gold cast. As the color changes toward the golden tone, you will likely notice in the real-time preview that the image has lost both brightness and contrast and appears dimmer, with low overall saturation in the areas affected by the Green ColorMask. To address this, also select the RGB channel: click the center point of the curve and drag up and to the left to create a positive curve, then select the midpoint between the black point and center point and drag slightly down and to the right to create an s-curve. This should address the brightness and contrast. Then select the saturation channel and apply a slight positive curve. The amount of brightening, contrast enhancement and saturation adjustment needed is not large, so be careful not to overdo these curves. As with the magenta-star tonal mapping, don't try to make the complete change in a single curve adjustment; less aggressive curves can be applied repeatedly until the nebulosity gains the golden cast. Any brightening, contrast enhancement or saturation should not be repeated, or only very conservatively – it is very easy to over-process or over-saturate the image at this point. As you progress, the overall tone of the image should lose its greenish cast and the nebulosity should take on a golden hue, with the goal of a similar brightness and saturation as before the adjustments. At this point, I change the identifier and save the image so that I can return to this point in the tonal mapping if needed.

 

65) The one complication to this process that I have seen in my own data occurs with objects where the OIII and Ha emission are both strong in the same area. Good examples are M16, the Eagle Nebula, and M17, the Omega Nebula. For these objects, I find that even with repeated iterations of this tonal mapping using the Green ColorMask, areas of the image still appear greenish, and pushing harder degrades and oversaturates the image until it becomes unrecoverable. When this is the case, it is important not to over-process the image with the Green ColorMask but to make a conservative adjustment and stop. I find that it then requires use of the Cyan mask in the later steps, iterating between the Green and Cyan masks, to fully remove the green and obtain the golden cast without over-processing the image.

 

66) For the second method, first remove the Magenta ColorMask and apply SCNR for green removal to the entire image. This will seem harsh and renders a rather bland, unsaturated image. However, to some extent the desired gold and cyan hues are already in the image; how far you can now selectively increase the saturation and add color tones to bring this palette out depends on the image and requires trial and error. Since the green has already been removed globally, we apply the Green ColorMask only to attack the regions where we want to enhance the saturation. Now, using CurvesTransformation, select the red channel, click on the center point of the red curve and drag diagonally to the upper left to increase the red in the nebula dust regions. Also select the saturation channel and do the same. You should start to see the desired gold color in the dust clouds. As with the Magenta ColorMask tonal mapping above, you will also need a gentle brightening curve or perhaps an s-curve on the RGB channel to restore some brightness and contrast to the image. From this point, the processing is similar to the first method using the Green ColorMask directly. You may need more than one pass of reddish-tone and color-saturation curves adjustments until you get the tone that you want. You will have to determine which of the two methods works best for your data and do this to taste, since there is no right or wrong way – this is art and not science. At this point, I change the identifier and save the image so that I can return to this point in the tonal mapping if needed.

 

67) The next step is to apply the Cyan ColorMask. This attacks the areas where you want to bring out the cyan color typical of Hubble Palette images. I find that this can be the most difficult portion of the tonal mapping to achieve. First remove the Green ColorMask and apply the Cyan ColorMask to the image. As with the Green ColorMask, there are two steps I do prior to trying to change the tonal map of the image.

  • Observe specifically where this ColorMask is most open by looking at the gray scale image of the mask directly.
  • Open the CurvesTransformation and select the Blue channel. Using the real-time preview, select the center point of the curve and drag it up and to the left aggressively to exaggerate the areas of the mask that are affected by the curve. Then reset the CurvesTransformation process without applying the change.

These two pre-steps will help identify which areas of the image to focus on in the tonal mapping. Using the real-time preview and focusing on the brightest areas identified above, apply negative curves to the red and green channels, a positive curve to the blue channel and a brightening s-curve to the RGB channel. This should remove most of any remaining greenish cast and the cyan color of the OIII/Hb emission should start to appear. After applying these curves adjustments, a second, less aggressive pass of negative green and red curves, a positive blue curve and a slight brightening RGB curve may be needed. If the image still shows a persistent greenish cast, it may be necessary to go back to the Green ColorMask to eliminate it. After eliminating the residual greenish cast – if required – the image should be approaching the classic Hubble palette. I find that tonal mapping is a difficult and path-dependent process. If I do not like my results to this point, I may keep iterating the Green and Cyan ColorMask steps or go back to just after the magenta star removal step and try again. When I achieve a result that I like, I change the identifier and save the image so that I can return to this point in the tonal mapping if needed.

 

68) Once the image is showing a nice Hubble palette color tone, it may be possible to further enhance the edges of the dust clouds using the Yellow ColorMask. Not all images will produce a Yellow ColorMask with any structure, but when they do, it can allow some very nice effects. The gray-scale mask should show openings on the very edges of the illuminated dust clouds. As with the Green and Cyan ColorMasks, there are two steps I do prior to trying to change the tonal map of the image.

  • Observe specifically where this ColorMask is most open by looking at the gray scale image of the mask directly.
  • Open the CurvesTransformation and select the Red channel. Using the real-time preview, select the center point of the curve and drag it up and to the left aggressively to exaggerate the areas of the mask that are affected by the curve. Then reset the CurvesTransformation process without applying the change.

These two pre-steps will help identify which areas of the image to focus on in the tonal mapping. Using the real-time preview and focusing on the brightest areas identified above, apply a very gentle negative curve to the Lab a channel, a very gentle positive curve to the Lab b channel and a slight brightening s-curve to the Lab L channel. This should give a nice pop to the edges of the nebula dust clouds.

 

69) The next element of the tonal mapping is to re-apply the Cyan ColorMask and then invert the mask to address the background. Using the real-time preview, select the RGB channel and apply a contrast-enhancing s-curve. This is done to recover some of the brightness and contrast in the dimmer portions of the nebulosity.

 

70) At this point, use the current tonally-mapped image to create new Cyan and Yellow color masks with the ColorMask script. I find that the Cyan ColorMask created at this point highlights the cyan areas we actually wish to enhance and will be different from the initial Cyan ColorMask generated from the original SHO image. With this new Cyan color mask, I apply positive curves on the blue channel to get the tone of blue that I like and can increase the saturation if needed. I can also again apply a slight s-curve on the RGB channel for brightness and contrast enhancement. Similarly, a new Yellow color mask will now highlight the edges of the dust clouds nicely even if the original Yellow color mask did not, and the Yellow ColorMask adjustments described above can now be applied. This completes the tonal mapping of the narrowband RGB image.

 

71) For both (Nb)RGB and SHO images, the chrominance portion of the processing is now complete. In both cases, I try to leave the color images somewhat oversaturated in color but not in brightness. The brightness and contrast of the image will be mostly dictated by the non-linear processing of the LUM/SLUM data. In addition, re-applying the LUM/SLUM data to the chrominance information in the (Nb)RGB/SHO will also reduce the image noise.

 

Contrast Enhancement, Sharpening and Star Reduction for the Non-Linear Luminance Data

 

72) I do most of the contrast enhancement and sharpening for the image in the non-linear state on the ELUM/(Nb)LUM/SLUM prior to combining this data with the chrominance data from the (Nb)RGB or SHO/HOO Nb color data. The steps that I currently use are:

  • HDRMultiScaleTransform – to flatten the dynamic range of bright areas to enhance contrast
  • LocalHistogramEqualization – to recover the overall brightness of the image
  • CurvesTransformation – for contrast enhancement
  • UnsharpMask and/or MultiscaleLinearTransform – for sharpening in the non-linear state
  • DarkStructureEnhancement – for added contrast of dark features in the image

I don't necessarily use all of these steps, and I often blend their results back into the parent image using PixelMath. To a great extent these steps are to taste and there is no right or wrong way; their application requires practice to find a good enhancement without over-processing the image.

 

73) HDRMultiScaleTransform – Many images contain areas that are very bright and areas that are very dim, and with typical stretching processes it is very difficult to show the contrast features in both at the same time. The HDRMultiScaleTransform process, with an appropriate object mask, flattens the dynamic range of the bright areas to significantly enhance the overall image contrast.

  • The first step is to use the HistogramTransformation or CurvesTransformation process to brighten the image so that the nebulosity detail in the dim areas is nicely visible, at the expense of oversaturating the detail in the bright areas. The bright areas should not be completely white, so the histogram should not show white clipping. I then change the identifier and save this image as the base image for the process.
  • Now I make four clones of the image and name them <image>_HDRMST5, <image>_HDRMST6, <image>_HDRMST7, and <image>_HDRMST8. To each of these images, I apply the same object mask that I created for protection with the deconvolution process.
  • I then open the HDRMultiScaleTransform process and adjust the following settings:

                   i. Number of Iterations: 1
                  ii. Inverted: checked
                 iii. Overdrive: 0
                 iv. Median transform: checked
                  v. Deringing: unchecked
                 vi. To Lightness: checked
                 vii. Midtone Balanced: at defaults

 

  • Then, for each of the clone images, I apply the HDRMultiScaleTransform process with the following settings, respectively:

                i. <image>_HDRMST5 - Number of layers: 5
               ii. <image>_HDRMST6 - Number of layers: 6
              iii. <image>_HDRMST7 - Number of layers: 7
              iv. <image>_HDRMST8 - Number of layers: 8

 

  • This generates four images with the HDRMultiScaleTransform process flattening different-sized structures in the bright areas of the image that are open under the applied object mask. I remove the mask from the images and save them.
  • Now I use PixelMath to create a composite of the four clone images and the base image using the following expression: ((1-Factor)*<image>_Base)+(Factor*((weight5*<image>_HDRMST5)+(weight6*<image>_HDRMST6)+(weight7*<image>_HDRMST7)+(weight8*<image>_HDRMST8))), with the symbols Factor = 0.4, weight5 = 0.1, weight6 = 0.2, weight7 = 0.4, weight8 = 0.3 (the expression and Symbols field are written out more readably after this list). These weighting factors are just starting points. I look at each of the four clone images as well as the base image to decide how much of each I want to emphasize. If the bright area of the image already shows decent contrast in the base, then the base image can be weighted higher with a larger value of Factor; if the base image shows little contrast in the bright areas, it can be weighted lower with a smaller value of Factor. Similarly, the individual weights for each of the clone-image layer settings can be adjusted. It is again a matter of taste. I may try several combinations and base weightings until I get the effect that I like. I may also choose to use only a single HDRMST setting as the desired output, or fewer than four different scales.
  • After obtaining the composite image output from PixelMath, I rename the identifier and save the image.
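Written out as it would be entered in PixelMath (a single RGB/K expression plus the Symbols field), the blend above looks like this; the <image> placeholders stand for your actual image identifiers and the weights are just the starting points quoted above:

               RGB/K expression:
                    (1 - Factor)*<image>_Base
                    + Factor*( weight5*<image>_HDRMST5
                             + weight6*<image>_HDRMST6
                             + weight7*<image>_HDRMST7
                             + weight8*<image>_HDRMST8 )

               Symbols:
                    Factor = 0.4, weight5 = 0.1, weight6 = 0.2, weight7 = 0.4, weight8 = 0.3

The four weights inside the parentheses sum to 1, so the clone images are mixed in at full strength and Factor alone controls how much of the flattened detail replaces the base image.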

 

74) LocalHistogramEqualization – The HDRMultiScaleTransform process, by its very nature, reduces the brightness of the image. This is addressed with the LocalHistogramEqualization process, which goes a long way toward recovering the brightness of the key features in the image. I apply the same object mask used with the HDRMultiScaleTransform process and use the following settings for LocalHistogramEqualization: Kernel radius: 160; Contrast limit: 1.3; Amount: 0.8; Histogram resolution: 8-bit (256); Circular kernel: checked. I then apply the process to the image. This should recover much of the contrast in the bright nebulosity that was lost by applying HDRMultiScaleTransform. I then rename the identifier and save the image.

 

75) CurvesTransformation – At this point, depending on the look of the image, I may apply some slight curves adjustment to the ELUM/SLUM/(Nb)LUM to adjust the contrast in the darker regions of the image. This is done without a Mask. Since this is a monochrome image, I usually just apply an s-curve to drop the brightness of the dark portion of the image to further increase the contrast.

 

76) UnsharpMask and/or MultiscaleLinearTransform – The deconvolution in the linear state primarily improved the sharpness of the image; however, further sharpening can be applied in the non-linear state. The HDRMultiScaleTransform process can leave the image a bit softer than desired, so if necessary I use both the UnsharpMask and MultiscaleLinearTransform processes to sharpen the image. As with the HDR step, I prefer to blend the sharpened images with the base image using PixelMath.

  1. I first make two clones of the image and name them <image>_USM and <image>_MSLT_Sharp. To each of these images, I apply the same object mask that I created for protection with the deconvolution process.
  2. To the <image>_USM image, I apply the UnsharpMask process with StdDev = 2.5 and all other settings at their defaults.
  3. To the <image>_MSLT_Sharp image, I apply the MultiscaleLinearTransform process with the following settings: Algorithm: Starlet transform; Layers: Dyadic with 4 layers; all layers checked; Layers 2 and 3: Bias = 0.075; Deringing: checked with Dark = 0.1 and Light = 0.0.
  4. This generates two images with different sharpening processes applied to the bright areas of the image that are open under the applied object mask. I remove the mask from the two images and save them.
  5. Now I use PixelMath to create a composite of the two clone images and the base image using the following expression: ((1-Factor)*<image>_Base)+(Factor*((weightUSM*<image>_USM)+(weightMSLT_Sharp*<image>_MSLT_Sharp))), with the symbols Factor = 0.4, weightUSM = 0.5, weightMSLT_Sharp = 0.5 (written out more readably after this list). These weighting factors are just starting points. I look at each of the two clone images as well as the base image to decide how much of each I want to emphasize. If the bright area of the image already shows decent sharpness in the base, then the base image can be weighted higher with a larger value of Factor; if the base image shows too much softness in the bright areas, it can be weighted lower with a smaller value of Factor. Similarly, the individual weights for each clone can be adjusted. It is also possible to create more cloned images, for instance with a higher StdDev setting in UnsharpMask or higher bias settings in MultiscaleLinearTransform, and mix in more variations of sharpening with PixelMath. It is again a matter of taste; I generally find that the two sharpened images give me enough to blend in. I try several combinations and base weightings until I get the effect that I like.
  6. After obtaining the composite image output from PixelMath, I rename the identifier and save the image.
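As with the HDR blend, here is the same sharpening blend laid out as it would be typed into PixelMath (single RGB/K expression plus the Symbols field); the <image> placeholders again stand for your actual identifiers and the weights are only starting points:

               RGB/K expression:
                    (1 - Factor)*<image>_Base
                    + Factor*( weightUSM*<image>_USM
                             + weightMSLT_Sharp*<image>_MSLT_Sharp )

               Symbols:
                    Factor = 0.4, weightUSM = 0.5, weightMSLT_Sharp = 0.5

Adding a third or fourth sharpened clone simply extends the inner sum with another weighted term; keep the inner weights summing to 1 so that Factor alone controls the overall strength of the blend.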

77) DarkStructureEnhancement – To enhance the darker details in the image with a further boost in contrast over the brighter surrounding areas, I apply the DarkStructureEnhancement script to the entire image without any mask protection. I find that this either works or fails, so I just run the script with the default settings on the selected image and decide whether or not to keep the result. If it works, it can add a very nice touch of contrast to the image. If I like the result, I rename the identifier and save the image.

 

Combining the Luminance and Chrominance data to form (Nb)L(Nb)RGB Color Images

 

78) The next step is to combine the ELUM/SLUM/(Nb)LUM data with the chrominance data from the (Nb)RGB or SHO images using the LRGBCombination process. First, I uncheck the RED, GREEN and BLUE channels in the LRGBCombination process window and check the L channel, with default lightness, typically default saturation and equal channel weights. Since I previously did noise reduction on both the luminance and chrominance images, I do not check the chrominance noise reduction option. I then apply the process to the (Nb)RGB/SHO color image. This should show two major effects: 1) much of the brightness and detail is added to the color image, and 2) the chromatic noise in the background is significantly reduced. However, I find that it is possible to get too much total lightness and a washed-out image. If the image is not showing basically the right color balance, I use trial and error to find the best lightness and saturation settings. I usually decrease the saturation slider (which actually increases the color saturation) and perhaps, but not always, increase the lightness slider (which actually reduces the luminance component of the image). I start with lightness = 0.5 (default) and saturation at 0.40 (increased). I will try several settings to see which gives the best result. Once I am satisfied, I rename the identifier and save the image as the LRGB image. The result of the LRGBCombination at this point should show a good color balance and saturation, very nice contrast and sharpness, and low noise in the background.

 

79) MorphologicalTransformation – For many images, the sheer number of stars in the field can overwhelm the DSO or nebula that you want the image to emphasize. Using a star mask and the MorphologicalTransformation process, the size of the stars can be reduced, or the stars can be essentially removed from the image. I have been experimenting with star removal and the use of starless images, but at this point I regularly just use star size reduction. Unlike the star mask for deconvolution, this star mask does need to capture the vast majority of the stars, including stars in the regions of bright nebulosity, while at the same time excluding the nebulosity and the background. In order for the StarMask process to capture just the stars effectively, we first preprocess a seed image to strongly flatten the brightness across the image. To accomplish this, we use the HDRMultiScaleTransform in the following manner:

  • Start with the original non-linear lightness mask that was used to create the object mask for deconvolution. I don’t use the LUM/SLUM image in its current state. Make a clone of that image.
  • Open the HDRMultiScaleTransform process and use the following settings: Number of layers – 7; Number of iterations – 6; Scaling function – B3 Spline (5). The number of layers can be increased or decreased depending on how well the eventual star mask captures the stars in the image.
  • Apply the HDRMultiScaleTransformation process to the image.
  • This should create a rather ugly, very flat, grayish image. That is exactly what you want in order to seed the StarMask structure-detection algorithm so that it can capture the stars and separate out the nebulosity.
  • Next open the HistogramTransformation process and permanently black-clip this image until most of the remaining nebulosity is gone but the stars in the nebulosity still remain, even if faintly. This is all that is required for the StarMask structure algorithm to detect these stars.
  • The actual StarMask can now be generated from this pre-processed image. The settings below should generate a StarMask capturing almost all the stars in the image. We do not want to increase the size of the stars and we do not need to use any of the Mask Preprocessing options. I use the StarMask process with the following settings:
  • Set the Scale to 6 or 7 in order to capture the largest stars. You may need to try both settings and see which captures the largest stars best.
  • The Noise threshold can be set at 0.1 since we have already black clipped the background and most of the nebulosity from the image
  • Structure Growth: This is not wanted so set Large scale = 0, Small scale = 0, compensation = 0
  • Mask Generation: We do want some smoothing so set Smoothness = 5.
  • Check the Contours option to select the edges of the star profiles and their halos for the detected stars.
  • Mask Preprocessing: leave at default since we pre-processed the image.

These settings should generate a good contour star mask capturing even the smaller stars and the stars embedded in the nebulosity. Now apply this StarMask to the ELUM/SLUM image. The mask should protect everything except the edges and halos of the stars. With the StarMask applied, we now reduce the star sizes using the MorphologicalTransformation process. I use the following settings for this process:

  • Operator: Morphological Selection
  • Interlacing = 1
  • Iterations = 4
  • Selection = 0.25
  • Amount = 0.75
  • Structuring element size = 7 (7×7)
  • Select the circular structure.

After applying the MorphologicalTransformation, the stars should be significantly de-emphasized, with more of the diffuse nebulosity becoming visible and the nebula appearing much more dominant in the image. If this is not enough star reduction, you can increase the Iterations further. You can also repeat the entire process, including the generation of a new star contour mask, to further de-emphasize the stars. This completes the typical non-linear processing steps for increasing the contrast, sharpening, and reducing the stars in the ELUM/SLUM/(Nb)LUM image.
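If the full-strength result ever looks too aggressive, one option (not part of the flow above, just the same PixelMath blending idea used for the HDR and sharpening steps) is to run the MorphologicalTransformation on a clone and mix it back with the unreduced image using a hypothetical blend factor k:

               RGB/K expression:   (1 - k)*<image>_Base + k*<image>_StarReduced
               Symbols:            k = 0.7

Here k = 1 gives the full star reduction and smaller values back it off smoothly, which can be easier than re-tuning the Iterations and Amount settings.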

 

80) The image at this point is very close to complete and what further tweaks to do will be very much a matter of taste. There are three processes that I will likely look at:

  • CurvesTransformation: a slight positive curve for saturation
  • CurvesTransformation: a slight s-curve for contrast final touch
  • ACDNR Noise reduction: for both chrominance and lightness

                 i. Lightness: StdDev = 1.2; Amount = 0.3

                ii. Chrominance: StdDev = 2.0; Amount = 0.5

 

These final touches are optional and can be skipped. Other tweaks can also be done, but I will often not even do the ACDNR.

 

Here is a PDF version:

 

Attached File  PixInsight Processing Work Flow 11125018.pdf   336.25KB   346 downloads


Edited by cfosterstars, 25 November 2018 - 02:22 PM.


#19 russ.carpenter

russ.carpenter

    Sputnik

  • *****
  • Posts: 28
  • Joined: 21 Jan 2016
  • Loc: Green Valley, Arizona

Posted 26 November 2018 - 10:15 PM

What a spectacular contribution. Thanks for all your hard work!



#20 Tom K

Tom K

    Viking 1

  • *****
  • Posts: 624
  • Joined: 19 Jan 2010
  • Loc: Escondido, CA

Posted 06 December 2018 - 04:03 PM

@cfosterstars - Wow - that is quite a document.   I will need a lot of cloudy nights to process my images!  I was wondering  whether PI would allow a script to be created that quasi-automated this process.   It would seem to me that for similar types of objects some of the similar workflows would be done.   It sure would be great to spend a lot of time setting this up and then just point the script to a set of folders and then go have a beer while it runs!

 

Tom



#21 artem2

artem2

    Ranger 4

  • ****-
  • Posts: 324
  • Joined: 08 Nov 2013
  • Loc: Österreich (no kangaroos in Austria)

Posted 10 December 2018 - 07:23 AM

Thank you for the great "Pixinsight Processing Flow"!!



#22 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 10 December 2018 - 09:15 PM

I am working on an update covering TGVDenoise. I also tried a few new blending techniques for Ha data that worked well.



#23 rlsarma

rlsarma

    Ranger 4

  • -----
  • Posts: 310
  • Joined: 24 Aug 2015
  • Loc: Digboi, Assam, India

Posted 02 January 2019 - 04:56 AM

I am working on an update covering TGVDenoise. I also tried a few new blending techniques for Ha data that worked well.

Very helpful post for processing images.

 

Maybe a silly question. Would you kindly take some pain to also provide the PixInsight workflow for images acquired with OSC cameras combined with STC Duo Narrowband filter?

 

Rajib

Digboi, Assam, India



#24 cfosterstars

cfosterstars

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2579
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 02 January 2019 - 11:44 AM

Very helpful post for processing images.

 

Maybe a silly question. Would you kindly take some pain to also provide the PixInsight workflow for images acquired with OSC cameras combined with STC Duo Narrowband filter?

 

Rajib

Digboi, Assam, India

I will at some point. I just received the filter and have not yet installed it in my ASI071MC-PRO filter wheel. However, I did already discuss that topic. I may need to post my update to the process flow.



#25 rlsarma

rlsarma

    Ranger 4

  • -----
  • Posts: 310
  • Joined: 24 Aug 2015
  • Loc: Digboi, Assam, India

Posted 02 January 2019 - 10:10 PM

Thank you, sir,

 

what would you like to see in a Video concerning SGP ??

 

Mitch / Starhunter / Astrodude ...

Hi Astrodude,

 

Sorry for my delayed response due to my other preoccupations. I want to learn more about plate solving and Framing & Mosaic in SG Pro. Every time I want to go imaging after Framing & Mosaic, it asks me about a camera rotator (even if I deselect the rotator). I don't have a rotator and I want to directly plate solve and image through SG Pro.

 

Wishing you a very happy New Year 2019.

 

Rajib



