
Processing Messier 33 in PixInsight - A Tutorial


#1 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 12:59 PM

First, let me state flat out, I am NOT an expert in either PixInsight or image acquisition. The data we will be using is not perfect; it has significant warts. But in a way, that is a good thing, since it will require techniques that better data might not. Likewise, the choices I make when processing this may not be optimal. But I'm pretty sure we will end up with a usable result. If you have an alternative you think is better, by all means speak up. I'm open to learning something new.

This is not an ideal format for a tutorial for at least two reasons. First, there is a limit of one picture per post. Second, the max size for a picture is 800x800. So I'll have to reduce the size of some of the results and/or crop them for display here.

The data itself was taken on the night of September 4th, 2013. A Celestron Powerstar III 8" SCT with an f/6.3 reducer/flattener was used, so the focal length is close to 1260mm. All subs were 5 minutes in length. Luminosity was unbinned (1x1). R, G, and B were all binned 2x2. Flats were taken for each of those 4 channels individually. Total exposure time for luminosity was 1 hour 55 minutes. Each color channel got 45 minutes. I can get by with less than that and sometimes go with just 30 minutes of color. However, I like using 45 minutes when I can get it because Winsorized sigma clipping works better with the extra subs.
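For anyone curious what Winsorized sigma clipping is actually doing with those extra subs, here is a rough per-pixel sketch in Python/numpy. It only shows the general idea of clamping outliers before re-estimating the statistics and then rejecting; PixInsight's ImageIntegration implementation differs in its details, and the sigma limits and iteration count below are just illustrative.

import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iterations=5):
    """Conceptual per-pixel Winsorized sigma clipping for a stack of subs.

    stack has shape (n_subs, height, width). Outlying values are pulled in
    (winsorized) to the clip limits before the robust sigma is recomputed,
    and the surviving pixels are averaged. A sketch of the idea only, not
    PixInsight's exact algorithm.
    """
    data = stack.astype(np.float64).copy()
    for _ in range(iterations):
        median = np.median(data, axis=0)
        sigma = 1.4826 * np.median(np.abs(data - median), axis=0)  # robust sigma from the MAD
        low = median - sigma_low * sigma
        high = median + sigma_high * sigma
        data = np.clip(data, low, high)        # winsorize: clamp outliers to the limits
    keep = (stack >= low) & (stack <= high)    # reject pixels outside the final limits
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

With more subs per pixel, the median and MAD estimates are more stable, so satellite trails and hot pixels get rejected more cleanly, which is why I prefer 45 minutes per channel over 30 when I can get it.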

This tutorial is going to start with the stacked, calibrated images (darks, flats, flat darks). I happen to really like doing the calibration and stacking within PixInsight but I know others still use other tools like Deep Sky Stacker.

The channel stacks are available on DropBox here:

Messier 33 Channel Stacks on Dropbox

I'm a strong believer that you learn better when you actually follow along and do the procedure. Still, it is your choice.

It is also possible I will have false starts and will back up in the processing. That is fine if it occurs. That will show how I think about the processing.

To a certain extent my workflow depends on what happens with the image, but here is a skeleton outline of some of the likely steps:

Luminosity:

Crop out stacking artifacts
Dynamic Background Extraction (or Automatic Background Extraction)
Create Point Spread Function for Deconvolution
Create Star Mask for Deconvolution
Create Overall Mask for Deconvolution
Deconvolution
Histogram Transformation
HDRMultiscaleTransform (Wavelets)
TGV Denoise
Histogram Transformation

RGB

Channel Combine the separate color stacks
Crop
Dynamic Background Extraction or Automatic Background Extraction to get rid of gradients
Histogram Transformation
TGV Denoise
Histogram Transformation
Saturation done using the Curves Tool

L and RGB
Register (Align) the separate L and RGB images
LRGB Combine
Crop
Curves (on the luminosity portion) to adjust contrast


#2 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:12 PM

Luminosity: Cropping Out Stacking Artifacts.

If you bring up the luminosity stack and do a [ctrl]a (which does an auto-stretch), you will notice that there are black borders on some of the sides. They are caused by the fact that when we stack, we are aligning on the stars. Unless your guiding was absolutely perfect, you have no flexure, and you did not dither, the FOV will be slightly different between the different subs. That mismatch of FOVs shows up as darker borders. This particular image only has minor amounts of it, most obvious on the left edge.
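If it helps to see why those borders go dark, here is a toy numpy sketch (not PixInsight code). A few identical frames are combined with small, made-up registration offsets; the edge pixels that only some frames cover average out dimmer than the interior, which is exactly the strip we crop away.

import numpy as np

def shift_with_zeros(img, dy, dx):
    """Shift an image by (dy, dx) pixels, filling the uncovered border with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys_src = slice(max(-dy, 0), h - max(dy, 0))
    xs_src = slice(max(-dx, 0), w - max(dx, 0))
    ys_dst = slice(max(dy, 0), h - max(-dy, 0))
    xs_dst = slice(max(dx, 0), w - max(-dx, 0))
    out[ys_dst, xs_dst] = img[ys_src, xs_src]
    return out

# Hypothetical subs of a flat grey field, registered with small offsets from
# dithering/flexure. Averaging them (empty border pixels counting as zero)
# leaves darker strips along the edges where some subs contribute nothing.
sub = np.full((50, 50), 0.5)
offsets = [(0, 0), (2, 3), (-1, 4), (3, -2)]
stack = np.mean([shift_with_zeros(sub, dy, dx) for dy, dx in offsets], axis=0)
print(stack[0, :6])    # border pixels come out well below 0.5
print(stack[25, 25])   # interior pixels stay at 0.5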

Attached Files



#3 Jeff2011

Jeff2011

    Surveyor 1

  • *****
  • Posts: 1873
  • Joined: 01 Jan 2013
  • Loc: Sugar Land, TX

Posted 08 September 2013 - 01:22 PM

That is a great tip. I have been doing the same thing in StarTools using the AutoDev module. It is important to remove the stacking artifacts since they can degrade the performance of gradient removal operations.

#4 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:23 PM

The dynamic crop tool is used to get rid of those edges. Here is what the image looks like while applying the tool.

Attached Files



#5 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:23 PM

And here is what the Dynamic Crop settings were:

Attached Files



#6 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:26 PM

Jeff is exactly right about why the cropping is important. If you don't crop those edges, it will mess up gradient removal, and it will prevent you from stretching the image as much as is desirable.

#7 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:36 PM

To actually apply the Dynamic Crop, you click the green check mark at the bottom of the Dynamic Crop controls.

This might be a good time to mention another tool that is useful. While [ctrl]a is great for quick looks at the data, sometimes you want more control. The ScreenTransferFunction is used for this.

Attached Files



#8 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:45 PM

When using the ScreenTransferFunction (and [ctrl]a), you are not actually changing the underlying data. What you are doing is changing how that data is presented on the screen. When using this, I almost always have the blue check mark on the bottom right clicked. This tells the screen transfer function to take on the settings of whatever image is selected. If you click the yellow and black icon that looks something like a radiation symbol, it will do an auto-stretch of the data similar to hitting [ctrl]a. Moving the sliders around will affect the results. The only other thing I want to mention here is that the symbol that looks like two links of chain is currently selected. This ties all channels together when doing the stretching. This is important for RGB data. Sometimes you want to do the best stretch for each channel independently. To do that, you would deselect that link icon.
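For the curious, that auto-stretch is essentially a midtones transfer function whose parameters come from the image's median and MAD. Here is a rough Python approximation of the screen stretch using what I understand to be PixInsight's usual defaults (shadows clipped near median - 2.8 x MAD, target background 0.25). It is a sketch for intuition only, not the tool's actual code, and like the STF it is for display only, never something to bake into linear data you still want to deconvolve.

import numpy as np

def mtf(m, x):
    """Midtones transfer function: rational curve that maps x through midtones balance m."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def auto_stretch(img, shadows_clip=-2.8, target_bg=0.25):
    """Approximate STF-style auto-stretch for a linear image scaled to [0, 1]."""
    median = np.median(img)
    mad = 1.4826 * np.median(np.abs(img - median))       # robust spread estimate
    c0 = np.clip(median + shadows_clip * mad, 0.0, 1.0)  # shadows clipping point
    m = mtf(target_bg, median - c0)                      # midtones so the sky lands near target_bg
    rescaled = np.clip((img - c0) / (1.0 - c0), 0.0, 1.0)
    return np.clip(mtf(m, rescaled), 0.0, 1.0)

# stretched_view = auto_stretch(luminance)   # 'luminance' is a hypothetical [0, 1] array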

Here are the autostretched results of the Dynamic Crop.

Attached Files



#9 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:55 PM

One of the clearly bad things about this data is that there is an obvious gradient across it. Even worse, that gradient is a little tougher than average to remove because Messier 33 fills so much of the frame. I often will use Automatic Background Extraction (ABE) to get rid of gradients, but it is hopeless here.

Attached Files



#10 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 01:57 PM

Here are the settings in ABE for that miserable failure:

Attached Files



#11 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:00 PM

As can be seen, the results are actually worse than what we started with. All kinds of dark artifacts have been introduced around M33, and the upper right and lower left corners are still bad. Various other settings can be tried. For example, you might want to try reducing the Deviation setting under Global Rejection (often a handy thing to do), but nothing will work well here.
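For what it's worth, ABE is conceptually fitting a smooth, low-order surface to what it decides is background and subtracting it. The sketch below mimics that idea with a simple 2D polynomial fit plus sigma rejection; the function name and parameters are mine, not ABE's. It also shows why this image defeats the automatic approach: with M33 covering so much of the frame, galaxy light leaks into the fit, the model ends up too high near the galaxy, and the subtraction carves out those dark halos.

import numpy as np

def fit_polynomial_background(img, degree=2, reject_sigma=2.0, iterations=3):
    """Fit a low-order 2D polynomial 'background' to an image (ABE-like idea, illustration only)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    z = img.ravel().astype(np.float64)
    # Design matrix with all terms x^i * y^j for i + j <= degree
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1) if i + j <= degree]
    A = np.column_stack([x**i * y**j for i, j in terms])
    mask = np.ones(z.size, dtype=bool)
    for _ in range(iterations):
        coeffs, *_ = np.linalg.lstsq(A[mask], z[mask], rcond=None)
        resid = z - A @ coeffs
        mask = np.abs(resid) < reject_sigma * np.std(resid[mask])  # drop bright galaxy/star pixels
    return (A @ coeffs).reshape(h, w)

# background = fit_polynomial_background(luminance)   # 'luminance' is a hypothetical linear frame
# corrected = luminance - background                  # the 'Subtraction' correction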

#12 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:11 PM

When automatic fails, it is time to bring out the big guns. In this case, that is DynamicBackgroundExtraction (DBE).

Unfortunately, the automatic settings for DBE will also fail miserably. Bring up DBE. Then click on the image. Open up Sample Generation, and then click on the Generate button.

Attached Files



#13 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:13 PM

Your image will look something like this:

Attached Files



#14 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:14 PM

If you actually tried to apply this, the results would be even more disastrous than ABE's.

The trick is to place those little samples where we actually need them.

Attached Files



#15 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:33 PM

Then this needs to be applied to the image. Here is the settings box.

Note that I changed the Tolerance setting under Model Parameters so that all samples would be good (not red). I also had to change the Correction setting under Target Image Correction to Subtraction. Otherwise, it won't actually do anything; it will just show you the background it would subtract.
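If it helps to picture what DBE does with those hand-placed samples, here is a conceptual Python sketch: take a median in a small box around each sample, interpolate a smooth surface through those points, and subtract it. DBE's actual spline model, tolerance test, and weighting are different; the function below is only a stand-in, and the sample positions would be the ones you place by hand on clean sky.

import numpy as np
from scipy.interpolate import Rbf

def dbe_like_background(img, sample_points, box=15):
    """Interpolate a background surface through hand-placed (x, y) samples. Illustration only."""
    h, w = img.shape
    r = box // 2
    xs, ys, vals = [], [], []
    for x, y in sample_points:
        patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
        xs.append(x)
        ys.append(y)
        vals.append(np.median(patch))              # robust sample value from a small box
    surface = Rbf(xs, ys, vals, function='thin_plate', smooth=1.0)
    yy, xx = np.mgrid[0:h, 0:w]
    return surface(xx, yy)                         # smooth model over the whole frame

# model = dbe_like_background(luminance, samples)  # 'samples' = points you placed on clean sky
# corrected = luminance - model                    # the 'Subtraction' correction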

Attached Files



#16 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:39 PM

Here is the result. Those unsightly gradients are gone without the terrible artifacts introduced by ABE. By the way, to apply this, you drag the little triangle on the bottom left of the settings box over the image.

Attached Files



#17 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 02:42 PM

Next up will be deconvolution. This should be done while the data is still linear (unstretched). This is a good place to pause for a while.

#18 Raginar

Raginar

    Fly Me to the Moon

  • *****
  • Posts: 6138
  • Joined: 19 Oct 2010
  • Loc: Rapid CIty, SD

Posted 08 September 2013 - 02:52 PM

I'd put this on a website... You'll find it flows easier than the constant threads :)

Thanks for sharing your techniques. One request, could you post at the beginning your 'work flow'? I often don't need the individual frames... but seeing how you tackle going from mono to RGB and linear to non-linear is nice.

#19 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 03:17 PM

Hi Chris:

I've done many a website before, and it is possible this could end up as one, or on one, if enough people find it useful.

I have added an overall workflow that may or may not need to be tweaked as we go along.

#20 Raginar

Raginar

    Fly Me to the Moon

  • *****
  • Posts: 6138
  • Joined: 19 Oct 2010
  • Loc: Rapid CIty, SD

Posted 08 September 2013 - 03:39 PM

Thanks MR :)

#21 joelimite

joelimite

    Surveyor 1

  • *****
  • Posts: 1787
  • Joined: 01 Sep 2008
  • Loc: Colorado Springs, CO

Posted 08 September 2013 - 03:42 PM

Excellent tutorial, Madratter! I'm extremely new to deep-sky astrophotography and have only used DeepSkyStacker and Photoshop. I'm intrigued by your tutorial, though, and now want to try PixInsight.

#22 Jeff2011

Jeff2011

    Surveyor 1

  • *****
  • Posts: 1873
  • Joined: 01 Jan 2013
  • Loc: Sugar Land, TX

Posted 08 September 2013 - 04:48 PM

So far it all makes sense to me. I can correlate the PI operations to what I do in StarTools. However, I am beginning to see some differences. Good stuff. Looking forward to the combining process.

#23 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 07:27 PM

Hi Joel. Thanks for the kind words. Even if you don't end up using PixInsight, the concepts will still apply to other processing tools. That goes hand in hand with what Jeff is saying about StarTools (which I also own).

Deconvolution is a way of tightening up the detail in your image. It can make stars smaller, and it can bring out detail in your images.

Perhaps the way to start is to show what happens if you actually use the defaults on an image. The result is usually pretty dreadful, and it will illustrate why we go to the trouble of all the steps that are about to occur.
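As a rough picture of what deconvolution does, here is a minimal Richardson-Lucy sketch using scikit-image with a synthetic Gaussian PSF. PixInsight's Deconvolution tool uses a PSF measured from your own stars plus regularization and deringing that this bare version lacks, and the FWHM and iteration count below are made-up starting points, not recommendations.

import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(fwhm_px, size=21):
    """Synthetic Gaussian PSF. In practice you would measure the PSF from stars instead."""
    sigma = fwhm_px / 2.3548
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# Run on the *linear* luminance, scaled to [0, 1]; 'luminance' is a hypothetical array here.
# psf = gaussian_psf(fwhm_px=3.0)                  # assumed seeing, purely illustrative
# sharpened = richardson_lucy(luminance, psf, 20)  # ~20 iterations as a starting point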

If you have followed along, you might want to save your current results. I called mine L_dbe. I save as a 32-bit FITS file. I actually save after every operation. There is a history explorer, but that won't help if the power goes out.

Bring up that image and use either [ctrl]a or STF (ScreenTransferFunction) to stretch it so you can see what is happening. Next we want to define a preview. This can be done by pressing [alt]n. Move the cursor to where you want to start. Then press and hold the left mouse button and drag to where you want the preview to end.

Previews are very useful in PixInsight and we will be using them from time to time for a number of purposes.

If successful, you will end up with something like this:

Attached Files



#24 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 07:30 PM

If you left click on the tab down the left side of the image that says Preview01, you should now see this:

Attached Files



#25 Madratter

Madratter

    Fly Me to the Moon

  • *****
  • Posts: 6376
  • Joined: 14 Jan 2013

Posted 08 September 2013 - 07:35 PM

Notice I made my preview large enough that it does not all fit when showing it at 1:1 scale. (You can change the scale by using your middle mouse scroll wheel.) However, we really want this at 1:1. We can either live with it this way, or, using common Windows methodology, grab the edge of the frame and enlarge the window.

Now, one of the things that makes previews useful is that, because they are not as big as the full image, computationally expensive processing can be tried out before committing the settings to the entire image.
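In code terms, a preview is nothing more than working on a slice of the array: tune the expensive settings there, then run them once on the full frame. Everything below is synthetic (random pixels, a flat PSF, made-up coordinates) purely so the snippet runs on its own.

import numpy as np
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
luminance = rng.random((1200, 1600)) * 0.1        # stand-in for the real linear frame
psf = np.ones((5, 5)) / 25.0                      # stand-in PSF

preview = luminance[400:650, 500:750]             # the 'preview': cheap to experiment on
test = richardson_lucy(preview, psf, 10)          # tune iterations/PSF here first
final = richardson_lucy(luminance, psf, 10)       # then commit to the whole image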

Bring up the deconvolution tool. I have opened mine up to also show the deringing parameters. We aren't currently using them but we will.

Attached Files







