Beginner Pixinsight tutorials?

This topic has been archived. This means that you cannot reply to this topic.
114 replies to this topic

#51 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 17 January 2015 - 10:37 PM

Mostlyemptyspace,

 

It looks like a good improvement and I'm glad you are making progress.

 

I have found some features will simply not calibrate correctly if the gradients from local lights, light pollution or the Moon are very strong in your light frame data.  I have this problem when I do broadband imaging from my house or the nearby university and schools where I do a lot of outreach.  I have tried all sorts of things, from doing a 3 to 8 point DBE on each subframe to remove the light pollution gradient before calibration, to spending a lot of time with DBE at the end of the integration, to modeling a flat with the gradient built in.  All of these things have had moderate success, but not nearly as much as I would like for the amount of work I put into it.  I don't have any of these issues when I switch to my 3nm Ha filter or when I'm doing broadband from a dark sky site.

 

Regards,

David

 

To be honest I wasn't expecting anything nearly this good considering how much light pollution I have here, and the fact that I'm at sea level pretty close to the ocean, so seeing is usually terrible. It's just odd how the light pollution and poor seeing manifest themselves. The background is sort of brown and blotchy, rather than being, say, gray and uniform. Is there any way to cut the background to a darker black without compromising the subject? I like how M42 looks just fine, but if I look at the dark area underneath it, it's all brown and blotchy.



#52 17.5Dob

17.5Dob

    Voyager 1

  • *****
  • Posts: 10,350
  • Joined: 21 Mar 2013

Posted 17 January 2015 - 10:58 PM

Ok, just to tag along in this thread, how much will I be able to learn in the 30-day trial? I've read and seen so many great results using PI, which appear better than my standard PS workflow, that I'm willing to give it a go. But realistically, how much am I going to be able to digest in just 30 days?



#53 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 12:25 AM

Well I learned a hell of a lot in 3 days, enough to have a workflow that surpasses my old one in DSS and Photoshop. You just gotta go through the walkthroughs and have a lot of patience. PI doesn't hold your hand, though. A lot of things that are automated in other tools you need to do explicitly in PI, but the results are better.

 

Now I think I've ironed out the kinks and I could get through a workflow in maybe 90 minutes, which is mostly processing time.



#54 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 18 January 2015 - 12:56 AM

Mostlyemptyspace, how dark you want the background is a personal preference.  You can probably push it down further and actually increase the overall contrast of the image but you may also see a bit more of the noise.  I tend to like the background to be just visible with a small amount of noise present.  To my eyes it looks more natural.  If you want to remove some of the blotchy background you may want to try out some of the PI noise reduction tools.  I typically use a combination of TGVDenoise and MultiscaleMedianTransform with different masks.

 

17.5Dob, I think the PI trial is 45 days, not 30.  When I first started using it I learned enough to be able to make significant improvements over my previous process (DSS and Gimp) in just the first few days.  By the end of the 45 days I was using some of the HDR tools and noise reduction tools.  I wouldn't say I was using them to great effect, as I was overusing them, but I think learning where to draw the line comes with experience.  There are free and paid tutorials for PI, all of which are very useful in helping you get started.  If you are not averse to trying another tool for image processing I would definitely download the trial and check out some of the free tutorials.  Harry's Astro Shed is a good place to start.

 

Regards,

David



#55 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 18 January 2015 - 01:55 AM

Mostlyemptyspace,

 

This is what I was referring to by using the NR tools in PI to remove some of the blotchiness in the background so you don't have to clip the blacks so much:

mostlyemptyspace_M42_final.jpg

 

You can see that there is actually some faint dust between NGC1977 (the Running Man) and M42, with a gradual transition.  If you clip the blacks you lose that detail and end up with a sharp edge to the black.  I also did some very mild HDRMultiscaleTransform to make the core of M42 a bit more visible.  You can't resolve the individual stars of the Trapezium cluster, but you can see that they are there.  I desaturated the stars a little, to keep the halos from dominating so much, and did a little color balancing as well.

 

Regards,

David



#56 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 02:37 AM

David, that is fantastic. Just when I thought I had gone as far as I could. I especially love how you brought out more detail in M42. Can you please give me the details (e.g., processes and settings) on what you did so I can try to replicate it? Thanks so much!



#57 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 01:12 PM

Ok, now for a tangent discussion: whether or not to take the plunge and buy PI when my trial runs out. After putting in the work, I can definitely see the utility of PI, especially the DBE and NR functions. My old workflow primarily consisted of DSS and Photoshop, and it looks like I can grab these Carboni tools for $22. What does PixInsight bring to the table that these tools won't? For those of you who own both, what are the PI functions that you can't live without?



#58 terry59

terry59

    Voyager 1

  • *****
  • Posts: 11,052
  • Joined: 18 Jul 2011

Posted 18 January 2015 - 02:23 PM

Dynamic crop, ABE/DBE, histogram transformation and curves transformation give me superior results over PS, as do the noise reduction tools, the analysis tools, etc.

 

Of course this is my opinion and I'm sure there will be those who disagree.


Edited by terry59, 18 January 2015 - 02:24 PM.


#59 Goofi

Goofi

    Cosmos

  • *****
  • In Memoriam
  • Posts: 8,137
  • Joined: 03 May 2013

Posted 18 January 2015 - 02:34 PM

In addition to what Terry said, the SHO-AIP script in Multichannel Synthesis...

I can mix any filter into any color channel, and set with sliders how much of each filter applies.

Plus, it has some noise reduction and allows Luminance mixing too.

 

PI allows you to do most of your processing while the data is still linear; with PS you have to stretch the data, then try to beat back noise.

I feel you get much better results with PI because of how long you can stay linear.



#60 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 02:45 PM

Yeah, that really stood out to me as well. If I use DSS, I pretty much have to stretch the histogram before I do anything else; then I end up with a noisier image and spend my time in PS beating it up trying to get rid of the noise.



#61 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 03:15 PM

Are these things that PS simply can't do, or are they simply easier in PI? I'm looking at these Carboni PS tools, and they claim to do a lot of the same things: gradient removal, noise removal, deconvolution, etc.

At $275, that's almost the price of the refractor I was hoping to buy. Even though I'm really liking it, I want to know if it's really essential, or if I could get similar results from DSS+PS+plugins with just a little more elbow grease.



#62 terry59

terry59

    Voyager 1

  • *****
  • Posts: 11,052
  • Joined: 18 Jul 2011

Posted 18 January 2015 - 03:31 PM

 

The real key, as Goofi said, is working while the data is linear. That is huge...and PS can't do it. Only you can decide if it is "worth it".

 

Edit: I've done an initial stretch with histogram transformation and tried unsuccessfully to replicate it in PS. I really do like layers, though, and PI doesn't have those.


Edited by terry59, 18 January 2015 - 03:33 PM.


#63 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 03:37 PM

 

 

Yeah, that's pretty huge. I personally like dark backgrounds, so background noise drives me crazy. I always end up with my images looking "overprocessed" because I did so much to remove the background noise that I mangled the subject.



#64 nitegeezer

nitegeezer

    Galactic Ghost

  • *****
  • Posts: 7,691
  • Joined: 27 Nov 2007

Posted 18 January 2015 - 03:40 PM

It looks like GIMP has improved in a couple of areas.  Reading their update email, these two lines caught my eye:

To make things even more fun, we added 64bit per color channel precision to GIMP.

 

added loading and saving of 32bit TIFF files.

 

 

How does this alter the playing field?



#65 17.5Dob

17.5Dob

    Voyager 1

  • *****
  • Posts: 10,350
  • Joined: 21 Mar 2013

Posted 18 January 2015 - 03:41 PM

I use DSS + PS + HLVG + GradientXterminator + Carboni's. I don't know of a way to run a "deconvolution" script in PS, and the noise removal tools in PS are terrible. I've been using Carboni's "Deep Sky Noise Removal" for most of my images, which is just a PS action using some masks.

From everything I've read, PI is hands down the winner on the noise reduction front.

I think the vast majority of the people using PI here started out using the same things I'm using, but all of them have gravitated towards PI. That tells me something.



#66 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 18 January 2015 - 04:06 PM

Mostlyemptyspace, I've been writing up the steps and tools I used to make that image, but it's taking a while, as I did quite a bit once you take into account all the mask generation and variations.

 

I've been using PixInsight for about 3 years now and it's grown as a program, introducing new tools and functions (none of which I had to pay a cent for past the initial purchase).  I recently decided I would try to educate myself in Photoshop, so I signed up for a Creative Cloud license and downloaded Carboni's and Anna's actions.  I enjoy learning to use different programs, and if that combination can do something for me that I can't do in PI then I'm all for it.  So far I haven't found anything, but I've only been working with it for about a month.  To be completely fair, there are some things that are easier to do in PS, like blending data.  The layers in PS make this easy, and using PixelMath or scripts to do this in PI can be cumbersome, although I will argue that if you learn how to do all that blending in PI you will have a much greater understanding of what's going on under the hood in PS.

On top of this, PI can handle all of the calibration, registration and stacking duties, and do it better than tools like DeepSkyStacker (I haven't used CCDStack or some of the other software out there for this, so I can't compare them).  The noise and gradient reduction tools are excellent, and some of the tools for mosaics are very hard to beat.  It also has lots of tools beyond picture processing, like photometry, plate solving, sensor characterization, etc.  The extremely powerful PixelMath tool can do so many things (see my blog post about halo removal) if you take the time to learn and understand it, and when combined with the built-in JavaScript scripting and the PCL development support there's really nothing you can't do with it.

 

There are only two scenarios where I don't recommend PI to anyone doing astrophotography: you are already very experienced with PS, or you simply do not have the up-front cash.  Even that second case is a matter of "not now"; look at saving up for it later.

 

Regards,
David



#67 Madratter

Madratter

    Hubble

  • *****
  • Posts: 13,277
  • Joined: 14 Jan 2013

Posted 18 January 2015 - 05:36 PM

I use both PI and Photoshop in my processing. My personal opinion is that doing without either one is handicapping yourself. I know they are expensive, but consider that even an 80mm triplet apo will run you in the same range as getting both.

 

As for what there is in PI that I find essential, the gradient removal tools are simply excellent. I also believe, for reasons spelled out on my website, that you are far better off doing the initial saturation work while still linear. I like the color balancing tools. TGVDenoise is my go-to method for noise reduction.

 

I also really like the image calibration and stacking in PI, although that can be done elsewhere for free. But PI does a very high-quality job of it.



#68 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 444
  • Joined: 05 Jan 2014

Posted 18 January 2015 - 05:42 PM

So I'm running an experiment. I attempted to mimic the results I got in PixInsight using DSS and PS. The stacked, calibrated image looked about the same. I was able to remove the background gradient using GradientXterminator, and I have a few different noise removal programs like Noiseware and Neat Image. I could color calibrate easily enough just by aligning the histograms.

 

Where I got stuck was actually bringing out the detail of the nebula. I'm not sure how PI does it, but probably half of M42 was invisible, and you can't see the Running Man at all. The image is clean and flat, but missing a ton of detail. Perhaps I'm missing a key step in PS. What would that be?



#69 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:00 PM

So, here’s my processing example using Mostlyemptyspace’s data.

The first thing I do when I bring up any raw data is to use the ScreenTransferFunction auto stretch feature. This is a non-destructive process: it simply modifies how the image is displayed and does not alter the data itself. This is really great and allows you to keep the data in linear form longer. You can access the STF auto stretch from the menu bar and from a dedicated tool. The radiation-like symbol is the auto stretch button. The linked chain icon locks the RGB channels to the same stretch, so if your data is not color balanced it is best to turn this off before stretching, which is what I did for Mostlyemptyspace's data.

Figure01.jpg
Figure 1: The STF functions on the menu bar

Figure02.jpg
Figure 2: The STF Tool

So the first thing I noticed when I brought up Mostlyemptyspace's Orion Nebula data is that there are low-signal areas. These occur when there is some amount of drift over the total exposure time. This could be intentional, from dithering between frames, or unintentional, due to differential flexure, polar misalignment or unguided imaging. Before we do anything else we need to crop the image to include just the higher-signal area. This is usually obvious in PixInsight when we have the ScreenTransferFunction auto stretch applied, since the STF stretch makes the data easily visible without modifying anything.

Continued...

Regards,
David



#70 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:01 PM

To crop the image you can use the DynamicCrop tool, the Crop tool, or, if you are feeling very brave, even PixelMath, but for our case DynamicCrop is the easiest to use.  To launch the tool, either go to the Process Explorer and double-click on it, or find it in the Process menu.  Once the tool is active, click and drag in an image window to create the boundaries for the crop.

 

Figure03.jpg
Figure 3: Cropping the image

 

Continued...

 

Regards,
David



#71 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:01 PM

After we've cropped the image we can see that there are some gradients and banding visible.  The first thing I'm going to do is go after the gradients.  The banding we can remove with a script later on.  Bring up the DynamicBackgroundExtraction process and click on your image.  I generally start by changing the samples per row setting to give me quite a few sample points and then click on the Generate button.  Once I have a bunch of samples I review their placement and make sure that none are placed on stars or on nebulosity.  If any samples are covering something other than background, either shift them to the side or delete them.  Generally I set the correction method to 'Subtraction' before executing the process.  Once all the samples look good, click the execute button (green check mark).  This will give you two new images: one is the model of the background, the other is the corrected image.

After reviewing the output, make any modifications to the samples you feel necessary, close the new images and try it again.  Once you are comfortable with the results, check 'Discard background model' and 'Replace target image' and execute the process again.  This will update the existing image.  Because the background has shifted, you will probably need to redo the STF stretch.  Depending on your data it may look worse after the STF stretch, as gradient removal allows more of the data, and its associated noise, to be seen.  Also, you will likely be able to link the RGB channels now, since the subtraction normalizes the background levels.

Figure04.jpg
Figure 4: DynamicBackgroundExtraction
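
(As an aside, nothing you need to do here: with the correction method set to 'Subtraction', the correction DBE applies is conceptually just a per-pixel subtraction of the generated model, which you could sketch in PixelMath as

$T - DBE_background

where $T is the target image and DBE_background stands in for whatever your background model view is named. If you ever try this directly, PixelMath's 'Rescale result' option keeps the result inside the normalized [0,1] range.)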

 

Continued...

 

Regards,
David



#72 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:02 PM

This particular image was taken with a DSLR and has some banding left over after calibration.  This can happen when gradients present in the light frame data are not present in the calibration data: light pollution, the Moon and your neighbor's flood lights can all create gradients in your light frames, and since similar patterns are in the flat frames, the gradient will throw the flat calibration off.  Luckily there is a PixInsight script called CanonBandingReduction that removes this banding amazingly well.  It is designed to work on horizontal banding, so we first need to rotate the image.  The way I chose to do this is by going to the Image->Geometry->Rotate 90° Clockwise menu item (there are other tools to do this as well).  Next bring up the CanonBandingReduction script.  I didn't change any of the settings and just hit OK.  After this is done, rotate the image back.  At this point the background should be looking fairly even.  With this image there is some streakiness left over, which we will try to kill with noise reduction.  Sometimes CanonBandingReduction can cause or expose additional gradients.  That was the case with this data, so I went back and did another pass of DBE.

 

Figure05.jpg
Figure 5: CanonBandingReduction

 

Figure06.jpg
Figure 6: Where we are at after DBE and CanonBandingReduction

 

Continued...

 

Regards,
David



#73 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:03 PM

Next we are going to prep some masks for other processing.  The first mask I will call a light mask, which is essentially just the luminance data from the image with an STF stretch permanently applied.  Go to Image->Extract->Lightness (CIE L*).  Apply an STF auto stretch to it.  Next drag the triangle icon (New Instance) from the STF window to the bottom bar of the HistogramTransformation window, then drag the triangle icon from the HistogramTransformation window to the luminance image.  This applies the STF stretch permanently to the image (make sure to clear the STF stretch afterwards, otherwise your image will appear all white even though it isn't).
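
(If you prefer PixelMath over the menus, the lightness extraction can be sketched with the expression

CIEL( $T )

run with 'Create new image' checked and the output color space set to grayscale; CIEL() is PixelMath's CIE L* component function. The menu route above is the simpler path.)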

 

After we make the light mask we are going to use it to make a star mask.  Bring up the StarMask process; for running on stretched data we will need to modify the default values.  This takes some experimentation, and in some cases you may need to use different scale values to catch all the stars, which is the case with the Orion area.  For the first pass I used the settings in Figure 7 and applied the process to the light mask.  Figure 8 shows the second pass, also applied to the light mask.  I had problems getting the Trapezium stars to show up when using the light mask, so for the last pass I used the settings in Figure 9 and applied it to the main image.

 

Figure07.jpg
Figure 7: First Pass Star Mask

 

Figure08.jpg
Figure 8: Second Pass Star Mask

 

Figure09.jpg
Figure 9: Third Pass Star Mask

 

We need to make one star mask out of these, and the best option I've found is to take the per-pixel max of all the images.  Bring up PixelMath and enter this equation into the RGB/L field: max(star_mask, star_mask1, star_mask2), then apply it to the original star_mask image.

 

Figure10.jpg
Figure 10: PixelMath Max of star masks

 

Figure11.jpg
Figure 11: The Final Star Mask

 

Continued...

 

Regards,
David



#74 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:03 PM

Next we are going to make some copies of the light mask to be used for deconvolution and noise reduction.  Make 3 copies of the light mask by dragging the tab in the left bar to somewhere on the PI desktop.  Rename them: deconv_mask, tgv_mask and mmt_mask.

 

Masks are used to protect certain parts of an image while manipulating it.  The brightest parts of the mask allow the image to be altered, while the dark parts protect it.  For example, if you want to do noise reduction but don't want to affect the stars or brighter parts of your image, you can take the light mask and apply it to the image.  In this case the mask is actually the opposite of what we want, so we can invert it with a click of the 'Invert Mask' button on the menu bar (or through the Mask menu).  You can also enable or disable the visibility of the mask with the 'Show Mask' button, so you can see what's happening to your image while processing it.
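
(Inverting is also trivial to express in PixelMath if you ever need it inside an expression: an inverted mask is just

1 - $T

applied to the mask image itself. For our purposes the 'Invert Mask' button is the quicker route.)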

 

Let's start on the deconvolution mask.  Normally when I do deconvolution I use the light mask without modification, but for this image it was difficult to control the effects, so I decided to reduce the brightness of the mask.  In this case I simply cut the histogram range of the image in half, so instead of ranging from 0 to 1 it now ranges from 0 to 0.5.  I used the CurvesTransformation tool, clicked on the upper right end point inside the histogram display and dragged it down to 0.5 (see Figure 12).  Apply this to the deconv_mask image and you will see it get darker.

 

Figure12.jpg
Figure 12: Decreasing the range of the deconvolution mask
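
(The same range cut can be done with a one-line PixelMath expression instead of the curve, if you prefer:

$T * 0.5

applied to deconv_mask. Multiplying every pixel by 0.5 maps the 0 to 1 range down to 0 to 0.5, which is exactly what dragging the curve end point does.)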

 

We are going to do something similar for the TGV mask.  In this case I want to compress the range from the top and bottom.  TGVDenoise can use an image for support, but I find that restricting its application with a mask is useful.  I want to protect the bright portions of the image more than the darker parts, but not by too much, which is why I compress the range of the mask rather than clip it.  Figure 13 shows the CurvesTransformation window before applying it to the tgv_mask window.  After this I want to shift the peak of the histogram to the mid point, so I used the HistogramTransformation tool with a change to the midpoint and applied it to the tgv_mask (see Figure 14).  When used with the TGV process this will give us close to a 50% blend between the original image and the noise-reduced image, with a slight preference for allowing more NR in the background than in the high-signal areas.

 

Figure13.jpg
Figure 13: Modifying the TGV mask

 

Figure14.jpg
Figure 14: Further modification of the TGV mask
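
(If you would rather do this numerically than by eye, both steps can be sketched as a single PixelMath expression; the numbers here are only placeholders, the actual settings are in Figures 13 and 14:

mtf( 0.4, 0.1 + 0.8*$T )

The inner term compresses the 0 to 1 range into 0.1 to 0.9, and mtf() is PixelMath's midtones transfer function, the same curve the HistogramTransformation midpoint slider applies.)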

 

I also use the MultiscaleMedianTransform for large-scale noise reduction.  I use very aggressive settings for this tool, so our mask needs to be very protective to keep more of the original image.  In this case I just want to shift the midpoint of the histogram up quite a ways.  I usually center it around the 3/4 mark, but sometimes go all the way to the 7/8 mark.  Use the HT tool as shown in Figure 15 and apply it to the mmt_mask.

 

Figure15.jpg
Figure 15: HistogramTransformation modification of the mmt_mask
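
(Again this is just a midtones shift. In PixelMath terms, applying

mtf( 0.25, $T )

to the mmt_mask sends a mid-level pixel of 0.5 to 0.75, since mtf(m, 0.5) = 1 - m; a midtones value of 0.125 would center things around the 7/8 mark instead.)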

 

Continued...

 

Regards,
David



#75 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3,424
  • Joined: 25 Sep 2010

Posted 19 January 2015 - 01:04 PM

We are almost done with support images, but we need one more.  For deconvolution to work best we need to measure the point spread function (PSF) of the image.  The PSF tells the deconvolution process how the image was convolved.  Since stars are point sources, and only cover more than one pixel because of how our atmosphere and optics distort the light, we can measure them to try and undo some portion of that blurring.  Bring up the DynamicPSF process and start clicking on stars in the main image.  For each star you click on you will see a green box show up in the image window, as well as a row of data in the DynamicPSF window.  You want to select a range of stars across the image.  Pick varying brightness levels, but if you see any stars show up as Gaussian models, delete them.  The Moffat equation is a better fit to the actual profile of stars, and usually the tool will only give you a Gaussian model if the star is overexposed or is possibly a galaxy or other nebulous structure.  Figure 16 shows the DynamicPSF window after I've selected several stars.

 

Figure16.jpg
Figure 16: DynamicPSF

 

Next select one of the lines in the DynamicPSF window, then hit CTRL+A to select all of the star data (or you can shift-click).  Click on the camera icon, which will generate a synthetic PSF based on the data provided.  For some reason the PSF generated by this can have the wrong size.  In my test it was way off and generated a PSF of 13251 x 13251 pixels.  We need to crop that down to a proper size and adjust it.  Click on the Sigma icon next.  This will pop up a window with information about the PSF.  Click on the camera icon in that window and it will generate an appropriately sized PSF; however, this model is not correct in terms of its rotation.  Close the Average Star Data window and bring up the Crop process.  Select the PSF view from the drop-down menu and then, in the target px fields, put in the width and height of the second PSF we generated (from the Average Star Data).  You can get the size information by clicking on that image and looking at the bottom bar of PI.  In this case my PSF1 image had a width and height of 27.  Put those values into the Crop tool and apply it to the PSF window.  You can close the DynamicPSF window and the PSF1 image.

We need to make one last modification to the PSF image though.  The original PSF was normalized, and when we cropped the image we messed that up some, so we need to re-normalize it.  Bring up the HistogramTransformation tool and, if it is not tracking the active window, select the PSF view from the drop-down list.  Click on the 'Auto zero shadows' button (it's on the Shadows row just past the clipping readout) and then apply it to the PSF image.

 

We should now have a large set of support images, as shown in Figure 17.

 

Figure17.jpg
Figure 17: The support images

 

Continued...

 

Regards,
David



