mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Learning PixInsight
      #5738115 - 03/17/13 07:49 AM

Besides Harry's video tutorials..., are there any decent step-by-step instructions that someone completely new to PixInsight could use from start to finish for astro image processing? Thx

terry59
Post Laureate

Reged: 07/18/11

Loc: Colorado, USA
Re: Learning PixInsight new [Re: mmalik]
      #5738120 - 03/17/13 07:55 AM

http://www.ip4ap.com/

bilgebay
Post Laureate

Reged: 11/06/08

Loc: Türkiye - Istanbul and Marmar...
Re: Learning PixInsight new [Re: mmalik]
      #5738123 - 03/17/13 07:56 AM

Yes, have a look at the IP4AP website here.

It's not free though.


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bilgebay]
      #5738129 - 03/17/13 08:04 AM

Thanks Terry/Sedat; I was looking for something like what I... and Scott... have produced: free, practical online instructions. Thx

terry59
Post Laureate

Reged: 07/18/11

Loc: Colorado, USA
Re: Learning PixInsight new [Re: mmalik]
      #5738133 - 03/17/13 08:08 AM

There are a number of those on the PI site

hytham
scholastic sledgehammer

Reged: 12/25/12

Loc: Canadian in the US
Re: Learning PixInsight new [Re: terry59]
      #5739074 - 03/17/13 04:42 PM

Rogelio Bernal Andreo aka my freaking inspiration.

http://www.deepskycolors.com/tutorials.html


bluedandelion
Carpal Tunnel

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: hytham]
      #5739234 - 03/17/13 05:45 PM

Harry's tutorials are the best. Add to that a basic workflow as follows and you are on your way:

Stack (calibrate + integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE local contrast enhancement
ACDNR noise reduction
Saturation boost via curves

The dashed line separates the first three steps, which you do on linear data. The latter steps are done on stretched, and therefore non-linear, data. Harry has a tutorial for each of these major steps. I do these steps (in addition to others) on every single image I process.
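The non-linear "Stretch - histogram transfer" step above is, at its core, a midtones transfer function applied to the linear data. Here is a minimal numpy sketch of that function (illustrative only - this is not PixInsight's HistogramTransformation code, and the image array and midtones value below are made-up placeholders):

import numpy as np

def mtf(x, m):
    # Midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# img: a linear image scaled to [0, 1]; a small midtones value lifts the faint signal
img = np.clip(np.random.rand(256, 256), 0.0, 1.0)   # placeholder data
stretched = mtf(img, 0.05)

Applying a curve like this is what moves you from the linear half of the list to the non-linear half.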

Ajay


srosenfraz
Post Laureate

Reged: 03/06/11

Loc: United States
Re: Learning PixInsight new [Re: hytham]
      #5739591 - 03/17/13 08:32 PM

Quote:

Rogelio Bernal Andreo aka my freaking inspiration.





Ditto.


kbev
super member


Reged: 12/29/10

Loc: Far, far east Mesa
Re: Learning PixInsight new [Re: terry59]
      #5740007 - 03/18/13 02:31 AM

Quote:

There are a number of those on the PI site



+1. In particular I have this one bookmarked so I can refer to it as needed when processing images: Processing a M45 DSLR image. There are others dealing with specific PI processing tools, but as a general overall flow I like referring to this one to help me learn to navigate PI.


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: kbev]
      #5740051 - 03/18/13 03:30 AM

Thanks hytham/Ajay/Scott/Kevin.

Ajay, your outline is very helpful for someone completely new to this; exactly what I was looking for at this time.

Everyone, this is just a start; I will have more specific questions as I try tackling PixInsight. Keep the discussion going, keep your feedback coming, and let's get more granular as we move forward. If possible, and if all goes well, I would like to come up with an instructions doc with your help. Regards


bluedandelion
Carpal Tunnel

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5740432 - 03/18/13 10:43 AM

You are welcome Mike. I myself started with a basic outline from another imager I know (Rainycityastro on CN). One thing I left out is Morphological Transform to reduce stars.

Most of my DSLR images were captured with Nebulosity and I saved the images as compressed FITS. Pixinsight's stacking routine has problems with this. So with these images I do calibration and stacking with the new batch process facility in Neb and then move to PI for registration and final stacking.

PI can handle these files if you first convert them, but that takes a while.

Rogelio's site has some great bits of information too, but Harry teaches you the nuts and bolts of various controls in PI with examples.

Ajay


guyroch
Vendor (BackyardEOS)

Reged: 01/22/08

Loc: Under the clouds!
Re: Learning PixInsight new [Re: bluedandelion]
      #5740446 - 03/18/13 10:52 AM

I just bought Part 1 and Part 2 from IP4AP .

I really need all the help I can get for PixInsight... powerful... but a steep learning curve.

Guylain


hytham
scholastic sledgehammer

Reged: 12/25/12

Loc: Canadian in the US
Re: Learning PixInsight new [Re: guyroch]
      #5741954 - 03/18/13 10:29 PM

Quote:

I just bought Part 1 and Part 2 from IP4AP .

I really need all the help I can get for PixInsight... powerful... but a steep learning curve.

Guylain




Just keep on playing with it and you will come to discover that all of your thoughts about its difficulties were a slight exaggeration. I started learning both PS and PI at the same time, and quite honestly I prefer the workflows in PI, and I absolutely love the scripting aspect of it - so much control. It's fantastic.

You'll never look back

If you have not already signed up on the PI Forums, do so immediately. Wealth of information!


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: hytham]
      #5742385 - 03/19/13 07:55 AM

Hi
What can I do to help make the basics easier to understand?

Always open to suggestions

Harry


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: harry page 1]
      #5743016 - 03/19/13 01:49 PM

Quote:

Hi
What can I do to help make the basics easier to understand?
Always open to suggestions
Harry




Hi Harry - thanks for popping up here, I really appreciate your videos/tutorials. They've helped me come a long way in my processing skills. I bought PI about 6 months ago, but I still feel like a newbie.

One item I'm noticing in the tutorials and the videos is the focus on CCD cameras over DSLR cameras. I'm using a modded Canon 450D, I'm shooting inside a white zone, and I'm having a lot of issues dealing with noise. I feel I've got DBE and ABE down, but noise keeps frustrating me. I've read all the threads on the PI forum about noise, but I always seem to overdo it on the noise reduction and end up with lumps.

Not sure if you have the equipment or if someone else can point me to the tutorial, but I'd like to see more examples of processing with DSLRs in bad conditions. The data in the tutorials I've watched/read is so clean to begin with compared to my DSLR exposures, it looks like Hubble data.

(I know I also have my own issues about expecting too much data in the little imaging time I get, but I'm working on that. I must repeat - I can't get 6 hours of data out of 1 hour of imaging time. )

Thanks!


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: jsines]
      #5743152 - 03/19/13 02:54 PM

Hi
I can assure you that I shoot my images from a very orange place :)
We all have to learn to get the best from our images and data, wherever they were taken.

Could you let me have some of your data and let me look at it to see if I can help?
Harry


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5743178 - 03/19/13 03:05 PM

Quote:

What can I do to help make the basics easier to understand?




Quote:

Add to that a basic workflow as follows and you are on your way

Stack (calibrate + integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data. The latter are done on stretched and therefore non-linear data. Harry has a tutorial for each of these major steps. I do these steps (in addition to others) on every single image I process.






Harry, thanks for offering to help; greatly appreciated!


To start, above is the high-level outline by Ajay; if you could validate it, I am pretty sure it would be an agreed-upon succession and totality of the basic processing steps. Once we have this high-level workflow laid down, we can get into the details and granularity of each step in a methodical manner. I look forward to your validation and/or suggestions. Regards


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5743270 - 03/19/13 03:40 PM

Hi

Yes, it's about right, but of course there may not be a need for HDR wavelets or LHE; it depends on your image.

And of course sometimes less noise reduction is better than too much

Harry


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: harry page 1]
      #5743295 - 03/19/13 03:51 PM Attachment (45 downloads)

Thanks for the offer to help, Harry. The data is of the Rosette Nebula; I posted an inquiry thread in the Image Processing Challenges subforum over at the PixInsight forum explaining my problem. I posted all the lights, the master dark, the master bias, and the master flats on the PixInsight server under the jsines folder. I'm in between light pollution filters because I went from a 1 1/2 inch adapter to a 2 inch adapter to get rid of vignetting. The Astronomik CLS should arrive tomorrow.

I don't mean to cross streams here by referencing another forum, not sure if it's allowed. :P

From the pixinsight post -

* Orion ED80T
* Orion Sirius mount, unguided
* Canon 450D, modded, using a 2 inch adapter
* 50 x 120 second lights at 400 ISO (1 hour, 40 minutes) over 3/3/13 and 3/4/13
* 20 darks at 400 ISO
* 20 flats at 100 ISO
* 100 bias
* stacked and processed in PixInsight
* imaged in a white zone

This is probably processing attempt #6, and I'm still not completely happy with it. I don't know what I can do or if I can do anything. I started imaging at the meridian and imaged until it was about 25 degrees from the horizon. I'm at 42 degrees North in the US. I'm in a white zone, so I'm wondering if that's an issue.

* stacked using the BatchPreProcessing script - I ended up with average SNR increases of 14, 16, and 13 (RGB)
* integrated using Winsorized sigma clipping (a simplified sketch of the idea follows after this list)
* cropped
* used Dynamic Alignment to match a jpeg image from the APOD website to the cropped image as a guide for DBE. I probably put about 10-15 DBE samples in the whole image.
* then processed as I usually do - AtrousWavelets, Background Neutralization, Color Calibration, Histogram, SCNR, ACDNR, HDRWavelets, Histogram, Curves, Histogram, LHE, etc.
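For reference, a simplified numpy sketch of the idea behind Winsorized sigma clipping during integration (this is not ImageIntegration's exact algorithm; the frame array and clipping constants are placeholders):

import numpy as np

def winsorized_sigma_clip_stack(frames, k=3.0, iters=3):
    # frames: (N, H, W) aligned, calibrated frames.
    # Instead of discarding outliers outright, pull them in to the clipping
    # boundary (Winsorization) before re-estimating sigma, then average.
    data = np.array(frames, dtype=np.float64)
    for _ in range(iters):
        med = np.median(data, axis=0)
        sigma = np.std(data, axis=0)
        data = np.clip(data, med - k * sigma, med + k * sigma)
    return data.mean(axis=0)

# usage with synthetic frames: one frame carries a satellite-trail-like outlier
frames = np.random.normal(0.1, 0.01, size=(20, 64, 64))
frames[7, 30, :] += 0.5
stacked = winsorized_sigma_clip_stack(frames)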

I couldn't exactly get the red like I wanted, it's probably over-saturated, and I see I need a field flattener. I'm also wondering if it's a combination of using a dslr and imaging in a white zone that's preventing me from getting a better image. I'm also totally open to the idea that I messed up somewhere. I've tried this object like 3 times over the past few months, and I've ended up throwing it all out because I'm not getting anything good enough to show anyone.

Attached is the best I've been able to come up with so far, and I'm not sure how to proceed - I'm totally open to feedback.


Alfredo Beltran
sage


Reged: 08/01/10

Loc: Bogota, Colombia
Re: Learning PixInsight new [Re: jsines]
      #5743505 - 03/19/13 05:25 PM

I've used this tutorial with good success. It's in Spanish but you can follow it. Has a lot of useful information.

Best regards

Alfredo


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: jsines]
      #5743649 - 03/19/13 06:21 PM

Hi
Ok I will download the files and have a play

Give me a day or so and I will get back to you

Harry


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: harry page 1]
      #5743857 - 03/19/13 07:49 PM

Thanks! I'm also redoing them after learning a few things in the past week. I also didn't have the DSLR_RAW settings changed in Format Explorer. I changed them, but I can't find on the forums if that would have affected the output.

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5743880 - 03/19/13 07:56 PM

Asking this for learning's sake...

How does DBE Magic compare to GradientXTerminator? Are they necessarily doing the same thing, or is one doing something different/additional from the other? If one had both, would running both be overkill/moot? If one had both and were to pick one, which one would be preferred? Thx


Peter in Reno
Postmaster

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #5743886 - 03/19/13 08:00 PM

I feel that PI DBE works better than GradX. DBE appears to be easier and more powerful.

Peter


bluedandelion
Carpal Tunnel

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: Peter in Reno]
      #5743937 - 03/19/13 08:26 PM

Just to be clear, light pollution gradients are not noise. These are removed by the DBE or ABE tools.

All the images I have posted in this forum were acquired with a Canon 350D, which is quite a bit noisier than the newer DSLRs most people are shooting with these days. I do an Atrous Wavelet subtraction as demonstrated by Harry, and sometimes I do a *mild* GreyCStoration correction directly afterwards. This is done on unstretched data. After stretching I do ACDNR for a little more noise reduction.
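For readers who want to see what that wavelet subtraction is doing underneath, here is a short scipy/numpy sketch of the undecimated "a trous" B3-spline decomposition with the finest layer damped (a sketch of the idea only, not PixInsight's AtrousWaveletTransform; the 0.3 damping factor is arbitrary):

import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def atrous_layers(img, n_scales=4):
    # Undecimated 'a trous' wavelet layers; img == sum(layers) + residual.
    c = img.astype(np.float64)
    layers = []
    for j in range(n_scales):
        k = np.zeros((len(B3) - 1) * 2**j + 1)   # insert 2**j - 1 zeros between taps
        k[::2**j] = B3
        smoothed = convolve1d(convolve1d(c, k, axis=0, mode='reflect'),
                              k, axis=1, mode='reflect')
        layers.append(c - smoothed)
        c = smoothed
    return layers, c

img = np.random.normal(0.1, 0.02, size=(128, 128))   # placeholder linear data
layers, residual = atrous_layers(img)
layers[0] *= 0.3          # damp the finest scale, which is mostly noise
denoised = sum(layers) + residual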

With this strategy I could retrieve a decent image at the top of this post. To see how noisy the stacked image was before all these routines, scroll down to the bottom of the same post.

Imaging with a DSLR is going to be inherently noisy. Some of Harry's images, I believe, were taken with a CCD with a Sony chip.

I think we should probably get back to the question that started this thread.

Ajay


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5744527 - 03/20/13 05:39 AM

Some very basic questions/scenarios if someone can help:

Note: CR2 files I am working with at the moment are already in-camera noise reduced.


•Harry's first video 'Alignment and Stacking' starts off with the 'Star Alignment' process using FIT files, but doesn't mention how to convert CR2s into FIT. What manual (not batch) process is used to convert CR2s to FIT in PixInsight, especially ones that are already noise reduced/calibrated?

•What's the easiest/quickest way to get to the 'Star Alignment' stage with in-camera noise-reduced CR2s? How optimal is it to directly use noise-reduced CR2s [not FITs] in the 'Star Alignment' process in PixInsight?

•If I try ImagesPlus-converted CR2->FITs in 'Star Alignment', I get the error "multiple images cannot be used as registration references", although I am using one reference image.

•The 'Star Alignment' process runs if I directly feed it noise-reduced CR2s, but running 'Image Integration' afterwards [on FITs produced in 'Star Alignment'] produces somewhat garbled 'rejection high/low' and 'integration' outputs.


In short, I am very familiar with this CR2 conversion/alignment/combine in ImagesPlus but am kind of stuck getting things going in PixInsight (i.e., CR2 conversion, star alignment, image integration); I have good data, by the way, since the same files work fine in ImagesPlus and produce excellent results. Your help will be greatly appreciated. Thx


Alfredo Beltran
sage


Reged: 08/01/10

Loc: Bogota, Colombia
Re: Learning PixInsight new [Re: mmalik]
      #5744642 - 03/20/13 08:18 AM

Here you will find the recommended workflow for DSLR images with PixInsight.

Best regards

Alfredo


bluedandelion
Carpal Tunnel

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: Alfredo Beltran]
      #5744859 - 03/20/13 10:53 AM

Mike, I've not used CR2 files in PI. In one of the menu items is a batch file conversion utility. I am pretty sure PI can read and display CR2 files. I'll check into it later and report, but perhaps someone else has already tried it.

As I said in an earlier post, I used Nebulosity to calibrate my images with Darks, Flats, Bias and Flat Darks because I saved my initial files as compressed FITS. After calibration I have Neb output 32-bit FITS that can be easily read by PI. This is the most efficient path for me time-, CPU- and disk-space-wise. You could also use the output of calibrated files from ImagesPlus and move on to the stacking routine (Image Integration) in PI.

If you are doing in-camera noise reduction you may still want to do Flat and Bias correction. So in the batch processing utility leave out Darks and ignore any warnings related to this.

As long as PI can read a file, I doubt it matters whether the format is CR2 or FITS.

Ajay


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5744950 - 03/20/13 11:42 AM

Quote:

Some very basic questions/scenarios if someone can help:

Note: CR2 files I am working with at the moment are already in-camera noise reduced.

•Harry's first video 'Alignment and Stacking' starts off with ‘Star Alignment’ process using FIT files but doesn't mention how to convert CR2s into FIT. What manual (not batch) process is used to convert CR2s to FIT in PixInsight, especially the ones that are already noise reduced/calibrated?
•What's the easiest/quickest way to get to 'Star Alignment' stage with in-camera noise reduced CR2s? How optimal it is to directly use noise reduced CR2s [not FITs] in ‘Star Alignment' process in PixInsight?
•If I try ImagesPlus converted CR2->FITs in 'Star Alignment', I get error "multiple images cannot be used as registration references", although I am using one reference image.
•'Star Alignment' process RUNs if I directly feed noise reduced CR2s, but running 'Image Integration' afterwards [on FITs produced in 'Star Alignment'] produces somewhat garbled 'rejection high/low' and 'integration' outputs.

In short I am very familiar with this CR2 conversion/alignment/combine in ImagesPlus but am kind of stuck getting things going in PixInsight (i.e., CR2 conversion, star alignment, image integration); I have good data by the way since same files work fine in ImagesPlus and produce excellent results. Your help will be greatly appreciated. Thx





I suggest watching Harry's video on the Batch PreProcessing script. You don't need to deal with Star Alignment or converting CR2 files to fit files. I'm using the Batch PreProcessing script with CR2 files. Think of it as a replacement for Deep Sky Stacker (with some added benefits!). I open the script, load my bias, flats, darks, and lights (as CR2 files), and it outputs fit files that have already been Star Aligned, along with master bias, master flat, and master dark if they are being created (as fit files). I can also add master frames to the script if I already have those.

After the Star Aligned fit files are created, I only need to run the Image Integration process a few times, using "no rejection" as a baseline to determine the best SNR increase. There is an excellent PowerPoint tutorial they created showing how to do this effectively.
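A crude, stand-alone way to compare two integration results (this is not the SNR figure the script itself reports; it is just a robust noise estimate over the same background patch of each output, and the arrays and region below are placeholders):

import numpy as np

def robust_noise(img, region):
    # Noise estimate from the median absolute deviation of a background patch.
    patch = img[region]
    return 1.4826 * np.median(np.abs(patch - np.median(patch)))

region = (slice(0, 100), slice(0, 100))
no_rejection = np.random.normal(0.1, 0.010, size=(500, 500))   # placeholder stacks
sigma_clipped = np.random.normal(0.1, 0.011, size=(500, 500))
print(robust_noise(no_rejection, region), robust_noise(sigma_clipped, region))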

You can also use the Batch PreProcessing script with CR2 files that have in-camera noise reduction applied. Just add bias, flats, and lights, no darks, then run as usual. Harry mentions this in the video.

Edited by jsines (03/20/13 11:53 AM)


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Alfredo Beltran]
      #5744963 - 03/20/13 11:47 AM

Quote:

Here you will find the recommended workflow for DSLR images with PixInsight.

Best regards

Alfredo




Just to clarify, that thread is a step-by-step process to stack CR2 files. The entire process has been replaced by the Batch PreProcessing script, which can accept CR2 files and output fit files. It is a good thread to understand more how the script works or if you want to start doing it manually before using the script.


Alfredo Beltran
sage


Reged: 08/01/10

Loc: Bogota, Colombia
Re: Learning PixInsight new [Re: jsines]
      #5745048 - 03/20/13 12:28 PM

Quote:


Just to clarify, that thread is a step-by-step process to stack CR2 files. The entire process has been replaced by the Batch PreProcessing script, which can accept CR2 files and output fit files. It is a good thread to understand more how the script works or if you want to start doing it manually before using the script.




Yes, but as the dialog box warns, the Batch process doesn't produce results as good as doing the workflow. I did it both ways and can tell you there's a HUGE difference in the final stacked image. The workflow is not as hard as it appears to be.

Having said that, results for everyone might be different.

Best regards

Alfredo


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Alfredo Beltran]
      #5745170 - 03/20/13 01:40 PM

Quote:


Yes, but as the dialog box warns, the Batch process doesn't produce as good results as doing the workflow. I did it both ways and can tell you there's a HUGE difference in the final stacked image. The workflow is not as hard as appears to be.
Having said that, results for every one might be different.
Best regards
Alfredo





This is very interesting, and I'm glad you mentioned this. I started using PI after the Batch process was put in place, and it seems like the default option now for most. I've only been using the Batch process but I was wondering if I could get better results (as I mentioned above). I was just thinking last night to try the manual process to see if the results were better.

What kind of difference did you find between the two methods to determine the manual process is better?

Also, since the note says that best results are obtained by fine-tuning, did you make any (consistent) adjustments different from the thread instructions to get better results?

Thanks in advance,
Jeff


Alfredo Beltran
sage


Reged: 08/01/10

Loc: Bogota, Colombia
Re: Learning PixInsight new [Re: jsines]
      #5745289 - 03/20/13 02:36 PM

Quote:


What kind of difference did you find between the two methods to determine the manual process is better?

Also, since the note says that best results are obtained by fine tuning, did you make any (consistent) adjustments different to the thread instructions to get better results?

Thanks in advance,
Jeff




Hi Jeff

What I found is that the "manual" workflow gives smoother and cleaner results. In fact, for me, it has produced the best result when stacking images.

I didn't do any fine tuning. I just used the recommended settings in the workflow and used bias, dark and flat frames.

Best regards,

Alfredo


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5745310 - 03/20/13 02:49 PM

Quote:

...I used Nebulosity to calibrate my images with Darks, Flats, Bias and Flat Darks because I saved my initial files as compressed FITS. After calibration I have Neb output 32 bit Fits that can be easily read by PI. This is the most efficient path for me time, CPU and disk-space wise.





First and foremost, I do understand there are various other programs that will do CR2 conversion/align/combine, but I want to learn to do it in PixInsight; that's the whole idea.


Quote:

You could also use the output of calibrated files from ImagesPlus and move on to the stacking routine (Image integration) in PI.




I did try a permutation of this, but as my notes above show, it didn't work. I will try again with a different approach and report back.


I'll try what others have suggested in PI and report back. In short, let's please focus on doing things in PixInsight ONLY at this time before we go off and start mixing and matching. Please keep your good feedback coming. Regards


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5745474 - 03/20/13 03:49 PM

Hi

The script will calibrate and register your images perfectly and you do not need to do the manual stuff.

It is only the integration you might need to run separately; only a check of the rejection maps will tell you this. I.e., you can use the newly created registered and calibrated frames and run integration on its own.

Harry


Ugmul
super member


Reged: 05/17/11

Loc: Tucson AZ
Re: Learning PixInsight new [Re: harry page 1]
      #5745548 - 03/20/13 04:24 PM

I second Harry's post. Use the preprocessing script to get the registered images; from that point you use the ImageIntegration tool to get your image. The preprocessing script can integrate for you, but it has limited options and is not recommended.

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5745861 - 03/20/13 06:19 PM Attachment (16 downloads)

Quote:

The script will calibrate and register your images perfectly and you do not need to do the manual stuff.

It is only the integration you might need to run separately; only a check of the rejection maps will tell you this. I.e., you can use the newly created registered and calibrated frames and run integration on its own.




As I have mentioned above, I have ONLY CR2 lights that are IN-camera noise reduced (I am NOT using any darks, flats or bias frames); here is the batch process I have tried and the error I get in the end. Thx

Note: I am taking all the defaults for batch processing!

PixInsight Version: 1.8 RC4

Continues into the next post...


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5745862 - 03/20/13 06:19 PM Attachment (13 downloads)

...continues from the previous post. Any ideas?

Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5745908 - 03/20/13 06:38 PM

The batch pre-processing script, it seems, can be a bit picky and refuse to work if you do not have things the way it expects. It could be that it just does not work without ANY calibration files.

So, ignore the script for this particular data set. Here are the steps I would follow:

1) go to Script -> Batch Processing -> BatchDebayer (convert all .CR2 files into colour FITS)
2) run the "StarAlignment" process to register/align the images
2a) Select all your debayered images using the Add Files button
2b) at the top of the process dialog change the Reference Image type from View to File, then click the little down-arrow / blue triangle next to it and select one of your debayered images (I usually select one from the middle, hoping my drift has been fairly constant)
2c) (optional) select an output directory for the new aligned files and/or change the postfix to a prefix to make it easier to select your aligned images later
2d) hit the blue circle at the bottom of the process dialog
3) run the ImageIntegration process on the aligned images. Try various methods to get the best SNR in the end result (the tool-tips on the rejection algorithm type selector are useful here)

4) (and 5 and 6 and....... 73 ) make the pretty picture!


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5745912 - 03/20/13 06:39 PM

well, here's the thing about BPP. it's hiding the calibration workflow from you. the purpose of BPP *is* to calibrate images, meaning apply bias, dark, flat. etc.

you have not specified any calibration frames, so the script fails.

if you want to convert CR2 to fits (with no calibration), you can use the BatchFormatConversion script. however, be aware that the BFC script does not accept format hints. this means that however the DSLR_RAW file handler is configured, that is the way BFC will process the CR2.

DSLR_RAW can open the file as raw, or debayer it in any number of ways. since you're not calibrating, you might as well set up DSLR_RAW to just debayer the CR2.

to get to the DSLR_RAW configuration, click on the format explorer on the left edge, then double-click DSLR_RAW in the menu that appears.


harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5745915 - 03/20/13 06:40 PM

Hi
This is mainly a calibration script and you are not doing any calibration, so it throws you out.
Just register the images and then use the integration module.

Harry


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Alfredo Beltran]
      #5745954 - 03/20/13 06:54 PM

Quote:

Here you will find the recommended workflow for DSLR images with PixInsight.
Best regards
Alfredo





Sorry, didn't realize you're not using flats, bias, or darks. I thought you were just not using darks. If I were you, I'd use the link above and start on Step 4 (Batch Debayer) with your CR2 files. Those windows that pop up after the script you posted are normal. Debayer, Align, Integrate, and then you've got a stacked file. Then process.


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Falcon-]
      #5746171 - 03/20/13 08:25 PM Attachment (13 downloads)

Quote:

The batch pre-process script can it seems be a bit picky about not working if you do not have things the way it expects. It could be that it just does not work without ANY calibration files.

So, ignore the script for this particular data set. Here are the steps I would follow:

1) go to Script -> Batch Processing -> BatchDebayer (convert all .CR2 files into colour FITS)
2) run the "StarAlignment" process to register/align the images
2a) Select all your debayered images using he Add Files button
2b) at the top of the process dialog change the Reference Image type from View to File then click the little down-arrow / blue triangle next to it and select one of your debayered images (I usually select one from the middle hoping my drift has been fairly constant)
2c) (optional) select an output directory for the new aligned files and/or change the postfix to prefix to make it easier to select your aligned images later
2d) hit the blue circle at the bottom of the process dialog
3) run the ImageIntergration process on the aligned images. Try various methods to get the best SNR in the end result (the tool-tips on the rejection algorithm type selector is useful here)




Falcon, some great info you provided there; I tried the following processes in order:

-Batch DeBayer
-Star Alignment
-Image Integration (I tried Winsorized Sigma Clipping, Sigma Clipping, and the default (No rejection); all with a similar-looking result... a green integration)

Any ideas?


Note: An IP/PS converted/aligned/combined/processed image result of the same data here... for reference and for validity of the data.


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5746195 - 03/20/13 08:35 PM

the green may or may not be correct. zoom in on the image and see if you still see a bayer pattern.

doing batchdebayer directly on the CR2 only makes sense if the DSLR_RAW is set to raw mode. otherwise you're debayering an already debayered image.

if the bayer pattern is really gone, then uncouple the channels in the STF tool (click the little RGB blocks) and then click the A again to recompute the stf.


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: pfile]
      #5746228 - 03/20/13 08:45 PM Attachment (15 downloads)

There are 2x as many green pixels as blue or red pixels, so that could be correct (it just needs colour balancing).

However, pfile is correct: you need to make sure PixInsight is treating raw files as RAW instead of converting them to colour during loading. Below is what my DSLR_RAW settings from the Format Explorer bar are currently set to.

The most important bit is that it is set to "Create RAW Bayer CFA image"


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: Falcon-]
      #5746264 - 03/20/13 08:55 PM

RGB raw works also... i think this is just a matter of preference.

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Falcon-]
      #5746399 - 03/20/13 10:00 PM

Quote:

Below is what my DSLR_RAW settings from the Format Explorer bar are currently set to.




How do I get to 'RAW Format Preferences'? Sorry, I'm new to the interface. I don't even know where to look for the Format Explorer bar you mention.


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5746551 - 03/20/13 11:00 PM

On the right side of the PixInsight window you have several pop-out... um... areas (not sure what they are properly called).

These are (not necessarily in this order) Process Console, View Explorer, Process Explorer, Format Explorer, File Explorer, Script Editor and History Explorer.

The Format Explorer bar contains the settings for each file-type module (DSLR raw, FITS, TIFF, etc.). Just mouse over it and it will pop open; double-click the DSLR_RAW module.


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: Falcon-]
      #5746556 - 03/20/13 11:03 PM

As a general note, with PixInsight no standard interface element conventions have been followed - the PI developer made up his own interface standards. This is unfortunate for its impact on the learning curve, but on the other hand it does mean the interface is identical regardless of platform.

pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: Falcon-]
      #5746611 - 03/20/13 11:33 PM

it's probably because it's based on the Qt toolkit. there's a lot of UI rope there. if you are a programmer the interface makes perfect sense

jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Falcon-]
      #5746635 - 03/20/13 11:49 PM

Quote:


The most important bit is that it is set to "Create RAW Bayer CFA image"




Quote:


RGB raw works also... i think this is just a matter of preference.





The "DSLR_RAW work flow tools" thread says to select "Create RAW debayer image". I selected the first one, which is "Create RAW Debayer" instead of the second one, which is "Create RAW Bayer CFA image".

Are there pluses/minuses to each selection?

Thanks!
Jeff


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: pfile]
      #5746642 - 03/20/13 11:52 PM

Oh sure, the interface makes *sense*, especially given the under-the-hood design of PI's processing backend - it just ignores decades of user training in UI conventions from Windows, Mac OS, KDE, Gnome, etc.

Take, for example, the Triangle, Square and Circle elements at the bottom of a Process dialog window... those are hardly self-explanatory to a new user. Luckily the UI's opaqueness has been greatly reduced simply by better/more tool-tips in the last couple of versions, and now actual inline documentation is starting to show up for many of the Processes. As I have said in the past, though the learning curve is steep, it is worth climbing!


zerro1
Postmaster

Reged: 08/02/09

Loc: Smokey Point , 48.12°N 122.25...
Re: Learning PixInsight new [Re: pfile]
      #5746647 - 03/20/13 11:53 PM Attachment (11 downloads)

Well, now you're chasing your tail, because the Horsehead color was correct. You just needed to re-align the color channels.

I would recommend that you take a look at Warren's latest release, http://www.ip4ap.com/ . It'll save you a lot of frustrating wasted effort just trying to learn the process in PI. He shows how to navigate the interface and simple ways to perform things like aligning the color channels in two clicks of the mouse.

Edited by zerro1 (03/21/13 12:04 AM)


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5746758 - 03/21/13 01:16 AM

Quote:

Quote:


The most important bit is that it is set to "Create RAW Bayer CFA image"




Quote:


RGB raw works also... i think this is just a matter of preference.





The "DSLR_RAW work flow tools" thread says to select "Create RAW debayer image". I selected the first one, which is "Create RAW Debayer" instead of the second one, which is "Create RAW Bayer CFA image".

Are there pluses/minuses to each selection?

Thanks!
Jeff




it's "create raw bayer image" vs. "create raw bayer CFA image".

the raw bayer image is a 3-plane image. the CFA image is a monochrome image. the data represented by both images is the same, but represented in different ways.

in the RGB image, the red pixels are on the red plane, green on the green and blue on the blue. on a given plane, wherever there would be pixels of a different color, there are black pixels on that plane. the CFA image is sort of more like the sensor itself - for a canon camera, the red pixel is next to a green pixel and above the other green pixel. the blue pixel is diagonally opposed to the red pixel.

at some point in PixInsight history the Debayer process could only handle one type of image... now i can't remember which. but now Debayer can handle CFA or RGB bayer images so it does not matter which one you use.
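A small numpy illustration of the two representations described above, assuming an RGGB pattern (the actual pattern depends on the camera):

import numpy as np

def cfa_to_bayer_rgb(cfa):
    # Expand a monochrome CFA mosaic (RGGB assumed) into a 3-plane image where
    # each plane holds only its own colour's pixels and the rest are zero.
    # Same data as the CFA image, just laid out differently.
    h, w = cfa.shape
    rgb = np.zeros((h, w, 3), dtype=cfa.dtype)
    rgb[0::2, 0::2, 0] = cfa[0::2, 0::2]   # R
    rgb[0::2, 1::2, 1] = cfa[0::2, 1::2]   # G
    rgb[1::2, 0::2, 1] = cfa[1::2, 0::2]   # G
    rgb[1::2, 1::2, 2] = cfa[1::2, 1::2]   # B
    return rgb

cfa = np.random.rand(4, 4)                 # placeholder mosaic
print(cfa_to_bayer_rgb(cfa)[..., 1])       # the green plane, with holes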


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Falcon-]
      #5746772 - 03/21/13 01:23 AM Attachment (9 downloads)

Quote:

These are (not necessarily in this order) Process Console, View Explorer, Process Explorer, Format Explorer, File Explorer, Script Editor and History Explorer.




All I have is this:

-Process Console
-Process Explorer
-Object Explorer
-History explorer


Mine is a fairly new install, and I was careful NOT to modify the default look and feel. Am I missing something here? How do I get back the left tabs that I might be missing? Mine is PixInsight 1.8 RC4. Thx


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746780 - 03/21/13 01:27 AM Attachment (13 downloads)

Spoke too soon, found it here:

Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5746785 - 03/21/13 01:29 AM

hmp. Well ok then. I am also using 1.8RC4 but it is possible that preferences from 1.7 and 1.6 have carried forward.

Well in that case look under the Explorer Windows sub-menu of the View menu for Format Explorer


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5746838 - 03/21/13 02:56 AM Attachment (9 downloads)

Quote:

"create raw bayer image" vs. "create raw bayer CFA image".

...

but now Debayer can handle CFA or RGB bayer images so it does not matter which one you use.




I tried this...

Edited by mmalik (03/21/13 03:04 AM)


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746839 - 03/21/13 02:56 AM Attachment (11 downloads)

and this...

...without any impact on the final outcome of integration.


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746840 - 03/21/13 02:56 AM Attachment (14 downloads)

After running Batch DeBayer, Star Alignment and Image Integration, Auto Stretch looks like this...

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746842 - 03/21/13 02:56 AM Attachment (17 downloads)

Auto DD in ImagesPlus of PixInsight Integration.FIT looks like this...

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746843 - 03/21/13 02:56 AM Attachment (11 downloads)

Auto DD in ImagesPlus of ImagesPlus CombineFilesAVG.FIT looks like this...

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746845 - 03/21/13 02:57 AM Attachment (17 downloads)

Histogram of PixInsight Integration.FIT looks like this...

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746846 - 03/21/13 02:57 AM Attachment (13 downloads)

Histogram of ImagesPlus CombineFilesAvg.FIT looks like this...

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5746859 - 03/21/13 03:09 AM

Regardless of the greenish-looking auto-stretched Integration.FIT in PixInsight, the results of my testing thus far tell me that PixInsight's "natively" converted/aligned/integrated file will be almost impossible to process, based on the DD and histogram findings above. Your feedback is welcome if you suggest otherwise. Regards

Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5746866 - 03/21/13 03:18 AM

That looks fairly expected. PixInsight's histogram may *look* odd at first, but it is displaying data that originated in a 12-bit or 14-bit space in a 32-bit space - once the black and white points are set correctly it will look very much more like the one from ImagesPlus.

The other issue is the fact that PixInsight has *at this stage* done no colour balance corrections AT ALL, not even to account for the fact that there are 2x as many green pixels.

From here you get into post processing including colour calibration. Check out things such as Harry's video tutorials. He has examples with background gradient removal (DBE) and colour calibration.


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: Falcon-]
      #5746868 - 03/21/13 03:20 AM

BTW - if you can upload that 32bit floating point FITS to some place and link it I would be happy to give your green image a try in PixInsight here....

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Falcon-]
      #5746880 - 03/21/13 04:24 AM Attachment (9 downloads)

Quote:

if you can upload that 32bit floating point FITS to some place and link it I would be happy to give your green image a try in PixInsight




Sure; PixInsight integration.fit uploaded here....
EDIT: Download the latest integration.fit in case you had downloaded an earlier version.

If you can, please elaborate on the granular tasks you perform so folks can follow along for learning's sake, as well as for any future documentation I might create. Regards


Note: ImagesPlus CombinedFilesAvg.FIT can be found here....


While we are on the subject, if I try opening an ImagesPlus converted/aligned/combined file (CombinedFilesAvg.FIT) in PixInsight I get the following error; please see if you or someone can help. Thx


bluedandelion
Carpal Tunnel

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5747297 - 03/21/13 10:39 AM

Try this. It's part of my workflow and works with images acquired with my 350D.

Ignore the greenish cast. Trim the edges of the stack and do a DBE correction right away. Most of the color balance will be restored. An auto screen stretch should look mostly neutral.

Do a further color correction via the star method, since there are no large galaxies in the field. If the greens still persist, use SCNR.

Ajay


jsines
sage

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: pfile]
      #5747456 - 03/21/13 12:02 PM

Quote:


in the RGB image, the red pixels are on the red plane, green on the green and blue on the blue. on a given plane, wherever there would be pixels of a different color, there are black pixels on that plane. the CFA image is sort of more like the sensor itself - for a canon camera, the red pixel is next to a green pixel and above the other green pixel. the blue pixel is diagonally opposed to the red pixel.
at some point in PixInsight history the Debayer process could only handle one type of image... now i can't remember which. but now Debayer can handle CFA or RGB bayer images so it does not matter which one you use.





thanks, pfile. I appreciate your help.


Quote:


After running Batch DeBayer, Star Alignment and Image Integration, Auto Stretch looks like this...





That is in part because you have the channels linked. Uncheck the icon at the top left, the one with the chains, and you will unlink your channels, then click the orange/black circle to auto-stf. It's the first thing I do when I open Screen Transfer Function...always.
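To see why unlinking matters, here is a hedged numpy sketch of a per-channel auto-stretch in the spirit of the STF auto-stretch (not PixInsight's exact algorithm; the 2.8 MAD shadow clip and 0.25 target are just commonly used illustrative values). With linked channels one set of parameters is computed for all three channels, so a uniform colour cast stays visible; unlinked computes them per channel and hides it on screen:

import numpy as np

def mtf(x, m):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_stretch(chan, target=0.25, shadow_k=2.8):
    # Clip shadows at median - k*MAD, then pick the midtones value that sends
    # the clipped median to 'target'.
    med = np.median(chan)
    mad = 1.4826 * np.median(np.abs(chan - med))
    shadows = max(med - shadow_k * mad, 0.0)
    x = np.clip((chan - shadows) / (1.0 - shadows), 0.0, 1.0)
    med_c = np.median(x)
    m = (med_c * (1.0 - target)) / (target + med_c * (1.0 - 2.0 * target))
    return mtf(x, m)

rgb = np.clip(np.random.rand(64, 64, 3) * [0.9, 1.2, 0.8], 0, 1)  # green-heavy placeholder
unlinked = np.dstack([auto_stretch(rgb[..., c]) for c in range(3)])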


Quote:


Sure; PixInsight integration.fit uploaded here....
EDIT: Download the latest integration.fit in case you had downloaded an earlier version.
If you can, please elaborate on granular tasks you may perform so folks could follow along for learning sake as well as any future documentation I might create. Regards




I'll also download your integrated file tonight when I get to my home computer with PI on it and see what I get. If possible, could you upload the original CR2 files that have in-camera noise reduction? I could run them through the whole process to see what I end up with.

Hope this helps.


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5747515 - 03/21/13 12:34 PM

cant download the fits files without a microsoft account which i don't have/don't want... can you host the files somewhere that does not need a login?

mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5747608 - 03/21/13 01:10 PM

Quote:

cant download the fits files without a microsoft account which i don't have/don't want... can you host the files somewhere that does not need a login?




I see the problem; let me try fixing if I could. Thx


Ugmul
super member


Reged: 05/17/11

Loc: Tucson AZ
Re: Learning PixInsight new [Re: mmalik]
      #5747635 - 03/21/13 01:20 PM

You can always run the background neutralization tool first to remove the greens. Follow with the DBE tool to remove gradients from your image. After that, run a color calibration to bring the color channels back in line.

Remember to do these on the linear FIT, using the screen transfer function to preview.

Colors will always look odd until you do these steps. Green or red backgrounds are very common with a DSLR.
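Conceptually, background neutralization just measures the per-channel background level in a star-free patch and equalizes it. A rough numpy sketch (one simple multiplicative version; PixInsight's BackgroundNeutralization has several working modes, and the region below is a placeholder):

import numpy as np

def neutralize_background(rgb, region):
    # rgb: (H, W, 3) linear image; region: slices over a star-free patch.
    # Scale each channel so its background median matches the mean of the three.
    bg = np.array([np.median(rgb[region + (c,)]) for c in range(3)])
    return rgb * (bg.mean() / bg)

img = np.random.rand(100, 100, 3) * [0.8, 1.2, 1.0]   # placeholder with a green cast
balanced = neutralize_background(img, (slice(0, 20), slice(0, 20)))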


guyroch
Vendor (BackyardEOS)

Reged: 01/22/08

Loc: Under the clouds!
Re: Learning PixInsight new [Re: Ugmul]
      #5747722 - 03/21/13 02:19 PM

Being on the PixInsight learning wagon myself I'm loving this thread. Keep it alive!

Thanks to all who are posting tips.

Guylain


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: guyroch]
      #5747795 - 03/21/13 03:00 PM

also it may not make sense to open fits files from one application in another. the fits standard is poorly defined, so lots of representations are legal.

pixinsight saves fits files as 32-bit (or 64-bit) floating point numbers in the range [0..1]. other apps (like maxim) save fits files in the range [0.0...65535.0].

the fits reader in pixinsight does have a rescaling function so if you know how the data is represented you can handle it. images plus, i don't know. it might be interpreting the PI data differently and that's why it looks so dark. theoretically though all the data should still be in there.
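For anyone poking at the files outside either program, a short astropy sketch that makes the range difference visible and rescales to [0..1] (the file name is a placeholder, and the 65535 divisor assumes the other application used the convention described above):

import numpy as np
from astropy.io import fits

data = fits.getdata("integration.fit").astype(np.float64)
print("min/max as stored:", data.min(), data.max())

if data.max() > 1.0:          # stored on a [0..65535] scale
    data = data / 65535.0     # bring it back to PixInsight's [0..1] convention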


mmalik
Post Laureate

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5747971 - 03/21/13 04:20 PM Attachment (8 downloads)

Quote:

Ignore the greenish cast. Trim edges of the stack and do a DBE correction right away. Most of the color balance will be restored. An auto screen stretch should look mostly neutral.




Thanks for the tip; I tried running DBE on Integration.FIT and the results look better... no more green cast upon Screen Auto Stretch. See the pic below...


Quote:

Most of my DSLR images were captured with Nebulosity and I saved the images as compressed FITS. PixInsight's stacking routine has problems with this. So with these images I do calibration and stacking with the new batch process facility in Neb and then move to PI for registration and final stacking.





The thing I am not so sure about is whether DBE is the "right" way to fix the green cast from the 'conversion/alignment/integration' processes, or whether this is just masking the problem. Ideally, wouldn't one want a normal-looking integration.fit upon Screen Auto Stretch, like in other programs? At least I would like to see an 'OK'-looking conclusion to a core step in the process instead of hoping the next step will take care of the previous one.


Would it be a correct presumption that the 'conversion/alignment/integration' processes are NOT optimized in PixInsight? This is NOT a critique of PixInsight in any way; I would just like an honest assessment of the 'conversion/alignment/integration' problem, if there is one, and whether one should skip these processes in PixInsight if they are deficient.


If true, this complicates matters further, since I am NOT able to process CR2s converted/aligned/combined in ImagesPlus for, e.g., DBE processing in PixInsight.

Quote:

...if I try opening an ImagesPlus converted/aligned/combined file (CombinedFilesAvg.FIT) in PixInsight I get following error; please see if you or someone can help.







harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5747996 - 03/21/13 04:42 PM




Quote:

The thing I am not so sure about is whether DBE is the "right" way to fix the green cast from the 'conversion/alignment/integration' processes, or whether this is just masking the problem. Ideally, wouldn't one want a normal-looking integration.fit upon Screen Auto Stretch, like in other programs? At least I would like to see an 'OK'-looking conclusion to a core step in the process instead of hoping the next step will take care of the previous one.



The problem with other programs is that they carry out an automatic white balance (sometimes OK, sometimes not).
PI will only do what you tell it to do; it gives you more control, which you will come to love.

Harry

Edited by harry page 1 (03/21/13 04:43 PM)


Falcon-
Post Laureate

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: harry page 1]
      #5748055 - 03/21/13 05:13 PM Attachment (22 downloads)

Here is my quick go at your data. The green cast is actually not all that strong; the auto-STF makes it look worse than it is.

In any case, this is what I did:

1) cropped out the black borders left over from alignment
2) used BackgroundNeutralization
3) ran SCNR on the green channel (see the sketch after this list)
4) ran ColorCalibration (but it did not change things very much)
5) used HistogramTransformation to do an initial rough stretch
6) ran ACDNR to reduce noise a bit
7) used Curves a couple of times to make smaller changes to intensity and saturation
8) created a StarMask and ran MorphologicalTransformation to reduce star sizes a bit (it was not really needed here, but I wanted to see what difference it made - the answer was not much)
9) one final, very small HistogramTransformation tweak.
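For step 3, a minimal numpy sketch of the basic rule behind SCNR's 'average neutral' green removal (PixInsight's SCNR offers several protection methods and an amount parameter; this only shows the core idea):

import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    # Cap green at the average of red and blue, blended by 'amount'.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_new = np.minimum(g, (r + b) / 2.0)
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_new
    return out

img = np.clip(np.random.rand(64, 64, 3) + [0.0, 0.1, 0.0], 0, 1)  # green-cast placeholder
fixed = scnr_average_neutral(img)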


pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: Falcon-]
      #5748080 - 03/21/13 05:21 PM

the "right" way to color calibrate images is to do color calibration. however, even from the darkest skies almost every astrophotograph is going to have gradients, so DBE is pretty much always something you are going to do. by default DBE does not normalize the image (which is why the color cast goes away) but if you check 'normalize' you'll see that your color issues come back.

anyway, you are kidding, right? the calibration, registration and alignment in pixinsight is the best there is, hands down.

you're jumping way in over your head and just going wild all over the place, which is fine, because that's a good way to learn. but don't come to wrong conclusions about the software because of user error.


Post Extras: Print Post   Remind Me!   Notify Moderator  
Falcon-
Post Laureate
*****

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: Falcon-]
      #5748094 - 03/21/13 05:24 PM

Regarding the processing I did: I did not try to do anything to preserve or accentuate the details in the Horsehead or the reflection nebulosity; a bit of time spent there would likely improve things quite a bit. Also, I did not use DBE since there really was no clean background to work with for background extraction. With a narrow FOV like this, entirely full of nebulosity, DBE is likely to remove some of your nebula colour unintentionally, so I used BackgroundNeutralization instead.



An important tip for the ColorCalibration and BackgroundNeutralization processes is to make sure you set the "Upper Limit" for the background level. It defaults to 0.1, while in your image the background was closer to 0.2. If you do not do this, the process will have a hard time keeping the background a neutral/balanced colour.

The way I figure out what level to set the Upper Limit to is by using the HistogramTransformation process window. In the lower left of the process window is a checkbox; enabling that will make the displayed histogram always track whichever image View you are currently working on. If you hover your mouse over the histogram graph, it will show the coordinates of the mouse pointer/crosshair in text just below the graph. Put the crosshair just to the left of the base of the histogram's sharp background peak and you have your upper limit value. In this case I *think* it said 0.18-something, so I used 0.2 in the BackgroundNeutralization process.
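
If it helps to see the idea outside the GUI, here is a rough sketch (plain Python/numpy, not PixInsight code) of locating that sharp background peak in a [0,1]-normalized image; the synthetic image below is made up just to exercise the function.

import numpy as np

def background_peak(img, bins=1024):
    # Approximate intensity of the sky-background peak (the tall histogram mode).
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Example with a fake sky background centred near 0.18: the function returns
# ~0.18, so you would round up a little and use ~0.2 as the Upper Limit.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.18, 0.01, (512, 512)), 0, 1)
print(round(background_peak(img), 3))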


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5748114 - 03/21/13 05:34 PM

Quote:


One thing I am not so sure about is whether DBE is the "right" way to fix the green cast from the conversion/alignment/integration steps, or whether it is just masking the problem. Ideally, wouldn't one want a normal-looking integration.fit upon screen Auto Stretch, like in other programs? At least I would like to see an 'OK'-looking conclusion to a core step in the process instead of hoping the next step will take care of the previous one.

Would it be a correct presumption that the conversion/alignment/integration processes are NOT optimized in PixInsight? This is NOT a critique of PixInsight in any way; I would just like an honest assessment of whether there is a conversion/alignment/integration problem, and whether one should skip these processes in PixInsight if they are deficient.


If true, this complicates matters further. Am I NOT able to process CR2s that were converted/aligned/combined in ImagesPlus (for, e.g., DBE processing) in PixInsight?






It seems to me like you're trying to open a FITS file created in ImagesPlus in PixInsight, or a FITS file created in PixInsight in ImagesPlus, finding that the files are not 100% compatible, and then concluding that PixInsight has a problem. I don't think this is the fault of ImagesPlus or PixInsight; I think it's just the way the two programs implement the FITS format.

The solution is to do everything in one program, and since this thread is about PixInsight, let's do it all in PixInsight. I'm not sure why you would want to calibrate/align/register in ImagesPlus and then move over to PI anyway, since the PI process is better.

Was the PixInsight integration.fit file you uploaded created solely in PixInsight using your CR2 files?


Post Extras: Print Post   Remind Me!   Notify Moderator  
Falcon-
Post Laureate
*****

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5748136 - 03/21/13 05:40 PM

Quote:

One thing I am not so sure about is whether DBE is the "right" way to fix the green cast from the conversion/alignment/integration steps, or whether it is just masking the problem. Ideally, wouldn't one want a normal-looking integration.fit upon screen Auto Stretch, like in other programs? At least I would like to see an 'OK'-looking conclusion to a core step in the process instead of hoping the next step will take care of the previous one.




DBE, BackgroundNeutralization, and ColorCalibration are the way to fix the green cast. (not necessarily all three, but nearly always at least one of the three).

Keep in mind that other apps either apply a rough-guess initial version of BackgroundNeutralization/ColorCalibration or try to do it automatically. In other words, the "problem" is always there with any program; PixInsight just chooses not to hide it from you.

As to whether DBE specifically is the right way: I would say that for *THIS* instance, where the field of view is narrow and the "background" is all desired target, I would not use it. DBE is going to look at the gradient of brightness in the Ha curtain wall behind the Horsehead and try to remove it as if it were a light pollution gradient. MOST of the time DBE would be desirable; for this image, good flats and no DBE is what I would use for best results.


Quote:

Would it be a correct presumption that the conversion/alignment/integration processes are NOT optimized in PixInsight? This is NOT a critique of PixInsight in any way; I would just like an honest assessment of whether there is a conversion/alignment/integration problem, and whether one should skip these processes in PixInsight if they are deficient.




Basic alignment may be only on par with many apps, but the mosaic tools are superior. Calibration and especially integration are superior, in my experience, to Nebulosity and DeepSkyStacker.

Quote:

Am I NOT able to process CR2s that were converted/aligned/combined in ImagesPlus (for, e.g., DBE processing) in PixInsight?




Check ImagesPlus for FITS format options. It may be that a slight change in the way ImagesPlus saves FITS files will let PixInsight work with them.

Unfortunately, FITS files in general are just not as standard as one might hope. It is such an open/permissive "standard" that each program has decided to implement it in different ways. PixInsight can save and open various types of FITS (16- and 32-bit integer, 32- and 64-bit floating point, a few others) as well as a few of the ways to arrange the coordinates in the FITS files themselves. I also know Nebulosity has preferences for saving its FITS files either in the "ImagesPlus" way or the "Maxim" way just to try to work around such problems (but it still cannot open the PixInsight 32-bit float type). Perhaps ImagesPlus can output a 16-bit or 32-bit TIFF?
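
As a workaround outside both programs, a small script can sometimes bridge incompatible FITS variants by re-saving the data as plain 32-bit float. This is only a sketch using the third-party astropy library (not part of ImagesPlus or PixInsight); the output file name is made up, and it assumes the image lives in the primary HDU.

import numpy as np
from astropy.io import fits

with fits.open("CombinedFilesAvg.FIT") as hdul:           # the ImagesPlus output
    data = hdul[0].data.astype(np.float32)                # force 32-bit float pixels
    header = hdul[0].header.copy()
for key in ("BSCALE", "BZERO"):                           # drop integer scaling keywords
    header.remove(key, ignore_missing=True)
fits.writeto("CombinedFilesAvg_f32.fits", data, header, overwrite=True)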


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: Falcon-]
      #5748233 - 03/21/13 06:24 PM

Mike, you raise some valid points:

About the color cast: I like control of my data, so I do not do an auto white balance. Nebulosity always gave me a yellowish-tinged stack and DSS a blue-tinged one. The point is that all images have to be color balanced at some point, as Harry and Rob (pfile) have pointed out. Sometimes this happens in camera, as with "daylight" photography. A modded DSLR is not going to give a color-balanced image. People use grey cards and other techniques to find the correct balance. I had to do this with Nebulosity.

The cast you see with the screen stretch (STF, for Screen Transfer Function) is there because the RGB channels are stretched with the same linked parameters by default. Unlink them and you should see something better. Also remember that what you see with STF hasn't been applied to your data; it's just a way of previewing the image. Only after a color calibration and a proper histogram stretch will you see an accurate representation on the screen. At that point you should not use STF.
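
To make the linked-vs-unlinked point concrete, here is a very simplified sketch in plain Python/numpy (not PixInsight's actual auto-STF, which also sets shadow clipping from robust statistics) of stretching each channel with its own midtones value versus one shared value; the green-heavy data is synthetic.

import numpy as np

def mtf(m, x):
    # Midtones transfer function used by STF/HistogramTransformation.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_midtones(median, target=0.25):
    # Midtones balance that maps `median` onto `target` (solve mtf(m, median) = target).
    return median * (1.0 - target) / (target + median * (1.0 - 2.0 * target))

rgb = np.random.default_rng(1).random((64, 64, 3)) * [0.10, 0.14, 0.09]   # fake green-heavy linear data

# Unlinked: each channel gets its own midtones, so all three backgrounds land near 0.25.
unlinked = np.dstack([mtf(auto_midtones(np.median(rgb[..., c])), rgb[..., c]) for c in range(3)])

# Linked: one midtones value for all channels, so the channel with the higher
# median (green here) comes out brighter -- the "green cast" you see on screen.
m = auto_midtones(float(np.median(rgb)))
linked = mtf(m, rgb)
print(np.median(unlinked, axis=(0, 1)), np.median(linked, axis=(0, 1)))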

About image formats: your difficulty with ImagesPlus files in PI is exactly what I experienced with compressed FITS from Nebulosity. FITS is supposed to be a standard, but often the output of one program cannot be used directly in another. This is an old story. The PI forums may have a solution to your problem; the developer reads posts and responds.

Hope this helps.

Edit: Mostly what Sean said

Ajay

Edited by bluedandelion (03/21/13 06:32 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5748259 - 03/21/13 06:38 PM

Quote:

The solution is to do everything in one program, and since this thread is about PixInsight, let's do it all in PixInsight. I'm not sure why you would want to calibrate/align/register in ImagesPlus and then move over to PI anyway, since the PI process is better.

Was the PixInsight integration.fit file you uploaded created solely in PixInsight using your CR2 files?




Fully agree with you; let's keep all things PixInsight. I was just trying to validate/investigate the green cast, that's all. But yes, I would like to stick to PixInsight; broke my own rule I advocated above.


Yes, the PixInsight integration.fit file I uploaded was created solely in PixInsight using my CR2 files; all that was done to get to integration.fit was:

1. Batch DeBayer
2. Star Alignment
3. Image Integration


I will see if I can upload all the CR2s for all of us to try integrating and/or to reproduce the same scenarios. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #5748274 - 03/21/13 06:46 PM

If there's too much nebulosity, galaxy, or IFN all over the image, I suggest you try ABE instead of DBE. ABE does a wonderful job of automatically building a background model. It's very difficult to pick DBE points if you can't find a good background.

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Peter in Reno]
      #5748394 - 03/21/13 07:44 PM

OK, let's all start over, together; I have uploaded 11 CR2s here....
Note: the CR2s are in-camera noise reduced, so no calibration is needed.

Process I would like to follow is what Ajay laid out:

1. Stack (calibrate+integrate) -> crop boundaries
2. Gradient removal and color correction (DBE, Color calibration, SCNR)
3. Noise removal via Atrous Wavelets
---
4. Stretch - Histogram transfer (levels)
5. HDR wavelets to increase dynamic range
6. LHE Local contrast enhancement
7. ACDNR, Noise reduction
8. Saturation Boost via curves

Note: The dashed line separates the first three steps you do on linear data.


What I would like is for us to move forward one step/task at a time and also share some granular details of how each step was done. I am completely new to the PixInsight interface, so if you tell someone to do something, please also explain how to actually do it in somewhat followable detail and/or share screen clips.


Let's do #1, which consists of the following granular steps; at the end of it, let's confirm/reproduce the green cast problem being discussed and move forward from there.


-Batch DeBayer (Script menu-Batch Processing, take defaults)
-Star Alignment (Process menu-All Processes, take defaults, define a reference image)
-Image Integration (Process menu-All Processes, take defaults)


Version I have is 1.8 RC4. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5748895 - 03/22/13 12:54 AM Attachment (16 downloads)

I like your plan. Let's go 1 step at a time on this.

1. Stack (calibrate and integrate). Let's stop there. You've done the Debayering, Star Alignment, and now you're on the Image Integration. But you need to set the parameters of Image Integration so that you're maximizing the SNR increase. How do you do that?

First, you need a baseline to see what is basically the maximum SNR increase you can get out of it. So you set it to No Rejection and integrate. This gives you a stack of pictures without any rejected pixels. I do this with my images and I see the hot pixels trailing across the screen because my alignment isn't the best right now, or I see a satellite trailing through the middle of my image. I want to get rid of these.

I took your 11 CR2 files and debayered and aligned them. Then I set the Image Integration to No Rejection. This is also where you can set your reference frame, the picture everything else aligns to. I usually use Blink to look at them and pick the one with the best star, but for this I picked one at random, the middle.

Run it, then hover over "Process Control" on the left and you'll see the history. At the bottom, you see the average SNR increase for each channel. For No Rejection, I got Red 3.097, Green 2.7100, and Blue 2.5144. You want to get as close to these numbers as possible, so you're rejecting as few pixels as possible while keeping the SNR as high as possible.

Open Image Integration and go to Pixel Rejection (1). Hover over the words "Rejection Algorithm" and it'll explain each of the options. That's one plus for PI: hovering over things brings up an explanation of what they do. You've got several options; Harry's video suggests Winsorized Sigma Clipping is best for large stacks, and I agree, although you need to adjust the low and high clipping to suit your needs. I ran Winsorized Sigma Clipping (low 4.8, high 3.0) and got Red 2.9215, Green 2.5194, and Blue 2.3299. I should see if I can get closer to the No Rejection SNR levels.

I ran linear fit clipping (low 5, high 2.5) and got Red 2.9906, Green 2.6129, and Blue 2.4174. Better. I'm going to stick with this integrated file.
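
For anyone curious what those rejection settings are doing under the hood, here is a minimal, simplified sketch of Winsorized sigma clipping on a stack of aligned frames, in plain Python/numpy rather than the actual ImageIntegration implementation (which uses more refined robust statistics); the tiny 11-frame stack below is synthetic.

import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iters=3):
    # stack: (n_frames, H, W). Clamp outliers inward ("winsorize"), re-estimate
    # the statistics, then reject pixels still outside the low/high limits.
    data = stack.astype(np.float64).copy()
    for _ in range(iters):
        med, std = np.median(data, axis=0), np.std(data, axis=0)
        data = np.clip(data, med - sigma_low * std, med + sigma_high * std)
    med, std = np.median(data, axis=0), np.std(data, axis=0)
    keep = (stack >= med - sigma_low * std) & (stack <= med + sigma_high * std)
    return np.where(keep, stack, 0.0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# A hot pixel / satellite trail present in a single frame gets rejected from the mean:
stack = np.full((11, 4, 4), 0.10)
stack[3, 1, 1] = 0.95                                   # outlier in one frame only
print(winsorized_sigma_clip(stack)[1, 1])               # ~0.10, unaffected by the outlier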

Next - Dynamic Crop - I cropped out the edges of the picture to remove any possible inconsistent stacking and the black edges.

Next - uncheck 'Link RGB channels' on STF and do an auto-STF. If I leave it checked, with the channels linked, I do see the green cast. I never leave that checked; it's the first thing I do when I open STF. I unchecked it to unlink the channels and did an auto-STF. The reason I don't have a green cast now is the unlinked channels in the auto-STF, and the black level is moved higher to compensate. See the photo. The integrated file at the top left is the No Rejection stack that I can compare against. I usually have stuff that gets removed this way (hot pixel trails, satellites, etc.).

I'm going to bed, I have to work in the AM, but I'll continue tomorrow on this or continue wherever we are at.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5749004 - 03/22/13 02:56 AM Attachment (6 downloads)

Quote:

Uncheck 'Link RGB channels' on STF and do an auto-STF. If I leave it checked, with the channels linked, I do see the green cast. I never leave that checked; it's the first thing I do when I open STF. I unchecked it to unlink the channels and did an auto-STF. The reason I don't have a green cast now is the unlinked channels in the auto-STF, and the black level is moved higher to compensate.




Thanks jsines; that was it!


Folks, jsines has found the root cause of the green cast: it was the 'Link RGB Channels' setting in STF (Screen Transfer Function).


To sync up with you and everyone else, I did the following:

-Batch DeBayer (defaults)
-Star Alignment (defaults)
-Image Integration (Linear Fit Clipping)
-Unlinked RGB Channels in STF
-Auto stretched in STF to verify image (no green cast, etc.)
-Reset STF
-Saved the new file as integration.fit (have also uploaded the same here...)


Once everyone catches up to this point, we can all move on to #2 (gradient removal and color correction: DBE, Color Calibration, SCNR). Even though DBE might not be well advised for this particular image, let's do it anyway for learning's sake. I'll look forward to some granular instructions for the #2 items, from whoever wants to volunteer. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
terry59
Post Laureate
*****

Reged: 07/18/11

Loc: Colorado, USA
Re: Learning PixInsight new [Re: mmalik]
      #5749161 - 03/22/13 07:51 AM

Quote:



Folks, jsines has found the root cause of the green cast: it was the 'Link RGB Channels' setting in STF (Screen Transfer Function).





You were already told that


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: terry59]
      #5749466 - 03/22/13 11:09 AM

this thread is hilarious.

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5749944 - 03/22/13 03:23 PM Attachment (15 downloads)

Here is DBE sample I tried, along with settings; suggestions welcome.

(Process Menu-<All Processes>-Dynamic Background Extraction)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5749950 - 03/22/13 03:25 PM Attachment (12 downloads)

For comparison, I also tried ABE with these settings:

(Process Menu-<All Processes>-Automatic Background Extraction)


Post Extras: Print Post   Remind Me!   Notify Moderator  
harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5749963 - 03/22/13 03:28 PM

Hi
Put at least one sample in each corner and lower the tolerance as low as possible without turning the samples red.
DBE used with care will work well with this image
and don't forget to relink the channels ( on stf) after running dbe to get a real view of the data

Harry


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5749969 - 03/22/13 03:30 PM Attachment (18 downloads)

DBE and ABE comparison side by side; DBE on top (with greenish tint), ABE at the bottom (no greenish tint).

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5750038 - 03/22/13 04:02 PM Attachment (5 downloads)

Quote:

Put at least one sample in each corner and lower the tolerance as low as possible without turning the samples red.
DBE used with care will work well with this image




Thanks Harry; below are the DBE corner samples and the low tolerance I retried; results are in the next post.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5750039 - 03/22/13 04:02 PM Attachment (14 downloads)

Quote:

don't forget to relink the channels ( on stf) after running dbe to get a real view of the data




Here is DBE output (integration_DBE.fit) with channels linked. (Note: DBE settings used are in the previous post)


Post Extras: Print Post   Remind Me!   Notify Moderator  
harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5750080 - 03/22/13 04:17 PM

Hi
Maybe you might try a sample in the middle to remove the slight tint or just use scnr
Harry


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5750108 - 03/22/13 04:28 PM

Quote:

Maybe you might try a sample in the middle to remove the slight tint or just use scnr





Sounds good, Harry; I might wait to see what SCNR can do. I'm not too concerned with the tint, as HLVG-type tools can help as well, if you agree?

Continuing on with Ajay's list...
2. Gradient removal and color correction (DBE, Color calibration, SCNR)


What is the next logical step (after DBE); color calibration or SCNR? Does it matter which one is done before the other?

By the way, I did forget to crop before I did DBE. Might try again after cropping. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #5750126 - 03/22/13 04:36 PM

I use Background Neutralization followed by Color Calibration after DBE/ABE. Sometimes after BN, CC may not make much difference and that's okay. It looks like your ABE did a better job than DBE, so why not use ABE instead?

I typically use SCNR after Histogram Transformation. I've never tried SCNR while data is still linear.

Yes, it's important to crop after Image Integration and before any kind of processing including DBE/ABE.

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: harry page 1]
      #5750394 - 03/22/13 06:59 PM

As a matter of preference, I always have "Discard background model" unchecked on both DBE and ABE, because I want to see what is being removed. The background model will be a linear image, but if you do an unlinked auto-STF on it, you can see what you're subtracting. It should look like a gradient. I think that if it looks like a bunch of mismatched, circular colors, it wasn't done right, but someone can correct me if I'm wrong.

Quote:

Hi
Maybe you might try a sample in the middle to remove the slight tint or just use scnr
Harry




I would do this, but I'd do it after stretching, which is what I think Harry is suggesting. I always do SCNR after stretching and it completely removes any green tint for me.

I may continue processing your image tonight, but it's also looking like it will be my first clear night in about two weeks. I have also received a new Astronomik CLS, so I may be outside all night.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5750406 - 03/22/13 07:06 PM

Quote:

Sounds good Harry; I might wait to see what SCNR can do. Not too concerned with the tint as HLVG kind of tools can help as well, if you agree?

Continuing on with Ajay's list...
2. Gradient removal and color correction (DBE, Color calibration, SCNR)

What is the next logical step (after DBE); color calibration or SCNR? Does it matter which one is done before the other?

By the way, I did forget to crop before I did DBE. Might try again after cropping. Thx





Discard this image. Go back to the integrated image, crop using Dynamic Crop, and then do DBE on the cropped image. If you don't, you're using the dark, non-image areas of the integrated file to determine the background of the image.

SCNR is done after stretching.

I also suggest at this point that you go back and watch all of Harry's videos again. Have you watched all of his videos?


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5750551 - 03/22/13 08:32 PM

from RBA's website:

http://www.deepskycolors.com/archivo/2010/04/26/hasta-La-Vista-green.html

"HLVG is a chromatic noise reduction tool that attempts to remove green noise and the green casts such noise may cause in some images. It is based on PixInsight's SCNR Average Neutral algorithm."

why use HLVG when the original tool is SCNR?
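
As a side note on what "Average Neutral" means in practice, here is a bare-bones sketch of the idea in plain Python/numpy (shown at full strength; the real SCNR tool adds an amount slider and optional lightness preservation, and this is not its actual implementation). The single test pixel is made up.

import numpy as np

def scnr_average_neutral_green(rgb):
    # rgb: (..., 3) array in 0..1. Green may not exceed the mean of red and blue.
    out = rgb.copy()
    neutral = 0.5 * (rgb[..., 0] + rgb[..., 2])
    out[..., 1] = np.minimum(rgb[..., 1], neutral)
    return out

pixel = np.array([[[0.20, 0.35, 0.22]]])              # greenish background pixel
print(scnr_average_neutral_green(pixel))              # green drops from 0.35 to 0.21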

echoing the above post, it is important to do the DBE on a cropped image for a couple of reasons. one is the problem above and the other is that DBE can get confused about the statistics of the image with those borders left in the image. sometimes so much so that you can not find a tolerance value that works.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5750578 - 03/22/13 08:46 PM

Quote:

...so I may be outside all night.




No rush; we'll take a slow and steady approach. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5750584 - 03/22/13 08:47 PM

Quote:

...why use HLVG when the original tool is SCNR?




Let's stick to SCNR if that's the case; thanks for the info.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5750822 - 03/22/13 10:29 PM Attachment (10 downloads)

Quote:

crop using Dynamic Crop, and then do DBE on the cropped image.




I have redone the DBE after cropping. Below is the background model that I generated along with DBE. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Peter in Reno]
      #5750833 - 03/22/13 10:36 PM

Quote:

I use Background Neutralization followed by Color Calibration after DBE/ABE.




About the next step, color calibration: I have watched Harry's video, but I can't seem to draw the preview rectangles; any ideas? (Harry mentions something like Alt+?, but it's not quite understandable.)


Harry runs Background Neutralization before color calibration in his video; is that the general approach? (I ask this since this was not on the list, unless it is considered part and parcel of color calibration). I have the same problem of drawing rectangles in Background Neutralization. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
Escher
Pooh-Bah
*****

Reged: 08/30/07

Loc: Fenton, MI
Re: Learning PixInsight new [Re: mmalik]
      #5750849 - 03/22/13 10:43 PM

For the previews, it's Alt+N... On the Mac it's Fn+Alt+N.

Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5750851 - 03/22/13 10:43 PM

Alt+N, and then draw a rectangle on the image. This will open a preview of the area you select, and you can experiment with your technique on it.

Harry also says that after running DBE, background neutralization does not do much. That is also my experience.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5751023 - 03/23/13 12:33 AM Attachment (14 downloads)

Thanks Escher/Ajay.

I tried the star method for Color Calibration, for which I had to use an aggregated Background Neutralization preview. I'm not sure if I did it correctly (the star method instead of the galaxy method, hence the need for Background Neutralization; otherwise I would have used the galaxy method and skipped the aggregated Background Neutralization). Your thoughts?

One thing I noticed is that my Aggregated_white window looks boxy, while Harry's in the video didn't; is that OK?


I also went ahead and ran SCNR, since it was just one click, and it took care of the green tint.


Everyone, on a side note, the CR2s I am working with are online... or you may use your own data; please feel free to jump in with your own findings/questions, etc. Open discussion. Regards


Recap:
-Completed (#1): (Batch DeBayer/Star Alignment/Image Integration/Crop)
-In progress... (#2): (DBE/Color Calibration/SCNR)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5751044 - 03/23/13 12:55 AM Attachment (15 downloads)

Comparison:

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5757386 - 03/26/13 01:45 AM Attachment (15 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via ATrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Going on to the next step (3. ATrous Wavelet Transform), I ran it with all the defaults, if that's the right way; feedback is welcome on whether any customization was needed. It's hard to tell visually whether anything changed from the previous step's output. Thx

Following is the output with Linked STF:


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5759272 - 03/26/13 10:07 PM

atrous with the defaults is the identity transform... to perform NR with atrous you'll need to turn on noise reduction at whichever wavelet scales you are interested in (typically the first 2 or 3). furthermore you probably want to extract the L* from your image, stretch it, and use it as a mask. the high SNR areas of the image need less NR than the low SNR areas, and there's no reason to smooth out the high SNR areas too much.
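
To illustrate the idea (not PixInsight's implementation), here is a rough Python/numpy/scipy sketch of à trous wavelet noise reduction on a mono image, with a simple brightness mask standing in for the stretched L*; the test image, layer count, and attenuation amounts are all made-up assumptions.

import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0        # B3-spline scaling kernel

def atrous_layers(img, n_layers=4):
    # Return [w1, ..., wn, residual]; the layers sum back to the original image.
    layers, current = [], img.astype(float)
    for j in range(n_layers):
        kernel = np.zeros(4 * 2**j + 1)                    # insert 2**j - 1 "holes"
        kernel[:: 2**j] = B3
        smooth = convolve1d(convolve1d(current, kernel, axis=0, mode="reflect"),
                            kernel, axis=1, mode="reflect")
        layers.append(current - smooth)                    # detail at this scale
        current = smooth
    layers.append(current)                                 # large-scale residual
    return layers

def atrous_nr(img, amounts=(0.5, 0.75)):
    # Attenuate the finest (noisiest) layers by the given factors, keep the rest.
    layers = atrous_layers(img, n_layers=max(3, len(amounts)))
    for i, a in enumerate(amounts):
        layers[i] = layers[i] * a
    return sum(layers)

rng = np.random.default_rng(2)
img = np.clip(0.1 + 0.02 * rng.standard_normal((128, 128)), 0, 1)          # fake noisy background
mask = np.clip((img - img.min()) / (img.max() - img.min() + 1e-6), 0, 1)   # bright = protected
result = mask * img + (1 - mask) * atrous_nr(img)          # NR applied mostly to dark areas
print(img.std(), result.std())                             # the result is smoother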

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5759529 - 03/27/13 02:52 AM Attachment (10 downloads)

Thanks for the feedback; here is another go at ATrous Wavelet Transform with NR enabled. I went with the settings that were there after enabling NR. Suggestions welcome. Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5759539 - 03/27/13 03:21 AM Attachment (10 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Moving on to the next item (4. Stretch - Histogram transfer (levels)); I hope 'Histogram Transformation' is what that means?

Here is what I did: instead of dragging the midtones manually, I used a linked STF stretch and dragged it onto 'Histogram Transformation' as the first step (the next step is in the next post). This is what it looked like at this point:


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5759541 - 03/27/13 03:26 AM Attachment (13 downloads)

In the next step of 'Histogram Transformation', I reset and closed STF to get back to the linear view, then applied the 'Histogram Transformation' (from the previous step) to the image to make the stretch permanent (the image is no longer linear), and saved it as a FIT. This is what it looked like; suggestions/corrections welcome. Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5760055 - 03/27/13 11:41 AM

You are on your way. Remember, your goal here is to learn the tools and controls of PI. So far so good. I would suggest that making the image look as good as your esthetic sense demands should be a secondary goal.

Also, on histogram stretches: I sometimes do up to five different stretches from scratch and save them separately. Then I step back and pick the one with the least compromise in terms of clipping, noise, and dynamic range. Again, I would make this a secondary goal. Move on to the next step...

On looking back at the histogram, it looks like there is no headroom to adjust the black point. It would be wise to leave a little room for adjustment for later. Some of the processing steps I listed will affect the dynamic range of your image. If you then want to darken the background, you may have to clip some of the good data.

Edit: The STF method is less controlled; often it is too extreme. If you have good sky conditions (dark, with minimal gradients), it seems to work OK. As I indicated above, I prefer to set the midtones manually. However, for learning PI it is a good place to start.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: bluedandelion]
      #5760070 - 03/27/13 11:50 AM

STF is usually too aggressive. I never apply the STF stretch to HistogramTransformation (HT). Use baby steps, moving the black and mid points in HT until the image looks good and the noise is minimal. Do not clip the black point.

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
Madratter
Post Laureate


Reged: 01/14/13

Re: Learning PixInsight new [Re: harry page 1]
      #5760224 - 03/27/13 01:04 PM

Quote:

Hi
what can I do to help make the basics better to understand

Always open to suggestions

Harry




Hi Harry. Thanks so much for your great tutorials. I'm currently in the trial period for PixInsight and without them, I wouldn't be able to make heads or tails of the program. With them, I'm making good progress.

In terms of what I would change: as someone else already mentioned above, I also had a tough time understanding that I was supposed to use Alt+N for the preview; I was hearing "Alt-10". Maybe just a quick note under the video would help with that. Also, one tool (wavelets) no longer appears to be named the same thing as in the video; again, just a quick note on the correct name would help. Finally, when I started, somehow the tools on my right-hand side were not exactly the same as in your video; I finally reset my UI and they came back. That helped me a great deal.

On some of the tools, it would help to know what some of the sliders actually do and how to pick appropriate values. As it is, they are kind of magic numbers. Just as one example, take the sample size and number of samples for the DBE tool; in that tool alone there are a number of other magic numbers that a beginner (me, at least) just doesn't understand.

One of the most helpful things for me was actually watching your demo video of M106 after watching your separate videos. That helped solidify things for me. Maybe a note on the site suggesting that.

Thanks again. The tutorials are just extraordinarily helpful.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5760562 - 03/27/13 03:01 PM

Quote:

I would suggest that making this look as good as your esthetic sense tells you should be a secondary goal.




Very true; I am least concerned with the look and feel of the image at this time, just want to understand PI. Thanks for all the help. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5760575 - 03/27/13 03:07 PM Attachment (4 downloads)

Quote:

Thanks for the feedback; here is another go at ATrous Wavelet Transform with NR enabled. I went with the settings that were there after enabling NR. Suggestions welcome. Thx




One suggestion I have for the ATrous Wavelet Transform is that NR is NOT checked by default; if I understand correctly, it should be? Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5761486 - 03/27/13 10:53 PM Attachment (7 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Next in order is HDRMultiscaleTransform; I think that's what you meant by HDR wavelets? The settings I had to modify from the defaults are marked. Thx

[Process-Wavelets-HDRMultiscaleTransform]


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5761547 - 03/27/13 11:32 PM Attachment (9 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




We've reached Local Histogram Equalization; I think that's what you meant? Note: the settings I modified per the tutorials are circled in red.

I did use a mask as Harry describes in the tutorial, but I have a few questions:

•What does hiding mean; is the mask still there, in effect I mean, during LHE processing? In other words, hiding is NOT the same as removing, correct?

•Is Ctrl+K the correct way to unhide, or is that just how it was done in the tutorial?


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5761660 - 03/28/13 01:31 AM Attachment (10 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Here is ACDNR; modified settings highlighted. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5761669 - 03/28/13 01:42 AM

The Atrous Wavelet (AtW) transformation is done on linear data before the stretch.

At this stage noise reduction is the primary reason to do AtW. Harry has a complete tutorial on this. I mainly follow his recommendations. As he says, those settings work on most images. Check noise reduction and set the parameters per Harry's recommendation.

HDR wavelets is the HDR multiscale transformation. Too many things to discuss here, but once again Harry saves the day! If you follow his tutorial you should get good results. If you watch the Starmask tutorial, Harry shows you how to protect your stars with a Starmask so that the stars don't turn into donuts during the HDR transform step. If you want to take a small step, set that aside for now and just do the default and observe what it does to your image.

You are correct about the mask. You can see what the mask is protecting by clicking the icon with the Star on it. If you click it again the image looks normal but the mask is still in effect.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5761690 - 03/28/13 02:11 AM Attachment (9 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Finally, Curves Transformation with 'mask' as described in the video. Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5761696 - 03/28/13 02:22 AM Attachment (15 downloads)

Following is the PixInsight final JPG per process/settings used in this thread:

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5761708 - 03/28/13 02:53 AM Attachment (16 downloads)

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Following is the reference image that was NOT processed in PixInsight (using the same data). I was wondering if someone would like to take a stab at the data (CR2s here...) and see if we can get close to the reference image, in quality I mean, for learning's sake?


If yes, what I would also like out of such an exercise is that we limit the workflow ONLY to the outline laid out by Ajay and provide every setting (possibly with screen clips) that was NOT a default, for the documentation that I'll be creating. Thanks in advance to any takers. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
jwheel
professor emeritus


Reged: 01/23/08

Loc: Fort Davis TX
Re: Learning PixInsight new [Re: mmalik]
      #5762336 - 03/28/13 12:41 PM

Thanks for all of this information. I am a newbie with PI and this is very informative.

Joe Wheelock


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: jwheel]
      #5762554 - 03/28/13 03:03 PM

Mike, I might take a stab at this, time permitting, but I don't have too much of that available right now. Can you post the calibrated and integrated stack generated by PI?

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5762764 - 03/28/13 05:27 PM

Quote:

Thanks Escher/Ajay.

I tried the star method for Color Calibration, for which I had to use an aggregated Background Neutralization preview. I'm not sure if I did it correctly (the star method instead of the galaxy method, hence the need for Background Neutralization; otherwise I would have used the galaxy method and skipped the aggregated Background Neutralization). Your thoughts?

One thing I noticed is that my Aggregated_white window looks boxy, while Harry's in the video didn't; is that OK?


I also went ahead and ran SCNR, since it was just one click, and it took care of the green tint.






Go back to this step. You're doing color calibration after background neutralization. The purpose of Background Neutralization is to generate a neutral background. You are taking a bunch of preview windows and aggregating them to use as the background, but your preview windows are all full of nebula dust, which isn't background.

You only want preview windows on the background in the aggregated preview. You are confusing the program by telling it that the nebula dust is the background, when it isn't. You need to find areas of the picture that are "space" and put previews on those. Sometimes you have images full of nebula dust, and you can only put very small previews on it to tell the program what is background and not nebula dust.

Go back and look at the picture in this post. You're using the aggregated preview for both the background reference and the white reference. I think that isn't the best way to do it. For color calibration, I would take an aggregated preview of only the background (space) as the background reference, and I would draw a small preview around a white star or a group of white stars and use that as the white reference. Remember, you're telling PI what is white and what is background.

Nowhere in the picture do you use the aggregated_white image you created.

Also, SCNR should be done after it's stretched. I would not do that on linear data.

I would do background neutralization by only selecting the background as previews, aggregate, and apply. Then I would close out the aggregated image and the previews, reselect background images, aggregate them, and select a white star for the white reference, for color calibration.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5762770 - 03/28/13 05:33 PM

Quote:

Thanks for the feedback; here is another go at ATrous Wavelet Transform with NR enabled. I went with the settings that were there after enabling NR. Suggestions welcome. Thx





Redo background neutralization and color calibration. Then create a preview of a portion of the image, say the Horsehead, and show us that preview; that will basically show us the image up close, since we can't see the amount of noise in the full picture. Then we can suggest the Atrous Wavelet settings to use. When you hover over the settings, you should see that the Atrous Wavelet amounts should get gradually lower as you go down the rows. Sometimes you need to start at 3 and 2 iterations, then go down to 0.5 and a 60% apply. Sometimes you need to start at 2 and 1 iteration, then go down to 0.5. This step is really a judgement call.

You can go overboard on this step and it produces "lumpy" backgrounds. I'm noticing better results for my situation by skipping AtrousWavelets and doing ACDNR after stretching. Using this step is not mandatory, and it's more a personal decision.

You're starting to get into the stage of PixInsight where an image can branch out in different directions. You can do some steps on an image and get one result, and you can do different steps on the same image and get different results. It's not really a 10 step process where all 10 steps are followed in a certain way. One of the benefits of PI is that you can have multiple copies of the same image open, even in different windows (the 4 at the bottom), and then try different things on the same image. Harry does this in his video on HDRWavelets, and I do this all the time for different steps.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Peter in Reno]
      #5762777 - 03/28/13 05:36 PM

Quote:

STF is usually too aggressive. I never apply the STF stretch to HistogramTransformation (HT). Use baby steps, moving the black and mid points in HT until the image looks good and the noise is minimal. Do not clip the black point.

Peter





This is what I do also. I too find applying STF to the histogram to be too aggressive. I make a large adjustment, reset the black point, sometimes make another large adjustment, reset the black point again, and then make gradually smaller adjustments, resetting the black point without clipping. Also, adjustments that are too large will bring out noise, so be careful. I find it better to be conservative on the histogram transform and come back for more, rather than go large and be stuck.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5762791 - 03/28/13 05:42 PM

Quote:

Quote:

Thanks for the feedback; here is another go at ATrous Wavelet Transform with NR enabled. I went with the settings that were there after enabling NR. Suggestions welcome. Thx




One suggestion I have for the ATrous Wavelet Transform is that NR is NOT checked by default; if I understand correctly, it should be? Thx





No, it shouldn't be checked by default, because there are different things that can be done with AtrousWavelets. For example, I can create a clone of the Orion Nebula image, then use Atrous Wavelets to remove the stars and end up with an image of only the nebula. I can do this by unchecking "detail layer" for the first 4 layers and leaving the 5th checked. I can also take the Orion Nebula, clone the image, and produce an image of stars only, by unchecking the 5th and keeping layers 1-4 checked. I can then use those images with PixelMath to create new images, adjust saturation and color, and then use PixelMath to recombine them.

Someone posted a guide on how to do that with the Orion Nebula; I can't find it right now, but someone else may be able to. I found it on the PixInsight forum.
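
For what it's worth, the recombination part is just image arithmetic. Here is a tiny sketch of the idea in plain Python/numpy standing in for PixelMath expressions; the arrays and the 0.8 star-reduction factor are made-up stand-ins, and "starless" represents the large-scale-only wavelet result described above.

import numpy as np

original = np.array([[0.10, 0.80, 0.12]])        # middle pixel is a star
starless = np.array([[0.10, 0.14, 0.12]])        # large-scale (residual) layers only

stars = np.clip(original - starless, 0, 1)       # PixelMath-style: original - starless
# ...the two images can now be adjusted independently (saturation, stretch, ...), then:
recombined = np.clip(starless + 0.8 * stars, 0, 1)   # e.g. put the stars back slightly reduced
print(stars, recombined)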


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5762800 - 03/28/13 05:45 PM

Quote:

Quote:

Stack (calibrate+integrate) -> crop boundaries
Gradient removal and color correction (DBE, Color calibration, SCNR)
Noise removal via Atrous Wavelets
---
Stretch - Histogram transfer (levels)
HDR wavelets to increase dynamic range
LHE Local contrast enhancement
ACDNR, Noise reduction
Saturation Boost via curves

The dashed line separates the first three steps you do on linear data.




Following is the reference image that was NOT processed in PixInsight (using the same data). I was wondering if someone would like to take a stab at the data (CR2s here...) and see if we can get close to the reference image, in quality I mean, for learning's sake?

If yes, what I would also like out of such an exercise is that we limit the workflow ONLY to the outline laid out by Ajay and provide every setting (possibly with screen clips) that was NOT a default, for the documentation that I'll be creating. Thanks in advance to any takers. Regards





Not sure if I have time, but if I do I will.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5762943 - 03/28/13 06:46 PM

Quote:

I might take a stab at this, time permitting, but I don't have too much of that available right now. Can you post the calibrated and integrated stack generated by PI?




Ajay/jsines, I have uploaded the debayered/aligned/integrated/edge-cropped FIT here...; if you could give it a try, your processing insights would be greatly appreciated. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: pfile]
      #5763079 - 03/28/13 07:59 PM Attachment (7 downloads)

OK. So we started with in-camera noise reduction on the images, and no bias or flats. We did the manual DSLR integration steps, starting at a later stage where we didn't need to worry about masters, and ended up with an integrated file. We needed to maximize the SNR increase, so we initially did the integration with no pixel rejection, then tried the different rejection methods to get the SNR increase closest to No Rejection while still removing the *BLEEP* from the picture. I ended up using linear fit clipping for the integration. I opened STF, unchecked the top-right box (Link RGB channels), clicked auto-STF, and ended up with this.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763093 - 03/28/13 08:07 PM Attachment (8 downloads)

DBE - sample only the background, and put samples in the corners, so you get a gradient that is subtracted from the image.

(I'm working on adjusting the image sizes so you can see them better)


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763130 - 03/28/13 08:32 PM Attachment (9 downloads)

Background Neutralization - applied previews only to background, then aggregated previews. Set lower limit to 0.01 per Harry's video.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763134 - 03/28/13 08:35 PM Attachment (2 downloads)

Color Calibration - delete the aggregated preview and close out the Background Neutralization. Open Color Calibration and redo the previews on the background, aggregate them using the preview aggregator, and then select a star preview for the white reference. Use the aggregated preview as the background reference.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763136 - 03/28/13 08:36 PM Attachment (4 downloads)

Reset the STF on the linear image. You won't be using STF anymore because you're now stretching the data itself. First stretch with Histogram Transform; a large one -

Edited by jsines (03/28/13 08:37 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763140 - 03/28/13 08:39 PM Attachment (6 downloads)

Reset the black point and stretch a little more. I've closed out the STF because we won't use it anymore.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763145 - 03/28/13 08:41 PM Attachment (7 downloads)

One more slight stretch and reset of the black point. Yes, I'm clipping the blacks slightly, but as you can see it's less than half a percent, so it's OK.
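
If you want to put a number on that, a quick sketch of the check (plain Python/numpy, with a made-up image and black point) is just the fraction of pixels at or below the candidate black point:

import numpy as np

img = np.random.default_rng(3).random((1000, 1000)) * 0.5   # stand-in stretched image
black_point = 0.002                                          # hypothetical black point
clipped_pct = 100.0 * np.mean(img <= black_point)
print(f"{clipped_pct:.2f}% of pixels clipped")               # aim to keep this well under ~0.5%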

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763149 - 03/28/13 08:45 PM Attachment (12 downloads)

That was the final stretch. Can't remember if I stretched it 3 or 4 times, but it was a large stretch, then a few smaller and smaller stretches. Now I do SCNR to remove the green -

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763152 - 03/28/13 08:47 PM Attachment (9 downloads)

Image after SCNR - notice the green is gone from the lower center of the picture. Much better.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763159 - 03/28/13 08:50 PM Attachment (6 downloads)

ACDNR - notice the settings at the bottom. I opened the lightness mask, clicked on Preview, and then set the sliders so there are a lot of dark areas in the mask. Then I unchecked Preview, closed the preview out, and set the lightness StdDev to 2 and the chrominance StdDev to 4. Check both lightness mask boxes on each.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763171 - 03/28/13 08:53 PM Attachment (6 downloads)

Second ACDNR. I opened the lightness mask, opened the preview, checked Preview, and left fewer dark areas. Note the settings on the lightness mask compared to last time; this is entirely subjective and changes with each picture. Change the lightness StdDev to 3 and the chrominance StdDev to 5, and check the lightness mask on both.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763174 - 03/28/13 08:55 PM Attachment (9 downloads)

Then I opened Histogram Transform and reset the black point. (not pictured).

Next, create a star mask to apply for HDR.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763178 - 03/28/13 08:58 PM Attachment (8 downloads)

Apply the star mask, have the mask cover only the stars, and then apply HDR. The application of HDR is also subjective and based on each image. Sometimes I do 8, then 6, then 4; sometimes 6, 5, 4; sometimes 4, 3. This time I only did 6. I also checked To lightness, Lightness mask, and Deringing. On this step, I make sure not to make it look fake and not to get dark circles in the stars.

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763181 - 03/28/13 09:00 PM Attachment (10 downloads)

LRGB Combination - I extract the luminance layer, then reapply it using LRGBCombination. I lower the saturation to 0.25 and check chrominance noise reduction. This step brings out the colors more and further reduces the noise.

After this step I reset the black point again with histogram transform (not pictured)


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763190 - 03/28/13 09:05 PM Attachment (9 downloads)

Extract a luminance layer and reset the white and black points of the luminance layer so that it's a heavy mask. I'm going to use this mask for curves transformation. I want heavy protection on the nebula and no protection on the background space so I can adjust each separately.
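
In plain terms, resetting the black and white points of the extracted luminance just turns it into a 0-to-1 weighting: everything below the new black point ends up fully protected and everything above the new white point fully exposed. A minimal sketch (the 0.10/0.60 values are arbitrary placeholders):

Code:

import numpy as np

def luminance_mask(rgb, black=0.10, white=0.60):
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])     # simple luminance estimate
    return np.clip((lum - black) / (white - black), 0.0, 1.0)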

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763192 - 03/28/13 09:08 PM Attachment (8 downloads)

The adjusted luminance layer is used as a mask for Curves Transformation. I invert the mask, adjust saturation and color, then I invert the mask again, and adjust saturation, color, and luminance.

In this step, the pink is saturation adjustment and the white is RGB/K adjustment.

This step is entirely subjective, and varies image to image and person to person.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763195 - 03/28/13 09:10 PM Attachment (5 downloads)

In this step, the mask was inverted again (back to original) and then I applied adjustments to saturation (pink), luminance (small reduction at the bottom left) and RGB/K (small increase at bottom right).

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763198 - 03/28/13 09:13 PM Attachment (9 downloads)

Final step - LocalHistogramEqualization (LHE). Sometimes I use a mask on this to isolate certain areas to adjust; sometimes I adjust the whole thing. On the Orion Nebula, I would use a mask to isolate only the nebula and do LHE on that. For this, I just did a small LHE on the whole image. This brings out more detail and is somewhat like HDR.
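
If you want to experiment with the same flavor of local contrast outside PixInsight, CLAHE is a reasonable stand-in (it is not the same algorithm as LHE, just related). A gentle, blended application might look like this, assuming scikit-image is installed:

Code:

import numpy as np
from skimage import exposure

def gentle_local_contrast(lum, amount=0.3, clip_limit=0.01):
    # lum: single-channel image in [0, 1]
    boosted = exposure.equalize_adapthist(lum, clip_limit=clip_limit)
    return np.clip((1.0 - amount) * lum + amount * boosted, 0.0, 1.0)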

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763200 - 03/28/13 09:15 PM Attachment (25 downloads)

Finished image - Saved as integrated fit file and then saved as a jpeg file for showing. I like it this way, but it may be a little dark, and I've noticed that sometimes I over-saturate an image, so I'm working on that. I do see on my computer a lot of dust underneath the Horsehead, and this image keeps that dust in there, but I don't see it on the uploaded image. I could have worked the image more, but I just did this in about an hour for this thread. Sometimes I work on an image 6-10 times before I find one I'd show someone.

Like I said before, it's like a branch - you start out with an integrated file, and then you branch out into different directions. Each branch may either be a dead end or may be a nice looking image. There just isn't one way to create an image in PixInsight.

Thanks for reviewing, and hope this helps!


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: jsines]
      #5763220 - 03/28/13 09:28 PM Attachment (17 downloads)

...and here is mmalik's final image, which was not processed in PixInsight, for comparison. I'm sure someone else who uses PixInsight could produce a better image than mine, but this is just for learning.

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5763259 - 03/28/13 09:54 PM

Thanks so much jsines; let me digest the info before asking any questions. Regards

Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #5763321 - 03/28/13 10:26 PM

I processed the image with PixInsight but got a similar result to jsines. The noise is quite high, making it difficult to bring out the nebulosity without also amplifying the noise.

mmalik,

Did you calibrate the light subs with darks, flats and bias? If not, calibration should help reduce noise and also make the image easier to process. Also, calibrating each sub is necessary before DeBayering; finally, stack the DeBayered subs. I tried re-reading this long thread but it's not clear whether the light subs were calibrated before any further processing was done.
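
If it helps to see the arithmetic, calibration per sub is just: subtract a matching dark (which already contains the bias), then divide by a bias-subtracted, normalized flat, all before DeBayering. A minimal numpy sketch, assuming matching master frames:

Code:

import numpy as np

def calibrate_sub(light, master_dark, master_flat, master_bias):
    # master_dark matches the light's exposure and temperature, so it already includes the bias
    flat = master_flat - master_bias
    flat = flat / np.mean(flat)                # normalize the flat to unity mean
    return (light - master_dark) / flat        # calibrated, still un-debayered CFA data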

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Peter in Reno]
      #5763335 - 03/28/13 10:34 PM

Peter, the CR2s are in-camera noise reduced. No calibration frames were taken, nor did I feel the need. Here... is a high-res version of the image that was NOT developed in PixInsight. In short, the CR2s... and the integrated file... are quite clean. Thx


Note: There is NO password protection on file downloads; if prompted, please provide your own live/hotmail/msn account to get past the glitch.


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5763542 - 03/29/13 02:06 AM

I agree that the noise is rather high. I hit it with GREYCstoration late in the game. Normally I'd do this on linear data just before the stretch. Anyway, here's the jpeg.



Edit: Mike I don't know how this .xosm file works. It loads the project history fine on my PC but if you download it from my skydrive folder, it doesn't work. I'll email you screenshots of the settings if you send me your email address. Perhaps you can upload it to CN for all. Thank you for letting me play with your data.

The final fits file is here.

Ajay

Edited by bluedandelion (03/29/13 02:21 AM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5763620 - 03/29/13 04:25 AM

Quote:

I'll email you screenshots of the settings if you send me your email address. Perhaps you can upload it to CN for all.




Thanks Ajay for giving it a try; have sent you my email. Will look forward to your screenshots. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
Mike Unsold
Vendor, MLUnsold - ImagesPlus
*****

Reged: 05/21/09

Re: Learning PixInsight new [Re: bluedandelion]
      #5764819 - 03/29/13 02:07 PM

Guys

What happened to the color in the stars and nebula as shown by the last few images posted?

The 32 bit floating point FITS file with image values between 0.0 and 1.0, posted as integrated_cropped.FIT, does have a lot of color. Even the brightest stars have a little color in their outer halos. A few steps applied to the posted [0.0, 1.0] FITS file yield

http://www.mlunsold.com/Temp/integration_cropped-V1.jpg

Mike


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #5765245 - 03/29/13 04:53 PM

Quote:

Peter, CR2s are in-camera noise reduced. No calibration frames were taken nor I felt the need.




I have never used a DSLR for astrophotography, but I wouldn't think it's much different from astro cameras when it comes to processing. I would assume the in-camera noise reduction is designed for daytime photography and may not be as effective as calibrating lights with darks, bias and flats. I find AP calibration to be effective for pre-processing noise reduction. Dithering is also so effective that you may not need dark subtraction at all; stacking dithered images works about as well as, or even better than, dark subtraction.

Your PixInsight processing is pretty good. The more you practice the better.

As for your Horsehead Nebula processing, with this much noise, I really could not bring out more red or nebulosity without bringing in too much noise. The non-PixInsight image appears to have had extensive noise reduction, because it looks aggressively smoothed. Over-smoothing can have negative results, but that's a personal preference.

Bottom line, I believe that for every image captured, calibrating each light sub is a must to get excellent results, and it makes post-processing easier. In case you are not aware of it, if your camera captures images and the software automatically converts them to color on the fly, then you cannot calibrate the color subs with darks, flats and bias. Calibration can only work with RAW acquisition (grey or B&W images).

Peter

Edited by Peter in Reno (03/29/13 05:16 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: Mike Unsold]
      #5766623 - 03/30/13 11:41 AM

Quote:

Guys

What happened to the color in the stars and nebula as shown by the last few images posted?

The 32 bit floating point FITS file with image values between 0.0 and 1.0 posted as integrated_cropped.FIT does have a lot of color. Even the brightest star have a little color in their outer halo. A few steps applied to the [0.0, 1.0] posted FITS file yields

http://www.mlunsold.com/Temp/integration_cropped-V1.jpg

Mike




Personally, I did not push the saturation more. It's an easy thing to do, but it's a matter of personal preference.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: Peter in Reno]
      #5766650 - 03/30/13 11:49 AM

Quote:


I would assume the in-camera noise reduction is designed for daytime photography and may not be as effective as calibrating lights with darks, bias and flats. I find AP calibration to be effective in pre-processing noise reduction.
Peter




Is this horse still standing?

There seem to be some very strong proponents of in-camera noise reduction on this forum, but maybe this example will help some folks see why most skilled imagers, both professional and amateur, calibrate every light frame as Peter outlines.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: bluedandelion]
      #5767187 - 03/30/13 05:16 PM

After I learned about in-camera noise reduction in Canon cameras, I researched it further via Google and found this:

http://www.astropix.com/HTML/I_ASTROP/SETTINGS.HTM

There are several statements related to astrophotography but for an advanced AP user, it stated the following:

"Once you get to a more advanced level in your astrophotography experience, you will definitely want to turn in-camera long-exposure noise reduction off. You will do better by shooting a series of dark-frame exposures yourself that you can use later in a more sophisticated way in calibrating the light-frame images."

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5767973 - 03/31/13 12:59 AM

Quote:

Peter, CR2s are in-camera noise reduced. No calibration frames were taken nor I felt the need. Here... is a high-res version of the image that was NOT developed in PixInsight. In short, CR2s... and integrated file... are quite clean. Thx

Note: There is NO password protection on file downloads; if prompted, please provide your own live/hotmail/msn account to get past the glitch.





I think Peter is right. If this thread is about learning PixInsight, then you should process the images the way PixInsight recommends. All of their documentation, and all of the master users and developers on the PixInsight forums, say you need to take darks separately, you need bias and flat frames, you need a lot more bias frames than dark frames, etc. They talk about using 100-200 bias frames to build a master bias.

I think it was noisy to begin with, but separate darks, bias, and flat frames could have helped, in my opinion. We started at about step 4, and we might have had a less noisy image if we had started at step 1.

PixInsight has certain steps that you need to follow to get good results, and it's not PixInsight's fault if those steps aren't followed and then the results are not what you expect.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: Mike Unsold]
      #5767978 - 03/31/13 01:04 AM

Quote:

Guys

What happened to the color in the stars and nebula as shown by the last few images posted?

The 32 bit floating point FITS file with image values between 0.0 and 1.0 posted as integrated_cropped.FIT does have a lot of color. Even the brightest star have a little color in their outer halo. A few steps applied to the [0.0, 1.0] posted FITS file yields

http://www.mlunsold.com/Temp/integration_cropped-V1.jpg

Mike





Well, since this thread is called "Learning PixInsight", I'd love to hear how you did this in PixInsight.

I could have created a Star Mask with less protection around the edges of the stars, applied it to the image, inverted it so the stars and edges are not protected, and then pushed the saturation and color more on the Curves Transformation. I think it's just a personal preference, plus I look at APOD images or other better images for guidance.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5768004 - 03/31/13 01:32 AM

Thanks all; special thanks to Ajay and jsines for their efforts. I think we achieved the core purpose and that was learning PixInsight. As a next step, I'll try creating some documentation out of this discussion.


Note: As far as the practice CR2s go, I am merely comparing the NON-PixInsight results against the PixInsight results using the SAME source data (regardless of how it was captured and whether or not it was noisy). Practically speaking, the practice CR2s represent data that is better than an average capture. What makes the comparison relevant is NOT whether it was a perfect dataset, BUT that the same dataset was used to produce both the PixInsight and NON-PixInsight results. Anyway, that's a discussion for some other time.

Edited by mmalik (03/31/13 01:40 AM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Mike Unsold]
      #5768744 - 03/31/13 12:50 PM

Quote:

Guys

What happened to the color in the stars and nebula as shown by the last few images posted?

The 32 bit floating point FITS file with image values between 0.0 and 1.0 posted as integrated_cropped.FIT does have a lot of color. Even the brightest star have a little color in their outer halo. A few steps applied to the [0.0, 1.0] posted FITS file yields

http://www.mlunsold.com/Temp/integration_cropped-V1.jpg

Mike




Thanks Mike, your image has nice color saturation. I think you could have brightened the whole image a bit; that would have made it even better. Your blue nebula at the bottom looks better than everyone else's.

On a side note, one thing I notice in common among most of the processors (you, Ajay, jsines) is that you all left your images quite dark in the end; not sure what's up with that dark streak?


While I have your attention, I wanted to talk about FIT interoperability. As discussed in the early part of this thread, 32-bit FITs were not quite transferable between PixInsight and ImagesPlus.

I have run into a similar situation with TIFs, where 32-bit TIFs saved in ImagesPlus are not transferable to Photoshop. I end up using 16-bit TIFs for ImagesPlus and Photoshop interoperability. I have yet to go back and try the same between PixInsight and ImagesPlus, or between PixInsight and Photoshop. Folks were saying the FIT format is NOT standardized (see the early part of the thread); your thoughts on this matter will be appreciated. Regards

Edited by mmalik (04/01/13 03:26 AM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5770089 - 04/01/13 03:07 AM Attachment (16 downloads)

Ajay, posting a JPG of the FIT you sent me after elevating the curves a bit for brightening and adjusting the bluish tones in your FIT. The image you produced has the most contrast I have seen thus far; this is a great processing attempt on your part. I am parsing through the data you and jsines have sent/provided; I will be providing some documentation (possibly a reprocess of my own) soon. Thanks!

Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5772017 - 04/02/13 12:19 AM

Thank you Mike. In the larger version (click "Attachments" in Mike's post) you'll see some of the problems associated with an inadequate StarMask setting. Play around with the settings as recommended by jsines above so that the mask blends in with the surroundings better.

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5772155 - 04/02/13 04:32 AM

Everyone, first draft of PixInsight instructions is available here.... Any feedback/corrections welcome. Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5780356 - 04/05/13 07:07 PM Attachment (7 downloads)

When saving FIT->TIFF in PixInsight, the following message comes up; what is happening here? Is any data being lost when saving a FIT as TIFF in PixInsight? How are folks handling image transitions between PixInsight and, say, Photoshop for final touchups? Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
Falcon-
Post Laureate
*****

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5780385 - 04/05/13 07:29 PM

FITS keywords are metadata. In theory you can have FITS keywords set that describe the location in the sky of the picture, the evaluated noise profile, and a whole bunch of other information. With the exception of passing scientific data back and forth, you can basically ignore that warning, since your goal is just outputting a nice picture after processing.

(BTW, those two examples are not entirely random: there is a script that tries to solve the image the same way astrometry.net does and saves that info into FITS keywords; likewise the batch-debayer script can do noise evaluation on each frame and save it in FITS keywords so that ImageIntegration will not have to do its own noise evaluation later. In both cases it just does not matter when it comes time to output a TIFF at the end.)
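
If you are curious what those keywords actually look like, any FITS reader will show them. A quick peek using Python's astropy (the filename is just a placeholder):

Code:

from astropy.io import fits

header = fits.getheader("integration_cropped.fit")   # placeholder filename
print(repr(header[:10]))                             # the first few keyword cards
print(header.get("EXPTIME"))                         # None if that keyword isn't present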


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Falcon-]
      #5780442 - 04/05/13 08:20 PM Attachment (7 downloads)

Thx.

Everyone, here is what I have found out about PixInsight's FIT->TIFF compatibilities:


Note: As we all know PixInsight's FIT is NOT compatible with ImagesPlus

•FIT->32-bit IEEE 754 floating point TIFF seems compatible with ImagesPlus (but NOT Photoshop)

•FIT->32-bit unsigned integer TIFF is neither compatible with ImagesPlus nor Photoshop

•FIT->16-bit unsigned integer TIFF "IS" compatible with both ImagesPlus and Photoshop


Comments/corrections welcome!
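
For what it's worth, the conversion itself is just a rescale from [0, 1] floats to 16-bit integers. Outside PixInsight the same thing could be done in Python along these lines (assuming astropy and tifffile are installed; the filenames are placeholders):

Code:

import numpy as np
from astropy.io import fits
import tifffile

data = np.clip(fits.getdata("final_image.fit").astype(np.float64), 0.0, 1.0)
tiff16 = np.round(data * 65535).astype(np.uint16)      # 32-bit float [0,1] -> 16-bit unsigned
if tiff16.ndim == 3 and tiff16.shape[0] == 3:          # FITS often stores the color planes first
    tiff16 = np.moveaxis(tiff16, 0, -1)                # most TIFF readers expect H x W x 3
tifffile.imwrite("final_image_16bit.tif", tiff16)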

Edited by mmalik (04/05/13 08:24 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
Falcon-
Post Laureate
*****

Reged: 09/11/09

Loc: Gambier Island, BC, Canada
Re: Learning PixInsight new [Re: mmalik]
      #5780451 - 04/05/13 08:28 PM

I generally use 16-bit Unsigned Integer for TIFF output, and if you uncheck the "Associated Alpha Channel" option you can reduce the size of the file a bit (it adds a 4th channel for transparency on top of the Red, Green and Blue channels - generally not needed).

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5781901 - 04/06/13 01:49 PM

Quote:

When saving FIT->TIFF in PixInsight, following message comes up; what is happening here? Is any date being lost saving FIT as TIFF in PixInsight? How are folks handling image transitions between PixInsight and say Photoshop for final touchups, etc. Thx





Why would you need to do "final touchups" in Photoshop?

I can't think of any reason why I would process an image in PixInsight and then use Photoshop on it. Maybe someone who does this can provide an example.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5781982 - 04/06/13 02:28 PM

yes... i would like to hear about that because i can't think of anything i've ever had to do to an image outside of pixinsight...

Post Extras: Print Post   Remind Me!   Notify Moderator  
waassaabee
Postmaster
*****

Reged: 11/26/07

Loc: Central California Coast
Re: Learning PixInsight new [Re: pfile]
      #5794268 - 04/12/13 11:48 AM

Quote:

yes... i would like to hear about that because i can't think of anything i've ever had to do to an image outside of pixinsight...




This is great news for me!! I'm in the process of divorcing Bill Gates and hopping on the MacWagon, so PI is going to be my processing software, and I didn't really want to invest in PS for Mac.


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: pfile]
      #5795670 - 04/13/13 12:02 AM

Quote:

yes... i would like to hear about that because i can't think of anything i've ever had to do to an image outside of pixinsight...




I do not use anything other than PI either, but a couple of things that PS has in the way of plugins that are lacking in PI are "star rounding" and "star removal". Mask making is also perhaps more flexible in PS.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: bluedandelion]
      #5805994 - 04/18/13 01:31 AM

star removal = star masks and atrous wavelets or multiscale median transform. see gerard's video linked from here:

http://pixinsight.com/forum/index.php?topic=5384.0

star rounding... well, that goes against the core philosophy of pixinsight. you can however do deconvolution with an egg-shaped PSF or creative structuring elements in MorphologicalTransformation to fix up the stars.
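
as a rough illustration of the structuring-element idea (this is just grey-scale morphology with scipy, not PI's MorphologicalTransformation, and you'd normally apply it through a star mask):

Code:

import numpy as np
from scipy.ndimage import grey_erosion

def erode_along_one_axis(lum, length=5):
    # a 1 x N element erodes only horizontally, nudging stars elongated in that direction
    # back toward round; rotate or resize the footprint to suit the image
    footprint = np.ones((1, length), dtype=bool)
    return grey_erosion(lum, footprint=footprint)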


Post Extras: Print Post   Remind Me!   Notify Moderator  
Tom and Beth
Post Laureate


Reged: 01/08/07

Loc: Tucson, AZ
Re: Learning PixInsight new [Re: bluedandelion]
      #5809789 - 04/19/13 11:00 PM

Shameless bump. Thanks for this discussion. Mike, thanks for your documentation of the process. I've gone over a bunch of videos, but I find that having a piece of paper in front of me while learning which button starts each function facilitates the process, for me anyway.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Tom and Beth]
      #5809815 - 04/19/13 11:21 PM

Welcome; glad to hear. Regards

Post Extras: Print Post   Remind Me!   Notify Moderator  
Tom and Beth
Post Laureate


Reged: 01/08/07

Loc: Tucson, AZ
Re: Learning PixInsight new [Re: mmalik]
      #5809901 - 04/20/13 12:43 AM

Guess I should have verified this, but which version is your guide based on?

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Tom and Beth]
      #5809938 - 04/20/13 01:25 AM

Quote:

which version is your guide based on




It is mentioned in the doc..., Section II.

(Version: 1.8 RC4)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5870407 - 05/18/13 10:19 PM Attachment (8 downloads)

One gets the following message when trying to calibrate RAWs in PixInsight using the Script > Batch Processing > BatchPreprocessing routine. If this is NOT an optimal way of calibrating light/dark frames in PixInsight, then what is, or is there a right way to calibrate in PixInsight? A detailed response will be appreciated. Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
zerro1
Postmaster
*****

Reged: 08/02/09

Loc: Smokey Point , 48.12°N 122.25...
Re: Learning PixInsight new [Re: mmalik]
      #5870458 - 05/18/13 10:55 PM Attachment (6 downloads)

There is an option in the controls to disable integration. This message indicates that you opted to calibrate, register and integrate the files that you loaded.

There is nothing wrong with the calibration process using the batch script. They are just suggesting that you have more control with the options provided by the standard, menu-driven ImageIntegration process rather than the script.

You can disable the integration by unchecking the apply box seen in the attached screen capture.

Edited by zerro1 (05/18/13 11:05 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: zerro1]
      #5870750 - 05/19/13 04:21 AM

did you read what the dialog box says? it does not say anything about calibration, it is talking exclusively about integration.

that dialog box contains just about as detailed an explanation as one could hope for.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5870760 - 05/19/13 04:38 AM

Understood; thanks Robert/pfile.

So what is the proper/recommended use of 'Batch Preprocessing Script'?

Would checking 'Calibrate only' be a good use of 'Batch Preprocessing Script' then?


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5871417 - 05/19/13 01:23 PM

you can do it either way, there's no proper or recommended way. if you think you need to tune your rejection parameters, then you should use II to integrate the output frames from the script.

Post Extras: Print Post   Remind Me!   Notify Moderator  
bouffetout
scholastic sledgehammer


Reged: 11/21/12

Loc: Canada
Re: Learning PixInsight new [Re: pfile]
      #5872180 - 05/19/13 06:38 PM

Just looked at what came out of DSS. I saved the image as-is from DSS, then looked in the folder and noticed the autosave is darker than the one I saved from the screen with the histogram (or colour curves) in DSS.
When I put them in PixInsight and do a transfer function stretch on them, the two turn out very different...
Are they both still linear? Which one should I process in PixInsight???
Thank you for helping me!
Maxx


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: pfile]
      #5872300 - 05/19/13 07:39 PM

Rob, Thanks for the link to Gerard's videos. They will be most useful.

In any case, I have used Deconvolution and MorphologicalTransformation lately and they work well. However, the Point Spread Function is applied to all stars. I found the following post by Carlos Milovic on the PI forums that says, "Deconvolutions are applied isotropically over the entire image. That means, the PSF doesn't change. One way that may be used to deal with those images is to divide the image, and dynamically change the PSF over those small sections." [see http://pixinsight.com/forum/index.php?topic=3793.0]

My new imaging scope has a flatter field, and I am planning to get a field flattener in the future; both should help.

Ajay

Edited by bluedandelion (05/19/13 07:40 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: bluedandelion]
      #5872355 - 05/19/13 08:07 PM

gerard announced a couple more videos today, dealing with pixelmath. if they are anything like his other videos they are required viewing!

right, the PSF is going to vary across the image due to CCD tip/tilt or even just the normal optical aberrations that your particular OTA exhibits. flatteners and good collimation can help this. but i think even in the presence of these OTA artifacts you can still get very good results from deconvolution.

unless you mask the image, the PSF is applied to the whole image, not just the stars, during deconvolution. lately i have been masking the background and the cores of bright stars during deconvolution, otherwise you can still get ringing artifacts at the cores of stars. it may not actually be necessary to mask the background if you play with the wavelet regularization parameters, though. i've played with this and noticed that the low SNR areas are left untouched if you get it right.


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: pfile]
      #5872863 - 05/19/13 11:51 PM

Quote:


unless you mask the image, the PSF is applied to the whole image, not just the stars, during deconvolution.




Understood. That is the point of deconvolution, isn't it? I do use local deringing support for bright stars and a mask for the background.

See this large scale version of M31 I posted recently. Deconvolution worked well to bring out details in the galaxy and for stars near the center. The elongated stars near the corners I'll just have to live with.

I will have to spend some time with Gerard's videos.

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: bluedandelion]
      #5873175 - 05/20/13 07:44 AM

Quote:

I will have to spend some time with Gerard's videos.




Would you have the link?


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5873254 - 05/20/13 08:50 AM

Once done with Batch Preprocessing and ImageCalibration, is Batch DeBayer the proper/logical next step for all processing? (i.e., before performing star alignment & integration)

Edited by mmalik (05/20/13 08:52 AM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5873325 - 05/20/13 09:34 AM

If I pick the 'Calibrate only' option in Batch Preprocessing (BPP), it creates a master dark file and _c.fit files. My questions:

Is running ImageCalibration after BPP redundant?

If not, then when running ImageCalibration should I use the _c.fits created in BPP, or should I use the CR2s again as target frames?


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #5873358 - 05/20/13 09:57 AM

Quote:

If I were to pick 'Calibrate only' option in Batch Preprocessing (BPP), it creates a master dark file and _c.fit files. My questions:

Is running Image Calibration after BPP redundant?

If not, then when running Image Calibration should I use_c.fits created in BPP or should I use CR2s again as target frames?




My findings/dilemmas are:

_c.fit files produced (from CR2s) in BPP with the 'Calibrate only' option are about 70MB each

_c.fit files produced (from CR2s) in ImageCalibration are about 211MB each


Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: mmalik]
      #5873423 - 05/20/13 10:32 AM

Quote:

Quote:

I will have to spend some time with Gerard's videos.




Would you have the link?





http://pixinsight.com/forum/index.php?topic=5384.0


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: bouffetout]
      #5873499 - 05/20/13 11:12 AM

Quote:

Just looked at what came out of DSS. I saved the image as-is from DSS, then looked in the folder and noticed the autosave is darker than the one I saved from the screen with the histogram (or colour curves) in DSS.
When I put them in PixInsight and do a transfer function stretch on them, the two turn out very different...
Are they both still linear? Which one should I process in PixInsight???
Thank you for helping me!
Maxx




If you only have those two options, I'd use the "as-is" image, the one where you didn't apply the histogram in DSS.

However, you'll probably get better results if you use the BatchPreProcessing script in PixInsight, and then the ImageIntegration process to stack the resulting FIT files from the script.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: pfile]
      #5873513 - 05/20/13 11:24 AM

Quote:

gerard announced a couple more videos today, dealing with pixelmath. if they are anything like his other videos they are required viewing!

right, the PSF is going to vary across the image due to CCD tip/tilt or even just the normal optical aberrations that your particular OTA exhibits. flatteners and good collimation can help this. but i think even in the presense of these OTA artifacts you can still get very good results from deconvolution.

unless you mask the image, the PSF is applied to the whole image, not just the stars, during deconvolution. lately i have been masking the background and the cores of bright stars during deconvolution, otherwise you can still get ringing artifacts at the cores of stars. it may not actually be necessary to mask the background if you play with the wavelet regularization parameters, though. i've played with this and noticed that the low SNR areas are left untouched if you get it right.




Is this because you're using a monochrome CCD? I'm wondering if there is a difference in the use of deconvolution with monochrome CCD images versus DSLR images. Alejandro, in his excellent tutorials over on the forum, seems to primarily use deconvolution to reduce noise in DSLR images. I may be mistaken, though.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5873713 - 05/20/13 01:04 PM

Quote:

Quote:

gerard announced a couple more videos today, dealing with pixelmath. if they are anything like his other videos they are required viewing!

right, the PSF is going to vary across the image due to CCD tip/tilt or even just the normal optical aberrations that your particular OTA exhibits. flatteners and good collimation can help this. but i think even in the presense of these OTA artifacts you can still get very good results from deconvolution.

unless you mask the image, the PSF is applied to the whole image, not just the stars, during deconvolution. lately i have been masking the background and the cores of bright stars during deconvolution, otherwise you can still get ringing artifacts at the cores of stars. it may not actually be necessary to mask the background if you play with the wavelet regularization parameters, though. i've played with this and noticed that the low SNR areas are left untouched if you get it right.




Is this because you're using a monochrome CCD? I'm wondering if there is a difference in the use of deconvolution with monochrome CCD images versus DSLR images. Alejandro, in his excellent tutorials over on the forum, seems to primarily use deconvolution to reduce noise in DSLR images. I may be mistaken, though.




well, the only difference with OSC that i've found is that usually your image is not well-focused in all 3 of the R,G,B planes, due to chromatic aberrations. in theory that should not happen with a reflector but usually there are refractive elements in your image train somewhere, which can cause that problem. anyway what happens if you use a PSF that's too big or too small for the image is that you can get really bad ringing artifacts that can not easily be suppressed with the deringing built in to PI's Deconvolution process. you have to use the right PSF for your image in order for deconvolution to work properly.

i used to deconvolve each channel separately using a different PSF for each channel (G and B were usually similar for me, with R being the outlier) but probably a better technique for OSC images is to make a synthetic L image and deconvolve that, then process the synthetic L and do an LRGB combine like you would do with mono. you can see scott rosen using this technique with great success.

it's worth skimming the wikipedia entry for deconvolution. it's a mathematical technique in which you try to recover the "real" image from the image that formed on your sensor, plus a point spread function (PSF). we're lucky with astronomical images - the stars are pretty much the PSF for the image, since stars are point sources of light. therefore it's reasonably straightforward to do a deconvolution since we can get a pretty good estimate of the PSF.

the main idea is to try to restore a sharp image from a blurred image since we know approximately how it's been blurred. so it's not really for NR, it's for sharpness. and you need to be careful not to apply the deconvolution to noisy areas of your image, or else you'll just create garbage. the SNR has to be reasonably good for deconvolution to produce meaningful results.
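
if you want to play with the math outside PI, scikit-image ships a Richardson-Lucy implementation. a rough sketch (the gaussian PSF and the crude SNR mask below are stand-ins - in PI you'd measure the PSF from the stars and use a proper mask or the wavelet regularization instead):

Code:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def deconvolve_luminance(lum, psf_sigma=2.0, iterations=20, snr_floor=0.05):
    size = int(6 * psf_sigma) | 1                       # odd kernel size
    yy, xx = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * psf_sigma**2))
    psf /= psf.sum()                                    # synthetic gaussian PSF estimate

    decon = richardson_lucy(lum, psf, iterations)       # iteration count passed positionally
    snr_mask = np.clip((gaussian_filter(lum, 8) - snr_floor) / snr_floor, 0.0, 1.0)
    return snr_mask * decon + (1.0 - snr_mask) * lum    # leave the noisy background alone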


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5873743 - 05/20/13 01:16 PM

Quote:

If I were to pick 'Calibrate only' option in Batch Preprocessing (BPP), it creates a master dark file and _c.fit files. My questions:

Is running Image Calibration after BPP redundant?

If not, then when running Image Calibration should I use_c.fits created in BPP or should I use CR2s again as target frames?




you've got to take a step back here. the purpose of the BPP script is to handle calibration and registration of your images. if you are using a mono CCD this can become a file management nightmare - flat subs for each filter, perhaps dark flat subs which match the (possibly different) flat durations, bias subs, dark subs, etc. you may even have different binnings for your RGB vs. L so you have 2 sets of dark subs and 2 sets of bias subs and on and on...

the idea behind BPP is that your image acquisition program has probably put some keywords in the FITS header that say "this image is a flat with filter Red" or "this image has duration 600s" or "this image is a bias frame", and so a computer program can handle all the drudge work of matching up all your calibration subs with your light subs, make masters, and properly calibrate your lights.

the steps that BPP covers are:

1) calibration - bias/dark subtraction and flat division
2) optional cosmetic correction
3) debayering (if images are from an OSC)
4) registration (aligning all subs to a reference frame)
5) integration (though as discussed above, usually you want to tune your pixel rejection parameters, so you probably want to use ImageIntegration rather than just taking the 'preview' output stack from BPP)

the products of BPP are master calibration frames and calibrated, debayered and registered light subs. of course you can tell it "calibrate only" which stops the process early, in case you want to do the registration by hand.

file extensions are:

_c : calibrated image
_c_d : debayered image from calibrated image
_c_d_r : registered image from debayered, calibrated image.

if you are using a mono CCD you'll have

_c and _c_r

if you've turned on cosmetic correction you'll have _cc_ in there somewhere.

so, the _c_r.fit or _c_d_r.fit files are the ones that you'll want to re-load into ImageIntegration to make your final stack.

as for file sizes, the reason for the large files is that CR2 files contain 16-bit integers (and they are losslessly compressed). your master calibration frames are going to be 32-bit float images. when you calibrate one of the CR2 files with the master frame, the output is written out as an uncompressed 32-bit float image. so the file sizes grow considerably.
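
back-of-the-envelope, assuming an ~18 megapixel sensor (an assumption - your pixel count will differ):

Code:

pixels = 18_000_000
print(pixels * 4 / 1e6)        # ~72 MB  : single-plane 32-bit float CFA frame
print(pixels * 3 * 4 / 1e6)    # ~216 MB : three-plane 32-bit float (bayer RGB / debayered) frame

which would line up with the ~70MB and ~211MB figures mentioned earlier, if one output was a single-plane CFA image and the other a 3-plane bayer RGB image.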

rob


Post Extras: Print Post   Remind Me!   Notify Moderator  
zerro1
Postmaster
*****

Reged: 08/02/09

Loc: Smokey Point , 48.12°N 122.25...
Re: Learning PixInsight new [Re: pfile]
      #5874575 - 05/20/13 08:18 PM

Quote:

as for file sizes, the reason for the large files is that CR2 files contain 16-bit integers (and they are losslessly compressed). your master calibration frames are going to be 32-bit float images. when you calibrate one of the CR2 files with the master frame, the output is written out as an uncompressed 32-bit float image. so the file sizes grow considerably.






And why is the default 32bit? Is there some configuration setting that you can set to 16bit and it would remain at that setting?


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5874992 - 05/20/13 11:53 PM

Quote:

if you are using a mono CCD...




Rob, thanks for your detailed response, but I was referring to CR2s (FYI: CR2s are Canon RAW files); not sure why you bring up mono CCDs, plus this is the DSLR forum.


To reiterate, my question is simple: if I use BPP for calibration ONLY, it generates master files (master dark, etc.) and calibrated lights [as expected]. Does this mean that running ImageCalibration after BPP, as a separate operation against the CR2s and the BPP-generated master files (master dark, etc.), is redundant? A simple yes/no answer will help. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5875048 - 05/21/13 12:38 AM

i was trying to explain the origin of the script and why it is useful, because you seem to not understand what it is that the script is supposed to do.

you have 3-4x the number of files for a mono CCD vs. OSC of any kind, thus making the calibration problem 3-4x harder for you.

however, the same applies for an OSC like a DSLR. you might have used an Ha filter or a CLS filter, so you have multiple flat files. your flats still have a shorter duration than your lights, so you need darks that match the flats as well as darks that match your lights. BPP can do everything i described for DSLR raw files. that includes nikon, sony and canon, and anything else DCRAW can handle.

look, i can't explain this to you any more than i have. if i just told you that the BPP script produces calibrated, registered lights, then what do you think? does it make sense to then calibrate your CR2s from scratch again? why would you do that, unless you are just fond of filling up your disk with 2 copies of the same files?


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: zerro1]
      #5875061 - 05/21/13 12:53 AM

Quote:

Quote:

as for file sizes, the reason for the large files is that CR2 files contain 16-bit integers (and they are losslessly compressed). your master calibration frames are going to be 32-bit float images. when you calibrate one of the CR2 files with the master frame, the output is written out as an uncompressed 32-bit float image. so the file sizes grow considerably.






And why is the default 32bit? Is there some configuration setting that you can set to 16bit and it would remain at that setting?




it's because when you make a master calibration frame, you're averaging together a bunch of 16-bit values. invariably the pixel values are going to be floating point numbers.

then you subtract or divide one of these masters when calibrating. even though the light frame is i16, the result is again f32.

i suppose you *could* convert back to i16 but you will be losing precision. there's no global setting for this as far as i know, because PI always deals with images in f32 format behind the scenes.

i just read the source for the script and it looks like it's saving the masters without any casts or conversions, so since ImageIntegration returns f32 images, the masters get written as f32. as far as i know there's no global setting for this; you'd have to edit the script.
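
a tiny illustration of why the output has to be float (the numbers are arbitrary):

Code:

import numpy as np

subs = np.array([[1000, 1003], [1001, 1004], [1001, 1006]], dtype=np.uint16)
master = subs.mean(axis=0)                  # float64: [1000.667, 1004.333]
back = np.round(master).astype(np.uint16)   # converting back to i16 rounds that precision away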


Post Extras: Print Post   Remind Me!   Notify Moderator  
zerro1
Postmaster
*****

Reged: 08/02/09

Loc: Smokey Point , 48.12°N 122.25...
Re: Learning PixInsight new [Re: pfile]
      #5875079 - 05/21/13 02:11 AM

Thank You

Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: pfile]
      #5876608 - 05/21/13 06:24 PM

Quote:


well, the only difference with OSC that i've found is that usually your image is not well-focused in all 3 of the R,G,B planes, due to chromatic aberrations. in theory that should not happen with a reflector but usually there are refractive elements in your image train somewhere, which can cause that problem. anyway what happens if you use a PSF that's too big or too small for the image is that you can get really bad ringing artifacts that can not easily be suppressed with the deringing built in to PI's Deconvolution process. you have to use the right PSF for your image in order for deconvolution to work properly.

i used to deconvolve each channel separately using a different PSF for each channel (G and B were usually similar for me, with R being the outlier) but probably a better technique for OSC images is to make a synthetic L image and deconvolve that, then process the synthetic L and do an LRGB combine like you would do with mono. you can see scott rosen using this technique with great success.

it's worth skimming the wikipedia entry for deconvolution. it's a mathematical technique in which you try to recover the "real" image from the image that formed on your sensor, plus a point spread function (PSF). we're lucky with astronomical images - the stars are pretty much the PSF for the image, since stars are point sources of light. therefore it's reasonably straightforward to do a deconvolution since we can get a pretty good estimate of the PSF.

the main idea is to try to restore a sharp image from a blurred image since we know approximately how it's been blurred. so it's not really for NR, it's for sharpness. and you need to be careful not to apply the deconvolution to noisy areas of your image, or else you'll just create garbage. the SNR has to be reasonably good for deconvolution to produce meaningful results.





I'm learning how to incorporate deconvolution into my workflow, and this helps a lot. Thanks.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5877136 - 05/21/13 11:17 PM

you're welcome, glad i can help.

Post Extras: Print Post   Remind Me!   Notify Moderator  
bluedandelion
Carpal Tunnel
*****

Reged: 08/17/07

Loc: Hazy Hollow, Western WA
Re: Learning PixInsight new [Re: pfile]
      #5879146 - 05/22/13 09:51 PM

Here's a nice tutorial on deconvolution:

http://www.manuelj.com/Tutorials/Deconvolution/22071685_r3Z6QC#!i=1767553866&...

Ajay


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5882540 - 05/24/13 03:26 PM Attachment (9 downloads)

Quote:

Quote:

Quote:


The most important bit is that it is set to "Create RAW Bayer CFA image"




Quote:


RGB raw works also... i think this is just a matter of preference.





The "DSLR_RAW work flow tools" thread says to select "Create RAW debayer image". I selected the first one, which is "Create RAW Debayer" instead of the second one, which is "Create RAW Bayer CFA image".

Are there pluses/minuses to each selection?

Thanks!
Jeff




it's "create raw bayer image" vs. "create raw bayer CFA image".

the raw bayer image is a 3-plane image. the CFA image is a monochrome image. the data represented by both images is the same, but represented in different ways.

in the RGB image, the red pixels are on the red plane, green on the green and blue on the blue. on a given plane, wherever there would be pixels of a different color, there are black pixels on that plane. the CFA image is sort of more like the sensor itself - for a canon camera, the red pixel is next to a green pixel and above the other green pixel. the blue pixel is diagonally opposed to the red pixel.

at some point in PixInsight history the Debayer process could only handle one type of image... now i can't remember which. but now Debayer can handle CFA or RGB bayer images so it does not matter which one you use.




I would like to confirm whether the following are the correct RAW format settings for all types of DSLR RAWs (in-camera as well as out-of-camera NR RAWs).


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5883207 - 05/24/13 10:25 PM

yes, that is correct. raw bayer CFA also works - that creates a mono image similar to what would come off of an OSC CCD camera.

raw bayer (as you have checked) creates a 3-plane image where the bayered red pixels are on one plane, green on another and blue on another.
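
a tiny numpy illustration of the two representations, assuming an RGGB pattern (the actual pattern is camera-specific):

Code:

import numpy as np

cfa = np.arange(16, dtype=np.float32).reshape(4, 4)   # 4x4 mosaic straight off the sensor (mono/CFA)

rgb = np.zeros((4, 4, 3), dtype=np.float32)           # the same data as a 3-plane raw bayer image
rgb[0::2, 0::2, 0] = cfa[0::2, 0::2]                  # R pixels on the red plane
rgb[0::2, 1::2, 1] = cfa[0::2, 1::2]                  # G pixels (first row of each 2x2)
rgb[1::2, 0::2, 1] = cfa[1::2, 0::2]                  # G pixels (second row of each 2x2)
rgb[1::2, 1::2, 2] = cfa[1::2, 1::2]                  # B pixels on the blue plane; zeros elsewhere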


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5883418 - 05/25/13 01:51 AM Attachment (16 downloads)

Thanks Rob.

Would the same be true of the 'Batch DeBayer' script, i.e., is it applicable to all types of DSLR RAWs (in-camera as well as out-of-camera NR RAWs)?


Post Extras: Print Post   Remind Me!   Notify Moderator  
bouffetout
scholastic sledgehammer


Reged: 11/21/12

Loc: Canada
Re: Learning PixInsight Need help ! new [Re: mmalik]
      #5883862 - 05/25/13 12:03 PM

I am working on a picture of the Pelican Nebula taken recently. So far I have done Batch Preprocessing / integration / calibration / Batch Debayer and ImageIntegration... then HistogramTransformation / ACDNR / ColourCalibration.
Now I don't know where to go next! There are a lot of white pixels all over the picture (noise???), and the colour is way off...
Note that there are no darks in the picture, but I did a preprocess in DSS, then a Histogram Transformation / Debayer / Histogram Transformation, and it turned out quite nice.
http://i1355.photobucket.com/albums/q713/BorisSputnik/Astrophotography/500x60...

But now I need to do everything in PixInsight.
So this is the whole picture where I'm at now:
https://www.dropbox.com/s/c46zfz8so8oa94h/integration_clone%20p%C3%A9lican%20...

Thank you for helping me !


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5883980 - 05/25/13 01:30 PM

Quote:

Thanks Rob.

Would same be true of 'Batch DeBayer Script', i.e., applicable to all types of DSLR RAWs (in-camera as well as out-camera NR RAWs)?




yes, although if you are running BPP the debayer step happens automatically as part of the run (between calibration and registration), so no need for batchdebayer.

if you used in-camera darks and you don't intend to use flats, then you can use batchdebayer to debayer your lights, as long as the DSLR_RAW file handler is set to make raw cfa or raw mono files.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight Need help ! new [Re: bouffetout]
      #5884022 - 05/25/13 01:54 PM

Quote:

So far I did the

1. Batch Processing/
2. Integration/
3. Calibration/
4. Batch Debayer
5. Image integration...
6. Histogram Transformation/
7. ACDNR/
8. Color Calibration

Now I don't know where to go next !




I would suggest the following 'complete' order:

1. RAW File Processing & Calibration [Linear data]
2. Alignment (Registration), Integration and Crop [Linear data]
3. DBE (Gradient Removal) [Linear data]
4. Color Calibration [Optional] [Linear data]
5. SCNR (Noise Reduction) [Linear data]
6. ATrous Wavelet Transform (Noise Reduction) [Linear data]

---Linear/NON-Linear demark---

7. Histogram Transformation
8. HDR Multiscale Transform
9. Local Histogram Equalization (Contrast)
10. ACDNR (Noise Reduction)
11. Curves Transformation (Saturation)


More details in this... doc (page 66). Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
bouffetout
scholastic sledgehammer


Reged: 11/21/12

Loc: Canada
Re: Learning PixInsight Need help ! new [Re: mmalik]
      #5884366 - 05/25/13 05:11 PM

Quote:

Quote:

So far I did the

1. Batch Processing/
2. Integration/
3. Calibration/
4. Batch Debayer
5. Image integration...
6. Histogram Transformation/
7. ACDNR/
8. Color Calibration

Now I don't know where to go next !




I would suggest following 'complete' order:

1. RAW File Processing & Calibration [Linear data]
2. Alignment (Registration), Integration and Crop [Linear data]
3. DBE (Gradient Removal) [Linear data]
4. Color Calibration [Optional] [Linear data]
5. SCNR (Noise Reduction) [Linear data]
6. ATrous Wavelet Transform (Noise Reduction) [Linear data]

---Linear/NON-Linear demark---

7. Histogram Transformation
8. HDR Multiscale Transform
9. Local Histogram Equalization (Contrast)
10. ACDNR (Noise Reduction)
11. Curves Transformation (Saturation)


More details in this... doc (page 66). Thx




Thank you for showing me how I should have written out all the steps I took; it's much easier to read... Next time I will do the same! And thank you for your answer and for the link you provided. I downloaded it and looked at it, and it's exactly what I was looking for... Very well done, step by step with pictures; it's perfect!
Now I'm going back to my reading.
Thanks again!
Maxx


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5886203 - 05/26/13 08:48 PM

Quote:

if you are running BPP the debayer step happens automatically as part of the run (between calibration and registration), so no need for batch debayer.




Everyone, help me clarify a few things:

With DSLR_RAW set to 'Create RAW Bayer image', would the following be the right ways to go about processing ICNR (in-camera NR) and OCNR (out-of-camera NR) images?


For ICNR:
•SKIP BPP/Image Calibration?
•Go straight to Batch DeBayer and use noise-reduced CR2s to create DeBayered FITs?
•Use DeBayered FITs to do Star Alignment & Image Integration?


For OCNR:
ONE OPTION:
•Use BPP to create master calibration files (master darks, etc.)?
•Use BPP created master files to calibrate light CR2s in 'Image Calibration' to create calibrated FITs?
•Run Batch DeBayer on calibrated FITS to create DeBayered FITs?
•Use DeBayered FITs to do Star Alignment & Image Integration?


For OCNR:
ALTERNATE OPTION:
•USE BPP to do everything including [Image Calibration, Batch DeBayer, Star Alignment & Image Integration]?


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5886531 - 05/27/13 03:12 AM

those flows are okay but not sure why you'd use only part of BPP. might as well let it do the whole thing, except for ImageIntegration which should always be tuned by hand.

for ICNR you may still want to apply flats, in which case you could do ImageCalibration followed by BatchDebayer, StarAlignment and ImageIntegration.

if just using flats then you either want to make your master flat from bias-calibrated flat subs, or load a master bias into ImageCalibration and turn on calibration of the master flat.

i am not sure if BPP would run if you only loaded flats.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5886559 - 05/27/13 04:29 AM Attachment (15 downloads)

Thanks Rob.

Another thing I noticed is that the image (FIT) looks decent upon integration (via STF) but looks quite granular/noisy after the dynamic crop (via STF); is it supposed to be that way? I'm not sure why there would be so much disparity, pre and post dynamic crop, in how STF shows the same image. Has anyone noticed this behavior? Thx


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5886993 - 05/27/13 12:17 PM

Quote:

Thanks Rob.

Would the same be true of the 'Batch DeBayer Script', i.e., applicable to all types of DSLR RAWs (in-camera as well as out-of-camera NR RAWs)?




I don't think so. I think the bayer/mosaic pattern is camera specific. I think most Canons are RGGB, for example, but I don't think all DSLRs have the same pattern.
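
To make the "camera specific" point concrete, here is a toy numpy sketch of pulling color planes out of an RGGB mosaic; a camera with a different layout (GBRG, GRBG, etc.) just shifts the offsets, which is why the debayer tool has to be told, or has to detect, the right pattern. This is only an illustration, not how the Debayer process is actually implemented:

Code:

import numpy as np

cfa = np.random.rand(8, 8)   # made-up single-plane CFA frame (one value per pixel)

# RGGB repeats every 2x2 pixels:
#   R G
#   G B
r  = cfa[0::2, 0::2]
g1 = cfa[0::2, 1::2]
g2 = cfa[1::2, 0::2]
b  = cfa[1::2, 1::2]

# crude half-resolution "superpixel" RGB; real debayering interpolates the
# missing samples back to full resolution instead
rgb = np.dstack([r, (g1 + g2) / 2.0, b])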


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight Need help ! new [Re: mmalik]
      #5887003 - 05/27/13 12:26 PM

Quote:



I would suggest the following 'complete' order:

1. RAW File Processing & Calibration [Linear data]
2. Alignment (Registration), Integration and Crop [Linear data]
3. DBE (Gradient Removal) [Linear data]
4. Color Calibration [Optional] [Linear data]
5. SCNR (Noise Reduction) [Linear data]
6. ATrous Wavelet Transform (Noise Reduction) [Linear data]

---Linear/NON-Linear demark---

7. Histogram Transformation
8. HDR Multiscale Transform
9. Local Histogram Equalization (Contrast)
10. ACDNR (Noise Reduction)
11. Curves Transformation (Saturation)






There really isn't a "complete" order for processing images in PixInsight. Once you get past the integration phase and have a stacked image, you need to identify specific problems and find solutions to those specific problems. Each image will have its own problems. There's noise, and ways to remove it. There's a background gradient, and ways to remove it, etc., etc. PixInsight isn't a tool where you follow steps 1 through 10 and get a processed picture. It seems like you're looking for that type of program, and PI isn't it.

For example, you also have the option of ABE in addition to DBE to remove gradients. Inside ABE and DBE, you have the option of division or subtraction. For division and subtraction, you've also got the option of normalizing the image after. It's like a tree that branches out in different directions.
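
To picture the subtraction vs. division choice: subtraction fits an additive gradient (light pollution added on top of the signal), while division fits a multiplicative one (vignetting scaling the signal). A toy numpy sketch of the idea only, not the actual ABE/DBE math; the signal and gradient models are made up:

Code:

import numpy as np

x = np.linspace(0.0, 1.0, 512)
signal   = 0.10 + 0.05 * np.exp(-((x - 0.5) ** 2) / 0.01)   # made-up object on a flat sky
gradient = 0.02 + 0.04 * x                                   # made-up smooth background model

# additive case (light pollution): subtract the model, keep a small pedestal
additive  = signal + gradient
fixed_sub = additive - gradient + gradient.mean()

# multiplicative case (vignetting-like): divide by the normalized model
vignetted = signal * (gradient / gradient.mean())
fixed_div = vignetted / (gradient / gradient.mean())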

SCNR is usually done after stretching the data, usually right after Histogram Transform.
Background Neutralization is usually done before Color Calibration.
Some also do Atrous Wavelet Transform after removing the gradient and before Background Neutralization.
Local Histogram Equalization is usually the last step for me.
You're also missing the masks that are applied to the star cores when doing HDR, etc.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: pfile]
      #5887012 - 05/27/13 12:30 PM

Quote:

those flows are okay but not sure why you'd use only part of BPP. might as well let it do the whole thing, except for ImageIntegration which should always be tuned by hand.




I haven't found a way to run cosmetic correction inside BPP using only CR2 files, since the Cosmetic Correction tool won't accept them when I try to create a process icon. I think the only way to use Cosmetic Correction when you're starting out with CR2 files is to stop at Calibrate Only. I may be wrong, though.

The Cosmetic Correction script is one more reason why you should not do ICNR, though. You don't get the option of removing hot/cold pixels without a master dark.


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5887018 - 05/27/13 12:35 PM

Quote:

Thanks Rob.
Another thing I noticed is that the image (FIT) looks decent upon integration (via STF) but looks quite granular/noisy after a dynamic crop (via STF); is it supposed to be that way? I'm not sure why there would be so much disparity pre and post dynamic crop in how STF shows the same image. Has anyone noticed this behavior? Thx





...because you don't understand how the Screen Transfer Function works. You're removing all those black pixels during the crop, and that'll adjust what STF uses for an auto-stretch. You're actually seeing a "more realistic" image with the STF after the crop.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: jsines]
      #5887023 - 05/27/13 12:38 PM

Quote:

Quote:

those flows are okay but not sure why you'd use only part of BPP. might as well let it do the whole thing, except for ImageIntegration which should always be tuned by hand.




I haven't found a way to run cosmetic correction inside BPP using only CR2 files, since the Cosmetic Correction tool won't accept them when I try to create a process icon. I think the only way to use Cosmetic Correction when you're starting out with CR2 files is to stop at Calibrate Only. I may be wrong, though.

However, Cosmetic Correction is also one more reason why you should not do ICNR. You don't get the option of removing hot/cold pixels without a master dark.




good point, i was not aware of that. i should probably just never say anything about BPP since i don't use it. i'm okay with doing it all by hand - i have a lot of saved process icons that help.

fits has this 'incremental reading' feature that other image formats don't support. i think this is why II requires fits files - it works on the images in slices and that requires incremental reading. maybe CC works the same, not sure.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5887027 - 05/27/13 12:40 PM

Quote:

Thanks Rob.

Another thing I noticed is that the image (FIT) looks decent upon integration (via STF) but looks quite granular/noisy after a dynamic crop (via STF); is it supposed to be that way? I'm not sure why there would be so much disparity pre and post dynamic crop in how STF shows the same image. Has anyone noticed this behavior? Thx




this is normal. STF uses the statistics of the image to compute the auto-stretch. if you have a bunch of black pixels around the edge, that affects the stats and you get a different STF computation.

the noise is there in your first image, you just don't see it because it is not stretched as hard. if you apply the STF that was computed on the cropped image to the uncropped image, you'll see the same thing.
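
here's a toy numpy sketch of the mechanism, if it helps: a robust auto-stretch is driven by the image's median and MAD-style dispersion, and a border of zero pixels drags those statistics around, so a different stretch gets computed. the constants below are only illustrative, not PI's exact auto-STF parameters:

Code:

import numpy as np

rng = np.random.default_rng(1)
sky = np.clip(0.05 + 0.01 * rng.standard_normal((400, 400)), 0.0, 1.0)  # made-up background

bordered = np.zeros((440, 440))
bordered[20:420, 20:420] = sky          # same data, plus a black registration border

for name, im in (("with black border", bordered), ("cropped", sky)):
    med = np.median(im)
    mad = np.median(np.abs(im - med))                 # robust dispersion estimate
    shadows = max(med - 2.8 * mad, 0.0)               # illustrative shadow clipping point
    print(name, "median=%.4f  mad=%.4f  shadows=%.4f" % (med, mad, shadows))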


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5887376 - 05/27/13 03:58 PM

Quote:

Quote:

Would the same be true of the 'Batch DeBayer Script', i.e., applicable to all types of DSLR RAWs (in-camera as well as out-of-camera NR RAWs)?




I don't think so. I think the Bayer/mosaic pattern is camera specific. I think most Canons are RGGB, for example, but I don't think all DSLRs have the same pattern.




If you re-read, my question was different. I was asking if the Batch DeBayer script can or should be used for both ICNR and OCNR image acquisition scenarios. (I'm not asking about the Bayer pattern per se.)


Note: On a side note, I would like to confirm whether the Canon 60Da uses an RGGB Bayer pattern.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5887411 - 05/27/13 04:20 PM

Quote:

Quote:



I would suggest the following 'complete' order:

1. RAW File Processing & Calibration [Linear data]
2. Alignment (Registration), Integration and Crop [Linear data]
3. DBE (Gradient Removal) [Linear data]
4. Color Calibration [Optional] [Linear data]
5. SCNR (Noise Reduction) [Linear data]
6. ATrous Wavelet Transform (Noise Reduction) [Linear data]

---Linear/NON-Linear demark---

7. Histogram Transformation
8. HDR Multiscale Transform
9. Local Histogram Equalization (Contrast)
10. ACDNR (Noise Reduction)
11. Curves Transformation (Saturation)






There really isn't a "complete" order for processing images in PixInsight. Once you get past the integration phase and have a stacked image, you need to identify specific problems and find solutions to those specific problems. Each image will have its own problems. There's noise, and ways to remove it. There's a background gradient, and ways to remove it, etc., etc. PixInsight isn't a tool where you follow steps 1 through 10 and get a processed picture. It seems like you're looking for that type of program, and PI isn't it.

For example, you also have the option of ABE in addition to DBE to remove gradients. Inside ABE and DBE, you have the option of division or subtraction. For division and subtraction, you've also got the option of normalizing the image after. It's like a tree that branches out in different directions.

SCNR is usually done after stretching the data, usually right after Histogram Transform.
Background Neutralization is usually done before Color Calibration.
Some also do Atrous Wavelet Transform after removing the gradient and before Background Neutralization.
Local Histogram Equalization is usually the last step for me.
You're also missing the masks that are applied to the star cores when doing HDR, etc.




You are killing me; where were you when I asked about all this 10 pages back, at the start of this thread?

For learners like me and others this was the core point: what basic steps to perform and in which order. This was the order we all agreed to, including Harry. Read this... to refresh your memory. I am not mad, but a little upset when experts like you take their leisurely time to chime in when things have progressed this far. Sorry if I am sounding harsh, but I am not; I just would like folks to chime in when help is needed the most, and from an ordering perspective that was at the start of this thread. For the life of me, put what you think is the correct BASIC order in 1.2.3... form instead of verbalizing what's wrong or right. Remember, such an order would be for "learners" and "starters", not "experts". Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight Need help ! new [Re: mmalik]
      #5889066 - 05/28/13 02:16 PM

Quote:


You are killing me; where were you when I asked about all this 10 pages back, at the start of this thread?

For learners like me and others this was the core point: what basic steps to perform and in which order. This was the order we all agreed to, including Harry. Read this... to refresh your memory. I am not mad, but a little upset when experts like you take their leisurely time to chime in when things have progressed this far. Sorry if I am sounding harsh, but I am not; I just would like folks to chime in when help is needed the most, and from an ordering perspective that was at the start of this thread. For the life of me, put what you think is the correct BASIC order in 1.2.3... form instead of verbalizing what's wrong or right. Remember, such an order would be for "learners" and "starters", not "experts". Regards




Harry said in that comment on page 1 - "...depends on your image".

I said this on 3/28/13, 2 months ago (on page 7):

Quote:


You're starting to get into the stage of PixInsight where an image can branch out in different directions. You can do some steps on an image and get one result, and you can do different steps on the same image and get different results. It's not really a 10 step process where all 10 steps are followed in a certain way. One of the benefits of PI is that you can have multiple copies of the same image open, even in different windows (the 4 at the bottom), and then try different things on the same image. Harry does this in his video on HDRWavelets, and I do this all the time for different steps.




It depends on the image. You've already been given a "basic" order of processing on page 1, you've been referred to Harry's tutorials, and I've provided you a sample workflow using your own data (pages 7-8). People have used your data in this thread to create a final image using different steps. But the steps you take depend on the image and the specific problem you want to solve. I'm not familiar with any post-processing program where you perform the same steps 1-10, in the same order, on different images and get finished results.

Quote:

I am not mad but little upset when experts like you take their leisurely time to chime in when things have progressed this far.




In no way do I consider myself an expert in PixInsight. I bought the program only about 6-8 months ago, and I still consider myself a noob. The users at the PI forum are the experts. I try to help here when I can, and I admit when I don't know if I'm right about something.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5889880 - 05/28/13 10:14 PM

Quote:

In no way do I consider myself an expert in PixInsight. I bought the program only about 6-8 months ago, and I still consider myself a noob.




Understood; as the subject of this thread says, "Learning PixInsight", let's all learn with an open mind and leave the assertions to the experts.


Another request I have for all, not just jsines: let's stop the mantra of "every image is different", "depends on your image", etc.; we all know that. We also all understand there are multiple ways of doing things in PixInsight. There are basics that we learners all need to know/learn before we get to the next stage.


Everyone, with that said, is the following general flow acceptable? Additions/modifications welcome!


1. DSLR_RAW Format Preferences [ONE time/FIRST time setting] (applicable to both, ICNR/OCNR)
2. Calibration [Linear data] (applicable to OCNR only)

•Batch Preprocessing Script ('Calibrate only' option; masters to be created...)
OR
•Image Calibration (masters available...)

3. Batch Debayer Script (applicable to both, ICNR/OCNR)
4. Star Alignment (Registration), Integration and Crop [Linear data]
5. DBE (Gradient Removal) [Linear data]
6. Color Calibration [Linear data]
7. ATrous Wavelet Transform (Noise Reduction) [Optional?] [Linear data]

---Linear/NON-Linear demark---

8. Histogram Transformation
9. SCNR (Noise Reduction) [Optional?]
10. HDR Multiscale Transform [IMPORTANT]
11. Local Histogram Equalization (Contrast) [Optional?]
12. ACDNR (Noise Reduction)
13. Curves Transformation (Saturation)

Edited by mmalik (05/29/13 02:54 AM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #5890005 - 05/28/13 11:29 PM

before #6 you must do BackgroundNeutralization or else the results of #6 will be garbage...

on #6 it's really important to set the background and foreground thresholds properly - turn on "show white reference" and "show background reference" while you are tuning that. if you get two white squares (or two black squares) then your thresholds are set wrong. same goes for BackgroundNeutralization; you need to make sure you are only picking up background pixels.

for galaxies the best thing to do for the white reference is to draw a preview over the center of the galaxy and then tell CC to use a region of interest, taking the coordinates from the preview you just created. be sure to turn off structure detection in this situation.

for nebula and other non-galaxy DSOs, you should turn on structure detection and don't use a preview. again tune the thresholds so that the white reference is a bunch of white dots on a black background - you are isolating the stars because on average, the color of all the stars together will be white.

in both situations, for the background reference, draw a preview over a part of the background and take the region of interest from that preview. again tune the background thresholds so that you have a white field with black stars - you want to exclude the stars from the background reference.


color calibration is pretty much mandatory if you are dealing with RGB images. for narrowband the color is kind of arbitrary, so it's not important.

rob

p.s. the reason to use regions of interest rather than the previews directly is to make the CC process re-usable. if you use the previews directly then the CC process will not work on another image unless you define exactly the same previews on that image. this may not make sense at first, but say you collect 5h of data on a subject and then process the image. later you collect 5 more hours and you have a new image. if you saved a process icon for your BN and CC you can just use them again on the new image without worrying about replicating the previews.
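
if it helps to see what BackgroundNeutralization is doing with that background preview, conceptually it measures the per-channel level inside the reference region and shifts the channels so those levels match, leaving a neutral grey sky. a toy numpy sketch of the idea only, not PI's actual implementation; the image and preview coordinates are made up:

Code:

import numpy as np

rng = np.random.default_rng(2)
rgb = rng.random((200, 200, 3)) * 0.02 + np.array([0.09, 0.06, 0.05])  # made-up reddish sky

bg = rgb[150:180, 20:60, :]                # the "preview" over a star-free background patch
bg_level = np.median(bg, axis=(0, 1))      # per-channel background estimate

target = bg_level.mean()                   # neutral target level
neutralized = rgb - (bg_level - target)    # shift each channel so its background matches

print("before:", bg_level)
print("after: ", np.median(neutralized[150:180, 20:60, :], axis=(0, 1)))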


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #5890226 - 05/29/13 06:00 AM

Thanks Rob, good info; what do you think of this general flow?


1. DSLR_RAW Format Preferences [ONE time/FIRST time setting] (applicable to both, ICNR/OCNR)
2. Calibration [Linear data] (applicable to OCNR only)

•Batch Preprocessing Script ('Calibrate only' option; masters to be created...)
OR
•Image Calibration (masters available...)

3. Batch Debayer Script (applicable to both, ICNR/OCNR)
4. Star Alignment (Registration), Integration and Crop [Linear data]
5. DBE (Gradient Removal) [Linear data]
6. Background Neutralization [Linear data]
7. Color Calibration (for RGB images) [Linear data]
8. ATrous Wavelet Transform (Noise Reduction) [Optional?] [Linear data]

---Linear/NON-Linear demark (see note below)---

9. Histogram Transformation
10. SCNR (Noise Reduction) [Optional?]
11. HDR Multiscale Transform [IMPORTANT]
12. Local Histogram Equalization (Contrast) [Optional?]
13. ACDNR (Noise Reduction)
14. Curves Transformation (Saturation)
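
For reference on the linear/non-linear demark above: HistogramTransformation applies a midtones-type stretch, and after that the pixel values are no longer proportional to the captured signal. A rough numpy sketch of the standard midtones transfer function (the sample values are made up):

Code:

import numpy as np

def mtf(x, m):
    # midtones transfer function: maps x = m to 0.5 while keeping 0 -> 0 and 1 -> 1;
    # this kind of curve is what turns linear data into stretched, non-linear data
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

linear    = np.array([0.001, 0.01, 0.05, 0.20])   # made-up linear pixel values
stretched = mtf(linear, 0.02)                      # a small midtones value = a hard stretch

# faint values are lifted far more than bright ones, so ratios between pixels
# are no longer preserved, i.e. the data is no longer linear
print(stretched)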


Post Extras: Print Post   Remind Me!   Notify Moderator  
jsines
sage
*****

Reged: 09/06/11

Loc: Berkley. Michigan
Re: Learning PixInsight new [Re: mmalik]
      #5890558 - 05/29/13 11:32 AM

Quote:

Thanks Rob, good info; what do you think of this general flow?





I think you're really just re-inventing the wheel at this point, since Harry's video tutorials are considered the beginner's general work flow. Everyone who is learning PI gets referred to them, and I think they do a great job of explaining all the steps you're trying to re-create.

For beginners learning PixInsight, I'd skip the Calibration, Batch Debayering, and Star Alignment steps and just use the BatchPreProcessing script. This means you'll need to take separate darks, but the creators of PixInsight ("the experts") say you should do this anyway. The BatchPreProcessing script is also covered in one of Harry's videos.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: jsines]
      #5894205 - 05/31/13 12:35 PM

Harry does SCNR right after color calibration in the video; that's how I had it all along, and it's also what Ajay suggested at the start; Harry seemed OK with it until you and some folks suggested otherwise. Wouldn't SCNR make more sense after color calibration, while one is fixing color in the linear mode? Thx

Post Extras: Print Post   Remind Me!   Notify Moderator  
harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5894435 - 05/31/13 02:39 PM

Hi
indeed I do use SCNR most of the time at the linear stage, but it also works very well when the data is stretched

So use as you please

Regards Harry


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5896792 - 06/01/13 09:17 PM

Thanks Harry; it has been a while since we all heard from you. Please do provide your insights once in a while; they are greatly appreciated. Regards

Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #5955548 - 07/05/13 02:40 AM

I notice PixInsight goes through quite a rapid succession of revisions and updates; I was wondering what happens once one purchases a particular version. Would they be eligible for free updates, and for how long? Is one locked into the version they buy, as with other software? In short, how are minor and major updates handled after one purchases a version of PixInsight?

Post Extras: Print Post   Remind Me!   Notify Moderator  
harry page 1
super member


Reged: 07/25/09

Re: Learning PixInsight new [Re: mmalik]
      #5956016 - 07/05/13 12:18 PM

Hi
I have had PI for 4 years now and received all updates for free.
There is a bit here from PI as we get closer to version 2 of PixInsight, which might be 2014-2015.

Quote from the FAQ page

When we release version 2.0, the price of a commercial 2.x license for existing users will be directly proportional to the time of use of their 1.x licenses.

For example, a user who purchased his or her license a few months before the 2.0 release date will have access to the new version at no cost. A user who purchased a license a few years before the 2.0 release will have to pay the full 2.x license price. These examples should not be taken literally; the exact terms of our upgrade policy will be determined when appropriate, but you get the idea.

Regards

Harry


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: harry page 1]
      #6174051 - 11/03/13 04:27 PM Attachment (5 downloads)

I get the following error when re-activating PixInsight; I had to reinstall the OS and PixInsight, but the original activation code that worked before is not working now. Any ideas how to re-activate PixInsight after a fresh re-install of the OS/PixInsight? Note: I have a purchased license and yes, I DID forget to back up the original license file after I activated the first time. Regards

Edited by mmalik (11/03/13 04:43 PM)


Post Extras: Print Post   Remind Me!   Notify Moderator  
astroricardo
sage
*****

Reged: 11/14/11

Loc: Marietta, GA
Re: Learning PixInsight new [Re: mmalik]
      #6174070 - 11/03/13 04:37 PM

You need your license file; if you have it on another computer, you can just copy the file. It is called pixinsight-license. I think it's even cross-platform.

Post Extras: Print Post   Remind Me!   Notify Moderator  
Peter in Reno
Postmaster
*****

Reged: 07/15/08

Loc: Reno, NV
Re: Learning PixInsight new [Re: mmalik]
      #6174077 - 11/03/13 04:46 PM

Have you looked at the FAQ at:

http://www.pixinsight.com/faq/index.html

There are several FAQ entries related to licensing.

Peter


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: astroricardo]
      #6174080 - 11/03/13 04:46 PM

Quote:

You need your license file, if you have it on another computer you can just copy the file. It is called pixinsight-license. I think it's even cross platform.




I didn't know this and didn't back up the license file before re-installing the OS. Without the original license file available, how can I re-activate PixInsight?


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: Peter in Reno]
      #6174095 - 11/03/13 04:58 PM

Quote:

Have you looked at the FAQ at:

http://www.pixinsight.com/faq/index.html

There are several FAQ entries related to licensing.

Peter




Thanks Peter; I have sent support an email to get a new re-activation key, what a hassle... Thanks for all the help. Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #6174108 - 11/03/13 05:13 PM

OK, it was a quick turnaround from PixInsight support; got it re-activated. Lesson learned... back up your pixinsight-license file after first activation. FYI: I use Win7 and the path to my license file is as follows (this is the file one needs to back up):

C:\Users\<user-name>\pixinsight-license


Thanks again to PixInsight support, Peter & astroricardo!


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #6195824 - 11/15/13 01:00 AM

Thanks Rob for your input on the 'CFA images' option in BPP in the other... thread; I thought I'd post the good stuff here for the record:

Quote:

if you are doing it manually you can do whatever you want, CFA or RGB. both formats are raw formats.

the only way BPP can be told that an input file is from a OSC camera is by checking the "cfa" box. otherwise it assumes that the files are from a mono camera.

when "cfa" is ticked, the batchpreprocessing script passes the format hint "raw cfa" to the DSLR_RAW module, which overrides whatever is set in the DSLR_RAW module (the screenshot you show above.)

attached is the tooltip for the "CFA images" checkbox.






Quote:

I think you're too conservative by one step. it's okay to let BPP do the alignment (registration) of images. it's even okay to let BPP integrate the image, but that integration should be considered as a preview. as i mentioned this is because you'll probably have to iterate a few times setting the pixel rejection sliders and looking at the rejection maps to make sure you got the rejection parameters right. you want to reject as few pixels as possible (just hot pixels, cold pixels, satellite trails, airplanes) and not any real DSO data.




Quote:

By the way with OSC cameras the task of calibration is somewhat straightforward - you usually have a single master flat. so it's not terribly difficult to do everything manually. but that still means setting up ImageCalibration, executing it, then running the batchdebayer script, then setting up StarAlignment, running it, and finally loading up ImageIntegration and running it.

if you imagine that you're using a mono camera with filters, the calibration task became that much more of a hassle… say you did L Ha RGB on some target… now you have to manage 5 flats and 5 runs of ImageCalibration, plus all the rest of the stuff above.

that's why BPP was developed in the first place; it tries to automatically match up flats with lights, and also choose the right dark for the particular frame being calibrated. in theory you should just be able to throw everything in there and let it rip. in practice i don't know, because i still do it the old-fashioned way




Quote:

At any rate 'raw CFA' and 'raw RGB' are entirely equivalent, it's just a matter of how the raw data is represented. the CFA file is smaller because the CFA file is one plane (mono) and the RGB is 3-plane. there's a bunch of wasted space in the raw RGB file - black pixels. so for the sake of disk space the CFA files are a little better.

the Debayer module (and by extension the BatchDebayer script) knows how to handle raw RGB or raw CFA files.




Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #6195832 - 11/15/13 01:07 AM

Note: There is a 'CFA image' option in 'RAW Format Preferences' as well; the one in the previous post pertains to 'Batch Preprocessing', which will override the former if used. Regards



Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: mmalik]
      #6257503 - 12/17/13 04:13 AM

I am curious to find out how to stack on a comet (not the stars) in PixInsight.

Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #6257964 - 12/17/13 11:35 AM

1) register all images on the stars
2) use the CometAlign process to re-align the star registered images to the comet
3) integrate the images from step 2 with aggressive pixel rejection to get rid of the stars
4) integrate the images from step 1 with aggressive pixel rejection to get rid of the comet
5) create a star mask; with star mask applied to comet-aligned image, copy stars to comet aligned image with pixel math.

but i think step 2 depends on the acquisition time being in the FITS header and not sure if CR2 metadata makes it into the FITS header…

rob


Post Extras: Print Post   Remind Me!   Notify Moderator  
dmilligan
member


Reged: 11/21/13

Re: Learning PixInsight new [Re: pfile]
      #6258465 - 12/17/13 04:28 PM

Quote:

but i think step 2 depends on the acquisition time being in the FITS header and not sure if CR2 metadata makes it into the FITS header…




It must be, b/c I haven't had any issues.

Nice tip on the star mask, btw. I had been merging the two by thinking of it as adding the comet to the star data, i.e., by masking the comet. I can see how the star mask would work better.


Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: dmilligan]
      #6258832 - 12/17/13 07:30 PM

that's good because i was going to use my DSLR for comet images... that is until i discovered i could see neither ison nor lovejoy from my observing location

there is certainly more than one way to skin a cat; masking the comet could work just as well.


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #6259461 - 12/18/13 04:54 AM

Quote:

5) create a star mask; with star mask applied to comet-aligned image, copy stars to comet aligned image with pixel math.




You lost me on #5; do I create the star mask from the star-integrated image? That's what I did. Then I tried applying that mask to the comet-integrated image, which made the image all red, though I could see the star mask getting applied. And to use your words, "copy stars to comet aligned image with pixel math": how do I do that exactly? As you can see, I am totally lost on your #5; your help with actual clicks, maybe screenshots, will be appreciated.

Also, what algorithm did you mean by aggressive pixel rejection in #3 and #4; is 'sigma clipping' ok for such rejection?

On a side note, I am NOT getting any color in the integrated images using PixInsight; I WAS able to get color with ImagesPlus integration of the same data (I mean after proper calibration in each of the software packages). Regards


Post Extras: Print Post   Remind Me!   Notify Moderator  
CharlesW
scholastic sledgehammer


Reged: 11/02/12

Loc: Chula Vista & Indio, CA
Re: Learning PixInsight new [Re: mmalik]
      #6259737 - 12/18/13 09:54 AM

I'm sure you all know this, but PI just released an update that has to be accessed from the Software Distribution link on their home page. It requires deleting your current version and installing the new one. Pretty seamless install, with no authentication required.

Post Extras: Print Post   Remind Me!   Notify Moderator  
pfile
Post Laureate


Reged: 06/14/09

Re: Learning PixInsight new [Re: mmalik]
      #6259950 - 12/18/13 11:59 AM

well the most common "no color" problem is forgetting to debayer the subs. even though the subs are all 'checker boarded' StarAlignment can sometimes still align them and so your integrated stack is truly a monochrome image.

the 2nd most common "no color" problem is simply not knowing that you have to increase the saturation, but you know that.

on step 5, create a star mask from the star-aligned image as you have done. you may need to tweak the parameters to really get all the stars exposed and perhaps tighten up the mask (by decreasing the growth and compensation parameters). then when the mask is applied what you described is correct; the red areas are masked and the non-red areas are exposed.

then if you put the name of the comet-aligned image into pixel math (the RGB/K field), turn off rescaling, and then apply the pixel math expression to the masked image, the stars should get copied over. if you did not use the mask the entire star-aligned image would be replaced with the comet-aligned image, but the mask prevents that and only the pixels that are revealed by the mask are going to get copied.
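
in case it helps to see what the mask is doing there: where the mask is white the expression's pixels go in, and where it is black the target image is protected, so a star mask limits the copy to the star positions. a toy numpy sketch of that blend, showing stars being blended into the comet-aligned stack as in step 5 (illustration only; the array names are made up):

Code:

import numpy as np

rng = np.random.default_rng(3)
star_aligned  = rng.random((100, 100))                  # made-up stars-only integration
comet_aligned = rng.random((100, 100))                  # made-up comet-only integration
star_mask     = (star_aligned > 0.98).astype(float)     # white (1) on stars, black (0) elsewhere

# with a mask active, a PixelMath copy effectively reduces to this blend:
# masked (white) pixels take the other image, protected (black) pixels are kept
result = comet_aligned * (1.0 - star_mask) + star_aligned * star_mask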

aggressive pixel rejection means sigma clip, winsorized sigma clip or linear fit clipping with the "sigma low" and "sigma high" sliders set to low values. pixels greater than mean+sigma_high and pixels less than mean-sigma_low are rejected, so smaller values reject more pixels. you are cutting off more of the histogram by moving the sigmas closer to the mean.
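
pixel rejection boils down to something like this per pixel stack; a toy numpy sketch of one-pass sigma clipping, not PI's exact iterative or winsorized implementation (the stack values are made up):

Code:

import numpy as np

rng = np.random.default_rng(4)
stack = 0.20 + 0.01 * rng.standard_normal((20, 64, 64))   # 20 made-up registered subs
stack[3, 10, 10] = 0.90                                    # a fake satellite/hot pixel

sigma_low, sigma_high = 4.0, 2.0                           # lower values = more aggressive
mean  = stack.mean(axis=0)
sigma = stack.std(axis=0)

# reject anything outside [mean - sigma_low*sigma, mean + sigma_high*sigma],
# then average whatever survives at each pixel position
keep = (stack > mean - sigma_low * sigma) & (stack < mean + sigma_high * sigma)
integrated = np.nanmean(np.where(keep, stack, np.nan), axis=0)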

rob


Post Extras: Print Post   Remind Me!   Notify Moderator  
mmalik
Post Laureate
*****

Reged: 01/13/12

Loc: USA
Re: Learning PixInsight new [Re: pfile]
      #6261321 - 12/19/13 04:43 AM