
CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Help with ASI2600MC and Pixinsight

17 replies to this topic

#1 taipan

taipan

    Explorer 1

  • -----
  • topic starter
  • Posts: 70
  • Joined: 01 Sep 2019

Posted 22 October 2021 - 07:58 AM

Hello. I’m a newbie with PixInsight. Is there somebody out there who can share how you do preprocessing in PI? What calibration frames are you using: darks, bias, dark flats? And can you share what settings you use when stacking in PI?

Maybe you can share a link to a tutorial I can learn from. :)

I’m using DSS now and think my stacked pictures are very noisy, so I would like to test PI instead.

So if you have the same camera and are happy with the result, please help me. :)


Edited by taipan, 22 October 2021 - 08:11 AM.

  • hornjs likes this

#2 hornjs

hornjs

    Viking 1

  • -----
  • Posts: 513
  • Joined: 04 Sep 2020
  • Loc: Billings, MT

Posted 22 October 2021 - 08:08 AM


taipan:

I use the 294MC Pro and 294MM Pro.  I was using Astro Pixel Processor to stack, then PI to finish, but have moved exclusively to PI for both preprocessing and finishing.

Calibration frames for the 294MC Pro should include flats, darks, and dark flats, but NO BIAS.  The 294MC doesn't behave nicely at short exposures, so when you take your flats, keep them to 2-3 seconds or longer (with corresponding dark flats).  I use WBPP in PI for preprocessing and am really happy with the results from both cameras now that I know how to tinker with some of the settings.  Adam Block has a series of YouTube videos covering WBPP in detail.  I also make use of the Light Vortex Astronomy tutorials.  These are just a couple; there are many more, I am sure.

 

https://youtu.be/aZqPjDN8e40

 

https://www.lightvor...pixinsight.html

 

Hope this helps you get started.  


Edited by hornjs, 22 October 2021 - 08:10 AM.

  • Delta608 likes this

#3 taipan

taipan

    Explorer 1

  • -----
  • topic starter
  • Posts: 70
  • Joined: 01 Sep 2019

Posted 22 October 2021 - 08:12 AM


Sorry, the title was wrong. I hate my iPad sometimes… I use the 2600MC.


  • hornjs likes this

#4 taipan

taipan

    Explorer 1

  • -----
  • topic starter
  • Posts: 70
  • Joined: 01 Sep 2019

Posted 22 October 2021 - 08:27 AM


Thanks for the links. I will check them out. :)



#5 WadeH237

WadeH237

    Cosmos

  • *****
  • Posts: 7,895
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 22 October 2021 - 08:46 AM

Here is my rough calibration and integration workflow that I use with my ASI2600MC Pro.

 

I use gain 100 for all of my imaging with this camera.  I have a master bias that I made with 100 subs.  I take flats by covering the scope with a white cloth, pointing it so that no sunlight touches the cloth, and using the flat wizard in NINA with "dynamic exposure".  I do not use darks with this camera.

 

I calibrate the subs mostly with defaults in ImageCalibration.  I do use an output pedestal of 100 to prevent black clipping during calibration.
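To illustrate what an output pedestal is for, here is a tiny NumPy sketch (my own illustration with made-up ADU values, not PixInsight's actual code): without the pedestal, dark subtraction can push faint background pixels below zero, where they clip.

```python
import numpy as np

# Made-up 16-bit ADU values for three faint background pixels.
light = np.array([1012, 998, 1005], dtype=np.int32)
dark = np.array([1010, 1008, 1001], dtype=np.int32)

# Without a pedestal, the middle pixel goes to -10 and clips to 0,
# skewing the noise distribution of the background.
no_pedestal = np.clip(light - dark, 0, None)

# An output pedestal of 100 ADU keeps it above zero.
pedestal = 100
with_pedestal = np.clip(light - dark + pedestal, 0, None)

print(no_pedestal.tolist())    # [2, 0, 4]
print(with_pedestal.tolist())  # [102, 90, 104]
```

The pedestal is just a constant offset, so it can be subtracted back out later without losing information.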

 

After calibration, I run CosmeticCorrection with auto detect.  I set the hot sigma to 2.4 and cold sigma to 3.0.  It tends to correct between 20k and 30k pixels, which is about right for this sensor.
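Roughly what sigma-based auto-detect does, as a simplified sketch (CosmeticCorrection works from a master dark or local statistics; here I just threshold against global frame statistics, and all values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
frame = rng.normal(1000.0, 50.0, size=(100, 100))  # synthetic calibrated sub
frame[10, 10] = 5000.0   # plant a hot pixel
frame[20, 20] = 200.0    # plant a cold pixel

mean, sigma = frame.mean(), frame.std()
hot_sigma, cold_sigma = 2.4, 3.0   # the settings quoted above

hot = frame > mean + hot_sigma * sigma
cold = frame < mean - cold_sigma * sigma

# Replace flagged pixels with the frame median (real tools use a
# local neighborhood median instead of the global one).
frame[hot | cold] = np.median(frame)
```

Lowering the sigma thresholds flags more pixels, which is why Wade tunes them until the corrected count matches what he expects for the sensor.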

 

After CosmeticCorrection, I debayer the subs using RGGB as the pattern and "super pixel" as the method.  I make sure to debayer them into separate channels so that I have red subs, green subs and blue subs.
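"Super pixel" collapses each 2x2 Bayer cell of the RGGB mosaic into one RGB pixel, halving resolution. A minimal NumPy sketch of the idea (my illustration, not PixInsight's implementation):

```python
import numpy as np

def superpixel_rggb(cfa):
    """Collapse an RGGB mosaic into half-resolution R, G, B planes."""
    r = cfa[0::2, 0::2]            # top-left of each 2x2 cell
    g1 = cfa[0::2, 1::2]           # top-right green
    g2 = cfa[1::2, 0::2]           # bottom-left green
    b = cfa[1::2, 1::2]            # bottom-right
    return r, (g1 + g2) / 2.0, b   # average the two greens

# A 4x4 mosaic made of identical [[10, 20], [30, 40]] RGGB cells.
cfa = np.array([[10, 20, 10, 20],
                [30, 40, 30, 40],
                [10, 20, 10, 20],
                [30, 40, 30, 40]], dtype=float)
r, g, b = superpixel_rggb(cfa)
print(r.shape)   # (2, 2) -- half the mosaic's resolution
```

Because no interpolation happens, each output pixel contains only measured values, which is why the method trades resolution for clean per-channel data.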

 

I blink through the red debayered subs to find the "best" one to my eye.  I make a copy of that image and call it "reference_R".  I find the green and blue images that were debayered from the same exposure and make copies called "reference_G" and "reference_B".

 

I run StarAlignment using "reference_R" as the reference.

 

After registering the subs, I run the NormalizeScaleGradient script on the registered red subs.  I use "reference_R" as the reference image.  At the completion of the script, it starts ImageIntegration with the appropriate settings for normalization.  I modify the settings to select "Generalized Extreme Studentized Deviate (ESD) Test".  I use 0.2 for "ESD outliers" and leave the other ESD settings at default.  I enable large-scale pixel rejection and integrate the images.
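To get a feel for what pixel rejection does during integration, here is a plain iterative kappa-sigma clip in NumPy. This is a deliberately simplified stand-in: the generalized ESD test mentioned above is a more rigorous statistical test, but the goal, discarding per-pixel outliers such as satellite trails before averaging, is the same.

```python
import numpy as np

def sigma_clip_stack(stack, kappa=2.5, iters=3):
    """Average a stack of registered subs, rejecting per-pixel outliers.

    stack: array of shape (n_subs, height, width).
    """
    data = np.ma.masked_invalid(stack.astype(float))
    for _ in range(iters):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0)

# Ten fake subs of a 2x2 patch; one sub is hit by a satellite trail.
subs = np.full((10, 2, 2), 100.0)
subs[3, 0, 0] = 10000.0
result = sigma_clip_stack(subs)
print(result[0, 0])   # 100.0 -- the trail pixel was rejected
```

With only a handful of subs, simple clipping misbehaves, which is one reason the ESD family of tests exists.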

 

I repeat the StarAlignment and ImageIntegration steps for the green and blue subs (using reference_G and reference_B, respectively, as the NSG references).

 

This gives me the red, green and blue channel masters for continued processing.

 

I hope that this helps,

-Wade


  • R Botero, calypsob, RFtinkerer and 2 others like this

#6 taipan

taipan

    Explorer 1

  • -----
  • topic starter
  • Posts: 70
  • Joined: 01 Sep 2019

Posted 22 October 2021 - 09:03 AM


Thanks, good information. Just flats and bias… What ADU are you using for flats? I tried the auto-flat in my ASIAIR (10 s) and it somehow overcompensated: the dark corners became light corners. :)



#7 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 26,208
  • Joined: 27 Oct 2014

Posted 22 October 2021 - 09:12 AM


I use PI with both 2600s.  One example below (2600MC); a better version is on my Astrobin, referenced in my sig.

 

Tips.

 

You shoot 3 types of calibration frames.  In order of importance: flats, bias, darks.  Some cameras need dark flats instead of bias; not the 2600, so I don't take them.  Darks make a marginal contribution with a 2600.  But shoot them too, at first.  See how your images change when you include/omit them.  THEN, you decide.

 

Your best tool for lowering noise is more total imaging time.  PI is not some miracle (although I believe it's better than DSS).  The better your data (and the more skill you acquire), the better your result.

 

How to acquire that skill best?  One of these books.  I have both, and it's not a waste.  Block's videos are excellent, but these are better tools for a beginner to get a good comprehensive overview.

 

Inside PixInsight.  Mastering PixInsight.

 

Warning.  They are not fully comprehensive.  There's only so much you can cover about PI in 400 pages.  <smiling but not kidding>  PI does not magically process better.  It gives you an unmatched set of highly adjustable tools so that _you_ can process better.  If you know which to use when, and how.  I have hundreds of hours in learning and using PI.

 

If that's daunting to you, Astro Pixel Processor can give excellent results (better than DSS also) with _much_ less time and effort.  For many people it would be a better choice.  I've seen some people even calibrate in APP, and process in PI.

 

Stacking settings are individualized to data.  The books (particularly the first) will give some general settings.  I use mostly those.  Linear fit clipping.  I suggest starting with the individual operations rather than the WBPP script; you'll learn far better.  I continue to use them, and I've solved problems by doing that rather than with WBPP.

 

PI is great.  And, it's not easy.  Good data is always better.

 

Shoot more subs.  <smile>

 

M8 M20 V1  j1.jpg


Edited by bobzeq25, 22 October 2021 - 09:25 AM.

  • Jim Waters and Delta608 like this

#8 ShortLobster

ShortLobster

    Vostok 1

  • -----
  • Posts: 197
  • Joined: 17 Sep 2016
  • Loc: Stamford, CT, USA

Posted 22 October 2021 - 09:15 AM

I use the 2600MC and have some observations:

  • The camera sensor is super sensitive, and stray light will be more noticeable than with other cameras. I had a light leak around my OAG that I needed to correct. It also makes light pollution and moon glow more challenging.
  • Vignetting was more noticeable with this camera than my previous cameras, due to the larger sensor. I rearranged my imaging train, which helped.
  • Good calibration files are very important. I use darks, flats, and dark flats. I have had problems getting good flats, but am now relatively successful using a light panel and an automated flats tool, such as the one in the ASIAIR. I had lots of problems with bias files and gave up on them.
  • I've gone without darks, but I think the images are noisier, so now I use them all the time.
  • I get better results with the WBPP script in PI than with DSS. I usually get decent results using the default settings, with the appropriate weighting parameter (nebula or galaxy).
  • I've tried exposures between 60 and 180 seconds. I'm in a Bortle 9 zone and think I get better results with many shorter exposures, but I need more experience with it to know for sure.

It's a great camera, good luck!



#9 WadeH237

WadeH237

    Cosmos

  • *****
  • Posts: 7,895
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 22 October 2021 - 09:43 AM

Thanks, good information. Just flats and bias… What ADU are you using for flats? I tried the auto-flat in my ASIAIR (10 s) and it somehow overcompensated: the dark corners became light corners. :)

I use 50% for the NINA wizard.  That puts me at about 32000 ADU for the brightest channel.

 

If you are getting over- or under-correction, the problem probably has nothing to do with the number of ADUs in your flat subs.  Correction problems like you describe are caused either by incorrect bias/dark calibration of the light frames, or by incorrect bias/flat-dark calibration of the raw flats before integrating them into the flat master.

 

I have not used an ASIAir, so I can't offer any suggestions there.  I can say that for PixInsight, I studied calibration math to the point that I can manually calibrate my subs with PixelMath if I choose.  Knowing the process in detail makes it much easier to diagnose and correct calibration problems if they happen using the normal calibration tools.
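The standard calibration arithmetic behind this can be sketched in NumPy with made-up single-pixel numbers (this is the general equation, not necessarily Wade's exact PixelMath). The second branch shows one way the "dark corners become light corners" symptom can arise: dividing the light frames by the flat without first removing their bias.

```python
import numpy as np

bias = 500.0
sky = 2000.0
vignette = np.array([1.0, 0.8])   # center pixel vs. vignetted corner

light = bias + sky * vignette         # what the camera records
flat_raw = bias + 30000.0 * vignette  # raw flat, same vignetting shape

# Correct: subtract bias from both, then divide by the normalized flat.
flat = flat_raw - bias
calibrated = (light - bias) / (flat / flat.mean())

# Wrong: skip the bias subtraction on the lights. The flat division
# overcorrects, and the corner comes out BRIGHTER than the center.
overcorrected = light / (flat / flat.mean())

print(calibrated)      # both pixels equal: vignetting fully removed
print(overcorrected)   # corner value exceeds the center value
```

Leaving the bias inside the flat master instead produces the opposite error, undercorrection, so checking both branches of the equation is a quick way to diagnose which frame was miscalibrated.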


  • rgsalinger and jdupton like this

#10 calypsob

calypsob

    Fly Me to the Moon

  • *****
  • Posts: 6,934
  • Joined: 20 Apr 2013
  • Loc: Virginia

Posted 22 October 2021 - 11:43 AM


That is a very interesting way to process OSC. I need to try this.



#11 ghilios

ghilios

    Mariner 2

  • -----
  • Posts: 269
  • Joined: 20 Jul 2019
  • Loc: New York, NY

Posted 22 October 2021 - 01:47 PM

Wade's approach is a really interesting one for extracting robust color channels. They will be at half the resolution of the camera, which is quite reasonable since color exists at a larger scale (from a wavelet point of view). I recommend adding one more step: go through the regular calibration and integration process using one of the other debayering algorithms (e.g., VNG). Extract and linear-fit the color channels, then recombine and turn them into a luminance channel. You can then process your details with the luminance, and LRGB-combine with the RGB channels extracted via Wade's method.
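A toy sketch of the channel-matching step described here, using np.polyfit as a crude stand-in for PixInsight's LinearFit (the arrays are synthetic and noise-free, so the fit is exact; real channels only fit approximately):

```python
import numpy as np

def linear_fit(channel, reference):
    """Rescale `channel` to match `reference` in a least-squares
    sense: find a, b with reference ~ a * channel + b."""
    a, b = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return a * channel + b

rng = np.random.default_rng(7)
r = rng.uniform(0.1, 0.9, size=(8, 8))   # pretend extracted red channel
g = 0.5 * r + 0.10                       # same structure, other scale
b = 2.0 * r - 0.05

# Fit green and blue to red, then average into a synthetic luminance
# for later LRGB combination.
g_fit = linear_fit(g, r)
b_fit = linear_fit(b, r)
lum = (r + g_fit + b_fit) / 3.0
```

Matching the channels first keeps any one channel's gradient or scale from dominating the synthetic luminance.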



#12 taipan

taipan

    Explorer 1

  • -----
  • topic starter
  • Posts: 70
  • Joined: 01 Sep 2019

Posted 22 October 2021 - 03:28 PM

Thanks for the answers and good advice.

Just came home from two hours under the almost-full moon. :)

This is 18 subs, stacked with only bias and old flats, then stretched and color-calibrated in PixInsight. And of course converted to a 500 kB JPG to be posted here.

This time I think the noise is alright, for just 18 300 s subs under the full moon...

But the red to the right in the picture: is that faint nebulosity, or is it red noise? Skyglow?

Attached Thumbnails

  • Stretchadwebben.jpg

Edited by taipan, 22 October 2021 - 03:29 PM.


#13 WadeH237

WadeH237

    Cosmos

  • *****
  • Posts: 7,895
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 22 October 2021 - 03:33 PM

I would note that I have made an explicit decision that I am giving up some (interpolated) resolution by using "super pixel" instead of VNG.

 

In my case, this is my wide field setup.  The camera is on an 80mm scope with a reducer/flattener running at 384mm focal length.  My image scale with "super pixel" is about 2 arc seconds per pixel which, on a wide field, is fine with me.  My site does not have excellent seeing most of the time, so I don't gain much by using a debayer method that does interpolation.

 

Also, I have made the decision that the S/N improvement and gradient management that I get from NormalizeScaleGradient is more important than doing drizzle to try and improve spatial resolution.  At some point in the future, NSG will be able to work with drizzled data, but it will need to be ported from a script, to a native process, before that will work.  I'll revisit my steps when that happens.

 

Finally, I have been doing mono imaging for a very long time, so calibrating and integrating the OSC data into separate red, green and blue channels fits in well with my normal workflow.



#14 WadeH237

WadeH237

    Cosmos

  • *****
  • Posts: 7,895
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 22 October 2021 - 03:35 PM

But the red to the right in the picture: is that faint nebulosity, or is it red noise? Skyglow?

I've not looked at any other images to compare against yours, but what you are seeing is almost certainly Ha emission.

 

If you expose long enough and stretch deep enough, I doubt that there is even a single pixel of actual background sky in your image.  That whole area is completely filled with hydrogen clouds.


  • taipan likes this

#15 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 26,208
  • Joined: 27 Oct 2014

Posted 22 October 2021 - 11:15 PM

But the red to the right in the picture: is that faint nebulosity, or is it red noise? Skyglow?

Nebulosity.  The whole area is covered by it.  Some is blocked by dark nebulae, that's what creates the shapes.

 

The dim stuff is difficult to capture under a full Moon, well done.

 

NA PEL V15b V2.jpg


Edited by bobzeq25, 22 October 2021 - 11:19 PM.

  • taipan likes this

#16 R Botero

R Botero

    Soyuz

  • -----
  • Posts: 3,509
  • Joined: 02 Jan 2009
  • Loc: Kent, England

Posted 23 October 2021 - 03:22 AM


This is an excellent process routine (I only use dark flats and no bias). All of Wade's steps can now be achieved using the WBPP 2.2 script in PixInsight, including the channel separation and alignment to a specific colour. See Adam Block's latest YouTube videos.

 

Roberto



#17 TXLS99

TXLS99

    Vostok 1

  • -----
  • Posts: 104
  • Joined: 15 Feb 2019
  • Loc: Midwest USA

Posted 23 October 2021 - 08:29 AM

I have been getting good results using only flats and dark-flats, no bias.



#18 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 26,208
  • Joined: 27 Oct 2014

Posted 23 October 2021 - 10:49 AM

For most cameras, bias and dark flats are interchangeable.  A few don't do bias well and need dark flats; the popular ones are the 1600s and 294s.

 

I use bias because I don't have those cameras, and bias frames are simpler to do.  I may vary my exposure on flats, but one master bias covers all flat exposures.
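The interchangeability can be seen with a little made-up arithmetic: a dark flat is essentially bias plus a few seconds of dark current, and on a modern cooled sensor that extra term is negligible at flat exposures. (These are my illustrative numbers, not measured data; the exception is sensors like the 294 that, as noted above, don't produce clean short exposures in the first place.)

```python
# Made-up numbers for a single pixel, in ADU.
bias = 500.0
dark_current = 0.05   # ADU per second, tiny on modern cooled CMOS
t_flat = 2.0          # seconds, a typical flat exposure

dark_flat = bias + dark_current * t_flat   # what a real dark flat contains
flat_raw = 20000.0

via_dark_flat = flat_raw - dark_flat   # calibrate flat with a dark flat
via_bias = flat_raw - bias             # calibrate flat with a bias instead

print(round(via_bias - via_dark_flat, 3))   # 0.1 -- lost in the noise
```

When the dark-current term really is this small, one master bias can serve every flat exposure, which is the simplification described above.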





