
Nothing to do? Want to practice processing?


#376 Invator

Invator

    Explorer 1

  • -----
  • Posts: 61
  • Joined: 22 May 2020

Posted 31 January 2021 - 03:10 PM

Fun, thanks a lot for the raw data!  Here's a quick take on M33 in PI  (cropped screenshot to keep file size under control)

 

 

 

 

Attached Thumbnails

  • TriangulumGalaxy_41x5min_DBE_Small.jpg

  • cybermayberry and the Elf like this

#377 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 562
  • Joined: 19 Mar 2011
  • Loc: Melbourne, Australia

Posted 31 January 2021 - 11:02 PM

New week, new data to play with. Seems like nebulae are a hot commodity, so this time it's M20, the Trifid Nebula. Data is Ha, R, G, B.
Knock yourself out and let's see how much detail you can pull out. Link to data

 

Here's my take (of many), I've decided to leave the stars bright and shiny this time around.
 

This is a great one! Assuming you want visual spectrum ("correct") coloring and maximum visible detail, this is a deceptively tricky dataset to process.

 

The problem is that half of this object is visible as blue reflection nebulosity which does not show up at all in Ha (the same issue exists with the Running Man / NGC 1977, which also looks totally different in Ha).

Normally, to maximize signal fidelity you would strictly use Ha as luminance; in this case it is by far the cleanest signal. But when some of the detail is just not there at all in that band, then your consideration may shift.

 

To maximize the detail you can bring out, you will want to combine Ha and (R+G+B) into a synthetic luminance. There really is no specific blend to target for the combined Ha + visual spectrum signal, but a 50/50 blend will give algorithms a good chance to latch on to any detail. Again, this will obviously not maximize signal fidelity, as there is no "optimal" way of combining narrowband and wide-spectrum data.

 

Creating the visual spectrum part of the synthetic luminance is made a little trickier still. That's due to the different exposure times and gains used (as quoted on AstroBin). If we give up on signal fidelity, then to maximize visual spectrum detail, you will want to create a synthetic luminance set that incorporates equal amounts of red, green and blue (to represent the full visual spectrum with no particular bias towards the red, green or blue parts of the spectrum). Right now we have varying levels of red (15 x 180s x 1.0 gain), green (15 x 120s x 0.5 gain) and blue (15 x 120s x 0.5 gain) signal. So, to equalize their contributions to detail and match the strong red, we should multiply green's and blue's contributions by (15 x 180 x 1.0) / (15 x 120 x 0.5) = 3x. This should give the green and blue channels the much needed boost to show the reflection nebulosity.
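Just as a back-of-the-envelope check of that 3x factor (plain arithmetic, nothing StarTools-specific):

# Per-channel totals: subs x exposure x gain, as quoted on AstroBin at this point.
red   = 15 * 180 * 1.0   # 2700
green = 15 * 120 * 0.5   #  900
blue  = 15 * 120 * 0.5   #  900

boost = red / green      # 3.0
print(f"Green and blue need a {boost:.0f}x boost to match red's contribution")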

 

In the case of StarTools' Compose module, you can make a suitable blend by (ab)using the exposure sliders. You can totally disregard the numbers - they are just there to be helpful in more standard cases. All that matters are the ratios between the numbers:

 

First, load Ha as luminance, and red green and blue as... red, green and blue.

Keep 'Luminance, Color' set to 'L + Synthetic L from RGB, RGB'.

Set Lum Total Exposure to ~47m, red to 20m, and keep green and blue at 60m. Why these values? It's all about the ratios. 20m is one third of 60m, so our visual spectrum synthetic luminance will incorporate only a third as much red signal as green or blue. The ~47m for the Ha (loaded as Lum) will achieve a 50/50 balance of Ha signal and visual spectrum signal. The visual spectrum synthetic luminance signal is computed as 1/3rd red + 1/3rd green + 1/3rd blue (i.e. it sort of naively assumes all channel filters contribute exactly one third of the total signal), so that means 20m/3 red + 60m/3 green + 60m/3 blue ≈ 46.7m. So if we want a total/final synthetic luminance that is 50% Ha and 50% visual spectrum, our Ha counterpart should also be set to ~47m.
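Purely as bookkeeping for those slider ratios (only the ratios from the paragraph above, not StarTools' internal math):

# Compose exposure sliders; the "minutes" are arbitrary, only the ratios matter.
lum_ha = 47.0                          # Ha, loaded as luminance
red, green, blue = 20.0, 60.0, 60.0    # visual-spectrum sliders

# Naive synthetic luminance: one third of each channel's contribution.
visual = red / 3 + green / 3 + blue / 3        # ~46.7

total = lum_ha + visual
print(f"Ha share of final luminance:     {lum_ha / total:.1%}")   # ~50%
print(f"Visual share of final luminance: {visual / total:.1%}")   # ~50%
print(f"Green (or blue) weight vs red:   {green / red:.0f}x")     # the 3x boost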

 

Notice how in this entire story the coloring needs no special treatment at all to achieve visual spectrum-correct coloring; in StarTools the chrominance signal is kept completely separate. Simple white balancing sorts out the signal contributions of the color channels for the chrominance part (once you hit the Color module). No Ha needs to be added to the chrominance; it fully relies on the color signal from the visual spectrum data (which is not just red even for the purest of ionized hydrogen emissions; the other Balmer lines will always inject a measure of blue; see here).

 

From there, a standard workflow, using only defaults and presets (EDIT: except for the Color module), should yield this:

 

NewComposite.jpg

 

You should be seeing good, varied star coloring/temperatures, blue reflection nebulosity with all its detail intact (ready for more aggressive manipulation if you so wish), pinkish HII in the Trifid as expected (i.e. not just pure Ha emissions but also blue reflection nebulosity), with redder/purer ionized hydrogen emission in the surrounding areas.

 

Not your standard compositing, but I hope this helps and makes sense nonetheless when working with a complex dataset like this!


Edited by Ivo Jager, 31 January 2021 - 11:03 PM.

  • cybermayberry, moonrider, imtl and 2 others like this

#378 imtl

imtl

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 5,906
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 31 January 2021 - 11:12 PM


Ivo,

 

Just to correct (my own mistake): the red channel was 120s at 0.5 gain as well, the same as the B and G. The AstroBin info was not correct. I fixed it now, thanks to you.


  • Ivo Jager likes this

#379 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 562
  • Joined: 19 Mar 2011
  • Loc: Melbourne, Australia

Posted 31 January 2021 - 11:22 PM


You mean I did all that re-weighting for nothing!? :p Awesome data & exercise nonetheless. Thank you for sharing!

 

The 50/50 Ha/visual split is arbitrary anyway. Also the relative (incorrect) boost of blue and green vs red is probably working in our favor here for the blue reflection nebulosity detail.


Edited by Ivo Jager, 31 January 2021 - 11:22 PM.


#380 imtl

imtl

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 5,906
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 31 January 2021 - 11:26 PM


lol

 

Just like how I made people register the frames themselves on another dataset :p I put little bombs in for everyone (well, as I said, this one was my own stupid mistake).

 

I reprocessed my data and I think I managed to squeeze out more. I did some sort of a 50/50 blend of Ha into the synthetic Lum. There are a lot of options here to play with.

 

Thank you for following this whole monster thread. I'm sure a lot of people (including myself) are learning a lot.


  • Ivo Jager, moonrider and jonnybravo0311 like this

#381 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 01 February 2021 - 03:11 PM

Trifid, one more

 

https://www.cloudyni...911_2650545.png

 

A bit more stretched, a bit more blue nebulae (like the blue part), and some of the subtle environment.

Still reduced the starfield; there are just too many stars and they are too overwhelming for my taste....

 

@Ivo, I am probably wrong saying this, but your picture looks so oversaturated to me... (please don't ban me from this forum :D) I mean, does it really have to look like that?

I found quite a few peculiar specimens on the internet too, though.

Attached Thumbnails

  • Triffid.jpg

Edited by F.Meiresonne, 01 February 2021 - 03:16 PM.

  • cybermayberry and moonrider like this

#382 jonnybravo0311

jonnybravo0311

    Soyuz

  • *****
  • Moderators
  • Posts: 3,825
  • Joined: 05 Nov 2020
  • Loc: NJ, US

Posted 01 February 2021 - 03:20 PM

I'm pretty sure Ivo doesn't have the power to ban you from these forums... he might revoke your Star Tools license, though :p


  • imtl likes this

#383 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 01 February 2021 - 03:34 PM


That was my second thought, yes :D


  • jonnybravo0311 likes this

#384 moonrider

moonrider

    Viking 1

  • *****
  • Posts: 618
  • Joined: 16 Nov 2013
  • Loc: Murphy, NC

Posted 01 February 2021 - 04:51 PM

OK, following Ivo's advice, here is my Trifid processed in ST and slightly tweaked in Gimp.

Attached Thumbnails

  • IMG_0252.jpg

Edited by moonrider, 01 February 2021 - 07:37 PM.

  • cybermayberry likes this

#385 limeyx

limeyx

    Surveyor 1

  • -----
  • Posts: 1,673
  • Joined: 23 May 2020
  • Loc: Seattle, WA

Posted 04 February 2021 - 12:54 AM

Definitely late to the game here. This is the M16 data, but just the OSC data so far. I feel like I get only so far with PixInsight and then fall down when it comes to bringing out the colors sometimes.

 

gallery_332078_15871_620162.jpg


  • F.Meiresonne, cybermayberry, moonrider and 1 other like this

#386 Mike in Rancho

Mike in Rancho

    Vanguard

  • -----
  • Posts: 2,145
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 04 February 2021 - 03:05 AM

I've had the dragon data for 3 or 4 days now...and it beat the snot out of me!  I threw just about everything I had at it short of Registax6 (though maybe I should have?).

 

Something is just wrong here. Multi-night acquisition with differing skyglow? Channel exposures not the same? Or was it just me?

 

For a while I was thinking imtl should get another night of mismatched data and we could just settle on doing a triptych.

 

gallery_345094_15786_985220.png

 

I failed so many times I must have put a terabyte through the recycle bin. Ultimately, even though with (a lot of) multi-stage, precision, strategic cropping I could get DSS or ASTAP to stitch them together into two panels, they just wouldn't blend right. Background calibration in either stacker only helped so much, or maybe not at all.

 

In the end I fully processed both panels separately, matching all my Startools parameters except two: I used different global stretches and different color balancing. The former I just eyeballed as best I could, and for the latter I let ST use its star sampling routine to set the colors. Close, but still not perfect!

 

So I took the two halves to Gimp and faded the crossover strips into each other with a gradient layer mask.  So there!

 

Beautiful area, hopefully that hides most of my flaws.  With so much effort put into the mosaic problem, I didn't spend as much time as I normally would processing all the little details carefully.  If I did it again I think I would go for more blue.


  • F.Meiresonne, cybermayberry, moonrider and 3 others like this

#387 imtl

imtl

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 5,906
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 04 February 2021 - 05:05 AM

Well, you didn't expect to improve your skills by just processing the same images over and over, eh!? :p

From the image you uploaded here it looks like you did a pretty good job at the end.

So a few tips and tricks for others.

 

1. Background extraction needs to be done on each panel before attempting to stitch. That is to remove gradients.

2. Next is to balance brightness and gradients between panels. That is the tricky part and should be done at the LINEAR stage. There are all sorts of tools in PixInsight for that, such as PhotometricMosaic, DNALinearFit, etc. (a toy sketch of the linear-fit idea follows after this list). Astro Pixel Processor is another piece of software that handles stitching quite well.

3. For the stitching itself, the problem is always to hide the seam. The two previous stages are important for that. Again, in PI, GMM or PhotometricMosaic are tools you can use.
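For anyone wondering what "balancing brightness between panels" in step 2 boils down to, here is a rough Python sketch of the linear-fit idea behind tools such as DNALinearFit; the helper name and setup are mine, and it assumes two already-registered, linear, gradient-free grayscale panels of equal shape.

import numpy as np

def match_panel(reference, target, overlap_mask):
    # Toy illustration only: fit reference = a*target + b over the overlapping
    # pixels, then apply that linear mapping to the whole target panel.
    ref = reference[overlap_mask]
    tgt = target[overlap_mask]
    a, b = np.polyfit(tgt, ref, deg=1)   # slope and offset of the best fit
    return a * target + b

The real tools do this per channel and reject stars and outliers from the fit; this is just the core idea.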

 

After stitching you should go over the image and see that you don't get any artifacts like pinching of stars in the overlap region or leftover seam.

 

The real big problem with mosaics is not so much the brightness differences but the different noise levels between panels. That is a real problem, and a hard one.

 

Mike, since you asked and after all this training, it looks like you are ready for a 9 panel mosaic. Do you want me to upload that or do you need a vacation first? :p :p :p

 

Job well done!


  • moonrider, Mike in Rancho and jonnybravo0311 like this

#388 the Elf

the Elf

    Skylab

  • *****
  • topic starter
  • Posts: 4,175
  • Joined: 06 Sep 2017
  • Loc: Germany

Posted 04 February 2021 - 09:11 AM

In PI I have never attempted a mosaic. At work I wrote software that stitches live images from 3 cameras observing a very wide scene. My implementation defines the width of the overlapping part and weights the intensity of the right image by tanh and the left image by (1 - tanh), so that a smooth transition with no steep gradients or even seams can occur. That is for intensity and color. The images are rectified anyway so that they match geometrically. Here the geometry available in both images is also combined in a weighted average, so that the rectification (i.e. registration in astro terms) corrects image distortion and no geometrical mismatch can occur. Of course it works for a given flat or simply curved area only, a problem that we don't have in astronomy.

Does PI cause hard transitions? I expected it to be implemented similarly to my approach. On a larger scale the tiles are supposed to have the same average intensity, of course.
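For illustration, a minimal Python sketch of that kind of tanh cross-fade, assuming two already-registered grayscale panels that overlap horizontally (the function name and parameters are made up for the example):

import numpy as np

def tanh_blend(left, right, overlap_px, steepness=4.0):
    # Inside the overlap the right panel gets weight 0.5*(1 + tanh(s*t)) and the
    # left panel gets one minus that, so the transition has no hard seam.
    h, w_left = left.shape
    _, w_right = right.shape
    out = np.zeros((h, w_left + w_right - overlap_px), dtype=float)

    # Copy the non-overlapping parts straight through.
    out[:, :w_left - overlap_px] = left[:, :w_left - overlap_px]
    out[:, w_left:] = right[:, overlap_px:]

    # Smooth weight ramp across the overlap strip (t runs from -1 to 1).
    t = np.linspace(-1.0, 1.0, overlap_px)
    w = 0.5 * (1.0 + np.tanh(steepness * t))   # weight of the right panel
    out[:, w_left - overlap_px:w_left] = (
        (1.0 - w) * left[:, w_left - overlap_px:] + w * right[:, :overlap_px]
    )
    return out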



#389 imtl

imtl

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 5,906
  • Joined: 07 Jun 2016
  • Loc: Down in a hole

Posted 04 February 2021 - 09:30 AM


In PI it will depend on how well you prepare your panels before stitching. I think that is true for every piece of software.

I personally use PhotometricMosaic after applying MosaicByCoordinates to all the panels. It is magic!


  • limeyx likes this

#390 Mike in Rancho

Mike in Rancho

    Vanguard

  • -----
  • Posts: 2,145
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 04 February 2021 - 08:27 PM


Well, aha! Yes, different gradient removal, that makes sense. I gave it some minimal thought, but perhaps not enough, and was focusing too much on the stretch and the color balance being the culprits.

 

Startools is often dynamic and auto-sensing. In a way this is good, as the gradient removal module lets you "see the future" despite the changes being made on the linear data. The question (Ivo?), then, is whether the Wipe module works objectively or subjectively. If the parameters are objective regardless of image (i.e. only the preview is dynamic), then I should be able to match gradient removal across mosaic panels. Or, perhaps more accurately, I can count on those parameters to be what they are, and modify them for each tile to try to get a matching end result. If the parameters are subjective to each image, however, that could make mosaic tile matching a real chore.

 

Good point too on examining the seams for cleanup afterward.  I should have.  My blend meant no seams, but there was still an overlap strip and the tiles were aligned manually.  Gimp was only letting me shift the image by a pixel, when what I really needed was half a pixel!  I could probably handle that with less of a bin, or no bin, at the very outset, though the little cooling fan on my laptop really starts cranking the more megapixels I'm processing.

 

9 panels holy moly!  I am not worthy lol.  I think 3 or 4 panels would be a heavy lift right now, having barely survived my first and only 2-panel.  Plus, we now have this monkey head thing to start capturing.  After I am done mangling that, then I'll be ready for more practice data.


  • imtl likes this

#391 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 05 February 2021 - 11:28 AM

Picture taken by Jim Misti

 

M13

https://www.cloudyni...911_5751199.png

 

Some stars turned out too blue.

The picture seems to lack some sharpness...

 

But it is a great glob...

Attached Thumbnails

  • M13_2.jpg

Edited by F.Meiresonne, 05 February 2021 - 11:31 AM.

  • limeyx likes this

#392 Bretw01

Bretw01

    Viking 1

  • *****
  • Posts: 771
  • Joined: 13 Feb 2017
  • Loc: IL USA

Posted 05 February 2021 - 02:49 PM

Just to contribute to keep this going. If you all want to keep working on improving your skills, try to resolve the Omega Centauri cluster data all the way to the core (WITHOUT artifacts). It will take some noise reduction + deconvolution and whatever else you want. This is of course while still preserving the star colors. It's a high dynamic range object, so acquisition is also important here, but that is already taken care of by myself. The data is there. Learning how to deal with objects like this can teach you a lot. Show the results if you'd like and I'll show what I came up with.

 

https://www.dropbox....uBpDw_bopa?dl=0

Thanks for sharing.

 

My attempt at post-processing:

 

Image04PIPSw.gif

 

PixInsight: linear fit, channel combination, ABE, color calibration, TGV and MMT noise reduction. Initial stretch with ArcsinhStretch, followed by several HT stretches.
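For reference, the core of the arcsinh stretch is just asinh(k*x)/asinh(k) applied to the linear data. A minimal sketch (ignoring the color-protection that PixInsight's ArcsinhStretch adds on top):

import numpy as np

def arcsinh_stretch(img, stretch=100.0):
    # Arcsinh stretch of a linear image normalized to [0, 1];
    # a larger stretch factor lifts the faint end more.
    return np.arcsinh(stretch * img) / np.arcsinh(stretch)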


  • F.Meiresonne likes this

#393 Manitu

Manitu

    Explorer 1

  • -----
  • Posts: 76
  • Joined: 04 Mar 2020
  • Loc: South Germany

Posted 06 February 2021 - 09:03 AM

I just made two runs on M101.

The first one with photometric color calibration in Siril and further processing in Startools, finished with an asinh stretch in Siril; the second with color calibration and processing in Startools.

The colors in Startools have a more yellowish touch and less blue.

Normally I use Siril for photometric color calibration. I like the results of this way of color calibration.

Here are the results:

 

First approach with Siril

PinwheelGalaxy 120x5min siril color asinh Startools. asinh 1

 

Second approach with Startools only

PinwheelGalaxy 120x5min Startools color A1

 


  • F.Meiresonne and the Elf like this

#394 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 06 February 2021 - 12:18 PM

Startools seems to give more color... but IMO you could stretch it a bit further...

In this case I like the Siril version better, just because more detail can be seen.



#395 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 07 February 2021 - 09:47 AM

NGC6729

 

Tried to get a bit more detail in the blue nebulae. But I liked the tiny stars in the nebulae less... probably the deconvolution did that, although I used it only subtly...

I also like the dark nebulae, so I wanted those to be very clear too... (just me :D)

This is a first try...

 

https://www.cloudyni...911_8598483.png

 

 

Attached Thumbnails

  • NGC6729 (Medium).jpg

  • Ivo Jager, ntph, moonrider and 4 others like this

#396 AdrianoMS

AdrianoMS

    Viking 1

  • -----
  • Posts: 583
  • Joined: 04 Aug 2015
  • Loc: Volta Redonda, Brazil

Posted 08 February 2021 - 04:11 PM

I like topics like this. Thanks for sharing the files. I quickly processed the Rosette Nebula and the Whirlpool Galaxy in Fitswork and PhotoScape, and the results are below. Thanks again!

 

fotos-1187x1600.jpg


  • F.Meiresonne, the Elf and limeyx like this

#397 Manitu

Manitu

    Explorer 1

  • -----
  • Posts: 76
  • Joined: 04 Mar 2020
  • Loc: South Germany

Posted 10 February 2021 - 02:46 PM

Just tried to do the Rosette Nebula with Siril and Startools.

The first image is color calibrated with Siril and further processed with Startools.

The second image is color calibrated and processed with Startools.

Interesting to see the difference in the colors. Startools produced a more pink color (using the Canon 600D matrix) while Siril used photometric calibration and resulted in a redder color.

I think it would be possible to remove the pink with more manual fine-tuning.

 

Siril + Startools

Rosette RGBHa siril color asinh A Startools A

 

Startools

Rosette RGBHa Startools B

  • limeyx likes this

#398 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 10 February 2021 - 03:11 PM

Double Cluster, photo taken by The Elf

 

https://www.cloudyni...11_10575693.png

 

Could not resist dimming the starfield a bit to accentuate the clusters as a whole...

 

I like the light blue tint of the stars and some yellow ones more.

Better to use the link to the PNG, it's much better...

Reminds me of a view through a good 8-incher a long time ago...

I had an Orion Optics 1/8 wavelength 8 inch... great scope, sold it to a guy in Finland, of all places...

Attached Thumbnails

  • DoubleCluster (Medium).jpg

Edited by F.Meiresonne, 10 February 2021 - 03:16 PM.

  • the Elf likes this

#399 F.Meiresonne

F.Meiresonne

    Cosmos

  • *****
  • Posts: 8,519
  • Joined: 22 Dec 2003
  • Loc: Eeklo,Belgium

Posted 10 February 2021 - 03:15 PM


You can use various 'color patterns' in Startools. Before this thread I did not realise this completely.

Sometimes a mixture with entropy weighting for Ha or other wavelengths to accentuate them, or using the Saturate module for stronger coloring.

Still got to experiment a lot with these things.



#400 Manitu

Manitu

    Explorer 1

  • -----
  • Posts: 76
  • Joined: 04 Mar 2020
  • Loc: South Germany

Posted 10 February 2021 - 03:52 PM

Slight variation with more stars:

Rosette RGBHa siril color asinh A Startools C

 

Ha entropy tool in Startools:

Rosette RGBHa siril color asinh A Startools D

  • moonrider and the Elf like this

