Fun! Thanks a lot for the raw data. Here's a quick take on M33 in PI (cropped screenshot to keep the file size under control).
Posted 31 January 2021 - 11:02 PM
New week, new data to play with. Seems like nebulae are a hot commodity, so this time it's M20, the Trifid Nebula. Data is Ha, R, G, B.
Knock yourself out and let's see how much detail you can pull out. Link to data
Here's my take (one of many); I've decided to leave the stars bright and shiny this time around.
This is a great one! Assuming you want visual spectrum ("correct") coloring and maximum visible detail, this is a deceptively tricky dataset to process.
The problem is that half of this object is visible as blue reflection nebulosity, which does not show up at all in Ha (the same issue exists with the Running Man / NGC 1977, which also looks totally different in Ha).
Normally, to maximize signal fidelity, you would strictly use Ha as luminance; in this case it is by far the cleanest signal. But when some of the detail is simply not there in that band, your considerations may shift.
To maximize the detail you can bring out, you will want to combine Ha and (R+G+B) into a synthetic luminance. There really is no specific blend to target for the combined Ha + visual spectrum signal, but a 50/50 blend will give the algorithms a good chance to latch on to any detail. Again, this will not maximize signal fidelity, obviously, as there is no "optimal" way of combining narrowband and wide-spectrum data.
Creating the visual spectrum part of the synthetic luminance is made a little trickier still. That's due to the different exposure times and gains used (as quoted on AstroBin). If we give up on signal fidelity, then to maximize visual spectrum detail, you will want to create a synthetic luminance set that incorporates equal amounts of red, green and blue (to represent the full visual spectrum with no particular bias towards the red, green or blue parts of the spectrum). Right now we have varying levels of red (15 x 180s x 1.0 gain), green (15 x 120s x 0.5 gain) and blue (15 x 120s x 0.5 gain) signal. So, to equalize the contribution to detail to match the strong red, we should multiply green and blue's contribution by (15 x 180 x 1.0) / (15 x 120 x 0.5) = 3x. This should give the green and blue channels the much-needed boost to show the reflection nebulosity.
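To make that arithmetic concrete, here is a minimal sketch of the equalization calculation, using the exposure figures quoted above (plain Python; variable names are just for illustration):

```python
# Relative signal collected per channel: subs x exposure (s) x gain.
subs = 15
red_signal = subs * 180 * 1.0    # = 2700
green_signal = subs * 120 * 0.5  # = 900 (blue is identical to green here)

# Factor needed to bring green/blue up to red's level of contribution.
boost = red_signal / green_signal
print(boost)  # 3.0
```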
In the case of StarTools' Compose module, you can make a suitable blend by (ab)using the exposure sliders. You can totally disregard the numbers; they are just there to be helpful in more standard cases. All that matters is the ratios between them:
First, load Ha as luminance, and red, green and blue as... red, green and blue.
Keep 'Luminance, Color' set to 'L + Synthetic L from RGB, RGB'.
Set Lum Total Exposure to ~47m, red to 20m, and keep green and blue at 60m. Why these values? It's all about the ratios. 20m is one third of 60m, so our visual spectrum synthetic luminance will incorporate one third as much red signal as green and blue. The ~47m for the Ha (loaded as Lum) will achieve a perfect 50/50 balance of Ha signal and visual spectrum signal. The visual spectrum synthetic luminance signal is computed as 1/3rd red + 1/3rd green + 1/3rd blue (i.e. it sort of naively assumes all channel filters contribute exactly one third of the total signal), so that means 20m/3 red + 60m/3 green + 60m/3 blue = 46.67m. So if we want to achieve a total/final synthetic luminance that is 50% Ha and 50% visual spectrum, our Ha counterpart should also be set to ~47m.
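For anyone who wants to replicate the blend outside of Compose, here is a minimal numpy sketch of that weighting scheme (a hypothetical stand-in for illustration only, not StarTools' actual internals):

```python
import numpy as np

def synthetic_luminance(ha, r, g, b, w_r=20.0, w_g=60.0, w_b=60.0):
    """Blend Ha 50/50 with an exposure-weighted visual-spectrum luminance.

    ha, r, g, b: registered, linear 2-D arrays on the same scale.
    The default weights mirror the 20m/60m/60m slider ratio above.
    """
    visual = (w_r * r + w_g * g + w_b * b) / (w_r + w_g + w_b)
    return 0.5 * ha + 0.5 * visual
```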
Notice how in this entire story the coloring needs no special treatment at all to achieve visual spectrum-correct coloring; in StarTools the chrominance signal is kept completely separate. Simple white balancing sorts out the signal contributions of the color channels for the chrominance part (once you hit the Color module). No Ha needs to be added to the chrominance; it fully relies on the color signal from the visual spectrum data (which is not just red even for the purest of ionized hydrogen emissions; the other Balmer lines will always inject a measure of blue; see here).
From there, a standard workflow, using only defaults and presets (EDIT: except for the Color module), should yield this:
You should be seeing good, varied star coloring/temperatures; blue reflection nebulosity with all its detail intact (ready for more aggressive manipulation if you so wish); pinkish HII in the Trifid as expected (i.e. not just pure Ha emission but also blue reflection nebulosity); and redder/purer ionized hydrogen emission in the surrounding areas.
Not your standard compositing, but I hope this helps and makes sense nonetheless when working with a complex dataset like this!
Edited by Ivo Jager, 31 January 2021 - 11:03 PM.
Posted 31 January 2021 - 11:12 PM
Ivo,
Just to correct (my own mistake): the red channel was 120s at 0.5 gain as well, the same as the G and B. The AstroBin info was not correct; I have fixed it now, thanks to you.
Posted 31 January 2021 - 11:22 PM
You mean I did all that re-weighting for nothing!? Awesome data & exercise nonetheless. Thank you for sharing!
The 50/50 Ha/visual split is arbitrary anyway. Also the relative (incorrect) boost of blue and green vs red is probably working in our favor here for the blue reflection nebulosity detail.
Edited by Ivo Jager, 31 January 2021 - 11:22 PM.
Posted 31 January 2021 - 11:26 PM
Just as I once made people register the frames themselves on another dataset, I put little bombs in for everyone (well, as I said, this one was my own stupid mistake).
I reprocessed my data and I think I managed to squeeze out more. I did some sort of a 50/50 blend of Ha into the synthetic lum. There are a lot of options here to play with.
Thank you for following this whole monster thread. I'm sure a lot of people (including myself) are learning a lot.
Posted 01 February 2021 - 03:11 PM
Trifid, one more
https://www.cloudyni...911_2650545.png
A bit more stretched, a bit more blue nebulosity (I like the blue part), and some subtle environment.
Still reduced the starfield; there are just too many stars and they are too overwhelming for my taste...
@Ivo, I am probably wrong saying this, but your picture looks so oversaturated to me... (please don't ban me from this forum), I mean, does it really have to look like that?
I found quite a few peculiar specimens on the internet too, though.
Edited by F.Meiresonne, 01 February 2021 - 03:16 PM.
Posted 01 February 2021 - 03:20 PM
I'm pretty sure Ivo doesn't have the power to ban you from these forums... he might revoke your Star Tools license, though
Posted 01 February 2021 - 03:34 PM
That was my second thought, yes.
Posted 04 February 2021 - 03:05 AM
I've had the dragon data for 3 or 4 days now...and it beat the snot out of me! I threw just about everything I had at it short of Registax6 (though maybe I should have?).
Something is just wrong here. Multi-night acquisition with differing skyglow? Channel exposures not the same? Or was it just me?
For a while I was thinking imtl should get another night of mismatched data and we could just settle on doing a triptych.
I failed so many times I must have put a terabyte through the recycle bin. Ultimately, even though (with a lot of) multi-stage, precision, strategic cropping I could get DSS or ASTAP to stitch the two panels together, they just wouldn't blend right. Background calibration in either stacker only helped so much, or maybe not at all.
In the end I fully processed both panels separately, matching all my Startools parameters except two: I used different global stretches and different color balancing. The former I just eyeballed as best I could, and for the latter I let ST use its star-sampling routine to set the colors. Close, but still not perfect!
So I took the two halves into Gimp and faded the crossover strips into each other with a gradient layer mask. So there!
Beautiful area, hopefully that hides most of my flaws. With so much effort put into the mosaic problem, I didn't spend as much time as I normally would processing all the little details carefully. If I did it again I think I would go for more blue.
Posted 04 February 2021 - 05:05 AM
Well, you didn't expect to improve your skills by just processing the same images over and over, eh!?
From the image you uploaded here it looks like you did a pretty good job at the end.
So, a few tips and tricks for others:
1. Background extraction needs to be applied to each panel before attempting to stitch, in order to remove gradients.
2. Next, balance the brightness or gradient between panels. That is the tricky part, and it should be done at the LINEAR stage. There are all sorts of tools in PixInsight for that, such as PhotometricMosaic, DNALinearFit, etc. (see the sketch below). Astro Pixel Processor is another piece of software that handles stitching quite well.
3. For the stitching itself, the problem is always hiding the seam. The two previous stages are important for that. Again, in PI, GradientMergeMosaic (GMM) or PhotometricMosaic are tools you can use.
After stitching, go over the image and check that you don't have artifacts like pinched stars in the overlap region or a leftover seam.
The really big problem with mosaics is not so much the brightness differences but the different noise levels between panels. That is a real problem, and a hard one.
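As a toy illustration of step 2, a least-squares linear fit over the overlap strip (in the spirit of, but not identical to, PixInsight's DNALinearFit) might look like this:

```python
import numpy as np

def match_panel(ref_overlap, tgt_overlap, tgt_panel):
    """Fit tgt ~= a*ref + b over the shared strip, then correct the panel.

    ref_overlap, tgt_overlap: registered, linear pixel data from the
    region where the two panels overlap; tgt_panel: the full target tile.
    """
    a, b = np.polyfit(tgt_overlap.ravel(), ref_overlap.ravel(), 1)
    return a * tgt_panel + b  # target panel now on the reference's scale
```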
Mike, since you asked and after all this training, it looks like you are ready for a 9 panel mosaic. Do you want me to upload that or do you need a vacation first?
Job well done!
Posted 04 February 2021 - 09:11 AM
In PI I have never attempted a mosaic. At work I wrote software that stitches live images from 3 cameras observing a very wide scene. My implementation defines the width of the overlapping part and weighs the intensity of the right image by tanh and the left image by (1 - tanh), so that a smooth transition occurs with no steep gradients or even seams. That is for intensity and color. The images are rectified anyway so that they match geometrically. Here the geometry available in both images is also combined in a weighted average, so that the rectification (i.e. registration in astro terms) corrects image distortion and no geometrical mismatch can occur. Of course it works only for a given flat or simply curved area, a problem we don't have in astronomy.
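For the curious, a minimal numpy sketch of that tanh cross-fade (my own reconstruction from the description above, not the author's actual code):

```python
import numpy as np

def tanh_blend(left, right, x0, width):
    """Cross-fade two registered panels along the column axis.

    left, right: 2-D arrays covering the same output frame;
    x0: column where the transition is centered; width: ramp softness.
    """
    cols = np.arange(left.shape[1])
    w = 0.5 * (1.0 + np.tanh((cols - x0) / width))  # 0 -> left, 1 -> right
    return (1.0 - w) * left + w * right
```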
Does PI produce hard transitions? I expected it to be implemented similarly to my approach. On a larger scale the tiles are supposed to have the same average intensity, of course.
Posted 04 February 2021 - 09:30 AM
In PI it will depend on how well you prepare your panels before stitching. I think that is true for every piece of software.
I personally use PhotometricMosaic after applying MosaicByCoordinates to all the panels. It is magic!
Posted 04 February 2021 - 08:27 PM
Well, aha! Yes, different gradient removal; that makes sense. I gave it some minimal thought, but perhaps not enough, and was focusing too much on the stretch and the color balance being the culprits.
Startools is often dynamic and auto-sensing. In a way this is good, as the gradient removal module lets you "see the future" even though the changes are being made on the linear data. The question (Ivo?), then, is whether the Wipe module works objectively or subjectively. If the parameters are objective regardless of the image (i.e. only the preview is dynamic), then I should be able to match gradient removal across mosaic panels. Or, perhaps more accurately, I can count on those parameters to be what they are, and modify them for each tile to try to get matching end results. If the parameters are subjective to each image, however, that could make mosaic tile matching a real chore.
Good point too on examining the seams for cleanup afterward. I should have. My blend meant no seams, but there was still an overlap strip, and the tiles were aligned manually. Gimp was only letting me shift the image by a whole pixel, when what I really needed was half a pixel! I could probably handle that with less of a bin, or no bin, at the very outset, though the little cooling fan on my laptop really starts cranking the more megapixels I'm processing.
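For what it's worth, a sub-pixel nudge is straightforward outside of Gimp; a minimal sketch with scipy, interpolating the tile by half a pixel (the array here is just a stand-in for a panel):

```python
import numpy as np
from scipy.ndimage import shift

tile = np.random.rand(512, 512).astype(np.float32)  # stand-in for a panel
# Cubic-spline interpolation shifts the image by a fractional pixel count:
# 0 rows down, 0.5 columns right.
nudged = shift(tile, (0.0, 0.5), order=3, mode="nearest")
```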
9 panels, holy moly! I am not worthy, lol. I think 3 or 4 panels would be a heavy lift right now, having barely survived my first and only 2-panel. Plus, we now have this Monkey Head thing to start capturing. After I am done mangling that, I'll be ready for more practice data.
Posted 05 February 2021 - 11:28 AM
Picture taken by Jim Misti
M13
https://www.cloudyni...911_5751199.png
Some stars turned out too blue.
The picture seems to lack some sharpness...
But it is a great glob...
Edited by F.Meiresonne, 05 February 2021 - 11:31 AM.
Posted 05 February 2021 - 02:49 PM
Just to contribute to keep this going: if you all want to keep working on improving your skills, try to resolve the Omega Centauri cluster data all the way to the core (WITHOUT artifacts). It will take some noise reduction + deconvolution and whatever else you want, while of course still preserving the star colors. It's a high dynamic range object, so acquisition is also important here, but I've already taken care of that. The data is there. Learning how to deal with objects like this can teach you a lot. Show the results if you'd like, and I'll show what I came up with.
Thanks for sharing.
My attempt at post-processing:
PixInsight - linear fit, channel combination, ABE, color calibration, TGV and MMT noise reduction. Initial stretch with ArcsinhStretch followed by several HT (HistogramTransformation) stretches.
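As an aside, the core of an arcsinh stretch is a one-liner; here is a simplified stand-in for PixInsight's ArcsinhStretch (the real process additionally handles the black point and protects color):

```python
import numpy as np

def arcsinh_stretch(img, s=100.0):
    """Textbook arcsinh stretch for linear data normalized to [0, 1].

    s is the stretch factor; larger values lift the faint end harder
    while compressing the highlights.
    """
    return np.arcsinh(s * img) / np.arcsinh(s)
```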
Posted 06 February 2021 - 09:03 AM
I just made two runs on M101.
The first was with photometric color calibration in Siril and further processing in Startools, with a finishing asinh stretch in Siril; the second was with color calibration and processing in Startools.
The colors in Startools have a more yellowish touch and less blue.
Normally I use Siril for photometric color calibration; I like the results of this way of color calibration.
Here are the results:
First approach with Siril
Second approach with Startools only
Posted 06 February 2021 - 12:18 PM
Startools seems to give more color... but IMO you could stretch it a bit further...
In this case I like the Siril version better, just because more detail can be seen.
Posted 07 February 2021 - 09:47 AM
NGC6729
Tried to get a bit more detail in the blue nebulae, but I liked the tiny stars in the nebulae less... probably the deconvolution did that, although I used it only subtly.
I also like the dark nebulae, so I wanted those to be very clear too... (just me).
This is a first try...
https://www.cloudyni...911_8598483.png
Posted 10 February 2021 - 02:46 PM
Just tried to do the Rosette Nebula with Siril and Startools.
The first image is color calibrated with Siril and further processed with Startools.
The second image is color calibrated and processed with Startools.
Interesting to see the difference in the colors. Startools produced a more pink color (using the Canon 600D matrix), while Siril used photometric calibration and produced a redder color.
I think it would be possible to remove the pink with more manual fine-tuning.
Siril + Startools
Startools
Posted 10 February 2021 - 03:11 PM
Double Cluster, photo taken by The Elf
https://www.cloudyni...11_10575693.png
Could not resist dimming the starfield a bit to accentuate the clusters.
I like the light blue tint of the stars, and some yellow ones, more.
Better to use the link to the PNG; it's much better...
Reminds me of a view through a good 8-incher a long time ago...
I had an Orion Optics 1/8-wavelength 8-inch, a great scope; sold it to a guy in Finland, of all places...
Edited by F.Meiresonne, 10 February 2021 - 03:16 PM.
Posted 10 February 2021 - 03:15 PM
You can use various 'color patterns' in Startools; before this thread I did not fully realise this.
Sometimes a mixture of entropy for Ha or other wavelengths to accentuate things, or the Saturate module for stronger coloring.
Still got to experiment a lot with these things.