I downloaded johnpane's revised version and followed sharkmelley's advice. Brilliant! Thanks for addressing one of the most exasperating problems in PixInsight.

PixInsight Arcsinh Stretch
#26
Posted 14 March 2017 - 05:53 PM
#27
Posted 14 March 2017 - 06:07 PM
I needed to move the Stretch Factor all the way to 500 to get a useable image.
Even a little nudge to the slider for Background Subtract filled the preview with the highlight for negative values.
Any words of wisdom?
When you are stretching by a factor of 100 or more, the very tiniest adjustment of the background value makes a very noticeable difference because it is magnified by that same factor of 100 or more! So you need to get the background value accurate to all 4 decimal places. The up and down arrows on the keyboard are supposed to nudge the value but they're not working for me - probably something I have overlooked in the code. I'll look into it - this is the first module I have written.
However, I do find the mouse wheel works fine for nudging the value. Failing that, just type the background number in the box - it's easy to overtype the final digit as required.
Once you've done it once or twice you'll get the hang of it.
Another possibility, though a bit more complicated would be for me to include a fine adjustment slider immediately below the original slider.
Mark
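To see why such tiny background adjustments matter, here is a minimal numeric sketch. It assumes only what Mark describes above: near the background level the stretch is roughly linear, with slope equal to the stretch factor.

```python
# Near the background level an arcsinh-style stretch is approximately
# linear with slope equal to the stretch factor S, so a background
# estimate that is off by db shifts the stretched background by ~S*db.
S = 100          # stretch factor
db = 0.0001      # an error of one unit in the 4th decimal place
shift = S * db   # resulting offset in the stretched output, ~0.01
# i.e. a 1% brightness offset across the whole sky background -
# easily visible, which is why the 4th decimal place matters.
```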
Mark, can you increase the precision with which you can choose the background values? With deeper stacks (I most often stack over 100 subs), I find that the background levels can vary at the fifth or sixth decimal place. I might have values like 0.00341 through 0.00348. With stacks around 300 frames, I may even need another level of precision. I could use 0.0035, but it would not necessarily be as accurate as possible. With the current limit of four decimal places, I don't think the precision will always be fine enough to set the background level properly. That might be what Russ is running into. I am also not certain it is stack depth that mostly affects this... it might also depend on how much of the dynamic range is used and how you integrate, which could change how precisely the background level needs to be measured.
Edited by Jon Rista, 14 March 2017 - 06:09 PM.
#28
Posted 14 March 2017 - 06:44 PM
Mark, can you increase the precision with which you can choose the background values? With deeper stacks (I most often stack over 100 subs), I find that the background levels can vary at the fifth or sixth decimal place. I might have values like 0.00341 through 0.00348. With stacks around 300 frames, I may even need another level of precision. I could use 0.0035, but it would not necessarily be as accurate as possible. With the current limit of four decimal places, I don't think the precision will always be fine enough to set the background level properly. That might be what Russ is running into. I am also not certain it is stack depth that mostly affects this... it might also depend on how much of the dynamic range is used and how you integrate, which could change how precisely the background level needs to be measured.
I think the addition of a fine adjustment slider would do the trick.
However, you might be right that your stacked image is not using the whole dynamic range of a PixInsight image, i.e. 0.0-1.0.
If so, you can try a 2-stage process:
Firstly, apply Arcsinh Stretch to your image with the default parameters (Stretch=1.0, Autoscale=true) and a rough background subtract. This will be a linear operation (since Stretch=1.0) and the brightest pixels will end up with the value 1.0, so the whole image range is then being utilised.
Secondly, apply the non-linear stretch with a finely tuned background subtract - you might find you have the background control you need on this second iteration.
Mark
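The first, linear stage of this two-stage process can be sketched like this (a hypothetical helper on a flat list of pixel values; real data would be a 2-D image):

```python
# Stage 1 of the two-stage process: Stretch=1.0 with autoscale is just a
# linear rescale mapping the rough background to 0 and the peak to 1.
def linear_rescale(pixels, rough_bg):
    peak = max(pixels)
    return [(p - rough_bg) / (peak - rough_bg) for p in pixels]

# Made-up linear data whose peak (0.095) is far below 1.0:
pixels = [0.0034, 0.0036, 0.0100, 0.0950]
scaled = linear_rescale(pixels, rough_bg=0.0034)
# scaled now spans 0.0 .. 1.0, so the background subtract in the second,
# non-linear stage can be tuned using the slider's full precision.
```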
Edited by sharkmelley, 14 March 2017 - 06:45 PM.
#29
Posted 14 March 2017 - 06:59 PM
I think the addition of a fine adjustment slider would do the trick.
...
For me, the more-digits approach is more straightforward than the two-slider approach (rough/fine). A two-slider approach would implicitly suggest you are adjusting two different parameters, when it is actually the same parameter.
#30
Posted 14 March 2017 - 11:57 PM
#31
Posted 15 March 2017 - 12:50 AM
I think the addition of a fine adjustment slider would do the trick.
...
For me, the more-digits approach is more straightforward than the two-slider approach (rough/fine). A two-slider approach would implicitly suggest you are adjusting two different parameters, when it is actually the same parameter.
Agreed...I think a single slider would be most useful.
@Mark: Is there any chance you could use an exponent setting like TGVDenoise uses? That would then give users arbitrary freedom to configure the precision of the background slider as they see fit for their particular data, and still only use a single slider...
#32
Posted 15 March 2017 - 01:22 AM
I think the addition of a fine adjustment slider would do the trick.
For me, the more-digits approach is more straightforward than the two-slider approach (rough/fine). A two-slider approach would implicitly suggest you are adjusting two different parameters, when it is actually the same parameter.
Agreed...I think a single slider would be most useful.
@Mark: Is there any chance you could use an exponent setting like TGVDenoise uses? That would then give users arbitrary freedom to configure the precision of the background slider as they see fit for their particular data, and still only use a single slider...
I'll take a look at possible solutions. I definitely want it to remain simple and intuitive to use but in the end it might simply come down to which method I can get working!
Mark
Edited by sharkmelley, 15 March 2017 - 01:37 AM.
#33
Posted 15 March 2017 - 01:31 AM
I seem to get oversaturated stars when working with drizzled images - any fix for that? Looks like a great tool though. Thank you for sharing.
I don't know how that could happen nor why drizzled data should affect it. Do you have an example?
Mark
#34
Posted 15 March 2017 - 01:40 AM
#35
Posted 15 March 2017 - 06:19 AM
I use audio software where holding the control (or other modifier) key changes slider resolution from low to high. Easy to get used to.
#36
Posted 19 March 2017 - 06:42 AM
Here's a link to the latest Windows version: https://drive.google...amxpVDJXSndRc0E
The source code and makefile tree is here:
ArcsinhMake00.00.01.0103.zip 31.96KB
Here's what it looks like in operation:
This new version is a substantial re-write for a number of reasons. Juan from PixInsight responded to me saying there is currently a bug in the NumericControl (the combined slider and edit box), which is why the Up/Down arrows and PgUp/PgDn keys did not work for fine control of the slider. This was an unknown problem right across the PixInsight platform but will be fixed in the next release.
I've therefore re-written the code to separate out the Edit boxes and Sliders so the Up/Dn and PgUp/PgDn keys can now work on the sliders. It also allowed me to make a couple of useful changes:
- The Stretch Factor slider now works logarithmically to allow the same degree of relative control near 1.0 and near 1000. Values near 1.0 are useful for applying a second iteration of Arcsinh to increase colour saturation, for instance.
- The Background Subtract has a fine adjustment slider below the original. Think of it as a "fine focusing" knob. It re-centres itself whenever you move the mouse off the slider so you can always finely adjust both up and down from the background level you already have.
The background level now has 6 decimal places of precision and using the keyboard Up/Down arrows on the fine adjustment slider will nudge the final digit. PgUp/PgDn gives an adjustment 10x as much.
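A logarithmic slider mapping of the kind described for the Stretch Factor can be sketched as follows (illustrative only, not the module's actual code):

```python
import math

# Map slider position t in [0, 1] logarithmically onto stretch 1 .. 1000,
# so equal slider movements give equal *relative* changes in stretch.
def slider_to_stretch(t, max_stretch=1000.0):
    return math.exp(t * math.log(max_stretch))
```

Moving the slider by a fixed amount always multiplies the stretch by the same factor, so the control is just as fine near 1.0 as it is near 1000.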
I've added an AutoBackgroundLevel button which is operational only when a real-time preview window is open. Hitting this button will set the background subtract level to the minimum pixel value it finds in the real-time window. Typically, using this button and then some fine adjustment will quickly get you an accurate background level.
Known Issue: With "Autoscale highlights" switched on, sometimes the final image will appear darker than its real-time preview. This is because the autoscale adjusts itself to prevent clipping of the very brightest pixel value (after stretching). The real-time preview contains only a subset of the whole image and won't always contain the brightest pixel.
As before, don't apply a Screen Transfer Function to your data because that will upset the preview.
ArcSinh stretch is designed to work on linear data. It won't give good results on data that has already been non-linearly stretched (e.g. by using CurvesTransformation or by adjusting midtones in HistogramTransformation).
Let me know if you find any problems or have any suggestions.
Mark
Edited by sharkmelley, 19 March 2017 - 12:00 PM.
#37
Posted 19 March 2017 - 08:33 AM
Mark, did you change the internal algorithm? In a quick test I am having more trouble with oversaturated star colors than in the prior version.
Here's the Mac binary for this version.
#38
Posted 19 March 2017 - 09:42 AM
Hey Mark, thanks for the explanation of the preview - that made a big difference. The auto background and more precise sliders are nice too. However, I generally have issues where the data always seems very dark; even at max stretch I can't seem to make it work, to be honest.
However, playing with your test North American image I saw the power of it, and it was impressive! So I assume my data is just not that good or something.
#39
Posted 19 March 2017 - 09:53 AM
Mark, did you change the internal algorithm? In a quick test I am having more trouble with oversaturated star colors than in the prior version.
Here's the Mac binary for this version.
The new version automatically scales the data to occupy the full dynamic range before it applies the non-linear stretch. This is the way I intended it to work in the first place. So you may find you require a lower stretch factor than previously. For data that already occupies the full dynamic range there is no change.
Thanks for building the Mac binary, but did you forget to attach it?
Mark
#40
Posted 19 March 2017 - 09:56 AM
Hey Mark, thanks for the explanation of the preview - that made a big difference. The auto background and more precise sliders are nice too. However, I generally have issues where the data always seems very dark; even at max stretch I can't seem to make it work, to be honest.
I'm happy to take a look if you can upload your image file and let me know the settings you are using.
Mark
#41
Posted 19 March 2017 - 10:07 AM
Johnpane's original Mac version is continuing to work well for me, but I'm eagerly awaiting his update (which was not attached to his post earlier today). As I mentioned before, I believe Mark's module is a game changer for PixInsight. Thanks again!
#42
Posted 19 March 2017 - 11:11 AM
Sorry, I forgot the last button click for attaching the file to my prior post. Here it is.
#43
Posted 19 March 2017 - 05:00 PM
I don't know how that could happen nor why drizzled data should affect it. Do you have an example?
I seem to get oversaturated stars when working with drizzled images - any fix for that? Looks like a great tool though. Thank you for sharing.
Mark
OK, you can forget this comment. My data was not color calibrated - it got mixed up with the linear files.
I'm having slight problems fully stretching my images with this tool. I tend to get dark images after the stretch. My stars are looking much better than before, but the actual target is not stretched the way I would like. That might be due to poor data quality.
The image looks flat in the end and the target loses some contrast. I'm trying to make this part of my workflow, where I take the stars from this tool and the rest of the image from a normal stretch.
#44
Posted 20 March 2017 - 01:13 AM
I'm having slight problems fully stretching my images with this tool. I tend to get dark images after the stretch. My stars are looking much better than before, but the actual target is not stretched the way I would like. That might be due to poor data quality.
The image looks flat in the end and the target loses some contrast. I'm trying to make this part of my workflow, where I take the stars from this tool and the rest of the image from a normal stretch.
ArcSinh is not a universal panacea - no single process ever will be, because of the huge variation in image data. But it's another useful weapon in the processing armoury.
The most important thing is to perform a careful background extraction beforehand otherwise you can end up with increasing colour cast as the scene intensity drops.
I typically perform an ArcSinh stretch after a white balance (I already know the R, G, B factors for my camera) and a "Function degree 1" AutomaticBackgroundExtraction (function degree 1 prevents additional gradients being introduced into the data).
I can then immediately clearly see my background gradients. So I can go back and use DBE on the white balanced linear data and try again. It's an iterative process.
Once my ArcSinh stretch has been performed on my final background extracted data I will sometimes need to perform traditional Histogram and Curve Transformation to fine tune the result.
Having said that, I'm always interested in image data where ArcSinh stretch is apparently unsuccessful. I don't deny such cases might exist, but I've never found one in my own processing.
Mark
Edited by sharkmelley, 20 March 2017 - 01:21 AM.
#45
Posted 20 March 2017 - 01:43 AM
#46
Posted 20 March 2017 - 05:57 AM
I'm having slight problems fully stretching my images with this tool. I tend to get dark images after the stretch. My stars are looking much better than before, but the actual target is not stretched the way I would like. That might be due to poor data quality.
The image looks flat in the end and the target loses some contrast. I'm trying to make this part of my workflow, where I take the stars from this tool and the rest of the image from a normal stretch.
ArcSinh is not a universal panacea - no single process ever will be, because of the huge variation in image data. But it's another useful weapon in the processing armoury.
The most important thing is to perform a careful background extraction beforehand otherwise you can end up with increasing colour cast as the scene intensity drops.
I typically perform an ArcSinh stretch after a white balance (I already know the R, G, B factors for my camera) and a "Function degree 1" AutomaticBackgroundExtraction (function degree 1 prevents additional gradients being introduced into the data).
I can then immediately clearly see my background gradients. So I can go back and use DBE on the white balanced linear data and try again. It's an iterative process.
Once my ArcSinh stretch has been performed on my final background extracted data I will sometimes need to perform traditional Histogram and Curve Transformation to fine tune the result.
Having said that, I'm always interested in image data where ArcSinh stretch is apparently unsuccessful. I don't deny such cases might exist, but I've never found one in my own processing.
Mark
Here is one image that I have been testing. The galaxy gets 'flat' after arcsinh; the stars are good. It would be great if you have time to make an example. The file is free for anyone to download. You can post the result here, but please send me the modified copy of the image.
https://drive.google...NFFMXzJyT2d1YWc
#47
Posted 20 March 2017 - 06:31 PM
Here is one image that I have been testing. The galaxy gets 'flat' after arcsinh; the stars are good. It would be great if you have time to make an example. The file is free for anyone to download. You can post the result here, but please send me the modified copy of the image.
https://drive.google...NFFMXzJyT2d1YWc
It's a good example and I do agree with you that it does seem quite flat. For instance, here it is using AutoBackgroundLevel and a stretch of 144 (arbitrarily chosen):
The problem is that the stars are much brighter than the galaxy, so the galaxy doesn't take advantage of the full dynamic range - the core remains fairly dim. We need to further compress the difference between the galaxy core and the brightest stars. This can be done by applying Arcsinh iteratively but with the same total stretch. Since 12x12=144, we can apply two iterations of x12 stretch. The background noise ends up with the same amount of stretch, but the galaxy core brightness and contrast are greater:
Going further, since 3.5x3.5x3.5x3.5=144 (approximately) we can instead apply 4 iterations of x3.5 stretch:
So you can see that stretch factors behave in a multiplicative way (on the really faint background and noise) but give different degrees of compression for the brighter areas. Whether you apply a stretch in one go or in iterations, the overall colour remains pretty constant.
Alternatively, apply Arcsinh to perform most of the stretch and then follow up with a HistogramTransformation or CurvesTransformation. The best results will always come from using a variety of tools.
Mark
Edited by sharkmelley, 20 March 2017 - 06:59 PM.
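The iterative behaviour described above can be checked numerically. The sketch below assumes a common arcsinh parameterization, f(x) = asinh(b*x)/asinh(b) with b chosen so the slope at the origin equals the stretch factor - an assumption for illustration, not necessarily the module's exact formula:

```python
import math

def solve_beta(stretch, lo=1e-9, hi=1e9, iters=200):
    # Bisect (geometrically, since b spans many decades) for b such that
    # the slope at the origin, b/asinh(b), equals the requested stretch.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if mid / math.asinh(mid) < stretch:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def arcsinh_stretch(x, stretch):
    b = solve_beta(stretch)
    return math.asinh(b * x) / math.asinh(b)

# One x144 stretch vs two x12 stretches on a faint and a mid-level pixel:
faint = 1e-6   # background noise level
core = 0.05    # a stand-in "galaxy core" pixel
once = arcsinh_stretch(core, 144)
twice = arcsinh_stretch(arcsinh_stretch(core, 12), 12)
# Near zero both routes apply the same ~144x gain, but the iterated
# stretch lifts the mid-level pixel further: twice > once.
```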
#48
Posted 20 March 2017 - 06:57 PM
Mark, can you explain more why your process expands the image data to fill the dynamic range before stretching? Is that absolutely necessary? I always prefer to keep some headroom in my data until I get to the very end of my processing, at which time I manually bring in the white and black points to expand the data to fill the DR. This avoids issues with clipping as a result of one process or another (particularly those that deal with contrast, HDRMT & LHE, etc.)
It's not absolutely necessary but I did it for a couple of reasons. Firstly, the real-time preview is quite crucial to Arcsinh and the scaling means the real-time preview always works well and is not too dark. Secondly, I did it mainly for my own convenience, because my DSLR stacked images typically don't occupy the whole PixInsight dynamic range of 0.0-1.0. Sometimes the peak value might be 0.1 or less. So my philosophy behind Arcsinh was that a stretch of 100x (for example) would have an identical effect whether I applied it to an image with a peak value of 0.1 or whether I first scaled the image by a factor of 10 (using PixelMath) and then applied Arcsinh.
I completely understand what you are saying about headroom but I think you might be better off deliberately scaling the image down before using processes that require that extra headroom.
Mark
Edited by sharkmelley, 20 March 2017 - 06:57 PM.
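The scale-invariance Mark describes can be illustrated with a toy asinh stretch that autoscales first (a sketch with a fixed strength parameter b, not the module's actual code):

```python
import math

def stretch_autoscaled(pixels, b=1000.0):
    # Scale the brightest pixel to 1.0 first, then apply an asinh curve;
    # b sets the strength of the non-linear stretch.
    peak = max(pixels)
    return [math.asinh(b * p / peak) / math.asinh(b) for p in pixels]

dim = stretch_autoscaled([0.001, 0.02, 0.095])   # peak well below 1.0
bright = stretch_autoscaled([0.01, 0.2, 0.95])   # same data scaled x10
# dim and bright come out identical element-for-element: once autoscale
# is applied, any prior linear scaling of the data no longer matters.
```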
#49
Posted 21 March 2017 - 01:36 PM
Thanks for looking at my data. I had to give this some thought and read it a couple of times. The result you got with multiple iterations was good. I don't really understand why it works differently, or when it would not be good to do it in iterations on all images, if it preserves the fainter contrast. Why not always do it in many iterations, as we do with masked stretch?
It's a good example and I do agree with you that it does seem quite flat. For instance, here it is using AutoBackgroundLevel and a stretch of 144 (arbitrarily chosen):
Here is one image that I have been testing. The galaxy gets 'flat' after arcsinh; the stars are good. It would be great if you have time to make an example. The file is free for anyone to download. You can post the result here, but please send me the modified copy of the image.
https://drive.google...NFFMXzJyT2d1YWc
arcsinh_x144.JPG
The problem is that the stars are much brighter than the galaxy, so the galaxy doesn't take advantage of the full dynamic range - the core remains fairly dim. We need to further compress the difference between the galaxy core and the brightest stars. This can be done by applying Arcsinh iteratively but with the same total stretch. Since 12x12=144, we can apply two iterations of x12 stretch. The background noise ends up with the same amount of stretch, but the galaxy core brightness and contrast are greater:
arcsinh_x12x12.JPG
Going further, since 3.5x3.5x3.5x3.5=144 (approximately) we can instead apply 4 iterations of x3.5 stretch:
arcsinh_x3.5x3.5x3.5x3.5.JPG
So you can see that stretch factors behave in a multiplicative way (on the really faint background and noise) but give different degrees of compression for the brighter areas. Whether you apply a stretch in one go or in iterations, the overall colour remains pretty constant.
Alternatively, apply Arcsinh to perform most of the stretch and then follow up with a HistogramTransformation or CurvesTransformation. The best results will always come from using a variety of tools.
Mark
I did an M51 image with arcsinh (you can see the image in previous topics). It was a great improvement for my stars. I did the first stretch with arcsinh and then a smaller stretch with HistogramTransformation. I lost a little star colour, but got it all back with the "saturated block method" in PS CC.
Edited by JukkaP, 21 March 2017 - 01:39 PM.
#50
Posted 21 March 2017 - 04:06 PM
Thanks for looking at my data. I had to give this some thought and read it a couple of times. The result you got with multiple iterations was good. I don't really understand why it works differently, or when it would not be good to do it in iterations on all images, if it preserves the fainter contrast. Why not always do it in many iterations, as we do with masked stretch?
It really depends on the distribution of data in the image. The effect of applying the Arcsinh stretch in a number of iterations is to compress the highlights more, but if you keep the total stretch the same (144 in the above example) then the gradient at the bottom end of the response curve will be identical.
Here's a graph that illustrates the response curve of those 3 stretches:
Changing the scale of the graph's x-axis shows that the gradient at the bottom end in each case is the same (i.e. 144):
Some images will respond very well to having their highlights extremely compressed - especially if the highlights only contain stars. Your image was an example of that.
Mark
Edited by sharkmelley, 21 March 2017 - 04:08 PM.