A stupid question: How do you measure SNR in your images?

17 replies to this topic

#1 roofkid

roofkid

    Viking 1

  • -----
  • topic starter
  • Posts: 517
  • Joined: 13 Jan 2016

Posted 27 May 2018 - 06:28 AM

Hi everyone,

 

well, we all talk about it, but I wonder how you guys actually go about calculating the signal-to-noise ratio of your images. For example, I use it to determine which color I use as a linear fit reference. How do you guys go about it? I just use the Statistics module in PI, take the entire image, and calculate mean / standard deviation by hand for all frames. Though I am unsure if I really should do that for the entire frame or rather just a preview of interest?

 

How do you guys handle that and do you even care about it? What other uses does it have for you?



#2 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 21202
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 May 2018 - 08:11 AM

Hi everyone,

 

well, we all talk about it, but I wonder how you guys actually go about calculating the signal-to-noise ratio of your images. For example, I use it to determine which color I use as a linear fit reference. How do you guys go about it? I just use the Statistics module in PI, take the entire image, and calculate mean / standard deviation by hand for all frames. Though I am unsure if I really should do that for the entire frame or rather just a preview of interest?

 

How do you guys handle that and do you even care about it? What other uses does it have for you?

For your purposes, the Standard Deviation is probably good enough. For a true Poisson signal, Standard Deviation would equal the noise in terms of RMS. Now, our signals are not purely Poisson, nor are they purely Gaussian, but they get pretty close with enough integration. 

 

The PI statistics tool can help here, as can the NoiseEvaluation script. 
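Jon's mean/SD suggestion is easy to sanity-check numerically: for a Poisson signal, the standard deviation of a uniform patch converges to the square root of its mean, so mean/SD recovers the shot-noise SNR. A stdlib-only sketch (the 400 e- sky level and sample count are made up):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^-lam
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(1)
mean_e = 400                       # hypothetical sky level, electrons/pixel
pixels = [poisson(mean_e, rng) for _ in range(2000)]

mean = sum(pixels) / len(pixels)
var = sum((x - mean) ** 2 for x in pixels) / (len(pixels) - 1)
std = math.sqrt(var)
snr = mean / std                   # lands near sqrt(400) = 20

print(round(mean, 1), round(std, 2), round(snr, 2))
```

With enough integration the Gaussian read-noise component gets swamped by shot noise, which is why the plain SD is usually "good enough", as Jon says.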


  • Gucky likes this

#3 *skyguy*

*skyguy*

    Gemini

  • *****
  • Posts: 3332
  • Joined: 31 Dec 2008
  • Loc: Western New York

Posted 27 May 2018 - 08:11 AM

Here's a simplified formula for calculating the signal to noise ratio in an image:

 

[formula16.png: image of the SNR formula]

 

where:

    S = total nebula signal
    B = total background signal
    D = dark current
    RN = read noise from bias frame
    n = number of sub-exposures

 

Check out this website with a number of useful formulas for the astrophotographer. Many of them can be determined using the simple online calculators:

 

http://www.wilmslowa...ormulae.htm#SNR
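As a sketch, the formula can be turned into a small calculator (this assumes S, B and D are per-sub electron counts and RN is the per-sub read noise in electrons RMS; the example numbers are invented):

```python
import math

def stack_snr(S, B, D, RN, n):
    """SNR of a stack of n sub-exposures.

    S, B, D: object, sky background and dark-current signal per sub (e-)
    RN: read noise per sub (e- RMS)
    The signal grows as n while the noise grows as sqrt(n),
    so overall SNR improves as sqrt(n).
    """
    signal = S * n
    noise = math.sqrt(n * (S + B + D + RN ** 2))
    return signal / noise

# Invented example: 500 e- object, 300 e- sky, 10 e- dark, 5 e- read noise
print(round(stack_snr(500, 300, 10, 5, 1), 1))   # single sub
print(round(stack_snr(500, 300, 10, 5, 20), 1))  # 20 subs: ~sqrt(20)x better
```

Note the read noise enters squared and per-sub, which is why many short subs fare worse than fewer long ones under low sky background.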



#4 Gucky

Gucky

    Messenger

  • *****
  • Posts: 429
  • Joined: 18 May 2015
  • Loc: Switzerland

Posted 27 May 2018 - 09:17 AM

Here's a simplified formula for calculating the signal to noise ratio in an image:

 

[formula16.png: image of the SNR formula]

 

where:

    S = total nebula signal
    B = total background signal
    D = dark current
    RN = read noise from bias frame
    n = number of sub-exposures

 

Check out this website with a number of useful formulas for the astrophotographer. Many of them can be determined using the simple online calculators:

 

http://www.wilmslowa...ormulae.htm#SNR

The only variable I know how to compute is the number of subframes...  scratchhead2.gif


  • MPT, Mike7Mak, Synapsno and 1 other like this

#5 ks__observer

ks__observer

    Messenger

  • *****
  • Posts: 470
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 28 May 2018 - 06:51 PM

Re figuring out numbers to plug into the formula, I believe this is how it is done:

1. Take the pixel ADU from the processing program.

2. Convert it to electrons by using ADU/e for given gain.

3. Subtract bias e.

4. Get background signal by looking at dark spot in picture.

5. Get dark current and read noise by specs.

 

In the end, I think the formula is more theoretical, to show that you need to reduce noise sources.

 

Re Poisson v Gaussian:

I believe the SNR formula presumes all sources Poisson except read noise.



#6 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 21202
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 28 May 2018 - 07:47 PM

Re figuring out numbers to plug into formula, I believe this is how it is done:

1. Take the pixel ADU from the processing program.

2. Convert it to electrons by using ADU/e for given gain.

3. Subtract bias e.

4. Get background signal by looking at dark spot in picture.

5. Get dark current and read noise by specs.

 

In the end, I think the formula is more theoretical, to show that you need to reduce noise sources.

 

Re Poisson v Gaussian:

I believe the SNR formula presumes all sources Poisson except read noise.

You would also want to account for any additional bit depth scaling. If you have 12-bit data scaled to 16-bit, then between #1 and #2, you would want to rescale back to 12-bit before converting to electrons (in this case, divide by 16). 

 

This said, if the ultimate goal is to simply choose the best channel as a reference for linear fit or linear alignment, the above complexity is totally unnecessary. Every channel will most likely be coming from the same camera, same gain, so just using standard deviation in ADU is good enough. 
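For the cases where you do want electrons, steps 1-3 plus the bit-depth rescale can be sketched like this (the gain and bias values are made-up examples, not any particular camera):

```python
def adu_to_electrons(adu, gain_e_per_adu, bias_adu, native_bits=12, stored_bits=16):
    """Convert a stored pixel value to electrons.

    1. Undo the bit-depth scaling (12-bit data stored as 16-bit is
       multiplied by 16, so divide it back out).
    2. Subtract the bias offset (in native ADU).
    3. Multiply by the gain (e-/ADU).
    """
    scale = 2 ** (stored_bits - native_bits)   # 16 for 12-bit-in-16-bit data
    native_adu = adu / scale
    return (native_adu - bias_adu) * gain_e_per_adu

# Hypothetical camera: gain 1.25 e-/ADU, bias offset 50 ADU at 12 bits
print(adu_to_electrons(12800, 1.25, 50))
```

If the data is already at its native bit depth, pass equal values for the two bit-depth parameters and the rescale becomes a no-op.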


Edited by Jon Rista, 28 May 2018 - 07:49 PM.

  • roofkid and ks__observer like this

#7 roofkid

roofkid

    Viking 1

  • -----
  • topic starter
  • Posts: 517
  • Joined: 13 Jan 2016

Posted 29 May 2018 - 03:50 AM

Thanks for all the responses. Obviously I knew about the actual formula. I wanted to get some opinions about practical application of it. I suppose I was on the right track then.



#8 Shiraz

Shiraz

    Viking 1

  • -----
  • Posts: 565
  • Joined: 15 Oct 2010
  • Loc: South Australia

Posted 29 May 2018 - 05:37 AM

Hi Sven.

 

In general, the bits that matter most are sky background regions, where the lowest SNR can be found, and a noise measure provides info for comparing processing methods. I use the pixel stats function of Nebulosity and select a background sky region where there are no stars, nebulae, gradients or obvious hot pixels. Nebulosity measures stats over a 21x21 pixel region around the cursor to provide a measure of noise (SD). I also measure the mean background sky to get the sky signal (and hence the sky SNR) for testing the effects of sub lengths.

 

When measuring calibration images I use Nebulosity to get SNR from mean/SD in a region free of obvious hot pixels or gradients.

 

SD is not very robust and a bit of care is needed to keep away from outliers (stars and hotpix etc), but the method is generally quick and reliable.

 

If doing calibration image assessments in PI, I select a small preview away from outliers to find SNR from mean/MAD.

 

Cheers Ray
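A sketch of the preview-statistics approach Ray describes, using median/MAD; the 1.4826 factor (my addition, not part of Ray's recipe) scales MAD to an equivalent Gaussian sigma so the result is comparable to a mean/SD figure:

```python
import statistics

def snr_from_patch(pixels):
    """Robust SNR estimate for a small, outlier-free preview.

    MAD (median absolute deviation) resists the stars and hot pixels
    that inflate a plain standard deviation; for Gaussian noise,
    sigma is approximately 1.4826 * MAD.
    """
    med = statistics.median(pixels)
    mad = statistics.median(abs(x - med) for x in pixels)
    sigma = 1.4826 * mad
    return med / sigma if sigma else float("inf")

# Toy patch of background pixels (ADU)
print(round(snr_from_patch([100, 101, 99, 102, 98, 100, 100]), 1))
```

For relative comparisons between calibration frames, the raw mean/MAD ratio works just as well; the scaling only matters if you want an absolute sigma.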


Edited by Shiraz, 29 May 2018 - 04:38 PM.


#9 Eric Benson

Eric Benson

    Sputnik

  • -----
  • Posts: 49
  • Joined: 02 Sep 2009

Posted 29 May 2018 - 07:34 AM

Hi Sven.

 

In general, the bits that matter most are sky background regions where the lowest SNR can be found. I use the pixel stats function of Nebulosity and select a background sky region where there are no stars, nebulae, gradients or obvious hot pixels. Nebulosity measures stats over a 21x21 pixel region around the cursor and I calculate SNR from mean/SD. SD is not very robust and a bit of care is needed to keep away from outliers (stars and hotpix etc), but the method is generally quick and reliable.

 

If doing image comparisons in PI, I select a small dark sky preview region without major outliers and find mean/MAD.

 

Cheers Ray

Hi Ray,

 

If you are only measuring the background level you are simply going to estimate the light pollution / air glow contribution. The darker it is, the lower the signal (but greatly affected by the imaging scale).

 

Sven,

I think the only way to judge the SNR of an image is to find a non-saturated, isolated star and measure the total signal above the background level in an aperture around the star. The background level is usually measured in an annulus around, but separated from, the star aperture. The RMS value of the background signal is used as a proxy for the total noise contribution. The SNR is simply the aperture signal divided by the RMS. MaxIm DL, for example, has an information window that shows these values live as the mouse is tracked across the image. A rule of thumb is that when the SNR gets down near 3, the sources/stars become 'invisible', or indistinguishable from noise. The magnitude of those SNR=3 sources is generally considered the 'limiting' magnitude of the image.

 

If you want the SNR of a patch of nebula, then the average signal above a non-nebula area, divided by the non-nebula area noise, again works, but this needs to be normalized by the aperture area, in which case it would eventually work out to mag/square arcseconds (after calibrating the ADU to magnitudes; again, the MaxIm info window does this for you; I don't know about other programs).

 

HTH,

EB
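Eric's aperture/annulus procedure can be sketched as follows; treating the annulus RMS scaled by the aperture area as the noise term is one common convention, so consider the details illustrative:

```python
import math

def aperture_snr(aperture_pixels, annulus_pixels):
    """Aperture-photometry SNR along the lines Eric describes.

    Star signal = sum inside the aperture, minus the per-pixel
    background (median of the annulus) times the aperture area.
    Noise = RMS scatter of the annulus, scaled to the aperture area.
    """
    n_ap = len(aperture_pixels)
    bkg = sorted(annulus_pixels)[len(annulus_pixels) // 2]  # median background
    signal = sum(aperture_pixels) - bkg * n_ap

    mean_ann = sum(annulus_pixels) / len(annulus_pixels)
    rms = math.sqrt(sum((p - mean_ann) ** 2 for p in annulus_pixels)
                    / len(annulus_pixels))
    noise = rms * math.sqrt(n_ap)  # background scatter over the aperture area
    return signal / noise

# Toy star: 4 aperture pixels over a sky annulus of ~10 ADU
print(round(aperture_snr([20, 30, 20, 10],
                         [9, 10, 11, 10, 10, 10, 9, 11, 10, 10]), 1))
```

Because the annulus is measured, not modeled, its scatter already folds in sky, dark and read noise, which is Eric's point about the RMS serving as a proxy for the total noise.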



#10 Shiraz

Shiraz

    Viking 1

  • -----
  • Posts: 565
  • Joined: 15 Oct 2010
  • Loc: South Australia

Posted 29 May 2018 - 10:33 AM

Hi Ray,

 

If you are only measuring the background level you are simply going to estimate the light pollution / air glow contribution. The darker it is, the lower the signal (but greatly affected by the imaging scale).

 

 

 

HTH,

EB

Hi Eric. Yes, you are right, and my earlier post was poorly written and wrong. I actually use the Nebulosity statistics function primarily to measure noise when looking at dark sky regions. When looking at calibration images, I use the Nebulosity stats to find the SNR as described, and also use PI as described. Will amend the earlier post - thanks. Cheers Ray


Edited by Shiraz, 29 May 2018 - 04:06 PM.


#11 Thirteen

Thirteen

    Gemini

  • *****
  • Posts: 3115
  • Joined: 12 Jul 2013
  • Loc: Milford, Michigan

Posted 29 May 2018 - 12:13 PM

What does everyone think about PI's ContrastBackgroundNoiseRatio script (CBNR)? I use that to quickly evaluate the effect of different stacking algorithms and settings. Wondering if it is keeping me straight or misleading me?

 

 

You quickly get an estimate of signal based on contrasting structure in the image and background noise levels.  Seems to make sense to me.  


Edited by Thirteen, 29 May 2018 - 12:14 PM.

  • roofkid likes this

#12 555aaa

555aaa

    Apollo

  • *****
  • Posts: 1006
  • Joined: 09 Aug 2016

Posted 30 May 2018 - 02:39 AM

The signal to noise ratio only makes sense when you pick a specific thing as the signal. In photometry, every star in your image has a different signal to noise ratio. In aperture photometry, you sum up the signal in a circular (or oval) aperture for a given star, and then compare that to the sum of the signal in the annular ring, which you assume represents the sky flux in the area where the star is. Those two numbers give you the signal to noise ratio (you take a square root for the sky noise term). The sky noise also has the thermal and read noise in it, because it's an observed value, not a calculated one.

 

http://spiff.rit.edu...gnal_illus.html

 

Bright stars will have a very high SNR, but faint stars will have a low SNR; as mentioned above, when SNR is about 3 or 4, you are at the limit of detection, and it is interesting to go find that star and figure out what your limiting magnitude is. I don't use PI, so I don't know what these other things are, sorry.


  • roofkid likes this

#13 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 21202
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 31 May 2018 - 01:27 PM

What does everyone think about PIs ContrastBackgroundNoiseRatio script (CBNR)?    I use that to quickly evaluate the effect of different stacking algorithms and settings.    Wondering if it is keeping me straight or misleading me?

 

 

You quickly get an estimate of signal based on contrasting structure in the image and background noise levels.  Seems to make sense to me.  

Hmm. How is the contrast determined? As in, what signals are used to calculate the contrast? And, I guess, what method is used to determine contrast (i.e. mean(AreaA) - mean(AreaB))? Contrast would differ depending on which areas of signal in the image you used as references... And is it a noise to contrast ratio between two areas of brighter signal in the image? Or is it just taking the ratio of one area of signal in the image to the background noise (which...if that was the case, then you just have SNR for that particular region of the image....)


Edited by Jon Rista, 31 May 2018 - 01:28 PM.


#14 Thirteen

Thirteen

    Gemini

  • *****
  • Posts: 3115
  • Joined: 12 Jul 2013
  • Loc: Milford, Michigan

Posted 03 June 2018 - 11:27 PM

I sure would like to know the answers to these questions, but in typical PI form there isn't any documentation.   What I can tell you is that you define a background percentile for its estimation of background noise.   What I don't know is how it derives the "Contrast" term.   I do see that it operates in an expected manner when I compare, say two stacks of differing integration.  But, I was hoping someone here may have more insight.  


  • roofkid likes this

#15 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 21202
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 June 2018 - 02:31 AM

I sure would like to know the answers to these questions, but in typical PI form there isn't any documentation.   What I can tell you is that you define a background percentile for its estimation of background noise.   What I don't know is how it derives the "Contrast" term.   I do see that it operates in an expected manner when I compare, say two stacks of differing integration.  But, I was hoping someone here may have more insight.  

Looking at the source code for the script, I found this:

this.resolution = 65535;
// ...
function contrastOfImage(image) {
   return parameters.resolution * image.avgDev();
}

So, apparently, "contrast" is defined as 65535 (I did not find anywhere else in the code where resolution was set, other than the one line) times the average deviation of the image... I am not really sure if that is a valid definition of contrast, as usually contrast is defined as the difference in signal levels between two defined areas (i.e. the brightest area of the image and the darkest area, which would be the most common definition; in an astro image, brightest might not work since you could have clipped stars and stuck pixels and whatnot...and you would probably want to use an actual object signal region instead). But, I am also not really sure exactly what this script is doing, as there are quite a few other functions in there, and it seems to spin through a few iterative cycles calculating other factors...



#16 Thirteen

Thirteen

    Gemini

  • *****
  • Posts: 3115
  • Joined: 12 Jul 2013
  • Loc: Milford, Michigan

Posted 04 June 2018 - 08:41 AM

What is avgDev(), and how would it apply here? That seems to be the variable that defines the contrast parameter.

 

Seems to me it is just defining contrast as the average deviation from the mean in the image and then CBNR is just that term divided by the background noise level.  That makes sense if you were trying to find a way to robustly get a full image value without user intervention to try and define signal areas. 
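If Thirteen's reading is right, a CBNR-style metric would look roughly like the following speculative reconstruction (PixInsight's actual script may well differ; the percentile handling here is invented):

```python
import statistics

def cbnr_guess(pixels, background_percentile=0.25):
    """Speculative reconstruction of a CBNR-style metric.

    "Contrast" = average absolute deviation of the whole image about
    its median (what an avgDev()-style statistic computes);
    "background noise" = standard deviation of the darkest fraction
    of pixels, selected by a user-chosen percentile.
    """
    med = statistics.median(pixels)
    contrast = sum(abs(p - med) for p in pixels) / len(pixels)

    darkest = sorted(pixels)[: max(2, int(len(pixels) * background_percentile))]
    noise = statistics.pstdev(darkest)
    return contrast / noise if noise else float("inf")

# Toy image: mostly flat background with a couple of bright pixels
print(round(cbnr_guess([1, 2, 1, 2, 1, 2, 10, 12], 0.5), 2))
```

Defined this way, the metric needs no user-selected signal region, which fits Thirteen's guess about why it was built that way, but it is still a whole-image average rather than an SNR for any particular object.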

 

I feel like a fool just speculating on this.  It would be nice to see some documentation.  


Edited by Thirteen, 04 June 2018 - 08:49 AM.


#17 555aaa

555aaa

    Apollo

  • *****
  • Posts: 1006
  • Joined: 09 Aug 2016

Posted 04 June 2018 - 09:21 AM

What is the SNR of a picture of a tree? You have no way to know without having a better picture to compare to. You can't calculate the signal to noise ratio of a random image unless you have prior knowledge of what the signal is. I think you are much better off in astronomy to just look at the relationship of SNR to the magnitude of the stars in your image. You know what the ideal image of a star is, and once you plate solve, you can go find the star in a database and look up the published magnitude. The magnitude of the stars whose SNR is about 10 is, to me, a good measure.

The OP's question is good. It's not simple and I think it gets into information theory.

Contrast isn't SNR, because you can just apply a function to any image and make it more contrasty. That doesn't add information.

#18 Thirteen

Thirteen

    Gemini

  • *****
  • Posts: 3115
  • Joined: 12 Jul 2013
  • Loc: Milford, Michigan

Posted 04 June 2018 - 09:30 AM

I'm not sure I agree with your tree.    I mean, if all trees were presented on a dark background, then you could define the signal area of the tree as the brighter-than-background areas.  Then you could average that deviation from the background and have a representation of the signal in the image.  Given multiple images of that same tree, I believe you could define which one has better signal to noise (though maybe not through the traditional definition).   Perhaps we are hung up on the semantics of "contrast" as it is defined here.  

 

I agree with you that plate solving and a database lookup to define SNR would be a good way to do it. I also agree the OP's question is good. The problem I have is how to define the SNR quickly, without user intervention, for evaluative purposes.



