In general, the regions that matter most are the sky-background areas, where the lowest SNR is found. I use the pixel stats function of Nebulosity and select a background sky region with no stars, nebulae, gradients or obvious hot pixels. Nebulosity measures stats over a 21x21 pixel region around the cursor, and I calculate SNR as mean/SD. SD is not very robust, so a bit of care is needed to stay away from outliers (stars, hot pixels etc.), but the method is generally quick and reliable.
If doing image comparisons in PI (PixInsight), I select a small, dark sky preview region without major outliers and compute mean/MAD.
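For what it's worth, here is a rough NumPy sketch of the kind of patch statistics described above: mean/SD over a 21x21 region with a crude outlier clip, plus mean/MAD. This is not Nebulosity's or PixInsight's actual code; the `patch_snr` function and the clipping threshold are just my own illustration.

```python
import numpy as np

def patch_snr(image, x, y, half=10, clip_sigma=3.0):
    """Rough background-patch SNR estimates around pixel (x, y).

    Returns (mean/SD, mean/MAD) for a (2*half+1) square patch,
    after a simple clip to suppress stars and hot pixels.
    """
    patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(float).ravel()

    # Crude outlier rejection: drop pixels more than clip_sigma (MAD-based) sigmas
    # from the median, so stray stars/hot pixels don't inflate the SD.
    med = np.median(patch)
    mad = np.median(np.abs(patch - med))
    if mad > 0:
        patch = patch[np.abs(patch - med) < clip_sigma * 1.4826 * mad]

    mean = patch.mean()
    sd = patch.std(ddof=1)
    mad = np.median(np.abs(patch - np.median(patch)))

    snr_sd = mean / sd if sd > 0 else float('inf')
    snr_mad = mean / mad if mad > 0 else float('inf')
    return snr_sd, snr_mad

# Example: a synthetic flat sky of 500 ADU with Gaussian noise of 20 ADU.
rng = np.random.default_rng(0)
sky = rng.normal(500, 20, size=(100, 100))
# mean/SD comes out near 25; mean/MAD is higher, since MAD is about 0.67*sigma
# for Gaussian noise.
print(patch_snr(sky, 50, 50, half=10))
```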
If you are only measuring the background level, you are simply estimating the light pollution / airglow contribution. The darker the sky, the lower that signal (though it is also strongly affected by the image scale).
I think the only way to judge the SNR of an image is to find a non-saturated, isolated star and measure the total signal above the background level in an aperture around the star. The background level is usually measured in an annulus around, but separated from, the star aperture. The RMS value of the background signal is used as a proxy for the total noise contribution. The SNR is then simply the aperture signal divided by that RMS. MaxIm DL, for example, has an information window that shows these values live as the mouse is tracked across the image. A rule of thumb is that when the SNR gets down to around 3, the sources/stars become 'invisible', i.e. indistinguishable from the noise. The magnitude of those SNR=3 sources is generally considered the 'limiting' magnitude of the image.
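Here is a minimal sketch of that aperture/annulus measurement using plain NumPy on a synthetic frame. It is not MaxIm DL's implementation; the `star_snr` function, the radii and the test image are my own assumptions, and the SNR is computed exactly as described above, aperture signal over background RMS.

```python
import numpy as np

def star_snr(image, x, y, r_ap=5, r_in=8, r_out=12):
    """Aperture SNR of a star at (x, y): total signal above background in an
    aperture of radius r_ap, background measured in an annulus r_in..r_out."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x, yy - y)

    aperture = image[r <= r_ap].astype(float)
    annulus = image[(r >= r_in) & (r <= r_out)].astype(float)

    bkg_level = np.median(annulus)        # per-pixel sky estimate
    bkg_rms = np.std(annulus, ddof=1)     # background RMS as the noise proxy

    signal = aperture.sum() - bkg_level * aperture.size  # counts above sky
    return signal / bkg_rms

# Example: 1000-ADU sky with 15 ADU noise, plus a faint Gaussian star.
rng = np.random.default_rng(1)
img = rng.normal(1000, 15, size=(64, 64))
yy, xx = np.indices(img.shape)
img += 50 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))
# Per the rule of thumb above, values near 3 mark the limiting magnitude.
print(star_snr(img, 32, 32))
```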
If you want the SNR of a patch of nebula, then the average signal above a non-nebula area, divided by the noise of that non-nebula area, again works, but this needs to be normalized by the aperture area, in which case it would eventually work out to mag/square arcsecond (after calibrating the ADU to magnitudes; again, the MaxIm DL information window does this for you, but I don't know about other programs).
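And a similarly rough sketch for a nebula patch: the average signal above a nebula-free sky region divided by that region's noise, then normalized by the pixel area and converted to mag/square arcsecond. The `nebula_snr_and_sb` function, the zero point and the pixel scale are made-up illustration values, not something from MaxIm DL.

```python
import numpy as np

def nebula_snr_and_sb(image, neb_mask, sky_mask, pixel_scale=1.0, zero_point=25.0):
    """SNR and surface brightness of a nebula patch.

    neb_mask / sky_mask are boolean masks selecting the nebula and a nearby
    nebula-free sky region. pixel_scale is arcsec/pixel and zero_point is the
    magnitude corresponding to 1 ADU; both are assumed known from calibration.
    """
    neb = image[neb_mask].astype(float)
    sky = image[sky_mask].astype(float)

    sky_level = np.median(sky)
    sky_rms = np.std(sky, ddof=1)

    mean_above_sky = neb.mean() - sky_level   # average nebula signal per pixel
    snr = mean_above_sky / sky_rms            # per-pixel SNR of the patch

    # Normalize by pixel area to get flux per square arcsecond, then magnitudes.
    flux_per_arcsec2 = mean_above_sky / pixel_scale ** 2
    sb_mag = zero_point - 2.5 * np.log10(flux_per_arcsec2)
    return snr, sb_mag

# Example: 200 ADU sky with 10 ADU noise, 40 ADU of nebula on the left half.
rng = np.random.default_rng(2)
img = rng.normal(200, 10, size=(100, 100))
img[:, :50] += 40
neb_mask = np.zeros(img.shape, bool); neb_mask[:, :40] = True
sky_mask = np.zeros(img.shape, bool); sky_mask[:, 60:] = True
print(nebula_snr_and_sb(img, neb_mask, sky_mask, pixel_scale=2.0, zero_point=25.0))
```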