When to use 8 or 12-bit - a small analysis
#1
Posted 16 June 2011 - 08:30 AM
When the noise is high enough (pixel-to-pixel noise of 2 or higher), both 8 and 12-bit provide exactly the same result. The average pixel-value deviation is then determined entirely by the number of frames stacked: if you stacked an infinite number of frames, the averaged pixel-value deviation would simply approach 0. But when there is too little noise in the system, the 8-bit recordings fail to accurately describe the actual values, and a sudden increase in the pixel-value deviation can be seen: at around 0.5 pixel-to-pixel noise the difference between the 8 and 12-bit recordings rapidly increases to about 15%, and at 0.25 noise and lower the error can grow to 1000% or more, depending on the number of frames stacked.
This simulation shows that when you are using 8-bit recordings you need some noise in the system for an accurate representation of the true underlying values (when stacking the frames). It also shows that when there is a lot of noise in the system, it simply doesn't matter whether you use 8 or 12 bits to describe your data. I think it is safe to say that if the pixel-to-pixel noise level is above 2, and the histogram is not over-exposed, you are an idiot to use anything above 8-bit, as it only takes up more space, possibly reduces the maximum attainable framerate, and won't add anything to the image quality.
One thing to note is that it can be difficult to estimate the actual noise level of an image when the noise is low. This is because the estimator also picks up the actual detail in the image. At higher noise levels (and higher magnifications) this detail only makes up a relatively small part of the estimator value, but when the noise is really low (<2), the actual detail makes up a large part of what you are measuring. So if you want a good estimate of the noise level in an image, you should measure it at a location that you know contains very little actual detail (e.g. the black space around a planet).
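As an illustrative sketch (not the actual simulation code behind the plots), the effect can be reproduced in a few lines of numpy: a constant true value plus Gaussian pixel-to-pixel noise is quantized at 8 or 12 bits, many frames are averaged, and the deviation of the stacked result from the true value is measured. The signal level, noise values and frame count below are arbitrary choices.

```python
import numpy as np

def stack_error(true_value, noise_sigma, n_frames, bits,
                full_scale=255.0, n_pixels=20_000):
    """Average n_frames of quantized noisy frames and return the mean
    absolute deviation of the stacked result from the true value."""
    levels = 2 ** bits - 1
    scale = levels / full_scale          # 1.0 for 8-bit, ~16.06 for 12-bit
    acc = np.zeros(n_pixels)
    for _ in range(n_frames):
        frame = true_value + np.random.normal(0.0, noise_sigma, n_pixels)
        # quantize like an ADC: clip to range, round to the nearest level
        q = np.round(np.clip(frame, 0.0, full_scale) * scale) / scale
        acc += q
    stacked = acc / n_frames
    return np.mean(np.abs(stacked - true_value))

for sigma in (4.0, 2.0, 1.0, 0.5, 0.25):
    e8 = stack_error(100.3, sigma, n_frames=400, bits=8)
    e12 = stack_error(100.3, sigma, n_frames=400, bits=12)
    print(f"noise {sigma:>5}: 8-bit error {e8:.4f}, 12-bit error {e12:.4f}")
```

With noise of 2 or more the two bit depths give essentially the same stacking error; well below that, the 8-bit error stops shrinking with more frames because the rounding introduces a bias that stacking cannot average away.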
#3
Posted 16 June 2011 - 08:33 AM
It would not help a single bit to use 12-bit instead of 8-bit recordings for images 32 down to 2. Only images 1 and 0 would benefit from a 12-bit pixel depth, but it doesn't happen often that you see those noise levels in planetary recordings.
#4
Posted 16 June 2011 - 09:04 AM
#5
Posted 16 June 2011 - 09:57 AM
For example, if you are imaging Venus under extremely low-noise conditions and you use 12-bit recordings, you could push your recordings to show very small brightness variations in the atmosphere of Venus. This would not be possible with noise-free 8-bit recordings. If you stacked enough 8-bit frames that had 'some' noise in them, then it would be possible again.
Another example is that you might want to capture details inside the shadow of craters on the moon. Unless you severely over-expose your images, this would not be possible with 8-bit recordings if the image contained no noise. In this case you would need 12-bit recordings to bring out the details.
My post is basically meant to show where 'extremely low noise' begins, and thus when you should switch from 8-bit to 12-bit recordings. But perhaps more importantly, it is to show that using 12-bit recordings when there is 'enough' noise in the system makes absolutely no sense at all.
In fact, it can be a bad thing if you insist on imaging at 12-bit all the time. It would not negatively (or positively!) impact the image quality, but it might for example reduce the maximum number of frames per second that you can achieve. 12-bit recordings also eat up hard-disk space much faster (usually twice as fast, since they are stored inside 16-bit file formats).
#6
Posted 16 June 2011 - 10:31 AM
I will have to read this a few times to understand why 8-bit recordings need some noise.
#7
Posted 16 June 2011 - 10:49 AM
http://www.qsimaging.../ccd_noise.html
I cheated by assuming there was a single source of normally distributed random noise. Otherwise it would become a bit too complicated I think.
But the whole idea of the story still stands either way: you need a bit of random noise for 8-bit recordings, and using 12-bit recordings when there is a lot of random noise makes no sense.
Ideally you would want the noise estimator to be insensitive to the fixed (pattern) noise in the image, for example by correcting the image with a proper master dark. Otherwise it would falsely overestimate the level of random noise in the image.
#8
Posted 16 June 2011 - 10:59 AM
#9
Posted 16 June 2011 - 11:19 AM
Craig Stark also did this analysis and concluded the same as you have. You can read his paper here:
http://www.stark-lab...pthStacking.pdf
For planetary imaging I think you make a good case that higher capture depth may limit the frame rate in such a way that the result is less than expected. Still I would like to see an experiment done of exposing a gray card in 8 & 12 bit mode using a range of gains and exposure time normally used in imaging, and actually measuring the signal to noise for different stack sizes.
Cheers,
Glenn
#10
Posted 16 June 2011 - 11:30 AM
I think it would have been better if I had instead said that you cannot keep gaining bits if you stack an infinite number of extremely low-noise 8-bit frames. In practice those extremely low-noise 8-bit frames are difficult to come by (or in other words: imaging in 12-bit is hardly ever worthwhile).
Edit: Thanks for the link Glenn, I'm rather good at reinventing the wheel sometimes... I should have used the term 'quantization errors', because that is basically what is plotted in the 8-bit graph when the line goes up again: the noise is too low to overcome the quantization errors. Note that the same upward curve is also present in the 12-bit condition, but it is much smaller and starts at a lower noise level (and I kind of forgot to plot it; I will make a new plot and post it here).
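For reference, the standard quantization-noise figures behind this (with quantization step q, the RMS quantization error is q/sqrt(12)), expressed on the 0-255 scale used above:

```latex
\sigma_q = \frac{q}{\sqrt{12}}, \qquad
\sigma_{q,\,8\text{-bit}} = \frac{1}{\sqrt{12}} \approx 0.29, \qquad
\sigma_{q,\,12\text{-bit}} = \frac{1/16}{\sqrt{12}} \approx 0.018
\quad \text{(ADU on the 0--255 scale)}
```

So the 8-bit quantization noise is about 0.3 ADU and the 12-bit equivalent roughly 16 times smaller, which is consistent with the 8-bit curve bending upwards once the real random noise drops well below 1.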
#11
Posted 16 June 2011 - 11:45 AM
So image Saturn at 8 bit, and Jupiter at 12 bit, or just 8 bit all the time?
#12
Posted 16 June 2011 - 12:00 PM
Huum?
So image Saturn at 8 bit, and Jupiter at 12 bit, or just 8 bit all the time?
I use 8 bit pretty much all the time for planets. Occasionally in nearly perfect seeing I'll switch to 12-bit. Anthony Wesley advocates the use of 12-bit for getting better S/N in single-frames for scientific purposes (i.e., the light curve of the fireball that he and Chris Go captured last year). Given the rarity of such events, I'm not filling up hard drives with 12-bit data "just in case". After all, if you catch a fireball it's going to be in S&T, Spaceweather, and a dozen other places regardless of whether it's 8 or 12 bit data. What's worse is that you might run out of hard drive space capturing 12-bit and miss the fireball altogether! Hahaha.
Seriously though, if we go back to this thread from last fall, you'll see that even in pretty good seeing, the differences between 8 and 12 bit are negligible at best.
8 v 12 bit Jupters
Note that I still haven't mentioned which of the images at the end is 8 and which is 12.
For those who aren't energetic enough to click through, here's my only "conclusion" from the time I spent on this:
from the realistic perspective of whether 12-bit capturing is worth the effort, I think "real world" comparisons like this are fairly instructive. I wouldn't go buying a new hard drive, for example, to allow me to capture in 12-bit mode. I just don't see enough difference. For imagers who want to make sure they capture every last bit of detail in perfect seeing, switching to 12-bit will probably be worthwhile... but for us schmucks in average seeing, it's probably not worth the extra disk space to capture in 12-bit routinely.
regards,
Wayne
#13
Posted 16 June 2011 - 12:21 PM
#14
Posted 16 June 2011 - 01:37 PM
#15
Posted 16 June 2011 - 07:03 PM
#16
Posted 17 June 2011 - 01:40 AM
Interesting discussion, thanks to Emil for starting this !
I just read the paper by Craig Stark; most of his points seem logical, but there is one thing he seems to have overlooked.
Although the images he shows at different bit depths do look a lot like each other, they might be more different than we can currently see. If we only stacked images and did no processing (sharpening) afterwards, you would not notice the differences. But if you are going to use - for instance RegiStax - to sharpen your image, the differences will suddenly become rather visible. Sharpening images after stacking amplifies the minor differences between adjacent pixel intensities. If those differences are more gradual, the sharpening will give nicer results; that's why larger stacks (if your images are at least of a decent quality) tend to allow more subtle sharpening.
I agree with Glenn that a proper test using controlled conditions (an artificial star or so) would be the best way forward. Otherwise we might be starting to create another "rule of thumb" without actually testing it in practice.
When I started the astronomy software adventure that led to RegiStax, the rule of thumb was that stacking fewer images would always lead to better results than stacking many images.
This "rule" was not founded properly and mainly based on the fact that a stacked image of less frames LOOKS sharper than a stacked image of many more frames. The latter tends to look often more fuzzy. But when you want to use sharpening, the latter image will allow you to sharpen subtle and show often far more details than the smaller stack.
In presentations I have given in the past I used data from several good imagers (I am not shooting any frames myself) to demonstrate the above.
Hope someone with both an 8-bit and a 12-bit CCD can do some real experimenting.
cheers
Cor
--
#17
Posted 17 June 2011 - 03:12 AM
According to the specifications of my Basler Ace camera, it has a single 12-bit A/D converter, and the only difference between 12 and 8 bit is that the 4 least significant bits are discarded when you set the camera to 8 bit.
So if you have made 12-bit recordings, you already have the perfect test set to see the differences between 8 and 12 bit: simply discard the 4 least significant bits, and you end up with what the camera would have sent had it been operating in 8-bit mode. So it is in fact extremely easy to simulate 8-bit data coming out of a 12-bit camera. And the nice thing is that this is as close to perfectly controlled conditions as you can get, because the only thing that differs is the actual bit depth of the recording; everything else remains the same.
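In numpy that truncation is a one-liner (an illustrative sketch with stand-in data, not actual camera capture code):

```python
import numpy as np

# frame12: a 12-bit frame stored in a 16-bit array, values 0..4095
frame12 = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)  # stand-in data

# Keep only the 8 most significant bits, exactly as the camera does in 8-bit mode
frame8 = (frame12 >> 4).astype(np.uint8)   # values 0..255
```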
And this is what I did in a beta version of AutoStakkert, both on rather noisy 12-bit recordings of Jupiter (pixel-to-pixel noise of about 16 on the scale of my Saturn images above) and on high-quality 12-bit data of the sun (noise of about 2). The results were - a bit to my surprise - that both stacks were the same after extreme sharpening using crude sharpening methods (or using the wavelets in RegiStax). I started out with 12-bit data and discarded the 4 least significant bits, just as my 12-bit camera would have done when operating in 8-bit mode. Both versions showed the same amount of noise in the background and in the lighter parts of the images, and the same detail was present. After these initial experiments I turned to a more theoretical approach in this thread, to see why my results turned out like this.
I don't see why we can't come to a rule of thumb on this IF we can more or less accurately estimate the amount of noise in the image. Perhaps the pixel-to-pixel noise is not the best method for this - although for higher noise levels it works really well - but instead we should focus on a small part of a dark background to get useful information on the amount of random noise in the image (for example, see how much the pixels 'flicker' between subsequent frames).
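One simple way to measure that flicker (a sketch, assuming the random noise is the same from frame to frame): subtract two consecutive frames over a dark patch, so the static scene and any fixed-pattern noise cancel, then divide the standard deviation of the difference by sqrt(2) to get the per-frame random noise.

```python
import numpy as np

def random_noise_estimate(frame_a, frame_b, region):
    """Estimate the per-frame random noise from two consecutive frames.

    region is a (slice, slice) pair selecting a patch with little real detail,
    e.g. dark sky next to the planet. Differencing cancels the static scene
    and fixed-pattern noise; the remaining standard deviation is sqrt(2)
    times the single-frame random noise.
    """
    diff = frame_a[region].astype(np.float64) - frame_b[region].astype(np.float64)
    return diff.std() / np.sqrt(2)

# usage sketch (hypothetical names):
#   patch = (slice(0, 50), slice(0, 50))
#   sigma = random_noise_estimate(frames[0], frames[1], patch)
```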
I will try to post some raw stacks of my data (the 12-bit and 8-bit versions) within 24 hours. They not only look the same before sharpening, but even after excessive sharpening they contain the same amount of noise and detail.
#18
Posted 17 June 2011 - 05:51 AM
Can a 12-bit camera have an advantage in a situation where the images are very underexposed, and subsequently stretched? For instance, for Saturn I can only fill 20% of the histogram at 15 fps, and I do a histogram stretch on the stacked result. Could a 12-bit capture be further stretched without introducing banding? Note that I have not had problems with banding with my 8-bit captures so far.
#19
Posted 17 June 2011 - 06:04 AM
#20
Posted 17 June 2011 - 06:14 AM
So it still depends on how much noise you have in your image, but if you can't get the brightness above 20%, I guess you have pretty much used all the gain available and thus have a very noisy image.
If you used very little gain and had a severely underexposed image (say you only use values 0-255, as if it were an 8-bit recording), then you would still have values 256-4095 available to capture some kind of monstrous impact event. That's not possible in an 8-bit recording.
#21
Posted 17 June 2011 - 05:49 PM
The estimated noise levels were between 2.6 standard deviations in dark space and around 8.6 stdev for Jupiter itself. These are on a scale from 0 to 255, where 255 is the maximum intensity level of the image and 0 the minimum.
I then transformed the 12-bit data to 4, 6, 8, or 10 bits, and I also used the full 12 bits (only the most significant bits were kept, basically just like my Basler Ace camera operates in 8-bit mode: it discards the 4 least significant bits coming out of the A/D converter). Then I made stacks of 100, 400 and 1600 frames for each of those bit depths, and summarized the results in the following three images:
http://www.astrokraa...100_jupiter.jpg
http://www.astrokraa...400_jupiter.jpg
http://www.astrokraa...600_jupiter.jpg
----------
I performed a similar analysis on 12-bit data from a recording of the moon. The estimated noise levels were between 0.2 standard deviations in dark space and around 1.8 stdev for the surface of the moon. Again on a scale from 0 to 255.
http://www.astrokraa...h/0020_Moon.jpg
Now the image doesn't look as good in 8-bit mode compared to 10 or 12 bit. You can see some quantization errors around the terminator in 8 bit and below.
----------
Finally one more analysis on 12-bit data from a recording of the sun. The estimated noise levels were between 0.6 standard deviations in dark space and around 3.0 stdev for the surface of the sun. Again on a scale from 0 to 255.
20 frames stacked:
http://www.astrokraa...th/0020_sun.gif
200 frames stacked:
http://www.astrokraa...th/0200_sun.gif
The images of the Moon and Sun were stacked using multiple alignment points, while the Jupiter recording used just a single alignment point.
----------
As a final note, a small animation showing that some noise is required to make sure that you don't end up with quantization errors after stacking. Usually this is not something you need to worry about, but in the case of the 8-bit frames of the moon near the terminator, for instance, stacking more 8-bit frames would not solve the quantization error. Adding a bit of extra noise (before A/D conversion!) and stacking more frames would, or better yet, switching to 12-bit recordings.
http://www.astrokraa...uantization.gif
400 frames stacked; the 12-bit data was converted to 4-bit data(!) before stacking. The extra noise was added before this conversion to partially overcome the quantization errors.
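The idea behind the animation can be sketched in a few lines (illustrative only, not the code that generated the animation): a smooth brightness ramp is quantized to 4 bits with and without noise added before quantization, and 400 such frames are averaged.

```python
import numpy as np

ramp = np.linspace(10.0, 20.0, 1000)      # smooth brightness gradient on the 0-255 scale
step = 255.0 / 15                          # 4-bit quantization step

def stack(noise_sigma, n_frames=400):
    acc = np.zeros_like(ramp)
    for _ in range(n_frames):
        frame = ramp + np.random.normal(0.0, noise_sigma, ramp.shape)  # noise before the "ADC"
        acc += np.round(frame / step) * step                           # quantize to 4 bits
    return acc / n_frames

banded   = stack(noise_sigma=0.0)   # no noise: every frame lands on the same 4-bit level
dithered = stack(noise_sigma=8.0)   # noise dithers the ramp before quantization

print(np.abs(banded - ramp).max(), np.abs(dithered - ramp).max())
```

The noise-free stack stays stuck on the 4-bit levels no matter how many frames are added, while the dithered stack converges back towards the smooth ramp.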
Edit: I removed the link to the raw stacks for the Moon and Jupiter recordings. A new link can be found a couple of messages below.
#22
Posted 17 June 2011 - 06:39 PM
The working dynamic range of a CCD camera is a function of the well depth of the individual photosites divided by their read noise. Given the small pixel size of 5.6 microns for the 618 chip, I rather doubt the 12 bits provided are truly a measure of how many discrete gray-level steps can be faithfully represented. It probably functions more like a 10-bit camera in 12-bit mode. Unfortunately Sony provides little in the way of the specs one looks for in a scientific application - no slant on Sony - their market was not aimed at planetary imaging!
Edit:
Looking further I got the following from the PGR Flea3 Technical Manual:
Full Well Depth 23035.06 e- at zero gain
Read Noise 38.74 e- at zero gain
Taking this and calculating
Dynamic range = 23035.06 / 38.74 ≈ 595!! Just over 9 bits!
Also, PGR lists the signal-to-noise ratio for the Flea3 as 65 dB. This would fall somewhere between 10 and 11 bits. So which is right? I have no idea, but clearly when we operate in 12-bit mode with the 618 we are not getting 12 bits in real terms of bit depth.
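For reference, the two figures come from the standard conversions:

```latex
\mathrm{DR_{bits}} = \log_2\!\frac{\text{full well}}{\text{read noise}}
                   = \log_2\!\frac{23035}{38.74} \approx 9.2, \qquad
\mathrm{DR_{dB}}   = 20\log_{10}\!\frac{23035}{38.74} \approx 55.5\ \mathrm{dB}
```

The quoted 65 dB corresponds to a ratio of 10^(65/20) ≈ 1780:1, or about 10.8 bits, which is where "between 10 and 11 bits" comes from.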
Regards,
Glenn
#23
Posted 18 June 2011 - 01:34 AM
You could try counting the number of different gray levels you get when recording in 12-bit mode. Ideally you would need a target ranging from pitch black to overexposed (saturated).
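Counting them is straightforward in numpy (illustrative sketch; the frames array below is stand-in data, not real captures):

```python
import numpy as np

# frames: a stack of raw 12-bit frames covering black through saturation
frames = np.random.randint(0, 4096, size=(50, 480, 640), dtype=np.uint16)  # stand-in data

n_levels = np.unique(frames).size
print(f"distinct gray levels used: {n_levels} of 4096 possible")
```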
Cor
#24
Posted 18 June 2011 - 02:00 AM
A fantastic continuation of the comparative analysis of bit depths. Even going from 6-bit to 10-bit, the difference is a good deal less than I would have thought, but I can see where 10 or 12-bit could have some advantage for lunar and solar imaging in particular.
Very interesting. Thanks again Emil.
#25
Posted 18 June 2011 - 07:32 AM