Fixed Pattern Noise getting worse and worse


#51 17.5Dob

    Voyager 1
  • Posts: 10,133
  • Joined: 21 Mar 2013
  • Loc: Colorado,USA

Posted 15 December 2018 - 04:18 PM

Joe, the ASI 1600 is an FPN machine!  You are pretty much GUARANTEED to get fixed pattern noise if you don't dither frequently and aggressively.  FPN and the ASI 1600 have been discussed ad nauseam here on the forum for the last two years.  You must dither aggressively and frequently with this camera.  One reason is that this camera has ridiculously low thermal noise and read noise.  This is a good thing!  But unlike your DSLRs, which "swamp" the building blocks of FPN with deeper exposure and thermal noise, the ASI 1600 requires active management of collection techniques (i.e., dithering) to prevent the issues you are having.

 

 

 ... movements of the scope are not dithering. If the scope moved enough to "dither" you would have stars trailing all over the place in the image....

 

 

... the drift of the stars across the frame due to imperfect polar alignment or differential flexure, which is also often called "dithering", is not only most definitely NOT dithering...it is the direct cause of the issue the OP is having: correlated noise! (Walking noise, raining noise, etc.)

 

...It IS the drift that ultimately allows the FPN to correlate. Once the frames are registered, the stars stop drifting...and now the pattern drifts. When stacked, it is the slight but consistent changes in the position of the pattern that ultimately "correlates" in the stack, giving rise to the streaking in the noise. 

 

...The only way to dither such that it can help you randomize the FPN .... is to actually dither, using a program like PHD or a direct mount dither, and dither by 5-10 pixels every few frames for short exposures (never less often than every 3 frames, more often if the frames are longer), and dither RANDOMLY (do not use any kind of pattern dither, like box, spiral, etc.)

^ ^ +3
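
To make the "5-10 pixels, random" advice concrete, here is a minimal Python sketch of the kind of offset generator being described. It is purely illustrative and not taken from PHD or any capture program:

```python
import random

def random_dither_offset(min_px=5, max_px=10):
    """One random dither offset (in imager pixels) per axis.

    Magnitude is drawn uniformly from [min_px, max_px] and the sign is
    random, so successive offsets never trace a box or spiral pattern.
    """
    dx = random.uniform(min_px, max_px) * random.choice((-1, 1))
    dy = random.uniform(min_px, max_px) * random.choice((-1, 1))
    return dx, dy

# Example: offsets for a 10-frame sequence, dithering every frame.
print([random_dither_offset() for _ in range(10)])
```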


Edited by 17.5Dob, 15 December 2018 - 04:19 PM.


#52 TelescopeGreg

    Fly Me to the Moon
  • Posts: 5,380
  • Joined: 16 Jul 2018
  • Loc: Auburn, California, USA

Posted 15 December 2018 - 07:32 PM

So is the key issue with "poor-man's dithering" that true dithering movement occurs between capturing images, whereas mount-induced movement occurs all the time?

 

That implies a synchronization between the guiding and imaging subsystems...


Edited by TelescopeGreg, 15 December 2018 - 07:33 PM.


#53 17.5Dob

    Voyager 1
  • Posts: 10,133
  • Joined: 21 Mar 2013
  • Loc: Colorado,USA

Posted 15 December 2018 - 07:46 PM

So is the key issue with "poor-man's dithering" that true dithering movement occurs between capturing images, whereas mount-induced movement occurs all the time?

 

That implies a synchronization between the guiding and imaging subsystems...

"Poor Man's Dithering" is not dithering, it's simply slow drifting which is THE root cause of "Walking Noise/ correlated noise/ etc.

Dithering is large-scale (5-10 pixel) random shifts of the position of your imaging frame that occur between frames, as you guessed. Yes, it requires software, but it's built into almost every image acquisition program available, and if you are using an autoguider to begin with, there is absolutely no reason NOT to use it!

"Dither or Die"



#54 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 15 December 2018 - 09:29 PM

I don't understand why walking noise isn't rejected by stacking algorithms. 

 

 

Let's say I've done a star alignment in PI and now my output frames are all star-aligned. This will show a piece of noise walking down the frame; 0 is the darkest pixel, 9 is the brightest. 9 represents a bright star and 3 represents the noise. There is noise in the upper-left pixel, which then walks to the bottom right like this due to the walking noise:

 

309...

000...

900...

......

 

009...

030...

900...

......

 

009...

000...

903...

......

 

(and let's say this pattern continues for another 47 frames and my sequence was 50 frames long)

 

Couldn't a smart rejection algorithm see the pixel value of 3 as an outlier, since you have a single 3 in a given pixel position and then 49 frames where it's 0?

 

The 9's would be kept because the stars are in every frame.

 

So would more frames allow good rejection of walking noise? I think in my examples where the FPN was really bad I had maybe 30 frames. If I increased that to 100, shouldn't the rejection work better, leaving me FPN-free?


Edited by joelin, 15 December 2018 - 09:45 PM.


#55 TelescopeGreg

    Fly Me to the Moon
  • Posts: 5,380
  • Joined: 16 Jul 2018
  • Loc: Auburn, California, USA

Posted 16 December 2018 - 12:56 AM

"Poor Man's Dithering" is not dithering, it's simply slow drifting which is THE root cause of "Walking Noise/ correlated noise/ etc.

Dithering is large-scale (5-10 pixel) random shifts of the position of your imaging frame that occur between frames, as you guessed. Yes, it requires software, but it's built into almost every image acquisition program available, and if you are using an autoguider to begin with, there is absolutely no reason NOT to use it!

"Dither or Die"

Sorry, I should have called it "so-called Poor Man's Dithering"...  But thank you for helping put this bit of the puzzle in place.

 

I'm running PHD2 on a Raspberry Pi, and separately have a script that triggers the DSLR to snap a picture.  They're not synchronized in any way, and plans are (were?) to replace the triggering script with some sort of intervalometer that can make use of the camera's Bulb setting for longer exposures.  (The camera can't do Bulb from the USB interface.)  This puzzle piece tells me that I either need to disable dithering in PHD2, because it could (will) choose to dither while the shutter is open, or figure out a way to synchronize the two.

 

Or, should I let them run asynchronously?  Is perhaps a small amount of dithering while imaging better overall (star shapes and pattern noise), than not doing it at all?



#56 KenS

    Apollo
  • Posts: 1,103
  • Joined: 10 Jan 2015
  • Loc: Melbourne, Australia

Posted 16 December 2018 - 04:00 AM

PHD2 only dithers when commanded to by another application. In your script you could send the appropriate command to PHD2 and wait for it to respond when it has settled. Easily done with any language combo that can talk to a TCP port. https://github.com/O...EventMonitoring
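
As a minimal sketch of that suggestion, assuming PHD2's default server port 4400 and the dither method documented in the EventMonitoring protocol linked above (the settle values here are illustrative, not PHD2 defaults):

```python
import json
import socket

def phd2_dither(host="localhost", port=4400, amount_px=5.0):
    """Command one dither through PHD2's event-monitoring server and
    block until the mount has settled. Assumes PHD2 is running with its
    server enabled and is actively guiding."""
    cmd = {
        "method": "dither",
        "params": [
            amount_px,          # dither amount, in guide camera pixels
            False,              # False = dither in both axes, not RA-only
            {"pixels": 1.5,     # settled when guide error is under 1.5 px...
             "time": 8,         # ...and stays there for 8 s...
             "timeout": 40},    # ...giving up after 40 s
        ],
        "id": 1,
    }
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(cmd) + "\r\n").encode("utf-8"))
        # PHD2 streams JSON events, one per line; wait for settling to end.
        for line in sock.makefile("r"):
            event = json.loads(line)
            if event.get("Event") == "SettleDone":
                return event.get("Status") == 0   # 0 = settled successfully

if __name__ == "__main__":
    print("settled:", phd2_dither())
```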



#57 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 16 December 2018 - 05:51 AM

I don't understand why walking noise isn't rejected by stacking algorithms. 

 

 

Let's say I've done a star alignment in PI and now my output frames are all star-aligned. This will show a piece of noise walking down the frame; 0 is the darkest pixel, 9 is the brightest. 9 represents a bright star and 3 represents the noise. There is noise in the upper-left pixel, which then walks to the bottom right like this due to the walking noise:

 

309...

000...

900...

......

 

009...

030...

900...

......

 

009...

000...

903...

......

 

(and let's say this pattern continues for another 47 frames and my sequence was 50 frames long)

 

Couldn't a smart rejection algorithm see the pixel value of 3 as an outlier, since you have a single 3 in a given pixel position and then 49 frames where it's 0?

 

The 9's would be kept because the stars are in every frame.

 

So would more frames allow good rejection of walking noise? I think in my examples where the FPN was really bad I had maybe 30 frames. If I increased that to 100, shouldn't the rejection work better, leaving me FPN-free?

A pixel rejection algorithm in a stacking process can only reject pixels that deviate from the mean by a sufficient amount. Now, the process of stacking is actually fairly complex, and there is a lot involved, including the evaluation of each and every sub, shifting and scaling each sub to a normal, as well as the rejection process, which involves its own normalization.

 

Pixel rejection can reject pixels that fall well outside of the normal distribution for the given pixel stack. That is true of some parts of FPN...notably hot and cold pixels, but it is not true for the vast majority of FPN (every single pixel in the sensor contributes to the overall fixed pattern that the sensor produces as a whole...but you might only have 30k-50k hot pixels...that means millions of other pixels are not really that "hot" or "cold", but still exhibit pattern, even if it is spatially random). If most pixels do not deviate enough to be picked up by the rejection algorithm, then it cannot identify them to be rejected. In fact, you do not want them to be rejected, because those pixels all contain both a component of FPN as well as a component of object signal, and neither is large enough to differentiate one from the other (not from a rejection standpoint, anyway).

 

This is why dithering is essential to dealing with FPN. Dithering moves the position of the stars in each sub. Later on, you register those subs to a reference frame, which makes all of the stars align...however that process, when performed on a dithered data set, also randomizes the position of the FPN relative to the frame of the stack. By randomizing the position of the FPN in each frame, you effectively randomize the FPN through time via the act of stacking. Therefore, you do not need pixel rejection to correct this "small scale" or "low level" FPN, which accounts for the vast majority of FPN...you simply need to randomize the position of the FPN in each frame. In more elegant terms...you simply need to impart a temporal component to the FPN, at which point it is effectively no different from temporally random noise, and will "average out" with stacking the same as any other temporally random noise term.
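
A toy NumPy simulation of this argument (synthetic numbers, not real camera data) shows the effect: a weak fixed pattern survives mean stacking of undithered frames untouched, but averages down once each frame is captured at a random offset and shifted back during registration:

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_frames = 64, 50
fpn = rng.normal(0.0, 1.0, (size, size))   # fixed pattern, same every frame

def captured_frame():
    # FPN is tied to the sensor; the random noise changes every frame.
    return fpn + rng.normal(0.0, 5.0, (size, size))

# Undithered: the pattern sits on the same pixels in every sub, so the
# stack keeps it in full while the random noise shrinks by sqrt(N).
undithered = np.mean([captured_frame() for _ in range(n_frames)], axis=0)

# Dithered: the pointing shifts randomly by 5-10 px each frame, so
# registration (undoing the shift) moves the *pattern* to a new place
# in every registered sub, giving the FPN a temporal component.
registered = []
for _ in range(n_frames):
    dy, dx = rng.integers(5, 11, 2) * rng.choice((-1, 1), 2)
    registered.append(np.roll(captured_frame(), (dy, dx), axis=(0, 1)))
dithered = np.mean(registered, axis=0)

print(f"residual std, no dither: {undithered.std():.2f}")  # ~1.2 (FPN intact)
print(f"residual std, dithered:  {dithered.std():.2f}")    # ~0.7 (FPN averaged)
```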



#58 spokeshave

    Mercury-Atlas
  • Posts: 2,658
  • Joined: 08 Apr 2015

Posted 16 December 2018 - 08:30 AM

Another cause that I have not seen mentioned is quantization error due to the 12-bit ADC. I believe that is responsible for a lot of the dark "pitting" we see with this sensor. The ADC "rounds down" the signal from each pixel into the appropriate 12-bit bin. With fixed pattern noise, there will always be pixels that produce a signal that is close to, but does not exceed the threshold for the next ADU and it gets rounded down - reducing its actual value significantly. So, a pixel whose actual value should be very close to that of its neighbor actually gets assigned a much lower value by the ADC. Since we're discussing FPN - which by definition does not change (much) from frame to frame - those artificially dark pixels will appear in the same spot in every image. This gets further complicated by the conversion to 16-bit space. The difference between converted value and true value (quantization error) gets multiplied by 16.
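
As a toy numeric illustration of Tim's truncation argument (the signal values and unity gain below are made up for the example):

```python
import numpy as np

# Two pixels with nearly identical true signals can land one full
# 12-bit ADU apart if one sits just below a bin threshold, and the
# 12-bit -> 16-bit conversion multiplies that step by 16.
gain_e_per_adu = 1.0                         # hypothetical unity gain
true_signal_e = np.array([99.98, 100.01])    # nearly identical pixels (electrons)

adu_12bit = np.floor(true_signal_e / gain_e_per_adu)  # ADC "rounds down"
adu_16bit = adu_12bit * 16                   # scaling into 16-bit output

print(adu_12bit)  # [ 99. 100.] -> a full ADU step from a 0.03 e- difference
print(adu_16bit)  # [1584. 1600.] -> a 16-count "pit" in the 16-bit data
# At higher gain (fewer e- per ADU), the same 1-ADU step corresponds to
# fewer electrons, which is why raising the gain shrinks the error.
```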

One thing you might try is using a larger gain value. The higher the gain, the lower the quantization error.

One thing I have always thought would be useful with these cameras would be a "gain dither". Much like the spatial dither that most of us do, gain dither would make slight changes to the gain for each image, scattered around a central value. If the gain dither is not too large, it would not affect the dark matching but it would reduce quantization error.

I may give something like this a try as an experiment - set up a sequence with multiple events that differ only by the gain setting and have SGP rotate through the different gains.

Tim

#59 dkeller_nc

    Surveyor 1
  • Posts: 1,546
  • Joined: 10 Jul 2016
  • Loc: Central NC

Posted 16 December 2018 - 10:04 AM

I have to refute this every time it comes up. The movements of the scope are not dithering. If the scope moved enough to "dither" you would have stars trailing all over the place in the image. Dithering is not caused by movements of the scope.

Yeah, Jon, I should note that you are correct, and that's not exactly what I meant (i.e., a poor description).  When I looked at joelin's "movie", it was clear that there was what I interpreted as "random" movement between frames, not during the frames, which of course would blur the stars significantly.  In contrast, in his movie of his shots with the ASI1600MM-C, the movement between frames was highly correlated (moving in a straight line, in this case).

 

The only reason I bring this up is that, if there's a newb reading this, it is possible to "manually dither" by moving the telescope between frames with the hand controller, computer ASCOM control, etc.  That said, it's a poor way to do it, since one has to actually know how many pixels you are moving with each pulse, and you'd have to randomize the movements every X frames.  However, if one is using older equipment that will track and guide but will not accept computer commands to allow use of something like PHD2, then it's still possible (and advisable) to dither.



#60 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 December 2018 - 03:36 AM

Another cause that I have not seen mentioned is quantization error due to the 12-bit ADC. I believe that is responsible for a lot of the dark "pitting" we see with this sensor. The ADC "rounds down" the signal from each pixel into the appropriate 12-bit bin. With fixed pattern noise, there will always be pixels that produce a signal that is close to, but does not exceed the threshold for the next ADU and it gets rounded down - reducing its actual value significantly. So, a pixel whose actual value should be very close to that of its neighbor actually gets assigned a much lower value by the ADC. Since we're discussing FPN - which by definition does not change (much) from frame to frame - those artificially dark pixels will appear in the same spot in every image. This gets further complicated by the conversion to 16-bit space. The difference between converted value and true value (quantization error) gets multiplied by 16.

One thing you might try is using a larger gain value. The higher the gain, the lower the quantization error.

One thing I have always thought would be useful with these cameras would be a "gain dither". Much like the spatial dither that most of us do, gain dither would make slight changes to the gain for each image, scattered around a central value. If the gain dither is not too large, it would not affect the dark matching but it would reduce quantization error.

I may give something like this a try as an experiment - set up a sequence with multiple events that differ only by the gain setting and have SGP rotate through the different gains.

Tim

Tim, first: interesting concept, gain dithering... I'd like to see that in action...

 

As for quantization error, I do not believe that is the cause of the pitting that can sometimes be seen with the ASI1600. I think that is actually caused by RTS or burst noise, which does affect a percentage of the pixels of the sensor. This causes some pixels to jump around between 2-4 different discrete levels, each subject to the noise of the signal and read noise. Due to this inconsistency (which is pretty small, at lower gains maybe an ADU at most, at higher gains each discrete level might be a few ADU from each other), you might get lower pixel values in some subs, which can occur enough that they do not entirely average out through the stack. Because of the inconsistency, they are not always corrected properly by darks either. This makes dithering particularly important with the ASI1600 (or QHY163, or Atik Horizon, etc.) so that these RTS-affected pixels do not always occupy the same space within the stack.

 

I also think that when used right, the quantization error will be pretty thoroughly swamped by shot noise. Using lower gains only with LRGB should easily bury read noise, quantization noise and the consequences of the quantization error in much larger shot noise. At f/4, a mere 10 seconds is all I need in a light polluted zone to swamp the read noise by more than 10x. A 30 second sub is usually way more than enough, but simply to avoid stacking a thousand frames per image I often use 45-60 second subs for L and 90-120 for RGB. I've plotted the differences in noise with different swamp factors on numerous occasions, and quantization error becomes a minuscule term compared to photon shot noise itself pretty quickly, even at 10x:

 

[attached image: lM8odrz.jpg - plot of noise vs. swamp factor]
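
For reference, one common way this "swamping" criterion is written is sky signal >= factor x read noise squared; the exact factor and definition vary from post to post, so the sketch below (with hypothetical numbers chosen to land near the 10-second figure above) is one plausible reading, not Jon's exact arithmetic:

```python
def swamp_exposure_s(read_noise_e, sky_e_per_s_per_px, factor=10.0):
    """Shortest sub (seconds) such that sky_rate * t >= factor * RN^2."""
    return factor * read_noise_e ** 2 / sky_e_per_s_per_px

# Hypothetical values: ~3.5 e- read noise at low gain, and a bright
# suburban sky through a fast f/4 system delivering ~12 e-/s/px.
print(swamp_exposure_s(3.5, 12.0))  # ~10.2 s
```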

 

Obviously, achieving the same thing with narrow band is harder...but, with narrow band, stars saturate more slowly so we can use a higher gain which reduces the quantization error to levels low enough that it is swamped even by read noise alone. 


Edited by Jon Rista, 17 December 2018 - 03:39 AM.


#61 spokeshave

    Mercury-Atlas
  • Posts: 2,658
  • Joined: 08 Apr 2015

Posted 17 December 2018 - 09:04 AM

Tim, first: interesting concept, gain dithering... I'd like to see that in action...

 

As for quantization error, I do not believe that is the cause of the pitting that can sometimes be seen with the ASI1600. I think that is actually caused by RTS or burst noise, which does affect a percentage of the pixels of the sensor. This causes some pixels to jump around between 2-4 different discrete levels, each subject to the noise of the signal and read noise. Due to this inconsistency (which is pretty small, at lower gains maybe an ADU at most, at higher gains each discrete level might be a few ADU from each other), you might get lower pixel values in some subs, which can occur enough that they do not entirely average out through the stack. Because of the inconsistency, they are not always corrected properly by darks either. This makes dithering particularly important with the ASI1600 (or QHY163, or Atik Horizon, etc.) so that these RTS-affected pixels do not always occupy the same space within the stack.

 

I also think that when used right, the quantization error will be pretty thoroughly swamped by shot noise. Using lower gains only with LRGB should easily bury read noise, quantization noise and the consequences of the quantization error in much larger shot noise. At f/4, a mere 10 seconds is all I need in a light polluted zone to swamp the read noise by more than 10x. A 30 second sub is usually way more than enough, but simply to avoid stacking a thousand frames per image I often use 45-60 second subs for L and 90-120 for RGB. I've plotted the differences in noise with different swamp factors on numerous occasions, and quantization error becomes a minuscule term compared to photon shot noise itself pretty quickly, even at 10x:

 

 

Obviously, achieving the same thing with narrow band is harder...but, with narrow band, stars saturate more slowly so we can use a higher gain which reduces the quantization error to levels low enough that it is swamped even by read noise alone. 

Jon:

 

RTN should cause bright pixels, not dark ones. Having said that, I don't know what kind of impact RTN has on CMOS devices and why it would be different from CCD devices (I have not seen the same "pitting" artifacts in CCD subs). So if you have any links to reading material on the subject, I would be very interested.

 

I understand that it is a relatively simple matter to swamp read noise, and since quantization noise is typically less than read noise, it should be easily swamped too. But just because the noise terms are "swamped" does not mean that they are no longer there. My theory is that most of the noise terms (read noise, shot noise and Johnson-Nyquist noise) are Poisson in nature - except quantization noise. All of the Poisson noise terms will vary randomly about a mean, and visually such variation is not striking - it just looks like noise. Quantization noise, on the other hand, biases the error to the dark side - assuming that the sensor ADC truncates values to the ADU bin (i.e., always rounds down), which I believe it does. This would cause a non-random excess of darker pixels, something that does become visually striking.

 

Anyway, just a theory.

 

Tim



#62 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 December 2018 - 01:13 PM

Jon:

 

RTN should cause bright pixels, not dark ones. Having said that, I don't know what kind of impact RTN has on CMOS devices and why it would be different from CCD devices (I have not seen the same "pitting" artifacts in CCD subs). So if you have any links to reading material on the subject, I would be very interested.

 

I understand that it is a relatively simple matter to swamp read noise, and since quantization noise is typically less than read noise, it should be easily swamped too. But just because the noise terms are "swamped" does not mean that they are no longer there. My theory is that most of the noise terms (read noise, shot noise and Johnson-Nyquist noise) are Poisson in nature - except quantization noise. All of the Poisson noise terms will vary randomly about a mean, and visually such variation is not striking - it just looks like noise. Quantization noise, on the other hand, biases the error to the dark side - assuming that the sensor ADC truncates values to the ADU bin (i.e., always rounds down), which I believe it does. This would cause a non-random excess of darker pixels, something that does become visually striking.

 

Anyway, just a theory.

 

Tim

Tim, when I was trying to figure out what was causing the effect I was seeing (which is fairly readily visible when observing a heavily stretched video feed from the camera with a dark filter or cap on the camera), I came across "Burst Noise", which is another name for RTS, but perhaps a broader term (I guess it depends on what you read.) Anyway, with many of the examples of burst noise, it seemed that you could have dips as well as spikes, usually between two discrete primary levels, sometimes three or more. I have mostly seen pixels jump between "normal" and "brighter", but I have also observed some pixels that seem to jump to "darker". It is not a particularly pronounced effect, though, and once there is some image signal in place only the most extreme of pixels that exhibit the behavior are usually still observable. 



#63 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 21 December 2018 - 12:10 AM

Well, I finally got to try the ASIAir v1.08 beta version that ZWO provided to me. I got to test it out with my Hyperstar setup. A few thoughts:

 

I took 8 frames of 30 seconds each, twice: once with dither and once without.

 

The first thing I noticed was the dramatic difference in total time required to dither.

 

Dither: 14 minutes

No Dither: 4 minutes

 

That's about 3.5x longer with dithering! Ouch... I set the dither to 5 pixels random and settle for 5 seconds. That seems to take a LONG time... is there any way to reduce this?

 

I stacked the 8 frames using median and no calibration. Here is the result in PI with a STF stretch, left is no dither, right is dither

[attached image: l9U4s8R.png - stacked results with STF stretch; left: no dither, right: dither]

 

 

The dither result looks much better.

 

I then tried an automatic background extraction with a function degree of 4, left is no dither, right is dither

 

[attached image: 2omy1OI.jpg - automatic background extraction results; left: no dither, right: dither]

 

I have to say the dither result was worse than the no-dither one, background-noise-wise. Signal-wise, the dither stack seems to have better signal overall.

 

Seems like it's possible to still get ugly patterned backgrounds with dither.

 

Granted I only stacked 8 frames....any thoughts here?



#64 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 21 December 2018 - 12:14 AM

Lastly, I wanted to show a comparison of

left: 8 frames no dither

middle: 8 frames dither + 8 frames no dither

right: 8 frames dither

 

A very clear progression for how dithering removes the noise!!!!

 

[attached image: gsdIRQy.png - left: 8 frames no dither; middle: 8 frames dither + 8 frames no dither; right: 8 frames dither]



#65 fmeschia

    Surveyor 1
  • Posts: 1,944
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 21 December 2018 - 12:16 AM

Not really sure if I follow... in post #63 you said that dither makes things worse, and in post #64 it makes things better?



#66 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 21 December 2018 - 01:05 AM

Dither makes things better...but if I do an automatic background extraction in PI, then the background seems to look worse in the dither case.



#67 fmeschia

    Surveyor 1
  • Posts: 1,944
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 21 December 2018 - 01:12 AM

 

I then tried an automatic background extraction with a function degree of 4, left is no dither, right is dither

 

[attached image: 2omy1OI.jpg - automatic background extraction results; left: no dither, right: dither]

 

I have to say the dither result was worse than the no-dither one, background-noise-wise. Signal-wise, the dither stack seems to have better signal overall.

 

Seems like it's possible to still get ugly patterned backgrounds with dither.

 

Granted I only stacked 8 frames....any thoughts here?

The right frame has very visible walking noise. There is no way this comes from dithered lights. Something in the processing chain must be amiss.



#68 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 21 December 2018 - 01:37 AM

 

I then tried an automatic background extraction with a function degree of 4, left is no dither, right is dither

 

[attached image: 2omy1OI.jpg - automatic background extraction results; left: no dither, right: dither]

 

I have to say the dither result was worse than the no-dither one, background-noise-wise. Signal-wise, the dither stack seems to have better signal overall.

 

Seems like it's possible to still get ugly patterned backgrounds with dither.

 

Granted I only stacked 8 frames....any thoughts here?

You simply did not dither enough. You need to dither by a large enough amount that the patterns are fully randomized. I generally recommend 5-10 pixel shifts in both axes for each dither, and dithers must be random. Make sure you are not using a pattern dither (such as spiral, box, etc.), as that will simply result in more correlation. 

 

My guess is you do not have all the dithering settings configured right. The 5-10 pixels should be imager pixels, not guider pixels. Most of the time, guider scale and imager scale are different. You may have 2.5"/px in the guider, say 1.3"/px in the imager. That is nearly a factor of 2x. So to dither 5-10 pixels on the imager you need to dither 2-5 pixels in the guider. You need to configure PHD and whatever software you use to run your imaging sequence to dither by enough each frame, or every two frames if you are stacking a couple hundred. 
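
The guider-to-imager conversion Jon describes is just a ratio of image scales, since a dither distance is fixed on the sky. A small sketch, using his example numbers:

```python
def guider_dither_px(imager_px, imager_scale, guider_scale):
    """Guider pixels needed to move `imager_px` pixels on the imager.
    Both scales are in arcsec/pixel."""
    return imager_px * imager_scale / guider_scale

# Jon's example: 1.3"/px imager, 2.5"/px guider.
for px in (5, 10):
    print(f"{px} imager px -> {guider_dither_px(px, 1.3, 2.5):.1f} guider px")
# 5 -> 2.6 and 10 -> 5.2: the "2-5 pixels in the guider" quoted above.
```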

 

Once you dither enough, then you should see the benefits of it. 



#69 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 21 December 2018 - 03:33 AM

I used the ASIAir... the dither was 5 px, which I think is the highest in the app... I required it to return to 2" and settle there for at least 5 seconds... those were the standard settings.



#70 happylimpet

    Fly Me to the Moon
  • Posts: 7,233
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 21 December 2018 - 05:49 AM

Joe, the ASI 1600 is an FPN machine!  You are pretty much GUARANTEED to get fixed pattern noise if you don't dither frequently and aggressively.  FPN and the ASI 1600 have been discussed ad nauseam here on the forum for the last two years.  You must dither aggressively and frequently with this camera.  One reason is that this camera has ridiculously low thermal noise and read noise.  This is a good thing!  But unlike your DSLRs, which "swamp" the building blocks of FPN with deeper exposure and thermal noise, the ASI 1600 requires active management of collection techniques (i.e., dithering) to prevent the issues you are having.

You're right - I've been stressing about the 'pits' in my images for a while, and I thought I was alone... thank goodness no!

 

"Poor Man's Dithering" is not dithering, it's simply slow drifting which is THE root cause of "Walking Noise/ correlated noise/ etc.

Dithering is large-scale (5-10 pixel) random shifts of the position of your imaging frame that occur between frames, as you guessed. Yes, it requires software, but it's built into almost every image acquisition program available, and if you are using an autoguider to begin with, there is absolutely no reason NOT to use it!

"Dither or Die"

I must respectfully disagree, in that drifting is a whole lot better than nothing, though clearly not quite as good as real dithering. I would say it reduces my FPN artifacts by about 90-95%, and I'm happy to call it poor man's dithering.

 

My drift comes from field rotation, as I don't have any way of dithering (I'm using FireCapture, which can't trigger PHD dithering). When I have perfect PA and no FR, I get hot pixel and 'pitting' artifacts, but with a degree of FR these disappear. I used to get bad walking noise before I changed from median stacking my calibration frames to kappa-sigma rejection. I don't get walking noise any more, and have restacked lots of old images (always keep your raw data!).
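
For anyone unfamiliar with the term, here is a generic kappa-sigma rejection sketch; it shows the general technique, not the exact implementation of any particular stacking program:

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.5, iterations=3):
    """Sigma-clipped mean along the stack axis: iteratively mask pixels
    more than kappa standard deviations from the current per-pixel mean,
    then average whatever survives."""
    stack = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(iterations):
        mean = stack.mean(axis=0)
        std = stack.std(axis=0)
        stack = np.ma.masked_where(np.abs(stack - mean) > kappa * std, stack)
    return stack.mean(axis=0)

# Ten values of one pixel across a stack, with one cosmic-ray-like spike:
pixel_stack = np.array([[100.0], [101.0], [99.0], [100.0], [250.0],
                        [100.0], [98.0], [101.0], [100.0], [99.0]])
print(kappa_sigma_stack(pixel_stack))  # ~99.8; a plain mean gives 114.8
```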

 

I think that dithering is often used as a band-aid to cover up poor calibration, and while there's nothing wrong with that, and it is a powerful technique, we shouldn't get things out of perspective.


Edited by happylimpet, 21 December 2018 - 05:53 AM.


#71 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 21 December 2018 - 01:51 PM

how much time is usually lost dithering compared to image capture?

 

A quick test with my ASIAir and Hyperstar showed that 71% of the time was spent dithering... only 29% capturing... a huge loss in dark sky time!

 

My only thought was to increase exposure time so I spend less time dithering..but wouldn't that negate the benefit of Hyperstar which works well with short exposures?

 

Here is a plot showing exposure time on the X axis in seconds and the % of total clock time spent on imaging:

 

[attached image: CvhxFX7.png - exposure time (s) vs. % of total clock time spent imaging]

 

 

 

At 30 second exposures, less than 30% of total clock time is spent on capturing photons!!! 

 

At 60 seconds, this improves to only 45%...slightly better

 

We need to go to 120 seconds to increase this to 62%

 

and finally at 180 seconds we arrive at 70%...more reasonable...

 

From this chart: https://www.cloudyni...d-maybe-qhy163/

 

it seems that long exposures with Hyperstar at f/2 are all but impossible!!!!


Edited by joelin, 21 December 2018 - 01:59 PM.


#72 fmeschia

    Surveyor 1
  • Posts: 1,944
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 21 December 2018 - 01:53 PM

For me, dithering usually takes less than 10 seconds at the end of each exposure. So the “duty cycle” depends on the duration of each exposure. With 300-second subs, the overhead is less than 3.5%.



#73 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 21 December 2018 - 02:32 PM

how much time is usually lost dithering compared to image capture?

 

A quick test with my ASIAir and Hyperstar showed that 71% of the time was spent dithering... only 29% capturing... a huge loss in dark sky time!

 

My only thought was to increase exposure time so I spend less time dithering..but wouldn't that negate the benefit of Hyperstar which works well with short exposures?

 

Here is a plot showing exposure time on the X axis in seconds and the % of total clock time spent on imaging:

 

[attached image: CvhxFX7.png - exposure time (s) vs. % of total clock time spent imaging]

 

 

 

At 30 second exposures, less than 30% of total clock time is spent on capturing photons!!! 

 

At 60 seconds, this improves to only 45%...slightly better

 

We need to go to 120 seconds to increase this to 62%

 

and finally at 180 seconds we arrive at 70%...more reasonable...

 

From this chart: https://www.cloudyni...d-maybe-qhy163/

 

it seems that long exposures with Hyperstar at f/2 are all but impossible!!!!

Is this all assuming you are dithering every frame? What happens if you change it to dither every 2 frames with shorter exposures? Every 3?

 

Also, what are you assuming for time for each dither? My dithers take 5-10 seconds, so on average around 7.5 seconds. That is 25% overhead, not 70% overhead. If your dithers are taking longer than that, then you need to adjust your settings so that you are not trying to settle below the limits imposed by seeing. If your seeing is resulting in a guide RMS of say 0.6 pixels, but you are trying to settle at the default (for PHD) of 0.25 pixels, then you will always require the maximum time (assuming there even is one) of about 60 seconds to settle. So yeah, you'll waste a massive amount of time. Settle at 0.65-0.7 pixels, and your dithers will settle almost immediately.

 

The curve will flatten out a lot once you reduce the settling time and dither sparsely at very short exposures. When you are using very short exposures, you are also stacking a lot of them. At 30 seconds per sub, for a 5 hour integration you are going to stack 600 subs. You are certainly not going to dither 600 times. You would dither maybe 200 times...and if stacking a lot of subs, you might even be able to dither less frequently (depends on how deep your exposures are, how well you calibrate, etc.)

 

If we assume that you dither every 3 of 30s frames with dithering overhead of 7.5 seconds, then you will only spend ~7.5 seconds every 90 seconds of exposure, which is dithering overhead of only 8.3%. If you use 60s frames, you will spend only ~7.5 seconds every 180 seconds, which is dithering overhead of only 4.2%!! This is a FAR cry from your current chart...
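
The arithmetic behind those percentages, parameterized (7.5 s is the average dither-and-settle time stated above):

```python
def dither_overhead(sub_s, frames_per_dither=1, dither_s=7.5):
    """Dither time as a fraction of time spent exposing."""
    return dither_s / (sub_s * frames_per_dither)

for sub_s, n in [(30, 1), (30, 3), (60, 3)]:
    print(f"{sub_s}s subs, dither every {n} frame(s): "
          f"{dither_overhead(sub_s, n):.1%}")
# 30s/every frame: 25.0%; 30s/every 3rd: 8.3%; 60s/every 3rd: 4.2%
```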

 

Don't dither naively. Dither smart! Optimize your dithering settings and minimize the overhead time imposed by it. If ASIAir currently does not offer this level of configuration, then I would put the pressure onto ZWO to make it possible, as short exposures are business as usual with fast scopes and most cameras, not just CMOS cameras (at f/2, you wouldn't be able to expose a KAF-8300 all that long either!!)


Edited by Jon Rista, 21 December 2018 - 02:33 PM.


#74 joelin

    Mercury-Atlas
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 22 December 2018 - 01:27 AM

How many pixels on the imaging camera should I dither by?

The ASIAir allows a setting for the number of dither pixels for the guide camera only. It is 1-5. I can do the math and translate that into a desired number of pixels for the imaging camera.

#75 Jon Rista

    ISS
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 22 December 2018 - 02:11 AM

How many pixels on the imaging camera should I dither by?

The ASIAir allows a setting for the number of dither pixels for the guide camera only. It is 1-5. I can do the math and translate that into a desired number of pixels for the imaging camera.

I would dither at the max for now. It is pretty tough to dither too much...but you can dither too little. It is also likely that your guider image scale is coarser than your imager scale, so 5 pixels on the guider is likely to be more than that many pixels on the imager.

 

I would be curious to know if the ASIAir is doing a pattern dither or a random dither. If it is a pattern dither, then it might not matter how much you dither by; if they are using a spiral or box pattern, it really won't help.



