
Fixed Pattern Noise getting worse and worse

145 replies to this topic

#76 dkeller_nc

dkeller_nc

    Surveyor 1

  • *****
  • Posts: 1,546
  • Joined: 10 Jul 2016
  • Loc: Central NC

Posted 22 December 2018 - 09:47 AM

Joelin - Your signature line doesn't contain your equipment details, so I made some guesses:

 

Guidescope - ZWO 60mm, 280mm FL

GuideCamera - ZWO ASI120MM

OTA:  8" Celestron Edge HD + v3 Hyperstar - FL 400mm (aperture 200mm)

Imaging Camera - ZWO ASI1600MC-C

 

Guidecamera image scale - 2.76"/px

 

Imaging Camera image scale - 1.96"/px

 

You noted that the ASIAir is set to dither by 5 guidecamera pixels.  That means that your imaging camera is getting dithered by 13.8 arcseconds or 7 pixels.  IMO, that's a massive dither with your setup, and it's likely that your mount is taking an inordinately long time to return to the target after such a large displacement.  I would suggest that you time the return-to-target with your mount.  You might find that you just can't use a dither that high with your particular setup.
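The arithmetic behind those numbers can be sketched quickly. A hedged example: the pixel sizes and focal lengths below are the guesses listed above, not confirmed specs for joelin's gear.

```python
# Convert a dither expressed in guide-camera pixels into imaging-camera
# pixels, via each camera's plate scale (arcsec/px = 206.265 * um / mm).
# Equipment values here are the guesses from the post, not confirmed specs.

def plate_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

guide_scale = plate_scale(3.75, 280)   # ZWO ASI120MM on a 280 mm FL guidescope
image_scale = plate_scale(3.8, 400)    # ASI1600 behind an 8" Edge HD + Hyperstar

dither_arcsec = 5 * guide_scale        # ASIAir dither of 5 guide pixels
dither_px = dither_arcsec / image_scale

print(f'{guide_scale:.2f}"/px guide, {image_scale:.2f}"/px imaging')
print(f"dither = {dither_arcsec:.1f} arcsec = {dither_px:.1f} imaging px")
```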

 

And as Jon notes, it's the dither frequency and dither randomness that also make a large difference, not just the dither magnitude.  You could potentially test this by setting up your gear and taking 20 10-15 second frames, dithering between each frame, then blinking them in PixInsight.  If the ASIAir software is set up correctly, you should see the stars in a "blink movie" move around randomly, and by different amounts with each frame.  If you find that the direction is random, but the amount of displacement is the same between each frame, I'd say you could back off your setting to about "3", which would presumably be dithering your imaging camera by about 4 pixels.  That might help with your mount's recovery time.

 

If you find that the dither direction is not random, that's actually a software error - a dither in one direction only really won't help reject artifacts.



#77 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 22 December 2018 - 11:38 AM

Guide scope: 50mm Orion mini guide scope - 162mm focal length
Guide camera : zwo asi 174mm
Main scope Hyperstar c8: 425mm focal length
Imaging camera: zwo asi 1600mc

A 5 pixel dither I think will result in something like 37 arc seconds of jump. That's about 7-8 pixels on the ASI1600. I'll do a blink but I think it is random.

#78 entilza

entilza

    Soyuz

  • *****
  • Posts: 3,826
  • Joined: 06 Oct 2014
  • Loc: Oakville, ON, Canada

Posted 22 December 2018 - 08:15 PM

Dither distance should be about the average star size in your image. Does that make sense?

#79 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 22 December 2018 - 11:18 PM

Shouldn’t it be more like the average blob size in the pattern noise?

This way the noise blobs don't overlap and you can average them out through stacking.

#80 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 23 December 2018 - 03:25 AM

Shouldn’t it be more like the average blob size in the pattern noise?

This way the noise blobs don't overlap and you can average them out through stacking.

Yes...star size really isn't a meaningful factor in determining how much to dither.  Your dithers should be large enough to offset the significant aspects of the pattern enough in each dither period (which may be every 1, 2, possibly 3 frames depending on how many subs you are stacking) such that the pattern is effectively randomized in time through the act of stacking. If your obvious patterns have a scale of 15 pixels, you would want to dither around 15 pixels or so, and not necessarily exactly 15 pixels every time (a little less/more some times), and randomly so that the patterns are distributed in as many positions as possible throughout the stack. 
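A minimal sketch of the policy Jon describes, assuming a ~15 px pattern scale and an illustrative ±30% magnitude jitter (these are not settings from ASIAir, PHD2, or any particular capture program):

```python
import math
import random

# Generate dither offsets roughly the size of the visible noise pattern,
# varied a little in magnitude and fully random in direction, so the
# pattern lands somewhere new with each dither. The 15 px scale and 30%
# jitter are illustrative assumptions, not real software parameters.

def random_dither(pattern_scale_px=15.0, jitter=0.3):
    """Return a (dx, dy) dither offset in imaging-camera pixels."""
    magnitude = pattern_scale_px * random.uniform(1 - jitter, 1 + jitter)
    angle = random.uniform(0, 2 * math.pi)
    return magnitude * math.cos(angle), magnitude * math.sin(angle)

for dx, dy in (random_dither() for _ in range(5)):
    print(f"dx={dx:+6.1f} px  dy={dy:+6.1f} px")
```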



#81 dkeller_nc

dkeller_nc

    Surveyor 1

  • *****
  • Posts: 1,546
  • Joined: 10 Jul 2016
  • Loc: Central NC

Posted 23 December 2018 - 09:29 PM

Joelin - Based on your equipment list:

 

Guidescope/guidecamera image scale - 7.45 arcseconds/pixel

 

Main Imaging Camera - 1.84 arcseconds per pixel

 

If the ASIAir is indeed dithering the setup by 5 guidecamera pixels, that means that your main imaging camera is moving by 37 arcseconds (your statement in post #77 is correct).  However, that means your main imaging camera is moving by 20 pixels, not 7-8.

 

That's a gigantic dither, and with those details it now makes sense why you'd lose 70% of your imaging time to mount settling - with the mount being moved that far off of target, I'd guess it's taking 30 seconds or more to recover the guidestar and smooth out any transients before the next frame will start integrating.


  • 42itous1 and ks__observer like this

#82 ks__observer

ks__observer

    Surveyor 1

  • *****
  • Posts: 1,951
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 23 December 2018 - 11:15 PM

Dithering distance:

5.9 um pix x 5 dither / 3.8 um camera pix = 7.8 camera pix.

Not sure if mentioned, PHD2 has a scale factor to increase movement beyond the 5 pix movement.



#83 dkeller_nc

dkeller_nc

    Surveyor 1

  • *****
  • Posts: 1,546
  • Joined: 10 Jul 2016
  • Loc: Central NC

Posted 23 December 2018 - 11:55 PM

That'd be correct if the focal lengths of the guidescope/main OTA were the same.  In this case, they're not.  The guidescope has a focal length of 162mm, the main OTA has a focal length of 425mm, so 5.9um pix x 5 dither/ 3.8 um camera pix * 425mm/162mm = 20.4 camera pixels.
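That correction as a one-liner (pixel sizes and focal lengths are the values quoted above from joelin's equipment list):

```python
# Guide-camera pixels scale to imaging-camera pixels by the ratio of
# pixel sizes AND the ratio of focal lengths. Values from the thread:
# ASI174 (~5.9 um) on a 162 mm guider, ASI1600 (3.8 um) at 425 mm.

guide_um, imaging_um = 5.9, 3.8
guide_fl, imaging_fl = 162.0, 425.0

dither_guide_px = 5
dither_imaging_px = dither_guide_px * (guide_um / imaging_um) * (imaging_fl / guide_fl)
print(f"{dither_imaging_px:.1f} imaging-camera pixels")  # 20.4
```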



#84 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 24 December 2018 - 01:57 AM

Joelin - Based on your equipment list:

 

Guidescope/guidecamera image scale - 7.45 arcseconds/pixel

 

Main Imaging Camera - 1.84 arcseconds per pixel

 

If the ASIAir is indeed dithering the setup by 5 guidecamera pixels, that means that your main imaging camera is moving by 37 arcseconds (your statement in post #77 is correct).  However, that means your main imaging camera is moving by 20 pixels, not 7-8.

 

That's a gigantic dither, and with those details it now makes sense why you'd lose 70% of your imaging time to mount settling - with the mount being moved that far off of target, I'd guess it's taking 30 seconds or more to recover the guidestar and smooth out any transients before the next frame will start integrating.

It is doubtful that the dither itself is the problem. I've had 20 arcsecond dithers, and it takes PHD2 a few guide periods, 2 seconds each, to bring the mount back to the star. 

 

The biggest time sink with guiding is the settling period...if you try to settle below your guide RMS, then you will usually only settle by chance. When you configure things to settle at the guide RMS or just slightly above, settling only takes a couple of seconds. So even with a 20 arcsecond dither, dithering overhead should still be at or under 10 seconds if dithering is configured properly.



#85 ks__observer

ks__observer

    Surveyor 1

  • *****
  • Posts: 1,951
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 24 December 2018 - 05:40 AM

That'd be correct if the focal lengths of the guidescope/main OTA were the same.  In this case, they're not.  The guidescope has a focal length of 162mm, the main OTA has a focal length of 425mm, so 5.9um pix x 5 dither/ 3.8 um camera pix * 425mm/162mm = 20.4 camera pixels.

 

I think it depends where the guide camera is physically placed compared to the rotation point -- center of the mount.

The closer to the rotation point the more the main camera will move.

Actually, looking at the PHD2 and APT manuals, it looks like the dithering amount is based on a pulse magnitude and not a pixel magnitude.


Edited by nmoushon, 27 December 2018 - 11:02 AM.


#86 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 02:18 AM

So taking a closer look at why dithering takes forever...

The biggest time wasted is that the guide star takes too long to move back within the threshold after the dither. The guide chart will first jump to some high value like 17", then the next pulse brings it down to 9", then 8", then 7.1", then 6.3", 5.5", 4.8", etc. Each successive pulse seems to reduce the gap by a smaller and smaller amount. Yesterday I tried a 35" dither, today I'm doing about 15", and they take about the same amount of time to settle to 4" or 3" (50-70 seconds) because of the exponential decay factor.

So why does it get exponentially slower as it gets closer to the threshold?
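One plausible explanation, sketched: if each guide pulse removes a fixed fraction of the remaining error (the aggressiveness setting), the error decays geometrically, so settling time grows with the logarithm of the starting offset rather than linearly. The 0.5 aggressiveness and 2 s guide period below are illustrative assumptions, not joelin's actual settings.

```python
# Each pulse removes a fixed fraction of the remaining error, so the
# error decays geometrically -- exactly the slower-and-slower convergence
# described in the post. Aggressiveness and guide period are assumptions.

def pulses_to_settle(start_arcsec, threshold_arcsec, aggressiveness=0.5):
    error, pulses = start_arcsec, 0
    while error > threshold_arcsec:
        error *= (1 - aggressiveness)   # each pulse corrects a fixed fraction
        pulses += 1
    return pulses

for start in (35.0, 15.0):
    n = pulses_to_settle(start, 4.0)
    print(f'{start:4.0f}" start -> {n} pulses (~{2 * n} s at a 2 s guide period)')
```

This is why a 35" dither and a 15" dither settle in nearly the same time: more than halving the starting error saves only a pulse or two.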

#87 Der_Pit

Der_Pit

    Mercury-Atlas

  • *****
  • Posts: 2,874
  • Joined: 07 Jul 2018
  • Loc: La Palma

Posted 26 December 2018 - 06:35 AM

First guess would be your settings of aggressiveness and maybe the (weights?) in PPEC mode.



#88 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 01:18 PM

Okay so I finally got around to shooting with dithering. This time I did about 10-15" which is about 6-8 pixels of dither between each frame of 60 seconds long. Sky was 18.6 mag.

 

First, here is the Blink test, looks decent:

https://www.youtube....eature=youtu.be

 

Here is the stacked result (all are shown with a PI STF), no calibration files since I shot all frames at -20C (don't need darks right?). I took about 70 minutes of data, 70 frames.

 

VYXs5ue.jpg

 

Here is zoom in, I can see hints of noise:

DWhHea8.png

 

I did an auto background extraction and here is the extracted data

DPu8OXJ.png

 

Here is the result after extraction:

Z3H2KEu.jpg

 

Here is a zoom in:

 

9UhWR6w.png

 

Looks horrendous!!! After 70 frames and dithering in between I still get these ugly walking patterns?



#89 choward94002

choward94002

    Surveyor 1

  • *****
  • Posts: 1,545
  • Joined: 25 Jul 2016
  • Loc: Central AZ

Posted 26 December 2018 - 01:52 PM

... because you DO need darks ... every time, without exception ...

 

We know that pixel wells accumulate dark current, but they do it at uneven rates ... at temp X after Y seconds well A might be at 5, well B might be at 6, well C might be at 10, well D is a "hot" pixel, it's at 16554 and well E has 1

 

You take a picture that adds 10 units of signal to all wells, so after Y + 10 seconds well A has 15, well B has 16, well C has 20 units and D is at 16554 and well E is at 11, cool ...

 

now we take another picture, dither by one pixel, so now well A has 16, well B has 20, well C is now at 16554, well D is now at 11, also cool ...

 

now we stack ... software says "hey, pixel D and pixel C are both "hot" in different spots, that's not right!" and pulls the value from the second pix to switch with hot pixel D in the first pix, pulls the value from the first pix C to swap with the hot pixel C in the second pix, then registers the pix so that the dither is undone ... so now we have this for the pixels ..

 

A is at (15 + 15), B is at (16 + 16), C is at (20 used twice), D is at (11 used twice) ... but that's not right, remember we sent in 10 units of signal on each pix, they should all be at 20!  Instead you have C being brightest, then B, then A, finally D.  Realize that you're working with a bayer matrix, so suppose that A is your red channel for the pixel, B is your blue channel, C and D are your green channels ... now when we debayer we get a combined color of salmon pink ...

 

On the other hand, if we had taken a dark the computer would have subtracted those dark current levels from each pixel BEFORE it was stacked, which means all of the pixels would have been at the signal level of 20 ... your color is now burgundy ...

 

That's why you're seeing that "walking rain" pattern; dithering will HELP but dithering isn't there for the walking rain, it's there for the hot/ cold pixel correction.  Darks are for the walking rain ...
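A toy numerical version of the well example above (four wells instead of five, purely illustrative values):

```python
# Dark current is fixed to each physical well, so after a 1-pixel dither
# and re-registration the hot wells land on different sky positions.
# Subtracting a master dark first removes the per-well offsets, and the
# stacked signal comes out even -- the "all at 20" outcome in the post.

dark = [5, 6, 10, 16554]      # per-well dark counts at this temp/exposure
signal = 10                   # sky signal added to every well per frame

frame1 = [d + signal for d in dark]

# Dither the sky by one pixel: each sky position is now read by the
# neighbouring well, so its dark contribution changes.
shifted_dark = dark[1:] + dark[:1]
frame2 = [d + signal for d in shifted_dark]

# With dark subtraction, both frames reduce to pure signal before stacking:
cal1 = [f - d for f, d in zip(frame1, dark)]
cal2 = [f - d for f, d in zip(frame2, shifted_dark)]
stacked = [a + b for a, b in zip(cal1, cal2)]
print(stacked)   # every sky position ends up at 20
```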

 

Darks are VERY sensitive to temperature, they double every 6C (and halve every 6C, which is why cooling is so important) and a camera's dark pattern will change over time.  My dark libraries are spaced 2C and 15sec apart, and I know of some here who space them 1C apart ...

 

Looking much better though, nice round stars and clean field edges, no seagulls flapping away or comets!  Nicely done!



#90 ks__observer

ks__observer

    Surveyor 1

  • *****
  • Posts: 1,951
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 26 December 2018 - 02:02 PM

... because you DO need darks ... every time, without exception …

Dark frames are to eliminate fixed-pattern-noise and hot pixels.

This is separate from temp related noise which is uniform Poisson noise -- which we can reduce with cooling.

Some, like the person who wrote APT, suggest that dithering alone will solve FPN + hot pixels.  He also suggests dithering is preferred over dark frames because dark frames add more temp related dark noise (distinct and separate from FPN).

Interesting thread.


Edited by ks__observer, 26 December 2018 - 02:03 PM.


#91 choward94002

choward94002

    Surveyor 1

  • *****
  • Posts: 1,545
  • Joined: 25 Jul 2016
  • Loc: Central AZ

Posted 26 December 2018 - 02:06 PM

Dark frames are to eliminate fixed-pattern-noise and hot pixels.

This is separate from temp related noise which is uniform Poisson noise -- which we can reduce with cooling.

Some, like the person who wrote APT, suggest that dithering alone will solve FPN + hot pixels.  He also suggests dithering is preferred over dark frames because dark frames add more temp related dark noise (distinct and separate from FPN).

Interesting thread.

Jon Rista has posted a lot of info here on darks and dithering and FPN and VPN (the "strobe" effect), all of which is very much worth a CN search and read ... I'm not going to restate what he's so eloquently done in the past, but simply suggest a quick search/ read to help clarify what fixes what and when (and why) ...

 

As to the fellow writing APT's suggestions, well, glad that that works for him.  Some people suggest I don't need to use snow tires when I'm driving in the mountains in the winter too, glad that works for them too ...

 

smile.gif


Edited by choward94002, 26 December 2018 - 02:07 PM.


#92 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 02:30 PM

I'll add the dark frames tonight...so you guys are saying ALL of the red streaks will be removed as long as I have darks??

#93 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 26 December 2018 - 02:44 PM

Darks are essential. Dithering is essential. Both are required to effectively deal with FPN. Without darks, the DFPN is much higher than it would be if you were using darks. Dithering alone is insufficient. Combine proper dark calibration with dithering, and things will improve. By proper, I mean make sure you are not scaling the darks, and make sure the darks are well-matched to the lights. FPN also comes from photon signal as well, and flats are necessary to correct this other type of FPN. It does not appear as though you are using flats, and if not, I highly recommend you use them. 

 

Even with the above, it still appears as though your exposures are quite shallow. Most of the signal is LP, and once the LP is extracted, the remaining signal is mostly right at the read noise and FPN floor. You have stacked only 70 60-second subs...which is only slightly more than an hour. You really need much, much more than that to get good results in a more heavily light polluted area. I've mentioned hundreds of subs before...and that was not accidental. With this camera, I recommend no less than 100 subs when using shorter exposures like this, and ideally you want much more than that. You want to integrate several hours at least, which would be 180 subs for three hours, 240 subs for four hours. More hours would be better, however once you get beyond 4, you are looking at 8-16 hours to really make a difference...

 

Alternatively...to get the necessary hours of integration, you could use longer exposures. The Pleiades are very, very bright. It is normal to clip them unless you are imaging at a significantly larger image scale, and imaging only one or two of the pleiades at a time with a system that spreads their light over many more pixels (thus reducing the saturation rate). Don't rate your stellar clipping on the pleiades or their brightest neighbor stars. Base it on the other stars in the frame. You may well be able to use 2, maybe even 3 minute subs. With longer subs, you would get more hours of integration with fewer total subs...say 100x3m subs is 5 hours, and even 100x2m subs would be 3h20m. Further, the deeper subs will bury the DFPN more, which will minimize any issues with remnant DFPN (any pattern that might remain after calibration) and allow dithering to be more effective.
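The sub-count arithmetic in one place (simple count × exposure, nothing camera-specific):

```python
# Total integration is just frame count times sub length, so longer subs
# reach a given number of hours with far fewer frames -- and fewer
# dither/settle gaps in between.

def integration_hours(n_subs, sub_seconds):
    return n_subs * sub_seconds / 3600.0

print(integration_hours(70, 60))    # the current stack: ~1.17 h
print(integration_hours(180, 60))   # 3 h of 60 s subs
print(integration_hours(100, 180))  # 5 h of 3 min subs
print(integration_hours(100, 120))  # 3h20m of 2 min subs
```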


Edited by Jon Rista, 26 December 2018 - 02:45 PM.

  • 42itous1 likes this

#94 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 03:04 PM

I thought collecting data at f/2 could substitute for the hours of exposure that people do at say f/5 or f/6.

After all, 1 hour at f/2 is the same as 9 hours at f/6, right?

So my 70 minutes becomes 630 minutes ...
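The nominal f-ratio arithmetic joelin is using here (which Jon's reply qualifies with the central-obstruction and signal-depth caveats) looks like:

```python
# Photon flux per pixel scales as (f_slow / f_fast)^2 for the same sensor,
# so f/2 is nominally (6/2)^2 = 9x "faster" than f/6. This ignores central
# obstruction and whether the per-sub signal clears the noise floor.

def speed_ratio(f_fast, f_slow):
    return (f_slow / f_fast) ** 2

ratio = speed_ratio(2.0, 6.0)
print(ratio)          # 9.0
print(70 * ratio)     # 70 min at f/2 -> 630 min of nominal f/6 equivalent
```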

#95 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 03:14 PM

... because you DO need darks ... every time, without exception ...

We know that pixel wells accumulate dark current, but they do it at uneven rates ... at temp X after Y seconds well A might be at 5, well B might be at 6, well C might be at 10, well D is a "hot" pixel, it's at 16554 and well E has 1

You take a picture that adds 10 units of signal to all wells, so after Y + 10 seconds well A has 15, well B has 16, well C has 20 units and D is at 16554 and well E is at 11, cool ...

now we take another picture, dither by one pixel, so now well A has 16, well B has 20, well C is now at 16554, well D is now at 11, also cool ...

now we stack ... software says "hey, pixel D and pixel C are both "hot" in different spots, that's not right!" and pulls the value from the second pix to switch with hot pixel D in the first pix, pulls the value from the first pix C to swap with the hot pixel C in the second pix, then registers the pix so that the dither is undone ... so now we have this for the pixels ..

A is at (15 + 15), B is at (16 + 16), C is at (20 used twice), D is at (11 used twice) ... but that's not right, remember we sent in 10 units of signal on each pix, they should all be at 20! Instead you have C being brightest, then B, then A, finally D. Realize that you're working with a bayer matrix, so suppose that A is your red channel for the pixel, B is your blue channel, C and D are your green channels ... now when we debayer we get a combined color of salmon pink ...

On the other hand, if we had taken a dark the computer would have subtracted those dark current levels from each pixel BEFORE it was stacked, which means all of the pixels would have been at the signal level of 20 ... your color is now burgundy ...

That's why you're seeing that "walking rain" pattern; dithering will HELP but dithering isn't there for the walking rain, it's there for the hot/ cold pixel correction. Darks are for the walking rain ...

Darks are VERY sensitive to temperature, they double every 6C (and half every 6C, which is why cooling is so important) and a camera's dark pattern will change over time. My dark libraries are spaced 2C and 15sec apart, and I know of some here who space them 1C apart ...

Looking much better though, nice round stars and clean field edges, no seagulls flapping away or comets! Nicely done!


And with all the halving of noise every 6C...I'm getting noise this bad at -20C.

I can’t imagine what an exposure at +30C would look like. That’s over 256 times more noise!!

Surely there must be a point where after halving it a few times it's undetectable...
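The rule-of-thumb math behind that figure: doubling every 6 C, going from -20 C up to +30 C is 50 C, i.e. 50/6 ≈ 8.3 doublings.

```python
# Dark current roughly doubles every 6 C (the thread's rule of thumb),
# so the factor between two temperatures is 2^(delta_T / 6).

def dark_current_factor(t_from_c, t_to_c, doubling_c=6.0):
    return 2.0 ** ((t_to_c - t_from_c) / doubling_c)

factor = dark_current_factor(-20, 30)
print(f"~{factor:.0f}x more dark current at +30 C than at -20 C")
```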

#96 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 26 December 2018 - 03:19 PM

I thought collecting data at f/2 could substitute for the hours of exposure that people do at say f/5 or f/6.

After all, 1 hour at f/2 is the same as 9 hours at f/6, right?

So my 70 minutes becomes 630 minutes ...

 

Sort of, but it is not quite that simple. That would only be the case if both apertures had the same CO (or no CO at all) by area of obstruction, and it would only be the case if you were exposing enough object signal throughout the frame, which does not appear to be the case in your images. If you are picking up an electron or two of object signal in some areas only every few frames or so, then it will take significantly longer to build up a reasonable signal in those areas of the frame, than if you pick up an electron or two in every frame.

 

Even at f/2, you want a couple hours or so of integration with LP. LP amounts to the majority of your signal. The LP signal offset is removed with gradient extraction...however, ALL of the noise that that light pollution signal contained remains in your image after gradient extraction. As such, you must expose long enough to overcome that additional noise...which from what I can see in your example images here, is likely the single largest source of noise, likely by several times over any other source of noise. One hour of exposure, f/2 or not, is not enough. You need more. If your LP noise is 20-30e-, and your read noise is 2e-, and dark current basically immaterial, then you need to expose enough, both per-sub and in total integration, to overcome that large source of noise. Now, I've recommended 2-3 hours of integration for you. In light polluted areas, I usually recommend TENS of hours...ten hours at the very least, but significantly more if you can achieve it. That is for people imaging at f/4, f/5, f/6, etc. What can be done in an hour at a dark site could require 20, 30, 40 hours or more in a light polluted zone with the same system. And that goes for OSC as well as mono.
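A rough SNR model of Jon's point, with illustrative per-sub electron rates (not measurements of joelin's sky): light-pollution shot noise dominates, and stack SNR grows only as the square root of the number of subs.

```python
import math

# Per-sub electron counts below are assumptions for illustration:
# 5 e- of object signal, 600 e- of light pollution (~24.5 e- shot noise,
# in the 20-30 e- range mentioned), 2 e- read noise, dark current ignored.

def stack_snr(obj_e, lp_e, read_e, n_subs):
    signal = obj_e * n_subs
    noise = math.sqrt((obj_e + lp_e) * n_subs + n_subs * read_e ** 2)
    return signal / noise

for n in (70, 180, 280):
    print(f"{n:3d} subs -> SNR {stack_snr(5, 600, 2, n):.2f}")
```

Quadrupling the number of subs only doubles the SNR, which is why "tens of hours" is the usual prescription under heavy light pollution.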

 

You need more exposure...and 3 hours should be very achievable even on the shortest of nights.


Edited by Jon Rista, 26 December 2018 - 03:39 PM.


#97 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,616
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 26 December 2018 - 03:22 PM

And with all the halving of noise every 6C...I'm getting noise this bad at -20C.

I can’t imagine what an exposure at +30C would look like. That’s over 256 times more noise!!

Surely there must be a point where after halving it a few times it's undetectable...

Yes, at deeper cooling temps, halving dark current again does not matter as much. If you have 0.005e-/s, and you halve that, you end up with 0.0025e-/s, and the difference between those two is usually going to be immaterial in the final integration. It would take special circumstances for the difference to matter, such as very very long individual sub-exposures (which you do not seem to be doing).
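In numbers, for a 60 s sub at Jon's example rates:

```python
import math

# At 0.005 e-/s vs 0.0025 e-/s, a 60 s sub collects only a fraction of
# an electron of dark current either way, and its Poisson shot noise is
# tiny next to ~2 e- of read noise -- hence "immaterial".

def dark_in_sub(rate_e_per_s, sub_s=60):
    """Dark electrons accumulated in one sub, and their shot noise."""
    dark_e = rate_e_per_s * sub_s
    return dark_e, math.sqrt(dark_e)

for rate in (0.005, 0.0025):
    dark_e, noise = dark_in_sub(rate)
    print(f"{rate} e-/s -> {dark_e:.2f} e- dark, {noise:.2f} e- shot noise (read noise ~2 e-)")
```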



#98 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 05:38 PM

So would it be fair to say at -20C , the dark current shouldn’t matter?

#99 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 12,793
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 26 December 2018 - 05:56 PM

So would it be fair to say at -20C , the dark current shouldn’t matter?

When you say "no calibration files" - does that mean no bias and no flat?  You absolutely need to subtract at least the bias from the frames before stacking.  Normally you would subtract the master dark from each frame - but if you assume the dark current is negligible - that just says you can use the master bias as a dark.  It doesn't say you can ignore the dark or bias completely.

 

Even so, it surprises me that the pattern of motion in your blink video could cause this end result, because there is a fair amount of motion.  Are you sure you are stacking all the frames in that sequence?  If so - it is still somewhat consistent with non-random dithering - because there is clear motion to the upper left - consistent with the noise pattern - then a move to the upper right - and then motion to lower right.  So a good part of the motion is in the direction of the final noise - and could explain the final result having the streaked pattern.

 

Ultimately you need to reduce flexure and any field rotation - and use at least a master bias subtract - and dither widely and randomly enough that it randomizes any underlying pattern of motion between frames.

 

Frank



#100 joelin

joelin

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2,873
  • Joined: 14 Jan 2008
  • Loc: Saratoga, CA

Posted 26 December 2018 - 06:18 PM

When you say "no calibration files" - does that mean no bias and no flat?  You absolutely need to subtract at least the bias from the frames before stacking.  Normally you would subtract the master dark from each frame - but if you assume the dark current is negligible - that just says you can use the master bias as a dark.  It doesn't say you can ignore the dark or bias completely.

 

Even so, it surprises me that the pattern of motion in your blink video could cause this end result, because there is a fair amount of motion.  Are you sure you are stacking all the frames in that sequence?  If so - it is still somewhat consistent with non-random dithering - because there is clear motion to the upper left - consistent with the noise pattern - then a move to the upper right - and then motion to lower right.  So a good part of the motion is in the direction of the final noise - and could explain the final result having the streaked pattern.

 

Ultimately you need to reduce flexure and any field rotation - and use at least a master bias subtract - and dither widely and randomly enough that it randomizes any underlying pattern of motion between frames.

 

Frank

I'm not sure how to evaluate a random walk, but the video seems random enough to me. Sure, you could argue the stars seem to go in 3 different directions (upper left, upper right, lower right), but I would suspect there are some overarching directions in any random walk. This post: https://www.quora.co...-of-a-fair-coin says there's an 11% chance of getting 5 heads in a row in just 10 coin flips...wow! 11% is quite a bit
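The quoted 11% figure checks out by brute force over all 2^10 outcomes:

```python
from itertools import product

# Probability that 10 fair coin flips contain a run of at least 5
# consecutive heads, counted exhaustively over all 1024 sequences.

def has_run(seq, length=5):
    run = 0
    for flip in seq:
        run = run + 1 if flip else 0
        if run >= length:
            return True
    return False

hits = sum(has_run(seq) for seq in product([0, 1], repeat=10))
print(hits, "/ 1024 =", hits / 1024)   # 112 / 1024 ~ 10.9%
```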



