Inconsistent star counts in NINA

8 replies to this topic

#1 Hindsight2020

Hindsight2020

    Viking 1

  • *****
  • topic starter
  • Posts: 964
  • Joined: 10 Jan 2023
  • Loc: Atlanta, GA

Posted 15 August 2024 - 01:00 PM

I've asked this in the NINA discord and am hopeful someone may respond there but since that hasn't happened yet, I'll tap the collective wisdom here too.

 

Star counts should change - clouds, seeing, smoke, transparency, etc. What I don't understand is why there is such a MASSIVE delta between some subs that can't be explained.

 

A while ago, I updated my NINA settings so it logs star count (among other things) in an effort to get an at-a-glance view of transparency across a night, or multiple nights of imaging. 

 

I have not (to the best of my knowledge) changed any settings recently (other than adding a couple more variables into the filename and appending some letters in front of each to help make it easier to read), and all images are captured via advanced sequencer in NINA that is generally fully automated. 

 

The variance I am seeing can be quite huge. In some cases I will see a star count of over 4,000 stars, while in other cases only 1,200. When I view these raw lights in PixInsight, the mean and median are basically the same (within 1 ADU), so I would think this rules out moonlight. I don't SEE anything in the images just looking at them in PI that makes me think one of them suffers from a major transparency deficiency. Using the FWHMEccentricity script, PI reports 7,315 stars in the image for which NINA reported 4,315 stars, and 7,611 stars in the image for which NINA reported 1,263 stars. I don't expect NINA to ever agree with PixInsight on star counts due to different algorithms and settings, but I would expect PI to show a large variance just like NINA does. Not only is there no large variance, the variance that is there runs in the opposite direction from NINA's.
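As an illustration of the algorithm/threshold point: most star detectors are essentially peak finders above a brightness cutoff, so two tools can legitimately disagree on the very same frame. A toy sketch with invented numbers - not NINA's or PixInsight's actual detection code:

```python
# Toy illustration: the same frame can yield very different star counts
# depending on a detector's brightness threshold. All numbers are invented;
# this is not NINA's or PixInsight's actual detection logic.
peak_values = [50, 120, 300, 80, 1500, 60, 700, 95, 2200, 45]  # hypothetical star peaks (ADU above background)

def count_stars(peaks, threshold):
    """Count peaks brighter than the detector's cutoff."""
    return sum(1 for p in peaks if p > threshold)

strict = count_stars(peak_values, 500)  # a conservative detector finds 3
loose = count_stars(peak_values, 40)    # a sensitive detector finds all 10
```

With a conservative cutoff only the brightest three peaks survive, while a sensitive cutoff counts everything, so raw counts from two programs are not comparable - only trends within one program are.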

 

Setup is an FRA400 on an EQ8 with 3nm Chroma OIII filter and an ASI2600MM camera. 

 

I notice shifts within nights, and bigger shifts from night to night.

 

Everything about these should be the same: same exposure, same filter, same gain. Dates were 8/8 and 8/15. In one of these, NINA reported 4,315 stars in the filename; in the other, 1,263. The 4,315 is not an outlier - I have multiples in the 4,000s, the 3,000s, the 2,000s, and even down into the 1,200s.

 

Here are two raw subs for example purposes

 

https://drive.google...iew?usp=sharing

 

https://drive.google...iew?usp=sharing

 

Any ideas?



#2 PIEJr

PIEJr

    Apollo

  • *----
  • Posts: 1,477
  • Joined: 18 Jan 2023
  • Loc: Northern Los Angeles County, Southern California

Posted 15 August 2024 - 01:45 PM

I think the answer lies somewhere in this:

Ever go swimming with goggles or a face mask and look up at the sky from underwater?

Yeah, I think the night sky is something like that, but not as pronounced.

But still, a lot of variation.

 

So yep, the atmosphere is one big variable in the results you might expect. And something we cannot control.

 

But then, I don't buy into Orion being a star factory. I think Orion might be stars from black holes coming back from the other side.

 

And while I really like NINA, I'm not sure she is a math whiz at counting.


Edited by PIEJr, 15 August 2024 - 01:47 PM.


#3 james7ca

james7ca

    Hubble

  • *****
  • Posts: 13,113
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 15 August 2024 - 02:12 PM

I think your problem MAY be because of the very high number of bright pixel defects in these images. NINA is probably measuring those defects as stars and unless you've calibrated your subs with darks in PixInsight it MAY be doing the same thing. Are you using darks to calibrate your files in PixInsight?

 

The files you uploaded were NOT calibrated. I ran CosmeticCorrection to remove the hot pixels, and that process found almost one million hot pixels; even then, there were still pixel defects that had not been removed. Plus, when I perform an equal histogram stretch on these images the backgrounds come out the same, but the actual star sizes are significantly different visually, so there could be a focus problem between these two subs (and that too could affect the star counts). But since you MAY just be measuring mostly hot pixels when you run the FWHMEccentricity script, you MAY not be seeing the true difference in the sizes and shapes of the stars.

 

What is the FWHM that you measure in PixInsight? Have you run "Support" in the FWHMEccentricity script to see which stars are actually being measured? You'll want to do a ScreenTransferFunction stretch on the analysis map to see the faintest stars being measured, and then overlay the map on the original file (so you can alternate between the map and the actual image).

 

I ran CosmeticCorrection on both images and then I measured the FWHM on each and 2024-08-15_04-10-43_O_t-8.00_600.00s_f4.84_h1.59_s1263_r0.24_e0.39_g100-50_0116_cc came out with a FWHM of 1.36 pixels (probably undersampled) and the other file (2024-08-08_00-22-00_O_-7.90_600.00s_5.76_4315_0.29_G100_OFF50_0023_cc) came in at 1.98 pixels. Thus, something definitely affected the star sizes. It also looks like there was a meridian flip between these two subs and I rotated one to match the other before I ran CC and measured the result.
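For context on those FWHM numbers: for a Gaussian star profile, FWHM and the Gaussian sigma are related by FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355σ, so a 1.36-pixel FWHM implies a σ of only ~0.58 px - the profile barely spans a single pixel, which is why such a measurement suggests undersampling. A quick sketch (the textbook Gaussian relation; FWHMEccentricity may model star profiles differently):

```python
import math

# Generic Gaussian relation: FWHM = 2*sqrt(2*ln 2) * sigma (~2.355 * sigma).
# FWHMEccentricity may fit a different profile internally; this is just the
# textbook Gaussian case.
FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.3548

def sigma_from_fwhm(fwhm_px):
    """Gaussian sigma (pixels) implied by a measured FWHM (pixels)."""
    return fwhm_px / FWHM_PER_SIGMA

narrow = sigma_from_fwhm(1.36)  # ~0.58 px: the profile barely spans a pixel
wide = sigma_from_fwhm(1.98)    # ~0.84 px
```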

 

All that said, since the files you uploaded weren't calibrated I can't tell what your calibrated files actually look like and that could make a big difference in the analysis of the problem.


Edited by james7ca, 15 August 2024 - 02:15 PM.

  • Hindsight2020 likes this

#4 Hindsight2020

Hindsight2020

    Viking 1

  • *****
  • topic starter
  • Posts: 964
  • Joined: 10 Jan 2023
  • Loc: Atlanta, GA

Posted 15 August 2024 - 04:25 PM

James, thank you very much for taking the time to look at and analyze the files. 

 

Are you using darks to calibrate your files in PixInsight?

Ultimately yes (when I stack and process the images), but for the purposes of my troubleshooting, no. My thought process is: NINA isn't dark-subtracting or performing any calibration on the lights, yet it is providing star counts, so I don't want to calibrate during debugging in PI either.

 

Does the number of hot pixels you found seem overly high for an ASI2600MM? What other pixel defects did you notice?

 

RE the star sizes coming out quite differently, I would not be surprised as those images were taken on different days, it has been cloudy off and on, and varying degrees of insane humidity. I expect that to be the case and I am specifically looking to use star-counts to help me gauge it so I can weed out individual subs, blocks of subs, or entire nights, before I load subs into WBPP. But based on the inconsistency of star-count in NINA, I can't seem to rely on that as a yardstick. 

 

If NINA is reporting star counts on uncalibrated files, would you suggest that, due to hot pixels and other factors, I not bother trying to rely on NINA star counts?

 

I will calibrate now and post the files shortly.



#5 Hindsight2020

Hindsight2020

    Viking 1

  • *****
  • topic starter
  • Posts: 964
  • Joined: 10 Jan 2023
  • Loc: Atlanta, GA

Posted 15 August 2024 - 05:30 PM

I calibrated them. Here are two different images from the same batch: one where NINA reported over 4,000 stars and another where it reported around 2,000.

 

I can definitely notice that the lower star-count images are dimmer all-around. Much less signal. This would be consistent with changes to transparency resulting in reduced or increased star counts. Hot pixels or not, there is a clear difference between the two in terms of signal.

 

In PI, I did an STF and blinked the two. I mostly see differences caused by hot/warm pixels or other pixel defects - I don't see obviously different star counts. Maybe NINA doesn't count some of the dimmer stars that I can see in the image. That would explain why the low-transparency images have a lower star count (the dim stars fall below the brightness cutoff) and why the images taken with better sky transparency, where every star is brighter, have more of the dim stars pushed above the cutoff and counted. Regardless, there is definitely a link between overall target brightness and star count, which indicates the star-count value NINA reports is adequate for what I need it for (judging sky transparency).
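That mechanism - a fixed detection cutoff interacting with transparency - can be sketched with invented numbers:

```python
# Toy model of the hypothesis above: a fixed detection cutoff plus a
# transparency loss pushes faint stars below threshold. All numbers invented.
star_fluxes = [30, 45, 60, 90, 150, 400, 900, 2500]  # hypothetical peak values
CUTOFF = 50                                          # hypothetical detector threshold

def detected(fluxes, transmission=1.0):
    """Count stars still above the cutoff after scaling by sky transmission."""
    return sum(1 for f in fluxes if f * transmission > CUTOFF)

clear = detected(star_fluxes)      # good transparency: 6 stars detected
hazy = detected(star_fluxes, 0.5)  # 50% light loss: only 4 stars survive
```

A uniform dimming of every star leaves the bright ones detectable but silently drops the faint ones, so the count falls even though nothing in the stretched image looks obviously different.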

 

I used automatic Cosmetic Correction in WBPP yet still see some pixel defects which is odd. Perhaps I need to adjust my Cosmetic Correction settings. 

 

Here are two calibrated+registered images.

 

https://drive.google...iew?usp=sharing

 

https://drive.google...iew?usp=sharing



#6 james7ca

james7ca

    Hubble

  • *****
  • Posts: 13,113
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 15 August 2024 - 11:07 PM

Well, if the sky transparency changes, that will affect your star counts. But I look at both HFR (similar to FWHM) and the median/mean to judge focus and sky transparency or brightness when capturing with N.I.N.A.

 

In the case of PixInsight, you should always calibrate the files you want to measure with the FWHMEccentricity script. Also, the difference I measured between an FWHM of 1.36 pixels and 1.98 pixels is quite large, and that indicates some kind of problem - seeing conditions, focus, or perhaps a measurement error (for the latter, measuring noise rather than stars). In any case, an FWHM of 1.36 pixels is undersampled to the point where it could affect the shape of your stars. What image scale are you using for your imaging?
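For what it's worth, the image scale can be estimated from published specs (assuming the FRA400's stock 400 mm focal length and the ASI2600MM's 3.76 µm pixels - both assumptions about the exact rig): about 1.94″/px, which puts a 1.36 px FWHM at roughly 2.6″ on the sky.

```python
# Pixel-scale sanity check, assuming the FRA400's stock 400 mm focal length
# and the ASI2600MM's 3.76 um pixels (both are assumptions about the rig).
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

scale = pixel_scale(3.76, 400.0)                 # ~1.94 arcsec/px
fwhm_arcsec = [f * scale for f in (1.36, 1.98)]  # the two measured FWHMs on-sky
```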

 

As for the pixel defects, I don't think CosmeticCorrection reports the actual number of hot or cold pixels it finds; it uses a statistical measure to estimate that number. So when I said it found almost one million hot pixels, that was only CosmeticCorrection's estimate (probably based upon the sigma value used to automatically detect the defects). But there were definitely tens of thousands of small hot pixels in those subs, and that MAY have been affecting your measurements. As to what you should expect from the ASI2600MM, I really don't know if that is particularly high for hot pixels, but I assume you were using cooling on the camera, since that can greatly affect those kinds of defects.
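The kind of sigma-based rejection described here can be sketched with a robust median/MAD flagger (the pixel values and the rejection factor are invented; this is not CosmeticCorrection's actual algorithm):

```python
import statistics

# Sketch of median/MAD-based hot-pixel flagging, similar in spirit to the
# sigma-based auto-detection described above. The pixel values and the k=10
# rejection factor are invented, not CosmeticCorrection's actual defaults.
pixels = [100, 102, 98, 101, 99, 5000, 100, 103, 97, 4800, 101, 100]

med = statistics.median(pixels)
mad = statistics.median([abs(p - med) for p in pixels])
sigma_est = 1.4826 * mad           # MAD -> sigma for a normal distribution
threshold = med + 10 * sigma_est   # flag pixels 10 "sigma" above the median

hot = [p for p in pixels if p > threshold]  # the two defective pixels
```

The median/MAD pair is used instead of mean/stdev because the defects themselves would inflate an ordinary standard deviation and hide themselves from a plain 3-sigma cut.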


Edited by james7ca, 16 August 2024 - 12:21 AM.


#7 james7ca

james7ca

    Hubble

  • *****
  • Posts: 13,113
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 16 August 2024 - 12:14 AM

Did you perform an equal STF on both subs? To do that, select the reference file and do the STF. Then, while the second sub is still in the background drag the STF instance icon (small solid blue triangle) over to the other file and drop it. This will do an equal stretch on both subs. This will allow you to compare the star quality and the backgrounds more accurately.


Edited by james7ca, 16 August 2024 - 12:18 AM.


#8 ngc2218

ngc2218

    Viking 1

  • -----
  • Posts: 501
  • Joined: 26 Jun 2022

Posted 16 August 2024 - 02:50 AM

Hindsight2020, on 15 August 2024 - 01:00 PM, said:

I don't expect NINA to ever agree with PixInsight in star counts due to different algorithms and settings but I would expect there to be a large variance in star count reported by PI just like in NINA, but not only is there not a large variance, the variance that is there is opposite in PI from what it is in NINA.

Different algorithms will yield different star counts.



#9 Hindsight2020

Hindsight2020

    Viking 1

  • *****
  • topic starter
  • Posts: 964
  • Joined: 10 Jan 2023
  • Loc: Atlanta, GA

Posted 16 August 2024 - 06:06 AM

Different algorithms will yield different star counts.

That is correct - I mentioned the same in my OP:

I don't expect NINA to ever agree with PixInsight in star counts due to different algorithms and settings but I would expect there to be a large variance in star count reported by PI just like in NINA, but not only is there not a large variance, the variance that is there is opposite in PI from what it is in NINA. 

 

 

@James, yes I did do an equal STF on both. 

 

Well, if the sky transparency changes that will affect your star counts.

 

Yes, and that is the reason I am appending star counts to the filename: to give me an at-a-glance indication of major sky-transparency issues so I can manually cull subs before running WBPP. I have a number of other stats appended to the filename to assist with the same, such as FWHM, guiding RMS, eccentricity, etc. I am now at the stage of manually validating what NINA reports to ensure everything is working as it should; if NINA tells me one sub has 4,000 stars and another has 1,000, I want to be able to take that to the bank and cull the 1,000-star sub. But as part of my verification, I wasn't seeing what I expected to see - i.e., no major visual difference. Your advice to compare the subs after calibration instead of before was good advice and allowed me to get to the bottom of it. I hadn't done that previously because I figured, "If NINA is counting stars on raw subs, I should be able to compare the raw subs myself and see the difference," but I've learned that just isn't the case. Bottom line: the star counts are working properly as an indicator of sky transparency. The counts don't need to be accurate in absolute terms; they just need to reflect transparency accurately, and I've verified that now.

 

Thanks again for the help!

 

On to the next problem....



