PixInsight Integration Results: What Are Yours Like?


#1 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 13 August 2019 - 08:31 PM

Hi,

 

   I need to work on improving my image processing skills, and I have often wondered what other PixInsight users see once they have an integrated image fresh from the ImageIntegration process. I tend not to expose deeply enough, and I end up stretching my images until they break, then backing off a little. That is probably not the best way to approach things.

 

   As a case in point, I am working with some OSC data gathered the other night. The target is a 13th magnitude galaxy and friends. I have 3 hours of data taken from skies measuring 17.75 mpsas -- the result of shooting in town with lots of light pollution and a roughly first quarter moon in the sky about 35° away from the target. Once I calibrate, DeBayer, assign weights, align, and integrate the captured frames, I am left wondering if others see similar messes. I am planning to gather about 12 to 15+ hours total on this object, but this same set of questions arises for every target I image, it seems.

 

   Specifically: what do your own "typical images" look like coming right out of ImageIntegration?

  • How "flat" are your images?
    Do you see any residual vignetting or complex gradients when you really stretch them hard in STF?
    How many ADU separate the lightest and darkest areas of background signal? (This is easiest to judge on galaxy images, I think. My recent image shows a difference of 19-20 ADU in background intensities on a base of 159 ADU.) Some of that was a linear light-pollution gradient, and some of it looked like residual vignetting.
     
  • How faint are the structures you wish to later make clearly visible in the final finished image?
    Again, how many ADU separate the faint portions of your linear image from the background? (My recent in-process galaxy shot shows only 3 ADU between the background and the arms of the galaxy right out of ImageIntegration. For reference, on my system, 3 ADU is much less than one electron of signal -- about 0.3 e-, actually. See the quick conversion sketch just after this list.)
     
  • Do you routinely need to make more than two passes through DBE to completely flatten your images?
    When do you consider the image "flat" enough to move on with other processing? My galaxy image background varies by about 1.46 ADU after flattening. It took two passes through DBE and one touch-up pass of ABE to get there. My median background at this point is about 157 ADU.
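
   (For anyone wanting to sanity-check that ADU-to-electron conversion, here is a minimal Python sketch. The gain figure is only an example -- substitute your own camera's measured e-/ADU, rescaled to the bit depth of your stacked file.)

    # Convert a 16-bit ADU difference into electrons of signal.
    # The gain below is hypothetical; use your camera's measured gain,
    # rescaled to the bit depth of your integrated image.
    def adu_to_electrons(adu, gain_e_per_adu=0.1):
        return adu * gain_e_per_adu

    print(adu_to_electrons(3))   # ~0.3 e-, the figure quoted above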

   So, while there may not be a "typical" set of values for others, what do your results look like as you finish the image integration stage but before you start with the rest of your linear processing?

 

Sample_Galaxy_ScreenShot_180min.jpg

 

   This is what I see with only three hours of data. I hope it will bloom into something worthwhile after multiple imaging sessions. However, I never seem to be satisfied and always try to get more from that data than it wants to yield. Does everyone feel this way?

 

 

John


  • IshanAstronomer likes this

#2 NorthField

NorthField

    Viking 1

  • *****
  • Posts: 954
  • Joined: 01 Jun 2017
  • Loc: SW Missouri

Posted 13 August 2019 - 08:40 PM

Not that I’m “right”, but I try to shoot till I never want to shoot that object again, even if it takes months (now I’m on a couple that will take years 🙄).

Everybody is different, I know
  • pbkoden likes this

#3 IshanAstronomer

IshanAstronomer

    Vostok 1

  • -----
  • Posts: 176
  • Joined: 26 Dec 2011

Posted 13 August 2019 - 08:45 PM

I have the same questions, and I will be following the replies.

After image integration, when I apply an auto-stretch with STF, I see residual gradients from amp glow, imperfect flat calibration, and light pollution. DBE always gives me a hard time.



#4 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 13 August 2019 - 09:49 PM

Coming out of integration, in my Bortle 7 skies, I always have very large gradients in any broadband stack.  The first two pictures demonstrate.  82 minutes of RGB.  The stack, and the stack after one pass of ABE.  No processing except a stretch, necessary so one can see the data.

 

Narrowband is completely different.  The gradient is very much less.

 

The next post is 2 hours 50 minutes of Ha (CN restrictions dictate a separate post).  Same deal.  There was a small gradient removed.  Third picture.

 

The final image is here, with complete acquisition details.

 

https://www.astrobin...4117/G/?nc=user

 

ABE exampl before.jpg

 

ABE example after.jpg


Edited by bobzeq25, 13 August 2019 - 09:57 PM.

  • jdupton likes this

#5 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 13 August 2019 - 09:51 PM

.

 

ABE example Ha.jpg

 

ABE example Ha after.jpg

 

ABE example Ha gradient.jpg


Edited by bobzeq25, 13 August 2019 - 09:55 PM.

  • elmiko likes this

#6 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 13 August 2019 - 10:14 PM

Bob,

 

   Yes, your Ha data is very clean. It doesn't appear that much gradient had to be removed. (At least the gradient image looks relatively tame.) Your data was obviously very good for both the RGB and Ha.

 

   Regarding the RGB stack, can you put some numbers on the magnitude of the gradients that were removed with ABE? Are we talking 20+ ADU or less than 2 ADU between the darkest background and the lightest background? I am looking for some rough quantitative estimates by which I can judge, in general, whether my processing is really bad or whether I just need to amp up my data acquisition.

 

   It's hard for me to get a handle on whether others only have to fight 2 ADU worth of gradient / residual vignetting, or much, much more. Knowing where you started, I can compare to what I normally see, and then I'll know whether I just need lots more data or whether I need to hone my processing skills. At the moment, I don't know which windmill to tilt at.

 

 

John

 

PS: Another request: do you have a shot of the first photo (the blue one) with a simple unlinked STF applied? That would remove the color cast and allow us to see the gradients directly. If you have that, I'd love to see it. Just an unlinked STF rather than the linked STF you posted.


Edited by jdupton, 13 August 2019 - 10:29 PM.


#7 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 13 August 2019 - 10:46 PM

How would I do that?  Clicking and holding the cursor in PI gives a readout normalized to between 0 and 1.  Is there a setting somewhere that changes that?

 

Here's something pretty interesting.  I was trying to get you some ADU data.  My median ADU values (12 bit) for the 3 RGB channels went from 15.8, 12.5, 26.1 to 19.5, 19.5, 19.5.

 

ABE normalized the channels.  Judging from the tooltip you see when hovering the mouse over "normalize", that appears to be the default behavior when "normalize" is not checked.  Usual PI weirdness.

 

It probably goes along with the fact that mono R, G, and B stacks with my mono camera look a lot closer to narrowband than to OSC.

 

In case I haven't made it clear, I really don't understand all the technical details.  <smile>  I know what makes for a pretty picture.


Edited by bobzeq25, 13 August 2019 - 10:52 PM.


#8 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,103
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 13 August 2019 - 10:56 PM

How would I do that?  Clicking and holding the cursor in PI gives a readout normalized to between 0 and 1.  Is there a setting somewhere that changes that?

 

Of course there is. Edit menu > Readout Options > select Binary integer range with (for example) 16 bit depth.

Francesco



#9 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 13 August 2019 - 11:03 PM

Bob,

 

   Thanks for playing along. waytogo.gif

 

   Below are a couple of ways to look at the ADU (16 bit) values.

  • Apply an ABE to your integrated result. In the Interpolation and Output section, set the Function Degree to 0. This will remove the color cast but none of the gradients that may be present.
     
  • In the Target Image Correction section, set the Correction to be Subtraction (you probably already have this set) and make sure the Normalize is NOT checked.
     
  • Apply the ABE.
     
  • You can then read out the background levels with the mouse pointer. If they are normalized, multiply the readout by 65535. Alternately, you can change the readout mode (temporarily) to 16-bit ADU values by clicking the right-facing triangle at the bottom of the screen in the readout area and selecting "Integer Range | 16 bit". When done, you can go back to normalized readouts by selecting "Normalized Real Range | 1e-5" (or 1e-6) as you prefer.
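
   If you would rather script the arithmetic, here is a minimal sketch in Python (the readout values are made-up examples):

    # Turn PixInsight's normalized [0..1] readouts into 16-bit ADU and
    # difference two background readings. Values here are hypothetical.
    bright_bg = 0.002442    # readout over the brightest background
    dark_bg   = 0.002139    # readout over the darkest background

    to_adu16 = lambda v: v * 65535.0
    print(to_adu16(bright_bg) - to_adu16(dark_bg))   # about 19.9 ADU apart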

 

John


Edited by jdupton, 13 August 2019 - 11:04 PM.


#10 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 13 August 2019 - 11:41 PM

Bob,

 

   Thanks for playing along. waytogo.gif

 

   Below are a couple of ways to look at the ADU (16 bit) values.

  • Apply an ABE to your integrated result. In the Interpolation and Output section, set the Function Degree to 0. This will remove the color cast but none of the gradients that may be present.
     
  • In the Target Image Correction section, set the Correction to be Subtraction (you probably already have this set) and make sure the Normalize is NOT checked.
     
  • Apply the ABE.
     
  • You can then read out the background levels with the mouse pointer. If they are normalized, multiply the readout by 65535. Alternately, you can change the readout mode (temporarily) to 16-bit ADU values by clicking the right-facing triangle at the bottom of the screen in the readout area and selecting "Integer Range | 16 bit". When done, you can go back to normalized readouts by selecting "Normalized Real Range | 1e-5" (or 1e-6) as you prefer.

 

John

Thanks.  That was useful learning.

 

Most of what I'm seeing before ABE is the color cast.  It's hard to see exactly what the gradient change is because the data is really noisy when I move the cursor. 

 

If I knew how to clone a preview (transfer the pixel range of a preview from one image to another), I could maybe smooth the data out by measuring the median of the preview. Can you tell me how to do that?

 

Or do you have other ideas?

 

I'm going away shortly, will pick this up tomorrow.


Edited by bobzeq25, 13 August 2019 - 11:44 PM.


#11 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,103
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 13 August 2019 - 11:43 PM

Lay both image windows side by side. Select the preview you want to clone. Drag the preview tab from the left sidebar of the source image to the left sidebar of the target image.
Francesco
  • bobzeq25 and RossW like this

#12 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 12:23 AM

Bob,

 

   Yes, it's hard to get a good, constant reading by moving the mouse around, although that is the easiest way to get a rough idea of what is brightest and darkest. I resorted to doing just about what you asked when I pulled readings from my image to write up the original post. Here is what I did:

  • Apply the Function Degree 0 ABE as indicated above.
  • Define a small preview window in the darker area of background.
  • Select that preview using the tab at the left side of the image window.
  • Clone it by right-clicking its tab and selecting "Clone".
    The two previews will be on top of each other.
  • Select the main image using the top left tab of the image window.
  • Change the cursor to Edit Preview Mode by pressing the Alt-E keys on the keyboard.
  • Grab the top most preview and drag it to one of the brighter parts of the background.
    You now have one preview on a bright section of background and another (of exactly the same size) in a dark area of background.
     
  • Open the Image Statistics Process and click the Track View icon at the bottom right of its window. Set it for 16-bit readout mode.
  • Select one of the previews.
  • Read the median value or average the median values if an RGB image. Write down the value(s).
  • Select the other preview.
  • Read the median value or average the median values if an RGB image. Write down the value(s).
  • Subtract one median from the other to determine the ADU difference between the two preview locations.
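
   If you prefer to script it, here is a rough Python equivalent of the preview procedure above. The file name and box coordinates are made up -- point them at your own flattened integration and your own background patches. It assumes a mono image normalized to [0..1]:

    import numpy as np
    from astropy.io import fits

    img = fits.getdata("integration_abe_f0.fits").astype(np.float64)

    def box_median_adu16(data, x, y, size=50):
        # Median of a size x size patch, scaled to 16-bit ADU.
        patch = data[y:y + size, x:x + size]
        return np.median(patch) * 65535.0

    bright = box_median_adu16(img, 120, 140)     # "preview" on bright background
    dark   = box_median_adu16(img, 1500, 1100)   # same-size patch on dark background
    print(f"Background difference: {bright - dark:.2f} ADU (16-bit)")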

 

 

John



#13 terry59

terry59

    Cosmos

  • *****
  • Posts: 9,211
  • Joined: 18 Jul 2011
  • Loc: Colorado, USA

Posted 14 August 2019 - 06:54 AM

I find that results are directly related to the quality of the calibration data. The most challenging thing is to get good flats.


Edited by terry59, 14 August 2019 - 07:02 AM.

  • Bretw01 likes this

#14 WadeH237

WadeH237

    Aurora

  • *****
  • Posts: 4,950
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 14 August 2019 - 07:29 AM

I think that most of the issues that you are running into are the result of light pollution and short integration times.

 

This is an integration of about 5 hours from a dark sky site (SQM 21.8) at F/7.  I captured it at the last new moon.  This is just integration and a default auto STF.  There is no other processing.  No gradient reduction.  No histogram tweaks.  No noise reduction.  Nothing.  It's from an ASI1600MM-cool through an Astrodon luminance filter.  I also have about 10 hours of RGB data that I will eventually combine with it.  I really only calibrated and integrated it because it was the only night of luminance data that I took over 12 dusk-to-dawn imaging sessions at the site.  I'm months or years behind in processing data that I have, so it's going to be a while before I get to the final image.

 

Now you can't expect to get something like this from your skies, but given what you've posted, I don't think that you are doing anything wrong.  I think that it's just a combination of a faint object, strong light pollution, short integration time, and an OSC camera (great from dark skies, but challenging in bright skies).

Attached Thumbnails

  • default_autostretch.jpg

  • elmiko, jdupton, calypsob and 1 other like this

#15 WadeH237

WadeH237

    Aurora

  • *****
  • Posts: 4,950
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 14 August 2019 - 07:38 AM

Most of what I'm seeing before ABE is the color cast.

Just a comment on your data...

 

I've seen you post this combination of examples a few times now.  Do you realize that the color cast is almost certainly not in your data?  I would bet good money that if you take a single, raw sub from each filter and do a 1:1:1 combine, the strong blue cast is not there.  If the calibration is working properly, then it wouldn't be there when you combine a single, calibrated sub from each channel, either.

 

I've noticed that the scale algorithm that you use at integration time is where these casts are introduced.  If you use the Statistics tool to look at the median pixel values after integration, I suspect that you'll see that the blue channel has a higher median than the other two.  If you change the output scaling algorithm on ImageIntegration, you can change this behavior.
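
A quick way to check the same thing outside PixInsight -- a Python sketch, assuming a channels-first RGB FITS file normalized to [0..1] (the file name is hypothetical):

    import numpy as np
    from astropy.io import fits

    rgb = fits.getdata("rgb_integration.fits").astype(np.float64)
    for name, chan in zip("RGB", rgb):
        print(name, f"median = {np.median(chan) * 65535.0:.1f} ADU")
    # A blue median well above R and G is the cast described above,
    # not real color in the sky.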


Edited by WadeH237, 14 August 2019 - 07:39 AM.


#16 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 07:46 AM

Terry,

 

   I agree that calibration is king. Clean data and good calibration are the goals. Can you pin some numbers to what you routinely get?

 

Wade,

 

   That is a very nice L shot of the galaxy group. I love groups and Arp systems. Do you have any measurements of the gradients (bright to dark differences) in that shot? I can see some, but they do not look severe. However, STF can be very misleading, and getting numbers can tell a better story. I think your conclusions are right about my just needing lots more data and darker skies. I only have three hours on this target and need at least four or five times that -- and of course even more than that is always better.

 

 

John


  • elmiko likes this

#17 terry59

terry59

    Cosmos

  • *****
  • Posts: 9,211
  • Joined: 18 Jul 2011
  • Loc: Colorado, USA

Posted 14 August 2019 - 07:54 AM

Terry,

 

   I agree that calibration is king. Clean data and good calibration are the goals. Can you pin some numbers to what you routinely get?

 

 

 

 

John

I wish I could.... About a week ago my imaging data drive crashed and I lost every bit I've ever collected, except the final images that are my screensaver.

 

frown.gif 



#18 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 08:24 AM

Everyone,

 

   I have just done a quick comparison of areas of the image I showed in my original post. I got real numbers from exactly the same preview areas of each image. Here is what I did.

  • Define two previews in the image: one in bright background and one in dark background.
  • Propagate those two previews to the following images:
    My highest weighted single raw FITS sub of the session.
    The same highest weighted single sub following calibration.
    The raw integrated combination of 180 subs (right out of ImageIntegration)
    The integrated image following flattening with two passes of DBE and one of ABE.
     
  • For each image above, I did the following operations:
    1 - Promote the previews to images (drag onto workspace)
    2 - Apply PixelMath "abs(med(Preview01) - med(Preview02))" to the previews
    Record the value of the Mean of the result using Statistics. (For RGB images after DeBayering, I extracted the Lightness of the PixelMath result and recorded its Mean. A scripted version of the same measurement is sketched just below.)
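
   For anyone who wants to reproduce the measurement outside PixInsight, it boils down to this (a Python sketch; the file name and patch coordinates are hypothetical, and I show the mono case for brevity):

    import numpy as np
    from astropy.io import fits

    data = fits.getdata("stage_image.fits").astype(np.float64)
    p1 = data[140:190, 120:170]      # Preview01: bright background patch
    p2 = data[1100:1150, 1500:1550]  # Preview02: dark background patch

    diff_adu = abs(np.median(p1) - np.median(p2)) * 65535.0
    mean_adu = np.mean(data) * 65535.0
    print(f"{diff_adu:.3f} ADU difference on a mean of {mean_adu:.0f} "
          f"({100.0 * diff_adu / mean_adu:.2f}%)")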

   My results were as follows:

  • Raw Sub: Mean Difference in Backgrounds = 408 ADU
    (Image Mean= 4149, 9.8% difference)
  • Calibrated Sub: Mean Difference in Backgrounds = 111 ADU 
    (Image Mean= 2216, 5.0% difference)
  • Integration of 180 subs: Mean Difference in Backgrounds = 9.926 ADU
    (Lightness Mean=750, 1.32% difference)
  • Flattened Integration of 180 subs: Mean Difference in Backgrounds = 1.274 ADU 
    (Lightness Mean=117, 1.09% difference)

   It is clear that calibration removed a lot of vignetting. Some residual seems to have remained, which continues to show in the integrated image before any background extraction was done. Background extraction (in three passes) got me to less than 2 ADU of difference in the background. (My final "un-flatness" is about 0.15 e-.) With a starting point of almost 10% background differences in the raw subs, I ended up at about 1% irregularity in the flatness of the background.

 

   Now, is that good or bad? I don't have any comparisons so I don't know. Maybe that is on par with what others start with when I see exceptional final images posted or maybe others are starting at much less than 1% background irregularity. Those are the questions I wish to get a better handle on. How flat should I shoot for?

 

 

John


  • elmiko likes this

#19 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 08:26 AM

Terry,

 

I wish I could.... About a week ago my imaging data drive crashed and I lost every bit I've ever collected, except the final images that are my screensaver.

 

frown.gif

   Oh, my! That sucks big time. I lost a bunch of data due to a drive crash several years ago. I try to keep multiple copies on multiple systems now but that eats a huge chunk of space. I feel your pain.

 

 

John



#20 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 14 August 2019 - 10:16 AM

Just a comment on your data...

 

I've seen you post this combination of examples a few times now.  Do you realize that the color cast is almost certainly not in your data?  I would bet good money that if you take a single, raw sub from each filter and do a 1:1:1 combine, the strong blue cast is not there.  If the calibration is working properly, then it wouldn't be there when you combine a single, calibrated sub from each channel, either.

 

I've noticed that the scale algorithm that you use at integration time is where these casts are introduced.  If you use the Statistics tool to look at the median pixel values after integration, I suspect that you'll see that the blue channel has a higher median than the other two.  If you change the output scaling algorithm on ImageIntegration, you can change this behavior.

That's OSC data, no filters.

 

You gave me an idea.  Unlinking the STF works for visualization. 

 

But ImageStatistics still shows an imbalance in the channels (which is compensated for in the unlinked STF).  If I try an arcsinh stretch, the color cast is evident.  If I do PhotometricColorCalibration first, most (not all) of the imbalance disappears.  The image color looks good, but there's a large white gradient.

 

So, were you talking about the "Scale Estimator"?  Suggestions about what I should set it to?

 

Thanks.  I'm learning a lot here, thanks to everyone for tolerating my fumbling around.


Edited by bobzeq25, 14 August 2019 - 11:03 AM.


#21 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 11:29 AM

Bob,

 

   I am not sure whether either of the two Color Calibration routines would change the data enough to matter, other than removing the color cast. It is actually easier (for me) to remove the color cast using ABE with Function Degree 0 and a Subtraction correction. Just remember to UNcheck the Normalize option. Another alternative is to do a Background Neutralization on the image. Again, I am not sure if that alters the background in any way other than removing the overall cast. (There is a very slight difference between an ABEf0 and a BN applied to the same image, amounting to less than 1 ADU over the whole image. For the rough purposes of gauging backgrounds, I expect it is more than good enough.)
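
   Conceptually, that Function Degree 0 subtraction just removes one constant per channel so the background medians line up. A toy Python sketch of the idea (my own illustration, not PixInsight's actual ABE internals):

    import numpy as np

    def neutralize_cast(rgb):
        # rgb: channels-first float array, normalized to [0..1].
        meds = np.median(rgb, axis=(1, 2), keepdims=True)  # per-channel background
        target = meds.min()                  # leave the darkest channel where it is
        return rgb - (meds - target)         # shift the other channels down to match

   After this, the three background medians agree, and whatever brightness variation remains is the gradient you actually want to measure.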

 

 

John



#22 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 14 August 2019 - 11:51 AM

Bob,

 

   I am not sure whether either of the two Color Calibration routines would change the data enough to matter, other than removing the color cast. It is actually easier (for me) to remove the color cast using ABE with Function Degree 0 and a Subtraction correction. Just remember to UNcheck the Normalize option. Another alternative is to do a Background Neutralization on the image. Again, I am not sure if that alters the background in any way other than removing the overall cast. (There is a very slight difference between an ABEf0 and a BN applied to the same image, amounting to less than 1 ADU over the whole image. For the rough purposes of gauging backgrounds, I expect it is more than good enough.)

 

 

John

In my case (the RGB data above), when I tried things, ABEf0 took the cast out to a few tenths of an ADU (16-bit).  PhotometricColorCalibration (including Background Neutralization using a preview) reduced it to maybe 1.4 ADU (16-bit).  When stretched, that 1.4 ADU increased a lot -- not proportionally, somewhat more.  I think it may simply be that the image really has some blue.  But there are so many options here for parameter settings and preview definition that the 1.4 could change.
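
Putting rough numbers on why the stretch blows that 1.4 ADU up: PixInsight's midtones transfer function is steepest near black.  A Python sketch, using the background level quoted earlier in the thread and a made-up midtones balance:

    def mtf(x, m=0.01):
        # PixInsight's midtones transfer function; small m = aggressive stretch.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    bg   = 157.0 / 65535.0           # background level quoted above
    cast = (157.0 + 1.4) / 65535.0   # same background plus the 1.4 ADU cast

    delta_adu = (mtf(cast) - mtf(bg)) * 65535.0
    print(f"a 1.4 ADU cast becomes ~{delta_adu:.0f} ADU after this stretch")  # ~91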

 

I have a whole lot more to think about and learn about this.  I want to do what you did to measure the gradient in ADU.  It will be a while.

 

Thanks so much for this thread.  It's definitely going to improve my processing.


Edited by bobzeq25, 14 August 2019 - 11:57 AM.


#23 pfile

pfile

    Fly Me to the Moon

  • -----
  • Posts: 5,294
  • Joined: 14 Jun 2009

Posted 14 August 2019 - 11:55 AM

the color cast in an OSC image is sometimes caused by your flats not being grey. it's actually not a problem, because all that's happened when you calibrate with a non-grey flat is a per-channel multiplication applied to a linear image, so you're not doing anything to the data that can't be exactly undone by another multiplication or division.

 

as long as the SNR of each channel of the master flat is high enough, you should be good.

 

ABE/DBE usually neutralizes the background. ticking "normalize" normalizes the output image to the input image, which would preserve any cast that exists in the original image (save for any casts caused by vignetting.)
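
to make the "exactly undone" point concrete, a toy sketch in python (numbers invented):

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.uniform(0.1, 0.9, size=(3, 4, 4))        # toy linear rgb data
    flat = np.array([1.0, 0.8, 1.2]).reshape(3, 1, 1)    # non-grey flat tint

    calibrated = image / flat         # flat division introduces a color cast...
    restored   = calibrated * flat    # ...one multiplication removes it exactly
    assert np.allclose(restored, image)                  # nothing was lost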

 

rob


  • bobzeq25 likes this

#24 jdupton

jdupton

    Vanguard

  • *****
  • topic starter
  • Posts: 2,106
  • Joined: 21 Nov 2010
  • Loc: Central Texas, USA

Posted 14 August 2019 - 12:41 PM

Rob,

 

   Yes to not using the Normalize option for DBE and ABE. That is why I specifically recommended above not normalizing when all we want to do is subtract off the color cast of each channel, so that the resulting file is reasonably neutral when we measure the background gradient magnitudes.

 

   And yes, I agree the color cast is coming from the flats and is not a problem. It is rather normal. It's just that it gets in the way of quantifying the magnitude of (lightness) gradients in the image.

 

   That is my goal here. I want to get a strong feeling for how good is good enough. In astro-photography, that is not always an easy value to put a metric on. Hopefully, a few others will take the time to measure some of their own image data so we can compare notes.

 

 

John


Edited by jdupton, 14 August 2019 - 01:14 PM.

  • pfile likes this

#25 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17,383
  • Joined: 27 Oct 2014

Posted 14 August 2019 - 12:51 PM

the color cast in an OSC image is sometimes caused by your flats not being grey. it's actually not a problem, because all that's happened when you calibrate with a non-grey flat is a per-channel multiplication applied to a linear image, so you're not doing anything to the data that can't be exactly undone by another multiplication or division.

 

as long as the SNR of each channel of the master flat is high enough, you should be good.

 

ABE/DBE usually neutralizes the background. ticking "normalize" normalizes the output image to the input image, which would preserve any cast that exists in the original image (save for any casts caused by vignetting.)

 

rob

Question for you.  Say you have a color cast for any reason.  Gradient reduction is usually the very first thing that's done (maybe after a crop).  As I've seen, it will simultaneously remove a color cast.

 

Is there any advantage to removing a color cast first (other than quantifying the effect of the gradient reduction, as mentioned just above)?



