
A comparative test between ASI290MM and ASI224MC in Hi-Res


#1 Marco Lorenzi

Marco Lorenzi

    Explorer 1

  • *****
  • topic starter
  • Posts: 74
  • Joined: 27 Mar 2007
  • Loc: Singapore

Posted 19 September 2019 - 01:07 AM

Hi everyone, with quite a delay I am finally posting the results of a comparative test I ran a few months ago between two of the most popular cameras used in hi-res imaging. The ASI290MM is probably considered the best monochrome camera for planetary imaging on the market at the moment, while the ASI224MC is arguably the reference among color cameras.

Since monochrome cameras are generally considered superior to color ones for hi-res work, I wanted to test the two cameras in rapid sequence, imaging through the same telescope and working in conditions as similar as possible (seeing, sampling, etc.).

 

To test the cameras I used a TeleVue 2x Barlow with a custom-made, variable-length adapter to adjust its amplification and compensate for the different pixel sizes of the 224MC and the 290MM, so that the two cameras worked at a similar sampling.
During the test the seeing was pretty good and stable. I integrated the same total amount of light with both the ASI290MM and the 224MC (5 minutes): the 290MM image is the stack of the best 33% of frames from 4x30 s (red) and 3x30 s (green and blue) video sequences, while the 224MC image is the stack of the best 33% of frames from 5 videos of 60 s each.
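As a side note for anyone wanting to check the sampling match, here is a minimal sketch of the arithmetic involved. The pixel sizes are the published sensor specs, the C14 native focal length is a nominal value, and the image scales are the ones Marco reports later in the thread; the derived focal lengths are illustrative, not his measured figures.

```python
# Rough check of the sampling match between the two cameras.
# Assumptions: published pixel pitches, nominal C14 focal length,
# image scales as reported later in this thread.
PIXEL_UM = {"ASI290MM": 2.9, "ASI224MC": 3.75}            # pixel pitch, microns
REPORTED_SCALE = {"ASI290MM": 0.077, "ASI224MC": 0.073}   # arcsec/pixel
C14_NATIVE_FL_MM = 3910.0                                 # nominal C14 focal length

def image_scale(pixel_um, focal_length_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel[um] / FL[mm]."""
    return 206.265 * pixel_um / focal_length_mm

for cam, scale in REPORTED_SCALE.items():
    fl = 206.265 * PIXEL_UM[cam] / scale      # effective focal length implied by the scale
    amp = fl / C14_NATIVE_FL_MM               # effective Barlow amplification
    print(f"{cam}: ~{fl:.0f} mm effective FL (~{amp:.1f}x), "
          f"{image_scale(PIXEL_UM[cam], fl):.3f} arcsec/px")
```

With those assumptions the 290MM works out to roughly 2x amplification and the 224MC to roughly 2.7x, consistent with a 2x Barlow whose effective power is raised by extra back-spacing.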

 

The panel showing the final comparative result is reduced to respect the forum rules; the full-resolution panel can anyway be viewed on (and downloaded from) my website here: https://www.glitteri...tem/i-58XwmBj/O

 

I processed both images as best I could to bring out all the available detail, so the final results are probably a little more contrasted than what I normally produce. I didn't apply any noise reduction to the final results in order to stay true to the cameras' output.

 

Considerations

 

1) The 290MM produced a slightly better Jupiter, with finer details, especially in the blue channel.
2) Colors with the 290MM were more evident; I had to push the saturation less than with the 224MC. This confirms that RGB filters provide better color separation than the Bayer matrix, which is also easy to see in the 224's relative color-response curves, where the green channel overlaps the others much more.
3) Processing time with the mono sensor was significantly higher (part of the processing has to be done on each channel separately instead of working on an already assembled RGB image).
4) Taking RGB sequences in variable weather (as in my case, where most nights, including the test night, are affected by passing clouds, with clear windows limited to 7-8 minutes) is a real challenge; I had to redo the RGB sequence several times because clouds passed through while I was imaging one of the channels.

 

Ultimately, is a monochrome sensor worth the extra complexity?
I leave it to everyone to draw their own conclusions; in my opinion the answer is 'yes' if you want to push your telescope to its limit (when the seeing allows it).

However, for "normal" use or for observing sites with poor to average seeing, the 224MC confirms itself as a great camera that delivers results comparable to a mono sensor, with much less complexity in shooting and processing and a lower overall cost, particularly once the filters and filter wheel are added to the package.

 

Let me know what you think.

 

Clear Skies
Marco

Attached Thumbnails

  • Jupiter_224MC_290MM_1600.jpg

Edited by Marco Lorenzi, 19 September 2019 - 01:11 AM.

  • Sunspot, John Boudreau, tomwall and 8 others like this

#2 msacco

msacco

    Explorer 1

  • -----
  • Posts: 59
  • Joined: 13 Aug 2018

Posted 19 September 2019 - 01:12 AM

Very interesting thread. I just purchased the 224MC for both guiding and planetary imaging, but I haven't received it yet.

I don't have any filters yet, so the 290 being much more expensive, plus the added cost of filters, was too much for me right now.

 

Love the comparison, the 290 is clearly a winner here. Thanks :)



#3 james7ca

james7ca

    Fly Me to the Moon

  • *****
  • Posts: 7414
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 19 September 2019 - 02:37 AM

Marco, thanks for this comparison. The differences are even more obvious in the download (full size) image from your website. The 290MM definitely wins in this comparison.



#4 CPellier

CPellier

    Apollo

  • -----
  • Posts: 1340
  • Joined: 07 Aug 2010

Posted 19 September 2019 - 04:19 AM

Very good test with nice remarks. Indeed, the mono camera will have a slightly better color rendition. Potentially it will also permit better resolution, although this last point may not hold in every case (depending on conditions, perhaps).

A color camera will never produce correctly separated channels, so if you want true blue-light images of a planet (especially Mars), you must use a mono sensor.


  • Marco Lorenzi likes this

#5 RedLionNJ

RedLionNJ

    Soyuz

  • *****
  • Posts: 3701
  • Joined: 29 Dec 2009
  • Loc: Red Lion, NJ, USA

Posted 19 September 2019 - 05:52 AM

Not sure I'm buying this result.  A more rigorous test would have been to compare the ASI290MC with ASI290MM.

 

More details on the processing regime would be useful?

 

Also on the exposure details?  Gain, frame duration, fps

 

It's too early in the day for me to do the mental math with any degree of certainty - but was the total exposure (the number of photons captured by each photosite and kept after culling for further processing) really the same?

 

Just a bit skeptical - I've seen 290MC images which are equal or superior to mono images.



#6 james7ca

james7ca

    Fly Me to the Moon

  • *****
  • Posts: 7414
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 19 September 2019 - 06:26 AM

Not sure I'm buying this result.  A more rigorous test would have been to compare the ASI290MC with ASI290MM.

 

More details on the processing regime would be useful?

 

Also on the exposure details?  Gain, frame duration, fps

 

It's too early in the day for me to do the mental math with any degree of certainty - but was the total exposure (the number of photons captured by each photosite and kept after culling for further processing) really the same?

 

Just a bit skeptical - I've seen 290MC images which are equal or superior to mono images.

I'd agree that Marco's results should be limited only to a comparison between these two different cameras and even then the results can't be termed absolutely conclusive.

 

However, if you used an ASI290MC would you make any changes to the sampling for that particular camera given that the Bayer pattern in a one-shot-color (OSC) camera operates at an effectively lower sampling? I know that a lot of proponents for OSC say that the Bayer pattern doesn't matter in terms of sampling but mathematically that is quite impossible. So, a user may not notice that difference but it does exist (with absolute certainty, no reason to even try to debate that point). Some users bring out Bayer drizzling as a way to dismiss that difference but the thing is you can drizzle a mono camera also so it's a never ending race.

 

That said, the difference between mono and OSC sampling is probably not that extreme and is probably overshadowed by other factors like the seeing conditions (and the opportunity to gather subs during any given window of time), the user's technique, and the post-processing. Simply put, with proper handling you can produce excellent results with either type of camera, but in terms of sampling alone the mono camera will always start with some benefit over its OSC sibling.


  • DMach likes this

#7 Marco Lorenzi

Marco Lorenzi

    Explorer 1

  • *****
  • topic starter
  • Posts: 74
  • Joined: 27 Mar 2007
  • Loc: Singapore

Posted 19 September 2019 - 09:40 AM

Thanks everyone for the comments !

 

Not sure I'm buying this result.  A more rigorous test would have been to compare the ASI290MC with ASI290MM.

 

More details on the processing regime would be useful?

 

Also on the exposure details?  Gain, frame duration, fps

 

It's too early in the day for me to do the mental math with any degree of certainty - but was the total exposure (the number of photons captured by each photosite and kept after culling for further processing) really the same?

Grant, I agree that, for the pure purpose of testing mono vs color, the best approach would have been the same chip in both flavors; however, apart from not owning a 290MC, I still consider the 224MC to be the best color camera out there in terms of sensitivity, speed and read noise... a pity it is not available in a mono version! The goal of this test was to compare these two cameras in conditions as similar as possible, including sampling. The few comparative tests I have seen around were all taken on different nights and with non-optimized setups (i.e. the same f/ratio for both sensors, producing completely different sampling), which is why I wanted to run one myself.

Here are the exposure details (the same for all sequences with the same filter/sensor combination):

 

ASI290MM @ 0.077 "/pixel

Red Filter: FPS (avg.)=166, Shutter=6.0 ms, Gain=350 (58%) -> Histogram 58%

Green Filter: FPS (avg.)=166, Shutter=6.0 ms, Gain=350 (58%) -> Histogram 55%

Blue Filter: FPS (avg.)=166,  Shutter=6.0 ms, Gain=390 (65%) -> Histogram=58%

 

ASI224MC @ 0.073 "/pixel

Luminance filter : FPS (avg.)=136, Shutter=7.3 ms, Gain=370 (61%) -> Histogram 58%

My filters are Baader LRGB.

 

The imaging scale was pretty close, with the 224MC sampled about 5% finer than the 290MM, so by calculation the mono camera received about 10% more photon flux per pixel.
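To make that estimate explicit, here is a back-of-the-envelope sketch, assuming only that the flux per pixel scales with the patch of sky each pixel covers:

```python
# Per-pixel flux scales with the sky area covered by one pixel, i.e. (image scale)^2.
scale_290mm = 0.077   # arcsec/pixel (mono)
scale_224mc = 0.073   # arcsec/pixel (OSC)

sampling_ratio = scale_290mm / scale_224mc   # ~1.05 -> 224MC sampled ~5% finer
flux_ratio = sampling_ratio ** 2             # ~1.11 -> ~10% more flux per pixel on the mono
print(f"sampling ratio {sampling_ratio:.3f}, per-pixel flux ratio {flux_ratio:.3f}")
```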

 

About the processing, it was comparable for both cameras: stacking in AS!3, derotation (and RGB assembly for the mono) in WinJUPOS, deconvolution in PI (restoration filter, same values for both stacks), color adjustment and saturation push in PS.

 

Just a bit skeptical - I've seen 290MC images which are equal or superior to mono images.

Well, were those images taken with the same telescope, during the same night, at a closely comparable image scale, and processed by the same "hand" and program settings as in my test? If not, although I don't doubt you have seen results from the 290MC equal or superior to some images taken with mono sensors, I don't think you can really draw solid conclusions between one camera and the other, since the seeing and other factors (including different processing skills) probably matter more to the final result than the sensors themselves...

 

However, if you used an ASI290MC would you make any changes to the sampling for that particular camera given that the Bayer pattern in a one-shot-color (OSC) camera operates at an effectively lower sampling? I know that a lot of proponents for OSC say that the Bayer pattern doesn't matter in terms of sampling but mathematically that is quite impossible. 

Agreed, James. In theory the sampling of a Bayer camera should be sqrt(2) higher to produce the same resolution, since two pixels of the same color sit diagonally on the chip. However, it seems that recent debayering algorithms are able to narrow this gap considerably; I am not sure which logic they use. In any case I was substantially oversampling the resolving power of my C14 with both cameras (by more than 4 times), so increasing the sampling even further on the OSC to compensate for the Bayer matrix is probably splitting hairs...
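For what it's worth, here is a minimal sketch of where that sqrt(2) rule of thumb comes from; it is a geometric simplification (real debayering interpolates across colors, so the effective loss is usually smaller), and the pixel pitch is just the published 224MC spec.

```python
import math

p = 3.75  # ASI224MC pixel pitch in microns (published spec)

# In an RGGB Bayer tile, red and blue samples repeat every 2 pixels along
# rows/columns, while green samples form a diagonal lattice with nearest
# neighbours sqrt(2) pixels apart.
red_blue_pitch = 2.0 * p           # ~7.5 um effective pitch for R and B
green_pitch = math.sqrt(2.0) * p   # ~5.3 um effective pitch for G

print(f"R/B effective pitch {red_blue_pitch:.2f} um, "
      f"G effective pitch {green_pitch:.2f} um, mono {p:.2f} um")
# Matching a mono sensor's per-colour sampling therefore suggests oversampling
# an OSC by roughly sqrt(2) (green) up to 2x (red/blue).
```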

In fact I had wanted to work at about f/25 on Jupiter with the color sensor and around f/19 with the mono, but the minimum length of my adapter was still a bit too long, so I ended up sampling at a higher imaging scale than needed with both cameras...

 

Clear skies

Marco


Edited by Marco Lorenzi, 19 September 2019 - 10:27 AM.


#8 Jeff Lee

Jeff Lee

    Surveyor 1

  • *****
  • Posts: 1648
  • Joined: 17 Sep 2006

Posted 19 September 2019 - 09:47 AM

I have to say that for the cost the 224 is an amazing tool. So amazing it's forced me into buying a 294 PRO (the 224 will be my guide camera when I am using EQ rather than AZ; when using AZ, one camera will be on the ES102 and the other on the C8). Now that I finally have the 1.25" 0.5x focal reducer working on my ST80 and C90, the 224 is even more of a bargain. In fact, its cost/benefit ratio might be as high as anything you can find in astronomy right now.

 

I'm not much of a planet person (I'm working through the roughly 1900 Herschel objects I can see from my yard), but for me and for EAA the 224 does well against the 290 with a filter-wheel setup.



#9 james7ca

james7ca

    Fly Me to the Moon

  • *****
  • Posts: 7414
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 19 September 2019 - 10:57 AM

...Agreed, James. In theory the sampling of a Bayer camera should be sqrt(2) higher to produce the same resolution, since two pixels of the same color sit diagonally on the chip. However, it seems that recent debayering algorithms are able to narrow this gap considerably; I am not sure which logic they use.

In any case I was substantially oversampling the resolving power of my C14 with both cameras (by more than 4 times), so increasing the sampling even further on the OSC to compensate for the Bayer matrix is probably splitting hairs...

Yes, but the sampling of your two cameras was not really my concern. In theory, if you were using two IMX290 cameras (one mono and one OSC) you would reach critical sampling on the mono camera at a shorter focal length, and thus the mono camera could either use shorter exposures or receive more photons per pixel for the same exposure time. So, because of the difference in sampling, the mono camera could benefit in terms of pure signal. Then the question becomes: is it even fair to try to compensate for this difference when you test the two cameras? You'd have to increase the focal length for the OSC camera if you really wanted to compare them at a mathematically equivalent level of sampling.

 

As to whether a "recent algorithm" can eliminate this difference I'd say again that that is mathematically impossible. Even though debayer algorithms work very well they can't give you the exact same sampling that you'd get on a mono camera (if both cameras have the same size pixels and use the same image scales). The fact that some say they can't see the difference is perhaps just an indication that their comparison methods are flawed or that their testing was not sensitive enough to show this difference. Of course, if you really can't see any difference after repeated trials then it may not matter in a practical sense. But, I don't think this issue should be ignored or dismissed simply out of hand.

 

Really, the only way to settle this issue would be to run multiple trials with each camera in a way to produce a statistically valid sample for both cameras and then to compare the results via some form of direct, objective measurement. Lacking the latter you'd have to assemble a panel of judges with a sufficient understanding of image quality to make a reasoned verdict. But, none of that is likely to happen and it probably doesn't matter much since as I mentioned earlier pretty much every one of these cameras is capable of producing really outstanding work.
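As one hypothetical illustration of the kind of direct, objective measurement mentioned above, a per-image sharpness score such as Laplacian variance could be computed over repeated trials from each camera and the two distributions compared; the sketch below uses placeholder arrays rather than real stacks, and the metric choice is my own assumption, not something proposed in this thread.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of a simple 4-neighbour Laplacian; higher generally means sharper."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# Placeholder "final images" standing in for repeated captures from each camera;
# in practice these would be the stacked results loaded from disk.
rng = np.random.default_rng(0)
trials_mono = [rng.normal(size=(128, 128)) for _ in range(10)]
trials_osc = [rng.normal(size=(128, 128)) for _ in range(10)]

scores_mono = [laplacian_variance(im) for im in trials_mono]
scores_osc = [laplacian_variance(im) for im in trials_osc]
print(f"mono: {np.mean(scores_mono):.3f} +/- {np.std(scores_mono):.3f}, "
      f"OSC: {np.mean(scores_osc):.3f} +/- {np.std(scores_osc):.3f}")
```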

 

That said, I appreciate what you've done, and it's certainly a better and more interesting approach than simply relying upon uncontrolled comparisons done in too casual a manner.


  • Marco Lorenzi likes this

#10 Rouzbeh

Rouzbeh

    Viking 1

  • -----
  • Posts: 512
  • Joined: 28 Jun 2006

Posted 19 September 2019 - 12:42 PM

Very nice comparison. I have both cameras too, and a C14. I didn't test as extensively as you did, but after a few nights with the 224 I felt the 290MM was better for me.

I'd say it's worth the extra effort, considering all the work that goes into the setup as a whole (more so for larger systems).

 

I am now trying to compare the 290MM with the QHY183M. I'm having difficulty running the QHY properly with FC.


  • Marco Lorenzi likes this

#11 Tom Glenn

Tom Glenn

    Surveyor 1

  • -----
  • Posts: 1949
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 19 September 2019 - 06:21 PM

This is a good conversation starter, and a nicely prepared figure, but I don't believe the overall conclusions are fully justified.  Image 2 is certainly superior to Image 1, but this is an n=1 experiment in which it is impossible to control the variable of the seeing.  I'm less concerned about the color differences (there will always be differences here between mono and OSC) than about the significant difference in sharpness of detail (Image 2 is clearly sharper).  The difference between these two images with respect to detail is very typical of capture-to-capture variation, even when using the same camera and identical settings.  If I compile a series of ten consecutive captures, all taken under steady conditions (as judged by eye) with the same camera and settings, the final results are not identical, often with one (or sometimes several) captures having superior sharpness.  And the magnitude of the difference is about what is displayed here.  And sometimes a WinJUPOS integration actually makes the outcome worse, instead of better.  The image taken with the ASI224 above has a little softness to it compared to the ASI290 image, but I would not conclude this is caused by the camera itself.  Unfortunately, there is no good way to control for the atmosphere, outside of some type of laboratory bench test on a controlled resolution chart.  Absent that, this test would have to be repeated dozens of times on many occasions before any conclusions could be tentatively made.  And those tests would have to take place under excellent conditions (which you may have, given your location), because most images are limited by the atmosphere rather than by the equipment.


  • Marco Lorenzi likes this

#12 Tom Glenn

Tom Glenn

    Surveyor 1

  • -----
  • Posts: 1949
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 19 September 2019 - 06:25 PM

Also, I just noticed that total integration time was 5 minutes for both, yet typical images with the ASI224 run about 3 minutes each, often with ten or more captures being recorded, followed by choosing the sharpest ones to continue processing.  So your test presented above really does represent a brief snapshot of two fairly short capture sequences, with the ASI290 image coming out better in this case.  I would want to see far more data, spread across much longer periods of time, for a comparison.  Nevertheless, a nice presentation, and as I said above, good for conversation.  



#13 Marco Lorenzi

Marco Lorenzi

    Explorer 1

  • *****
  • topic starter
  • Posts: 74
  • Joined: 27 Mar 2007
  • Loc: Singapore

Posted 19 September 2019 - 10:57 PM

Yes, but the sampling of your two cameras was not really my concern. In theory, if you were using two IMX290 cameras (one mono and one OSC) you would reach critical sampling on the mono camera at a shorter focal length, and thus the mono camera could either use shorter exposures or receive more photons per pixel for the same exposure time. So, because of the difference in sampling, the mono camera could benefit in terms of pure signal. Then the question becomes: is it even fair to try to compensate for this difference when you test the two cameras? You'd have to increase the focal length for the OSC camera if you really wanted to compare them at a mathematically equivalent level of sampling.

Yes James, that is actually a very good point: since the mono camera has intrinsically higher resolution thanks to the lack of a Bayer matrix, one could use a coarser sampling to get the same resolving power while enjoying a higher light throughput, and so either use shorter exposures or get a higher S/N for the same exposure time.

That is probably another test I will run at the next Jupiter opposition, with the new adapter I built that allows a wider range of Barlow magnification smile.gif

 

This is a good conversation starter, and a nicely prepared figure, but I don't believe the overall conclusions are fully justified.  Image 2 is certainly superior to Image 1, but this is an n=1 experiment in which it is impossible to control the variable of the seeing.  I'm less concerned about the color differences (there will always be differences here between mono and OSC) than about the significant difference in sharpness of detail (Image 2 is clearly sharper).  The difference between these two images with respect to detail is very typical of capture-to-capture variation, even when using the same camera and identical settings.  If I compile a series of ten consecutive captures, all taken under steady conditions (as judged by eye) with the same camera and settings, the final results are not identical, often with one (or sometimes several) captures having superior sharpness.  And the magnitude of the difference is about what is displayed here.  And sometimes a WinJUPOS integration actually makes the outcome worse, instead of better.  The image taken with the ASI224 above has a little softness to it compared to the ASI290 image, but I would not conclude this is caused by the camera itself.  Unfortunately, there is no good way to control for the atmosphere, outside of some type of laboratory bench test on a controlled resolution chart.  Absent that, this test would have to be repeated dozens of times on many occasions before any conclusions could be tentatively made.  And those tests would have to take place under excellent conditions (which you may have, given your location), because most images are limited by the atmosphere rather than by the equipment.

Yes Tom, you are right that ideally a test, to be significant, should be repeated over several runs/nights in order to be statistically relevant and rule out seeing differences between one set and another. However, at my location I don't see large variations during the night or even from one night to another (with some exceptions); results are pretty consistent, which is a blessing of imaging at the equator, away from jet-stream effects. Again, everybody can draw their own conclusion; mine is that last opposition I consistently recorded more detail with the 290MM than with the 224MC (which I am still routinely using), with the caveat that I could not use the mono sensor on nights with frequent passing clouds, when I could not collect RGB sequences without interruption (which are unfortunately the majority of nights in Singapore).

 

Also, I just noticed that total integration time was 5 minutes for both, yet typical images with the ASI224 run about 3 minutes each, often with ten or more captures being recorded, followed by choosing the sharpest ones to continue processing. So your test presented above really does represent a brief snapshot of two fairly short capture sequences, with the ASI290 image coming out better in this case.  I would want to see far more data, spread across much longer periods of time, for a comparison.

This is not entirely correct. The integration time of each video is not fixed; it should be adjusted according to the resolving power and imaging scale of your setup so that the planet's rotation does not become an issue. More on the technical math behind this here. I have done some tests on this in the past and found that with longer captures, details after stacking were effectively worse than with shorter ones, even though AS!3 tries to minimize the problem somewhat. For my setup I have therefore set 60 seconds as the ideal capture time on Jupiter, and of course I take multiple sets to derotate later and integrate more light (i.e. higher S/N). By comparison, with the C9.25 I used to capture up to 100 seconds. Furthermore, using data spanning up to 30 minutes on Jupiter is not really feasible: you will have a quite noticeable loss of details (in particular on the planet borders) when you operate in good seeing conditions with large telescopes, no matter the single capture duration. With Jupiter the practical limit I found is 10-12 minutes at most, while Saturn can accept longer total integration times, having fewer and lower-contrast disk details. This is of course according to my experience and tests. In this particular comparison I could have taken sets integrating 10 minutes instead of 5 on both cameras, but I frankly don't believe that would have produced any relative difference; both images would surely improve in S/N while keeping the same performance gap...
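For anyone wanting to see how a capture-length limit like that 60 seconds comes about, here is a minimal sketch of the rotation-smear estimate. The tolerance used (keep the drift below the Dawes limit of a 356 mm aperture) and the Jupiter figures are nominal assumptions for illustration, not the numbers Marco actually used.

```python
import math

# Nominal figures, for illustration only.
ROT_PERIOD_S = 9.925 * 3600          # Jupiter rotation period, ~9.9 h
EQ_RADIUS_KM = 71492.0               # Jupiter equatorial radius
DIST_KM = 4.2 * 1.496e8              # Earth-Jupiter distance near opposition, ~4.2 AU
ARCSEC_PER_RAD = 206265.0

# Linear speed of a feature at the central meridian, then its apparent drift rate.
v_km_s = 2.0 * math.pi * EQ_RADIUS_KM / ROT_PERIOD_S       # ~12.6 km/s
drift_arcsec_s = v_km_s / DIST_KM * ARCSEC_PER_RAD         # ~0.004 arcsec/s

# Allowed smear: here the Dawes limit of a 356 mm (C14-class) aperture.
dawes_arcsec = 116.0 / 356.0                               # ~0.33 arcsec
max_capture_s = dawes_arcsec / drift_arcsec_s

print(f"drift ~{drift_arcsec_s:.4f} arcsec/s -> keep captures under ~{max_capture_s:.0f} s "
      f"to hold smear below {dawes_arcsec:.2f} arcsec")
```

With those assumptions the limit comes out around 80 s; a tighter tolerance (for example one pixel at 0.077 arcsec/px) shortens it to roughly 20 s, so a 60 s capture sits between the two.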

 

Clear skies

Marco


Edited by Marco Lorenzi, 20 September 2019 - 12:13 AM.


#14 aeroman4907

aeroman4907

    Apollo

  • -----
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 20 September 2019 - 06:53 AM

Marco, I apologize if you stated this in your posting, but I did have a couple of questions on your processing with the 224MC.

 

Did you process the image as an RGB and then create synthetic R, G, B channels after processing, or did you create synthetic R, G, B channels for processing and then recombine for your full color final image?  Also, did you ever use a synthetic L channel and process that as well?


Edited by aeroman4907, 20 September 2019 - 06:53 AM.


#15 Kokatha man

Kokatha man

    Voyager 1

  • *****
  • Posts: 12925
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 20 September 2019 - 09:10 AM

 

Yes Tom, you are right that ideally a test, to be significant, should be repeated over several runs/nights in order to be statistically relevant and rule out seeing differences between one set and another. However, at my location I don't see large variations during the night or even from one night to another (with some exceptions); results are pretty consistent, which is a blessing of imaging at the equator, away from jet-stream effects. Again, everybody can draw their own conclusion; mine is that last opposition I consistently recorded more detail with the 290MM than with the 224MC (which I am still routinely using), with the caveat that I could not use the mono sensor on nights with frequent passing clouds, when I could not collect RGB sequences without interruption (which are unfortunately the majority of nights in Singapore).

 

This is not entirely correct. The integration time of each video is not fixed; it should be adjusted according to the resolving power and imaging scale of your setup so that the planet's rotation does not become an issue. More on the technical math behind this here. I have done some tests on this in the past and found that with longer captures, details after stacking were effectively worse than with shorter ones, even though AS!3 tries to minimize the problem somewhat. For my setup I have therefore set 60 seconds as the ideal capture time on Jupiter, and of course I take multiple sets to derotate later and integrate more light (i.e. higher S/N). By comparison, with the C9.25 I used to capture up to 100 seconds. Furthermore, using data spanning up to 30 minutes on Jupiter is not really feasible: you will have a quite noticeable loss of details (in particular on the planet borders) when you operate in good seeing conditions with large telescopes, no matter the single capture duration. With Jupiter the practical limit I found is 10-12 minutes at most, while Saturn can accept longer total integration times, having fewer and lower-contrast disk details. This is of course according to my experience and tests. In this particular comparison I could have taken sets integrating 10 minutes instead of 5 on both cameras, but I frankly don't believe that would have produced any relative difference; both images would surely improve in S/N while keeping the same performance gap...

 

Clear skies

Marco

Hi Marco - I tend to not respond to these types of posts because in the end it comes down to individual experiences...& you do make the point that these findings are according to your own experience & testing. wink.gif

 

I have a quite different view based on our own experiences, even though we still prefer the ASI290MM - almost as much for the flexibility as anything - but in this last 2019 season (as opposed to last year with Jove, Saturn & Mars, for instance) constant passing cloud would have sabotaged a lot of our r-g-b sequences if we'd used the mono predominantly.

 

The reason I have responded is specifically that part of your quote above which I have emboldened & italicised: I posted a set of examples earlier this year of some reasonable Jovian imaging outcomes using various WinJUPOS integration timespans http://www.momilika....12-Square.png       EDIT - please remove the "%C2%A0%C2%A0" that CN always seems to want to add to the url to get this link to function properly!

 

The reality is that almost all AA images display Quote: <"loss of details (in particular on the planet borders)"> & as an example I've taken the liberty of inserting one from the link I've given (reduced to the scale of your own images) for comparison...using what I freely admit is an "uber-long" WJ integration timespan of over 45 minutes...but nonetheless one that I don't think suffers in any way in comparison wrt the limbs...despite its extreme duration.

 

I'm not pooh-poohing your own deliberations, merely pointing out that besides a few of the concerns other folk have noted, it is always going to be extremely difficult to nail down any absolutes in these matters - because someone will nearly always produce examples suggesting either a different opinion, or at least contrary evidence: apologies for the lack of f/l on our image btw, I'm away from home & cannot access those details atm...but it would be operating at around 5800mm. (approx. f16)

 

Incidentally, with the ASI224MC we only reduce the single capture durations to 150" at opposition when Jove is at maximum apparent diameter, varying it from that to anything up to 220" - solely dependent upon this apparent diameter, even with our highest resolution images from this year...& most of these end up in WJ integrations of 15' to 25'. wink.gif

 

Comps.png

 

pps: I've just realised that my current thread https://www.cloudyni...ow-integration/ has a comparison between an integration of 5 x 3-minute captures & a single stack in Post #13 where - if anything - the integration (over at least a 20-minute total timespan) displays more clarity near the limb peripheries. lol.gif

 

I can respect the fact that you approached the captures etc. of your testing with some specific ideas on how to create some uniformity, Marco, but tbh any of these "comparative tests" are fraught with all sorts of variables & inconsistencies...such that, as I initially said, personally I really don't give them too much credence, though I certainly applaud you for your efforts! waytogo.gif smile.gif


Edited by Kokatha man, 20 September 2019 - 09:17 PM.

  • Marco Lorenzi likes this

#16 Jenz114

Jenz114

    Explorer 1

  • -----
  • Posts: 54
  • Joined: 15 Aug 2019
  • Loc: Southwest Missouri, USA

Posted 20 September 2019 - 11:33 PM

Interesting thread and conversation starter, not to mention great image results!

I personally decided the added complexity of capture and processing of the mono cameras was not worth the additional cost and effort at my current level of imaging. I went with the ASI224MC and have found it more than adequate given my experience and local conditions.

As a side note, I do wonder if anyone has tried LRGB filters over a color camera and processing the captures like a mono to see what would happen. Huge degradation in image quality? Marginal improvements? How would the Bayer matrix and camera respond to these filters? Hmm... thinking1.gif



#17 aeroman4907

aeroman4907

    Apollo

  • -----
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 22 September 2019 - 08:53 AM

As a side note, I do wonder if anyone has tried LRGB filters over a color camera and processing the captures like a mono to see what would happen. Huge degradation in image quality? Marginal improvements? How would the Bayer matrix and camera respond to these filters? Hmm... thinking1.gif

I haven't tried using a filter with a color camera, but in a recent post I discussed how I think that would be a poor option here: https://www.cloudyni...-color-sensors/


  • Jenz114 likes this

#18 Jenz114

Jenz114

    Explorer 1

  • -----
  • Posts: 54
  • Joined: 15 Aug 2019
  • Loc: Southwest Missouri, USA

Posted 22 September 2019 - 06:55 PM

I haven't tried using a filter with a color camera, but in a recent post I discussed how I think that would be a poor option here: https://www.cloudyni...-color-sensors/

I just finished reading your post, thank you again!



#19 Marco Lorenzi

Marco Lorenzi

    Explorer 1

  • *****
  • topic starter
  • Posts: 74
  • Joined: 27 Mar 2007
  • Loc: Singapore

Posted 23 September 2019 - 01:34 AM

Marco, I apologize if you stated this in your posting, but I did have a couple of questions on your processing with the 224MC.

 

Did you process the image as an RGB and then create synthetic R, G, B channels after processing, or did you create synthetic R, G, B channels for processing and then recombine for your full color final image?  Also, did you ever use a synthetic L channel and process that as well?

I processed the 224MC as RGB and didn't add or separately process a synthetic luminance. For the mono camera, I stacked and derotated each channel separately, but then processed the image only after having assembled it into RGB.

 

The reality is that almost all AA images display Quote: <"loss of details (in particular on the planet borders)"> & as an example I've taken the liberty of inserting one from the link I've given (reduced to the scale of your own images) for comparison...using what I freely admit is an "uber-long" WJ integration timespan of over 45 minutes...but nonetheless one that I don't think suffers in any way in comparison wrt the limbs...despite its extreme duration.

 

<...> 

 

Incidentally, with the ASI224MC we only reduce the single capture durations to 150" at opposition when Jove is at maximum apparent diameter, varying it from that to anything up to 220" - solely dependent upon this apparent diameter, even with our highest resolution images from this year...& most of these end up in WJ integrations of 15' to 25'. wink.gif

Hi Nicholas, thanks for your contribution. In fact, looking at your exceedingly long derotation makes me wonder what I may be doing wrong in my derotation process, since I have never been able to assemble sequences spanning longer than 15 minutes without some kind of artifact showing up in the stack. So there is definitely something I need to improve, since it is clear that going beyond the limit I stated previously is quite possible! Ditto for the 60" self-imposed limit; I wonder whether you are perhaps using many more alignment points on an oversized (drizzled?) image of Jupiter than I do, so that AS!3 somehow does a better job of correcting the planet's rotation during the capture time? Anyway, taking several shorter videos or fewer longer ones doesn't change much if the total light collected is the same, since in the end we are always stacking thousands of very short single exposures.

Thanks anyway for adding this interesting comparison, which allows me to revisit my statement and rethink my imaging approach for next year's opposition (and apologies to Tom for the apparently wrong reply I gave to his post).

Apart from that, it is clear that the more frames we are able to stack the better (in my test I kept the acquisition time shorter, but I usually collect 12-13 minutes of light at high fps for each image to get a higher S/N). It is also interesting that the apparent level of detail in our images is pretty comparable (yours obviously having less noise), which means I may well be close to the resolving limit of my C14, even if the fact that you report working at "only" 5800 mm of focal length (f/16) puzzles me a bit; I would have expected you to need a longer f/ratio to reach the OTA's limits with the 224MC scratchhead2.gif

 

Clear skies

Marco



#20 Kokatha man

Kokatha man

    Voyager 1

  • *****
  • Posts: 12925
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 23 September 2019 - 06:43 AM

Hi again Marco - it's "Darryl" as in Darryl Milika, Pat Nicholas is my partner. smile.gif

 

Not much I can add: we mainly under-sample, which must give a (very) slight assistance with capture durations, & of course assists with ROIs & fps: said under-sampling is supposed to be advantageous when drizzling in AS!3 (which we always do btw), but I have conducted a few trials myself where no tangible differences could be seen when changing the image scale/focal length quite drastically from around f16 to f20+...though I do "feel" the lesser scales do assist with seeing in general.

 

The only drizzling we enable is in AS!3 (none beforehand via any other program), so apart from using around 80-odd APs, I don't think my AS!3 settings are anything special...

 

As far as WinJUPOS integrations are concerned, the "auto detect" does make Jovian integrations quite simple - although this function often plays up rather mysteriously & I have to set the AF by eye: nonetheless I always try to ensure there are no "gaps" between each colour capture or sets of mono captures...we always re-focus after each colour capture & between each set of r-g-b captures with the mono camera. (meaning before each r-g-b sequence we check each filter's focus position on the DRO & utilise these for the r-g-b captures, then re-check before the next set...so with either camera it is really only those delays plus the defogging & cooling of the corrector plate - a very frequent occurrence btw where we are - that make up the time intervals between stacks etc)

 

That's all pretty standard stuff I'm sure you'd agree & as I said WJ integrations are usually around the 20 minute mark or so...there nearly always seems to be a small sequence or time interval where the seeing & imaging outcomes peak & these don't usually last more than a half hour most times...thus the integrations usually being in that timespan...

 

The only "artefact" we encounter is that from slight over-sharpening...usually "LD" (limb darkening) in WinJUPOS takes care of that, but of course if we do "push" the sharpening in poorer seeing it is simple & highly-effective to create 2 sets of all images, one set significantly less-sharpened so no edge-effect appears...WJ'ing both sets of stacks & layer masking the limbs with a feathering to get the softer edges...

 

I don't think that 45 minute integration was processed that way, but it is a helpful approach that maintains limb integrity should you feel the need to "push" sharpening, although when I say "push" I'm only referring to the limb artefact - not garishly over-sharpened images & details of course! wink.gif



#21 Marco Lorenzi

Marco Lorenzi

    Explorer 1

  • *****
  • topic starter
  • Posts: 74
  • Joined: 27 Mar 2007
  • Loc: Singapore

Posted 24 September 2019 - 02:31 AM

Hi again Marco - it's "Darryl" as in Darryl Milika, Pat Nicholas is my partner. smile.gif

Sorry Darryl! I should have paid a bit more attention blush.gif

 

 

Thanks for the detailed insight into your process flow, much appreciated. As I understand it, there are several differences between your capturing/processing approach and mine:

- You tend to undersample slightly (f/ratio vs pixel size is about 4.5x), then drizzle in AS!3 to recover detail (even if you find no visible difference compared with sampling higher). I usually sample much higher (f/ratio vs pixel size is about 7x) and do not drizzle; next opposition I will try whether your approach is better in terms of detail or more "forgiving" of planet rotation (see the sketch after this list for the usual sampling rule of thumb). Certainly a smaller disk is brighter, i.e. shorter exposures and higher fps, which always helps to further minimize seeing effects...

- You set quite a large number of anchor points (80) on a larger (3x?) drizzled image, while I use about a third of that on a normal-size disk; this is perhaps why AS!3 does a better job of minimizing rotation during the capture in your case. An interesting approach I will also try.

- I do not refocus between color filters; I found the difference quite small, and it is time consuming when capturing several short sequences. However, you are probably right that there could be some benefit in refocusing (as I do in my deep-sky imaging).

- I am not sure about the LD feature in WinJUPOS; I will give it a look since I have never used it.
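As an aside on those 4.5x and 7x figures, here is a minimal sketch of the usual Nyquist-based relation between f-ratio and pixel size; the 550 nm reference wavelength and the factor-of-two sampling criterion are standard assumptions, not values taken from this thread.

```python
# Critical (Nyquist) f-ratio for a diffraction-limited telescope: the optical
# cutoff frequency at the focal plane is 1/(lambda * F), and Nyquist sampling
# needs two pixels per cycle, so F >= 2 * p / lambda.
WAVELENGTH_UM = 0.55  # green light, a common reference wavelength

def critical_f_ratio(pixel_um, wavelength_um=WAVELENGTH_UM):
    return 2.0 * pixel_um / wavelength_um

for cam, p in {"ASI290MM": 2.9, "ASI224MC": 3.75}.items():
    print(f"{cam}: pixel {p} um -> critical f-ratio ~f/{critical_f_ratio(p):.1f}")

# 2 / 0.55 ~ 3.6, so "f-ratio ~4.5-5x the pixel size" adds a margin over strict
# Nyquist (helpful for Bayer sensors and drizzling), while ~7x is oversampling.
```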

 

Overall, thanks for the input; good food for thought and a way to see what I can improve in my processing routine waytogo.gif

 

Clear Skies

Marco



#22 Kokatha man

Kokatha man

    Voyager 1

  • *****
  • Posts: 12925
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 24 September 2019 - 04:23 AM

No worries Marco - just to repeat myself, I do not <"set quite a large number of anchor points (80) on a larger (3x?) drizzled image">.

 

The APs (about 84) are set on the capture-scale AVI in AS!3, which is then 3x drizzled during AS!3's processing...I used to use significantly fewer APs, but trialling led me to believe the greater number was producing tangible benefits...

 

Lower sampling is mainly (as I see it from my perspective) about achieving higher fps via slightly smaller ROI heights, due to a smaller disk diameter in FireCapture - of course a slightly smaller apparent diameter gives a very small advantage in total imaging timespans for each capture or capture set, but the way I look at it, what I see is a slightly better response in lesser seeing conditions, probably more than anything else...

 

LD doesn't provide any real benefits besides ameliorating the limbs slightly...the real "secret" to WinJUPOS integrations is the accuracy of the AF positioning - but if "Auto-detect" is functioning for Jove then that should be irrelevant...although I "suspect" you "might" still be able to cause discrepancies that "could" flaw the outcomes... confused1.gif so whether by eye or auto-detect I always scrutinise each image as it drops into WJ to look out for a very smooth & totally horizontal "spin" as the next image appears. (by shifting the "Open" window clear of the current image so I can see everything when I open the next - but maybe this is all part of my obsessiveness! rofl2.gif )

 

Your <"it is also interesting that the apparent level of detail in our images is pretty comparable (yours obviously having less noise), which means I may well be close to the resolving limit of my C14"> I hope isn't correct for either of us, Marco: we have quite a few higher resolution images this year than the one I posted with your images above...but regardless, I don't want to accept such a limitation even if it is an inevitability sometime..."never surrender" - or at least not until I get too old..! rofl2.gif

 

On the issue of constant re-focusing, I'm afraid that is one area I am utterly adamant about - small shifts of half a dozen or so microns are always happening, & I have the advantage of an eagle-eyed partner to confirm my own observations there...and often the shift can suddenly be significantly greater: no one should want to capture a poorly-focused set of data imho! wink.gif

 

 



#23 Tom Glenn

Tom Glenn

    Surveyor 1

  • -----
  • Posts: 1949
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 25 September 2019 - 01:54 AM

(and apologies to Tom for the apparently wrong reply I gave to his post).

No apologies needed, Marco!  I have enjoyed reading this thread, and disagreement followed by productive discussion always leads to positive outcomes as far as I'm concerned.  I think the mathematics of planetary rotation and capture duration that you describe in your link is sound, but as Darryl points out, many of these theoretical limitations can be overcome, due in part to improvements in current software, as well as to the limitations imposed on all amateurs by the atmosphere.  As long as the blur produced by rotation (after stacking with multiple APs in AS!3) is less than the degradation that occurs in all amateur images due to turbulence, it will go largely unnoticed.  The other point that becomes apparent in these discussions is that there are so many different ways we process our images, with many subtle variations, that it is very difficult to compare image outcomes and explain with certainty what caused the differences.



#24 TareqPhoto

TareqPhoto

    Aurora

  • -----
  • Posts: 4534
  • Joined: 20 Feb 2017
  • Loc: Ajman - UAE

Posted 25 September 2019 - 03:39 PM

For me it is simple: for the Moon both are good enough, and if I want color then a color camera is better for me than mono. I had real headaches and a hard time trying to align the filters for the Moon and produce good enough colors, even if they are true and vibrant, while with a color camera I had more flexible control over the colors. In the end I liked the color camera's results for the Moon in color, but for monochrome results of the Moon I like the mono camera better.

 

For the planets it is all about seeing as well. If the seeing is good then focusing with mono is easier, and if the seeing is stable I can keep using mono for RGB and get very nice results; but if the seeing is only average or poor then I just don't bother with mono and switch to color. Most of the time, if I process well, many people can't tell the difference between mono RGB and a color camera. We know that with mono and RGB we have more control, not just for planetary but even for DSO; but I have already read those mono-vs-OSC discussions, and it sounds like the big advantage of color cameras, or roughly the main reason for them, is the time spent capturing and processing. For DSO it is more complicated if the sky is not dark, but for planetary and lunar work color always seems to be the winner in terms of time.



