Dan K.....thanks very much for your constructive criticism. It comes as no surprise to me that someone firmly believes this test is flawed. However, I would like the opportunity to point out the flaws in your comments........especially since I am the tester.
See my replies below in green:
The first thing that would invalidate any conclusions is the misalignment of your color channels. The larger stars all have a blue 'cap' and a reddish bottom.
These images were processed with a fairly basic workflow. I am not going to go into detail about processing as I would be here all day and night. Since you appear to have a valid point, I will not question your skill or experience. However, the blue and tan fringing you are seeing is not a result of poor processing or channel alignment. It is a refractive optical effect called chromatic aberration. Refractive optics focus different wavelengths of light at different points. Feel free to read about this here ....... I am also a precision rifle shooter and had a very lengthy discussion about this with a top engineer at Schmidt & Bender, a high-quality German optics company.
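For anyone who wants to see the effect in numbers, here is a quick back-of-the-envelope sketch......a single thin lens with approximate BK7-like glass indices, nothing like the actual prescription of either scope......showing why blue and red light cannot come to focus at the same point:

```python
# Minimal sketch of axial chromatic aberration in a simple refractor.
# This is NOT the Esprit's or the AP's actual design: just a single thin
# lens with approximate BK7-like crown-glass indices to show the effect.

# Approximate refractive indices for a BK7-like glass at three wavelengths
n_by_wavelength = {
    "blue  (486 nm)": 1.522,
    "green (588 nm)": 1.517,
    "red   (656 nm)": 1.514,
}

# Thin-lens maker's equation: 1/f = (n - 1) * (1/R1 - 1/R2)
R1, R2 = 500.0, -500.0  # surface radii in mm (illustrative values only)

for band, n in n_by_wavelength.items():
    f = 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))
    print(f"{band}: focal length ~ {f:.1f} mm")

# Blue comes to focus several mm shorter than red, so out-of-focus color
# halos can appear around bright stars even with perfectly aligned channels.
```

A well-corrected apochromatic triplet pulls those foci much closer together; the sketch only illustrates the basic effect.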
*** If that is chromatic aberration, then why is it constant over the entire image?
Please read the article that you referenced to me, especially the part about axial and transverse chromatic aberration. You didn't actually read this article yourself, did you? <g>
Roland will no doubt be disheartened that you have found extensive 'chromatic aberration' across the entire field of his 160.
His effrontery in calling this an 'APO'chromatic telescope is offensive to me.
Pull your 16-bit image into Photoshop or something similar. Look at the individual RGB channels at 200%. You will see non-circular star shapes that don't share common centers.
The common causes for that would be guiding errors, flexure, or something similar. It could also be a processing mistake.
It is not chromatic aberration.
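If you would rather measure it than eyeball it, something along these lines will show the per-channel offsets (a rough Python sketch; the file name and star coordinates are placeholders, not your actual data):

```python
# A quick way to check channel registration rather than eyeball it.
# Rough sketch: assumes a debayered RGB frame already loaded as a NumPy
# array; the file name and star coordinates below are placeholders.
import numpy as np

def centroid(patch):
    """Intensity-weighted center of mass of a 2-D star cutout."""
    y, x = np.indices(patch.shape)
    total = patch.sum()
    return (x * patch).sum() / total, (y * patch).sum() / total

rgb = np.load("stacked_frame.npy")    # placeholder: H x W x 3 image data
x0, y0, r = 1024, 768, 15             # placeholder: an isolated bright star
cutout = rgb[y0 - r:y0 + r, x0 - r:x0 + r, :].astype(float)

for i, name in enumerate(("R", "G", "B")):
    cx, cy = centroid(cutout[:, :, i])
    print(f"{name} centroid: ({cx:.2f}, {cy:.2f})")

# If the R, G and B centroids land at different (x, y) positions by the
# same amount everywhere in the frame, that is misregistration. Lateral
# chromatic aberration grows with distance from the optical axis, so it
# would not be constant across the field.
```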
Suppose that you take two cars down to the drag strip, a Model T and a Bugatti Veyron.
You start them off, turn around to talk to the pretty girl behind you, and after a couple of minutes look back at the track to see both cars sitting at the end of the quarter mile.
Many of the responders here would say "Well, there's obviously no difference between the two".
This is exactly why there are test results displayed at the end of that quarter mile. When the gentleman turns around from talking to the girl, he can clearly see that the Bugatti made it to the end first by analyzing the test result (Elapsed Time).
*** You're kidding, right?
You have totally missed the point of the analogy.
The point is that this camera 'test' does not have the sensitivity to distinguish differences between these two scopes.
If you are going to devise a test to differentiate between two quantities, then you must use a testing device that is sensitive enough to measure a difference. You haven't done this.
Your "no difference" image results is not evidence of no difference between scopes, it is a measure of the testing device not being sensitive enough.
An OSC camera is not the best instrument for this test. The images were not fully calibrated. The author states an unfamiliarity with the processing software. We have no idea what the processing steps were. The color channels are misaligned. The data has obviously been massaged. The results are presented as 8-bit JPEGs.
Unfortunately, an OSC camera was chosen for the test because that is what was available. That is also how the Esprit is marketed; it comes with a T-ring that attaches directly to its flattener. To even the score, that is exactly why two of them were used. With all due respect......I am familiar enough with my processing software. I was not shooting for an APOD here, but if you are familiar with PixInsight, I will be happy to post my general workflow......which was used to produce these images in a side-by-side fashion. For example......I have a technique for reducing noise......I can use it on both images successfully. PixInsight has a pretty lengthy learning curve, and although I have not mastered every tool it offers, I know enough to produce decent results. Later tonight, I will produce a histogram as well as the picture and prove to you that the channels are lined up well enough to produce a test result (Elapsed Time).
*** "It's all I had" does not make the camera qualified.
You don't use a histogram to show that channels are aligned; we are talking about the registration of the color channels. We are talking about the stars and other detail in each color channel being aligned, one directly over another.
Your choice of alignment algorithm is important, especially with an OSC camera. Whether you choose 'nearest neighbor' or 'bicubic B-spline' makes a difference in FWHM and star shape.
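For what it is worth, here is a rough simulation of the effect (synthetic star, and scipy's shift routine standing in for whatever resampler your stacking software actually uses):

```python
# Rough simulation of how the registration interpolant alone changes a
# stacked star. Synthetic data; scipy.ndimage.shift stands in for whatever
# resampler the stacking software actually uses.
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(1)
y, x = np.mgrid[0:41, 0:41]
star = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 1.5 ** 2))  # sigma = 1.5 px

def rms_width(img):
    """Intensity-weighted RMS radius of the star, in pixels."""
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    return np.sqrt((((x - cx) ** 2 + (y - cy) ** 2) * img).sum() / total)

for order, label in [(0, "nearest neighbor"), (3, "cubic spline")]:
    stack = np.zeros_like(star)
    for _ in range(50):                        # 50 dithered sub-frames
        dy, dx = rng.uniform(-0.5, 0.5, 2)     # random sub-pixel pointing offsets
        frame = shift(star, (dy, dx), order=3)           # how the sky actually moved
        stack += shift(frame, (-dy, -dx), order=order)   # the registration step
    print(f"{label}: stacked RMS star width ~ {rms_width(stack / 50):.3f} px")

# Nearest neighbor leaves up to half a pixel of residual misplacement in
# every frame, which broadens the stacked star; the spline interpolant
# keeps the stack close to the original width.
```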
This is hardly a rigorous, scientific investigation.
.....it was not meant to be a rigorous, scientific investigation. It was meant to show that the Esprit line of scopes can compete with a popular, high-quality line of optics for less money and a shorter wait. I believe I have proved that point very well.
*** ...and I believe that you haven't. <g>
It's also informative to know that you were out to prove a point, i.e., that you were biased.
I believe that all you have shown is that your testing procedure wasn't nearly up to the task of differentiating between these two scopes.
Did you star test either of these scopes?