I'm looking to buy a monochrome planetary camera, and I figured I could use the ROI mode to increase the fps. To get a better feel for the FOV, I opened Stellarium (I chose Jupiter as the target, since it's the largest planet) and entered my telescope and sensor specifications. So far so good.
From here on I'll work through a practical example to make my final doubt clearer.
- I'm using a Meade 12" ACF f/10 (D = 304.8mm, FL = 3048mm); from this I know the theoretical maximum useful magnification is ~2.5xD = 760x;
- my theoretical optimal sampling in the visible band (~550nm) is 0.121"/pixel (so as to satisfy the Nyquist criterion);
- the chosen camera is a QHY5L-II, because its 3.75um pixels give me (with a 2x barlow) a resulting sampling of 0.126"/pixel, the closest to the theoretically calculated value;
- the best resolution I can realistically hope for, limited by the average local seeing, is around 2".
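Just to show my work, here's the arithmetic behind those bullet points as a small Python sketch (the constants are simply my setup's numbers; the formulas are the usual rules of thumb, nothing exotic):

```python
# My setup's numbers (assumed values, see the list above).
D_MM = 304.8       # Meade 12" ACF aperture in mm
FL_MM = 3048.0     # native focal length (f/10)
PIXEL_UM = 3.75    # QHY5L-II pixel size in microns
BARLOW = 2.0       # 2x barlow

# Rule-of-thumb maximum useful magnification: ~2.5x the aperture in mm.
max_magnification = 2.5 * D_MM
print(f"max useful magnification ~ {max_magnification:.0f}x")  # ~762x

def pixel_scale(pixel_um, fl_mm):
    """Pixel scale in arcsec/pixel: 206.265 * pixel(um) / FL(mm)."""
    return 206.265 * pixel_um / fl_mm

native = pixel_scale(PIXEL_UM, FL_MM)
with_barlow = pixel_scale(PIXEL_UM, FL_MM * BARLOW)
print(f'native sampling: {native:.3f}"/pixel')       # ~0.254
print(f'with 2x barlow:  {with_barlow:.3f}"/pixel')  # ~0.127, quoted as 0.126
```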
With that said, to calculate the FOV: given the camera's full resolution of (1280x960) pixels, I work out the chip size, (4.8x3.6)mm, hence a diagonal of 6mm.
Now, to calculate the 'corresponding magnification', instead of using the focal length as one would for an eyepiece, I use the diagonal just calculated, which gives 508x. Accounting for the 2x barlow (still there to achieve the optimal sampling), this becomes 1016x, which is already well beyond the fateful maximum theoretical magnification allowed by my telescope, calculated above, but still acceptable.
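The chip-size and 'magnification' bookkeeping above (my own way of reckoning it, treating the chip diagonal like an eyepiece focal length) looks like this:

```python
import math

PIXEL_UM = 3.75   # QHY5L-II pixel size in microns
FL_MM = 3048.0    # native focal length
BARLOW = 2.0      # 2x barlow

# Chip size from resolution x pixel size.
w_mm = 1280 * PIXEL_UM / 1000.0    # 4.8 mm
h_mm = 960 * PIXEL_UM / 1000.0     # 3.6 mm
diag_mm = math.hypot(w_mm, h_mm)   # 6.0 mm

# My 'corresponding magnification': FL divided by the chip diagonal.
mag = FL_MM / diag_mm              # 508x
mag_barlow = mag * BARLOW          # 1016x
print(w_mm, h_mm, diag_mm, mag, mag_barlow)
```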
Now, however, at (1280x960) pixels, i.e. at full resolution, the camera in question manages a measly 30fps: far too few! So I want to take advantage of the smallest usable ROI, looking for the right compromise between the highest fps on one side and keeping the captured planet, e.g. Jupiter, within the FOV on the other.
Well, given that Jupiter has an average angular diameter of 39.4" and that the sampling is 0.126"/pixel, we immediately obtain ~(313x313) pixels, which is therefore the minimum ROI that still fits the planet.
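That minimum-ROI figure is just the angular diameter divided by the pixel scale, rounded up:

```python
import math

JUPITER_ARCSEC = 39.4  # average apparent equatorial diameter
SCALE = 0.126          # "/pixel with the 2x barlow (from above)

# Minimum number of pixels along one axis to fit the whole disk.
min_pixels = math.ceil(JUPITER_ARCSEC / SCALE)
print(min_pixels)  # 313
```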
The problem is that the smallest capture resolution this camera offers is 320x240 at 200fps (enough for the width, but not for the height), so I have to fall back on 640x480, even though that means dropping to only 80.4fps...
And here is where the fun begins, because so far everything (almost) adds up.
Using this latter resolution, I recalculate the corresponding 'magnification' as before: the diagonal is now 3mm, which gives 1016x, i.e. 2032x with the barlow lens: completely off the scale compared to the theoretical 760x.
Now you'll tell me: "Just remove the barlow", but then the sampling is completely wrong. Does this mean I can't use the ROI?
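Same calculation as before, now with the ROI's pixel counts:

```python
import math

PIXEL_UM = 3.75  # microns
FL_MM = 3048.0   # native focal length
BARLOW = 2.0

# Diagonal of the 640x480 ROI in mm.
diag_px = math.hypot(640, 480)        # 800 px
diag_mm = diag_px * PIXEL_UM / 1000.0 # 3.0 mm

# Same eyepiece-style 'magnification' as before, FL / diagonal.
mag = FL_MM / diag_mm                 # 1016x
print(mag, mag * BARLOW)              # 1016x, 2032x with the barlow
```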
I've thought about it quite a lot, but I can't find a solution.
The first thing that came to mind is that the ROI is nothing more than a software crop: the physical sensor size doesn't change, only the data from fewer pixels is transmitted at the output (even if, unfortunately, the fps increase isn't proportional), and therefore there's no need to recalculate the sensor size, which in this case always remains (4.8x3.6)mm.
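One sanity check I can run on this first line of reasoning: if the ROI really is just a crop of the readout, the patch of sky it frames should follow directly from the pixel count and the pixel scale alone.

```python
# Sketch of the crop interpretation: FOV of an ROI as pixel count x scale.
SCALE = 0.126  # "/pixel (3.75um pixels, 3048mm FL, 2x barlow)

def roi_fov_arcsec(w_px, h_px, scale):
    """Angular FOV (width, height) in arcsec covered by a w x h ROI."""
    return w_px * scale, h_px * scale

w, h = roi_fov_arcsec(640, 480, SCALE)
print(f'{w:.1f}" x {h:.1f}"')  # ~80.6" x 60.5"
```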
On the other hand, doesn't this mean that if I start with a much larger sensor (e.g. a 1600MM Pro), with the same pixel size (3.75um) and the same (640x480) ROI, I will always frame a much larger region of sky than with the QHY5L-II?
Maybe I'm making a mountain out of a molehill, but this is where my doubts arise, because the reasoning seems counterintuitive. I hope someone can help me, thanks.
Edited by SolarSystem96, 15 May 2021 - 02:36 AM.