I have been searching online for answers, but info on this subject is limited. LRGB imaging is straightforward; by L-RGB here I mean shooting luminance data with a mono camera while the RGB comes from an OSC camera.
Let me put down a detailed scenario:
Refractor at 500mm focal length
ASI178mm Cool (2.4 micron)
Camera lens at 500mm focal length
Nikon D5100 (4.8 micron)
For the sake of simplicity, let's assume that guiding and dithering are at a sufficient level, and disregard the difference in field of view.
I am theorizing that I could use the OSC data for color and the ZWO data for luminance. The pixel size of the DSLR is twice that of the ZWO, 4.8 vs 2.4 microns respectively. If I stack the OSC data with 2x drizzle in DSS, that would give me the same sampling resolution (0.99"/pixel) as the ASI178. This way I could crop out the "area of interest" from the OSC data, align it with the luminance, and combine them later in the processing flow. It would save me some time in the field by not needing to image separate R, G, and B data.
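To double-check the sampling math, here is a quick sketch using the usual image-scale formula (scale in arcsec/pixel = 206.265 × pixel size in microns / focal length in mm); the numbers and function names are just mine for illustration:

```python
# Sanity check: does 2x drizzle on the DSLR match the mono camera's sampling?

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size [um] / focal length [mm]."""
    return 206.265 * pixel_um / focal_mm

asi178 = pixel_scale(2.4, 500)  # mono luminance camera
d5100 = pixel_scale(4.8, 500)   # OSC DSLR

print(f'ASI178MM:           {asi178:.2f}"/px')   # ~0.99"/px
print(f'D5100 native:       {d5100:.2f}"/px')    # ~1.98"/px
print(f'D5100 + 2x drizzle: {d5100 / 2:.2f}"/px')  # ~0.99"/px, matches the mono
```

So the 2x-drizzled OSC stack does land at the same 0.99"/px sampling as the native ASI178 data.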
I've read here on CN somewhere, I think by user ccs_hello if I remember right, that the quality of the separate color data also makes a difference, although many imagers tend to shoot their RGB binned 2x2 to save some time in the field. By the same logic, the theory should also hold if I bin the luminance data 2x2 and combine it later in the processing flow.
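The 2x2 binning route is just the mirror image of the drizzle route: summing 2x2 blocks of the 2.4-micron mono data gives an effective 4.8-micron pixel, i.e. the DSLR's native 1.98"/px. A minimal software-binning sketch (my own helper, not a DSS feature) using NumPy:

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 bin: sum each 2x2 block (odd rows/cols cropped off).
    On the 2.4 um mono sensor this gives an effective 4.8 um pixel,
    matching the DSLR's native sampling at the same focal length."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(frame))  # each output pixel is the sum of a 2x2 block
```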
Would this setup work?
What I am really after is saving imaging time. Up here in the north, in this season I get 1-2 clear nights a month at best, and I want to make the most of those rare nights.
Edited by moxican, 24 January 2020 - 07:12 PM.