Congratulations on the excellent result. Can you tell a bit about the camera and processing? Are you moving from CCD to CMOS ?
Sure, Ger. The camera was a QHY600, so I have, indeed, moved from the SBIG CCD camera to CMOS—just a couple months ago. I love the camera. Cooling is very good, field of view is huge, read noise is negligible. What’s not to like? I think I even get a touch more resolution on nights of good seeing, though it’s hard to tell for sure since I haven’t had any nights of good seeing since I bought the camera. Decent seeing, yes. Good? No.
As far as technique and processing... It’s fairly straightforward for anyone using good equipment and PixInsight. No guiding is required with my mount (absolute encoders) and short subs like these. When I got to the dark sky site I took twilight flats, twenty per filter. I then polar aligned using PemPro and got a result that was within an arc minute or so. I then built an 80-point sky model using APPM so I could skip guiding. I took a few sample shots with and without guiding just to make sure I was getting equivalent FWHM values in both situations. I was. Seeing was decent but not exceptional at something like 2.2” FWHM for the central two thirds of the frame. I set up a sequence in NINA to shoot ninety minutes of RGB images, then 2.5 hours of luminance (when the target was high in the sky), then another ninety minutes of RGB. Checked the first few frames, then went to bed in the car. NINA ran the whole routine, then parked the mount around 4am and warmed up the camera.
I calibrate my frames manually rather than using WBPP because I use the overscan region of my chip to account for bias drift. WBPP has a checkbox for incorporating overscan, but I haven’t managed to get it to yield correct results yet, so I calibrate using the individual processes. I then use WBPP for cosmetic correction, registration, and image integration. I process luminance first, then RGB. For the luminance frames, I used MureDenoise, then cropped, then DBE to flatten the image. Next was a slight deconvolution. Deconvolution is a case where less is more, since I didn’t want to create halo artifacts. I used a luminance mask to limit the areas where deconvolution would be run and a star mask for local support. I think I chose 10 iterations.
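For anyone curious what overscan-based calibration amounts to, here’s a rough numpy sketch. This is my own simplification, not PixInsight’s actual implementation, and the function and parameter names (including the width of the overscan strip) are invented for illustration:

```python
import numpy as np

def calibrate_frame(raw, dark, flat, overscan=slice(-32, None)):
    """Toy overscan calibration: estimate this frame's bias level from the
    unexposed overscan columns, so bias drift from session to session
    cancels out, then apply the usual dark subtraction and flat division."""
    bias_level = np.median(raw[:, overscan])      # scalar bias for this frame
    light = raw.astype(np.float64) - bias_level   # removes bias, drift included
    light -= dark                                 # dark assumed bias-subtracted
    return light / (flat / flat.mean())           # normalized flat field
```

The point of reading the bias off the overscan strip each frame (rather than using a master bias) is that any drift in the bias level between the master and the lights cancels automatically.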
The luminance data looked really good, so I wanted to stretch them enough to show (if faintly) the IFN as well as the tidal tail streaming out from NGC 5198. That meant star bloat was going to be a problem, so I ran a morphological transformation to shrink the stars a touch so they wouldn’t swell as badly when the data were stretched. The stretch itself was a tough compromise: if I tried to make the tidal tail and IFN obvious, I risked pushing the highlights in the galaxy past what I could easily recover with HDRMT. HDRMT did recover the blown-out highlights in M51, and I used the “substitute preview” script to make sure HDRMT was applied to M51 itself only.
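The morphological star shrink is, at its core, a grey-scale erosion. Here’s a minimal numpy stand-in for the idea (my own sketch; PixInsight’s MorphologicalTransformation adds selection percentiles, amount blending, and star-mask protection on top of this):

```python
import numpy as np

def shrink_stars(img, iterations=1):
    """Grey-scale erosion with a 3x3 structuring element: each pixel becomes
    the minimum of its neighborhood, so small bright features (stars) lose
    their outer ring on every pass and shrink."""
    out = img.copy()
    for _ in range(iterations):
        p = np.pad(out, 1, mode="edge")
        shifts = [p[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                  for dy in range(3) for dx in range(3)]
        out = np.minimum.reduce(shifts)
    return out
```

Run before the stretch, this trims a pixel off each star’s radius per iteration, which is why the stars bloat less afterwards.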
The RGB data were not nearly as good. The biggest issue was a big chip in my green filter—I dropped the entire carousel a couple of weeks ago and the green filter was damaged. Like an idiot, I had rotated the chip to the absolute worst possible location, one of the corners, so it intruded well into the field of view. I’ve already got vignetting problems using 50mm circular filters with such a fast scope, and the chip made things worse. Oh, well—at least the issue is in the color data, not the luminance. With the meridian flip, though, two corners of data representing nearly ⅓ of the frame were bad.
RGB got heavy noise reduction—MureDenoise as well as TGVDenoise and a low-pass filter (MultiscaleLinearTransform). Normal channel combination, crop, and DBE followed by photometric color calibration. I then split out the red channel and merged it with my H-alpha data using the technique described on LightVortex.com. ArcsinhStretch (less noise than a normal histogram transformation) followed by a mild histogram transformation to get tones where I wanted them.
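The reason the arcsinh stretch is gentler on color noise is easy to see in code. A minimal sketch of the idea (PixInsight’s ArcsinhStretch adds black-point and highlight-protection controls on top of this; the function name is mine):

```python
import numpy as np

def arcsinh_stretch_rgb(rgb, stretch=100.0):
    """Compute a stretch factor from each pixel's mean intensity and apply
    the SAME factor to R, G, and B. Color ratios are preserved exactly,
    which is why this amplifies chroma noise less than stretching each
    channel independently with a histogram transformation."""
    lum = rgb.mean(axis=-1, keepdims=True)
    safe = np.maximum(lum, 1e-12)  # avoid divide-by-zero on black pixels
    factor = np.arcsinh(stretch * safe) / (np.arcsinh(stretch) * safe)
    return np.clip(rgb * factor, 0.0, 1.0)
```

Because faint pixels get a large factor and bright pixels a factor near 1, the faint stuff comes up hard while the highlights are compressed gently.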
I do the LRGB combine and final touch-up in Photoshop rather than PixInsight since it lets me work freehand for things like local contrast adjustments. I overlaid the luminance layer onto the RGB layer, then converted to LAB mode so I could tweak the saturation by applying a curve to the A and B channels. Like the arcsinh stretch, this avoids the color noise you’d get if you just bumped up the saturation in RGB mode. And because this is Photoshop and I can select areas by hand, I could apply different levels of saturation to stars and to the galaxies.
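The LAB saturation trick boils down to pushing the a and b channels away from neutral (0) while leaving L alone. A hedged sketch, with a simple linear gain standing in for the freehand Photoshop curve (real a/b values run roughly −128..127):

```python
import numpy as np

def boost_ab(ab, gain=1.3):
    """Scale a LAB a/b channel away from 0 (neutral grey). Because the L
    channel is untouched, brightness and luminance noise are unchanged;
    only the chroma is amplified."""
    return np.clip(ab * gain, -128.0, 127.0)
```

Neutral pixels (a = b = 0) stay neutral under any gain, which is what keeps the background from picking up a color cast.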
Finally, I made some local contrast adjustments on the galaxies using a high-pass filter set to “soft light” or “overlay” (depending on the scale of the adjustment). I think I used one high pass at 2.6 pixels and another at about five pixels. I used a mask to make sure the local contrast adjustments were applied only to the brighter portions of the galaxies (since this contrast bump tends to highlight noise), then painted out the brighter stars by hand so they wouldn’t swell. Imported to Lightroom Classic just for database and file management. Don’t think I made any adjustments in Lightroom. Maybe a touch of sharpening? Not much if anything.
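The high-pass/overlay move can be sketched in a few lines of numpy. This is my own toy version, not Photoshop’s: a box blur stands in for the Gaussian base image, the blend formula is the standard overlay, and the parameter names are invented:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur -- a cheap stand-in for the Gaussian base image."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

def local_contrast(img, radius=3, amount=1.0):
    """High-pass the image around mid-grey, then overlay-blend it back onto
    itself: flat areas (high-pass == 0.5) pass through untouched, while
    detail at the chosen scale gets a contrast boost."""
    hp = np.clip(0.5 + amount * (img - box_blur(img, radius)), 0.0, 1.0)
    return np.where(img < 0.5, 2 * img * hp, 1 - 2 * (1 - img) * (1 - hp))
```

Since flat regions come through unchanged, the noise amplification only shows up where there is already structure, which is exactly why a mask confining the effect to the brighter galaxy regions is still a good idea.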
Edited by Jared, 13 April 2021 - 06:10 PM.