The confusion is my fault. I used the term "resolution" with one context in mind, but it can have several different meanings when we talk about images. It can mean the pixel dimensions of the image, the image scale of the image, or the spatial resolution across the target in the sky. I meant the last of these when I called that attribute "better resolution".
The following content is more of a general purpose explanation for others. I expect that you already know all this but it provides some information Dave asked about in the original Post and gives context for other readers in the future. (This assumes I have not flubbed the explanation badly in some way. Let me know if I have...)
This all relates back to the diagram in Post #6 of the related thread I linked. I have redrawn the illustration used by @Art Morrison in that thread below to better show the results you might get from Mono Binning, Color Binning, and SuperPixel deBayer Conversion all in the context of a One Shot Color camera.
The first case to look at is what we have all commonly called (just) "binning" in the past. For a CMOS OSC camera, this is called Mono-Binning. In this case, each group of 2x2 or 3x3 adjacent pixels on the sensor is combined to create a single pixel in the saved image data.
Mono-Binning -- Combining 2x2 Color Pixels Into A Single Monochrome Pixel
A natural consequence of this traditional form of binning is that, for OSC cameras, we lose all color information. Each output pixel in the binned frame is the combination of multiple sensor pixels without regard to the Bayer filter over each one. We do not get a color image, just the sum of the photons captured under the various filters. Also note that, in the 2x2 case, each output pixel is made up of 4 adjacent pixels on the sensor. Each pixel in the binned output frame represents twice the linear dimensions in the sky compared to not binning at all, and the output frame is one half the original size in each dimension.
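For anyone who wants to see the arithmetic, here is a minimal NumPy sketch of the 2x2 mono-bin operation described above. The function name `mono_bin_2x2` is just mine for illustration; real camera drivers do this in firmware:

```python
import numpy as np

def mono_bin_2x2(raw):
    """Sum each 2x2 block of adjacent sensor pixels into one output
    pixel, ignoring the Bayer filter colors (traditional mono binning).
    The output frame is half the original size in each dimension."""
    h, w = raw.shape
    h -= h % 2                      # trim any odd edge row/column
    w -= w % 2
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))  # add the 4 pixels in each block

# A 4x4 raw frame bins down to 2x2; all color information is lost.
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
binned = mono_bin_2x2(raw)
print(binned.tolist())  # [[10, 18], [42, 50]]
```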
The second case to examine is the "normal" Color Binning Mode for ZWO OSC cameras. In this mode, we can "bin" the camera 2x2 or 3x3 and retain color information. This is not the method we normally associate with "binning" a camera. This binning operation combines non-adjacent pixels of like Bayer filter color.
Color-Binning -- Combining 4 Like-Color Pixels From A 3x3 Sensor Area Into A Single Color Pixel
Here we do get a color image. Note that each output pixel is made up of 4 non-adjacent pixels on the sensor. Each pixel represents three times the linear dimensions in the sky compared to not binning at all. The resulting file, though, is still one half the original size in each dimension, just as in the prior Mono-Binning case.
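Here is a rough NumPy sketch of that like-color combining, assuming an RGGB pattern. This is my reading of the operation, not ZWO's actual firmware code: each of the four Bayer color planes is binned 2x2 on its own, which combines 4 like-color pixels spanning a 3x3 sensor area, and the result is re-interleaved into a half-size Bayer mosaic:

```python
import numpy as np

def color_bin_2x2(raw):
    """Sketch of 2x2 color binning on an RGGB Bayer frame.  Combines 4
    like-color (non-adjacent) pixels, which span a 3x3 sensor area, into
    one output pixel.  The output is still a Bayer-patterned mosaic at
    half the original size, so color can be recovered later."""
    h, w = raw.shape
    out = np.zeros((h // 2, w // 2), dtype=np.uint32)
    for dy in (0, 1):                    # row offset within Bayer cell
        for dx in (0, 1):                # column offset within Bayer cell
            plane = raw[dy::2, dx::2]    # one color plane of the mosaic
            hp, wp = plane.shape
            binned = (plane[:hp - hp % 2, :wp - wp % 2]
                      .reshape(hp // 2, 2, wp // 2, 2)
                      .sum(axis=(1, 3)))
            out[dy::2, dx::2] = binned   # re-interleave the Bayer pattern
    return out

raw = np.ones((8, 8), dtype=np.uint16)
out = color_bin_2x2(raw)
print(out.shape)  # (4, 4)
```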
This is where my "better resolution" wording comes into play. A Mono-binned image has an image scale numerically twice that of the original, so your spatial resolution is half of the original. However, each pixel in the Color-binned image represents three times the linear dimension of sky rather than twice. While the pixel dimensions (image height and width) of the Mono-binned and Color-binned images are the same, the Color-binned image combined a three-pixel-wide area of the sensor into a single pixel while the Mono-binned image combined only a two-pixel-wide area. The spatial resolution (in the sky) of the Mono-binned image is therefore 1/3 finer than that of the Color-binned image.
That is where I see the Color-binned image as having less spatial resolution (in the sky) compared to the Mono-Binned image. The tradeoff is that with the Mono-binned image, we have lost the color information we gathered. It would be nice if we could get a color image with the same spatial resolution as the Mono-binned image. That is where my suggestion for the use of SuperPixel deBayering comes into play.
When we deBayer an image, we usually use a method that interpolates the colors of pixels. Each color layer in the resulting RGB image contains an estimate of the actual color derived from the surrounding pixels. The result is an image that no longer contains the Bayer matrix checkerboard but three color channels instead, and this new image is the same size as the original image from the sensor. One option for deBayering is the SuperPixel method. In this method, all Red pixels are extracted to form the Red color channel. Likewise, all the Blue pixels are extracted into the Blue color channel. The Green pixels are extracted into two separate layers, which are then averaged to form the Green color channel. The resulting image is one half the size of the original image in each dimension. Each color channel contains an exact representation of the CFA pixel data in the original. There is no interpolation required to derive the color of any given pixel.
SuperPixel DeBayering -- Combining 4 Color Pixels From A 2x2 Sensor Area Into A Single Color (3 channel) Pixel
Now, look at this operation in terms of the final result. The three-channel RGB image is one half the size of the original raw OSC image in each dimension. Each color layer contains the data from 4 adjacent pixels in the original. Thus the image scale of the SuperPixel image (from an original captured at 1x1, unbinned) is numerically twice that of the original. This is exactly the same result, in terms of resolution, as the Mono-Binned image, but this one is in color.
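The SuperPixel extraction described above can be sketched in a few lines of NumPy (assuming an RGGB pattern; the function name is mine, though tools like PixInsight offer this as a built-in deBayer option):

```python
import numpy as np

def superpixel_debayer(raw):
    """SuperPixel deBayer of an RGGB frame: each 2x2 Bayer cell becomes
    one 3-channel RGB pixel.  R and B are taken directly and the two
    Greens are averaged; no interpolation between neighboring cells."""
    r  = raw[0::2, 0::2].astype(np.float32)   # Red sites
    g1 = raw[0::2, 1::2].astype(np.float32)   # first Green site
    g2 = raw[1::2, 0::2].astype(np.float32)   # second Green site
    b  = raw[1::2, 1::2].astype(np.float32)   # Blue sites
    return np.dstack([r, (g1 + g2) / 2, b])   # half-size RGB image

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
rgb = superpixel_debayer(raw)
print(rgb.shape)        # (2, 2, 3)
print(rgb[0, 0])        # [0.  2.5 5. ] -- R, averaged G, B of one cell
```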
That is why I suggested that perhaps imaging at 1x1 and then using SuperPixel deBayering is better in terms of resolution than using Color-Binning in the first place. The final image is the same size dimensionally, but the spatial resolution (in the sky) of the 1x1 SuperPixeled image is better than that of the 2x2 Color-binned image. (Each SuperPixel came from a 2x2 pixel area in the original, while each Color-Binned pixel came from a 3x3 area.)
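To put numbers on that comparison, here is a tiny summary of the linear sky footprint of one output pixel for each method (the 1.0"/pixel native image scale is just a made-up example value):

```python
# Linear footprint of one output pixel, in native sensor pixels, for
# each of the three methods discussed (all give half-size output frames):
native_scale = 1.0  # arcsec per native pixel -- hypothetical example value

footprint = {
    "Mono-bin 2x2":               2,  # 2x2 adjacent pixels
    "Color-bin 2x2":              3,  # 4 like-color pixels from a 3x3 area
    "SuperPixel (1x1 + deBayer)": 2,  # one 2x2 Bayer cell
}
for method, pixels in footprint.items():
    print(f'{method}: {pixels * native_scale:.1f}" of sky per output pixel')
```

The SuperPixel route matches the Mono-bin footprint while keeping color, which is the whole point of the suggestion.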
Whew! I hope that all made some sense...