It rained all day yesterday, but today a cold front came through and it's clear so far. I used the time to make a quick video tutorial on my workflow for acquiring data that I later process into images. I'm using FireCapture specifically because it can apply flat calibration in real time to the live feed from your camera, and that calibration is embedded into the source video upon recording, which is a huge help for making sure you've eliminated Newton's rings, gradients, dust, and other artifacts up front instead of finding out later. It also helps with exposure: seeing WYSIWYG with the flat applied lets you avoid clipping data. This will cover basic focusing (manual, by hand, nothing special), flat calibration (a big part of this, including the defocus and diffuser methods for full-FOV and partial-disc-FOV flat calibration), and exposure values. I use gamma as a tool to expand and crush shadow tones and mid-tones, changing contrast so it's easier to see the camera's output in real time for focusing and for seeing prominences in the data, all without changing the actual exposure. This is a key element that I use a lot, but I do not use gamma when recording; it is always turned off while recording is happening. Exposure values are totally variable; there's no magic number. We merely look at the histogram and make adjustments based on it.
Please forgive any mistakes in words or terms, it's not my day job (wish it was!) to do this stuff.
Also, sorry for the wobbling and slewing around, it was very windy this morning and I was touching the scope and moving it quickly while trying to make this video.
Key elements to know perhaps before watching the tutorial video:
FireCapture: I'm using FireCapture software specifically for this entire process. Huge thanks to the author of this software for making it free!
Gamma: I use gamma a lot in FireCapture; it's purely software manipulation. I do not use gamma when recording, however (it's off or neutral; neutral is 50 in FireCapture, by the way). Gamma stretches values, which is useful when using your eyeball to judge the live feed from your camera. It's handy to stretch up the shadows and mid-tones (moving the slider to the left, towards 0) for less contrast, to see faint stuff like prominences; it's also handy to crush the shadows and mid-tones (moving the slider to the right, towards 100) so that perceived contrast is higher on things like spicules, plages, filaments, spots, etc. Several times in the tutorial I will set exposure, use gamma to crush shadows and see surface detail to critically focus, and then open up the shadows with gamma to see the prominences on the limb, again without changing exposure. The key is that exposure wasn't changed to see the surface or the prominences, just software manipulation of gamma, and the point is that the data is there, so turn gamma off when recording your video. You can lift the faint prominences in post-processing and increase surface contrast in post-processing from the same single-exposure capture (see my previous tutorial, Rapid Workflow). You don't have to use this; I just find it handy for focusing and for seeing the prominences so I know they're in my data, then I turn it off to actually capture the data.
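As a rough sketch of what that gamma slider is doing: the curve below is a generic gamma function, not FireCapture's exact implementation (which I can't vouch for), and the pixel values are made up for illustration.

```python
import numpy as np

def gamma_preview(frame: np.ndarray, gamma: float) -> np.ndarray:
    """Display-only gamma curve for an 8-bit frame.

    gamma < 1 lifts shadows/mid-tones (faint prominences become visible);
    gamma > 1 crushes them (surface detail looks higher-contrast).
    The recorded data is untouched -- this is preview math only.
    """
    normalized = frame.astype(np.float64) / 255.0
    return np.clip(np.power(normalized, gamma) * 255.0, 0, 255).astype(np.uint8)

faint_prom = np.array([[20]], dtype=np.uint8)  # a dim limb pixel
lifted = gamma_preview(faint_prom, 0.5)        # shadows opened up: prominence visible
crushed = gamma_preview(faint_prom, 2.0)       # shadows crushed: surface contrast look
```

The mapping from FireCapture's 0~100 slider to an exponent is internal to the software; the point is only that the transform is reversible eyeball-help, which is why you switch it off before recording.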
Flat Calibration, Defocus Method: The defocus method is commonly used and easy when the solar disc fills the FOV of your camera so that there's only sun in your FOV. One can simply defocus the disc until features are gone, ideally somewhere near the center of the disc. I lower exposure values to achieve about 65% histogram fill; FireCapture recommends between 50~80% if you use the hover tool. Exposure time doesn't matter here. I prefer not to use gain when doing this if possible, but you can use gain if you need to. FireCapture has a flat frame tool built in: you simply click it, tell it how many frames you want to capture, and it will capture them and apply the flat calibration to your real-time video stream from the camera.
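The reason the tool captures many frames rather than one is simple noise averaging. A toy sketch, with made-up noise numbers, just to show the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend the true flat field is perfectly even at ~65% of an 8-bit range.
true_flat = np.full((16, 16), 166.0)

# Each captured flat frame is that field plus random camera noise.
frames = [true_flat + rng.normal(0.0, 8.0, true_flat.shape) for _ in range(64)]

# Averaging 64 frames cuts the random noise by roughly sqrt(64) = 8x,
# so the master flat is far cleaner than any single frame.
master_flat = np.mean(frames, axis=0)
```

That's why asking the tool for more frames gives you a smoother calibration; the fixed patterns you want to remove (rings, dust, vignetting) survive the averaging while the noise doesn't.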
Flat Calibration, Diffuser Method (Bag Flats): The diffuser method is an easy way to create a flat calibration frame when the solar disc does not fill the FOV of your camera sensor and you can see the limb or the void of space around the full or partial disc. You need a translucent bag; I'm using a cereal bag. It should not be completely see-through, but it must pass light. Not all bags are equal, so you have to experiment and find one that does what you need. The key is that it diffuses light, as in, it scatters the light. This illuminates the bag itself, so that when it's in front of your aperture, the light source is now larger and will fill your FOV on the sensor, letting you create a flat frame even though the solar disc isn't filling your FOV. This works for full-disc FOV with a short scope, and for partial-disc FOV. The bag needs to cover the entire aperture of your scope, but it also needs to sit away from the front of the aperture, not directly touching it; I find it needs 2+ inches of space so that there's no hard edge and the diffuser material is illuminated farther out than where the solar disc would normally appear. A lens hood or lens shade is ideal to provide that space, ideally larger than your actual aperture. The key is to not have the bag flat up against the entrance of your solar scope (you may need to make a small hood or cardboard holder for dedicated solar scopes, which tend to completely lack lens hoods). I again target 65% histogram fill. When performing this method, put the disc in the center or near-center of your FOV or sweet spot, focus first to get critical focus, then don't touch the focuser. Then put the bag on and raise exposure to fill the histogram to 65% (or 50~80% per FireCapture's hover tool). Gamma off. Capture your flat frames with the FireCapture tool; it will auto-apply the flat frame. Now remove the bag.
You will need to lower exposure values back to your recording exposure values. The scope will still be in focus, so there's no need to change it, though you can still fine-tune focus if you need to.
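However the flat is made (defocus or bag), the correction itself is just a division. FireCapture does this for you on the live feed; the simulation below is only a sketch of why it works, with made-up numbers.

```python
import numpy as np

def apply_flat(raw: np.ndarray, master_flat: np.ndarray) -> np.ndarray:
    # Normalize the flat to a mean of 1, then divide it out of the raw frame.
    return raw / (master_flat / master_flat.mean())

# Simulated optics: vignetting in one quadrant plus a dust mote.
pattern = np.ones((8, 8))
pattern[:4, :4] *= 0.8       # vignetted corner
pattern[6, 6] *= 0.5         # dust shadow
evenly_lit = np.full((8, 8), 150.0)
raw = evenly_lit * pattern   # what the camera actually records

# A flat frame sees the same pattern under (diffused) even illumination.
master_flat = pattern * 200.0

corrected = apply_flat(raw, master_flat)
# Every pixel of `corrected` comes out identical: the vignetting and
# dust shadow divide away, leaving an evenly illuminated frame.
```

This is also why the flat must be taken through the same optical path at the same focus position: the pattern you divide out has to match the pattern baked into your light frames.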
Exposure Time: In general I recommend 10ms or shorter exposure times to freeze the seeing. This depends on image scale. With fine image scales such as 0.3"/pixel, I tend to keep it closer to 2ms, 3ms or 5ms to better freeze the seeing. With coarse image scales, like 2"/pixel, 1.5"/pixel, 1"/pixel, or 0.8"/pixel, 10ms is likely fine as a maximum. Use whatever exposure time best fills your histogram without clipping the data to the right (the whites), while staying short enough to freeze the seeing. You may need to use some gain to get the histogram filled if you reach the limits of exposure duration for freezing the seeing. Every imaging system and filter system is different, so there are no magic numbers other than the times related to freezing seeing; the rest is just adjusting based on the histogram.
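If you don't know your image scale, it falls out of your pixel size and focal length. A quick sketch; the exposure ceiling function is just my rule of thumb from above expressed in code, not a law, and the example pixel/focal-length numbers are arbitrary:

```python
def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    # 206265 arcseconds per radian; with pixel size in microns and focal
    # length in millimeters the constant works out to 206.265.
    return 206.265 * pixel_size_um / focal_length_mm

def max_exposure_ms(scale_arcsec_per_px: float) -> float:
    # Coarse scales (>= ~0.8"/px): up to ~10 ms. Fine scales: aim shorter.
    return 10.0 if scale_arcsec_per_px >= 0.8 else 5.0

fine = image_scale(2.9, 2000.0)    # ~0.30 "/px: keep exposures short
coarse = image_scale(3.75, 400.0)  # ~1.93 "/px: 10 ms is likely fine
```

Run your own numbers through this once and you'll know which regime you're in before you ever touch the exposure slider.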
Histogram: Lower left (or wherever you moved it) is your histogram. You need this to understand your exposure. It shows the shape and spread of your data from the black point on the far left to the white point on the far right. When I refer to histogram fill, I'm referring to putting as much data between those two points as I can without pushing it past the white point, i.e., clipping to the right, which results in lost data (it's all considered white past that point). There's a % value there to tell you how close you are to filling the entire histogram between the two points. This is what I'm referring to with the 65%, 80%, 90% ranges, etc. during the tutorial.
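One plausible way to read "histogram fill" in code, assuming it means how far the brightest data reaches toward the white point (I can't vouch that FireCapture computes its % exactly this way):

```python
import numpy as np

def histogram_fill_percent(frame: np.ndarray, white_point: int = 255) -> float:
    # How far the brightest pixel reaches toward the white point.
    return 100.0 * float(frame.max()) / white_point

def clipped(frame: np.ndarray, white_point: int = 255) -> bool:
    # Anything at or above the white point has been flattened to pure white.
    return bool(frame.max() >= white_point)

# Synthetic frame whose brightest pixel sits at 166 of 255:
frame = np.linspace(0.0, 166.0, 64).reshape(8, 8)
fill = histogram_fill_percent(frame)  # ~65%: comfortably inside the 50-80% band
```

The takeaway is the same either way: you want that number high, but never touching 100.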
Critical Focus: Being in focus is not that easy sometimes. I focus manually. It's even easier if you have a controller and a motorized focuser, of course. When the seeing is poor, it can be hard to focus critically as you chase the seeing. Ideally, adjust to the highest contrast you can see, then watch for moments of good seeing; if you see pencil-drawing-like features, you're close. When seeing is dreadful you may never achieve critical focus. When seeing is average with brief moments of good seeing, you can see high-contrast features, lines, etc., know whether you're close to focus, and adjust from there. I suggest you make adjustments, wait and watch, make adjustments, wait and watch. It's much easier when you have good seeing conditions. It's also much easier with coarse image scales from small-aperture scopes, as they are less affected by seeing conditions. It's much more challenging with fine image scales from very large apertures that require excellent seeing conditions.
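If you ever want something more objective than eyeballing contrast, a simple gradient-energy focus metric works: sharper frames score higher. This is a generic sketch, not anything built into FireCapture.

```python
import numpy as np

def focus_metric(frame: np.ndarray) -> float:
    # Mean squared gradient: fine detail (sharp focus) raises the score,
    # defocus blur lowers it. Compare scores between focuser tweaks and
    # keep the position that scores highest during good-seeing moments.
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(gx ** 2 + gy ** 2))

sharp = np.zeros((8, 8))
sharp[:, ::2] = 255.0               # high-contrast detail, like spicules
defocused = np.full((8, 8), 127.5)  # same light, smeared featureless

# focus_metric(sharp) is far larger than focus_metric(defocused)
```

Because seeing makes the score jump around, compare the best scores over several seconds at each focuser position rather than single frames.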
Bit Depth & Container: Ideally you would capture at 16-bit depth (especially for prominences); I used 8-bit in these videos because it's faster and more accessible for most people and their systems. If you can capture in 16-bit, however, I suggest you do that. The container matters based on the bit depth: some capture to AVI, but AVI has limits with bit depth. I suggest the SER container, which will allow 16-bit if you want or are able to use it. At 8-bit it doesn't matter which one you pick; the data is RAW either way, and the container mainly changes what you'll use to preview it, as the stacking software will happily read an AVI or SER container all the same. I'm using the SER container, and for preview I have a free SER player that lets me view the videos independently.
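Why bit depth matters for prominences, in two lines. The 0.5% figure is just an illustrative brightness for a faint prominence, not a measurement:

```python
# A faint prominence at 0.5% of full scale:
frac = 0.005
level_8 = round(frac * (2 ** 8 - 1))    # 1 of 255 levels: almost nothing to stretch
level_16 = round(frac * (2 ** 16 - 1))  # 328 of 65535 levels: real room to lift in post
```

At 8-bit that prominence is one quantization step above black, so stretching it in post mostly amplifies noise; at 16-bit there are hundreds of levels to work with.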
How Many Frames? You can preset any number of frames to capture. I capture in bursts of 1,000 frames at a fairly fast frame rate (it's slower in the video due to running all the software, recording the tutorial in real time with the video feed, etc. on a laptop). You can capture fewer. You can capture more. Just know that the more time you spend capturing frames, the more features can change on the sun, as it's super dynamic. I would not capture more than 2~3 minutes of video at coarse image scales, and at fine image scales I wouldn't go past a minute or so, usually less. The apparent movement of things like prominences, filaments, flares, etc. can occur within just minutes. So keep your bursts of recording short in time and packed with as many frames as possible (i.e., you want fast FPS). This is why we recommend monochrome sensor cameras with fast data-rate potential (you can use a region of interest to speed up a slower, larger pixel-array camera).
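The burst-length math, for reference; the 150 fps figure is just an example, so plug in your own camera's actual rate:

```python
def capture_seconds(frames: int, fps: float) -> float:
    # How long a burst of N frames takes at a given frame rate.
    return frames / fps

# 1,000 frames at e.g. 150 fps is under 7 seconds per burst --
# far inside the couple-of-minutes window before solar features move.
burst = capture_seconds(1000, 150.0)
```

Working backwards, this also tells you the most frames you can pack into your time budget: at 150 fps, a one-minute ceiling allows roughly 9,000 frames.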
The scope, filter system, camera, etc. that I'm using do not matter; no setup will use the same values as above. The intent is just to show how to go through this workflow, and it works the same for everything from small 40mm PST solar scopes on up.
Video Tutorial (20 minutes):
(Please forgive any mistakes, wobbly stuff, incorrect terms or descriptions, this isn't my day job)
Bag flats are always a difficult thing to explain and show, so here are some images and a result from the above method for full- or partial-disc FOV, which is usually difficult to perform flat calibration for: