This is about my first impressions of N.I.N.A., after years of using SGP (very successfully).
Here's the background: for my OSC imaging (with light pollution filters), I was getting tired of the time SGP takes to get from the end of one frame to the start of the next. With the CMOS sensor of the ASI294MC Pro, very short exposures are the norm, and losing 10-20% of imaging time to SGP overhead is not particularly nice. The slower your PC, the longer the sub-to-sub time, the longer autofocusing takes, and so on.
Now, just to be clear: I love SGP. It is an excellent piece of software with amazing flexibility and features. I've been using it for years, and I paid for the upgrade to version 3.
But last night I tested an alternative to SGP called N.I.N.A. (https://nighttime-imaging.eu). It is free and open source, and I used the 1.8.0 RC1 version. I wanted to see how it would perform on my underpowered imaging PC (I've been considering the ASIAIR because of the overhead in SGP, but the lack of autofocus is an obstacle right now).
- EQ6R mount with an EQDIR cable (so, EQMOD); this was actually my first time using this mount.
- 350mm f4 lens
- Moonlite belt focuser
- ASI294MC Pro as main imaging camera, with Quad BP filter in front of it.
- ASI178MC as the guide camera (on an OAG)
- Underpowered Atom-based Minix Z83-4 computer
- Lots of wind (no fun without wind!)
I fired up NINA well before the evening (I needed to configure it first) and found it has a pretty well-organized interface, with tabs on the left-hand side that can almost be used chronologically up to the actual capture (it reminded me a bit of StarTools). First things first, I needed to configure the equipment: native ZWO drivers for the camera, ASCOM for EQMOD, and ASCOM for the focuser. Next I went through the various tabs to see what could be set up there. In the settings tab there were various configurable things that should be filled in:
- observing site coordinates
- camera pixel size, bit depth, etc., under framing.
- Telescope focal length, under framing and under options -> equipment. I believe that, together with the camera information, it is used to give FOV hints to the plate solver.
- PlateSolve2 path: you need to install PlateSolve2 separately if you wish to use it, and then get a catalog for it; both are free from PlaneWave. Other solvers are available, and I added my API key for the astrometry.net online solver as my backup blind solver.
- API key for Openweather, if you want to get weather information
- Sub length for auto-focusing and for plate solving.
- Sky Atlas Picture catalog path (used in the Sky Atlas Tool), downloadable for free from NINA.
- PHD2 path (for NINA to launch PHD2 automatically)
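For anyone curious how the focal length and pixel size settings turn into an FOV hint for the plate solver, the underlying formula is simple. Here's a quick sketch using the textbook image-scale formula and my own gear's numbers; this is just an illustration, not NINA's internal code:

```python
# Standard image-scale formula: 206.265 converts radians to arcseconds
# when pixel size is in microns and focal length in millimetres.
# (Textbook formula, not NINA's actual implementation.)

def arcsec_per_pixel(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

def fov_degrees(pixels: int, pixel_size_um: float, focal_length_mm: float) -> float:
    """Field of view along one axis, in degrees."""
    return arcsec_per_pixel(pixel_size_um, focal_length_mm) * pixels / 3600.0

# My setup: ASI294MC Pro (4.63 um pixels, 4144 px wide) on the 350 mm lens
print(round(arcsec_per_pixel(4.63, 350), 2))   # -> 2.73
print(round(fov_degrees(4144, 4.63, 350), 2))  # -> 3.14
```

So a solver hint of roughly 2.7"/px and a 3-degree-wide field, which matches what the framing assistant shows.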
- I connected to the equipment without issue. A good thing was that when I connected the mount, NINA detected that the coordinates of the mount and the ones in NINA were not exactly the same, and offered the option to get the coordinates from the mount into NINA, which I did.
- Another neat feature is that you can connect a "manual rotator", i.e. your hands. This is useful for framing.
- The theme is important too, with a variety of dark themes available for the application. I liked the "Persian" theme.
- You can save your equipment profile just like SGP.
I noted some pretty neat tools that make NINA an excellent way to actually plan the imaging session:
- Search for an object by name under Sky Atlas, and a list of matching objects appears with pictures and a graph of the object's altitude versus time of day, complete with the dusk, dawn, and night hours, making it easy to find good targets to image.
- With the target decided, you can just send it to the sequencer as the next target, or first send it to the framing assistant (and you can slew to it from that screen too)
- In the framing assistant, you can download an image of the target from several sources, including NASA Sky Survey (or offline using Sky Atlas, which gives a good idea of the size of the target), and then see your own FOV overlaid on the target object. Drag the FOV around, rotate it, or even set up a mosaic in there, as desired.
- Once ready, you can then send your scene to the Sequencer for imaging.
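Those altitude-versus-time graphs are just the standard altitude formula applied across the night. A minimal sketch of the idea (my latitude and declination numbers are approximate, and a real tool also needs the local sidereal time to turn clock time into hour angle, which I skip here):

```python
import math

def altitude_deg(dec_deg: float, lat_deg: float, hour_angle_deg: float) -> float:
    """Standard formula: sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(HA)."""
    dec, lat, ha = map(math.radians, (dec_deg, lat_deg, hour_angle_deg))
    return math.degrees(math.asin(
        math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    ))

# Example: Betelgeuse (dec ~ +7.4 deg) from Tokyo (lat ~ 35.7 deg).
# At transit (hour angle 0) it culminates at 90 - 35.7 + 7.4 deg:
print(round(altitude_deg(7.4, 35.7, 0.0), 1))  # -> 61.7
```

Evaluate that over the hours of darkness and you get exactly the kind of curve the Sky Atlas plots.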
Seriously, by the time I had seen this I was sold. I had no idea whether this would actually work well in practice, but I joined the Discord chat, and sent a donation via PayPal while I was at it. I learned at the same time that autofocus automatically pauses guiding, which is good for people with OAGs.
Looking at the sequencer, I found it very complete, with autofocus after x minutes, x exposures, or x degrees of temperature change (speaking of autofocus, it will do filter offsets too). And something I love and want to see in SGP: dither every x frames is available at the task level. That means that if you image HaRGB, you can choose to dither every frame for Ha and every 5 frames for RGB, for example. You also get gain per task (although no offset yet - it's going to be added in the future, though that's not a big deal for me since ZWO cameras have a fixed offset). And your target is in there, together with the chart of altitude versus time.
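To make the per-task dithering concrete, here's a rough sketch of the idea: each task carries its own dither interval, so one filter can dither every frame while another dithers every fifth. This is purely illustrative (my own names and structure, not NINA's code):

```python
# Hypothetical sketch of per-task dither scheduling: the dither interval
# is a property of each task, not a single global setting.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    filter_name: str
    exposures: int
    gain: int
    dither_every: int  # 0 disables dithering for this task

def run_task(task: Task,
             capture: Callable[[str, int], None],
             dither: Callable[[], None]) -> None:
    for frame in range(1, task.exposures + 1):
        capture(task.filter_name, task.gain)
        # Dither only when this task's own interval comes up
        if task.dither_every and frame % task.dither_every == 0:
            dither()

# Example schedule: dither Ha every frame, RGB every 5 frames
tasks = [Task("Ha", 10, 200, 1), Task("RGB", 10, 120, 5)]
```

With SGP's single global dither setting you'd be stuck dithering everything at the Ha rate.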
The tab that impressed me most was probably the Imaging tab. This is the real control center of the software, where you can see the PHD2 graph, image history, per-image HFR, and sequence progress; take exposures for framing and focusing; perform plate solving (with automated sync and slew for aligning my EQ6R!); perform autofocusing; and view images, image statistics, telescope status, and weather information. It's very comprehensive, well laid out, and beautiful.
One limitation I noticed while looking around: it only supports PHD2 for autoguiding. This means that dithering without a guider is not currently possible (SGP has that via Direct Mount Guiding), which is inconvenient for my portable imaging, when I use unguided short focal length lenses. The team is planning to add this in the future, although I'm wondering whether PHD2 could use the camera simulator to do the dither. Still, I'll be waiting for that feature with bated breath.
Now that my tour was done, it was time to wait for the evening. Evening came, clouds cleared up, PA was done with SharpCap (although NINA has a Polar Align tool, which I have not tested yet), and here we go!
Connecting all the equipment worked flawlessly. This was, by the way, my first time using EQMOD, so it was interesting to see how it works. I set the camera to cool to -10 degrees (nice to see camera power and temperature as charts over time, by the way), and slewed to Betelgeuse using Cartes du Ciel. In NINA, I took a short exposure from the Imaging tab and saw that Betelgeuse was not in the FOV, so I used the PlateSolve tool with "Sync" and "Reslew to target" turned on. PlateSolve2 came up, solved the image, NINA synced the mount, it slewed to target, and just like that my mount was aligned. Next I ran autofocus - it completed without issues and it was fast, like really fast, much, much faster than SGP, especially on this very slow computer. That was quite impressive.
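The solve/sync/reslew alignment I just described boils down to a short control loop. Here's a hedged sketch of that flow; the `capture`, `solve`, and `mount` objects are hypothetical stand-ins (not the ASCOM or NINA APIs), and I use degrees throughout for simplicity:

```python
# Sketch of a "solve, sync, reslew" loop: image the sky, plate-solve to
# find where the scope actually points, sync the mount to that position,
# then slew to the target again with the corrected pointing model.

def align(target_ra, target_dec, capture, solve, mount,
          tol_deg=0.02, max_iter=3):
    for _ in range(max_iter):
        image = capture()
        ra, dec = solve(image)  # where the scope actually points
        if abs(ra - target_ra) < tol_deg and abs(dec - target_dec) < tol_deg:
            return True  # close enough: aligned
        mount.sync(ra, dec)                 # correct the pointing model
        mount.slew(target_ra, target_dec)   # reslew with the fixed model
    return False
```

One solve-and-sync pass was enough on my EQ6R, which is why the whole thing felt instant.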
Next step, I fired up PHD2, found a guide star, calibrated, and was guiding. I had a poor RMS of around 1.5 arcseconds, which I assume was because of the wind. I hope so, at least. In the Imaging tab, the guide graph was reflected in real time, which was cool to see. Note that NINA can also automatically launch PHD2, connect the equipment, select a star, and start guiding.
I then went to the sequencing tab. The target was set as per my framing from the framing tab. As a test, I set 60 exposures of 60 seconds each, with autofocus after 30 minutes, and then let it do its thing. I was amazed that the sub-to-sub time is instantaneous, even with HFR calculation and image history enabled. There is a fancy chart of HFR and number of stars per sub as well - really neat for keeping track of what's happening. After 30 minutes, autofocus ran as expected, without any issues, and then it kept on capturing. Dithering also worked perfectly fine. Oh, and of course the captured subs can have a custom name, and XISF is also available as a file format, for PI users.
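For readers who haven't seen focuser-based autofocus in action: a common approach (I'm not claiming it's exactly what NINA does) is to step the focuser through a range, measure HFR at each position, fit a curve through the samples, and move to its minimum. A minimal sketch using a three-point parabola fit around the best sample:

```python
# Illustrative autofocus idea: HFR versus focuser position forms a V/parabola;
# fit around the minimum sample and move to the fitted vertex.
# (One common technique, not necessarily NINA's actual algorithm.)

def best_focus(samples):
    """samples: list of (focuser_position, hfr) pairs from a focus sweep.
    Fits a parabola through the minimum sample and its two neighbours
    and returns the vertex (estimated best focus position)."""
    samples = sorted(samples)
    i = min(range(len(samples)), key=lambda k: samples[k][1])
    i = max(1, min(i, len(samples) - 2))  # clamp so both neighbours exist
    (x0, y0), (x1, y1), (x2, y2) = samples[i - 1], samples[i], samples[i + 1]
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Synthetic sweep with the true best focus at position 503
sweep = [(x, (x - 503) ** 2 / 100 + 2.0) for x in range(470, 531, 10)]
print(round(best_focus(sweep), 2))  # -> 503.0
```

The speed difference I saw presumably comes down to how few exposures and how little per-image processing such a routine needs.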
So yes, I am very impressed by NINA. It has a clear, intuitive interface. It has some outstanding tools for sequence planning and definition. It has fast image capture, fast image statistics, and fast autofocus. It is free and open source. I may transition to it fully because, besides the absence of Direct Mount Guiding, it seems to tick all the boxes for me. I still need to understand what happens when clouds roll in, but at this point, I am positively wowed.
The resulting image is nothing to write home about (OSC from Tokyo super white zone, target low on the horizon, one hour integration time only), but here it is attached.