CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.

Developing a Deep Sky Lucky Imaging ASCOM Component


#1 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 18 July 2019 - 11:17 AM

I would like to develop an ASCOM component that implements deep sky lucky imaging.  My experience is mostly with Macintosh Cocoa Xcode projects.  I don't like C++, but I am willing to give it a try.  I have already created a Macintosh application that does this and it works well (of course, it does not use ASCOM).

 

I would like the component to look like a virtual camera that internally connects somehow to a real hardware camera.  The cameras of interest would be CMOS, like a ZWO or Kepler.  I would rather not handle the communication to each camera inside this component.  I need a camera with a fast USB3 interface to make this work.  The component would perform dark subtraction and flat fielding.  It would calculate image shifts and send guide commands to the mount.  Thus, a single camera would be used for both imaging and guiding.  It would also shift each short exposure to align it with a reference image and sum into a floating point image buffer.  Also, star FWHM values would be computed in order to implement the lucky imaging.  At the end it would return a sub to the client program.
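The per-frame loop described above (calibrate, reject poor-seeing frames, align, accumulate) can be sketched roughly as follows. This is not the actual component, just an illustrative Python/NumPy sketch; the function name, the scalar `fwhm` input, and the fixed threshold are all hypothetical:

```python
import numpy as np

def process_subexposure(raw, dark, flat, shift, acc, fwhm, fwhm_limit=3.0):
    """Calibrate one short exposure, reject it if the seeing was poor,
    otherwise align it to the reference grid and add it to the running sum.

    shift is the (dy, dx) offset of this frame relative to the reference,
    measured elsewhere; acc is the floating point accumulator image."""
    frame = (raw.astype(np.float64) - dark) / flat      # dark subtraction + flat fielding
    if fwhm > fwhm_limit:                               # lucky-imaging selection
        return False                                    # frame discarded
    # Undo the measured shift with the Fourier shift theorem (a linear phase ramp).
    ky = np.fft.fftfreq(frame.shape[0])[:, None]
    kx = np.fft.fftfreq(frame.shape[1])[None, :]
    ramp = np.exp(2j * np.pi * (ky * shift[0] + kx * shift[1]))
    aligned = np.fft.ifft2(np.fft.fft2(frame) * ramp).real
    acc += aligned                                      # sum into floating point buffer
    return True
```

A real implementation would measure the FWHM and the shift from the frame itself and would have to keep pace with the camera's download rate.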

 

Since I have never used ASCOM, how would such a software component fit into the ASCOM scheme?  A driver?  A client?  Something else?

 

How about a component that looks like a driver as viewed from a client (TheSkyX, Maxim DL, ...) and like a client as viewed from a driver, such as a ZWO camera driver?

 

Can I do this with only C programming, rather than C++ or Basic?


Edited by CygnusBob, 18 July 2019 - 02:45 PM.


#2 NMCN

NMCN

    Ranger 4

  • *****
  • Posts: 310
  • Joined: 20 Sep 2006

Posted 18 July 2019 - 11:42 AM

Seems like this is something you should talk to the ASCOM folks about.



#3 Patrick Chevalley

Patrick Chevalley

    Explorer 1

  • -----
  • Posts: 58
  • Joined: 04 Jul 2017

Posted 19 July 2019 - 02:18 AM

This can be a very useful component.

 

From an ASCOM point of view this kind of component is a hub, something that is both a driver for the end application and a client for the camera driver.

I don't think such a hub exists for cameras yet, but there are already a few for telescopes and domes.

 

The preferred programming language is C#. As an example you can look at the code for a recent telescope/dome/focuser hub: https://github.com/A.../ASCOMDeviceHub

 

All the camera interface methods the hub can use, and must implement on its driver side, are described here:

https://www.ascom-st...e_ICameraV2.htm

 

Start by reading carefully the link given by NMCN; there is a lot of information and links to essential resources.

 

If this .NET / COM approach is too far from your programming habits you can look at the new Alpaca interface:

https://ascom-standa...oper/Alpaca.htm

This uses a RESTful interface and is easier to implement in any programming language.

But for your application, be careful with the overhead of the network layer; it can be difficult to match the fast frame rate of the camera. Some testing is probably needed.



#4 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 19 July 2019 - 11:12 AM

Patrick Chevalley

 

Thanks for the advice.  This kind of programming is starting to take me out of my comfort zone.  I write plain old C code.  I have some experience with Objective-C, but for me it's monkey see, monkey do.  It would be helpful to obtain a worked example of a hub that I can modify rather than write from scratch.  Would the focuser code work?  A hub example that connects to a camera would be best.

 

Or maybe partner with someone who already knows this stuff.

 

It's not clear that Alpaca is the way to go.  I heard Bob Denny give a presentation saying that it would not provide high speed.  For this component, high speed performance of image downloads is essential.  I need to download 4K images in less than a few tenths of a second.  That is what I am doing with my Macintosh application.

 

Bob


Edited by CygnusBob, 19 July 2019 - 12:19 PM.


#5 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 19 July 2019 - 03:38 PM

Sounds like a hub might work.

 

However, I want to image and guide with the same component, without using guide stars.  Let's say the user wants a 10 minute sub.  The hub component then asks the actual hardware camera, through its camera driver, for, say, 120 five-second exposures.  For each of those 5 second exposures an image shift will be computed.  I would like to use those same image shifts for both image alignment and guiding.  However, I do not want the component to have to talk to the mount directly.  Rather, I would like to report to the client (application program) what the image shift is and let it generate some command to the mount.  Will the ASCOM standard support this sort of thing?

 

Configuring this component as two separate cameras, one for imaging and one for guiding, each duplicating the same shift calculations, would be rather inefficient.

 

Bob



#6 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1361
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 19 July 2019 - 06:54 PM

It doesn't have to be an ASCOM hub (an .exe).  If it doesn't make an ASCOM camera connection, it will be an ASCOM client to the mount, and the connection to the mount can be shared via someone else's hub, like the POTH hub or the Optec hub.  You would make an ASCOM connection to the mount and then send it pulse guide commands; the ASCOM mount driver will handle these for you.  You might want to run the camera directly through an API, or via a video standard like DirectShow for video/fast cameras.  If you use the ASCOM camera interface then yes, you are writing a hub, which is more complex as I understand it.  I've never written a camera driver, but I just "finished" (if you are ever finished with that sort of thing) a telescope mount driver, and that was about a three month project.  An ASCOM server program is a dll.  Oh, one other thing you might run into is that you might need to build it for a 64 bit memory space target, and that may cause problems, although the ASCOM folks say no worries, mate.

 

I'm sort of interested in this because of the option of using what's called interframe guiding which is what you are talking about.

 

If you really want to use ASCOM, you are into the nuts and bolts of .NET, which is written in C#.  But there are VB.NET to C# translators; the two languages look almost identical.  I still work in VB.NET, and it doesn't care that the other code is in C#, because that is the beauty of .NET.

 

The Alpaca thing is interesting but I agree with Patrick it might need to mature a bit for an application like yours with a lot of data which has to be timely. I think your proposal is interesting and the ASCOM camera interface is probably where you want to start. The pulse guiding command to the mount is very simple but you will have to have a means to calibrate the mount first. 

 

-Bruce



#7 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 19 July 2019 - 09:39 PM

555aaa

 

Wow!  I am a physicist not a computer scientist.  I got interested in ASCOM because I thought that the compartmentalization of ASCOM would make things simpler.  It looks like when you open the hood on ASCOM things are not that simple.

 

I think I must use C# or C++, not because I want to, but because the FFT routines and the code I have already written in my Xcode project are written in C (not C++).  Basic does not make sense to me because I think it will run too slowly.  I am using large FFTs to both compute image shifts and shift the images.  Since this is running in real time, it must run quickly.  So whatever approach I pick needs to have high performance.

 

I have not done any ASCOM programming before so I am confused and would like to avoid running down dead ends.

 

So if you were me what approach would you take?  Maybe a compromise that is easier to implement.


Edited by CygnusBob, 19 July 2019 - 09:40 PM.


#8 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 20 July 2019 - 12:35 PM

I am mostly interested in a high performance system for monochrome CMOS cameras and filter wheels.  Using a one shot color camera for this, while possible, would not be optimal, so it's not clear whether supporting OSC cameras is worth it.  The driving force here is to obtain the sharpest final image possible given the equipment used and the observing conditions.  This is achieved by aligning sub exposures to sub-pixel accuracy, by measuring image shifts of the entire scene rather than a single guide star, and by using lucky imaging.  By lucky imaging I don't mean freezing the turbulence as is done for planetary imaging; for the most part the radiance of DSOs is too low for that.  However, the Fried parameter of the atmospheric turbulence does vary quite a bit, and selecting the sub exposures where the median of the star FWHM is low does improve the MTF of the final image.  This is quite clearly seen in the testing of my Macintosh application.  Also, imagers whose mount cannot track well for long periods of time can still produce high resolution imagery, since the only thing that matters is the blur that occurs during the short exposures.
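The selection step described above, keeping only the sub exposures whose median star FWHM is low, might look something like this.  This is just an illustrative sketch, not the poster's code; the `keep_fraction` knob is hypothetical (the post describes a FWHM threshold that ends up rejecting 25% to 50% of frames):

```python
import numpy as np

def select_lucky(fwhm_medians, keep_fraction=0.6):
    """Return the indices of the sharpest sub exposures, ranked by their
    per-frame median star FWHM (smaller FWHM = better seeing)."""
    fwhm = np.asarray(fwhm_medians, dtype=float)
    n_keep = max(1, int(round(keep_fraction * fwhm.size)))
    order = np.argsort(fwhm)           # best (smallest FWHM) first
    return np.sort(order[:n_keep])     # kept frames, back in time order
```

Ranking by FWHM and keeping a fixed fraction, rather than using an absolute threshold, adapts automatically to the night's seeing; either policy fits the scheme described in the post.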

 

I would like to connect to the cameras through a camera driver.  I am not interested in writing drivers for specific cameras.  So I guess that means either a hub or a client.  However, it is still not clear how to guide within the framework of ASCOM.  I guess a user might want to use a separate guide camera just to minimize drift and let this component do alignment and stacking.  However, image shift information would be determined for each sub exposure free of charge, and I don't see how an existing client program could use that without some modification.

 

Bob



#9 gregj888

gregj888

    Vanguard

  • -----
  • Posts: 2249
  • Joined: 26 Mar 2006
  • Loc: Oregon

Posted 20 July 2019 - 01:37 PM

Bob,

 

I looked at this for INDI on Linux a year or three ago.  I finally settled for speckle on a small GPU (NVIDIA Jetson) and post processing FITS cubes.  I still have hopes of a more advanced version that is similar to what you are doing, but for occultations.  There are open source imaging programs, and some take plug-ins.  For ASCOM, you might look at AstroImageJ.  You could also write the files in FITS to a shared memory buffer/file, so your application doesn't need to know ASCOM or the camera information, and then only save the result.

 

Can I ask a few questions?  What are your expected exposures/FPS and pixel count?  Are you doing any kind of thin plate spline for the stacking?

 

Cool project.



#10 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 20 July 2019 - 02:45 PM

gregj888

 

I am interested in high resolution imagery of small objects like spiral galaxies.  I am using an 8 inch Newtonian telescope with a coma corrector that gives me a diffraction limited imaging region of nearly 30 arc minutes in diameter, which is big enough for most of my targets.  I use the ZWO CMOS cameras ASI1600MM Pro and ASI183MM Pro.  I generally use 5 second sub exposures for a total of 10 minutes.  In other words, 120 5 second exposures are captured for a single 10 minute sub saved as a FITS file.  In practice, because of the lucky imaging, somewhere between 25% to 50% of the sub exposures are thrown out because their star FWHM values exceed the threshold.  The image size I use is 3840 x 2160.  I can download these through the USB3 interface in less than 0.1 seconds.  However, the dark subtraction, flat fielding, FFT image shift estimation, FFT image shifting, star FWHM calculation, etc. bring the total calculation time to 2.5 seconds, which I think is fairly impressive.  I also dither the positions of each of the 10 minute subs (not the sub exposures) in order to reduce fixed pattern noise of the final processed image in PixInsight.

 

I admit what I am doing is not optimal if you want to get the highest SNR in the minimum time.  However, when I star align these 10 minute "subs" in PixInsight and integrate them, the amount of detail in the final image really builds up.

 

There is an issue with these ZWO cameras, however: they have rather small pixels, which means the image scale is small.  For a telescope with a 1 meter focal length, the image scale is quite good for sampling the seeing.  However, on a really large telescope with a very long focal length, the image scale would be too small.  Using a Kepler CMOS camera with 9 micron pixels on a large telescope, I think this approach could produce great results.  So one of my objectives would be to let someone with such a camera and telescope give it a try.

 

 

Bob



#11 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1361
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 20 July 2019 - 04:03 PM

Basic (VB.NET) is compiled, just like C, C++, or C#.  You should be able to use your existing C code and compile it as a library (a Windows dll) and then call that from your ASCOM client, which will also control the camera.  It doesn't sound too complicated to me.  If your existing code is clean and properly platform-agnostic (for example, it runs the same on a big endian machine as little endian) then it should be straightforward to recompile it as a callable library.  That is the point of .NET.  You might also consider that MS Visual Studio is now cross-platform.



#12 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 20 July 2019 - 04:54 PM

555aaa

 

In the old days Basic was interpreted.  Are you saying that now it is a compiled language, just like C?

 

So if I collect my C code as a bunch of functions and then create a dll, I can just call those functions from Visual Basic?  That sounds great!  So does that mean I can avoid the "COM" thing and just program in Visual Basic?

 

What does .Net mean?

 

Bob


Edited by CygnusBob, 20 July 2019 - 04:55 PM.


#13 gregj888

gregj888

    Vanguard

  • -----
  • Posts: 2249
  • Joined: 26 Mar 2006
  • Loc: Oregon

Posted 21 July 2019 - 12:00 PM

Bob,

 

Thank you.  Some musing follows....

 

It takes 10-100 ms exposures to freeze the seeing, depending on seeing and wavelength.  Ideally that's 10-100 FPS, but it's the exposure that is actually important.  That gets daunting at 2.5 sec/frame.  With my 8" SCT from my driveway I can get diffraction rings with 120 ms exposures and a 720 or 780 long pass filter.  With Sloan r' that drops to 30-80 ms depending on the night.  I prefer the Sloan or NB filters, but for what you are doing I would use the Sloans to start (narrower with sharper cuts).

 

I will encourage you to pick a language that supports multiprocessors and even GPU coprocessors.  

 

Granted, these are only 128 x 128 pixel frames (tiny), but the Jetson processes 1000 frames off the disk in 7.6 sec: 6.7 seconds to read from the disk and less than 1 sec to process.  Also Fourier based.

 

For the occultation program I need 30-50 fps for short occultations (TNOs) but that really limits the number of available stars to m10 - m16 depending on the camera and scope.  So there's an interest in stacking these short exposures to about 1 sec and those to about 1 minute.  Great application for a pipeline and some parallel processing.  

 

If you can get a copy of Astrolive USB (it was free if you have a ZWO camera), you can try it with some really short exposures and see how it does and if there's a sweet spot for your application.  

 

BTW, the jury is out on calibrating CMOS images, assuming a clean optical train.  The CMOS cameras do an in-camera calibration, which gets you partway, at least.  For speckle (spatial) we don't calibrate for intensity any more.  If read noise is <1.5e- and exposures are <1 s, there's not much benefit in dark subtraction.  Bias frames can remove some FPN if it's actually fixed; if the camera does its internal cal every frame, even this doesn't work.  Try taking, say, 100 short exposures and adding them, then do it again, and again, for 3 stacks.  Now subtract one stack from another and see if the results actually have lower noise and less patterning.  My QHY 5L-IIm, if all is done without changing settings, gets really flat (patterns virtually disappear).  With some newer cameras that re-cal internally on their own, there's no benefit (mathematically).  I don't have either of your cameras.  Flat fielding may be beneficial, but don't take it as a certainty until verified.  For the occultation SW I don't plan to flatfield and will let the averaging drive things closer to neutral.
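The stack-subtraction check suggested above can be simulated to see why it works: a pattern that really is fixed is common to both stacks, so it cancels in their difference, leaving only the random noise.  A hypothetical NumPy simulation (the noise levels are invented for illustration, not measured from any camera):

```python
import numpy as np

rng = np.random.default_rng(1)
fpn = rng.normal(0.0, 5.0, size=(64, 64))   # a truly fixed pattern, identical in every frame

def make_stack(n_frames=100):
    """Co-add n short exposures that all share the same fixed pattern."""
    read_noise = rng.normal(0.0, 2.0, size=(n_frames, 64, 64))
    return (fpn[None, :, :] + read_noise).sum(axis=0)

stack1 = make_stack()
stack2 = make_stack()
# The fixed pattern cancels in the difference; only random read noise
# (grown by sqrt(2 * n_frames)) remains.
diff_std = np.std(stack1 - stack2)
single_std = np.std(stack1 - stack1.mean())   # dominated by the 100x-amplified pattern
```

If a camera re-calibrates internally between frames, the "fixed" pattern changes from frame to frame and no longer cancels this way, which is the failure mode described in the post.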



#14 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 21 July 2019 - 01:19 PM

gregj888

 

At the moment I am not that enthusiastic about trying to freeze the turbulence for deep sky objects.  Don't get me wrong, it would be great if we could; however, for many of the targets we would be in a regime of photon counting!  For what I am doing, I need to know the image shifts of the sub exposures to sub pixel accuracy and then shift these images to align them with the reference image before stacking.  I don't see how I could do that with very short exposures unless some really bright stars were in the field of view.  However, I don't want to depend on bright guide stars to do this.  In fact, I don't use guide stars at all!  A lot of these targets do not have conveniently located bright stars.  I use Fourier analysis to both measure the image shifts and shift the images.  I don't think my methodology would work well with a 0.2 meter telescope at a light polluted site for very short exposures.  However, with a 1 meter telescope at a dark site, using cameras with big pixels (Kepler, 9 microns, for example) and taking 1/10 second exposures of DSOs, it might actually work well!  In that case, your suggestion of using GPUs or some sort of massive parallel processing might be fantastic.  So maybe it is worth developing software to do this even if I can never lay my hands on such equipment.

 

Bob


Edited by CygnusBob, 21 July 2019 - 06:25 PM.


#15 gregj888

gregj888

    Vanguard

  • -----
  • Posts: 2249
  • Joined: 26 Mar 2006
  • Loc: Oregon

Posted 22 July 2019 - 10:42 AM

Bob,

 

The cameras you have are effectively photon counting, so that's a plus.  Yes, you would need guide stars and they would need to be close (isoplanatic), so that's a significant negative, especially with a 200 mm.

 

Have you tried (or is your program based on) triple correlation/bispectrum?  The Speckle Toolbox has that built in, and it might be worth running a data set through it.



#16 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 22 July 2019 - 12:21 PM

gregj888

 

With 5 second exposures over large regions like 3840 by 2160 pixels, I am not currently in a "photon counting" regime.  If I went down to 1/10 second, I would be.

 

I have two ways to measure image shifts: cross correlation, and measuring the phase of complex phasors in the spatial frequency domain.  Cross correlation is good for large shifts, but is a bit problematic for sub pixel accuracy.  My favorite method makes use of the Fourier transform shift theorem.  It's great for measuring image shifts to sub pixel accuracy, but is problematic for large shifts.  Normally, since the shifts should be small during guiding, I use this Fourier phase method.  It's also faster; only one FFT is required for each new sub exposure.
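As a concrete illustration of the Fourier phase idea described above (a sketch, not the poster's actual code), one can shift an image with the shift theorem and then recover the sub-pixel shift from the phase of the cross spectrum at the lowest nonzero spatial frequencies:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Translate an image by (dy, dx) pixels via the Fourier shift theorem."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * ramp).real

def measure_shift(ref, img):
    """Estimate a small translation of img relative to ref from the phase
    of the cross spectrum at the lowest nonzero spatial frequencies."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    ny, nx = ref.shape
    dy = np.angle(cross[1, 0]) * ny / (2.0 * np.pi)   # phase slope along y
    dx = np.angle(cross[0, 1]) * nx / (2.0 * np.pi)   # phase slope along x
    return dy, dx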

 

 

Bob


Edited by CygnusBob, 22 July 2019 - 12:24 PM.


#17 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1361
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 22 July 2019 - 03:34 PM

I am interested in basically the same thing for interframe guiding.  In my use case I'd assume the exposures are relatively long, maybe a few minutes.  The image shift is measured and used to change the tracking rate to compensate for variations in drift, using a conventional control algorithm or an adaptive control.  It also needs sub-pixel accuracy.  It doesn't have to plate solve, but it has to know the image scale and orientation so that the correct orientation of mount corrections can be used.  It could also use ASCOM pulse guiding.  I am familiar with the FFT phase method.  I'm willing to pursue this non-commercially, so if you want to collaborate we should talk.  There is a program, I am told, that already does this in Java: SIPS.  I have been meaning to try it out.
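For the mount-correction side described above, converting a measured pixel shift into pulse-guide durations needs only the image scale, the camera rotation angle, and the mount's guide rate.  A hypothetical sketch (the function name and sign convention are illustrative; a real driver would issue the result through ASCOM pulse guiding):

```python
import math

def shift_to_pulse(dy_pix, dx_pix, scale_arcsec, theta_deg, guide_rate_arcsec_s):
    """Turn a measured image shift (pixels) into RA/Dec pulse-guide
    durations (milliseconds), given the image scale (arcsec/pixel), the
    camera rotation angle, and the guide rate (arcsec/second)."""
    t = math.radians(theta_deg)
    # Rotate the pixel shift from camera coordinates into the RA/Dec frame.
    d_ra = (dx_pix * math.cos(t) + dy_pix * math.sin(t)) * scale_arcsec
    d_dec = (-dx_pix * math.sin(t) + dy_pix * math.cos(t)) * scale_arcsec
    # Pulse in the opposite direction to cancel the measured drift.
    return (-1000.0 * d_ra / guide_rate_arcsec_s,
            -1000.0 * d_dec / guide_rate_arcsec_s)
```

A negative duration here simply means pulsing in the opposite guide direction; a real implementation would also need the guide-rate calibration step Bruce mentions.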

 

https://www.gxccd.co...cat=22&lang=409

 

 https://www.gxccd.co...id=146&lang=409



#18 CygnusBob

CygnusBob

    Vostok 1

  • -----
  • topic starter
  • Posts: 148
  • Joined: 30 Jun 2018
  • Loc: Las Vegas, NV

Posted 22 July 2019 - 06:07 PM

555aaa

 

The method I am using would work very well with high SNR, long exposure images.  No guide stars or plate solves needed.

 

Why don't you send me one of those private messages?  Or tell me how to do it.

 

Bob


Edited by CygnusBob, 22 July 2019 - 06:08 PM.


#19 Rickster

Rickster

    Viking 1

  • *****
  • Posts: 768
  • Joined: 09 Jun 2008
  • Loc: NC Kansas Bortle 3 SQM 21.8+

Posted 02 August 2019 - 01:17 PM

If your code could be integrated into SharpCap, you would have a killer.  SharpCap seems like a good match for your code because its users (EAA) typically use Alt/Az mounts that work best with short exposures.  SharpCap already has most, if not all, of the features that you are looking for.  It seems to me (a non programmer) that what you are proposing would make an excellent upgrade/add-on.  And using SharpCap would give you an instant user base (who would no doubt consider you a hero).  You might try making a proposition in their forum: https://forums.sharpcap.co.uk/.  If they think it is feasible, you could check the potential user enthusiasm here: https://www.cloudyni...t-processing/.



