
Deep learning denoising for astrophotography

25 replies to this topic

#1 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 18 September 2021 - 11:48 AM

I'm a physicist in my day job and have recently been working on deep learning denoising networks for other applications. These tools are very powerful and tend to outperform classical denoising strategies. Here is a good reference on such tools for regular photography (the DnCNN paper):
https://arxiv.org/pdf/1608.03981.pdf

 

Regular CNNs and U-Nets normally run in near real time on consumer GPUs.

 

Now astro images are different in that we want to preserve very bright stars, so I see a need to retrain these networks on proper astrophotography images. I see two ways to achieve this (a rough sketch of the first follows the list):

  • Get high SNR images post stack and model camera noise (with proper noise statistics).
  • Directly compare a single sub to the final stack (with both images aligned)
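As a rough illustration of the first option - a minimal sketch assuming a simple Poisson-Gaussian camera model, with made-up gain and read-noise values - synthesizing a noisy "sub" from a high-SNR stack could look like this:

```python
import numpy as np

def add_camera_noise(clean_e, gain=1.6, read_noise_e=3.5, rng=None):
    """Synthesize one noisy 'sub' from a high-SNR stack.

    clean_e      -- ground-truth image in electrons (from a deep stack)
    gain         -- e-/ADU conversion factor (illustrative value)
    read_noise_e -- Gaussian read noise in electrons (illustrative value)
    """
    rng = rng or np.random.default_rng()
    clean_e = np.clip(clean_e, 0, None)                   # Poisson rate must be >= 0
    shot = rng.poisson(clean_e)                           # photon (shot) noise
    read = rng.normal(0.0, read_noise_e, clean_e.shape)   # camera read noise
    return (shot + read) / gain                           # back to ADU

# Training pair: (noisy input, clean target)
# clean = load_stack("m31_stack.fits")   # hypothetical loader and file name
# noisy = add_camera_noise(clean)
```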

 

I saw a few other posts on this forum and was wondering if anyone already went through the effort to collect data for this purpose. If not, I would be happy to start this work. Of course, results/dataset would be available to the public.


  • lambermo, ks__observer, TrustyChords and 1 other like this

#2 vidrazor

vidrazor

    Fly Me to the Moon

  • *****
  • Posts: 6,016
  • Joined: 31 Oct 2017
  • Loc: North Bergen, NJ

Posted 18 September 2021 - 01:02 PM

Interesting. This stuff is technically over my head, but it would be interesting to see improved methods of effective chroma and luma noise reduction.



#3 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 35,623
  • Joined: 27 Oct 2014

Posted 18 September 2021 - 05:15 PM

Here's a target for you.  Doing it better than this.

 

https://www.topazlabs.com/denoise-ai


  • dswtan and charles.tremblay.darveau like this

#4 Jim Thommes

Jim Thommes

    Fly Me to the Moon

  • *****
  • Posts: 7,328
  • Joined: 20 Sep 2004
  • Loc: San Diego, CA, USA

Posted 18 September 2021 - 05:22 PM

Way over my head too.

 

But in glancing through the paper, I was wondering if this is similar to MureDenoise in PixInsight (minimization of an unbiased risk estimator)? Its intent is to remove the noise from a noisy observation. I translate that to camera read noise in astro imaging.

 

Also, did you see this post - https://www.cloudyni...se-attenuation/

(Topic is "Deep Learning for random noise attenuation")



#5 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 18 September 2021 - 06:12 PM

Here's a target for you.  Doing it better than this.

 

https://www.topazlabs.com/denoise-ai

 

Yes, I'm aware of Topaz DeNoise. It goes toward what I was thinking, except I'm ready to bet that their network is trained on natural images (e.g. birds, cats, bears, planes, ...) from standard off-the-shelf datasets. If you want a denoiser to work well on astro images, it's best to train it on astro images. Natural images will work to some extent, but the network may mistake stars for noise, or worse. It's also $80, lol.

 

 

Way over my head too.

 

But in glancing through the paper, I was wondering if this is similar to MureDenoise in PixInsight (minimization of an unbiased risk estimator)? Its intent is to remove the noise from a noisy observation. I translate that to camera read noise in astro imaging.

 

Also, did you see this post - https://www.cloudyni...se-attenuation/

(Topic is "Deep Learning for random noise attenuation")

It seems this post is using a deep prior, which from what I read is a type of network based on GANs. It's difficult to assess without source code or training data, though. MureDenoise seems more like an SNR-aware denoiser, which falls in the classical signal-processing category. Looks good though...

 

Interestingly, I found a different arXiv paper that tried idea #1 (modeling the noise) using open-source Hubble images.
https://arxiv.org/abs/2011.07002

The authors were kind enough to leave some GitHub source code, but unfortunately not the network weights. I guess I'll start there and try to re-train their network.


Edited by charles.tremblay.darveau, 18 September 2021 - 06:13 PM.

  • Hobby Astronomer and Der_Pit like this

#6 RazvanUnderStars

RazvanUnderStars

    Vanguard

  • *****
  • Posts: 2,085
  • Joined: 15 Jul 2014
  • Loc: Ontario, Canada

Posted 18 September 2021 - 09:16 PM

Charles, you've probably seen a (heated, at times) thread about Topaz (both the denoiser and the sharpener) that took place this summer. You're already aware of one of its limitations (the network having been trained on non-astronomical data).

 

The other limitation, discussed at length in the thread as well, is the risk of having artificially synthesized details in areas where there is a faint object - where we need denoising most. I'm sure you're aware of it. It will be interesting to learn if there is a way to control it (I don't want to use the more specific term 'regularization', since I don't know whether it can address this particular issue).

 

Your thoughts on the matter will be appreciated.



#7 whwang

whwang

    Fly Me to the Moon

  • *****
  • Posts: 5,001
  • Joined: 20 Mar 2013

Posted 18 September 2021 - 09:25 PM

The problem with Topaz DeNoise is that it very often over-sharpens the image and creates artifacts. It also sometimes cannot tell the subtle difference between noise and real astronomical features. So I think a dedicated program trained on astronomical images (and only astronomical images) can beat it quite easily.

 

To come up with training images, I think it should be much easier than the case of star removal. One can just collect something like 30 to 60 subs, stack all of them, and also stack just a few (4, for example). The two stacks can then serve as the training images: the stack of a few images is the noisy input, and the stack of several tens of images is the ground truth. This is easy because we do this kind of thing all the time. The only extra effort would be to create another stack of a few images.
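A minimal sketch of the pairing described above, assuming the subs are already registered and saved as FITS (the folder name is hypothetical):

```python
import glob

import numpy as np
from astropy.io import fits

def mean_stack(paths):
    """Average-combine a list of aligned subs into one image."""
    return np.mean([fits.getdata(p).astype(np.float32) for p in paths], axis=0)

subs = sorted(glob.glob("aligned/*.fits"))   # hypothetical folder of aligned subs
noisy_input = mean_stack(subs[:4])           # shallow stack = network input
ground_truth = mean_stack(subs)              # deep stack = training target
```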


  • rockstarbill likes this

#8 Mert

Mert

    Voyager 1

  • *****
  • Posts: 11,438
  • Joined: 31 Aug 2005
  • Loc: Spain

Posted 19 September 2021 - 08:21 AM

Interesting to see progress on this!



#9 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 19 September 2021 - 01:34 PM

Charles, you've probably seen a (heated, at times) thread about Topaz (both the denoiser and the sharpener) that took place this summer. You're already aware of one of its limitations (the network having been trained on non-astronomical data).

 

The other limitation, discussed at length in the thread as well, is the risk of having artificially synthesized details in areas where there is a faint object - where we need denoising most. I'm sure you're aware of it. It will be interesting to learn if there is a way to control it (I don't want to use the more specific term 'regularization', since I don't know whether it can address this particular issue).

 

Your thoughts on the matter will be appreciated.

 

First, please take my opinion with a grain of salt: I'm a deep learning enthusiast, but certainly not at the level of Google engineers. While some networks can synthesize details, convolutional networks typically only sharpen, blur, or clip. I'm not worried about the network creating stars, but it may clip some low-SNR signal. Losing some low-SNR signal is inevitable even with classical edge-aware filters.

 

 

The problem with Topaz DeNoise is that it very often over-sharpens the image and creates artifacts. It also sometimes cannot tell the subtle difference between noise and real astronomical features. So I think a dedicated program trained on astronomical images (and only astronomical images) can beat it quite easily.

 

To come up with training images, I think it should be much easier than the case of star removal. One can just collect something like 30 to 60 subs, stack all of them, and also stack just a few (4, for example). The two stacks can then serve as the training images: the stack of a few images is the noisy input, and the stack of several tens of images is the ground truth. This is easy because we do this kind of thing all the time. The only extra effort would be to create another stack of a few images.

 

My thought exactly. If we all chip in on the dataset, it would be very easy to get sufficient data. The challenge would be accounting for variables such as gain, exposure, and sky brightness.



#10 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,649
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 19 September 2021 - 06:37 PM

In landscape photography the views are always different, but for deep sky imaging most objects are static except for occasional changes like a nova.

 

So if you can plate solve an image and determine its size and orientation - you can go on the web and use all available versions of that same scene to enhance the given image.  There isn't really a need to train on anything.

 

In fact, you don't even need an original image in the first place - just the coordinates, boundary and filters used.

 

Frank



#11 Jim Thommes

Jim Thommes

    Fly Me to the Moon

  • *****
  • Posts: 7,328
  • Joined: 20 Sep 2004
  • Loc: San Diego, CA, USA

Posted 19 September 2021 - 07:05 PM

In landscape photography the views are always different, but for deep sky imaging most objects are static except for occasional changes like a nova.

 

So if you can plate solve an image and determine its size and orientation - you can go on the web and use all available versions of that same scene to enhance the given image.  There isn't really a need to train on anything.

 

In fact, you don't even need an original image in the first place - just the coordinates, boundary and filters used.

 

Frank

OK, but kinda takes away some of the fun of the hobby. 


  • Lead_Weight likes this

#12 Jim Thommes

Jim Thommes

    Fly Me to the Moon

  • *****
  • Posts: 7,328
  • Joined: 20 Sep 2004
  • Loc: San Diego, CA, USA

Posted 19 September 2021 - 07:06 PM

....

 

My exact same thought. If we all chip-in for the dataset it would be very easy to get sufficient data. Challenge would be to account for variables such as gain, exposure, sky brightness.

 

.....

Charles,
I would be willing to provide some data sets. I have data from 500 mm to 1645 mm FL, refractors and catadioptrics (no reflectors). What would you like? Finished images? Stacked subs? A series of aligned subs? Mono? Color?

 

What formats: .TIF, .FITS, .XISF, other?

 

Could end up being a lot of data (GBytes). Set up a drop box for contributors?



#13 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,649
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 19 September 2021 - 09:29 PM

OK, but kinda takes away some of the fun of the hobby. 

That’s basically my point.  It’s like handing a noisy image to an artist who then smooths and embellishes it with details that look nice but aren’t really there.  Any change in a pixel value by a routine that wants to alter the image into something it thinks should be there is by definition an artifact.

 

My other point is that there is no need to learn what Astro objects look like in general if for each image you can find a high-res version of it.  You could use deep learning to improve fuzzy images of lunar craters - but there is no need if for any lunar image you can identify the region and refer to a high res version of it.

 

This all makes sense as something to do and it would be interesting to see how well it can conjure up details.  But I much prefer imaging to conjuring.  And I sure hope this stuff doesn’t become a standard form of post-processing that doesn’t even get mentioned.

 

Frank



#14 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 19 September 2021 - 11:26 PM

That’s basically my point.  It’s like handing a noisy image to an artist who then smooths and embellishes it with details that look nice but aren’t really there.  Any change in a pixel value by a routine that wants to alter the image into something it thinks should be there is by definition an artifact.

 

My other point is that there is no need to learn what Astro objects look like in general if for each image you can find a high-res version of it.  You could use deep learning to improve fuzzy images of lunar craters - but there is no need if for any lunar image you can identify the region and refer to a high res version of it.

 

This all makes sense as something to do and it would be interesting to see how well it can conjure up details.  But I much prefer imaging to conjuring.  And I sure hope this stuff doesn’t become a standard form of post-processing that doesn’t even get mentioned.

 

Frank

Yes, I agree that going for deep-fake networks would take the fun out of it. But what I'm thinking of are convolution-based networks, which means the only thing they do is either blur or sharpen per pixel. This kind of thing is already used in medical imaging, and they certainly don't want fake diagnoses ;)


  • TrustyChords likes this

#15 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,649
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 19 September 2021 - 11:59 PM

Yes, I agree that going for deep-fake networks would take the fun out of it. But what I'm thinking of are convolution-based networks, which means the only thing they do is either blur or sharpen per pixel. This kind of thing is already used in medical imaging, and they certainly don't want fake diagnoses ;)

A key test for me is whether such methods are allowed in top journals like Science or Nature. Using such things to improve image segmentation in medical imaging is one thing, but for supporting images in journal articles I doubt it would be allowed. I guess we'll see.
 

Frank



#16 vidrazor

vidrazor

    Fly Me to the Moon

  • *****
  • Posts: 6,016
  • Joined: 31 Oct 2017
  • Loc: North Bergen, NJ

Posted 20 September 2021 - 12:02 AM

Yes, I agree that going for deep-fake networks would take the fun out of it. But what I'm thinking of are convolution-based networks, which means the only thing they do is either blur or sharpen per pixel. This kind of thing is already used in medical imaging, and they certainly don't want fake diagnoses ;)

:)

Facebook-2b2d5d.png


Edited by vidrazor, 20 September 2021 - 12:03 AM.

  • rockstarbill, TrustyChords and charles.tremblay.darveau like this

#17 Jim Thommes

Jim Thommes

    Fly Me to the Moon

  • *****
  • Posts: 7,328
  • Joined: 20 Sep 2004
  • Loc: San Diego, CA, USA

Posted 20 September 2021 - 08:39 PM

That’s basically my point.  It’s like handing a noisy image to an artist who then smooths and embellishes it with details that look nice but aren’t really there.  Any change in a pixel value by a routine that wants to alter the image into something it thinks should be there is by definition an artifact.

 

...........

 

This all makes sense as something to do and it would be interesting to see how well it can conjure up details.  But I much prefer imaging to conjuring.  And I sure hope this stuff doesn’t become a standard form of post-processing that doesn’t even get mentioned.

 

Frank

So Frank, I agree with you in principle about changing pixels from their original values - "...Any change in a pixel value by a routine that wants to alter the image into something it thinks should be there is by definition an artifact..." But your statement seems an extreme statement of the principle. For example, we stack sub-frames to achieve a noise-reduced result. A pixel is, as such, modified by its treatment (median, average, etc.). What about pixel rejection of aircraft or satellite streaks? For that matter, what about a simple histogram stretch? There are likely many other examples of such routines. These are all routines many of us in astro imaging use regularly.

 

Where do we draw the line? Many of the routines we use regularly for post-processing can be misused by the imager to the point where they create "serious" artifacts. We generally don't reject these routines because the imager misused them. Perhaps you object to the development of deep learning neural network routines? These routines could conceivably lead the inexperienced imager to accept essentially false results as fact.

 

So again, I agree with you: I would definitely not want to see these routines mainstreamed and misused on a large scale, with such misuse resulting in object features that are not there, and with some claiming to find "new" nonexistent details on mostly static objects, claims that could easily be refuted by publicly available professional data/images.

 

Thanks for expressing needed caution about potentially overzealous noise reduction and/or sharpening routines.

 

 


  • charles.tremblay.darveau likes this

#18 Higgsfield

Higgsfield

    Viking 1

  • -----
  • Vendors
  • Posts: 701
  • Joined: 10 Sep 2020

Posted 20 September 2021 - 09:56 PM

Here's a target for you.  Doing it better than this.

 

https://www.topazlabs.com/denoise-ai

This piece of software does a remarkable job of tightening up stars and reducing chrominance noise. I integrated data from the Wizard Nebula after about 15 hrs total time just to get a peek, and I'm impressed with the result using Topaz DeNoise. I now have over 45 hrs of data on this target, and I'm wondering if I should perhaps spend less time on a target in favour of shooting more targets and use this program in an intermediate way as part of my workflow? I.e., bring a Topaz-denoised image back into PixInsight for further processing. Since this is the trial version, I cannot see what file save options it supports, but I would assume TIF. http://drive.google....iew?usp=sharing

 

Wizard-TopazDenois.JPG  


Edited by Higgsfield, 20 September 2021 - 09:58 PM.


#19 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,649
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 20 September 2021 - 10:31 PM

So Frank, I agree with you in principle about changing pixels from their original values - "...Any change in a pixel value by a routine that wants to alter the image into something it thinks should be there is by definition an artifact..." But your statement seems an extreme statement of the principle. For example, we stack sub-frames to achieve a noise-reduced result. A pixel is, as such, modified by its treatment (median, average, etc.). What about pixel rejection of aircraft or satellite streaks? For that matter, what about a simple histogram stretch? There are likely many other examples of such routines. These are all routines many of us in astro imaging use regularly.

 

Where do we draw the line? Many of the routines we use regularly for post-processing can be misused by the imager to the point where they create "serious" artifacts. We generally don't reject these routines because the imager misused them. Perhaps you object to the development of deep learning neural network routines? These routines could conceivably lead the inexperienced imager to accept essentially false results as fact.

 

So again, I agree with you: I would definitely not want to see these routines mainstreamed and misused on a large scale, with such misuse resulting in object features that are not there, and with some claiming to find "new" nonexistent details on mostly static objects, claims that could easily be refuted by publicly available professional data/images.

 

Thanks for expressing needed caution about potentially overzealous noise reduction and/or sharpening routines.

In astronomical imaging there is a phase of pre-processing that is deterministic and driven by noise models and statistics.  And it is almost all happening at the pixel level.  Rejecting satellite trails or cosmic rays is part of that noise model, where you reject outliers as expected events in imaging with known causes.  And that process amounts to rejecting data that you deem not to be valid - as opposed to selectively changing values based on what you think they should be.
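As a rough sketch of the kind of statistics-driven rejection described here (not Frank's code; the kappa threshold is an illustrative choice), per-pixel outlier rejection during stacking can be as simple as:

```python
import numpy as np

def sigma_clip_stack(subs, kappa=3.0):
    """Mean-combine aligned subs, rejecting per-pixel outliers
    (satellite trails, cosmic rays) beyond kappa standard deviations.

    subs: array of shape (n_subs, height, width); kappa is illustrative.
    """
    subs = np.asarray(subs, dtype=np.float32)
    med = np.median(subs, axis=0)
    sigma = np.std(subs, axis=0) + 1e-6            # avoid divide-by-zero
    keep = np.abs(subs - med) < kappa * sigma      # True = pixel survives
    return np.sum(subs * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)
```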

 

Once you have processed and stacked the exposures you can apply linear and even nonlinear global operations in order to make the data visible. This has to be done for the image to be viewable - and since all operations are global, there is no chance to bias or steer the result toward something you want to see that isn't actually there.

 

So by 'artifact' I mean any change at that point that is local and based on what is going on in the vicinity of a pixel.  Whether it is an artist wanting to make the image look prettier there, or an algorithm using what it has "learned" to fix up a patch of pixels so they look different.  There is no noise model involved, it isn't rejecting invalid data, it isn't acting based on what is happening in a single column of pixels, and it is making a local change based on what it has 'learned' from other images.

 

The above summarizes the rules for image submission in top journals - and a key aspect is that local, selective changes have not been made.  Even smoothing or sharpening is discouraged - but as long as it is done in a global way it may be ok.  Anything with the word 'adaptive' in it probably isn't.

 

So I think the above summarizes a very clear line - and it isn't just my line - it is the line drawn by top journals.

 

If you ignore that line then I think there really is a slippery slope where an image I see that looks nice and compelling - isn't really an "image" - it is an artistic reinterpretation with no holds barred on what was done to create it.  It's not so bad if all the processing is spelled out - but with this new stuff I'm concerned where it is headed.

 

For me, and at least some others, knowing that these additional manipulations have been applied really detracts from the impact an image has - because I don't have a sense I am looking at an actual image captured under the sky.

 

As I said in my post - I'm not clear where top journals will go with this stuff. It is one thing to use these methods in medical imaging to aid in segmenting tissue, for example, but for "denoising" astronomical images so they look cleaner - I'm not sure where that will go. There are certainly papers from people working in those areas - but I don't know if top journals are allowing it.

 

Obviously this stuff isn't my cup of tea - but it's still interesting to me and I'd like to see where it goes.  And if it does get endorsed as a valid way to "denoise" and even bring out more detail that isn't there - I'd like to hear more about it.  And if people want to do this stuff and post examples that's also fine with me.  But I sure wish people would post more examples *without* the extra processing and before applying local manipulations.  Every image in CN was at one point in a form that met the criteria above - and in terms of seeing how well different setups and techniques capture good data - that would be very informative.  And it lets the object in the sky speak for itself.

 

Frank


  • RazvanUnderStars and Rasfahan like this

#20 Higgsfield

Higgsfield

    Viking 1

  • -----
  • Vendors
  • Posts: 701
  • Joined: 10 Sep 2020

Posted 20 September 2021 - 10:34 PM

To answer my own question, I think using this program to reduce noise prior to other enhancement operations in PixInsight or some other astro processing program could be worthwhile. I'm not sure that one could do better than what this program already does. It's likely already a 90% solution, and any improvements would likely be marginal, especially if one brought the image back into PixInsight.

 

I ran the previous image back through PixInsight, doing a few curves adjustments on the denoised image.

 

Wizard_PNG_ToTiff_PixinsightProcessed_ScreenGrab_jpg.JPG



#21 Higgsfield

Higgsfield

    Viking 1

  • -----
  • Vendors
  • Posts: 701
  • Joined: 10 Sep 2020

Posted 20 September 2021 - 11:13 PM

One last post.

 

The problem with Topaz DeNoise is that it very often over-sharpens the image and creates artifacts. It also sometimes cannot tell the subtle difference between noise and real astronomical features. So I think a dedicated program trained on astronomical images (and only astronomical images) can beat it quite easily.

 

To come up with training images, I think it should be much easier than the case of star removal. One can just collect something like 30 to 60 subs, stack all of them, and also stack just a few (4, for example). The two stacks can then serve as the training images: the stack of a few images is the noisy input, and the stack of several tens of images is the ground truth. This is easy because we do this kind of thing all the time. The only extra effort would be to create another stack of a few images.

I too looked at this program a while ago and came to the same conclusion. However, I've now totally changed my mind. There is good control over the amount of noise reduction, as well as sharpening if you choose to do any. I'm thinking star removal, over to Topaz DeNoise, then back into PixInsight.

 

Here's a split screen of Pickering's Triangle: https://drive.google...iew?usp=sharing

 

PickeringsTriangle (1).JPG


Edited by Higgsfield, 20 September 2021 - 11:14 PM.


#22 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 01 October 2021 - 12:35 PM

Some early results... I've played with the codebase from the arXiv paper. As usual, I had to re-write a lot of the code due to TensorFlow version changes, and some of the data libraries seem corrupted (fun...). I've moved toward a ResNet approach, which is usually less aggressive on the noise and less artifact-prone.
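For readers curious what such a network looks like, here is a generic residual-denoiser sketch in Keras, in the spirit of the DnCNN paper linked in the first post - not the actual network used here, and the depth/width hyperparameters are made up:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_denoiser(channels=1, depth=8, width=64):
    """DnCNN-style residual denoiser: the network predicts a noise map,
    which is subtracted from the input (hyperparameters are illustrative)."""
    x_in = tf.keras.Input(shape=(None, None, channels))
    x = layers.Conv2D(width, 3, padding="same", activation="relu")(x_in)
    for _ in range(depth - 2):
        x = layers.Conv2D(width, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    noise = layers.Conv2D(channels, 3, padding="same")(x)
    # Residual skip connection: output = input minus predicted noise
    return tf.keras.Model(x_in, layers.Subtract()([x_in, noise]))

# model = residual_denoiser()
# model.compile(optimizer="adam", loss="mse")   # train on (noisy, clean) pairs
```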

 

Using only a few images from the Hubble dataset and the Poisson statistics highlighted in the paper, I've tested it on one color channel of one of my images:

Input

ResNet Test1 input

 

Denoised output

ResNet Test1 Denoised
 

 

Definitely early results, but you can see the network already does some good edge preservation and doesn't blur the stars too much, considering this is linear data (a huge dynamic range, which is normally difficult for convolutions to handle). I'm now looking into fine-tuning the noise model to account for different stack depths, and into using non-Hubble datasets.


  • noodle and lambermo like this

#23 lambermo

lambermo

    Apollo

  • -----
  • Posts: 1,325
  • Joined: 16 Jul 2007
  • Loc: .nl

Posted 03 October 2021 - 03:59 PM

Cool that you have a network First Light ;-) Is the change of contrast to be expected? Or maybe misrepresented by a stretch?



#24 charles.tremblay.darveau

charles.tremblay.darveau

    Viking 1

  • *****
  • topic starter
  • Posts: 947
  • Joined: 16 Oct 2020
  • Loc: Seattle, WA, USA

Posted 03 October 2021 - 05:36 PM

Cool that you have a network First Light ;-) Is the change of contrast to be expected? Or maybe misrepresented by a stretch?

Good question. The two images are stretched by the same amount (I'm simply using a linear stretch with the same range to show the noise). This is also visible in the arXiv paper, so I assume it is inherent to the noise model used (perhaps tied to the dark-current offset). The next step is to review the noise model to make it more realistic to what we collect with our cameras. I also think using real stacked data would improve performance, but that's a lot more work to prepare the dataset. Conservation of agony, I suppose XD.

 

That being said, denoising will inherently trade resolution/contrast for SNR; that's just the nature of the beast. If you go aggressive on the denoising step, you can recover detail afterwards using contrast-enhancement techniques (wavelets, unsharp mask, CLAHE, etc.).
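For illustration, the unsharp mask mentioned above just adds back a scaled high-pass residual; a minimal sketch (the sigma and amount values are illustrative, not tuned recommendations):

```python
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=0.7):
    """Classic unsharp mask: sharpen by adding back the high-pass residual."""
    blurred = gaussian_filter(img, sigma)    # low-pass version of the image
    return img + amount * (img - blurred)    # boost what the blur removed
```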



#25 RazvanUnderStars

RazvanUnderStars

    Vanguard

  • *****
  • Posts: 2,085
  • Joined: 15 Jul 2014
  • Loc: Ontario, Canada

Posted 03 October 2021 - 07:41 PM

Charles, you may find this of interest: "a new technique developed by scientists at the Max Planck Institute for Solar System Research (MPS) in Germany. They used an AI algorithm called the Hyper-effective nOise Removal U-net Software (HORUS). HORUS's primary goal is to 'clean up' the noisy images of the bottom of unlit craters collected by other spacecraft, such as the Lunar Reconnaissance Orbiter (LRO)."

 

https://www.universe...ts-inside-them/



