CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Deep Learning for random noise attenuation

astrophotography ccd imaging
  • Please log in to reply
23 replies to this topic

#1 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 15 April 2019 - 05:05 PM

Our limiting factor in astrophotography is definitely noise. It can only be reduced by stacking exposures or by filtering during processing. There are two ways to reduce the random noise level: a vertical one, stacking several pictures of the same object; and spatial ones, which are different kinds of clever smoothing. I think machine learning can open up a new kind of noise reduction.
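The "vertical" stacking gain can be illustrated with a quick simulation (a sketch with synthetic data, not astro software): averaging N frames of the same target cuts the random noise by roughly sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0     # "true" pixel value
sigma = 10.0       # per-frame random noise (standard deviation)
n_frames = 16

# Simulate n_frames noisy exposures of the same 100,000-pixel region.
frames = signal + rng.normal(0.0, sigma, size=(n_frames, 100_000))

single_noise = frames[0].std()            # noise of one frame: ~sigma
stack_noise = frames.mean(axis=0).std()   # noise of the mean: ~sigma/sqrt(n)

# The improvement ratio comes out close to sqrt(16) = 4.
print(single_noise / stack_noise)
```

This is why a 10-hour integration looks so much cleaner than a single sub: the gain is guaranteed by statistics, not by any model of what the image "should" look like.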

 

The trend in denoising images with machine learning is to train a GAN on a large set of examples. That approach can be very efficient for smartphone pictures, but it is very dangerous for scientific data, as it can introduce structures from another dataset. To improve the quality of my astronomy pictures, I'm using machine learning that only uses the corrupted data itself, i.e. without a priori information.

 

I used a deep convolutional network approach called Deep Prior to denoise my astronomy data. Unlike a trained GAN, this algorithm doesn't search for the answer in image space; it searches for it in the space of the neural network's parameters. It's like a brain that trains itself to recreate the image without the noise.
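For readers curious what "searching in the space of the network's parameters" means in practice, here is a heavily simplified PyTorch sketch of the deep-image-prior idea. It is not the authors' network (theirs is a much deeper encoder-decoder) and the sizes here are toy values:

```python
import torch
import torch.nn as nn

# Toy illustration of the deep-image-prior idea: fit a small conv net,
# driven by a FIXED random input, to reproduce one noisy image. The net
# tends to learn smooth structure before it memorizes the noise, so an
# early-stopped output is a denoised estimate. No training data and no
# pretrained weights -- only the corrupted image itself is used.
torch.manual_seed(0)
noisy = torch.rand(1, 1, 64, 64)   # stand-in for a noisy sub-frame
z = torch.randn(1, 8, 64, 64)      # fixed random "code" input

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(200):            # stopping early is the regularizer
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()

denoised = net(z).detach()         # the restored image estimate
print(denoised.shape)
```

The "prior" is the architecture itself: convolutions reproduce natural image structure more easily than pixel-level noise, which is why the optimization, stopped at the right time, returns a cleaner image.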

 

I shot the M51 galaxy with a 0.4 m Ritchey-Chrétien telescope in New Mexico. The top left of the attached figure is a raw 20-minute exposure; the bottom right is a restoration using Deep Prior. This is well above a conventional denoising result. Of course it's below a stack; this is just a test to show the technology. I highly recommend using it post-stack.

 

Comparison_DeepPrior_M51_Zoom.jpg

 

More of my work here: https://www.instagram.com/ben_b_allen/


Edited by BenAllen, 15 April 2019 - 06:33 PM.

  • rustynpp, rvr, calypsob and 6 others like this

#2 TinySpeck

TinySpeck

    Mariner 2

  • *****
  • Posts: 217
  • Joined: 08 Oct 2017
  • Loc: Seattle area

Posted 15 April 2019 - 05:33 PM

Very impressive!  I found Deep Prior at https://github.com/D...eep-image-prior .  It's a Python / Jupyter project.  I'm going to grab it and start experimenting.


  • rvr, calypsob and PirateMike like this

#3 ccs_hello

ccs_hello

    Voyager 1

  • *****
  • Posts: 10196
  • Joined: 03 Jul 2004

Posted 15 April 2019 - 06:18 PM

re: train GAN with a lot of examples

 

How were these examples or training data sets obtained/derived in the first place?

 

P.S. a note to myself about GAN

https://skymind.ai/w...ial-network-gan



#4 PirateMike

PirateMike

    Vanguard

  • -----
  • Posts: 2085
  • Joined: 27 Sep 2013
  • Loc: A Green Dot On A Blue Sea

Posted 15 April 2019 - 06:59 PM

I'm going to try Deep Prior too at the next chance I get.

 

Thanks for the info BenAllen. waytogo.gif

 

 

 

UPDATE: This seems too complicated for me. I thought it was a usual Windows-type program. Or is there something I am missing?

 

 

Miguel   8)


Edited by PirateMike, 15 April 2019 - 07:14 PM.


#5 jhayes_tucson

jhayes_tucson

    Fly Me to the Moon

  • *****
  • Posts: 6782
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 15 April 2019 - 07:37 PM

Have you looked at MureDenoise in PI?  It's a noise estimator that does not rely on spatial diffusion and it produces a similar result on a stacked image.

 

John



#6 calypsob

calypsob

    Aurora

  • *****
  • Posts: 4500
  • Joined: 20 Apr 2013

Posted 15 April 2019 - 07:54 PM

Pretty interesting. I wonder if you could use it to help clean up color noise on an OSC camera, using a mono reference image.



#7 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22916
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 15 April 2019 - 08:45 PM

Very interesting results. I'm curious to see more. You can definitely see how the algorithm handled noise in a single frame...but I am curious to see how it handles the stack. There does appear to be some loss of small details in the single frame...I wonder if it would eat up some of the smaller details of the stack as well? 



#8 adamland

adamland

    Mariner 2

  • -----
  • Posts: 240
  • Joined: 21 Feb 2015
  • Loc: Sammamish, WA

Posted 16 April 2019 - 12:03 AM

Ben, can you post the python script you ended up using to run this on your PNGs? I tried a few modifications of the denoise.py (converted from the ipynb) but wasn't able to get it working properly.



#9 jerahian

jerahian

    Mariner 2

  • *****
  • Posts: 257
  • Joined: 02 Aug 2018
  • Loc: Maine

Posted 16 April 2019 - 12:37 AM

Ben, can you post the python script you ended up using to run this on your PNGs? I tried a few modifications of the denoise.py (converted from the ipynb) but wasn't able to get it working properly.

Adam, just as a note, the denoising.py is a sample script which includes a switch for 2 of the test images included with the source.  I commented out the CUDA TensorFlow option on line 31, as something is not playing well with the v10 CUDA environment, and I replaced it with the non-CUDA TF.  Uncomment it back if that wasn't your issue.  Here is the script that runs for me:

 

https://www.dropbox...._nocuda.py?dl=0

 

Of course, you will need all the other libraries installed as well, per the git markdown notes.

 

NOTE:  The script is extremely SLOW.  50 iterations take about 10 minutes to run (at least for me), and that's out of 2500-3000!  I imagine it's faster if CUDA is enabled.  Also, I'm currently processing a 512x512 subframe of a relatively noisy stacked color M101 of mine.  Anything much bigger than that was crushingly slow, and a full-size frame from a 1x1 binned 1600MM (4656x3520) couldn't allocate enough memory on my 16 GB laptop.  So, realistically, this is somewhat of an academic exercise at the moment.
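For what it's worth, the upstream deep-image-prior code is PyTorch-based, and a device fallback like the following (a sketch only; the actual dtype/device handling inside denoising.py differs) avoids hand-editing the script when a working CUDA setup isn't available:

```python
import torch

# Pick the GPU when a working CUDA build is present, otherwise fall back
# to the CPU, instead of commenting lines in and out by hand.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A 512x512 single-channel frame, matching the subframe size used above.
img = torch.rand(1, 1, 512, 512, device=device)
print(device.type, tuple(img.shape))
```

On CPU it will still be slow, but it will run the same script unmodified on both kinds of machine.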



#10 adamland

adamland

    Mariner 2

  • -----
  • Posts: 240
  • Joined: 21 Feb 2015
  • Loc: Sammamish, WA

Posted 16 April 2019 - 01:48 AM

Adam, just as a note, the denoising.py is a sample script which includes a switch for 2 of the test images included with the source.  I commented out the CUDA TensorFlow option on line 31, as something is not playing well with the v10 CUDA environment, and I replaced it with the non-CUDA TF.  Uncomment it back if that wasn't your issue.  Here is the script that runs for me:

 

https://www.dropbox...._nocuda.py?dl=0

 

Of course, you will need all the other libraries installed as well, per the git markdown notes.

 

NOTE:  The script is extremely SLOW.  50 iterations take about 10 minutes to run (at least for me), and that's out of 2500-3000!  I imagine it's faster if CUDA is enabled.  Also, I'm currently processing a 512x512 subframe of a relatively noisy stacked color M101 of mine.  Anything much bigger than that was crushingly slow, and a full-size frame from a 1x1 binned 1600MM (4656x3520) couldn't allocate enough memory on my 16 GB laptop.  So, realistically, this is somewhat of an academic exercise at the moment.

Hah, so initially I was trying to modify the F16_GT case to avoid the adding noise step, then I realized that I just need to use the snail.jpg case :). Then I hit a GPU memory alloc problem which was solved by your trick of using non-CUDA tensorflow. As you said, it is very slow and basically destroyed my laptop. I'll see if I can figure out what is going on with CUDA as I assume that will help.



#11 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4162
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 16 April 2019 - 07:20 AM

I think that the examples in post 1 show some of the risks of this kind of denoising.

 

If we consider the 10-hour integration to show the real objects in the inset, both of the attempts at noise reduction on a single image get it wrong, with Deep Prior getting it more wrong than TGVDenoise.

 

Here's how I am evaluating the claim:

 

In the inset, we see 3 bright stars.  All 3 of them are represented in each of the insets.  In the 10 hour integration, we see two dimmer stars between the bright stars on the left.  TGVDenoise preserves some of the upper of the two dim stars.  Deep Prior obliterates them both.  In the 10 hour integration, we see two dim objects at the 10 o'clock position from the right most bright star (the lower one looks stellar, and the upper one may be diffuse).  TGVDenoise preserves a hint of the diffuse dim object and invents a hint of a non-existent object just above the diffuse one.  Deep Prior obliterates them both.  Finally, near the left of the inset, there is something to the left of the upper bright star.  In the 10 hour inset, we do not see it, which suggests that it may have been a cosmic ray hit or an imaging artifact.  Both TGVDenoise and Deep Prior preserve it.  To be fair, the spot was clearly in the data from the raw 1200 second shot.  It would take a few images to determine that it's not "real", and neither denoise process had that.

 

I would second John's suggestion of looking at MureDenoise.  I've been using it for a couple of months now, and am really impressed with it.  Given the correct parameters and applied with some restraint, I've not seen it clobber real objects or create artifacts.  I have seen it reveal structures in the calibrated and integrated frame that were present, but hard to discern, in the original.


  • ccs_hello and Jon Rista like this

#12 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 16 April 2019 - 02:54 PM

Hi all and thanks for your feedback !

 

Yes, the computing power is a real problem. I use four Tesla P4 GPUs on Google Cloud. I highly recommend opening a virtual machine on Google Cloud, as I did, if you want to test it.

 

Yes, I use MureDenoise. Like TGVDenoise or MLT, it's below the result you can get with Deep Prior. I will post a result on a Thor's Helmet Ha stack soon. Unfortunately, I'm only able to run it on 1500x1500 px 8-bit TIFFs for now because of memory problems. I'm working on it.



#13 jerahian

jerahian

    Mariner 2

  • *****
  • Posts: 257
  • Joined: 02 Aug 2018
  • Loc: Maine

Posted 16 April 2019 - 10:34 PM

Hey guys, I have not compared this preview to MureDenoise yet, but I did want to follow up and show you the output from the deep-image-prior script running on a noisy M101 preview subframe.  This preview is from an image of mine with 3.8 hours of HaLRGB data integration, which I chose due to its high noise.  I kept the preview to 512x512 px due to the herculean effort it takes a fairly high end laptop to run this neural net.

 

So, I ran this for 3000 iterations, leaving the number of iterations the same as for the sample provided by the authors (here is their project page, btw).  The final output image and the original input image are here (final left, orig right):

 

M101 HaLRGB 3000

 

Now, to show you the progress of the neural net search by 100 iterations, here is an animated GIF showing the progress on top and the composited image below (the bottom image may not look like it's changing after the first few, but it is):

 

M101 deep image prior
 
Each of the top and bottom images in the 30 frame GIF are exactly as they were output by the net.  I did nothing more than animate them for us all to see the progression.
 
Do what you wish with this information.  I wanted to see this through due to my curiosity.  All 3000 iterations took about 9 hours to run!
 
Enjoy.
 
 

 


  • BenAllen likes this

#14 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 16 April 2019 - 10:54 PM

Hey guys, I have not compared this preview to MureDenoise yet, but I did want to follow up and show you the output from the deep-image-prior script running on a noisy M101 preview subframe. This preview is from an image of mine with 3.8 hours of HaLRGB data integration, which I chose due to its high noise. I kept the preview to 512x512 px due to the herculean effort it takes a fairly high end laptop to run this neural net.


Do what you wish with this information. I wanted to see this through due to my curiosity. All 3000 iterations took about 9 hours to run!

Enjoy.

Just open a Google Cloud virtual machine. It's free!
It takes me 1 h for 12,000 iterations.

Edited by BenAllen, 16 April 2019 - 10:54 PM.


#15 jerahian

jerahian

    Mariner 2

  • *****
  • Posts: 257
  • Joined: 02 Aug 2018
  • Loc: Maine

Posted 16 April 2019 - 11:05 PM

Just open a Google Cloud virtual machine. It's free!
It takes me 1 h for 12,000 iterations.

Hah, I noticed you mentioned to instantiate a virtual machine on Google Cloud in your previous post, but I did not know it was free!!  That changes everything, and I will have to investigate!  Thanks for the pro tip waytogo.gif



#16 Francois

Francois

    Explorer 1

  • -----
  • Posts: 99
  • Joined: 09 Jun 2007
  • Loc: Montreal

Posted 17 April 2019 - 06:42 AM

Our limiting factor in astrophotography is definitely noise. It can only be reduced by stacking exposures or by filtering during processing. There are two ways to reduce the random noise level: a vertical one, stacking several pictures of the same object; and spatial ones, which are different kinds of clever smoothing. I think machine learning can open up a new kind of noise reduction.

 

The trend in denoising images with machine learning is to train a GAN on a large set of examples. That approach can be very efficient for smartphone pictures, but it is very dangerous for scientific data, as it can introduce structures from another dataset.

Not to rain too much on everyone's parade, but noise and convolution are information-loss processes. Any algorithm that offers universal improvements in noise or universal deconvolution violates conservation of information. Hence the "very dangerous for scientific data".

 

If the goal is to obtain aesthetic filtering for astroimaging in particular, that's perfectly feasible though. It should just be clear that it is only aesthetic.



#17 Tayson82

Tayson82

    Lift Off

  • -----
  • Posts: 24
  • Joined: 17 May 2018
  • Loc: Poland / Wolomin

Posted 17 April 2019 - 07:35 AM

Is it for Windows?



#18 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 17 April 2019 - 08:26 AM

Not to rain too much on everyone's parade, but noise and convolution are information-loss processes. Any algorithm that offers universal improvements in noise or universal deconvolution violates conservation of information. Hence the "very dangerous for scientific data".

If the goal is to obtain aesthetic filtering for astroimaging in particular, that's perfectly feasible though. It should just be clear that it is only aesthetic.


I think you've got it wrong. This is nothing like any other noise attenuation.
Of course some signal is affected, but we should reach an SNR way better than with any other conventional denoising.

Moreover, this method does not use a pretrained network or an image database, so there is no risk of introducing data from another dataset.

Edited by BenAllen, 17 April 2019 - 08:36 AM.


#19 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 17 April 2019 - 08:28 AM

Is it for windows?

It’s a Python script, and it requires a high-performance GPU. I highly recommend running it on a Google Cloud virtual machine, on Floyd, or on AWS.



#20 BenAllen

BenAllen

    Lift Off

  • -----
  • topic starter
  • Posts: 8
  • Joined: 11 Apr 2019
  • Loc: Houston, TX

Posted 17 April 2019 - 10:12 AM

If you want to try it on Google Cloud:

- Open an account on Google Cloud Platform.

- Create a virtual machine with 1 GPU, a K80 or a P4. Do not use a V100; Deep Prior does not converge on it for some reason. You may want to request authorization for more GPUs; to do so, go to App Engine > Quota.

- Use "Deep Learning Image: PyTorch 1.0.0 and fastai m23 CUDA 10.0" as the boot disk.

- Allow HTTP and HTTPS traffic.

- Create the instance.

- Start it and open the SSH console.

- In the console (the installer has to be downloaded first; the URL below is the standard Anaconda archive):

sudo apt-get upgrade
wget https://repo.anaconda.com/archive/Anaconda2-2018.12-Linux-x86_64.sh
chmod +x Anaconda2-2018.12-Linux-x86_64.sh
./Anaconda2-2018.12-Linux-x86_64.sh
export PATH=~/anaconda2/bin:$PATH
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

 

- Then open a Jupyter notebook with:

jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &

- Copy-paste the external IP into your browser with :8888/ at the end.

- Download Deep Prior into your notebook.

Hope this helps.



#21 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4162
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 17 April 2019 - 01:35 PM

Of course some signal is affected, but we should reach an SNR way better than with any other conventional denoising.

I think that this kind of denoising has its place, and I think that it's going to get better over time.

 

But I'm not sure that you are presenting a balanced view here.  I already pointed out where it's destroying visible structures in the single 1200s exposure (that are confirmed to be real in the integrated stack).  If the goal is simply to reach a target SNR, you could do that with a simple PixelMath expression - but you probably wouldn't like the result.
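The point that SNR alone is a poor target is easy to demonstrate: crude smoothing "improves" a global SNR measure while flattening exactly the point sources we care about. A NumPy sketch on synthetic 1-D data (not PixelMath, and the SNR definition here is deliberately naive):

```python
import numpy as np

rng = np.random.default_rng(1)
true = np.zeros(10_000)
true[::100] = 50.0                       # sparse, bright "stars"
noisy = true + rng.normal(0.0, 5.0, true.size)

def crude_snr(x):
    # Deliberately naive global SNR: mean over standard deviation.
    return x.mean() / x.std()

# Heavy box smoothing: the crude SNR goes UP, because the variance from
# both the noise AND the stars is averaged away.
kernel = np.ones(101) / 101
smoothed = np.convolve(noisy, kernel, mode="same")

print(crude_snr(noisy) < crude_snr(smoothed))   # smoothing "wins" on SNR
print(smoothed.max() < noisy.max())             # but the star peaks are crushed
</```

Any denoiser can be pushed along this trade-off; the question is how much real structure it sacrifices per unit of noise removed.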

 

There has to be a balance between the benefit of noise reduction against the damage to the data.


  • Jon Rista likes this

#22 Francois

Francois

    Explorer 1

  • -----
  • Posts: 99
  • Joined: 09 Jun 2007
  • Loc: Montreal

Posted 17 April 2019 - 04:12 PM

Of course some signal is affected, but we should reach an SNR way better than with any other conventional denoising.

I think you got me wrong. I'm stating that any universal denoising violates fundamental information theory. Doesn't matter how it's done.



#23 ccs_hello

ccs_hello

    Voyager 1

  • *****
  • Posts: 10196
  • Joined: 03 Jul 2004

Posted 19 April 2019 - 08:02 AM

To benefit the community, folks who have played in this field ought to tell us what's going on behind the curtain.

(Not which tools are used, how much computational effort is needed, or which site/backend to go to...)

 

In simple terms: other than the multiple samples the shooter has collected himself/herself,

what else is at play here (samples of other origins, reference models of what stars/noise should look like, or other knowledge/inferences that have been introduced)?

 

If there is no outside dependency, why not just state/declare that a (new) method has been found such that,

based on zero knowledge obtained/injected from elsewhere, new knowledge can be extracted (such as a noise model built by self-referencing, some type of refinement mechanism not previously explored enough, or "some new tech field")?

 

Or is it just a method to aesthetically "prettify the image" while dropping information (as Francois had asked)?

 

 

A clarification will be really helpful.


  • Francois likes this

#24 Charlie B

Charlie B

    Apollo

  • -----
  • Posts: 1417
  • Joined: 22 Mar 2008
  • Loc: Sterling, Virginia

Posted 19 April 2019 - 08:22 AM

I'm interested in how well the super-resolution would work as compared with the drizzle algorithm.  

 

Regards,

 

Charlie B



