Cloud Detection for All-Sky cameras.

DIY Equipment Observatory Astrophotography
24 replies to this topic

#1 ignacio_db

ignacio_db

    Vostok 1

  • -----
  • topic starter
  • Posts: 190
  • Joined: 30 Oct 2009
  • Loc: Buenos Aires, Argentina

Posted 11 May 2022 - 09:12 AM

Hi,

 

For those of you who might be interested, I recently developed an app (SkyCondition) that classifies the sky condition by processing images streamed from an all-sky camera. It is an ML/AI approach, and it is quite effective.

 

In my observatory I use an indi-allsky system as the image server, but the app can be integrated with any all-sky system that serves the latest image to a given file location. The output is a text file, updated periodically, with the sky condition classified into four classes (Clear, Cloudy, Covered, and Rainy) and a session recommendation (GO, PAUSE, or STOP) based on some user settings.

 

The text file can be read and parsed by a Safety Monitor of sorts. I have integrated mine with NINA, using the ASCOM Generic File Safety Monitor Driver, and it works very nicely.
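For illustration, a hypothetical consumer of such a file could be as simple as the sketch below (the file name and line format here are made up for the example; see the project page for the actual output format):

# Hypothetical reader for the status file written by the app.
# The file name and format are illustrative only -- adjust to your setup.
from pathlib import Path

def is_safe_to_image(status_file="skycondition.txt"):
    # Assume the recommendation (GO / PAUSE / STOP) is the last word in the file.
    recommendation = Path(status_file).read_text().strip().split()[-1].upper()
    return recommendation == "GO"

print("Safe to image:", is_safe_to_image())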

 

This is my first version, and I have plans for further improvements. It comprises an .exe file that runs on Windows, and an .h5 file that contains the neural net data. Both are provided in the link below. Note that the .exe file is quite large, as it bundles all of the dependencies so that it can run in any installation.

 

Please visit http://www.pampaskie...tion-Detection  for more details, and a few examples (including timelapse labeled videos that demonstrate the performance of the approach).

 

Note this is a work-in-progress, but already very usable (at your own risk, of course).

 

Ignacio 


  • gordtulloch, Raginar, kzar and 2 others like this

#2 Raginar

Raginar

    Voyager 1

  • *****
  • Posts: 10,874
  • Joined: 19 Oct 2010
  • Loc: Louisiana

Posted 11 May 2022 - 11:25 AM

Hey Ignacio,

 

    That's super cool!  I'll check it out. Is there a way to run this directly on a raspi vice having a windows computer in the loop?

 

Chris


Edited by Raginar, 11 May 2022 - 11:26 AM.


#3 ignacio_db

ignacio_db

    Vostok 1

  • -----
  • topic starter
  • Posts: 190
  • Joined: 30 Oct 2009
  • Loc: Buenos Aires, Argentina

Posted 11 May 2022 - 11:39 AM

Hi Chris. Thanks. Yes, it is possible, and it is in my plans. For that I need to bundle the app in a Raspbian/Python environment. I'll post an update in this thread when it's available.

 

Ignacio 


Edited by ignacio_db, 11 May 2022 - 11:41 AM.

  • Raginar likes this

#4 Raginar

Raginar

    Voyager 1

  • *****
  • Posts: 10,874
  • Joined: 19 Oct 2010
  • Loc: Louisiana

Posted 11 May 2022 - 12:13 PM

I'm glad you resurrected the concept.  :)



#5 RossW

RossW

    Ranger 4

  • -----
  • Posts: 350
  • Joined: 15 Jun 2018
  • Loc: Lake Biwa, Japan

Posted 11 May 2022 - 08:24 PM

Hello Ignacio,

 

I'm interested to hear how your neural network handles the moon? What is the false positive rate, and does that rate vary with the phases of the moon? I would have thought it would be a major hurdle, but perhaps not? I know that my All Sky camera images are quite washed out whenever the moon is out, and although you can deal with that by reducing the exposure, by doing so the camera is no longer an "all sky" camera (it can only see the moon, no stars at all).

 

Cheers,

 

Ross



#6 nthoward41

nthoward41

    Sputnik

  • -----
  • Posts: 31
  • Joined: 09 Sep 2020

Posted 11 May 2022 - 09:28 PM

Very cool software. Thanks for sharing. The website suggests that the training data might not be applicable to dark skies, since the model was trained under Bortle 8 skies, but that the model can be retrained. How does the user retrain with a set of dark-sky images? I am under Bortle 3 and would really be interested in using it.

#7 ignacio_db

ignacio_db

    Vostok 1

  • -----
  • topic starter
  • Posts: 190
  • Joined: 30 Oct 2009
  • Loc: Buenos Aires, Argentina

Posted 12 May 2022 - 07:59 AM

Hello Ignacio,

 

I'm interested to hear how your neural network handles the moon? What is the false positive rate, and does that rate vary with the phases of the moon? I would have thought it would be a major hurdle, but perhaps not? I know that my All Sky camera images are quite washed out whenever the moon is out, and although you can deal with that by reducing the exposure, by doing so the camera is no longer an "all sky" camera (it can only see the moon, no stars at all).

 

Cheers,

 

Ross

Hi Ross, I purposely included a few training examples with the moon in several phases (not full, though), and so far it has worked perfectly. We'll see as the moon approaches its full phase. In my setup, the moon oversaturates the image, and that's OK. Bright stars are still visible, though. A key element, I suspect, is that the all-sky camera dome has to be clean, so that when the moon shines on it, it does not create weird reflections that could be interpreted as clouds.

 

If you have a timelapse of your allsky, I can pass it through the detector and tag the output to see how it works.

 

Ignacio


Edited by ignacio_db, 12 May 2022 - 08:38 AM.


#8 ignacio_db

ignacio_db

    Vostok 1

  • -----
  • topic starter
  • Posts: 190
  • Joined: 30 Oct 2009
  • Loc: Buenos Aires, Argentina

Posted 12 May 2022 - 08:08 AM

Very cool software. Thanks for sharing. The website suggests that the training data might not be applicable to dark skies, since the model was trained under Bortle 8 skies, but that the model can be retrained. How does the user retrain with a set of dark-sky images? I am under Bortle 3 and would really be interested in using it.

Hi, I would try it as is; neural nets can be surprising sometimes. Again, if you send me a timelapse from your camera, I can try it (I have another piece of software for that). Otherwise, you can train with your own examples at teachablemachine.withgoogle.com. Make sure to use the same class definitions, and then download the Keras model that comes out after training and use it to replace the one that ships with the app (keras_model.h5). You will have to collect around 100 examples per class, and they have to be as varied as possible within each class (i.e., from different nights).
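As a quick sanity check that the replacement model drops in cleanly, something along these lines should work (this is just a sketch; it assumes the standard 224x224 Teachable Machine export, and the class order must match the one you used in TM):

# Load a retrained keras_model.h5 and print the per-class probabilities.
import numpy as np
from PIL import Image, ImageOps
from tensorflow import keras

CLASSES = ["Clear", "Cloudy", "Covered", "Rainy"]  # keep the same order as in TM

model = keras.models.load_model("keras_model.h5")
img = ImageOps.fit(Image.open("latest.jpg").convert("RGB"), (224, 224))
x = np.asarray(img, dtype=np.float32)[np.newaxis, ...] / 127.0 - 1.0
for name, p in zip(CLASSES, model.predict(x)[0]):
    print(f"{name}: {p:.3f}")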

 

Ignacio


  • Raginar likes this

#9 ignacio_db

ignacio_db

    Vostok 1

  • -----
  • topic starter
  • Posts: 190
  • Joined: 30 Oct 2009
  • Loc: Buenos Aires, Argentina

Posted 12 May 2022 - 12:48 PM

Hello Ignacio,

 

I'm interested to hear how your neural network handles the moon? What is the false positive rate, and does that rate vary with the phases of the moon? I would have thought it would be a major hurdle, but perhaps not? I know that my All Sky camera images are quite washed out whenever the moon is out, and although you can deal with that by reducing the exposure, by doing so the camera is no longer an "all sky" camera (it can only see the moon, no stars at all).

 

Cheers,

 

Ross

To follow up on the moon question, I tried the classifier on last night's timelapse, mostly clear but with a quarter moon. It works very well. See the labeled video here: https://drive.google...iew?usp=sharing

 

Note that the Clear class does not imply no clouds at all, but rather a condition where one would continue imaging, maybe with clouds near the horizon or with very thin high clouds passing by. 

 

Ignacio


Edited by ignacio_db, 12 May 2022 - 01:08 PM.

  • Raginar likes this

#10 kzar

kzar

    Messenger

  • -----
  • Posts: 496
  • Joined: 18 Mar 2021
  • Loc: Switzerland

Posted 16 June 2022 - 02:26 PM

That's an excellent initiative - I was looking for months for such a solution - really cool.

 

I would love to have such a device in my remote observatory.

 

Regards.



#11 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 13 May 2023 - 10:21 PM

Hi Ignacio,

Nice project, I was just about to tackle this when I found your post. Can you publish the code to github so I can attempt to use it directly on my RPi? 

 

Best



#12 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 15 May 2023 - 02:20 PM

Hi Ignacio,

Nice project, I was just about to tackle this when I found your post. Can you publish the code to github so I can attempt to use it directly on my RPi? 

 

Best

+1 on this, I'd like to install on either my RPi or Linux PC



#13 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 15 May 2023 - 09:44 PM

Actually, I think I can get to it myself. There are plenty of tutorials on Teachable Machine for Python or Node, so I'm going to start there and see how your model performs with my Bortle 7 sky. I'll post my code once done.



#14 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 16 May 2023 - 03:44 AM

OK, got it working on the RPi 3B+. It was a beast to figure out how to get TensorFlow installed on 32-bit Bullseye. The main issue was to install under a virtual environment with Python 3.7.0, then install the correct version of the TensorFlow wheel from here. Here was my command (run under the virtual environment):

pip install --upgrade https://github.com/bitsy-ai/tensorflow-arm-bin/releases/download/v2.4.0/tensorflow-2.4.0-cp37-none-linux_armv7l.whl

Then install Pillow:

pip install Pillow

And here's some boilerplate commented code I found somewhere that works:

import tensorflow.keras
from PIL import Image, ImageOps
import numpy as np
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = tensorflow.keras.models.load_model('keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('latest.jpg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
#turn the image into a numpy array
image_array = np.asarray(image)
# display the resized image
image.show()
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)

I ran it with this line to ignore warnings and errors:

python detect.py 2>&-

Two problems that I've had. #1 is that the output gives what looks like prediction probabilities without labels:

 

[[0.9874187  0.00109448 0.01120752 0.00027925]]

@ignacio_db can you share your labels file so I can match these weights up with your model?

 

Problem #2 is that it takes 30-40 seconds to run on a single sky image. Even after pre-shrinking the file it takes that long with the newest version of TensorFlow. This is with indi-allsky on and off; it doesn't seem to matter. If I can't optimize it any better I'll probably buy a Coral USB Accelerator to outsource this task; I'm guessing it'll be lightning fast.



#15 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 16 May 2023 - 11:32 AM

I got everything up and running on my Ubuntu Linux 20.04 VM at work using:

 

pip install --upgrade pip

pip install tensorflow

pip install Pillow

 

After downloading the model file I ran into some issues with TensorFlow crashing because the AVX and AVX2 instructions weren't enabled on the VM. I solved this with the following from the Windows command line (run as Administrator), which enables AVX on the VM and disables Windows Hyper-V on the host (Hyper-V causes problems):

 

VBoxManage setextradata "Ubuntu Dev" VBoxInternal/CPUM/IsaExts/AVX 1

VBoxManage setextradata "Ubuntu Dev" VBoxInternal/CPUM/IsaExts/AVX2 1

bcdedit /set hypervisorlaunchtype off

DISM /Online /Disable-Feature:Microsoft-Hyper-V

 

Once I did that I got a result from utnuc's code in about 5 s, so pretty fast on a single processor with 8 GB RAM. It should scream on my Core i7 16 GB Ubuntu box in the observatory. I need to prepare some test data to give it a workout; my current sky is very smoky from all the fires in Alberta, so I'll have to go back through the images to get a range of test data.


Edited by gordtulloch, 16 May 2023 - 11:39 AM.

  • utnuc likes this

#16 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 16 May 2023 - 02:45 PM

Nice! 

 

I got everything up and running on my Ubuntu Linux 20.04 VM at work using:

 

pip install --upgrade pip

pip install tensorflow

pip install Pillow

 

After downloading the model file I ran into some issues with TensorFlow crashing because the AVX and AVX2 instructions weren't enabled on the VM. I solved this with the following from the Windows command line (run as Administrator), which enables AVX on the VM and disables Windows Hyper-V on the host (Hyper-V causes problems):

 

VBoxManage setextradata "Ubuntu Dev" VBoxInternal/CPUM/IsaExts/AVX 1

VBoxManage setextradata "Ubuntu Dev" VBoxInternal/CPUM/IsaExts/AVX2 1

bcdedit /set hypervisorlaunchtype off

DISM /Online /Disable-Feature:Microsoft-Hyper-V

 

Once I did that I got a result from utnuc's code in about 5 s, so pretty fast on a single processor with 8 GB RAM. It should scream on my Core i7 16 GB Ubuntu box in the observatory. I need to prepare some test data to give it a workout; my current sky is very smoky from all the fires in Alberta, so I'll have to go back through the images to get a range of test data.

 

Nice! One thing that I found was that most of the runtime is spent loading the model:

model = tensorflow.keras.models.load_model('keras_model.h5')

So once you've loaded the model you can loop over a bunch of images without much penalty (7s on RPI3B+  and <1s on my 16 core Xeon).
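A minimal sketch of that pattern, reusing the preprocessing from the boilerplate above (the file names and the 60 s polling interval are just placeholders):

# Load the model once, then classify the current latest.jpg on a fixed interval.
import time
import numpy as np
import tensorflow.keras
from PIL import Image, ImageOps

model = tensorflow.keras.models.load_model('keras_model.h5')  # slow -- do it once

while True:
    image = ImageOps.fit(Image.open('latest.jpg').convert('RGB'), (224, 224))
    data = (np.asarray(image, dtype=np.float32) / 127.0 - 1.0)[np.newaxis, ...]
    prediction = model.predict(data)[0]
    print(prediction)  # or write it out for a safety monitor to pick up
    time.sleep(60)     # placeholder polling interval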

 

Other ideas to speed this up:

  • I did stumble across this little gem; it's a Python socket server that keeps the model loaded and waits for a sample image.
  • There's also GPU support if your card has CUDA cores (NVIDIA only I think). 
  • At least on my RPi it's more performant to downsize latest.jpg, upload to my server, run the Keras model there and return a result than to run it locally.
  • I've read that tensorflow-lite performance is better on embedded systems etc., so I'm going to try to train a model for that format.
  • I may just break down and buy a Coral USB accelerator to improve reliability (~$100)

  • lambermo likes this

#17 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 17 May 2023 - 10:05 AM

 

[[0.9874187  0.00109448 0.01120752 0.00027925]]

Do we have any idea how to interpret these results? Running a few of my images gives results that don't make any sense, but there may be issues with my images...



#18 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 23 May 2023 - 09:55 AM

I installed the Windows version of the software on my Ubuntu miniPC and it works just fine under WINE. I have it running right now and have indi-allsky using the resulting file as the alternate text for images so I can see how it's doing with my sky.

 

One thing I had to do was move a script I have on my web server (it extracts the latest image for posting on my site) over to the local server, to create a latest.jpg file for Ignacio's program to look at; you can find the PHP script in the indi-allsky/examples folder. I just changed the last line so it un-jsonizes the output and returns only the filename, and embedded that in a crontab entry to copy the file extracted from the database to latest.jpg:

 

* * * * * cp /var/www/html/allsky/`php /home/ubuntu/indi-allsky/makelatest.php` /home/gtulloch/CloudDetect/latest.jpg

 

That unfortunately doesn't help people with RPis, since the ARM processor won't run amd64 binaries under WINE, but it worked for me until we get a native Python version. I'm not totally enamoured with the decision of whether the cloud cover is bad enough to close the roof being made inside the cloud detection software, but we'll see how it works in practice. It would be nicer if the software just returned a cloud percentage so the decision could be fine-tuned in scripts.


Edited by gordtulloch, 23 May 2023 - 10:10 AM.


#19 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 24 May 2023 - 07:15 PM

Well I've tackled the slow prediction speed by moving the tensorflow model and python script to my server. I transfer the latest allsky pic and run the script remotely to get the result:

scp /var/www/html/indi-allsky/images/latest.jpg ben@10.0.0.98:/tmp/latest.jpg
ssh ben@10.0.0.98 'python3 ~/predict.py'

Note that I have set up passwordless login for ssh. I have this set up as a service on the RPi, which will send me an alert via Pushover when cloudy or rainy.
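The Pushover side is just a single HTTP POST to their messages endpoint; a rough sketch (the token/user values and the class check are placeholders, not my exact script):

# Send a Pushover notification when the sky is not clear.
# Requires the 'requests' package; fill in your own app token and user key.
import requests

def notify(message):
    requests.post(
        "https://api.pushover.net/1/messages.json",
        data={"token": "YOUR_APP_TOKEN",   # placeholder
              "user": "YOUR_USER_KEY",     # placeholder
              "message": message},
        timeout=10,
    )

# e.g. after running the prediction:
# if predicted_class in ("cloudy", "rainy"):
#     notify(f"All-sky camera: sky is {predicted_class}")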

 

I ended up training my own model using Teachable Machine and sorted my sky pics into just 2 categories: clear/cloudy. It works really well and is very sensitive to any degree of clouds. Here's a demo of the predictions from 2 nights ago:

 

https://vimeo.com/ma...1083/a03943d46e

 

I also included a 1 px high graph of the sky temp delta (air temp - IR sky temp) on each model input picture, and I do the same for the prediction. I plan to add a third class, Raining, once I have enough pictures, and might try for a Partly Cloudy class for 0-50% clouds. I did try to add Partly Cloudy on my first attempt and it wasn't that successful... but now that I have a lot more data I think I can make it work. I'll keep you guys posted.
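In case it helps anyone replicate the temperature strip, here is a rough Pillow sketch of the general idea (the scaling, delta range, and data source are simplified placeholders):

# Stamp a 1 px high graph of (air temp - IR sky temp) along the bottom of an image.
# 'deltas' is assumed to be a list of recent temperature deltas in deg C.
from PIL import Image, ImageDraw

def overlay_delta_strip(image_path, deltas, out_path, max_delta=30.0):
    img = Image.open(image_path).convert('RGB')
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for x in range(w):
        d = deltas[int(x * len(deltas) / w)]                 # sample the history across the width
        level = max(0, min(255, int(255 * d / max_delta)))   # bigger delta = brighter pixel
        draw.point((x, h - 1), fill=(level, level, level))
    img.save(out_path)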


Edited by utnuc, 24 May 2023 - 07:17 PM.


#20 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 24 May 2023 - 10:50 PM

Cool - I'll try Teachable Machine; the Windows code works but locks up eventually, so it's not a long-term solution for Linux. Thanks!



#21 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 25 May 2023 - 06:34 AM

Cool - I'll try Teachable Machine; the Windows code works but locks up eventually, so it's not a long-term solution for Linux. Thanks!

One caution when building a model with TM (and any TF model, for that matter): it crops your pics to a square (1:1), so keep that in mind when you sort your input data. Pre-cropping before sorting is really useful.
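If you want to pre-crop a whole folder of captures, a quick sketch (placeholder paths, same center-crop strategy as the prediction code earlier in the thread):

# Center-crop every JPG in a folder to a square before sorting into classes.
from pathlib import Path
from PIL import Image, ImageOps

src, dst = Path('raw_images'), Path('cropped_images')  # placeholder folders
dst.mkdir(exist_ok=True)
for f in src.glob('*.jpg'):
    img = Image.open(f)
    side = min(img.size)
    ImageOps.fit(img, (side, side)).save(dst / f.name)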



#22 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 26 May 2023 - 10:14 AM

I tried building a model in TM without doing anything to my images and the result was quite good, so I'm just modifying the provided code now to feed it from my allsky cam, and I'll monitor the results over the next few clear and smoke-free (!) nights. TM is super easy - I'm writing an article I'll put on my website with all the steps for the less technically inclined.



#23 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 27 May 2023 - 04:16 PM

Here's a basic how-to article. A bit cloudy here, but one brief moment of clear sky was detected properly! I just need to refine the model a bit; more data with boundary conditions, I think.

 

Detecting Clouds with Machine Learning



#24 utnuc

utnuc

    Sputnik

  • -----
  • Posts: 25
  • Joined: 02 Mar 2017
  • Loc: East Tennessee

Posted 28 May 2023 - 12:31 PM

Here's a basic how-to article. A bit cloudy here, but one brief moment of clear sky was detected properly! I just need to refine the model a bit; more data with boundary conditions, I think.

 

Detecting Clouds with Machine Learning

Nice writeup, Gord!

 

I've noticed that sometimes my model flips its prediction between Cloudy and Clear on a totally clear sky. I'm continuing to refine my model by adding in clear-night data, and the predictions are getting more and more accurate. I had a good view of the moon last night so I'll add that in. It seems like the more data, the better the learning. I've got 2-3k training samples per class right now. Re: rain, I had a rainy night a few nights ago, so I collected data from that night to see how it performs. I really don't want any rain false positives, and I have another rain sensor to do this job, so I might end up leaving this data out. My rain is really out of focus on the dome because my camera is right next to it, so the drops aren't nearly as visible as in Ignacio's sample.

 

The last thing I wanted to mention is that I set up my Python script as a Linux service rather than using cron. There's no clear advantage to doing it this way, except maybe logging... but you could just as easily log your Python output using cron with a redirect. My Python script logs the current conditions to a MySQL database, along with air/sky temps and rain state.
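For reference, the service is just a small systemd unit along these lines (the paths, user, and script name are placeholders rather than my exact setup):

# /etc/systemd/system/clouddetect.service
[Unit]
Description=All-sky cloud detection
After=network.target

[Service]
User=pi
WorkingDirectory=/home/pi/clouddetect
ExecStart=/home/pi/clouddetect/venv/bin/python predict.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with "sudo systemctl enable --now clouddetect.service".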


Edited by utnuc, 28 May 2023 - 12:34 PM.


#25 gordtulloch

gordtulloch

    Viking 1

  • *****
  • Posts: 877
  • Joined: 10 Feb 2005
  • Loc: Winnipeg Canada

Posted 28 May 2023 - 05:45 PM

Yeah, I need to add some boundary conditions around haze and smoke (we're getting it bad up here from the fires in Alberta), so I'll continue to refine the model. The nice thing is that TM makes building models so easy it's a snap :)  We'll get some rain this week so I'll be able to experiment with that.

 

My script will run as a service (it just runs in a terminal window at the moment); the cron job extracts the latest image from the indi-allsky database and embeds it in a copy command:

 

* * * * * cp /var/www/html/allsky/`php /home/gtulloch/indi-allsky/makelatest.php` /home/gtulloch/CloudDetect/latest.jpg

 

since indi-allsky doesn't produce a latest.jpg like the TJ code. My script is now saving the clouds.txt file to /usr/local/share/indi/scripts, where my weather watcher script picks it up and combines it with data from my Argent ADS-WS1 weather station to feed indi-weather_watcher data on whether it can open the roof. I just need to integrate the RG-11 output and I'll have the basics of a decent weather system, along with ekos-sentinal and a few other bits and pieces of scripts.



