Greetings!
I was able to successfully install CUDA and the GPU-enabled TensorFlow libraries, and to run StarNet++ with them under Linux. I found a similar set of steps for Windows from Darkarchon here: https://www.darkskie...t-starnet-cuda/ . I followed those general ideas and adapted them to Linux.
Disclosures
1) This is not a list of instructions for how you could successfully do the same.
2) If you follow these steps your own installation may become irreparably damaged! I do not take any responsibility. Please use your own judgment if you decide to do the same.
3) I cannot help with installation questions. I am posting this as a journal and as a proof of concept since I wasn't able to find documentation through a web search.
My system
AMD 3900X, 32 GB
Nvidia GeForce GTX 1660 Super, Driver Version: 510.47.03, CUDA Version: 11.6 [after cuda installation]
Linux Mint 20.3 with the Xfce desktop. This distribution is based on Ubuntu 20.04.
Nvidia's CUDA-enabled GPUs are listed here: https://developer.nvidia.com/cuda-gpus but this list is incomplete. For example, it does not list the GeForce GTX 1660 Super. The 1660 Super has CUDA compute capability 7.5. According to Darkarchon, StarNet needs compute capability 3.5 or higher. The compute capability of a card can also be looked up at:
https://en.wikipedia...rocessing_units [compute capability is listed under supported APIs]
https://www.techpowerup.com/gpu-specs/ [compute capability is listed under Graphics Features]
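If you are not sure exactly which Nvidia card is in the machine, lspci will report it even before the proprietary driver is installed, and the model can then be looked up in the tables above. This check is my own addition, not part of Darkarchon's steps.
$ lspci | grep -i nvidia    # prints the VGA controller line with the GPU model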
Installed CUDA
The CUDA toolkit is here: https://developer.nv...toolkit-archive . The documentation is a very long read. Here are the steps that I took.
The CUDA install requires the build-essential package as a dependency. However, build-essential would not install with the libc6 version (2.31-0ubuntu9.3) that I initially had on my system. On March 3, 2022, after libc6 updated to 2.31-0ubuntu9.7, build-essential and everything else installed smoothly.
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install build-essential
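As a sanity check before moving on (my own addition, not part of the original steps), you can confirm that the compiler from build-essential is present and see which libc6 version is installed:
$ gcc --version                  # build-essential pulls in gcc
$ dpkg -s libc6 | grep Version   # should show 2.31-0ubuntu9.7 or later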
I then went to https://developer.nv...toolkit-archive and clicked the following links in turn to get the quick install instructions.
CUDA Toolkit 11.6.1 -> Linux -> x86_64 -> Ubuntu -> 20.04 -> deb (local).
Installation Instructions for my distribution:
$ wget https://developer.do...-ubuntu2004.pin
$ sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ wget https://developer.do....03-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu2004-11-6-local_11.6.1-510.47.03-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-ubuntu2004-11-6-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get -y install cuda
$ sudo apt install libcudnn8
Edit: CUDA components are installed in /usr/local/
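To confirm the install, nvidia-smi (which comes with the bundled driver) should now report the driver and CUDA version shown in my system details above. On my understanding of the deb install, the toolkit compiler lands under /usr/local/cuda and is not on the PATH by default, so I give its full path here. These checks are my own additions:
$ nvidia-smi                            # driver version and CUDA runtime version
$ /usr/local/cuda/bin/nvcc --version    # CUDA toolkit compiler version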
Installed libtensorflow-gpu
Found it here: https://www.tensorfl.../install/lang_c and followed the install instructions.
$ sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_64-2.7.0.tar.gz
$ sudo ldconfig /usr/local/lib
Running ldconfig ensures that programs will find the TensorFlow libraries from the /usr/local install.
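To verify that the dynamic linker now knows about the GPU build (a check of my own):
$ ldconfig -p | grep libtensorflow    # should list libtensorflow.so.2 and libtensorflow_framework.so.2 under /usr/local/lib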
Moved the bundled libtensorflow libraries to a temporary location so that the GPU-enabled libraries in /usr/local are picked up. Otherwise the CPU versions are used.
For example, in the directory that contains the StarNet++ command-line files [such as StarNetv2CLI_linux/] I did the following.
$ mkdir temp
$ mv libtensorflow* ./temp/
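To check which copy the starnet++ binary will actually load after the move (my own check, not part of the original steps), ldd is handy; the reported paths should now point at /usr/local/lib rather than the bundled copies:
$ ldd ./starnet++ | grep libtensorflow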
Did the same for PixInsight: its TensorFlow libraries are in /opt/PixInsight/bin/lib
$ cd /opt/PixInsight/bin/lib
$ sudo mkdir /opt/temp
$ sudo mv libtensorflow* /opt/temp
Be warned here! You will need super user privileges to do this, and you could damage your PixInsight installation if you are not careful.
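If anything misbehaves, the step above is easy to reverse by moving the bundled libraries back:
$ sudo mv /opt/temp/libtensorflow* /opt/PixInsight/bin/lib/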
Set an environment variable
$ export TF_FORCE_GPU_ALLOW_GROWTH="true"
I put this in my ~/.bashrc so that it is set in every session (see the snippet after the quote below). Why is this needed? See https://www.tensorflow.org/guide/gpu
Quote:
"In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two methods to control this.
The first option is to turn on memory growth by calling tf.config.experimental.set_memory_growth, which attempts to allocate only as much GPU memory as needed for the runtime allocations: it starts out allocating very little memory, and as the program gets run and more GPU memory is needed, the GPU memory region is extended for the TensorFlow process.
Another way to enable this option is to set the environmental variable TF_FORCE_GPU_ALLOW_GROWTH to true. This configuration is platform specific."
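For completeness, this is one way to add the variable to ~/.bashrc as described above and pick it up in the current session:
$ echo 'export TF_FORCE_GPU_ALLOW_GROWTH="true"' >> ~/.bashrc
$ source ~/.bashrc    # or simply open a new terminal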
Error message
There was an error message in the terminal running starnet++ from the command line:
"I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero."
This does not seem to matter. It can be dealt with by running the following command in the terminal as described here: https://github.com/t...ow/issues/42738
$ for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done
This only needed to be done once, and I was asked for my sudo password.
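To confirm that the workaround took effect (my own check), the numa_node entries should now read 0 instead of -1:
$ cat /sys/bus/pci/devices/*/numa_node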
Installed nvtop to monitor gpu usage
$ sudo apt install nvtop
nvtop is run from the command line in a terminal. When starnet++ is running, an entry is shown under Type as "Compute," whereas normal display tasks are listed as "Graphic." The line graphs show memory and GPU usage; these should go up when starnet++ is running.
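If nvtop is not available, nvidia-smi (installed along with the driver) gives a similar view; while starnet++ is running it should appear in the process list with type C (compute):
$ nvidia-smi              # one-shot snapshot of GPU load, memory, and processes
$ watch -n 2 nvidia-smi   # refresh every 2 seconds; quit with Ctrl-C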
Results
From the command line I ran starnet++ as an argument to the time command to get a time estimate:
$ time ./starnet++ ~/PI/NGC700_Drzl2_Ha_nonlinear.tif fileout.tif 64 [Yes, there's a typo in the input filename, should have been NGC7000]
Reading input image... Done!
Bits per sample: 16
Samples per pixel: 1
Height: 5074
Width: 6724
Done!
Output of the time command:
real 5m35.637s <----- This is good!
user 5m39.473s
sys 0m1.908s
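For reference, my understanding of the command-line arguments used above is input file, output file, and an optional tile stride (64 in my run); treat this as an assumption and check the README that ships with the StarNet++ CLI. The filenames below are hypothetical.
$ ./starnet++ input.tif starless.tif 64    # input, output, stride [hypothetical filenames]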
Also works with starnetGUI. [https://www.cloudyni...6#entry11695043]
Attached picture is luminance, not the Ha file whose stats I have above.
Many thanks to Nikita Misiura (StarNet++), JJ Teoh (StarNet GUI) and Darkarchon (instructions for Windows).
Cheers!
Ajay