I've been able to get StarNet++ to run on Google Colab. I haven't learned enough Python and TensorFlow to correctly replace the TensorFlow placeholders with the equivalent eager-execution code, but it works. I'm also not sure how recent the Python version of StarNet++ is.
Fundamentally, it involved (in Google Colab):
- invoking tf.compat.v1.disable_eager_execution() in transform()
- replacing references to toimage in transform() with the appropriate Image.fromarray call; it took me a while to figure out that you have to manually cast the arrays to uint8 for the result images to come out right, but once I did, everything seemed to work without problems
- cutting down the progress output so it only prints every 10%, since a carriage return without a newline doesn't seem to work in the Colab console
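The changes above can be sketched roughly as follows. The variable names (array_to_image, print_progress, result) are my own illustration, not StarNet++'s actual identifiers, and I've guarded the TensorFlow import so the uint8/PIL part runs even without TF installed:

```python
import numpy as np
from PIL import Image

try:
    import tensorflow as tf
    # StarNet++'s graph code uses tf.placeholder, which needs TF1-style
    # graph mode, so disable eager execution before building the graph.
    tf.compat.v1.disable_eager_execution()
except ImportError:
    pass  # TF not installed here; the conversion below still demonstrates the fix

def array_to_image(arr):
    # scipy.misc.toimage was removed from SciPy; Image.fromarray is the
    # replacement. Without the explicit uint8 cast the result images
    # come out wrong.
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def print_progress(done, total, _state={"last": -1}):
    # Colab's console doesn't handle '\r' well, so print a plain line
    # only when crossing a 10% boundary.
    decile = done * 10 // total
    if decile > _state["last"]:
        _state["last"] = decile
        print(f"{decile * 10}% done")

# Example: a dummy float "result" array in [0, 255)
result = np.random.rand(64, 64, 3) * 255.0
img = array_to_image(result)
print(img.mode, img.size)  # RGB (64, 64)
```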
I still get lots of deprecation warnings, but it seems to function correctly.
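If the deprecation warnings get noisy, they can be quieted without affecting correctness. This is just a convenience sketch (again with the TF import guarded), not something StarNet++ itself needs:

```python
import logging
import os

# Hide INFO/WARNING messages from TensorFlow's C++ side; must be set
# before TensorFlow is imported to take effect there.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

try:
    import tensorflow as tf
    # Raise the Python-side logger thresholds so deprecation chatter
    # from the tf.compat.v1 calls is suppressed.
    tf.get_logger().setLevel(logging.ERROR)
    tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
except ImportError:
    pass  # TF not installed here
```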
Performance is pretty decent: running on Google's Tesla K80, a 4656 x 3520 image completes in 0.7 minutes and a 9312 x 7010 image takes 2.1 minutes. I'm not sure what this normally takes folks on their home computers.
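As a back-of-the-envelope check on those two timings, throughput in megapixels per minute works out like this (the larger image is actually a bit more efficient per pixel, presumably because fixed per-run overhead is amortized):

```python
# Megapixel throughput from the two K80 timings quoted above.
mp_small = 4656 * 3520 / 1e6   # ~16.4 MP, finished in 0.7 min
mp_large = 9312 * 7010 / 1e6   # ~65.3 MP, finished in 2.1 min

print(round(mp_small / 0.7, 1))  # ~23.4 MP/min
print(round(mp_large / 2.1, 1))  # ~31.1 MP/min
```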
This is only for the transform function; I think I need to understand the eager-execution and state-saving code a little better before I can implement training.
Maybe this will be of interest to someone if performance on your home machine is slow, or you have an ancient i7 like mine that doesn't support AVX. This will do for me for now, until I can figure out how to recompile the Windows bits that still appear to depend on AVX: even with the TensorFlow GPU (CUDA) DLL, I still get application errors that seem to be caused by AVX instructions trying to run on my desktop, and running the outer portion in the development simulator caused some strange behavior. FP32 performance on the GTX 1080 Ti should be about double that of the K80.
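If you want to check whether your own CPU is the problem, here's a rough Linux-only sketch (it works in Colab, where /proc/cpuinfo exists; on Windows you'd need a different approach) that looks for the avx flag the prebuilt TensorFlow wheels started requiring:

```python
def cpu_has_avx():
    # Linux exposes CPU feature flags in /proc/cpuinfo; parse the
    # "flags" line and look for "avx" among the advertised features.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split(":", 1)[1].split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print(cpu_has_avx())
```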