How to Make TensorFlow 1 Work with Newer GPUs


The official TensorFlow 1.15 only supports CUDA 10.0. As a result, it does not work with GPUs of the Turing generation or later, which have compute capability 7.5 or higher.
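For orientation, the rough mapping from GPU generation to compute capability can be sketched as follows. This is an illustrative summary of my own (not taken from this page); check NVIDIA's CUDA GPU list for your exact card.

```python
# Rough mapping from NVIDIA GPU generation to compute capability
# (illustrative summary; consult NVIDIA's CUDA GPU list for specific cards).
COMPUTE_CAPABILITY = {
    "Turing": ["7.5"],
    "Ampere": ["8.0", "8.6"],
    "Ada Lovelace": ["8.9"],
    "Hopper": ["9.0"],
    "Blackwell family": ["10.0", "10.3", "12.0"],
}

# Capabilities the rebuilt wheels provided below include native kernels for.
SUPPORTED_CCS = {"7.5", "8.0", "8.6", "8.9", "9.0", "10.0", "10.3", "12.0"}

def wheel_covers(capability: str) -> bool:
    """True if the rebuilt wheels ship native code for this capability."""
    return capability in SUPPORTED_CCS
```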

To make TensorFlow 1.15 work with newer GPUs, you can try NVIDIA's pre-built binaries. Those did not work for me, however, so I modified the source code slightly and built it myself against CUDA 12.9. The resulting wheels are available below.

Download

TensorFlow 1.15 built with CUDA 12.9 and compute capabilities 7.5, 8.0, 8.6, 8.9, 9.0, 10.0, 10.3, and 12.0:

tensorflow-1.15.5-cp36-cp36m-linux_x86_64.whl
tensorflow-1.15.5-cp38-cp38-linux_x86_64.whl
tensorflow-1.15.5-cp310-cp310-linux_x86_64.whl
tensorflow-1.15.5-cp312-cp312-linux_x86_64.whl

Downloading and Installing CUDA 12.9

sudo apt install wget libxml2 gcc g++ xz-utils patch
wget https://developer.download.nvidia.com/compute/cuda/12.9.1/local_installers/cuda_12.9.1_575.57.08_linux.run
sudo bash cuda_12.9.1_575.57.08_linux.run --toolkit --silent

Downloading and Extracting cuDNN 8.9

For some reason, this file only downloads correctly through a browser: https://developer.nvidia.com/downloads/compute/cudnn/secure/8.9.7/local_installers/12.x/cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz/

sudo tar xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz --strip-components=1  --transform="s,lib$,lib64,x;s,lib/,lib64/," -C /usr/local/cuda/ --keep-directory-symlink

Downloading and Extracting NCCL 2.30

For some reason, this file also only downloads correctly through a browser: https://developer.nvidia.com/downloads/compute/machine-learning/nccl/secure/2.30.3/agnostic/x64/nccl_2.30.3-1+cuda12.9_x86_64.txz/

sudo tar xvf nccl_2.30.3-1+cuda12.9_x86_64.txz --strip-components=1  --transform="s,lib$,lib64,x;s,lib/,lib64/," -C /usr/local/cuda/ --keep-directory-symlink

Download Binaries

If you find it troublesome to download through a browser, I've uploaded the binaries here for your convenience.

cuda_12.9.1_575.57.08_linux.run
cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
nccl_2.30.3-1+cuda12.9_x86_64.txz

Building TensorFlow 1.15 with CUDA 12.9

If you prefer to build the binaries yourself, follow the steps below.

If you want to build with Python 3.12 on Ubuntu 24.04, additional modifications are needed; please refer to this page for details.

Apply Patch

cuda-12.9.patch

TensorFlow 1.15 contains definitions that conflict with the CUDA 12.9 headers, so the CUDA headers must be patched first.

cd /usr/local/cuda-12.9/
sudo patch -p1 < cuda-12.9.patch

Install bazel

Bazel 0.26.1 is required; newer versions will not work.

wget https://releases.bazel.build/0.26.1/release/bazel-0.26.1-installer-linux-x86_64.sh
bash bazel-0.26.1-installer-linux-x86_64.sh

Install Python Dependencies

If you use virtualenv, install the dependencies in the virtual environment.

sudo apt install python3-dev python3-pip git unzip
(cd /usr/bin && sudo ln -s python3 python)
pip3 install 'numpy<2' keras_preprocessing

Prepare the TensorFlow 1.15 Source

git clone -b r1.15 https://github.com/tensorflow/tensorflow.git

Apply Patch to the TensorFlow 1.15 Source

tensorflow-1.15.5-cuda12.9.patch

Apply a patch to the TensorFlow 1.15 source code so that it builds with CUDA 12.9.

cd tensorflow
patch -p1 < ../tensorflow-1.15.5-cuda12.9.patch

configure

cd tensorflow
./configure

The default settings are fine. When asked about CUDA support, answer yes, and specify the compute capabilities as 7.5,8.0,8.6,8.9,9.0,10.0,10.3,12.0. Removing capabilities you don't need will speed up compilation.

Modify CUDA Path

CUDA 12.9 is automatically detected at /usr/local/cuda, but with that unversioned path the include paths are resolved incorrectly and the build fails. Edit the .tf_configure.bazelrc file, find every occurrence of /usr/local/cuda, and change it to /usr/local/cuda-12.9:

build --action_env CUDA_TOOLKIT_PATH="/usr/local/cuda-12.9"
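If you would rather script this edit than do it by hand, here is a minimal sketch. The helper and its regex are my own, not part of the TensorFlow build tooling.

```python
import re

def fix_cuda_path(text: str, versioned: str = "/usr/local/cuda-12.9") -> str:
    """Replace bare /usr/local/cuda with the versioned path, leaving
    paths that already carry a version suffix untouched."""
    return re.sub(r"/usr/local/cuda(?!-)", versioned, text)

# Example usage against the real config file:
#   path = ".tf_configure.bazelrc"
#   with open(path) as f:
#       fixed = fix_cuda_path(f.read())
#   with open(path, "w") as f:
#       f.write(fixed)
```

The negative lookahead keeps any path that is already versioned (such as /usr/local/cuda-12.9) unchanged, so the script is safe to run more than once.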

Build

bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package

Building may take several hours, so be patient.

Build pip Package

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

A file named tensorflow-1.15.5-cp36-cp36m-linux_x86_64.whl will be created in /tmp/tensorflow_pkg, which you can install with pip. The cp36 part varies with your Python version.
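As a small illustration of how that tag relates to the interpreter you build with (a hypothetical helper, not part of the build scripts):

```python
import sys

def wheel_python_tag() -> str:
    """Return the cpXY tag for the running interpreter, e.g. cp38 for 3.8."""
    return f"cp{sys.version_info.major}{sys.version_info.minor}"

# On a Python 3.8 interpreter this returns "cp38", matching
# tensorflow-1.15.5-cp38-cp38-linux_x86_64.whl
```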


Check Whether TensorFlow 1.15 Recognizes the GPU

After installing via pip, you can check if the GPU is recognized by running the following code.

import tensorflow as tf
print(tf.config.experimental.list_logical_devices("GPU"))

If the CUDA libraries load successfully, the GPU will be listed. If a library path is wrong, error messages about failed loads will be printed instead.

a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
sess = tf.Session()
print(sess.run(c))

If everything works correctly, memory will be allocated on the GPU, and you can verify this with nvidia-smi.

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(256, 3),
])
model.summary()
x = model(tf.random_normal([1, 64, 64, 3]))
init_op = tf.global_variables_initializer()  # initialize_all_variables is deprecated
sess.run(init_op)
sess.run(x)

You can use code like this to check that operations such as Conv2D run correctly.


If you have any issues or feedback regarding the content, please contact us at contact@lithium03.info.
