There are four tiers of packages to install, each serving a distinct purpose and building on the tiers before it. Depending on your environment (uv, conda, or bare-metal Python), the final two tiers may be optional when installing on Debian. The package tier landscape is as follows:
NVIDIA provides the latest versions.
Here is the NVIDIA graphics driver list for UNIX.
NVIDIA has good documentation on CUDA installation, which covers installing both the graphics driver and the CUDA toolkit.
NVIDIA also has detailed documentation on cuDNN installation. The cuDNN documentation clearly states the two prerequisites: the graphics driver and CUDA.
Conda provides the CUDA toolkit and cuDNN. Note that they require a compatible version of the graphics driver to function. In fact, conda has multiple channels providing the CUDA toolkit and cuDNN: the default anaconda channel has cudatoolkit and cudnn; the conda-forge channel has newer versions of cudatoolkit and cudnn; the nvidia channel has the most up-to-date cuda and cudnn.
First, choose the version of the graphics driver that is compatible with the GPUs at hand. For example, a 2070 Super needs the graphics driver from buster-backports or later; a 3080 Ti needs bullseye or later.
| Debian Release | NVIDIA graphics driver | Supported GPUs | Note |
|---|---|---|---|
| trixie-backports | nvidia-driver 550.163.01 | supported devices | GeForce RTX 40xx |
| trixie | nvidia-driver 550.163.01 | supported devices | GeForce RTX 40xx |
| bookworm-backports | nvidia-driver 535.216.03 | supported devices | GeForce RTX 40xx |
| bookworm | nvidia-driver 535.261.03 | supported devices | GeForce RTX 40xx |
| bullseye | nvidia-driver 470.256.02 | supported devices | GeForce RTX 30xx |
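As an illustration only, the guidance above can be encoded as a lookup from GeForce generation to the oldest suitable Debian release. The dictionary and function are hypothetical helpers, with data copied from the table and examples above:

```python
# Illustrative sketch: oldest Debian release whose nvidia-driver
# supports a given GeForce generation (data from the table above).
MIN_RELEASE = {
    "RTX 20xx": "buster-backports",  # e.g. 2070 Super
    "RTX 30xx": "bullseye",          # e.g. 3080 Ti
    "RTX 40xx": "bookworm",          # per the table above
}

def oldest_release_for(generation: str) -> str:
    """Return the oldest Debian release with a driver for this generation."""
    return MIN_RELEASE[generation]

print(oldest_release_for("RTX 30xx"))  # bullseye
```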
Second, it is critical that the CUDA version is supported by a compatible graphics driver. Here is a table copied from NVIDIA’s release notes for the CUDA toolkit components:
| CUDA Toolkit | Linux x86_64 Driver Version | Windows x86_64 Driver Version |
|---|---|---|
| … | … | … |
| CUDA 12.4 Update 1 | >=550.54.15 | >=551.78 |
| CUDA 12.4 GA | >=550.54.14 | >=551.61 |
| … | … | … |
| CUDA 11.8 GA | >=520.61.05 | >=520.06 |
| … | … | … |
| CUDA 11.2.2 Update 2 | >=460.32.03 | >=461.33 |
| CUDA 11.2.1 Update 1 | >=460.32.03 | >=461.09 |
| CUDA 11.2.0 GA | >=460.27.03 | >=460.82 |
| … | … | … |
| CUDA 9.2 (9.2.148 Update 1) | >= 396.37 | >= 398.26 |
| CUDA 9.2 (9.2.88) | >= 396.26 | >= 397.44 |
| … | … | … |
When installing CUDA and cuDNN, you might need to pin the versions to stay within the compatibility ranges above.
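The compatibility check can be sketched in a few lines of Python. The version thresholds are copied from a few rows of the release-notes table above; the function is a hypothetical helper, not an NVIDIA tool:

```python
# Minimum Linux x86_64 driver version per CUDA toolkit release,
# copied from a few rows of NVIDIA's release-notes table above.
CUDA_MIN_DRIVER = {
    "12.4 Update 1": (550, 54, 15),
    "11.8 GA": (520, 61, 5),
    "11.2.0 GA": (460, 27, 3),
}

def driver_supports(driver_version: str, cuda_release: str) -> bool:
    """True if the installed driver meets the CUDA release's minimum."""
    installed = tuple(int(part) for part in driver_version.split("."))
    return installed >= CUDA_MIN_DRIVER[cuda_release]

# The bullseye driver 470.256.02 is new enough for CUDA 11.2 but not 12.4.
print(driver_supports("470.256.02", "11.2.0 GA"))     # True
print(driver_supports("470.256.02", "12.4 Update 1"))  # False
```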
To see the GPUs at hand:
lspci | grep -i nvidia
The NVIDIA packages live in the contrib, non-free, and non-free-firmware components, so edit /etc/apt/sources.list to add them. Your sources.list should look like the following:
deb http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware
deb http://security.debian.org/debian-security/ trixie-security main contrib non-free non-free-firmware
deb http://deb.debian.org/debian/ trixie-updates main contrib non-free non-free-firmware
deb http://deb.debian.org/debian/ trixie-backports main contrib non-free non-free-firmware
The NVIDIA driver installs into the kernel tree. In order to do that, the Linux headers are needed, and it is important to install the exact version of the headers matching the running kernel. Thus this is better done manually and separately.
First, verify the Linux kernel version with uname -r and the architecture with uname -m.
To list the linux-headers packages already installed:
sudo dpkg -l | grep 'linux-headers'
Then to install the Linux headers:
sudo apt-get install linux-headers-$(uname -r | sed 's/[^-]*-[^-]*-//')
On my desktop, the command uname -r | sed 's/[^-]*-[^-]*-//' outputs amd64. So the above command is equivalent to:
sudo apt-get install linux-headers-amd64
The package linux-headers-amd64 is the architecture-specific meta-package. The package manager points it to the package of the correct kernel version, for example, linux-headers-6.12.74+deb13+1-amd64. So in the list of packages to be installed, double check there is linux-headers-6.12.74+deb13+1-amd64 where the 6.12.74+deb13+1-amd64 part should match the kernel of your system shown by uname -r.
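To see what the sed expression does, you can run it on a sample kernel string. The version shown is just an example of a bookworm-style kernel release, not necessarily yours:

```shell
# The sed strips everything up to and including the second dash,
# leaving only the architecture suffix of the kernel release string.
sample="6.1.0-13-amd64"   # example output of `uname -r`
arch_suffix=$(echo "$sample" | sed 's/[^-]*-[^-]*-//')
echo "$arch_suffix"               # amd64
echo "linux-headers-$sample"      # the exact-version package name
```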
sudo apt-get install dkms
The dkms package is singled out to make it clear that NVIDIA installs into the kernel tree. From the Ubuntu documentation, “This DKMS (Dynamic Kernel Module Support) package provides support for installing supplementary versions of kernel modules. The package compiles and installs into the kernel tree.” It turns out this package is also required by other software such as VirtualBox and Docker. Thus we lock it down as a manual install.
Again, the packages to install at this step require non-free and non-free-firmware enabled in /etc/apt/sources.list.
In trixie, the meta-package nvidia-driver still pulls in X Server dependencies. These dependencies, however, are incomplete for a desktop workstation, as they are missing the xserver-xorg-input-* packages for keyboard and mouse, and yet are redundant for a headless ML server.
So, for the desktop environment, we explicitly include the xserver-xorg meta-package, which pulls in the input drivers. Note that X Server may be replaced by Wayland in the future.
sudo apt-get install nvidia-driver xserver-xorg
For the headless server, we install only nvidia-kernel-dkms.
sudo apt-get --no-install-recommends install nvidia-kernel-dkms
Finally, reboot to replace nouveau with the nvidia driver. The installer will prompt you if a reboot is needed.
To verify, run nvidia-smi.
sudo apt-get install nvidia-cuda-toolkit nvidia-cudnn
To verify, nvcc --version should display the CUDA version.
If conda is not already installed, install it first. Then create and activate an environment and install the packages:
conda create --name numba python=3.9
conda activate numba
conda install cudatoolkit cudnn numba
To verify, in Python:
from numba import cuda
cuda.detect()
It should list the CUDA devices, e.g. ‘GeForce RTX 3080 Ti’.
conda create --name tf python=3.9
conda activate tf
conda install tensorflow-gpu
To verify, in Python:
import tensorflow as tf
tf.config.list_physical_devices()
conda create --name torch python=3.9
conda activate torch
conda install pytorch cudatoolkit=10.2 -c pytorch
To verify, in Python:
import torch
torch.cuda.is_available()
It should return True.
Next step: GNOME