pytorch setup with uv

overview

pytorch requires special index urls for different compute backends. this guide covers uv-specific configuration for pytorch projects.

key points:

  • pytorch wheels are hosted on separate indexes per compute backend
  • version tags indicate the backend (e.g., 2.5.1+cpu, 2.5.1+cu121)
  • uv handles multi-index configuration via [[tool.uv.index]] and [tool.uv.sources]
  • platform-specific dependencies are supported via environment markers

quick start

cpu-only

uv add torch torchvision torchaudio --index https://download.pytorch.org/whl/cpu

cuda (latest)

uv add torch torchvision torchaudio --index https://download.pytorch.org/whl/cu128

automatic backend selection

uv pip install torch --torch-backend=auto

index urls

backend    | index url                                | use case
cpu        | https://download.pytorch.org/whl/cpu     | no gpu acceleration
cuda 11.8  | https://download.pytorch.org/whl/cu118   | older nvidia gpus
cuda 12.6  | https://download.pytorch.org/whl/cu126   | recent nvidia gpus
cuda 12.8  | https://download.pytorch.org/whl/cu128   | latest nvidia gpus
rocm 6.3   | https://download.pytorch.org/whl/rocm6.3 | amd gpus
intel gpu  | https://download.pytorch.org/whl/xpu     | intel arc/iris
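
the uv add --index pattern from the quick start works with any row in this table; for example, on an amd machine with rocm drivers already installed:

# rocm 6.3
uv add torch torchvision torchaudio --index https://download.pytorch.org/whl/rocm6.3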

project configuration

basic setup

# pyproject.toml
[project]
dependencies = [
    "torch>=2.7.0",
    "torchvision>=0.22.0",
    "torchaudio>=2.7.0"
]
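
with no index entries configured, uv resolves these from the default index (pypi); on linux the pypi wheels are typically cuda-enabled, while windows and macos get cpu builds. to confirm what actually landed in the environment:

uv sync
uv run python -c "import torch; print(torch.__version__, torch.cuda.is_available())"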

specific index configuration

# pyproject.toml
[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"

[tool.uv.sources]
torch = { index = "pytorch-cu128" }
torchvision = { index = "pytorch-cu128" }
torchaudio = { index = "pytorch-cu128" }
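
after adding or changing the index configuration, re-lock and sync so torch actually resolves from the named index; the +cu128 local version tag is a quick confirmation:

uv lock
uv sync
uv run python -c "import torch; print(torch.__version__)"  # e.g. 2.7.1+cu128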

platform-specific setup

linux gets cuda, everything else gets cpu:

# pyproject.toml
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
    { index = "pytorch-cu128", marker = "sys_platform == 'linux'" }
]
torchvision = [
    { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
    { index = "pytorch-cu128", marker = "sys_platform == 'linux'" }
]
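
the resulting lockfile covers both environments, so the same uv.lock syncs cuda wheels on linux and cpu wheels everywhere else. if the project also depends on torchaudio, it presumably needs the same two-entry source list (a sketch mirroring the entries above):

# pyproject.toml (continued)
torchaudio = [
    { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
    { index = "pytorch-cu128", marker = "sys_platform == 'linux'" }
]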

optional dependencies

# pyproject.toml
[project.optional-dependencies]
cpu = [
    "torch>=2.7.0",
    "torchvision>=0.22.0",
    "torchaudio>=2.7.0"
]
cu128 = [
    "torch>=2.7.0",
    "torchvision>=0.22.0",
    "torchaudio>=2.7.0"
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", extra = "cpu" },
    { index = "pytorch-cu128", extra = "cu128" }
]
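
since both extras pull the same packages from different indexes, uv also lets you declare them as mutually exclusive so they can't both be enabled at once; a sketch, assuming uv's conflicting-extras support:

# pyproject.toml
[tool.uv]
conflicts = [
    [
        { extra = "cpu" },
        { extra = "cu128" }
    ]
]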

install with:

uv sync --extra cpu     # cpu-only
uv sync --extra cu128   # cuda 12.8

gpu setup

for gpu acceleration, verify that cuda is visible to pytorch:

# quick test
uv run --with torch --index https://download.pytorch.org/whl/cu128 \
    python -c "import torch; print(torch.cuda.is_available())"

common patterns

jupyter with pytorch

# temporary notebook
uv run --with jupyter --with torch --with torchvision \
    --index https://download.pytorch.org/whl/cu128 \
    jupyter lab

# permanent setup
uv add --group notebooks jupyter ipykernel matplotlib
uv add torch torchvision --index https://download.pytorch.org/whl/cu128
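
to use the project environment from an already-installed jupyter instead of launching it via uv run, the environment can be registered as a kernel (the kernel name here is arbitrary):

# register the project venv as a jupyter kernel
uv run python -m ipykernel install --user --name ml-project --display-name "ml-project (uv)"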

training script

uv can run scripts directly from urls:

# run directly from url
uv run https://michaelbommarito.com/wiki/python/scripts/check-pytorch.py

sample output with cuda:

Installed 37 packages in 161ms
pytorch version: 2.7.1+cu128
cuda available: True
cuda device: NVIDIA RTX 1000 Ada Generation Laptop GPU
cuda version: 12.8

cpu-only version (faster download):

# cpu version
uv run https://michaelbommarito.com/wiki/python/scripts/check-pytorch-cpu.py

sample output cpu-only:

pytorch version: 2.7.1+cpu
cuda available: False
cpu threads: 16
pytorch build: PyTorch built with:
  - GCC 11.2
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Ver...

script source:

#!/usr/bin/env -S uv run
# /// script
# dependencies = [
#   "torch",
#   "torchvision",
#   "tqdm",
#   "tensorboard"
# ]
# [tool.uv.sources]
# torch = { index = "pytorch-cu128" }
# torchvision = { index = "pytorch-cu128" }
#
# [[tool.uv.index]]
# name = "pytorch-cu128"
# url = "https://download.pytorch.org/whl/cu128"
# ///

import torch

print(f"pytorch version: {torch.__version__}")
print(f"cuda available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"cuda device: {torch.cuda.get_device_name(0)}")
    print(f"cuda version: {torch.version.cuda}")


development workflow

# init project
uv init ml-project --python 3.13
cd ml-project

# add pytorch with cuda
uv add torch torchvision torchaudio --index https://download.pytorch.org/whl/cu128

# add ml dependencies
uv add numpy pandas scikit-learn wandb
uv add --group dev pytest ipython ruff

# verify installation
uv run python -c "import torch; print(torch.cuda.is_available())"

troubleshooting

cuda not available

# check nvidia drivers
nvidia-smi

# verify cuda toolkit
nvcc --version

# reinstall with correct index
uv remove torch torchvision
uv add torch torchvision --index https://download.pytorch.org/whl/cu128

version conflicts

# clear cache
uv cache clean torch

# force reinstall
uv sync --reinstall-package torch

slow downloads

pytorch wheels are large (2-3gb for cuda). solutions:

# use uv's built-in cache
uv cache dir  # shows cache location

# ci/cd caching
- uses: actions/cache@v3
  with:
    path: ~/.cache/uv
    key: uv-pytorch-${{ runner.os }}-${{ hashFiles('**/uv.lock') }}

# pre-download in ci
uv pip install torch --index https://download.pytorch.org/whl/cu128 --dry-run

tips

  1. check cuda compatibility

    # before installing
    nvidia-smi | grep "CUDA Version"
  2. use lock files

    # lock specific versions
    uv lock
    # sync exact versions
    uv sync --frozen
  3. separate cpu/gpu environments

    # cpu development
    UV_INDEX_URL=https://download.pytorch.org/whl/cpu uv sync
    
    # gpu training
    UV_INDEX_URL=https://download.pytorch.org/whl/cu128 uv sync
  4. specify backend explicitly

    # in scripts
    UV_INDEX_URL=https://download.pytorch.org/whl/cu128 uv run train.py
    
    # or use --torch-backend flag
    uv pip install torch --torch-backend=cu128
