PyTorch

2019-09-03. Category & Tags: Deep Learning, PyTorch

Install (w/ GPU) #

Install the NVIDIA GPU driver and a compatible CUDA version first, or let pip install the CUDA runtime together with PyTorch.
See the selector on the PyTorch website to find a compatible CUDA version.
[image: PyTorch install selector (pytorch-with-cuda)]
Then use the command given by the selector to install PyTorch:

Tip: the download is slow; running it inside tmux is suggested.

pip source: #

# note: the PyPI package is named torch (not pytorch); cudatoolkit=11.1 is conda syntax and does not apply to pip
pip install torch torchvision torchaudio

pip binary: #

Tip: the torch...whl file is > 3 GB; it can be pre-downloaded using IDM/FDM etc. first, then:

# method 1. purely online
pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

# or: method 2. with pre-downloaded whl
pip3 install torch-1.9.0+cu111-cp39-cp39-win_amd64.whl torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

conda (NOT suggested): #

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

Verify Installation #

simplest:

import torch
torch.cuda.is_available()

or, from the shell:

nvcc -V       # nvcc: NVIDIA CUDA compiler driver, part of the CUDA toolkit
nvidia-smi
python -c 'import torch; print(torch.cuda.is_available())'
python -c 'import torch; print(torch.rand(2,3).cuda())'

basic info:

import torch

# print the PyTorch version
print('torch.__version__:', torch.__version__)

# check whether CUDA is available
cuda_available = torch.cuda.is_available()
print("cuda is available:", cuda_available)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('using device:', device)

# if CUDA is available, print the CUDA version and the number of available GPUs
if cuda_available:
    print("cuda version (torch.version.cuda):", torch.version.cuda)
    print("nr of GPUs:", torch.cuda.device_count())
    print("current_device GPU index:", torch.cuda.current_device())
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}\tdevice name: {torch.cuda.get_device_name(i)}")
        print(f"\tdevice selected obj.: {torch.cuda.device(i)}")
        print('\tmem allocated:', round(torch.cuda.memory_allocated(i)/1024**3,1), 'GB')
        print('\tmem cached:   ', round(torch.cuda.memory_reserved(i)/1024**3,1), 'GB')
else:
    print("cuda is NOT available.")

ref:csdn

normal computation:

import torch
import torch.nn as nn
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
t1 = torch.randn(1,2)
t2 = torch.randn(1,2).to(dev)
print(t1)  # tensor([[-0.2678,  1.9252]])
print(t2)  # tensor([[ 0.5117, -3.6247]], device='cuda:0')
t1.to(dev)
print(t1)  # tensor([[-0.2678,  1.9252]])
print(t1.is_cuda) # False
t1 = t1.to(dev)
print(t1)  # tensor([[-0.2678,  1.9252]], device='cuda:0')
print(t1.is_cuda) # True

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1,2)

    def forward(self, x):
        x = self.l1(x)
        return x
model = M()   # not on cuda
model.to(dev) # is on cuda (all parameters)
print(next(model.parameters()).is_cuda) # True

ref:stackoverflow
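
A common follow-up error is feeding a CPU tensor to a model that lives on the GPU ("Expected all tensors to be on the same device"). A minimal sketch (using a bare nn.Linear instead of the class M above) with model and input on the same device:

import torch
import torch.nn as nn

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = nn.Linear(1, 2).to(dev)     # move the parameters to dev

x = torch.randn(4, 1, device=dev)   # create the input directly on dev (4 samples, 1 feature)
with torch.no_grad():               # inference only, no gradients needed
    y = model(x)
print(y.shape)    # torch.Size([4, 2])
print(y.is_cuda)  # True when dev is a CUDA device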

See also:
Fast.ai, which uses PyTorch.

Install Libs with GPU/Cuda Support (e.g. GNN Libs) #

Tip: compatible versions of the GPU driver/CUDA and torch should be installed before running the following commands. (The commands can also be run inside PyCharm's built-in "Terminal" to install into the project's venv.)
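
A quick way to see which torch/CUDA combination is installed, and hence which wheel index URL to use below (the printed values in the comments are just examples):

import torch

# these two strings determine the matching wheel index, e.g. torch-1.9.0+cu111.html
print(torch.__version__)   # e.g. 1.9.0+cu111
print(torch.version.cuda)  # e.g. 11.1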

pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html && \
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html && \
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html && \
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html && \
pip install torch-geometric
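
After installation, a minimal sanity check (a sketch assuming the wheels above installed cleanly) is to import torch_geometric and build a tiny graph:

import torch
import torch_geometric
from torch_geometric.data import Data

print('torch_geometric version:', torch_geometric.__version__)

# a tiny 3-node graph; 2 undirected edges stored as 4 directed edges
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 8)  # 3 nodes, 8 features each
data = Data(x=x, edge_index=edge_index)
print(data)  # Data(x=[3, 8], edge_index=[2, 4])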