I find this is always the first thing I want to run when setting up a deep learning environment, whether on a desktop machine or on AWS. These commands load PyTorch and check that PyTorch can use the GPU.
```python
# Import PyTorch
import torch
```
Check If There Are Multiple Devices (i.e. GPU cards)
```python
# How many GPUs are there?
print(torch.cuda.device_count())
```
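If more than one card is present, a quick way to see them all is to loop over the device indices. This is a minimal sketch; on a CPU-only machine the count is zero and the loop simply prints nothing.

```python
import torch

# Print every visible CUDA device by index and name
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
```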
Which Is The Current GPU?
```python
# Which GPU is the current GPU?
print(torch.cuda.current_device())
```
What Is The Name Of The Current GPU?
```python
# Get the name of the current GPU
print(torch.cuda.get_device_name(torch.cuda.current_device()))
```
GeForce GTX 1080 Ti
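Beyond the name, `torch.cuda.get_device_properties` exposes more detail about a card, such as its total memory and compute capability. A sketch, guarded so it is safe to run on a CPU-only machine:

```python
import torch

if torch.cuda.is_available():
    # Properties of the current GPU
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    print(props.name)                        # device name
    print(props.total_memory / 1024 ** 3)    # total memory in GiB
    print(f"{props.major}.{props.minor}")    # compute capability
```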
Is PyTorch Using A GPU?
```python
# Is PyTorch using a GPU?
print(torch.cuda.is_available())
```
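In practice, the usual next step is to fold this check into a `device` variable and place tensors on it, a common idiom that falls back to the CPU when no GPU is found:

```python
import torch

# Use the GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a tensor directly on the chosen device
x = torch.ones(3, 3, device=device)
print(x.device)
```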