Commit 8282cf0

Author: Svetlana Karslioglu
Update changing_default_device.py
1 parent 77cf6b8 commit 8282cf0

1 file changed: 11 additions & 7 deletions
@@ -1,12 +1,13 @@
 """
 Changing default device
 =======================
-It is common to want to write PyTorch code in a device agnostic way,
+
+It is common practice to write PyTorch code in a device-agnostic way,
 and then switch between CPU and CUDA depending on what hardware is available.
-Traditionally, to do this you might have used if-statements and cuda() calls
+Typically, to do this you might have used if-statements and cuda() calls
 to do this:
-"""
 
+"""
 import torch
 
 USE_CUDA = False
@@ -21,23 +22,26 @@
 inp = torch.randn(128, 20, device=device)
 print(mod(inp).device)
 
+###################################################################
 # PyTorch now also has a context manager which can take care of the
-# device transfer automatically.
+# device transfer automatically. Here is an example:
 
 with torch.device('cuda'):
     mod = torch.nn.Linear(20, 30)
     print(mod.weight.device)
     print(mod(torch.randn(128, 20)).device)
 
-# You can also set it globally
+#########################################
+# You can also set it globally like this:
 
 torch.set_default_device('cuda')
 
 mod = torch.nn.Linear(20, 30)
 print(mod.weight.device)
 print(mod(torch.randn(128, 20)).device)
 
+################################################################
 # This function imposes a slight performance cost on every Python
-# call to the torch API (not just factory functions). If this
+# call to the torch API (not just factory functions). If this
 # is causing problems for you, please comment on
-# https://github.com/pytorch/pytorch/issues/92701
+# `this issue <https://github.com/pytorch/pytorch/issues/92701>`__
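
Note on the first hunk: the diff only shows the context around the "traditional" example; the body between USE_CUDA = False and the randn(..., device=device) call at old line 21 lies outside the hunk. For orientation only, here is a minimal sketch of what such if-statement/cuda() code usually looks like. The names mirror the visible context (USE_CUDA, mod, device, inp), but the body itself is an assumption, not the file's actual content:

import torch

USE_CUDA = False  # flip manually, or derive from torch.cuda.is_available()

# Build the module on CPU, then move it over if CUDA is requested.
mod = torch.nn.Linear(20, 30)
if USE_CUDA:
    mod = mod.cuda()

# Thread a device string through every factory call.
device = 'cuda' if USE_CUDA else 'cpu'
inp = torch.randn(128, 20, device=device)
print(mod(inp).device)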
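
Note on the last hunk: the comment added there warns that torch.set_default_device imposes a slight per-call cost on the torch API. One way to limit that cost, using only the two mechanisms this file already demonstrates, is to scope the device change with the context manager and reset the global default when it is no longer needed. This is a hedged sketch, not advice from the commit itself:

import torch

# Scope the device change to the code that needs it; outside the block
# the previous default is restored automatically.
with torch.device('cuda'):
    mod = torch.nn.Linear(20, 30)      # parameters allocated on CUDA
    out = mod(torch.randn(128, 20))    # input tensor also created on CUDA

print(torch.randn(3).device)  # default device outside the block (cpu, if unchanged)

# If the process-wide default was changed, switch it back explicitly.
torch.set_default_device('cuda')
# ... code that should default to CUDA ...
torch.set_default_device('cpu')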
