Commit 5e115ef

Automated tutorials push

1 parent e6c3161 commit 5e115ef

181 files changed, +481110 -487806 lines changed


_downloads/dbcf5e6a5e95bf9f7a0e49e123c13b60/saving_and_loading_a_general_checkpoint.ipynb

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@
 },
 "outputs": [],
 "source": [
-"model = Net()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodel.load_state_dict(checkpoint['model_state_dict'])\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\nepoch = checkpoint['epoch']\nloss = checkpoint['loss']\n\nmodel.eval()\n# - or -\nmodel.train()"
+"model = Net()\noptimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodel.load_state_dict(checkpoint['model_state_dict'])\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\nepoch = checkpoint['epoch']\nloss = checkpoint['loss']\n\nmodel.eval()\n# - or -\nmodel.train()"
 ]
 },
 {

_downloads/f1c4e73325cc32385d0f39145a3d83eb/saving_and_loading_a_general_checkpoint.py

Lines changed: 1 addition & 1 deletion
@@ -129,7 +129,7 @@ def forward(self, x):
 #

 model = Net()
-optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
+optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

 checkpoint = torch.load(PATH)
 model.load_state_dict(checkpoint['model_state_dict'])
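Note: this is the one substantive change in the push, applied to both the notebook and the script copy of the checkpoint tutorial. The loading snippet instantiates model = Net() but previously built the optimizer from a stale net variable. For context, a minimal runnable sketch of the corrected save-then-load pattern; the Net definition, PATH, and the saved epoch/loss values below are placeholders, not the tutorial's exact code.

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # Placeholder model standing in for the tutorial's Net.
    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(4, 2)

        def forward(self, x):
            return self.fc(x)

    PATH = "checkpoint.tar"  # placeholder path

    # Saving: the optimizer must wrap the same model being checkpointed.
    model = Net()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    torch.save({
        'epoch': 5,                                     # placeholder value
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'loss': 0.42,                                   # placeholder value
    }, PATH)

    # Loading: this is the line that used to reference the nonexistent `net`.
    model = Net()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    checkpoint = torch.load(PATH)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    epoch = checkpoint['epoch']
    loss = checkpoint['loss']
    model.eval()  # or model.train() to resume training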
[Binary image files changed: regenerated tutorial figures and thumbnails, including _images/sphx_glr_trainingyt_001.png (1.15 KB) and _images/sphx_glr_trainingyt_thumb.png (2.73 KB)]

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

 loss: 5.167
-elapsed time (seconds): 184.6
+elapsed time (seconds): 198.2
 loss: 5.168
-elapsed time (seconds): 107.0
+elapsed time (seconds): 124.4
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 0.415 seconds)
+**Total running time of the script:** ( 5 minutes 32.035 seconds)

 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
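The hunks in this file are regenerated tutorial output; the paired loss/elapsed-time readings above compare the original fp32 model against its dynamically quantized counterpart, and the timings drift between CI runs. For reference, the conversion this tutorial benchmarks is a one-liner; the nn.Sequential below is a toy stand-in, not the tutorial's LSTM word-language model.

    import torch
    import torch.nn as nn

    # Toy stand-in; the tutorial itself quantizes an LSTM language model.
    model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))

    # Dynamic quantization: weights are converted to int8 ahead of time,
    # activations are quantized on the fly at inference.
    quantized_model = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(8, 256)
    print(quantized_model(x).shape)  # same interface as the fp32 model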

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 31 additions & 28 deletions
@@ -419,27 +419,30 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

 0%| | 0.00/548M [00:00<?, ?B/s]
-2%|2 | 11.5M/548M [00:00<00:04, 120MB/s]
-7%|7 | 38.8M/548M [00:00<00:02, 218MB/s]
-12%|#2 | 66.1M/548M [00:00<00:02, 249MB/s]
-17%|#7 | 93.4M/548M [00:00<00:01, 264MB/s]
-22%|##1 | 121M/548M [00:00<00:01, 271MB/s]
-27%|##6 | 148M/548M [00:00<00:01, 276MB/s]
-32%|###1 | 175M/548M [00:00<00:01, 279MB/s]
-37%|###6 | 202M/548M [00:00<00:01, 281MB/s]
-42%|####1 | 229M/548M [00:00<00:01, 283MB/s]
-47%|####6 | 256M/548M [00:01<00:01, 281MB/s]
-52%|#####1 | 284M/548M [00:01<00:00, 282MB/s]
-57%|#####6 | 311M/548M [00:01<00:00, 284MB/s]
-62%|######1 | 338M/548M [00:01<00:00, 283MB/s]
-67%|######6 | 365M/548M [00:01<00:00, 284MB/s]
-72%|#######1 | 393M/548M [00:01<00:00, 284MB/s]
-77%|#######6 | 420M/548M [00:01<00:00, 284MB/s]
-82%|########1 | 447M/548M [00:01<00:00, 285MB/s]
-87%|########6 | 475M/548M [00:01<00:00, 286MB/s]
-92%|#########1| 502M/548M [00:01<00:00, 286MB/s]
-97%|#########6| 529M/548M [00:02<00:00, 286MB/s]
-100%|##########| 548M/548M [00:02<00:00, 277MB/s]
+2%|1 | 8.51M/548M [00:00<00:06, 89.2MB/s]
+5%|5 | 27.5M/548M [00:00<00:03, 154MB/s]
+9%|9 | 49.9M/548M [00:00<00:02, 191MB/s]
+13%|#3 | 72.6M/548M [00:00<00:02, 210MB/s]
+18%|#7 | 96.0M/548M [00:00<00:02, 222MB/s]
+22%|##1 | 119M/548M [00:00<00:01, 229MB/s]
+26%|##6 | 143M/548M [00:00<00:01, 235MB/s]
+30%|### | 166M/548M [00:00<00:01, 239MB/s]
+35%|###4 | 191M/548M [00:00<00:01, 244MB/s]
+39%|###9 | 215M/548M [00:01<00:01, 248MB/s]
+44%|####3 | 239M/548M [00:01<00:01, 247MB/s]
+48%|####7 | 262M/548M [00:01<00:01, 241MB/s]
+52%|#####2 | 287M/548M [00:01<00:01, 246MB/s]
+57%|#####6 | 312M/548M [00:01<00:00, 251MB/s]
+61%|######1 | 336M/548M [00:01<00:00, 252MB/s]
+66%|######5 | 361M/548M [00:01<00:00, 254MB/s]
+70%|####### | 385M/548M [00:01<00:00, 253MB/s]
+75%|#######4 | 409M/548M [00:01<00:00, 253MB/s]
+79%|#######9 | 434M/548M [00:01<00:00, 255MB/s]
+84%|########3 | 458M/548M [00:02<00:00, 254MB/s]
+88%|########8 | 482M/548M [00:02<00:00, 253MB/s]
+93%|#########2| 507M/548M [00:02<00:00, 255MB/s]
+97%|#########6| 532M/548M [00:02<00:00, 256MB/s]
+100%|##########| 548M/548M [00:02<00:00, 241MB/s]
@@ -758,22 +761,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 3.844144 Content Loss: 4.073460
+Style Loss : 4.260757 Content Loss: 4.044753

 run [100]:
-Style Loss : 1.109934 Content Loss: 3.010808
+Style Loss : 1.126809 Content Loss: 3.026857

 run [150]:
-Style Loss : 0.692627 Content Loss: 2.640074
+Style Loss : 0.703462 Content Loss: 2.638905

 run [200]:
-Style Loss : 0.465224 Content Loss: 2.483206
+Style Loss : 0.475341 Content Loss: 2.491518

 run [250]:
-Style Loss : 0.340352 Content Loss: 2.399870
+Style Loss : 0.345535 Content Loss: 2.403845

 run [300]:
-Style Loss : 0.260797 Content Loss: 2.346783
+Style Loss : 0.262767 Content Loss: 2.350889
@@ -782,7 +785,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 36.697 seconds)
+**Total running time of the script:** ( 0 minutes 36.771 seconds)

 .. _sphx_glr_download_advanced_neural_style_tutorial.py:
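The "run [N]" lines above come from the tutorial's LBFGS closure, which logs every 50 closure calls; the loss values drift between builds because the optimization starts from run-dependent state. A minimal runnable sketch of that logging pattern, with a toy image and a stand-in MSE loss instead of the tutorial's VGG-based style/content losses:

    import torch

    # Toy stand-ins for the tutorial's input image and loss terms.
    input_img = torch.rand(1, 3, 64, 64, requires_grad=True)
    target = torch.rand(1, 3, 64, 64)
    optimizer = torch.optim.LBFGS([input_img])

    run = [0]  # list so the closure can mutate it
    while run[0] <= 300:
        def closure():
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(input_img, target)  # stand-in
            loss.backward()
            run[0] += 1  # LBFGS may call the closure several times per step
            if run[0] % 50 == 0:
                print(f"run [{run[0]}]: loss {loss.item():.6f}")
            return loss
        optimizer.step(closure)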

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -311,7 +311,7 @@ The backward pass computes the gradient wrt the input and the gradient wrt the f

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 1.053 seconds)
+**Total running time of the script:** ( 0 minutes 1.102 seconds)

 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:

_sources/beginner/Intro_to_TorchScript_tutorial.rst.txt

Lines changed: 37 additions & 37 deletions
@@ -114,11 +114,11 @@ Let’s examine a small example:

 .. code-block:: none

-(tensor([[0.8625, 0.2666, 0.4852, 0.8900],
-        [0.8946, 0.6309, 0.8095, 0.4405],
-        [0.9356, 0.9073, 0.8988, 0.7443]]), tensor([[0.8625, 0.2666, 0.4852, 0.8900],
-        [0.8946, 0.6309, 0.8095, 0.4405],
-        [0.9356, 0.9073, 0.8988, 0.7443]]))
+(tensor([[0.7682, 0.5209, 0.5360, 0.7231],
+        [0.9017, 0.8825, 0.4290, 0.3843],
+        [0.9126, 0.8366, 0.6580, 0.2907]]), tensor([[0.7682, 0.5209, 0.5360, 0.7231],
+        [0.9017, 0.8825, 0.4290, 0.3843],
+        [0.9126, 0.8366, 0.6580, 0.2907]]))
@@ -173,11 +173,11 @@ Let’s do something a little more interesting:
 MyCell(
   (linear): Linear(in_features=4, out_features=4, bias=True)
 )
-(tensor([[0.8381, 0.2882, 0.5250, 0.7289],
-        [0.6650, 0.4640, 0.8494, 0.6271],
-        [0.7965, 0.6384, 0.8609, 0.8246]], grad_fn=<TanhBackward0>), tensor([[0.8381, 0.2882, 0.5250, 0.7289],
-        [0.6650, 0.4640, 0.8494, 0.6271],
-        [0.7965, 0.6384, 0.8609, 0.8246]], grad_fn=<TanhBackward0>))
+(tensor([[0.0534, 0.3669, 0.4033, 0.5633],
+        [0.7102, 0.9165, 0.2846, 0.8534],
+        [0.8271, 0.7792, 0.2856, 0.8334]], grad_fn=<TanhBackward0>), tensor([[0.0534, 0.3669, 0.4033, 0.5633],
+        [0.7102, 0.9165, 0.2846, 0.8534],
+        [0.8271, 0.7792, 0.2856, 0.8334]], grad_fn=<TanhBackward0>))
@@ -248,11 +248,11 @@ Now let’s examine said flexibility:
   (dg): MyDecisionGate()
   (linear): Linear(in_features=4, out_features=4, bias=True)
 )
-(tensor([[0.8883, 0.6079, 0.4601, 0.5768],
-        [0.8601, 0.8278, 0.7881, 0.4191],
-        [0.9087, 0.8940, 0.7467, 0.6181]], grad_fn=<TanhBackward0>), tensor([[0.8883, 0.6079, 0.4601, 0.5768],
-        [0.8601, 0.8278, 0.7881, 0.4191],
-        [0.9087, 0.8940, 0.7467, 0.6181]], grad_fn=<TanhBackward0>))
+(tensor([[ 0.4778, -0.0953,  0.7796, -0.4068],
+        [ 0.7181,  0.5861,  0.6147, -0.1041],
+        [ 0.7187, -0.0476,  0.5960, -0.2374]], grad_fn=<TanhBackward0>), tensor([[ 0.4778, -0.0953,  0.7796, -0.4068],
+        [ 0.7181,  0.5861,  0.6147, -0.1041],
+        [ 0.7187, -0.0476,  0.5960, -0.2374]], grad_fn=<TanhBackward0>))
@@ -325,11 +325,11 @@ Tracing ``Modules``
   (linear): Linear(original_name=Linear)
 )

-(tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]], grad_fn=<TanhBackward0>), tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]], grad_fn=<TanhBackward0>))
+(tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]], grad_fn=<TanhBackward0>), tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]], grad_fn=<TanhBackward0>))
@@ -453,17 +453,17 @@ the Python module:

 .. code-block:: none

-(tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]], grad_fn=<TanhBackward0>), tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]], grad_fn=<TanhBackward0>))
-(tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]],
-       grad_fn=<DifferentiableGraphBackward>), tensor([[0.4084, 0.5197, 0.7663, 0.0083],
-        [0.1466, 0.7296, 0.6954, 0.5937],
-        [0.1349, 0.6090, 0.4125, 0.7356]],
+(tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]], grad_fn=<TanhBackward0>), tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]], grad_fn=<TanhBackward0>))
+(tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]],
+       grad_fn=<DifferentiableGraphBackward>), tensor([[ 0.7199,  0.6414,  0.4574,  0.8501],
+        [ 0.6018, -0.3666,  0.9026,  0.6057],
+        [ 0.8042,  0.3636,  0.8445,  0.8285]],
        grad_fn=<DifferentiableGraphBackward>))
@@ -618,11 +618,11 @@ TorchScript. Let’s now try running the program:
 .. code-block:: none


-(tensor([[0.7420, 0.1661, 0.4991, 0.7497],
-        [0.8506, 0.4535, 0.6937, 0.2127],
-        [0.4936, 0.2008, 0.3331, 0.2968]], grad_fn=<TanhBackward0>), tensor([[0.7420, 0.1661, 0.4991, 0.7497],
-        [0.8506, 0.4535, 0.6937, 0.2127],
-        [0.4936, 0.2008, 0.3331, 0.2968]], grad_fn=<TanhBackward0>))
+(tensor([[-0.6553,  0.7222,  0.5345,  0.3337],
+        [ 0.1028,  0.2597,  0.8959,  0.4656],
+        [-0.5948,  0.7973,  0.7678,  0.7607]], grad_fn=<TanhBackward0>), tensor([[-0.6553,  0.7222,  0.5345,  0.3337],
+        [ 0.1028,  0.2597,  0.8959,  0.4656],
+        [-0.5948,  0.7973,  0.7678,  0.7607]], grad_fn=<TanhBackward0>))
@@ -805,7 +805,7 @@ https://colab.research.google.com/drive/1HiICg6jRkBnr5hvK2-VnMi88Vi9pUzEJ

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.728 seconds)
+**Total running time of the script:** ( 0 minutes 1.164 seconds)

 .. _sphx_glr_download_beginner_Intro_to_TorchScript_tutorial.py:
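The tensor-value churn in the hunks above (and in the autograd file below) has a single cause: the tutorial feeds torch.rand inputs to a randomly initialized Linear layer without fixing a seed, so every docs rebuild prints different numbers. A minimal sketch of the pattern, reconstructed from the shapes in the diff (Linear 4-to-4, tanh, a tuple of two identical tensors); the seed value is an arbitrary choice:

    import torch

    class MyCell(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)

        def forward(self, x, h):
            new_h = torch.tanh(self.linear(x) + h)
            return new_h, new_h  # the pair of identical tensors seen above

    torch.manual_seed(0)  # arbitrary; pinning a seed would stop the churn
    my_cell = MyCell()
    x, h = torch.rand(3, 4), torch.rand(3, 4)
    traced_cell = torch.jit.trace(my_cell, (x, h))
    print(my_cell(x, h))
    print(traced_cell(x, h))  # tracing preserves values, so the two match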

_sources/beginner/basics/autogradqs_tutorial.rst.txt

Lines changed: 9 additions & 9 deletions
@@ -113,8 +113,8 @@ documentation <https://pytorch.org/docs/stable/autograd.html#function>`__.

 .. code-block:: none

-Gradient function for z = <AddBackward0 object at 0x7f00a8213010>
-Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7f00a8213400>
+Gradient function for z = <AddBackward0 object at 0x7f6ec536ae00>
+Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7f6ef4e66b00>
@@ -151,12 +151,12 @@ namely, we need :math:`\frac{\partial loss}{\partial w}` and

 .. code-block:: none

-tensor([[0.0363, 0.1757, 0.0730],
-        [0.0363, 0.1757, 0.0730],
-        [0.0363, 0.1757, 0.0730],
-        [0.0363, 0.1757, 0.0730],
-        [0.0363, 0.1757, 0.0730]])
-tensor([0.0363, 0.1757, 0.0730])
+tensor([[0.3138, 0.1644, 0.2158],
+        [0.3138, 0.1644, 0.2158],
+        [0.3138, 0.1644, 0.2158],
+        [0.3138, 0.1644, 0.2158],
+        [0.3138, 0.1644, 0.2158]])
+tensor([0.3138, 0.1644, 0.2158])
@@ -395,7 +395,7 @@ Further Reading

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.041 seconds)
+**Total running time of the script:** ( 0 minutes 0.020 seconds)

 .. _sphx_glr_download_beginner_basics_autogradqs_tutorial.py:
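The repeated rows in the gradient tensor above are expected, not an error: in this quickstart the input is all ones, so every row of the weight matrix receives the same gradient, and the bias gradient equals that row. A short sketch reproducing the structure (the printed values differ run to run because w and b are randomly initialized, which is why this hunk churns):

    import torch

    x = torch.ones(5)   # all-ones input, hence identical gradient rows
    y = torch.zeros(3)  # target
    w = torch.randn(5, 3, requires_grad=True)
    b = torch.randn(3, requires_grad=True)

    z = torch.matmul(x, w) + b
    loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
    loss.backward()

    print(w.grad)  # five identical rows, as in the diff above
    print(b.grad)  # equal to any single row of w.grad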
