Commit fca5998

Automated tutorials push
1 parent c08efea commit fca5998

File tree

178 files changed: +494125 / -489710 lines changed


_images/sphx_glr_trainingyt_001.png: -54 Bytes
_images/sphx_glr_trainingyt_thumb.png: -278 Bytes

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

     loss: 5.167
-    elapsed time (seconds): 206.4
+    elapsed time (seconds): 174.8
     loss: 5.168
-    elapsed time (seconds): 121.3
+    elapsed time (seconds): 105.4

@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 37.097 seconds)
+**Total running time of the script:** ( 4 minutes 49.111 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
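For context, the elapsed times above compare an FP32 model against its dynamically quantized counterpart. A minimal sketch of the technique being benchmarked follows; the toy ``nn.Sequential`` model, sizes, and iteration count are illustrative stand-ins, not the tutorial's actual LSTM word-language model:

.. code-block:: python

    import time
    import torch
    import torch.nn as nn

    # Illustrative stand-in model (assumption: the tutorial itself times an LSTM).
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Replace Linear layers with int8 dynamically quantized versions:
    # weights are quantized ahead of time, activations on the fly at inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    def elapsed(m, x, iters=200):
        start = time.time()
        with torch.no_grad():
            for _ in range(iters):
                m(x)
        return time.time() - start

    x = torch.randn(32, 512)
    print("fp32 elapsed time (seconds):", elapsed(model, x))
    print("int8 elapsed time (seconds):", elapsed(quantized, x))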

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 34 additions & 62 deletions
@@ -419,61 +419,33 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

   0%| | 0.00/548M [00:00<?, ?B/s]
-  1%|1 | 7.53M/548M [00:00<00:07, 78.9MB/s]
- [53 further tqdm progress frames, throughput steady at roughly 96-107 MB/s]
- 100%|##########| 548M/548M [00:05<00:00, 105MB/s]
+  0%| | 688k/548M [00:00<01:23, 6.91MB/s]
+ [25 further tqdm progress frames, throughput ramping from 9.40 MB/s to roughly 286 MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 214MB/s]

@@ -792,22 +764,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 4.032918 Content Loss: 4.120193
+Style Loss : 4.190804 Content Loss: 4.206540

 run [100]:
-Style Loss : 1.136894 Content Loss: 3.031781
+Style Loss : 1.127398 Content Loss: 3.016130

 run [150]:
-Style Loss : 0.711277 Content Loss: 2.656376
+Style Loss : 0.709367 Content Loss: 2.649561

 run [200]:
-Style Loss : 0.472673 Content Loss: 2.488473
+Style Loss : 0.476386 Content Loss: 2.488675

 run [250]:
-Style Loss : 0.342325 Content Loss: 2.401107
+Style Loss : 0.345237 Content Loss: 2.402809

 run [300]:
-Style Loss : 0.261902 Content Loss: 2.348218
+Style Loss : 0.262759 Content Loss: 2.348847

@@ -816,7 +788,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 40.085 seconds)
+**Total running time of the script:** ( 0 minutes 37.406 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:
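The ``Style Loss`` / ``Content Loss`` pairs above are printed every 50 optimizer steps. As a hedged sketch of what the style term measures, the following mirrors the tutorial's normalized Gram-matrix loss; the feature-map shapes below are made up for illustration rather than taken from VGG-19:

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        # feat: (batch, channels, height, width) activations from one VGG layer
        b, c, h, w = feat.size()
        flat = feat.view(b * c, h * w)
        # Normalize so the loss scale does not depend on the layer size
        return flat @ flat.t() / (b * c * h * w)

    def style_loss(input_feat, target_feat):
        return F.mse_loss(gram_matrix(input_feat), gram_matrix(target_feat))

    # Hypothetical activations standing in for real VGG feature maps
    current = torch.randn(1, 64, 128, 128)
    target = torch.randn(1, 64, 128, 128)
    print(style_loss(current, target))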

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -311,7 +311,7 @@ The backward pass computes the gradient wrt the input and the gradient wrt the f

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 1.038 seconds)
+**Total running time of the script:** ( 0 minutes 1.082 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
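This hunk only touches the script timing, but the surrounding tutorial is about wrapping NumPy code in ``torch.autograd.Function`` so that, as the diff context says, the backward pass can compute the gradient w.r.t. the input and the filter. A minimal sketch of that pattern; this toy scaling function is illustrative and is not the tutorial's FFT or convolution example:

.. code-block:: python

    import torch
    from torch.autograd import Function

    class NumpyScale(Function):
        """Compute y = 2*x in NumPy, outside autograd's view."""

        @staticmethod
        def forward(ctx, input):
            result = 2.0 * input.detach().numpy()  # leaves the autograd graph
            return torch.as_tensor(result, dtype=input.dtype)

        @staticmethod
        def backward(ctx, grad_output):
            # Because forward bypassed autograd, we supply d(2x)/dx = 2 by hand.
            grad = 2.0 * grad_output.numpy()
            return torch.as_tensor(grad, dtype=grad_output.dtype)

    x = torch.randn(4, requires_grad=True)
    NumpyScale.apply(x).sum().backward()
    print(x.grad)  # every entry is 2.0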

_sources/beginner/Intro_to_TorchScript_tutorial.rst.txt

Lines changed: 37 additions & 37 deletions
@@ -114,11 +114,11 @@ Let’s examine a small example:

 .. code-block:: none

-    (tensor([[0.8165, 0.8095, 0.8730, 0.5934],
-             [0.8643, 0.7042, 0.9089, 0.9182],
-             [0.4413, 0.9120, 0.3316, 0.9512]]), tensor([[0.8165, 0.8095, 0.8730, 0.5934],
-             [0.8643, 0.7042, 0.9089, 0.9182],
-             [0.4413, 0.9120, 0.3316, 0.9512]]))
+    (tensor([[0.7643, 0.8361, 0.9363, 0.8399],
+             [0.6665, 0.7837, 0.3216, 0.6149],
+             [0.4208, 0.9388, 0.6530, 0.5980]]), tensor([[0.7643, 0.8361, 0.9363, 0.8399],
+             [0.6665, 0.7837, 0.3216, 0.6149],
+             [0.4208, 0.9388, 0.6530, 0.5980]]))

@@ -173,11 +173,11 @@ Let’s do something a little more interesting:
 MyCell(
   (linear): Linear(in_features=4, out_features=4, bias=True)
 )
-(tensor([[ 0.4975, 0.4134, 0.2240, -0.4470],
-         [ 0.7361, 0.3454, 0.2623, 0.5605],
-         [ 0.5995, 0.8454, -0.4040, 0.4512]], grad_fn=<TanhBackward0>), tensor([[ 0.4975, 0.4134, 0.2240, -0.4470],
-         [ 0.7361, 0.3454, 0.2623, 0.5605],
-         [ 0.5995, 0.8454, -0.4040, 0.4512]], grad_fn=<TanhBackward0>))
+(tensor([[ 0.7505, 0.6227, 0.7077, 0.4958],
+         [ 0.7204, 0.1669, -0.0666, 0.1697],
+         [ 0.5193, 0.6944, 0.5020, 0.2051]], grad_fn=<TanhBackward0>), tensor([[ 0.7505, 0.6227, 0.7077, 0.4958],
+         [ 0.7204, 0.1669, -0.0666, 0.1697],
+         [ 0.5193, 0.6944, 0.5020, 0.2051]], grad_fn=<TanhBackward0>))

@@ -248,11 +248,11 @@ Now let’s examine said flexibility:
   (dg): MyDecisionGate()
   (linear): Linear(in_features=4, out_features=4, bias=True)
 )
-(tensor([[0.7118, 0.3225, 0.8107, 0.6595],
-         [0.7757, 0.2660, 0.7987, 0.9409],
-         [0.6260, 0.8401, 0.4156, 0.9170]], grad_fn=<TanhBackward0>), tensor([[0.7118, 0.3225, 0.8107, 0.6595],
-         [0.7757, 0.2660, 0.7987, 0.9409],
-         [0.6260, 0.8401, 0.4156, 0.9170]], grad_fn=<TanhBackward0>))
+(tensor([[ 0.7729, 0.6186, 0.7518, 0.6028],
+         [ 0.3304, -0.4012, 0.5315, 0.1133],
+         [-0.0313, 0.2601, 0.8278, 0.2082]], grad_fn=<TanhBackward0>), tensor([[ 0.7729, 0.6186, 0.7518, 0.6028],
+         [ 0.3304, -0.4012, 0.5315, 0.1133],
+         [-0.0313, 0.2601, 0.8278, 0.2082]], grad_fn=<TanhBackward0>))

@@ -325,11 +325,11 @@ Tracing ``Modules``
   (linear): Linear(original_name=Linear)
 )

-(tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-         [ 0.1571, 0.2347, -0.2803, 0.7971],
-         [ 0.6675, 0.7412, -0.3189, 0.7567]], grad_fn=<TanhBackward0>), tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-         [ 0.1571, 0.2347, -0.2803, 0.7971],
-         [ 0.6675, 0.7412, -0.3189, 0.7567]], grad_fn=<TanhBackward0>))
+(tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+         [0.5984, 0.7317, 0.8583, 0.8968],
+         [0.6940, 0.8424, 0.7893, 0.8505]], grad_fn=<TanhBackward0>), tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+         [0.5984, 0.7317, 0.8583, 0.8968],
+         [0.6940, 0.8424, 0.7893, 0.8505]], grad_fn=<TanhBackward0>))

@@ -453,17 +453,17 @@ the Python module:

 .. code-block:: none

-    (tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-             [ 0.1571, 0.2347, -0.2803, 0.7971],
-             [ 0.6675, 0.7412, -0.3189, 0.7567]], grad_fn=<TanhBackward0>), tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-             [ 0.1571, 0.2347, -0.2803, 0.7971],
-             [ 0.6675, 0.7412, -0.3189, 0.7567]], grad_fn=<TanhBackward0>))
-    (tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-             [ 0.1571, 0.2347, -0.2803, 0.7971],
-             [ 0.6675, 0.7412, -0.3189, 0.7567]],
-           grad_fn=<DifferentiableGraphBackward>), tensor([[ 0.3406, 0.1434, 0.4284, 0.7689],
-             [ 0.1571, 0.2347, -0.2803, 0.7971],
-             [ 0.6675, 0.7412, -0.3189, 0.7567]],
+    (tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+             [0.5984, 0.7317, 0.8583, 0.8968],
+             [0.6940, 0.8424, 0.7893, 0.8505]], grad_fn=<TanhBackward0>), tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+             [0.5984, 0.7317, 0.8583, 0.8968],
+             [0.6940, 0.8424, 0.7893, 0.8505]], grad_fn=<TanhBackward0>))
+    (tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+             [0.5984, 0.7317, 0.8583, 0.8968],
+             [0.6940, 0.8424, 0.7893, 0.8505]],
+           grad_fn=<DifferentiableGraphBackward>), tensor([[0.2065, 0.7601, 0.6283, 0.8378],
+             [0.5984, 0.7317, 0.8583, 0.8968],
+             [0.6940, 0.8424, 0.7893, 0.8505]],
            grad_fn=<DifferentiableGraphBackward>))

@@ -618,11 +618,11 @@ TorchScript. Let’s now try running the program:
 .. code-block:: none


-    (tensor([[ 0.5860, 0.5817, 0.3571, 0.7082],
-             [ 0.7657, 0.5250, 0.6627, 0.9018],
-             [ 0.8862, -0.2171, 0.3688, 0.6472]], grad_fn=<TanhBackward0>), tensor([[ 0.5860, 0.5817, 0.3571, 0.7082],
-             [ 0.7657, 0.5250, 0.6627, 0.9018],
-             [ 0.8862, -0.2171, 0.3688, 0.6472]], grad_fn=<TanhBackward0>))
+    (tensor([[0.2349, 0.7114, 0.4357, 0.7807],
+             [0.4888, 0.7646, 0.7748, 0.5496],
+             [0.6256, 0.7396, 0.5257, 0.2756]], grad_fn=<TanhBackward0>), tensor([[0.2349, 0.7114, 0.4357, 0.7807],
+             [0.4888, 0.7646, 0.7748, 0.5496],
+             [0.6256, 0.7396, 0.5257, 0.2756]], grad_fn=<TanhBackward0>))

@@ -805,7 +805,7 @@ https://colab.research.google.com/drive/1HiICg6jRkBnr5hvK2-VnMi88Vi9pUzEJ

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.722 seconds)
+**Total running time of the script:** ( 0 minutes 0.771 seconds)


 .. _sphx_glr_download_beginner_Intro_to_TorchScript_tutorial.py:
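Every ``(tensor(...), tensor(...))`` pair in the hunks above is the output of the tutorial's ``MyCell`` module, before and after tracing; the values change between runs because the inputs are random. A sketch of that pattern, reconstructed from the diff context (the module shape matches the ``Linear(in_features=4, out_features=4)`` printouts above, but treat the details as approximate):

.. code-block:: python

    import torch

    class MyCell(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)

        def forward(self, x, h):
            # New hidden state: tanh of a linear projection plus the old state,
            # which is where the grad_fn=<TanhBackward0> above comes from.
            new_h = torch.tanh(self.linear(x) + h)
            return new_h, new_h

    x, h = torch.rand(3, 4), torch.rand(3, 4)
    cell = MyCell()

    # trace() runs the module on example inputs and records the operations,
    # producing a ScriptModule like the Linear(original_name=Linear) above.
    traced_cell = torch.jit.trace(cell, (x, h))
    print(traced_cell)
    print(traced_cell(x, h))  # a (new_h, new_h) tuple, as in the diff output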

_sources/beginner/basics/autogradqs_tutorial.rst.txt

Lines changed: 9 additions & 9 deletions
@@ -113,8 +113,8 @@ documentation <https://pytorch.org/docs/stable/autograd.html#function>`__.

 .. code-block:: none

-    Gradient function for z = <AddBackward0 object at 0x7f39246d8940>
-    Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7f391e3f5330>
+    Gradient function for z = <AddBackward0 object at 0x7f35a2a1aaa0>
+    Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7f35a2a1b880>

@@ -151,12 +151,12 @@ namely, we need :math:`\frac{\partial loss}{\partial w}` and

 .. code-block:: none

-    tensor([[0.1300, 0.2960, 0.0747],
-            [0.1300, 0.2960, 0.0747],
-            [0.1300, 0.2960, 0.0747],
-            [0.1300, 0.2960, 0.0747],
-            [0.1300, 0.2960, 0.0747]])
-    tensor([0.1300, 0.2960, 0.0747])
+    tensor([[0.2580, 0.0904, 0.2344],
+            [0.2580, 0.0904, 0.2344],
+            [0.2580, 0.0904, 0.2344],
+            [0.2580, 0.0904, 0.2344],
+            [0.2580, 0.0904, 0.2344]])
+    tensor([0.2580, 0.0904, 0.2344])

@@ -395,7 +395,7 @@ Further Reading

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.026 seconds)
+**Total running time of the script:** ( 0 minutes 0.021 seconds)


 .. _sphx_glr_download_beginner_basics_autogradqs_tutorial.py:
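The gradient-function names and the repeated 5x3 gradient rows above come from the quickstart's toy one-layer network. A sketch reproducing that setup, inferred from the diff context (the ``0x7f...`` addresses vary from run to run):

.. code-block:: python

    import torch
    import torch.nn.functional as F

    x = torch.ones(5)   # fixed input
    y = torch.zeros(3)  # target
    w = torch.randn(5, 3, requires_grad=True)
    b = torch.randn(3, requires_grad=True)

    z = torch.matmul(x, w) + b
    loss = F.binary_cross_entropy_with_logits(z, y)

    # Produces the <AddBackward0> / <BinaryCrossEntropyWithLogitsBackward0>
    # lines seen in the hunk above.
    print(f"Gradient function for z = {z.grad_fn}")
    print(f"Gradient function for loss = {loss.grad_fn}")

    loss.backward()
    print(w.grad)  # 5x3; rows are identical because every entry of x is 1
    print(b.grad)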
