
Commit b585ecc

Automated tutorials push
1 parent 3eb51d8 commit b585ecc

154 files changed: +9322 additions, -9235 deletions


_images/sphx_glr_coding_ddpg_001.png (and other regenerated tutorial images)

Binary image changes: -2.2 KB, -105 Bytes, 1.47 KB, -2.97 KB

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1649,26 +1649,26 @@ modules we need.


   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:08, 1095.64it/s]
- 16%|#6 | 1600/10000 [00:05<00:33, 253.72it/s]
- 24%|##4 | 2400/10000 [00:06<00:19, 380.48it/s]
- 32%|###2 | 3200/10000 [00:07<00:13, 496.91it/s]
- 40%|#### | 4000/10000 [00:08<00:10, 597.54it/s]
- 48%|####8 | 4800/10000 [00:09<00:07, 681.11it/s]
- 56%|#####6 | 5600/10000 [00:09<00:05, 748.83it/s]
- reward: -2.48 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-1.56/6.87, grad norm= 191.04, loss_value= 506.83, loss_actor= 11.94, target value: -8.02: 56%|#####6 | 5600/10000 [00:11<00:05, 748.83it/s]
- reward: -2.48 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-1.56/6.87, grad norm= 191.04, loss_value= 506.83, loss_actor= 11.94, target value: -8.02: 64%|######4 | 6400/10000 [00:12<00:07, 464.23it/s]
- reward: -0.12 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-1.67/5.78, grad norm= 57.83, loss_value= 305.88, loss_actor= 11.90, target value: -11.10: 64%|######4 | 6400/10000 [00:14<00:07, 464.23it/s]
- reward: -0.12 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-1.67/5.78, grad norm= 57.83, loss_value= 305.88, loss_actor= 11.90, target value: -11.10: 72%|#######2 | 7200/10000 [00:16<00:08, 343.38it/s]
- reward: -3.24 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-2.27/5.04, grad norm= 112.57, loss_value= 189.73, loss_actor= 13.83, target value: -14.60: 72%|#######2 | 7200/10000 [00:18<00:08, 343.38it/s]
- reward: -3.24 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-2.27/5.04, grad norm= 112.57, loss_value= 189.73, loss_actor= 13.83, target value: -14.60: 80%|######## | 8000/10000 [00:20<00:06, 291.23it/s]
- reward: -4.77 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-2.34/4.67, grad norm= 146.11, loss_value= 170.75, loss_actor= 15.86, target value: -15.01: 80%|######## | 8000/10000 [00:22<00:06, 291.23it/s]
- reward: -4.77 (r0 = -0.90), reward eval: reward: -0.00, reward normalized=-2.34/4.67, grad norm= 146.11, loss_value= 170.75, loss_actor= 15.86, target value: -15.01: 88%|########8 | 8800/10000 [00:23<00:04, 264.40it/s]
- reward: -4.44 (r0 = -0.90), reward eval: reward: -20.30, reward normalized=-2.15/5.06, grad norm= 50.42, loss_value= 210.07, loss_actor= 12.82, target value: -15.89: 88%|########8 | 8800/10000 [00:29<00:04, 264.40it/s]
- reward: -4.44 (r0 = -0.90), reward eval: reward: -20.30, reward normalized=-2.15/5.06, grad norm= 50.42, loss_value= 210.07, loss_actor= 12.82, target value: -15.89: 96%|#########6| 9600/10000 [00:31<00:02, 180.97it/s]
- reward: -4.88 (r0 = -0.90), reward eval: reward: -20.30, reward normalized=-3.54/4.43, grad norm= 132.01, loss_value= 193.44, loss_actor= 18.56, target value: -24.43: 96%|#########6| 9600/10000 [00:33<00:02, 180.97it/s]
- reward: -4.88 (r0 = -0.90), reward eval: reward: -20.30, reward normalized=-3.54/4.43, grad norm= 132.01, loss_value= 193.44, loss_actor= 18.56, target value: -24.43: : 10400it [00:35, 190.39it/s]
- reward: -16.37 (r0 = -0.90), reward eval: reward: -20.30, reward normalized=-3.37/5.43, grad norm= 111.49, loss_value= 240.03, loss_actor= 22.51, target value: -23.29: : 10400it [00:37, 190.39it/s]
+  8%|8 | 800/10000 [00:00<00:08, 1068.13it/s]
+ 16%|#6 | 1600/10000 [00:05<00:33, 251.38it/s]
+ 24%|##4 | 2400/10000 [00:06<00:20, 376.45it/s]
+ 32%|###2 | 3200/10000 [00:07<00:13, 491.36it/s]
+ 40%|#### | 4000/10000 [00:08<00:10, 590.41it/s]
+ 48%|####8 | 4800/10000 [00:09<00:07, 672.84it/s]
+ 56%|#####6 | 5600/10000 [00:09<00:05, 738.13it/s]
+ reward: -2.61 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.47/6.09, grad norm= 110.28, loss_value= 330.30, loss_actor= 13.74, target value: -14.68: 56%|#####6 | 5600/10000 [00:12<00:05, 738.13it/s]
+ reward: -2.61 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.47/6.09, grad norm= 110.28, loss_value= 330.30, loss_actor= 13.74, target value: -14.68: 64%|######4 | 6400/10000 [00:13<00:07, 456.92it/s]
+ reward: -0.12 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.22/5.65, grad norm= 57.01, loss_value= 298.59, loss_actor= 14.41, target value: -14.19: 64%|######4 | 6400/10000 [00:15<00:07, 456.92it/s]
+ reward: -0.12 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.22/5.65, grad norm= 57.01, loss_value= 298.59, loss_actor= 14.41, target value: -14.19: 72%|#######2 | 7200/10000 [00:16<00:07, 364.37it/s]
+ reward: -1.49 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.60/5.18, grad norm= 167.05, loss_value= 258.41, loss_actor= 13.21, target value: -16.61: 72%|#######2 | 7200/10000 [00:18<00:07, 364.37it/s]
+ reward: -1.49 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.60/5.18, grad norm= 167.05, loss_value= 258.41, loss_actor= 13.21, target value: -16.61: 80%|######## | 8000/10000 [00:20<00:06, 300.22it/s]
+ reward: -4.79 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.38/4.85, grad norm= 79.00, loss_value= 206.23, loss_actor= 19.63, target value: -14.99: 80%|######## | 8000/10000 [00:22<00:06, 300.22it/s]
+ reward: -4.79 (r0 = -1.78), reward eval: reward: -0.00, reward normalized=-2.38/4.85, grad norm= 79.00, loss_value= 206.23, loss_actor= 19.63, target value: -14.99: 88%|########8 | 8800/10000 [00:23<00:04, 268.54it/s]
+ reward: -5.15 (r0 = -1.78), reward eval: reward: -1.97, reward normalized=-2.48/5.30, grad norm= 105.76, loss_value= 197.28, loss_actor= 12.32, target value: -17.70: 88%|########8 | 8800/10000 [00:29<00:04, 268.54it/s]
+ reward: -5.15 (r0 = -1.78), reward eval: reward: -1.97, reward normalized=-2.48/5.30, grad norm= 105.76, loss_value= 197.28, loss_actor= 12.32, target value: -17.70: 96%|#########6| 9600/10000 [00:31<00:02, 177.73it/s]
+ reward: -4.73 (r0 = -1.78), reward eval: reward: -1.97, reward normalized=-2.81/4.37, grad norm= 67.84, loss_value= 147.29, loss_actor= 10.93, target value: -19.58: 96%|#########6| 9600/10000 [00:33<00:02, 177.73it/s]
+ reward: -4.73 (r0 = -1.78), reward eval: reward: -1.97, reward normalized=-2.81/4.37, grad norm= 67.84, loss_value= 147.29, loss_actor= 10.93, target value: -19.58: : 10400it [00:35, 186.18it/s]
+ reward: -1.02 (r0 = -1.78), reward eval: reward: -1.97, reward normalized=-2.69/4.96, grad norm= 80.73, loss_value= 193.05, loss_actor= 12.62, target value: -19.78: : 10400it [00:37, 186.18it/s]


@@ -1738,7 +1738,7 @@ To iterate further on this loss module we might consider:

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 48.741 seconds)
+   **Total running time of the script:** ( 0 minutes 49.185 seconds)


 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

     loss: 5.167
-    elapsed time (seconds): 197.7
+    elapsed time (seconds): 208.1
     loss: 5.168
-    elapsed time (seconds): 111.3
+    elapsed time (seconds): 110.6


@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 5 minutes 17.583 seconds)
+   **Total running time of the script:** ( 5 minutes 27.645 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 34 additions & 32 deletions
@@ -418,31 +418,33 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

   0%| | 0.00/548M [00:00<?, ?B/s]
-  4%|3 | 21.4M/548M [00:00<00:02, 225MB/s]
-  8%|8 | 43.9M/548M [00:00<00:02, 231MB/s]
- 12%|#2 | 66.3M/548M [00:00<00:02, 233MB/s]
- 16%|#6 | 88.7M/548M [00:00<00:02, 234MB/s]
- 20%|## | 111M/548M [00:00<00:01, 233MB/s]
- 24%|##4 | 133M/548M [00:00<00:01, 233MB/s]
- 28%|##8 | 156M/548M [00:00<00:01, 233MB/s]
- 32%|###2 | 178M/548M [00:00<00:01, 233MB/s]
- 37%|###6 | 200M/548M [00:00<00:01, 233MB/s]
- 41%|#### | 222M/548M [00:01<00:01, 233MB/s]
- 45%|####4 | 245M/548M [00:01<00:01, 234MB/s]
- 49%|####8 | 267M/548M [00:01<00:01, 234MB/s]
- 53%|#####2 | 290M/548M [00:01<00:01, 234MB/s]
- 57%|#####6 | 312M/548M [00:01<00:01, 232MB/s]
- 61%|###### | 334M/548M [00:01<00:00, 233MB/s]
- 65%|######5 | 357M/548M [00:01<00:00, 233MB/s]
- 69%|######9 | 379M/548M [00:01<00:00, 233MB/s]
- 73%|#######3 | 402M/548M [00:01<00:00, 234MB/s]
- 77%|#######7 | 424M/548M [00:01<00:00, 235MB/s]
- 81%|########1 | 446M/548M [00:02<00:00, 234MB/s]
- 86%|########5 | 469M/548M [00:02<00:00, 234MB/s]
- 90%|########9 | 491M/548M [00:02<00:00, 235MB/s]
- 94%|#########3| 514M/548M [00:02<00:00, 233MB/s]
- 98%|#########7| 536M/548M [00:02<00:00, 231MB/s]
- 100%|##########| 548M/548M [00:02<00:00, 233MB/s]
+  3%|3 | 18.7M/548M [00:00<00:02, 196MB/s]
+  7%|7 | 38.5M/548M [00:00<00:02, 203MB/s]
+ 11%|# | 58.3M/548M [00:00<00:02, 205MB/s]
+ 14%|#4 | 78.6M/548M [00:00<00:02, 208MB/s]
+ 18%|#8 | 98.9M/548M [00:00<00:02, 210MB/s]
+ 22%|##1 | 119M/548M [00:00<00:02, 211MB/s]
+ 25%|##5 | 140M/548M [00:00<00:02, 212MB/s]
+ 29%|##9 | 160M/548M [00:00<00:01, 213MB/s]
+ 33%|###2 | 180M/548M [00:00<00:01, 212MB/s]
+ 37%|###6 | 201M/548M [00:01<00:01, 212MB/s]
+ 40%|#### | 221M/548M [00:01<00:01, 211MB/s]
+ 44%|####3 | 241M/548M [00:01<00:01, 211MB/s]
+ 48%|####7 | 261M/548M [00:01<00:01, 212MB/s]
+ 51%|#####1 | 282M/548M [00:01<00:01, 213MB/s]
+ 55%|#####5 | 302M/548M [00:01<00:01, 213MB/s]
+ 59%|#####8 | 323M/548M [00:01<00:01, 212MB/s]
+ 63%|######2 | 343M/548M [00:01<00:01, 210MB/s]
+ 66%|######6 | 363M/548M [00:01<00:00, 211MB/s]
+ 70%|######9 | 383M/548M [00:01<00:00, 211MB/s]
+ 74%|#######3 | 404M/548M [00:02<00:00, 212MB/s]
+ 77%|#######7 | 424M/548M [00:02<00:00, 212MB/s]
+ 81%|########1 | 444M/548M [00:02<00:00, 212MB/s]
+ 85%|########4 | 465M/548M [00:02<00:00, 213MB/s]
+ 89%|########8 | 486M/548M [00:02<00:00, 215MB/s]
+ 92%|#########2| 507M/548M [00:02<00:00, 217MB/s]
+ 96%|#########6| 528M/548M [00:02<00:00, 218MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 212MB/s]


@@ -763,22 +765,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 3.963342 Content Loss: 4.128072
+Style Loss : 3.925065 Content Loss: 4.095909

 run [100]:
-Style Loss : 1.114517 Content Loss: 3.005954
+Style Loss : 1.121871 Content Loss: 3.025715

 run [150]:
-Style Loss : 0.695908 Content Loss: 2.643757
+Style Loss : 0.711819 Content Loss: 2.646153

 run [200]:
-Style Loss : 0.471252 Content Loss: 2.484256
+Style Loss : 0.476352 Content Loss: 2.487876

 run [250]:
-Style Loss : 0.342237 Content Loss: 2.396420
+Style Loss : 0.346117 Content Loss: 2.399930

 run [300]:
-Style Loss : 0.263912 Content Loss: 2.348776
+Style Loss : 0.263507 Content Loss: 2.347996


@@ -787,7 +789,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 36.313 seconds)
+   **Total running time of the script:** ( 0 minutes 37.084 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 0.699 seconds)
+   **Total running time of the script:** ( 0 minutes 0.684 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:

_sources/beginner/Intro_to_TorchScript_tutorial.rst.txt

Lines changed: 2 additions & 2 deletions
@@ -68,7 +68,7 @@ model from C++.

 2.0.1+cu117

-<torch._C.Generator object at 0x7ff3bc79edb0>
+<torch._C.Generator object at 0x7f716a236db0>


@@ -808,7 +808,7 @@ https://colab.research.google.com/drive/1HiICg6jRkBnr5hvK2-VnMi88Vi9pUzEJ

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 2.209 seconds)
+   **Total running time of the script:** ( 0 minutes 2.310 seconds)


 .. _sphx_glr_download_beginner_Intro_to_TorchScript_tutorial.py:

_sources/beginner/basics/autogradqs_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -113,8 +113,8 @@ documentation <https://pytorch.org/docs/stable/autograd.html#function>`__.

 .. code-block:: none

-    Gradient function for z = <AddBackward0 object at 0x7fde60a3f280>
-    Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7fde60a3eb60>
+    Gradient function for z = <AddBackward0 object at 0x7f6175b58d60>
+    Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7f6175b58a30>


@@ -395,7 +395,7 @@ Further Reading

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 0.012 seconds)
+   **Total running time of the script:** ( 0 minutes 0.011 seconds)


 .. _sphx_glr_download_beginner_basics_autogradqs_tutorial.py:

_sources/beginner/basics/buildmodel_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -482,7 +482,7 @@ Further Reading

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 0.031 seconds)
+   **Total running time of the script:** ( 0 minutes 0.032 seconds)


 .. _sphx_glr_download_beginner_basics_buildmodel_tutorial.py:

_sources/beginner/basics/data_tutorial.rst.txt

Lines changed: 23 additions & 26 deletions
@@ -103,49 +103,46 @@ We load the `FashionMNIST Dataset <https://pytorch.org/vision/stable/datasets.ht
 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz

   0%| | 0/26421880 [00:00<?, ?it/s]
-  0%| | 65536/26421880 [00:00<01:12, 363151.26it/s]
-  1%| | 229376/26421880 [00:00<00:38, 688153.45it/s]
-  3%|3 | 884736/26421880 [00:00<00:10, 2527286.83it/s]
-  7%|7 | 1900544/26421880 [00:00<00:05, 4407321.94it/s]
- 13%|#3 | 3506176/26421880 [00:00<00:03, 7317755.07it/s]
- 23%|##2 | 6029312/26421880 [00:00<00:01, 12172065.50it/s]
- 31%|### | 8126464/26421880 [00:00<00:01, 14115022.69it/s]
- 37%|###6 | 9732096/26421880 [00:01<00:01, 14588580.28it/s]
- 43%|####2 | 11337728/26421880 [00:01<00:01, 14744996.88it/s]
- 49%|####8 | 12943360/26421880 [00:01<00:00, 14571947.61it/s]
- 58%|#####7 | 15302656/26421880 [00:01<00:00, 16609342.94it/s]
- 66%|######5 | 17432576/26421880 [00:01<00:00, 17465490.83it/s]
- 73%|#######2 | 19202048/26421880 [00:01<00:00, 16505549.41it/s]
- 79%|#######8 | 20873216/26421880 [00:01<00:00, 16029189.52it/s]
- 87%|########7 | 23035904/26421880 [00:01<00:00, 17221769.98it/s]
- 95%|#########5| 25165824/26421880 [00:01<00:00, 17892501.41it/s]
- 100%|##########| 26421880/26421880 [00:01<00:00, 13213920.32it/s]
+  0%| | 65536/26421880 [00:00<01:12, 364341.15it/s]
+  1%| | 229376/26421880 [00:00<00:38, 683861.80it/s]
+  3%|3 | 884736/26421880 [00:00<00:10, 2496403.89it/s]
+  7%|7 | 1933312/26421880 [00:00<00:05, 4126620.46it/s]
+ 19%|#8 | 4915200/26421880 [00:00<00:01, 10825210.05it/s]
+ 25%|##5 | 6717440/26421880 [00:00<00:01, 10831870.33it/s]
+ 36%|###5 | 9469952/26421880 [00:01<00:01, 14769086.66it/s]
+ 44%|####3 | 11501568/26421880 [00:01<00:01, 13840499.35it/s]
+ 54%|#####3 | 14254080/26421880 [00:01<00:00, 16895948.12it/s]
+ 62%|######1 | 16351232/26421880 [00:01<00:00, 15326785.49it/s]
+ 72%|#######2 | 19070976/26421880 [00:01<00:00, 17925009.64it/s]
+ 80%|######## | 21200896/26421880 [00:01<00:00, 16053925.20it/s]
+ 91%|######### | 23920640/26421880 [00:01<00:00, 18478968.30it/s]
+ 99%|#########8| 26050560/26421880 [00:01<00:00, 16421572.59it/s]
+ 100%|##########| 26421880/26421880 [00:01<00:00, 13239813.58it/s]
 Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz

   0%| | 0/29515 [00:00<?, ?it/s]
- 100%|##########| 29515/29515 [00:00<00:00, 338855.84it/s]
+ 100%|##########| 29515/29515 [00:00<00:00, 328875.22it/s]
 Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz

   0%| | 0/4422102 [00:00<?, ?it/s]
-  1%|1 | 65536/4422102 [00:00<00:11, 379957.93it/s]
-  5%|5 | 229376/4422102 [00:00<00:05, 714100.11it/s]
- 17%|#7 | 753664/4422102 [00:00<00:01, 2193576.72it/s]
- 44%|####3 | 1933312/4422102 [00:00<00:00, 4422108.86it/s]
- 97%|#########7| 4292608/4422102 [00:00<00:00, 9722551.07it/s]
- 100%|##########| 4422102/4422102 [00:00<00:00, 6267712.38it/s]
+  1%|1 | 65536/4422102 [00:00<00:11, 379470.12it/s]
+  5%|5 | 229376/4422102 [00:00<00:05, 715735.79it/s]
+ 21%|##1 | 950272/4422102 [00:00<00:01, 2298564.76it/s]
+ 69%|######8 | 3047424/4422102 [00:00<00:00, 7423657.83it/s]
+ 100%|##########| 4422102/4422102 [00:00<00:00, 6287836.52it/s]
 Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
 Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz

   0%| | 0/5148 [00:00<?, ?it/s]
- 100%|##########| 5148/5148 [00:00<00:00, 40059883.10it/s]
+ 100%|##########| 5148/5148 [00:00<00:00, 46137344.00it/s]
 Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw

@@ -446,7 +443,7 @@ Further Reading

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 5.463 seconds)
+   **Total running time of the script:** ( 0 minutes 7.451 seconds)


 .. _sphx_glr_download_beginner_basics_data_tutorial.py:
