
Commit 6123cf7 (parent 29d9fb8)

Build 905eaf0

File tree: 86 files changed, +2082 −2052 lines


_downloads/autograd_tutorial.ipynb

Lines changed: 72 additions & 72 deletions
@@ -1,205 +1,205 @@
 {
-  "metadata": {
-    "language_info": {
-      "nbconvert_exporter": "python",
-      "mimetype": "text/x-python",
-      "version": "3.5.2",
-      "codemirror_mode": {
-        "version": 3,
-        "name": "ipython"
-      },
-      "name": "python",
-      "pygments_lexer": "ipython3",
-      "file_extension": ".py"
-    },
-    "kernelspec": {
-      "language": "python",
-      "display_name": "Python 3",
-      "name": "python3"
-    }
-  },
-  "nbformat_minor": 0,
-  "nbformat": 4,
   "cells": [
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "%matplotlib inline"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "\nAutograd: automatic differentiation\n===================================\n\nCentral to all neural networks in PyTorch is the ``autograd`` package.\nLet\u2019s first briefly visit this, and we will then go to training our\nfirst neural network.\n\n\nThe ``autograd`` package provides automatic differentiation for all operations\non Tensors. It is a define-by-run framework, which means that your backprop is\ndefined by how your code is run, and that every single iteration can be\ndifferent.\n\nLet us see this in more simple terms with some examples.\n\nVariable\n--------\n\n``autograd.Variable`` is the central class of the package. It wraps a\nTensor, and supports nearly all of operations defined on it. Once you\nfinish your computation you can call ``.backward()`` and have all the\ngradients computed automatically.\n\nYou can access the raw tensor through the ``.data`` attribute, while the\ngradient w.r.t. this variable is accumulated into ``.grad``.\n\n.. figure:: /_static/img/Variable.png\n :alt: Variable\n\n Variable\n\nThere\u2019s one more class which is very important for autograd\nimplementation - a ``Function``.\n\n``Variable`` and ``Function`` are interconnected and build up an acyclic\ngraph, that encodes a complete history of computation. Each variable has\na ``.creator`` attribute that references a ``Function`` that has created\nthe ``Variable`` (except for Variables created by the user - their\n``creator is None``).\n\nIf you want to compute the derivatives, you can call ``.backward()`` on\na ``Variable``. If ``Variable`` is a scalar (i.e. it holds a one element\ndata), you don\u2019t need to specify any arguments to ``backward()``,\nhowever if it has more elements, you need to specify a ``grad_output``\nargument that is a tensor of matching shape.\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "import torch\nfrom torch.autograd import Variable"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "Create a variable:\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "x = Variable(torch.ones(2, 2), requires_grad=True)\nprint(x)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "Do an operation of variable:\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "y = x + 2\nprint(y)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "``y`` was created as a result of an operation, so it has a creator.\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "print(y.creator)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "Do more operations on y\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "z = y * y * 3\nout = z.mean()\n\nprint(z, out)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "Gradients\n---------\nlet's backprop now\n``out.backward()`` is equivalent to doing ``out.backward(torch.Tensor([1.0]))``\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "out.backward()"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "print gradients d(out)/dx\n\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "print(x.grad)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "You should have got a matrix of ``4.5``. Let\u2019s call the ``out``\n*Variable* \u201c$o$\u201d.\nWe have that $o = \\frac{1}{4}\\sum_i z_i$,\n $z_i = 3(x_i+2)^2$ and $z_i\\bigr\\rvert_{x_i=1} = 27$.\n Therefore,\n $\\frac{\\partial o}{\\partial x_i} = \\frac{3}{2}(x_i+2)$, hence\n $\\frac{\\partial o}{\\partial x_i}\\bigr\\rvert_{x_i=1} = \\frac{9}{2} = 4.5$.\n\n"
       ]
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "You can do many crazy things with autograd!\n\n"
       ]
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "x = torch.randn(3)\nx = Variable(x, requires_grad=True)\n\ny = x * 2\nwhile y.data.norm() < 1000:\n y = y * 2\n\nprint(y)"
-      ]
+      ],
+      "execution_count": null
     },
     {
+      "cell_type": "code",
+      "outputs": [],
       "metadata": {
         "collapsed": false
       },
-      "outputs": [],
-      "cell_type": "code",
-      "execution_count": null,
       "source": [
         "gradients = torch.FloatTensor([0.1, 1.0, 0.0001])\ny.backward(gradients)\n\nprint(x.grad)"
-      ]
+      ],
+      "execution_count": null
     },
     {
-      "metadata": {},
       "cell_type": "markdown",
+      "metadata": {},
       "source": [
         "**Read Later:**\n\n Documentation of ``Variable`` and ``Function`` is at http://pytorch.org/docs/autograd\n\n"
       ]
     }
-  ]
+  ],
+  "metadata": {
+    "language_info": {
+      "pygments_lexer": "ipython3",
+      "nbconvert_exporter": "python",
+      "name": "python",
+      "mimetype": "text/x-python",
+      "file_extension": ".py",
+      "version": "3.5.2",
+      "codemirror_mode": {
+        "version": 3,
+        "name": "ipython"
+      }
+    },
+    "kernelspec": {
+      "language": "python",
+      "display_name": "Python 3",
+      "name": "python3"
+    }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 0
 }
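
Every one of the 72 paired additions and deletions above is a key reordering: the top-level "metadata", "nbformat", and "nbformat_minor" entries move below "cells", and each code cell's "cell_type", "outputs", and "execution_count" keys move around its unchanged "source". Since JSON objects are unordered, the two revisions should parse to equal structures. A minimal sketch for verifying that, assuming the two revisions of the notebook have been saved to the hypothetical paths below:

import json

# Hypothetical paths standing in for the parent (29d9fb8) and this
# commit (6123cf7), e.g. saved via `git show <rev>:<path> > <file>`.
with open("autograd_tutorial_old.ipynb") as f:
    old_nb = json.load(f)
with open("autograd_tutorial_new.ipynb") as f:
    new_nb = json.load(f)

# Python dicts compare by contents, recursively and regardless of key
# order, so this prints True when the diff is pure key reordering.
print(old_nb == new_nb)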

_downloads/autograd_tutorial.py

Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 """
 Autograd: automatic differentiation
 ===================================
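
The single added line is a PEP 263 source-encoding declaration. It is presumably needed because the tutorial text contains non-ASCII characters, such as the curly apostrophe in "Let’s" (U+2019), which Python 2 refuses to compile in a file with no declared encoding. A minimal sketch reproducing the failure mode under that assumption:

# -*- coding: utf-8 -*-
# Removing the declaration above and running this file under Python 2
# fails at compile time with something like:
#   SyntaxError: Non-ASCII character '\xe2' in file ...
# (0xe2 is the first UTF-8 byte of U+2019). Python 3 assumes UTF-8
# source by default, so it runs either way.
"""Let’s first briefly visit autograd."""
print(__doc__)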
