
Commit b37d6cb

Merge branch 'master' into autograd_blitz
2 parents 64bbe46 + b4134ac

File tree

.jenkins/build.sh
advanced_source/cpp_export.rst
advanced_source/cpp_frontend.rst
advanced_source/dynamic_quantization_tutorial.py
beginner_source/blitz/neural_networks_tutorial.py

5 files changed: +15 -11 lines changed

.jenkins/build.sh

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
   # Step 1: Remove runnable code from tutorials that are not supposed to be run
   python $DIR/remove_runnable_code.py beginner_source/aws_distributed_training_tutorial.py beginner_source/aws_distributed_training_tutorial.py || true
   # TODO: Fix bugs in these tutorials to make them runnable again
-  python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py || true
+  # python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py || true

   # Step 2: Keep certain tutorials based on file count, and remove runnable code in all other tutorials
   # IMPORTANT NOTE: We assume that each tutorial has a UNIQUE filename.
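For context, ``remove_runnable_code.py`` strips the executable code out of a tutorial so the doc build can render its text without running it, and the ``|| true`` suffix keeps a failure of that step from aborting the build if the CI script runs under ``set -e``. A toy Python sketch of the stripping idea, assuming a sphinx-gallery-style file where prose lives in ``#``-comment blocks (an illustration only, not the repo's actual implementation):

    import sys

    def keep_text_only(src_path, dst_path):
        """Copy a tutorial, keeping prose/comment lines, dropping code."""
        # Read everything first so src and dst may be the same file,
        # as in the build script's invocations above
        with open(src_path) as src:
            lines = src.readlines()
        with open(dst_path, 'w') as dst:
            for line in lines:
                # Keep blank lines and comment lines; drop executable statements
                if not line.strip() or line.lstrip().startswith('#'):
                    dst.write(line)

    if __name__ == '__main__':
        keep_text_only(sys.argv[1], sys.argv[2])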

advanced_source/cpp_export.rst

Lines changed: 3 additions & 1 deletion
@@ -228,6 +228,8 @@ structure:
 .. tip::
   On Windows, debug and release builds are not ABI-compatible. If you plan to
   build your project in debug mode, please try the debug version of LibTorch.
+  Also, make sure you specify the correct configuration in the ``cmake --build .``
+  line below.

 The last step is building the application. For this, assume our example
 directory is laid out like this:
@@ -246,7 +248,7 @@ We can now run the following commands to build the application from within the
   mkdir build
   cd build
   cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
-  make
+  cmake --build . --config Release

 where ``/path/to/libtorch`` should be the full path to the unzipped LibTorch
 distribution. If all goes well, it will look something like this:
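The switch from ``make`` to ``cmake --build . --config Release`` makes the step generator-agnostic: on multi-config generators such as Visual Studio, ``--config`` selects the configuration at build time, which is what the new tip about matching debug/release configurations refers to. For context, the application this tutorial builds loads a TorchScript module that is first exported from Python; a minimal sketch of that export step (the model choice and file name here are illustrative):

    import torch
    import torchvision

    # Trace a model with an example input and serialize it; the resulting
    # .pt file is what the C++ application loads via torch::jit::load
    model = torchvision.models.resnet18(pretrained=True).eval()
    example = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example)
    traced.save("traced_resnet_model.pt")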

advanced_source/cpp_frontend.rst

Lines changed: 6 additions & 4 deletions
@@ -105,6 +105,8 @@ environment, however you are free to follow along on MacOS or Windows too.
 .. tip::
   On Windows, debug and release builds are not ABI-compatible. If you plan to
   build your project in debug mode, please try the debug version of LibTorch.
+  Also, make sure you specify the correct configuration in the ``cmake --build .``
+  line below.

 The first step is to download the LibTorch distribution locally, via the link
 retrieved from the PyTorch website. For a vanilla Ubuntu Linux environment, this
@@ -201,17 +203,17 @@ corresponding absolute path. Now, we are ready to build our application:
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /home/build
-  root@fa350df05ecf:/home/build# make -j
+  root@fa350df05ecf:/home/build# cmake --build . --config Release
   Scanning dependencies of target dcgan
   [ 50%] Building CXX object CMakeFiles/dcgan.dir/dcgan.cpp.o
   [100%] Linking CXX executable dcgan
   [100%] Built target dcgan

 Above, we first created a ``build`` folder inside of our ``dcgan`` directory,
 entered this folder, ran the ``cmake`` command to generate the necessary build
-(Make) files and finally compiled the project successfully by running ``make
--j``. We are now all set to execute our minimal binary and complete this section
-on basic project configuration:
+(Make) files and finally compiled the project successfully by running ``cmake
+--build . --config Release``. We are now all set to execute our minimal binary
+and complete this section on basic project configuration:

 .. code-block:: shell
advanced_source/dynamic_quantization_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -178,7 +178,7 @@ def tokenize(self, path):

         word = corpus.dictionary.idx2word[word_idx]

-        outf.write(str(word) + ('\n' if i % 20 == 19 else ' '))
+        outf.write(str(word.encode('utf-8')) + ('\n' if i % 20 == 19 else ' '))

         if i % 100 == 0:
             print('| Generated {}/{} words'.format(i, 1000))
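The fix encodes the word to UTF-8 before writing, presumably so the write cannot fail with a ``UnicodeEncodeError`` on a non-ASCII token when the output stream's encoding is restrictive. One side effect worth knowing, shown in this small sketch (illustrative values only):

    word = 'café'

    # Encoding to UTF-8 yields bytes that are safe to write regardless of
    # the destination's character encoding ...
    encoded = word.encode('utf-8')

    # ... but in Python 3, str() of a bytes object includes the b'...' repr:
    print(str(word))     # café
    print(str(encoded))  # b'caf\xc3\xa9'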

beginner_source/blitz/neural_networks_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -90,9 +90,9 @@ def num_flat_features(self, x):
 print(params[0].size())  # conv1's .weight

 ########################################################################
-# Let try a random 32x32 input.
+# Let's try a random 32x32 input.
 # Note: expected input size of this net (LeNet) is 32x32. To use this net on
-# MNIST dataset, please resize the images from the dataset to 32x32.
+# the MNIST dataset, please resize the images from the dataset to 32x32.

 input = torch.randn(1, 1, 32, 32)
 out = net(input)
@@ -227,7 +227,7 @@ def num_flat_features(self, x):
 #
 # ``weight = weight - learning_rate * gradient``
 #
-# We can implement this using simple python code:
+# We can implement this using simple Python code:
 #
 # .. code:: python
 #
@@ -258,4 +258,4 @@ def num_flat_features(self, x):
 #
 # Observe how gradient buffers had to be manually set to zero using
 # ``optimizer.zero_grad()``. This is because gradients are accumulated
-# as explained in `Backprop`_ section.
+# as explained in the `Backprop`_ section.
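The ``weight = weight - learning_rate * gradient`` rule and the ``optimizer.zero_grad()`` note touched by these hunks come together in the tutorial roughly as follows (a sketch; ``net``, ``input``, ``target``, and ``criterion`` are the tutorial's objects, assumed to be in scope):

    import torch.optim as optim

    # Manual SGD: weight = weight - learning_rate * gradient
    learning_rate = 0.01
    for f in net.parameters():
        f.data.sub_(f.grad.data * learning_rate)

    # The same update via torch.optim. zero_grad() is required because
    # backward() accumulates into .grad instead of overwriting it.
    optimizer = optim.SGD(net.parameters(), lr=0.01)
    optimizer.zero_grad()
    output = net(input)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()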
