
Commit a3076e6

Merge branch 'master' into master
2 parents 489982e + 8244bff commit a3076e6

File tree

9 files changed: +1185, -174 lines


.circleci/config.yml

Lines changed: 299 additions & 45 deletions
Large diffs are not rendered by default.

advanced_source/cpp_export.rst

Lines changed: 1 addition & 3 deletions
@@ -1,8 +1,6 @@
 Loading a TorchScript Model in C++
 =====================================
 
-**This tutorial was updated to work with PyTorch 1.2**
-
 As its name suggests, the primary interface to PyTorch is the Python
 programming language. While Python is a suitable and preferred language for
 many scenarios requiring dynamism and ease of iteration, there are equally many
@@ -205,7 +203,7 @@ minimal ``CMakeLists.txt`` to build it could look as simple as:
 
   add_executable(example-app example-app.cpp)
   target_link_libraries(example-app "${TORCH_LIBRARIES}")
-  set_property(TARGET example-app PROPERTY CXX_STANDARD 11)
+  set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
 
 The last thing we need to build the example application is the LibTorch
 distribution. You can always grab the latest stable release from the `download
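
For context, a minimal sketch of the ``example-app.cpp`` that this ``CMakeLists.txt`` builds, assuming the TorchScript C++ API as of PyTorch 1.2 (the argument handling and the ``{1, 3, 224, 224}`` input shape are illustrative assumptions, not part of this commit):

.. code-block:: cpp

  // example-app.cpp -- illustrative sketch, not part of this diff.
  #include <torch/script.h> // One-stop header for the TorchScript C++ API.

  #include <iostream>
  #include <vector>

  int main(int argc, const char* argv[]) {
    if (argc != 2) {
      std::cerr << "usage: example-app <path-to-exported-script-module>\n";
      return -1;
    }

    // Since PyTorch 1.2, torch::jit::load returns a torch::jit::script::Module
    // by value rather than a std::shared_ptr.
    torch::jit::script::Module module = torch::jit::load(argv[1]);

    // Build a dummy input; the {1, 3, 224, 224} shape is an assumption.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));

    // Run the scripted model and print the first few output values.
    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
    return 0;
  }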

advanced_source/cpp_frontend.rst

Lines changed: 56 additions & 111 deletions
@@ -144,7 +144,7 @@ on we'll use this ``CMakeLists.txt`` file:
 
   add_executable(dcgan dcgan.cpp)
   target_link_libraries(dcgan "${TORCH_LIBRARIES}")
-  set_property(TARGET dcgan PROPERTY CXX_STANDARD 11)
+  set_property(TARGET dcgan PROPERTY CXX_STANDARD 14)
 
 .. note::
 

@@ -698,110 +698,40 @@ The Generator Module
 ********************
 
 We begin by defining the generator module, which consists of a series of
-transposed 2D convolutions, batch normalizations and ReLU activation units. Like
-in Python, PyTorch here provides two APIs for model definition: a functional one
-where inputs are passed through successive functions, and a more object-oriented
-one where we build a ``Sequential`` module containing the entire model as
-submodules. Let's see how our generator looks with either API, and you can
-decide for yourself which one you prefer. First, using ``Sequential``:
+transposed 2D convolutions, batch normalizations and ReLU activation units.
+We explicitly pass inputs (in a functional way) between modules in the
+``forward()`` method of a module we define ourselves:
 
 .. code-block:: cpp
 
-  using namespace torch;
-
-  nn::Sequential generator(
-      // Layer 1
-      nn::Conv2d(nn::Conv2dOptions(kNoiseSize, 256, 4)
-                     .with_bias(false)
-                     .transposed(true)),
-      nn::BatchNorm(256),
-      nn::Functional(torch::relu),
-      // Layer 2
-      nn::Conv2d(nn::Conv2dOptions(256, 128, 3)
-                     .stride(2)
-                     .padding(1)
-                     .with_bias(false)
-                     .transposed(true)),
-      nn::BatchNorm(128),
-      nn::Functional(torch::relu),
-      // Layer 3
-      nn::Conv2d(nn::Conv2dOptions(128, 64, 4)
-                     .stride(2)
-                     .padding(1)
-                     .with_bias(false)
-                     .transposed(true)),
-      nn::BatchNorm(64),
-      nn::Functional(torch::relu),
-      // Layer 4
-      nn::Conv2d(nn::Conv2dOptions(64, 1, 4)
-                     .stride(2)
-                     .padding(1)
-                     .with_bias(false)
-                     .transposed(true)),
-      nn::Functional(torch::tanh));
-
-.. tip::
-
-  A ``Sequential`` module simply performs function composition. The output of
-  the first submodule becomes the input of the second, the output of the third
-  becomes the input of the fourth and so on.
-
-The particular modules chosen, like ``nn::Conv2d`` and ``nn::BatchNorm``,
-follows the structure outlined earlier. The ``kNoiseSize`` constant determines
-the size of the input noise vector and is set to ``100``. Notice also that we
-use the ``torch::nn::Functional`` module for our activation functions, passing
-it ``torch::relu`` for inner layers and ``torch::tanh`` as the final activation.
-Hyperparameters were, of course, found via grad student descent.
-
-.. note::
-
-  The Python frontend has one module for each activation function, like
-  ``torch.nn.ReLU`` or ``torch.nn.Tanh``. In C++, we instead only provide the
-  ``Functional`` module, to which you can pass any C++ function that will be
-  called inside the ``Functional``'s ``forward()`` method.
-
-.. attention::
-
-  No grad students were harmed in the discovery of hyperparameters. They were
-  fed Soylent regularly.
-
-For the second approach, we explicitly pass inputs (in a functional way) between
-modules in the ``forward()`` method of a module we define ourselves:
-
-.. code-block:: cpp
-
-  struct GeneratorImpl : nn::Module {
-    GeneratorImpl(int kNoiseSize)
-        : conv1(nn::Conv2dOptions(kNoiseSize, 256, 4)
-                    .with_bias(false)
-                    .transposed(true)),
+  struct DCGANGeneratorImpl : nn::Module {
+    DCGANGeneratorImpl(int kNoiseSize)
+        : conv1(nn::ConvTranspose2dOptions(kNoiseSize, 256, 4)
+                    .bias(false)),
           batch_norm1(256),
-          conv2(nn::Conv2dOptions(256, 128, 3)
+          conv2(nn::ConvTranspose2dOptions(256, 128, 3)
                     .stride(2)
                     .padding(1)
-                    .with_bias(false)
-                    .transposed(true)),
+                    .bias(false)),
           batch_norm2(128),
-          conv3(nn::Conv2dOptions(128, 64, 4)
+          conv3(nn::ConvTranspose2dOptions(128, 64, 4)
                     .stride(2)
                     .padding(1)
-                    .with_bias(false)
-                    .transposed(true)),
+                    .bias(false)),
           batch_norm3(64),
-          conv4(nn::Conv2dOptions(64, 1, 4)
+          conv4(nn::ConvTranspose2dOptions(64, 1, 4)
                     .stride(2)
                     .padding(1)
-                    .with_bias(false)
-                    .transposed(true))
+                    .bias(false))
     {
      // register_module() is needed if we want to use the parameters() method later on
      register_module("conv1", conv1);
      register_module("conv2", conv2);
      register_module("conv3", conv3);
      register_module("conv4", conv4);
      register_module("batch_norm1", batch_norm1);
-     register_module("batch_norm2", batch_norm1);
-     register_module("batch_norm3", batch_norm1);
+     register_module("batch_norm2", batch_norm2);
+     register_module("batch_norm3", batch_norm3);
     }
 
     torch::Tensor forward(torch::Tensor x) {
@@ -812,25 +742,34 @@ modules in the ``forward()`` method of a module we define ourselves:
       return x;
     }
 
-    nn::Conv2d conv1, conv2, conv3, conv4;
-    nn::BatchNorm batch_norm1, batch_norm2, batch_norm3;
+    nn::ConvTranspose2d conv1, conv2, conv3, conv4;
+    nn::BatchNorm2d batch_norm1, batch_norm2, batch_norm3;
   };
-  TORCH_MODULE(Generator);
+  TORCH_MODULE(DCGANGenerator);
+
+  DCGANGenerator generator(kNoiseSize);
+
+We can now invoke ``forward()`` on the ``DCGANGenerator`` to map a noise sample to an image.
+
+The particular modules chosen, like ``nn::ConvTranspose2d`` and ``nn::BatchNorm2d``,
+follows the structure outlined earlier. The ``kNoiseSize`` constant determines
+the size of the input noise vector and is set to ``100``. Hyperparameters were,
+of course, found via grad student descent.
 
-  Generator generator;
+.. attention::
 
-Whichever approach we use, we can now invoke ``forward()`` on the ``Generator`` to
-map a noise sample to an image.
+  No grad students were harmed in the discovery of hyperparameters. They were
+  fed Soylent regularly.
 
 .. note::
 
   A brief word on the way options are passed to built-in modules like ``Conv2d``
   in the C++ frontend: Every module has some required options, like the number
-  of features for ``BatchNorm``. If you only need to configure the required
+  of features for ``BatchNorm2d``. If you only need to configure the required
   options, you can pass them directly to the module's constructor, like
-  ``BatchNorm(128)`` or ``Dropout(0.5)`` or ``Conv2d(8, 4, 2)`` (for input
+  ``BatchNorm2d(128)`` or ``Dropout(0.5)`` or ``Conv2d(8, 4, 2)`` (for input
   channel count, output channel count, and kernel size). If, however, you need
-  to modify other options, which are normally defaulted, such as ``with_bias``
+  to modify other options, which are normally defaulted, such as ``bias``
   for ``Conv2d``, you need to construct and pass an *options* object. Every
   module in the C++ frontend has an associated options struct, called
   ``ModuleOptions`` where ``Module`` is the name of the module, like
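
As a quick usage illustration (not part of this diff), the new ``DCGANGenerator`` defined above can be driven like any other C++ frontend module; the batch size and the noise shape below are assumptions chosen to match the tutorial's ``[N, kNoiseSize, 1, 1]`` noise layout:

.. code-block:: cpp

  // Sketch only: map a batch of noise vectors to fake images.
  const int64_t kNoiseSize = 100;  // matches the tutorial's constant
  const int64_t kBatchSize = 64;   // illustrative assumption

  DCGANGenerator generator(kNoiseSize);

  // Sample noise shaped [N, kNoiseSize, 1, 1] and run one forward() pass.
  torch::Tensor noise = torch::randn({kBatchSize, kNoiseSize, 1, 1});
  torch::Tensor fake_images = generator->forward(noise);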
@@ -845,36 +784,42 @@ and activations. However, the convolutions are now regular ones instead of
 transposed, and we use a leaky ReLU with an alpha value of 0.2 instead of a
 vanilla ReLU. Also, the final activation becomes a Sigmoid, which squashes
 values into a range between 0 and 1. We can then interpret these squashed values
-as the probabilities the discriminator assigns to images being real:
+as the probabilities the discriminator assigns to images being real.
+
+To build the discriminator, we will try something different: a `Sequential` module.
+Like in Python, PyTorch here provides two APIs for model definition: a functional one
+where inputs are passed through successive functions (e.g. the generator module example),
+and a more object-oriented one where we build a `Sequential` module containing the
+entire model as submodules. Using `Sequential`, the discriminator would look like:
 
 .. code-block:: cpp
 
   nn::Sequential discriminator(
       // Layer 1
       nn::Conv2d(
-          nn::Conv2dOptions(1, 64, 4).stride(2).padding(1).with_bias(false)),
-      nn::Functional(torch::leaky_relu, 0.2),
+          nn::Conv2dOptions(1, 64, 4).stride(2).padding(1).bias(false)),
+      nn::LeakyReLU(nn::LeakyReLUOptions().negative_slope(0.2)),
       // Layer 2
       nn::Conv2d(
-          nn::Conv2dOptions(64, 128, 4).stride(2).padding(1).with_bias(false)),
-      nn::BatchNorm(128),
-      nn::Functional(torch::leaky_relu, 0.2),
+          nn::Conv2dOptions(64, 128, 4).stride(2).padding(1).bias(false)),
+      nn::BatchNorm2d(128),
+      nn::LeakyReLU(nn::LeakyReLUOptions().negative_slope(0.2)),
       // Layer 3
       nn::Conv2d(
-          nn::Conv2dOptions(128, 256, 4).stride(2).padding(1).with_bias(false)),
-      nn::BatchNorm(256),
-      nn::Functional(torch::leaky_relu, 0.2),
+          nn::Conv2dOptions(128, 256, 4).stride(2).padding(1).bias(false)),
+      nn::BatchNorm2d(256),
+      nn::LeakyReLU(nn::LeakyReLUOptions().negative_slope(0.2)),
       // Layer 4
       nn::Conv2d(
-          nn::Conv2dOptions(256, 1, 3).stride(1).padding(0).with_bias(false)),
-      nn::Functional(torch::sigmoid));
+          nn::Conv2dOptions(256, 1, 3).stride(1).padding(0).bias(false)),
+      nn::Sigmoid());
 
-.. note::
+.. tip::
+
+  A ``Sequential`` module simply performs function composition. The output of
+  the first submodule becomes the input of the second, the output of the third
+  becomes the input of the fourth and so on.
 
-  When the function we pass to ``Functional`` takes more arguments than a single
-  tensor, we can pass them to the ``Functional`` constructor, which will forward
-  them to each function call. For the leaky ReLU above, this means
-  ``torch::leaky_relu(previous_output_tensor, 0.2)`` is called.
 
 Loading Data
 ------------
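
As a brief, hedged usage sketch (again not part of this diff), the ``Sequential`` discriminator is invoked the same way; the ``[N, 1, 28, 28]`` input shape below assumes the single-channel MNIST images used later in the tutorial:

.. code-block:: cpp

  // Sketch only: score a batch of images with the Sequential discriminator.
  torch::Tensor images = torch::randn({64, 1, 28, 28});

  // forward() composes the submodules in order; the final nn::Sigmoid
  // squashes the outputs into per-image "real" probabilities in [0, 1].
  torch::Tensor probabilities = discriminator->forward(images);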
