
Commit 0af236a

vfdev-5 authored and Christine Abernathy committed
Fix typos and update to 0.4.1 (pytorch#280)
`AT_ASSERT(cond, ...)` -> `AT_ASSERTM(cond, ...)` https://github.com/pytorch/pytorch/blob/v0.4.1/aten/src/ATen/Error.h#L123
1 parent 5739c41 · commit 0af236a

File tree: 1 file changed (+3, -3 lines)


advanced_source/cpp_extension.rst

Lines changed: 3 additions & 3 deletions
@@ -151,7 +151,7 @@ looks as simple as this::
     from torch.utils.cpp_extension import CppExtension, BuildExtension
 
     setup(name='lltm',
-          ext_modules=[CppExtension('lltm', ['lltm.cpp'])]
+          ext_modules=[CppExtension('lltm', ['lltm.cpp'])],
           cmdclass={'build_ext': BuildExtension})
 
 
@@ -662,8 +662,8 @@ We'll start with the C++ file, which we'll call ``lltm_cuda.cpp``, for example:
 
 // C++ interface
 
-#define CHECK_CUDA(x) AT_ASSERT(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) AT_ASSERT(x.is_contiguous(), #x " must be contiguous")
+#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
+#define CHECK_CONTIGUOUS(x) AT_ASSERTM(x.is_contiguous(), #x " must be contiguous")
 #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
 
 std::vector<at::Tensor> lltm_forward(
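
For context, here is a minimal sketch of how the updated macros fit into the tutorial's C++ interface, assuming the 0.4.1-era extension header `<torch/torch.h>`; the `lltm_cuda_forward` declaration and the argument list are illustrative placeholders mirroring the tutorial, not part of this diff:

// Sketch only: the check macros as updated by this commit, applied at the
// C++ entry point before dispatching to the CUDA implementation.
#include <torch/torch.h>

#include <vector>

#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) AT_ASSERTM(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)

// Placeholder declaration; in the tutorial the implementation lives in the
// companion .cu file.
std::vector<at::Tensor> lltm_cuda_forward(
    at::Tensor input,
    at::Tensor weights,
    at::Tensor bias,
    at::Tensor old_h,
    at::Tensor old_cell);

std::vector<at::Tensor> lltm_forward(
    at::Tensor input,
    at::Tensor weights,
    at::Tensor bias,
    at::Tensor old_h,
    at::Tensor old_cell) {
  // Each check carries a message, so a failed assertion reports which
  // tensor was rejected and why.
  CHECK_INPUT(input);
  CHECK_INPUT(weights);
  CHECK_INPUT(bias);
  CHECK_INPUT(old_h);
  CHECK_INPUT(old_cell);
  return lltm_cuda_forward(input, weights, bias, old_h, old_cell);
}

The point of the rename is that `AT_ASSERTM(cond, ...)` accepts a message after the condition, which is exactly what these checks pass.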
