
Commit ed465bd

Author: Vincent Moens
Commit message: amend
Parent: 69e98ea

File tree

1 file changed: 3 additions, 20 deletions


intermediate_source/pinmem_nonblock.py

Lines changed: 3 additions & 20 deletions
@@ -13,8 +13,9 @@
 This tutorial examines two key methods for device-to-device data transfer in PyTorch:
 :meth:`~torch.Tensor.pin_memory` and :meth:`~torch.Tensor.to` with the ``non_blocking=True`` option.
 
-Key Learnings
-~~~~~~~~~~~~~
+What you will learn
+~~~~~~~~~~~~~~~~~~~
+
 Optimizing the transfer of tensors from the CPU to the GPU can be achieved through asynchronous transfers and memory
 pinning. However, there are important considerations:
@@ -52,24 +53,6 @@
 #
 # We start by outlining the theory surrounding these concepts, and then move to concrete test examples of the features.
 #
-# - :ref:`Background <pinned_memory_background>`
-#
-#   - :ref:`Memory management basics <pinned_memory_memory>`
-#   - :ref:`CUDA and (non-)pageable memory <pinned_memory_cuda_pageable_memory>`
-#   - :ref:`Asynchronous vs. Synchronous Operations with non_blocking=True <pinned_memory_async_sync>`
-#
-# - :ref:`A PyTorch perspective <pinned_memory_pt_perspective>`
-#
-#   - :ref:`pin_memory <pinned_memory_pinned>`
-#   - :ref:`non_blocking=True <pinned_memory_non_blocking>`
-#   - :ref:`Synergies <pinned_memory_synergies>`
-#   - :ref:`Other copy directions (GPU -> CPU) <pinned_memory_other_direction>`
-#
-# - :ref:`Practical recommendations <pinned_memory_recommendations>`
-# - :ref:`Additional considerations <pinned_memory_considerations>`
-# - :ref:`Conclusion <pinned_memory_conclusion>`
-# - :ref:`Additional resources <pinned_memory_resources>`
-#
 #
 # Background
 # ----------
