# could be, in rare cases, faster than the corresponding CUDA version.
# which relies on creating a brand new storage in pinned memory through `cudaHostAlloc <https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1gb65da58f444e7230d3322b6126bb4902>`_
# could be, in rare cases, faster than transferring data in chunks as ``cudaMemcpy`` does.
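# One way to observe this comparison yourself is to time both paths: a plain
# pageable copy versus ``pin_memory()`` followed by a non-blocking copy. The
# sketch below is illustrative only (it assumes PyTorch and a CUDA device are
# available, and the helper name ``compare_copy_paths`` is ours, not part of
# any API); it is not a rigorous benchmark.
#
# .. code-block:: python
#
#     import time
#
#     try:
#         import torch
#     except ImportError:  # PyTorch may not be installed; sketch only
#         torch = None
#
#     def compare_copy_paths(numel: int = 1 << 20, reps: int = 10):
#         """Time a pageable copy vs. pin_memory() + non-blocking copy.
#
#         Returns (pageable_seconds, pinned_seconds), or None when no
#         CUDA device is available.
#         """
#         if torch is None or not torch.cuda.is_available():
#             return None
#         t = torch.randn(numel)
#
#         # Path 1: pageable tensor -> GPU (the driver stages the copy
#         # in chunks, cudaMemcpy-style).
#         torch.cuda.synchronize()
#         start = time.perf_counter()
#         for _ in range(reps):
#             t.to("cuda")
#         torch.cuda.synchronize()
#         pageable = time.perf_counter() - start
#
#         # Path 2: allocate pinned storage first (cudaHostAlloc under
#         # the hood), then issue a non-blocking copy from it.
#         torch.cuda.synchronize()
#         start = time.perf_counter()
#         for _ in range(reps):
#             t.pin_memory().to("cuda", non_blocking=True)
#         torch.cuda.synchronize()
#         pinned = time.perf_counter() - start
#         return pageable, pinned
#
#     print(compare_copy_paths())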
# Here too, the observation may vary depending on the available hardware, the size of the tensors being sent, or