auto outputTensor = _impl.forward({tensor}).toTensor().cpu();
...
return nil;
}
As you can see, we simply call ``.metal()`` to move our input tensor from CPU to GPU, and then call ``.cpu()`` to move the result back. Internally, ``.metal()`` copies the input data from the CPU buffer to a GPU buffer with a GPU-compatible memory format. When ``.cpu()`` is invoked, the GPU command buffer is flushed and synced. After ``forward`` finishes, the final result is copied from the GPU buffer back to a CPU buffer.
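The round trip described above can be sketched end to end as follows. This is a minimal illustration, assuming a libtorch build with the Metal backend enabled and an iOS device; the model path ``model.pt`` and the input shape are placeholders, not from the original tutorial.

#include <torch/script.h>

int main() {
  // Load a TorchScript model optimized for the Metal backend
  // ("model.pt" is a placeholder path).
  auto module = torch::jit::load("model.pt");

  // Create a CPU tensor, then copy its data into a GPU buffer
  // with a GPU-compatible memory format via .metal().
  auto input_cpu = torch::rand({1, 3, 224, 224});
  auto input_gpu = input_cpu.metal();

  // forward() runs on the GPU; .cpu() flushes and syncs the GPU
  // command buffer and copies the result back to a CPU buffer.
  auto output = module.forward({input_gpu}).toTensor().cpu();
}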
The resulting model can be used only on the Vulkan backend, as it contains operators specific to the Vulkan backend.
@@ -123,9 +123,9 @@ C++ API
at::is_vulkan_available()
auto tensor = at::rand({1, 2, 2, 3}, at::device(at::kCPU).dtype(at::kFloat));
auto tensor_vulkan = tensor.vulkan();
- auto module = torch::jit::load(“$PATH”);
- auto tensor_output_vulkan = module.forward(inputs)
- auto tensor_output = tensor_output.cpu()
+ auto module = torch::jit::load("$PATH");
+ auto tensor_output_vulkan = module.forward(inputs).toTensor();
+ auto tensor_output = tensor_output_vulkan.cpu();
The ``at::is_vulkan_available()`` function tries to initialize the Vulkan backend; if a Vulkan device is successfully found and a context is created, it returns true, otherwise false.
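A common pattern is to use this check to fall back to CPU execution when no Vulkan device is present. This is a sketch, assuming a libtorch build with the Vulkan backend enabled; the variable names are illustrative, not from the original tutorial.

#include <torch/script.h>

int main() {
  auto tensor = at::rand({1, 2, 2, 3}, at::device(at::kCPU).dtype(at::kFloat));

  // Only move the tensor to the Vulkan backend if a Vulkan device
  // was found and a context could be created; otherwise stay on CPU.
  if (at::is_vulkan_available()) {
    tensor = tensor.vulkan();
  }
}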
@@ -185,7 +185,7 @@ For Android API to run model on Vulkan backend we have to specify this during mo