Description
I just assign 10000 values to a tensor:
```cpp
clock_t start = clock();
torch::Tensor transform_tensor = torch::zeros({ 10000 });
for (size_t m = 0; m < 10000; m++)
    transform_tensor[m] = int(m);
clock_t finish = clock();
```
And it takes 0.317s. If I assign 10,000 values to an array or a vector, the time cost is much lower.
Why does the tensor take so much time? Can the time cost be decreased?
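For example, would something like the following be faster? This is only a rough sketch of what I mean (untested, variable names are just illustrative): writing through an accessor instead of indexing the tensor element by element, or filling a plain `std::vector` first and wrapping it with `torch::from_blob`.

```cpp
#include <torch/torch.h>
#include <vector>

int main() {
    const int64_t n = 10000;

    // Option 1: write through an accessor, so each assignment is a plain
    // memory write instead of creating a temporary Tensor view per element.
    torch::Tensor t1 = torch::zeros({ n });
    auto acc = t1.accessor<float, 1>();
    for (int64_t m = 0; m < n; ++m)
        acc[m] = static_cast<float>(m);

    // Option 2: fill an ordinary std::vector first, then wrap it with
    // torch::from_blob and clone so the tensor owns its own copy of the data.
    std::vector<float> buf(n);
    for (int64_t m = 0; m < n; ++m)
        buf[m] = static_cast<float>(m);
    torch::Tensor t2 = torch::from_blob(buf.data(), { n }).clone();

    return 0;
}
```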