
enable the fallback to cpu for adaptive_avg_pool2d and max_pool2d #84


Merged: 11 commits merged into intel:master on Jun 22, 2020

Conversation

chunyuan-w (Contributor):

fix #79
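
For context, the fix follows a try-oneDNN-then-fall-back-to-CPU pattern for the pooling ops; the diff excerpts quoted in the review below show the _DEBUG-guarded TORCH_WARN and the DPCPP device check. A minimal sketch of that pattern, assuming a hypothetical wrapper and a hypothetical dnnl_adaptive_avg_pool2d helper (not the PR's exact code):

```cpp
#include <exception>
#include <ATen/ATen.h>
#include <c10/util/Exception.h>

// Hypothetical oneDNN-backed kernel assumed to be provided by the extension.
at::Tensor dnnl_adaptive_avg_pool2d(const at::Tensor& input,
                                    at::IntArrayRef output_size);

// Sketch of the fallback wrapper: try the oneDNN path for DPCPP tensors and
// fall back to the stock ATen CPU kernel when it throws.
at::Tensor adaptive_avg_pool2d_with_fallback(const at::Tensor& input,
                                             at::IntArrayRef output_size) {
  // DPCPP is the extension's device type, as in the diff excerpt below.
  if (input.device().type() == c10::DeviceType::DPCPP) {
    try {
      return dnnl_adaptive_avg_pool2d(input, output_size);
    } catch (std::exception& e) {
#if defined(_DEBUG)
      TORCH_WARN(e.what());
#endif
      // Fall through to the CPU path, e.g. when the output size does not
      // divide the input evenly and oneDNN cannot handle it.
    }
  }
  // CPU fallback: run the ATen kernel on a CPU copy, then move the result
  // back to the original device so callers keep getting a DPCPP tensor.
  auto y = at::adaptive_avg_pool2d(input.to(c10::DeviceType::CPU), output_size);
  return y.to(input.device());
}
```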

chunyuan-w (Contributor, author):

@EikanWang @pinzhenx

#if defined(_DEBUG)
TORCH_WARN(e.what());
#endif
}

Contributor commented on the lines above:

Redundant whitespace

TORCH_WARN(e.what());
#endif
}
if (input.device().type() == c10::DeviceType::DPCPP){

Contributor commented on the lines above:

Seems like incorrect indent style

EikanWang (Contributor):

LGTM. Please refine the code style.

EikanWang (Contributor):

@pinzhenx are you okay with this fix?

y_cpu,
y_dpcpp)

self.assertEqual("cpu", y_cpu.device.type)

Contributor commented on the lines above:

this check could be removed

self.assertEqual(device, y_dpcpp.device.type)

def test_adaptive_avg_pool2d_backward_not_divisible(self):
ipex.enable_auto_dnnl()

Contributor commented on the lines above:

is it possible to merge the forward and backward UTs into a single one?

chunyuan-w (Contributor, author) replied on Jun 19, 2020:

Other tests in test_lazy_reorder.py separate the backward from the forward. Should we align with the other UTs in this file?

def test_max_pool3d_backward(self):

pinzhenx (Contributor) replied on Jun 19, 2020:

okay, that's just fine.
This code was copied from test_mkldnn.py, where we had tons of copy-pasted code.

y_dpcpp.backward()
self.assertEqual(x_cpu.grad, x_dpcpp.grad)

self.assertEqual("cpu", x_cpu.grad.device.type)

Contributor commented on the lines above:

ditto

}
} catch(std::exception& e) {
#if defined(_DEBUG)
TORCH_WARN(e.what());

pinzhenx (Contributor) commented on the lines above, Jun 19, 2020:

nit: 4 more spaces are needed here, to indent according to the catch block.
Also, add a space after catch: catch (std::exception& e).
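
For reference, a minimal sketch of the formatting this nit asks for; the surrounding function is only a placeholder, and the point is the space after catch and the indentation of TORCH_WARN:

```cpp
#include <exception>
#include <c10/util/Exception.h>

void run_with_debug_warning() {
  try {
    // ... oneDNN path would go here ...
  } catch (std::exception& e) {  // note the space after `catch`
#if defined(_DEBUG)
    TORCH_WARN(e.what());        // indented to match the catch block body
#endif
  }
}
```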

pinzhenx (Contributor):

LGTM. Thanks!

EikanWang merged commit 79953a5 into intel:master on Jun 22, 2020.
zhuhaozhe pushed a commit to zhuhaozhe/intel-extension-for-pytorch that referenced this pull request on Jun 24, 2020 (…tel#84):

1. move "bn folding" and "prepack conv weight" to the hooked jit script function
2. add check on fused node in the graph
chunyuan-w deleted the pooling_fallback branch on June 24, 2020 at 08:29.

Successfully merging this pull request may close this issue: adaptive_avg_pool2d cannot fallback to cpu

3 participants