enable the fallback to cpu for adaptive_avg_pool2d and max_pool2d #84
Conversation
torch_ipex/csrc/cpu/CustomOPs.h (Outdated)

#if defined(_DEBUG)
  TORCH_WARN(e.what());
#endif
}
Redundant whitespace
torch_ipex/csrc/cpu/CustomOPs.h (Outdated)

  TORCH_WARN(e.what());
#endif
}
if (input.device().type() == c10::DeviceType::DPCPP){
This looks like an incorrect indent style.
LGTM. Please refine the code style.
@pinzhenx are you okay with this fix?
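The fallback pattern under discussion — try the accelerated (DPCPP) kernel first and, on failure, warn and dispatch to the reference CPU path — can be sketched in Python. This is a hypothetical illustration of the try/catch shape in CustomOPs.h; `pool2d_with_fallback`, `run_dpcpp`, and `run_cpu` are stand-in names, not IPEX APIs.

```python
import warnings

def pool2d_with_fallback(run_dpcpp, run_cpu, x):
    """Try the accelerated kernel; on any failure, emit a warning and
    fall back to the reference CPU implementation."""
    try:
        return run_dpcpp(x)
    except Exception as e:
        # mirrors the TORCH_WARN(e.what()) in the C++ debug build
        warnings.warn(str(e))
        return run_cpu(x)
```

The key design point is that the caller never sees the failure: the op always returns a result, and the warning is only surfaced in debug builds (the `#if defined(_DEBUG)` guard in the C++ code).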
tests/cpu/test_lazy_reorder.py (Outdated)

y_cpu,
y_dpcpp)

self.assertEqual("cpu", y_cpu.device.type)
This check could be removed.
self.assertEqual(device, y_dpcpp.device.type)

def test_adaptive_avg_pool2d_backward_not_divisible(self):
    ipex.enable_auto_dnnl()
Is it possible to merge the forward and backward unit tests into a single one?
Other tests in test_lazy_reorder.py separate the backward pass from the forward pass. Should we align with the other unit tests in this file?
def test_max_pool3d_backward(self):
Okay, that's fine. This code was copied from test_mkldnn.py, which already had a lot of copy-pasted code.
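If the forward and backward checks were merged as suggested, a single test could exercise both passes. Here is a minimal sketch of that shape using a toy function in place of the real pooling ops; `f` and `f_grad` are hypothetical stand-ins, with a finite-difference check standing in for the CPU-vs-DPCPP gradient comparison.

```python
import unittest

def f(x):
    # toy forward pass standing in for the pooling op
    return x * x

def f_grad(x):
    # analytic gradient of f, standing in for the backward pass
    return 2.0 * x

class TestForwardBackwardMerged(unittest.TestCase):
    def test_square_forward_and_backward(self):
        x = 3.0
        # forward check
        self.assertEqual(f(x), 9.0)
        # backward check in the same test: compare the analytic
        # gradient against a central finite-difference estimate
        eps = 1e-6
        numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
        self.assertAlmostEqual(f_grad(x), numeric, places=4)

if __name__ == "__main__":
    unittest.main()
```

Merging keeps the forward and backward results for one input together, at the cost of diverging from the file's existing one-test-per-pass convention, which is the trade-off debated above.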
tests/cpu/test_lazy_reorder.py (Outdated)

y_dpcpp.backward()
self.assertEqual(x_cpu.grad, x_dpcpp.grad)

self.assertEqual("cpu", x_cpu.grad.device.type)
Ditto.
torch_ipex/csrc/cpu/CustomOPs.h (Outdated)

  }
} catch(std::exception& e) {
#if defined(_DEBUG)
TORCH_WARN(e.what());
Nit: four more spaces of indentation are needed here, to match the catch block. Also, add a space after catch: catch (std::exception& e)
LGTM. Thanks!
…tel#84)
1. move "bn folding" and "prepack conv weight" to the hooked jit script function
2. add check on fused node in the graph
fix #79