Sync shape info between dil tensor and aten tensor #12


Merged: 2 commits into intel:master on May 18, 2020

Conversation

EikanWang (Contributor):

@pinzhenx @hongzhen1 @jiayisunx @XiaobingSuper, please review the patch to make sure the shape of an aten tensor and its dil tensor buffer stays consistent. The dil tensor is just a buffer for the aten tensor; all shape info used in the computation should come from the aten tensor.

@pinzhenx, I'm not sure which DNNL ops require their input tensors to be contiguous. Could you help check?

After that, each DNNL op that requires contiguous input tensors needs to call contiguous() on them explicitly.
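
As a rough sketch of that pattern (not part of this patch): the op below is hypothetical, while `try_gen_dil_tensor` and `dil::tensor` are the repo helpers shown in the diff further down.

```cpp
// Sketch only, not the actual patch. `dnnl_op_example` is a hypothetical op;
// `try_gen_dil_tensor` and `dil::tensor` come from this repo.
#include <ATen/ATen.h>

at::Tensor dnnl_op_example(const at::Tensor& input) {
  // The dil tensor is only a buffer; shape/stride info must come from the
  // aten tensor, so contiguity has to be enforced on the aten side first.
  auto input_ = input.is_contiguous() ? input : input.contiguous();

  // Wrap the (now contiguous) aten tensor into a dil buffer for DNNL.
  dil::tensor x = try_gen_dil_tensor(input_);

  // ... run the DNNL primitive on x, taking the shape from input_.sizes(),
  // not from the dil buffer ...
  return input_;
}
```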

@@ -38,7 +38,6 @@ dil::tensor try_gen_dil_tensor(const at::Tensor &input) {
   if (cpu::ShadeDataContext::isDilTensor(input)) {
     return cpu::ShadeDataContext::getDilTensor(input);
Contributor:

getDilTensor will lose the aten metadata when generating the dil tensor.
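
To spell the concern out, here is the same branch from the diff above with commentary added; the fall-through path is elided, and the staleness behavior described in the comment is an assumption about the cached buffer, not something verified here.

```cpp
// Sketch around the diff above; only the shown branch is reproduced, the
// wrapping fall-through path is elided.
dil::tensor try_gen_dil_tensor(const at::Tensor& input) {
  if (cpu::ShadeDataContext::isDilTensor(input)) {
    dil::tensor buf = cpu::ShadeDataContext::getDilTensor(input);
    // `buf` may still describe the shape the aten tensor had when the buffer
    // was cached; if `input` has been viewed or reshaped since then, callers
    // should take sizes/strides from `input`, not from `buf`.
    return buf;
  }
  // ... otherwise wrap the raw aten storage into a new dil tensor ...
}
```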

EikanWang merged commit 51c8b48 into intel:master on May 18, 2020
zhuhaozhe pushed a commit to zhuhaozhe/intel-extension-for-pytorch that referenced this pull request Apr 23, 2021
EikanWang pushed a commit that referenced this pull request on Oct 4, 2021:
…and adaptive_avg_pool2d (#12)

* input quantization parameters propagate to the output for max_pool2d and adaptive_avg_pool2d

* [LLGA] add UT for int8 max_pool2d

* [LLGA] add skipped UT for adap_avg_pool

Co-authored-by: chunyuan <chunyuan.wu@intel.com>