
Commit 1d474f9

fix code
Signed-off-by: Chris Abraham <cjyabraham@gmail.com>
1 parent 52994ee commit 1d474f9

File tree

1 file changed: +3 -3 lines changed


_posts/2024-07-08-accelerated-pytorch-inference.md

Lines changed: 3 additions & 3 deletions
@@ -329,15 +329,15 @@ ACL_CHECK_SUPPORT(
 We defined mixed precision primitive definitions and updated the existing oneDNN ACL fp32 primitives to handle bfloat16 tensors.
 
 ```
-/* With graph compilation, we are able to reorder and pre-pack the weights during the model load
+{% raw %} /* With graph compilation, we are able to reorder and pre-pack the weights during the model load
  * and compilation phase itself so that redundant and on-the-fly reorders can be avoided.
  * This primitive definition is to support gemm fastmath mode for the compile scenario where src is
  * in fp32 and weights are in bf16
  */
-{ {forward, f32, bf16, f32}, {
+{{forward, f32, bf16, f32}, {
   CPU_INSTANCE_AARCH64_ACL(acl_inner_product_fwd_t)
   nullptr,
-}},
+}},{% endraw %}
 ```
 
 ### Optimization 3: Disabled operator fusion pass in torch inductor
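Context for the fix: posts under `_posts/` are processed by Liquid (Jekyll's templating engine) before the Markdown is rendered, so a literal `{{` inside a code block is parsed as the start of a Liquid output tag and gets mangled or breaks the build. Wrapping the snippet in `{% raw %} … {% endraw %}` makes Liquid emit the enclosed text verbatim, which is presumably why the earlier workaround space in `{ {forward, …}` can now be dropped. A minimal sketch of the behavior (the sample text is illustrative, not taken from the post):

```
{% raw %}
{{forward, f32, bf16, f32}}   // inside raw/endraw, Liquid passes the double braces through untouched
{% endraw %}
```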
