Commit 5518c92

Fixing featured image
1 parent f8d2648 commit 5518c92

1 file changed: +2 −2 lines changed

_posts/2022-8-16-empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 layout: blog_detail
 title: "Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16"
 author: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)
-featured-img: '\assets\images\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'
+featured-img: '/assets/images/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'
 ---
 
 ## Overview
@@ -52,7 +52,7 @@ Generally, the explicit conversion approach and AMP approach have similar performance
 We benchmarked inference performance of TorchVision models on Intel® Xeon® Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.
 
 <p align="center">
-<img src="\assets\images\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png" width="100%">
+<img src="/assets/images/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png" width="100%">
 </p>
 
 ## The performance boost of bfloat16 over float32 primarily comes from 3 aspects:
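
The diff context above mentions the two ways the post enables bfloat16 inference: explicit dtype conversion and CPU autocast (AMP). As a rough illustration only, not part of this commit, a minimal sketch of both approaches might look like the following; the specific model (`resnet50`) and input shape are assumptions for the example.

```python
import torch
import torchvision.models as models

# Example TorchVision model in eval mode; any CNN works similarly.
model = models.resnet50(weights=None).eval()
x = torch.randn(2, 3, 224, 224)  # batch size 2, matching the benchmark description

# Approach 1: explicit conversion of weights and inputs to bfloat16.
model_bf16 = model.to(torch.bfloat16)
with torch.no_grad():
    out_explicit = model_bf16(x.to(torch.bfloat16))

# Approach 2: automatic mixed precision (AMP) on CPU with bfloat16 autocast.
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out_amp = model(x)
```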
