
Commit a6fa021

update

2 parents 484f772 + ca44760

2 files changed, +2 -1 lines changed

Gemfile

Lines changed: 1 addition & 0 deletions
@@ -4,5 +4,6 @@ ruby "3.3.0"
 group :jekyll_plugins do
   gem "github-pages"
   gem "jekyll-paginate-v2"
+  gem 'jekyll-autoprefixer'
   gem 'jekyll-feed'
 end

_posts/2020-08-18-pytorch-1.6-now-includes-stochastic-weight-averaging.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 layout: blog_detail
 title: 'PyTorch 1.6 now includes Stochastic Weight Averaging'
-author: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair
+author: Pavel Izmailov, Andrew Gordon Wilson and Vincent Quenneville-Belair
 ---
 
 Do you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it’s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. [Again](https://twitter.com/MilesCranmer/status/1282140440892932096) and [again](https://twitter.com/leopd/status/1285969855062192129), researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!
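
For context on the post being edited above: the SWA support it describes lives in torch.optim.swa_utils in PyTorch 1.6. The following is a minimal sketch of that workflow, not part of this commit; the names model, loader, and loss_fn are hypothetical stand-ins for an existing (pre-)trained setup.

import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Hypothetical stand-ins for a real, already-trained setup.
model = torch.nn.Linear(10, 2)
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(5)]
loss_fn = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
swa_model = AveragedModel(model)            # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.005)

# A few extra epochs on top of the pre-trained weights, averaging once per epoch.
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    swa_model.update_parameters(model)
    swa_scheduler.step()

update_bn(loader, swa_model)  # recompute BatchNorm statistics for the averaged weights

Evaluation would then use swa_model rather than model, since the averaged weights are what SWA's generalization benefit comes from.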
