
Commit 2b7f15e

atskae, atsuko, and svekars authored
Reduce number of iterations in USB tutorial (#2771)
* Add USB framework image
* Define acronyms at beginning
* Update num_train_iter to 500
* Add note on requiring CUDA

---------

Co-authored-by: atsuko <atsuko@ibm.com>
Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
1 parent 475671e commit 2b7f15e

File tree

4 files changed (+28, -14 lines)


.jenkins/validate_tutorials_built.py

Lines changed: 0 additions & 1 deletion

@@ -28,7 +28,6 @@
     "intermediate_source/_torch_export_nightly_tutorial",  # does not work on release
     "advanced_source/super_resolution_with_onnxruntime",
     "advanced_source/ddp_pipeline",  # requires 4 gpus
-    "advanced_source/usb_semisup_learn",  # in the current form takes 140+ minutes to build - can be enabled when the build time is reduced
     "prototype_source/fx_graph_mode_ptq_dynamic",
     "prototype_source/vmap_recipe",
     "prototype_source/torchscript_freezing",
USB framework image (binary file added, 555 KB)

advanced_source/usb_semisup_learn.py

Lines changed: 20 additions & 13 deletions
@@ -5,7 +5,7 @@
 **Author**: `Hao Chen <https://github.com/Hhhhhhao>`_
 
 Unified Semi-supervised learning Benchmark (USB) is a semi-supervised
-learning framework built upon PyTorch.
+learning (SSL) framework built upon PyTorch.
 Based on Datasets and Modules provided by PyTorch, USB becomes a flexible,
 modular, and easy-to-use framework for semi-supervised learning.
 It supports a variety of semi-supervised learning algorithms, including
@@ -17,7 +17,7 @@
 This tutorial will walk you through the basics of using the USB lighting
 package.
 Let's get started by training a ``FreeMatch``/``SoftMatch`` model on
-CIFAR-10 using pretrained ViT!
+CIFAR-10 using pretrained Vision Transformers (ViT)!
 And we will show it is easy to change the semi-supervised algorithm and train
 on imbalanced datasets.
 
@@ -64,6 +64,9 @@
 # Now, let's use USB to train ``FreeMatch`` and ``SoftMatch`` on CIFAR-10.
 # First, we need to install USB package ``semilearn`` and import necessary API
 # functions from USB.
+# If you are running this in Google Colab, install ``semilearn`` by running:
+# ``!pip install semilearn``.
+#
 # Below is a list of functions we will use from ``semilearn``:
 #
 # - ``get_dataset`` to load dataset, here we use CIFAR-10
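The install step the hunk adds is a notebook command. A minimal sketch of an equivalent guarded install for a Colab cell follows; the try/except wrapper is an illustration, not part of the tutorial:

# Install semilearn only if it is not already importable (illustrative guard).
try:
    import semilearn  # noqa: F401
except ImportError:
    import subprocess, sys
    subprocess.check_call([sys.executable, "-m", "pip", "install", "semilearn"])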
@@ -77,6 +80,10 @@
 # - ``Trainer``: a Trainer class for training and evaluating the
 #   algorithm on dataset
 #
+# Note that a CUDA-enabled backend is required for training with the ``semilearn`` package.
+# See `Enabling CUDA in Google Colab <https://pytorch.org/tutorials/beginner/colab#using-cuda>`__ for instructions
+# on enabling CUDA in Google Colab.
+#
 import semilearn
 from semilearn import get_dataset, get_data_loader, get_net_builder, get_algorithm, get_config, Trainer
 
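Since ``semilearn`` training requires CUDA, a quick check before running the tutorial fails fast with a readable message. This guard is a suggestion, not part of the tutorial:

import torch

# Fail early instead of hitting a mid-training CUDA error.
if not torch.cuda.is_available():
    raise RuntimeError(
        "This tutorial requires a CUDA-enabled device; in Colab, switch the "
        "runtime to a GPU (Runtime > Change runtime type)."
    )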
@@ -92,7 +99,7 @@
 
     # optimization configs
     'epoch': 1,
-    'num_train_iter': 4000,
+    'num_train_iter': 500,
     'num_eval_iter': 500,
     'num_log_iter': 50,
     'optim': 'AdamW',
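For context, the keys in this hunk sit inside the dictionary handed to ``get_config``. A minimal sketch of the surrounding call follows; every key not shown in the diff above is an illustrative assumption, not the tutorial's exact value:

from semilearn import get_config

# Sketch of the surrounding config; keys outside the diff are assumptions.
config = get_config({
    'algorithm': 'freematch',        # assumed: SSL algorithm name
    'net': 'vit_tiny_patch2_32',     # assumed: pretrained ViT backbone
    'num_classes': 10,               # assumed: CIFAR-10

    # optimization configs (these keys appear in the diff above)
    'epoch': 1,
    'num_train_iter': 500,
    'num_eval_iter': 500,
    'num_log_iter': 50,
    'optim': 'AdamW',
})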
@@ -141,16 +148,16 @@
 
 ######################################################################
 # We can start training the algorithms on CIFAR-10 with 40 labels now.
-# We train for 4000 iterations and evaluate every 500 iterations.
+# We train for 500 iterations and evaluate every 500 iterations.
 #
 trainer = Trainer(config, algorithm)
 trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
 
 
 ######################################################################
 # Finally, let's evaluate the trained model on the validation set.
-# After training 4000 iterations with ``FreeMatch`` on only 40 labels of
-# CIFAR-10, we obtain a classifier that achieves above 93 accuracy on the validation set.
+# After training 500 iterations with ``FreeMatch`` on only 40 labels of
+# CIFAR-10, we obtain a classifier that achieves around 87% accuracy on the validation set.
 trainer.evaluate(eval_loader)
 
 
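The ``trainer.fit``/``trainer.evaluate`` calls rely on loaders and an algorithm built with the helpers imported from ``semilearn`` earlier. A hedged sketch of that wiring follows, with argument details assumed rather than copied from the tutorial:

# Hedged sketch of the data/algorithm wiring; exact arguments may differ
# from the tutorial. All helper names come from the semilearn import above.
dataset_dict = get_dataset(config, config.algorithm, config.dataset,
                           config.num_labels, config.num_classes)
train_lb_loader = get_data_loader(config, dataset_dict['train_lb'],
                                  config.batch_size)
train_ulb_loader = get_data_loader(config, dataset_dict['train_ulb'],
                                   int(config.batch_size * config.uratio))
eval_loader = get_data_loader(config, dataset_dict['eval'],
                              config.eval_batch_size)

algorithm = get_algorithm(config, get_net_builder(config.net, from_name=False),
                          tb_log=None, logger=None)

trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
trainer.evaluate(eval_loader)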
@@ -174,7 +181,7 @@
 
     # optimization configs
     'epoch': 1,
-    'num_train_iter': 4000,
+    'num_train_iter': 500,
     'num_eval_iter': 500,
     'num_log_iter': 50,
     'optim': 'AdamW',
@@ -225,7 +232,7 @@
 
 ######################################################################
 # We can start Train the algorithms on CIFAR-10 with 40 labels now.
-# We train for 4000 iterations and evaluate every 500 iterations.
+# We train for 500 iterations and evaluate every 500 iterations.
 #
 trainer = Trainer(config, algorithm)
 trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
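Changing the SSL algorithm is a one-key config change, which is the flexibility this second run demonstrates. A hedged sketch follows; the ``'softmatch'`` value and the keys not shown in the diffs are assumptions about ``semilearn``'s naming:

# Swap FreeMatch for SoftMatch by changing one key; loaders and Trainer are
# reused unchanged. Keys not shown in the diffs are illustrative assumptions.
config = get_config({
    'algorithm': 'softmatch',        # was 'freematch'
    'net': 'vit_tiny_patch2_32',     # assumed backbone, as before
    'num_classes': 10,
    'epoch': 1,
    'num_train_iter': 500,
    'num_eval_iter': 500,
    'num_log_iter': 50,
    'optim': 'AdamW',
})
algorithm = get_algorithm(config, get_net_builder(config.net, from_name=False),
                          tb_log=None, logger=None)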
@@ -239,8 +246,8 @@
 
 
 ######################################################################
-# References
-# [1] USB: https://github.com/microsoft/Semi-supervised-learning
-# [2] Kihyuk Sohn et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
-# [3] Yidong Wang et al. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
-# [4] Hao Chen et al. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning
+# References:
+# - [1] USB: https://github.com/microsoft/Semi-supervised-learning
+# - [2] Kihyuk Sohn et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
+# - [3] Yidong Wang et al. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
+# - [4] Hao Chen et al. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning

beginner_source/colab.rst

Lines changed: 8 additions & 0 deletions
@@ -93,3 +93,11 @@ Hopefully this example will give you a good starting point for running
 some of the more complex tutorials in Colab. As we evolve our use of
 Colab on the PyTorch tutorials site, we'll look at ways to make this
 easier for users.
+
+Enabling CUDA
+~~~~~~~~~~~~~~~~
+Some tutorials require a CUDA-enabled device (NVIDIA GPU), which involves
+changing the Runtime type prior to executing the tutorial.
+To change the Runtime in Google Colab, on the top drop-down menu select **Runtime**,
+then select **Change runtime type**. Under **Hardware accelerator**, select ``T4 GPU``,
+then click ``Save``.
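After switching the runtime, a short check confirms the GPU is actually visible. This snippet is a suggested sanity check for readers, not part of the added section:

import torch

# Confirm the Colab runtime exposes a CUDA device (for example, a T4).
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))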
