diff --git a/intermediate_source/pruning_tutorial.py b/intermediate_source/pruning_tutorial.py
index ba105df1a0a..d8de5a7502a 100644
--- a/intermediate_source/pruning_tutorial.py
+++ b/intermediate_source/pruning_tutorial.py
@@ -9,7 +9,7 @@
 known to use efficient sparse connectivity. Identifying optimal
 techniques to compress models by reducing the number of parameters in them is
 important in order to reduce memory, battery, and hardware consumption without
-sacrificing accuracy, deploy lightweight models on device, and guarantee
+sacrificing accuracy. This in turn allows you to deploy lightweight models on device, and guarantee
 privacy with private on-device computation. On the research front, pruning is
 used to investigate the differences in learning dynamics between
 over-parametrized and under-parametrized networks, to study the role of lucky