From 45e9d83d1951eb5a099cc48163578db82fa4da99 Mon Sep 17 00:00:00 2001
From: Danielle Pintz <38207072+daniellepintz@users.noreply.github.com>
Date: Sun, 15 Oct 2023 17:24:45 -0400
Subject: [PATCH 1/3] Update FSDP_tutorial.rst

Some grammar and readability improvements
---
 intermediate_source/FSDP_tutorial.rst | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/intermediate_source/FSDP_tutorial.rst b/intermediate_source/FSDP_tutorial.rst
index d69a03b68be..17b601f5d30 100644
--- a/intermediate_source/FSDP_tutorial.rst
+++ b/intermediate_source/FSDP_tutorial.rst
@@ -8,7 +8,7 @@ Getting Started with Fully Sharded Data Parallel(FSDP)
 
 Training AI models at a large scale is a challenging task that requires a lot of compute power and resources.
 It also comes with considerable engineering complexity to handle the training of these very large models.
-`Pytorch FSDP `__, released in PyTorch 1.11 makes this easier.
+`PyTorch FSDP `__, released in PyTorch 1.11 makes this easier.
 In this tutorial, we show how to use `FSDP APIs `__, for simple MNIST models that can be extended to other larger models such as `HuggingFace BERT models `__,
 `GPT 3 models up to 1T parameters `__ . The sample DDP MNIST code has been borrowed from `here `__.
 
@@ -18,7 +18,7 @@ How FSDP works
 --------------
 In `DistributedDataParallel `__, (DDP) training, each process/ worker owns a replica of the model and processes a batch of data, finally it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks.
-FSDP GPU memory footprint would be smaller than DDP across all workers. This makes the training of some very large models feasible and helps to fit larger models or batch sizes for our training job. This would come with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like communication and computation overlapping.
+When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation.
 
 .. figure:: /_static/img/distributed/fsdp_workflow.png
     :width: 100%
     :align: center
@@ -27,7 +27,7 @@ FSDP GPU memory footprint would be smaller than DDP across all workers. This mak
 
     FSDP Workflow
 
-At high level FSDP works as follow:
+At a high level FSDP works as follows:
 
 *In constructor*
 
@@ -48,11 +48,11 @@ At high level FSDP works as follow:
 How to use FSDP
 --------------
 
-Here we use a toy model to run training on MNIST dataset for demonstration purposes. Similarly the APIs and logic can be applied to larger models for training.
+Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well.
 
 *Setup*
 
-1.1 Install Pytorch along with Torchvision
+1.1 Install PyTorch along with Torchvision
 
 .. code-block:: bash
 
@@ -139,7 +139,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
         output = F.log_softmax(x, dim=1)
         return output
 
-2.2 define a train function
+2.2 Define a train function
 
 .. code-block:: python
@@ -189,7 +189,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
 
 2.4 Define a distributed train function that wraps the model in FSDP
 
-**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states. This is only available in Pytorch nightlies, current Pytorch release is 1.11 at the moment.**
+**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states. This is only available in PyTorch nightlies, current PyTorch release is 1.11 at the moment.**
 
 .. code-block:: python
 
@@ -319,7 +319,7 @@ Alternatively, we will look at adding the fsdp_auto_wrap_policy next and will di
             )
         )
 
-Following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from Pytorch Profiler.
+Following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch Profiler.
 
 .. figure:: /_static/img/distributed/FSDP_memory.gif
@@ -381,7 +381,7 @@ Applying the FSDP_auto_wrap_policy, the model would be as follows:
 
     CUDA event elapsed time on training loop 41.89130859375sec
 
-Following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from Pytorch Profiler.
+The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch Profiler.
 It can be observed that the peak memory usage on each device is smaller compared to FSDP without auto wrap policy applied, from ~75 MB to 66 MB.
 
 .. figure:: /_static/img/distributed/FSDP_autowrap.gif
@@ -423,7 +423,7 @@ Compare it with DDP, if in 2.4 we just normally wrap the model in ddp, saving th
 
     CUDA event elapsed time on training loop 39.77766015625sec
 
-Following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from Pytorch profiler.
+Following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch profiler.
 
 .. figure:: /_static/img/distributed/DDP_memory.gif
     :width: 100%

From 09dfc6222a1d23abe10431a5b5872f5c628f939f Mon Sep 17 00:00:00 2001
From: Danielle Pintz <38207072+daniellepintz@users.noreply.github.com>
Date: Sun, 15 Oct 2023 17:27:35 -0400
Subject: [PATCH 2/3] Remove outdated version mention

---
 intermediate_source/FSDP_tutorial.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/intermediate_source/FSDP_tutorial.rst b/intermediate_source/FSDP_tutorial.rst
index 17b601f5d30..1a76a7db668 100644
--- a/intermediate_source/FSDP_tutorial.rst
+++ b/intermediate_source/FSDP_tutorial.rst
@@ -189,7 +189,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
 
 2.4 Define a distributed train function that wraps the model in FSDP
 
-**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states. This is only available in PyTorch nightlies, current PyTorch release is 1.11 at the moment.**
+**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states.**
 
 .. code-block:: python

From 8e4992536713d9c067a7a0a3cadda2a76503c6d0 Mon Sep 17 00:00:00 2001
From: Danielle Pintz <38207072+daniellepintz@users.noreply.github.com>
Date: Sun, 15 Oct 2023 17:37:41 -0400
Subject: [PATCH 3/3] Update FSDP_tutorial.rst

---
 intermediate_source/FSDP_tutorial.rst | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/intermediate_source/FSDP_tutorial.rst b/intermediate_source/FSDP_tutorial.rst
index 1a76a7db668..26988eda900 100644
--- a/intermediate_source/FSDP_tutorial.rst
+++ b/intermediate_source/FSDP_tutorial.rst
@@ -250,7 +250,6 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
     if args.save_model:
         # use a barrier to make sure training is done on all ranks
         dist.barrier()
-        # state_dict for FSDP model is only available on Nightlies for now
         states = model.state_dict()
         if rank == 0:
             torch.save(states, "mnist_cnn.pt")
@@ -259,7 +258,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
 
 
-2.5 Finally parsing the arguments and setting the main function
+2.5 Finally parse the arguments and set up the main function
 
 .. code-block:: python
 
@@ -319,7 +318,7 @@ Alternatively, we will look at adding the fsdp_auto_wrap_policy next and will di
             )
         )
 
-Following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch Profiler.
+The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler.
 
 .. figure:: /_static/img/distributed/FSDP_memory.gif
@@ -329,7 +328,7 @@ Following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AW
 
     FSDP Peak Memory Usage
 
-*Applying fsdp_auto_wrap_policy* in FSDP otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency.
+Apply the *fsdp_auto_wrap_policy*; otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency.
 The way it works is that, suppose your model contains 100 Linear layers. If you do FSDP(model), there will only be one FSDP unit which wraps the entire model.
 In that case, the allgather would collect the full parameters for all 100 linear layers, and hence won't save CUDA memory for parameter sharding.
 Also, there is only one blocking allgather call for the all 100 linear layers, there will not be communication and computation overlapping between layers.
@@ -354,7 +353,7 @@ Finding an optimal auto wrap policy is challenging, PyTorch will add auto tuning
 
     model = FSDP(model,
         fsdp_auto_wrap_policy=my_auto_wrap_policy)
 
-Applying the FSDP_auto_wrap_policy, the model would be as follows:
+Applying the fsdp_auto_wrap_policy, the model would be as follows:
 
 .. code-block:: bash
 
@@ -381,7 +380,7 @@ Applying the FSDP_auto_wrap_policy, the model would be as follows:
 
     CUDA event elapsed time on training loop 41.89130859375sec
 
-The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch Profiler.
+The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler.
 It can be observed that the peak memory usage on each device is smaller compared to FSDP without auto wrap policy applied, from ~75 MB to 66 MB.
 
 .. figure:: /_static/img/distributed/FSDP_autowrap.gif
@@ -391,11 +390,11 @@ It can be observed that the peak memory usage on each device is smaller compared
 
     FSDP Peak Memory Usage using Auto_wrap policy
 
-*CPU Off-loading*: In case the model is very large that even with FSDP wouldn't fit into gpus, then CPU offload can be helpful here.
+*CPU Off-loading*: In case the model is so large that it cannot fit into the GPUs even with FSDP, CPU offloading can be helpful.
 
 Currently, only parameter and gradient CPU offload is supported. It can be enabled via passing in cpu_offload=CPUOffload(offload_params=True).
 
-Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. Default is None in which case there will be no offloading.
+Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. The default is None, in which case there will be no offloading.
 
 Using this feature may slow down the training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models.
 
@@ -409,7 +408,7 @@ In 2.4 we just add it to the FSDP wrapper
 
                 cpu_offload=CPUOffload(offload_params=True))
 
-Compare it with DDP, if in 2.4 we just normally wrap the model in ddp, saving the changes in “DDP_mnist.py”.
+Compare it with DDP, if in 2.4 we just normally wrap the model in DDP, saving the changes in “DDP_mnist.py”.
 
 .. code-block:: python
 
@@ -423,7 +422,7 @@ Compare it with DDP, if in 2.4 we just normally wrap the model in ddp, saving th
 
     CUDA event elapsed time on training loop 39.77766015625sec
 
-Following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 gpus captured from PyTorch profiler.
+The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch profiler.
 
 .. figure:: /_static/img/distributed/DDP_memory.gif
     :width: 100%
@@ -434,8 +433,8 @@ Following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS
 
 Considering the toy example and tiny MNIST model we defined here, we can observe the difference between peak memory usage of DDP and FSDP.
-In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP that shards the model parameter, optimizer states and gradients over DDP ranks.
+In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP which shards the model parameters, optimizer states and gradients over DDP ranks.
 The peak memory usage using FSDP with auto_wrap policy is the lowest followed by FSDP and DDP.
 
-Also, looking at timings, considering the small model and running the training on a single machine, FSDP with/out auto_wrap policy performed almost as fast as DDP.
+Also, looking at timings, considering the small model and running the training on a single machine, FSDP with and without auto_wrap policy performed almost as fast as DDP.
 This example does not represent most of the real applications, for detailed analysis and comparison between DDP and FSDP please refer to this `blog post `__ .
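
For readers who want to try the wrapping options touched by these patches outside the full MNIST script, below is a minimal, self-contained sketch of the pattern the tutorial describes: sharding with an auto wrap policy plus optional CPU offloading. It assumes the PyTorch 1.11-era names used in the tutorial (``fsdp_auto_wrap_policy`` and ``default_auto_wrap_policy``; later releases rename these to ``auto_wrap_policy`` and ``size_based_auto_wrap_policy``), and it substitutes a small hypothetical ``nn.Sequential`` model for the tutorial's ``Net``.

.. code-block:: python

    # Illustrative sketch only, not part of the tutorial script: FSDP wrapping with
    # an auto wrap policy and CPU offloading, assuming PyTorch 1.11-era API names.
    import functools
    import os

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp.fully_sharded_data_parallel import CPUOffload
    from torch.distributed.fsdp.wrap import default_auto_wrap_policy


    def wrap_model(rank: int, world_size: int) -> FSDP:
        # One process per GPU, mirroring the tutorial's setup()/mp.spawn pattern.
        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", "12355")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        # Hypothetical toy model standing in for the tutorial's Net.
        model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(rank)

        # Wrap submodules with more than 100 parameters into their own FSDP units,
        # and offload parameters (and, implicitly, gradients) to CPU.
        my_auto_wrap_policy = functools.partial(default_auto_wrap_policy, min_num_params=100)
        return FSDP(
            model,
            fsdp_auto_wrap_policy=my_auto_wrap_policy,
            cpu_offload=CPUOffload(offload_params=True),
        )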