@@ -2,7 +2,7 @@ Getting Started with Distributed Data Parallel
=================================================
**Author**: `Shen Li <https://mrshenli.github.io/>`_

- **Edited by**: `Joe Zhu <https://github.com/gunandrose4u>`_
+ **Edited by**: `Joe Zhu <https://github.com/gunandrose4u>`_, `Chirag Pandya <https://github.com/c-p-i-o>`__

.. note::
   |edit| View and edit this tutorial in `github <https://github.com/pytorch/tutorials/blob/main/intermediate_source/ddp_tutorial.rst>`__.
@@ -15,24 +15,30 @@ Prerequisites:
`DistributedDataParallel <https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel>`__
- (DDP) implements data parallelism at the module level which can run across
- multiple machines. Applications using DDP should spawn multiple processes and
- create a single DDP instance per process. DDP uses collective communications in the
+ (DDP) is a powerful module in PyTorch that lets you parallelize your model across
+ multiple machines, making it well suited for large-scale deep learning applications.
+ To use DDP, you spawn multiple processes and create a single DDP instance per process.
+
+ But how does it work? DDP uses collective communications from the
`torch.distributed <https://pytorch.org/tutorials/intermediate/dist_tuto.html>`__
- package to synchronize gradients and buffers. More specifically, DDP registers
- an autograd hook for each parameter given by ``model.parameters()`` and the
- hook will fire when the corresponding gradient is computed in the backward
- pass. Then DDP uses that signal to trigger gradient synchronization across
- processes. Please refer to
- `DDP design note <https://pytorch.org/docs/master/notes/ddp.html>`__ for more details.
+ package to synchronize gradients and buffers across all processes. Each process keeps
+ its own copy of the model, but the copies are trained together as if the model lived on a single machine.
+
+ To make this happen, DDP registers an autograd hook for each parameter in the model.
+ When the backward pass runs, the hook fires and triggers gradient synchronization across all processes.
+ This ensures that every process ends up with the same gradients, which are then used to update the model.
+
+ For more information on how DDP works and how to use it effectively, see the
+ `DDP design note <https://pytorch.org/docs/master/notes/ddp.html>`__.
+ With DDP, you can train your models faster and more efficiently.
+
+ The recommended way to use DDP is to spawn one process for each model replica. A model replica can span
+ multiple devices. DDP processes can be placed on the same machine or across machines. Note that GPU devices
+ cannot be shared across DDP processes (i.e. one GPU per DDP process).


- The recommended way to use DDP is to spawn one process for each model replica,
- where a model replica can span multiple devices. DDP processes can be
- placed on the same machine or across machines, but GPU devices cannot be
- shared across processes. This tutorial starts from a basic DDP use case and
- then demonstrates more advanced use cases including checkpointing models and
- combining DDP with model parallel.
+ In this tutorial, we'll start with a basic DDP use case and then demonstrate more advanced use cases,
+ including checkpointing models and combining DDP with model parallel.
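
The pattern described above can be condensed into a few lines. The sketch below is illustrative only (it uses the ``gloo`` backend on CPU so it runs anywhere); the tutorial's full, annotated example follows in the Basic Use Case section.

.. code:: python

    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def worker(rank, world_size):
        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", "29500")
        # one process per model replica; all processes join the same group
        dist.init_process_group("gloo", rank=rank, world_size=world_size)
        model = nn.Linear(10, 10)      # each process builds its own replica
        ddp_model = DDP(model)         # DDP registers per-parameter autograd hooks
        out = ddp_model(torch.randn(20, 10))
        out.sum().backward()           # hooks fire here and all-reduce the gradients
        dist.destroy_process_group()

    if __name__ == "__main__":
        mp.spawn(worker, args=(2,), nprocs=2, join=True)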


.. note::
@@ -43,25 +49,22 @@ combining DDP with model parallel.
Comparison between ``DataParallel`` and ``DistributedDataParallel``
-------------------------------------------------------------------

- Before we dive in, let's clarify why, despite the added complexity, you would
- consider using ``DistributedDataParallel`` over ``DataParallel``:
+ Before we dive in, let's clarify why you would consider using ``DistributedDataParallel``
+ over ``DataParallel``, despite its added complexity:

- - First, ``DataParallel`` is single-process, multi-thread, and only works on a
- single machine, while ``DistributedDataParallel`` is multi-process and works
- for both single- and multi- machine training. ``DataParallel`` is usually
- slower than ``DistributedDataParallel`` even on a single machine due to GIL
- contention across threads, per-iteration replicated model, and additional
- overhead introduced by scattering inputs and gathering outputs.
+ - First, ``DataParallel`` is single-process and multi-threaded, but it only works on a
+ single machine. In contrast, ``DistributedDataParallel`` is multi-process and supports
+ both single- and multi-machine training.
+ Due to GIL contention across threads, the per-iteration replicated model, and the additional overhead introduced by
+ scattering inputs and gathering outputs, ``DataParallel`` is usually
+ slower than ``DistributedDataParallel`` even on a single machine; a brief usage contrast is sketched after this list.
- Recall from the
`prior tutorial <https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html>`__
that if your model is too large to fit on a single GPU, you must use **model parallel**
to split it across multiple GPUs. ``DistributedDataParallel`` works with
- **model parallel**; ``DataParallel`` does not at this time. When DDP is combined
+ **model parallel**, while ``DataParallel`` does not at this time. When DDP is combined
with model parallel, each DDP process would use model parallel, and all processes
collectively would use data parallel.
- - If your model needs to span multiple machines or if your use case does not fit
- into data parallelism paradigm, please see `the RPC API <https://pytorch.org/docs/stable/rpc.html>`__
- for more generic distributed training support.
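
To make the API difference concrete, here is a brief, illustrative contrast. ``DataParallel`` is a one-liner inside a single process, while ``DistributedDataParallel`` assumes one process per GPU with an initialized process group; the ``rank`` and ``world_size`` names below are placeholders for values supplied by your launcher, not part of the tutorial's code:

.. code:: python

    import torch.nn as nn
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def wrap_with_data_parallel(model: nn.Module) -> nn.Module:
        # single process: replicas are created on every visible GPU each iteration,
        # and inputs/outputs are scattered and gathered across threads
        return nn.DataParallel(model.cuda())

    def wrap_with_ddp(model: nn.Module, rank: int, world_size: int) -> nn.Module:
        # one process per GPU: each process first joins the same process group
        # (MASTER_ADDR/MASTER_PORT must be set, as shown in the basic example below)
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        return DDP(model.to(rank), device_ids=[rank])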

Basic Use Case
--------------
@@ -141,6 +144,7 @@ different DDP processes starting from different initial model parameter values.
    optimizer.step()

    cleanup()
+   print(f"Finished running basic DDP example on rank {rank}.")


def run_demo(demo_fn, world_size):
@@ -154,7 +158,7 @@ provides a clean API as if it were a local model. Gradient synchronization
communications take place during the backward pass and overlap with the
backward computation. When the ``backward()`` returns, ``param.grad`` already
contains the synchronized gradient tensor. For basic use cases, DDP only
- requires a few more LoCs to set up the process group. When applying DDP to more
+ requires a few more lines of code to set up the process group. When applying DDP to more
advanced use cases, some caveats require caution.
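
One way to observe the synchronization described above is to inspect gradients after ``backward()`` returns. The snippet below is an illustrative check rather than part of the tutorial's example; it assumes the ``ddp_model``, ``loss_fn``, ``labels``, and ``rank`` variables from the basic example:

.. code:: python

    import torch

    loss_fn(ddp_model(torch.randn(20, 10)), labels).backward()
    # every rank prints the same values, because DDP all-reduced the gradients
    for name, param in ddp_model.named_parameters():
        print(f"rank {rank}: {name} grad norm = {param.grad.norm().item():.6f}")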

Skewed Processing Speeds
@@ -179,13 +183,14 @@ It's common to use ``torch.save`` and ``torch.load`` to checkpoint modules
during training and recover from checkpoints. See
`SAVING AND LOADING MODELS <https://pytorch.org/tutorials/beginner/saving_loading_models.html>`__
for more details. When using DDP, one optimization is to save the model in
- only one process and then load it to all processes, reducing write overhead.
- This is correct because all processes start from the same parameters and
+ only one process and then load it on all processes, reducing write overhead.
+ This works because all processes start from the same parameters and
gradients are synchronized in backward passes, and hence optimizers should keep
- setting parameters to the same values. If you use this optimization, make sure no process starts
+ setting parameters to the same values.
+ If you use this optimization (i.e. save on one process but restore on all), make sure no process starts
loading before the saving is finished. Additionally, when
loading the module, you need to provide an appropriate ``map_location``
- argument to prevent a process from stepping into others' devices. If ``map_location``
+ argument to prevent processes from stepping into others' devices. If ``map_location``
is missing, ``torch.load`` will first load the module to CPU and then copy each
parameter to where it was saved, which would result in all processes on the
same machine using the same set of devices. For more advanced failure recovery
@@ -218,7 +223,7 @@ and elasticity support, please refer to `TorchElastic <https://pytorch.org/elast

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
-
+
    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
@@ -234,6 +239,7 @@ and elasticity support, please refer to `TorchElastic <https://pytorch.org/elast
    os.remove(CHECKPOINT_PATH)

    cleanup()
+   print(f"Finished running DDP checkpoint example on rank {rank}.")

Combining DDP with Model Parallelism
------------------------------------
@@ -285,6 +291,7 @@ either the application or the model ``forward()`` method.
    optimizer.step()

    cleanup()
+   print(f"Finished running DDP with model parallel example on rank {rank}.")


if __name__ == "__main__":
@@ -323,15 +330,14 @@ Let's still use the Toymodel example and create a file named ``elastic_ddp.py``.


def demo_basic():
+   torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    print(f"Start running basic DDP example on rank {rank}.")
-
    # create model and move it to GPU with id rank
    device_id = rank % torch.cuda.device_count()
    model = ToyModel().to(device_id)
    ddp_model = DDP(model, device_ids=[device_id])
-
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
@@ -341,22 +347,23 @@ Let's still use the Toymodel example and create a file named ``elastic_ddp.py``.
    loss_fn(outputs, labels).backward()
    optimizer.step()
    dist.destroy_process_group()
-
+   print(f"Finished running basic DDP example on rank {rank}.")
+
if __name__ == "__main__":
    demo_basic()

- One can then run a `torch elastic/torchrun <https://pytorch.org/docs/stable/elastic/quickstart.html>`__ command
+ One can then run a `torch elastic/torchrun <https://pytorch.org/docs/stable/elastic/quickstart.html>`__ command
on all nodes to initialize the DDP job created above:

.. code:: bash

    torchrun --nnodes=2 --nproc_per_node=8 --rdzv_id=100 --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR:29400 elastic_ddp.py

- We are running the DDP script on two hosts, and each host we run with 8 processes, aka, we
- are running it on 16 GPUs. Note that ``$MASTER_ADDR`` must be the same across all nodes.
+ In the example above, we are running the DDP script on two hosts, with 8 processes on each host. That is, we
+ are running this job on 16 GPUs. Note that ``$MASTER_ADDR`` must be the same across all nodes.

- Here torchrun will launch 8 process and invoke ``elastic_ddp.py``
- on each process on the node it is launched on, but user also needs to apply cluster
+ Here ``torchrun`` will launch 8 processes and invoke ``elastic_ddp.py``
+ on each process on the node it is launched on, but the user also needs to apply cluster
management tools like slurm to actually run this command on 2 nodes.

For example, on a SLURM enabled cluster, we can write a script to run the command above
@@ -368,8 +375,8 @@ and set ``MASTER_ADDR`` as:


Then we can just run this script using the SLURM command: ``srun --nodes=2 ./torchrun_script.sh``.
- Of course, this is just an example; you can choose your own cluster scheduling tools
- to initiate the torchrun job.

- For more information about Elastic run, one can check this
- `quick start document <https://pytorch.org/docs/stable/elastic/quickstart.html>`__ to learn more.
+ This is just an example; you can choose your own cluster scheduling tools to initiate the ``torchrun`` job.
+
+ For more information about Elastic run, please see the
+ `quick start document <https://pytorch.org/docs/stable/elastic/quickstart.html>`__.