How much GPU memory does the DreamBooth training example for FLUX.1 [dev] need? #9233
Replies: 2 comments 3 replies
-
Same error here. Did you solve it?
-
I also ran this and hit an out-of-memory (OOM) error on a single A800 GPU, but training worked fine with 8 A800 GPUs. I asked GPT, and the explanation was that multi-GPU training distributes the memory load of model parameters and activations across the GPUs; a single card can't hold all of it, so it crashes.
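A back-of-envelope estimate makes that explanation concrete. Assuming roughly 12B parameters (the commonly cited size for FLUX.1 [dev]; treat it as an assumption) and naive full fine-tuning with bf16 weights, bf16 gradients, and two fp32 Adam moments, the training state alone already exceeds a single 80 GB card, before counting any activations:

```python
# Rough memory estimate for naive full fine-tuning of a ~12B-parameter model.
# Parameter count and per-tensor byte costs are assumptions, not measured values.
GiB = 1024 ** 3

def training_memory_gib(n_params, weight_bytes=2, grad_bytes=2, optim_bytes=8):
    """bf16 weights (2 B) + bf16 grads (2 B) + two fp32 Adam moments (8 B) per param."""
    return n_params * (weight_bytes + grad_bytes + optim_bytes) / GiB

n = 12e9  # assumed parameter count for FLUX.1 [dev]
full = training_memory_gib(n)
print(f"single GPU: ~{full:.0f} GiB")             # ~134 GiB, before activations
print(f"sharded over 8 GPUs: ~{full / 8:.0f} GiB")  # ~17 GiB per card
```

This is why the same run that OOMs on one 80 GB card fits comfortably when the state is sharded across eight of them.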
-
Hello.
I tried to run the example training from README_flux.md, e.g.:
accelerate launch train_dreambooth_flux.py
However, I get an out-of-memory error.
How much GPU memory is needed for training? I'm currently using a single A100 80G.
Is there a way to train with less GPU memory?
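One thing worth trying before moving to multiple GPUs: the diffusers DreamBooth scripts generally expose memory-saving options. The flag names below follow the pattern of other diffusers DreamBooth scripts and are an assumption here; verify them against README_flux.md and `train_dreambooth_flux.py --help` before running:

```shell
# Sketch of a lower-memory invocation (flags assumed, check README_flux.md):
# - bf16 mixed precision halves weight/activation memory vs fp32
# - gradient checkpointing trades compute for activation memory
# - 8-bit Adam shrinks optimizer state
accelerate launch train_dreambooth_flux.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
  --instance_data_dir="dog" \
  --output_dir="flux-dreambooth" \
  --instance_prompt="a photo of sks dog" \
  --mixed_precision="bf16" \
  --gradient_checkpointing \
  --use_8bit_adam \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4
```

Even with all of these, full fine-tuning of FLUX.1 [dev] may not fit in 80 GB; the LoRA variant of the script (if available in your diffusers version) is the usual fallback for a single card.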