
Is it possible to run inference using multiple GPUs? #2977

Closed

Description

@DevJunghun

Hi, thanks for sharing this library for using Stable Diffusion.
There is one question I want to ask.

As the title says, is it possible to run inference using multiple GPUs? If so, how?
Is there any documentation about inference on multiple GPUs?

OS: Linux Ubuntu 20.04
GPU: RTX 4090 (24GB) * n
RAM: 72GB
Python: 3.9.16

Assume I have two Stable Diffusion models (model 1, model 2).
ex) GPU 1 runs model 1, GPU 2 runs model 2

or

Assume I have two requests and want to process both in parallel (prompt 1, prompt 2).
ex) GPU 1 processes prompt 1, GPU 2 processes prompt 2

I think this could be solved by using threads and two pipelines, like below... right?

from threading import Thread
from diffusers import StableDiffusionPipeline

# One pipeline per GPU
p_01 = StableDiffusionPipeline.from_pretrained(model_01).to("cuda:0")
p_02 = StableDiffusionPipeline.from_pretrained(model_02).to("cuda:1")

# One worker thread per pipeline so both GPUs run concurrently
Thread(target=generate_pipe01, args=(prompt, negative_prompt)).start()
Thread(target=generate_pipe02, args=(prompt, negative_prompt)).start()
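
For concreteness, here is a fuller, self-contained version of what I have in mind. This is only a minimal sketch: load_pipe and generate are my own hypothetical helpers, and the model IDs are just examples.

from threading import Thread

import torch
from diffusers import StableDiffusionPipeline

def load_pipe(model_id, device):
    # Load a pipeline in half precision and pin it to one GPU
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    return pipe.to(device)

def generate(pipe, prompt, negative_prompt, out_path):
    # The whole denoising loop runs on whichever GPU holds this pipeline
    image = pipe(prompt, negative_prompt=negative_prompt).images[0]
    image.save(out_path)

# Example model IDs; any two checkpoints (or the same one twice) would do
p_01 = load_pipe("runwayml/stable-diffusion-v1-5", "cuda:0")
p_02 = load_pipe("stabilityai/stable-diffusion-2-1", "cuda:1")

# One thread per GPU, so prompt 1 and prompt 2 are processed in parallel
t_01 = Thread(target=generate, args=(p_01, "prompt 1", "low quality", "out_01.png"))
t_02 = Thread(target=generate, args=(p_02, "prompt 2", "low quality", "out_02.png"))
t_01.start()
t_02.start()
t_01.join()
t_02.join()

Since each pipeline lives entirely on its own device, I expect the two threads not to block each other much (the heavy work happens inside CUDA kernels rather than in Python), which is why I hope this scales across GPUs.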

I'll be waiting for your opinions. Thank you.
