# Stable Diffusion (SDXL/Refiner) WebUI Cloud Inference Extension
This extension enables faster image generation without the need for an expensive GPU, and it integrates seamlessly with the AUTOMATIC1111 UI.
Feature | Support | Limitations |
---|---|---|
txt2img | ✅ | |
txt2img_hires.fix | ✅ | |
txt2img_sdxl_refiner | ✅ | |
txt2img_controlnet | ✅ | |
img2img | ✅ | |
img2img_inpaint | ✅ | |
img2img_sdxl_refiner | ✅ | |
img2img_controlnet | ✅ | |
extras upscale | ✅ | |
vae model | ✅ | |
scripts - X/Y/Z plot | ✅ | |
scripts - Prompt matrix | ✅ | |
scripts - Prompt from file | ✅ | |
1. Open omniinfer.io in your browser.
2. Sign in with "Google Login" or "Github Login".
3. Go back to the **Cloud Inference** tab of stable-diffusion-webui.
4. Go back to the **Txt2Img** tab of stable-diffusion-webui.
You are now ready to give it a try and enjoy your creative journey.
You are also welcome to discuss your experience, share suggestions, and provide feedback on our Discord channel.
You can also combine the VAE feature with the X/Y/Z plot script.
The AUTOMATIC1111 webui loads a model on startup. On low-memory machines such as the MacBook Air, this makes performance suboptimal. To address this, we have developed a stripped-down, minimal-size model. You can use the following commands to enable it.

This reduces memory usage from 4.8 GB to 739 MB.
```bash
wget -O ./models/Stable-diffusion/tiny.yaml https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.yaml
wget -O ./models/Stable-diffusion/tiny.safetensors https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.safetensors
```

Then launch the webui with the following flag so it loads the tiny checkpoint:

```bash
--ckpt=/stable-diffusion-webui/models/Stable-diffusion/tiny.safetensors
```
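One way to wire this up on Linux/macOS is a sketch like the following, assuming the standard `webui.sh` / `webui-user.sh` launcher and the webui repository root as the working directory (adjust the path for your own checkout, or set the flag in `webui-user.bat` on Windows):

```shell
# In webui-user.sh: point the webui at the tiny checkpoint on startup.
# The relative path assumes you run webui.sh from the repository root.
export COMMANDLINE_ARGS="--ckpt=./models/Stable-diffusion/tiny.safetensors"

# Then launch as usual:
./webui.sh
```

Setting the flag through `COMMANDLINE_ARGS` keeps it persistent across restarts, instead of having to pass it on the command line each time.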