# vladmandic/sdnext

SD.Next: All-in-one WebUI for AI generative image and video creation, captioning and processing
Docs | Wiki | Discord | Changelog
## SD.Next Features

Not all individual features are listed here; see the ChangeLog for the full list of changes
- Fully localized!
  ▹ English | Chinese | Russian | Spanish | German | French | Italian | Portuguese | Japanese | Korean
- Desktop and Mobile support!
- Multiple diffusion models!
- Multi-platform!
  ▹ Windows | Linux | MacOS | nVidia CUDA | AMD ROCm | Intel Arc / IPEX XPU | DirectML | OpenVINO | ONNX+Olive | ZLUDA
  ▹ Platform-specific auto-detection and tuning performed on install
- Optimized processing with latest torch developments, with built-in support for model compile and quantize
  ▹ Compile backends: Triton | StableFast | DeepCache | OneDiff | TeaCache | etc.
  ▹ Quantization methods: SDNQ | BitsAndBytes | Optimum-Quanto | TorchAO | LayerWise
- Interrogate/captioning with 150+ OpenCLiP models and 20+ built-in VLMs
- Built-in installer with automatic updates and dependency management
## Desktop interface

<div align="center"> <img src="https://github.com/user-attachments/assets/d6119a63-6ee5-4597-95f6-29ed0701d3b5" alt="screenshot-modernui-desktop" width="90%"> </div>

## Mobile interface

<div align="center"> <img src="https://github.com/user-attachments/assets/ced9fe0c-d2c2-46d1-94a7-8f9f2307ce38" alt="screenshot-modernui-mobile" width="35%"> </div>

For screenshots and information on other available themes, see Themes
## Model support

SD.Next supports a broad range of models: see supported models and model specs
## Platform support

- nVidia GPUs using CUDA libraries on both Windows and Linux
- AMD GPUs using ROCm libraries on Linux
  ▹ Support will be extended to Windows once AMD releases ROCm for Windows
- Intel Arc GPUs using OneAPI with IPEX XPU libraries on both Windows and Linux
- Any GPU compatible with DirectX on Windows using DirectML libraries
  ▹ This includes support for AMD GPUs that are not supported by native ROCm libraries
- Any GPU or device compatible with OpenVINO libraries on both Windows and Linux
- Apple M1/M2 on OSX using built-in support in Torch with MPS optimizations
- ONNX/Olive
- AMD GPUs on Windows using ZLUDA libraries

Plus Docker container recipes for: CUDA, ROCm, Intel IPEX and OpenVINO
## Getting started

- Get started with SD.Next by following the installation instructions
- For more details, check out the advanced installation guide
- List and explanation of command line arguments
- Install walkthrough video
> [!TIP]
> For platform-specific information, check out
> WSL | Intel Arc | DirectML | OpenVINO | ONNX & Olive | ZLUDA | AMD ROCm | MacOS | nVidia | Docker

> [!WARNING]
> If you run into issues, check out the troubleshooting and debugging guides
## Contributing

Please see Contributing for details on how to contribute to this project.
For any questions, reach out on Discord or open an issue or discussion.
## Credits
- Main credit goes to Automatic1111 WebUI for the original codebase
- Additional credits are listed in Credits
- Licenses for modules are listed in Licenses
## Evolution

<a href="https://star-history.com/#vladmandic/sdnext&Date"> <picture width=640> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=vladmandic/sdnext&type=Date&theme=dark" /> <img src="https://api.star-history.com/svg?repos=vladmandic/sdnext&type=Date" alt="stars" width="320"> </picture> </a>

## Docs
If you're unsure how to use a feature, the best place to start is the Docs, and if it's not covered there,
check the ChangeLog for when the feature was first introduced, as it will always have a short note on how to use it.