Code for instruction-tuning Stable Diffusion.
Updated Feb 16, 2024 - Python
The easiest way to fine-tune Hugging Face video classification models
Miscellaneous utility functions, decorators, and modules for PyTorch and Accelerate that help speed up the implementation of new AI research
Optimal Sparse Decision Trees
Stable Diffusion, ControlNet, TensorRT, Accelerate
Unofficial DynaDUSt3R reimplementation trained on Stereo4D (research only).
arXiv '25: Dynamic Pyramid Network for Efficient Multimodal Large Language Model
A set of scripts and configurations for pretraining Large Language Models (LLMs)
GPT* - Training faster small transformers using ALiBi, parallel residual connections, and more!
Language Modeling Research Hub, a comprehensive compendium for enthusiasts and scholars delving into the fascinating realm of language models (LMs), with a particular focus on large language models (LLMs)
Training transformers on a supercomputer with the 🤗 stack and Slurm
Train and fine-tune diffusion models. Perform image-to-image class transfer experiments.
Communication-Efficient Diffusion Denoising Parallelization via Reuse-then-Predict Mechanism (NeurIPS '25)
VapourSynth version of DRBA
Experience the power of the FLUX.1-dev diffusion model combined with a massive collection of 255+ community-created LoRAs! This Gradio application provides an easy-to-use interface to explore diverse artistic styles directly on top of the FLUX base model.
A PyTorch example using the Hugging Face Accelerate library with the DogsVsCats dataset
Experimental demonstration of the Qwen/Qwen-Image-Edit-2511 model with lazy-loaded LoRA adapters supporting multi-image input editing. Users can upload one or more images (gallery format) and apply advanced edits such as pose transfer, anime conversion, or camera angle changes via natural language prompts. Features an integrated Rerun SDK.
A Gradio-based demo application for comparing state-of-the-art OCR models: DeepSeek-OCR, Dots.OCR, HunyuanOCR, and Nanonets-OCR2-3B.
Accelerate FLUX.2 inference from 18 s to 12 s (a 33% latency reduction) using SADA on an H200