PyTorch's API. Rust's Safety. The Next-Gen AI Infrastructure.
RusTorch is a production-grade deep learning framework re-imagined in Rust. It combines the usability you love from PyTorch with the performance, safety, and concurrency guarantees of Rust. Say goodbye to GIL locks, GC pauses, and runtime errors. Say hello to RusTorch.
- Click the Codespaces badge above to launch the interactive RusTorch vs PyTorch demo directly from GitHub.
- The demo server auto-starts in Codespaces and exposes http://127.0.0.1:3003/.
- The dashboard includes real-time training curves, a speed-ratio timeline, pipeline stats, and one-click PROMO mode.
- ⚡ Blazing Fast: Powered by `Rayon` for parallel CPU execution and optimized CUDA kernels (coming soon) for GPU. Zero-cost abstractions mean you pay only for what you use.
- 🛡️ Memory Safe: Leveraging Rust's ownership model, RusTorch ensures memory safety without the overhead of a garbage collector. No more segfaults in production.
- 🧠 PyTorch-like API: If you know PyTorch, you already know RusTorch. We've meticulously mirrored the API design so you can switch instantly.
- 🔮 JIT Graph Optimization: A built-in XLA-style compiler traces your code, fuses operators (e.g., Conv2d + ReLU), and eliminates dead code for maximum efficiency.
- 🌐 Distributed Ready: Native `DistributedDataParallel` support designed for modern multi-GPU, multi-node training clusters.
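To make the operator-fusion idea concrete, here is a plain-Rust sketch of what fusing two element-wise operators buys you. This is an illustration only, not the RusTorch JIT API: `scale` stands in for an arbitrary upstream op, and the fused kernel avoids materializing an intermediate buffer.

```rust
// Toy illustration of operator fusion (not the RusTorch JIT API):
// the unfused pipeline allocates an intermediate Vec between ops,
// while the fused version applies both ops in a single pass.

fn scale(input: &[f32], factor: f32) -> Vec<f32> {
    input.iter().map(|x| x * factor).collect()
}

fn relu(input: &[f32]) -> Vec<f32> {
    input.iter().map(|x| x.max(0.0)).collect()
}

/// Fused scale + ReLU: one traversal, no intermediate allocation.
fn scale_relu_fused(input: &[f32], factor: f32) -> Vec<f32> {
    input.iter().map(|x| (x * factor).max(0.0)).collect()
}

fn main() {
    let data = [-1.0_f32, 0.5, 2.0];
    let unfused = relu(&scale(&data, 3.0));
    let fused = scale_relu_fused(&data, 3.0);
    assert_eq!(unfused, fused); // same result, one pass instead of two
    println!("{:?}", fused); // [0.0, 1.5, 6.0]
}
```

A tracing JIT applies the same transformation automatically: it records the op graph, spots the Conv2d → ReLU pattern, and emits one fused kernel.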
RusTorch is a modular workspace designed for scalability. We adopt a "Core + Plugins" architecture to ensure lightweight runtime and maximum extensibility.
```mermaid
mindmap
  root((RusTorch))
    Core(rustorch-core)
      Tensor Engine
      Autograd
      JIT Compiler
    NN(rustorch-nn)
      Layers
      Optimizers
      Loss Functions
    Backends
      CUDA(rustorch-cuda)
      WGPU(rustorch-wgpu)
      Vulkan(rustorch-vulkan)
      Metal(rustorch-metal)
    Ecosystem
      Vision(rustorch-vision)
      Text(rustorch-text)
      Audio(rustorch-audio)
    Interop
      PyTorch(rustorch-pytorch)
      ONNX(rustorch-onnx)
      WASM(rustorch-wasm)
```
- `rustorch-core`: The heart. N-dimensional tensors, the autograd engine, and the JIT compiler.
- `rustorch-nn`: Neural network building blocks (Conv2d, LSTM, Transformer), loss functions, and optimizers.
- `rustorch-vision`: Computer vision datasets (MNIST, CIFAR) and transforms.
- `rustorch-text`: NLP primitives, tokenizers, and vocabularies.
- `rustorch-cuda`: High-performance CUDA kernels.
- `rustorch-wasm`: Run your models directly in the browser.
- `rustorch-pytorch`: 🔥 NEW! Bridge to the PyTorch ecosystem. Load `.pth` files and interop with LibTorch.
- `rustorch-wgpu`: 🌐 NEW! WebGPU backend for browser and cross-platform GPU acceleration.
- `rustorch-vulkan`: 🎮 NEW! Vulkan compute backend for high-performance graphics hardware.
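The "Core + Plugins" split can be sketched with a trait-object registry. The names below (`Backend`, `RayonCpu`, `Registry`) are hypothetical and not the actual rustorch-core API; the point is that each backend crate implements a shared trait and registers itself with the core.

```rust
// Hypothetical sketch of a "Core + Plugins" registry; the real
// rustorch-core API may differ. Each backend crate registers an
// implementation of a shared `Backend` trait with the core.

trait Backend {
    fn name(&self) -> &'static str;
    fn matmul(&self, a: &[f32], b: &[f32], n: usize) -> Vec<f32>;
}

struct RayonCpu; // stand-in for a CPU backend

impl Backend for RayonCpu {
    fn name(&self) -> &'static str { "rayon-cpu" }
    // Naive n x n matrix multiply; a real backend would parallelize.
    fn matmul(&self, a: &[f32], b: &[f32], n: usize) -> Vec<f32> {
        let mut out = vec![0.0; n * n];
        for i in 0..n {
            for j in 0..n {
                for k in 0..n {
                    out[i * n + j] += a[i * n + k] * b[k * n + j];
                }
            }
        }
        out
    }
}

struct Registry {
    backends: Vec<Box<dyn Backend>>,
}

impl Registry {
    fn new() -> Self { Registry { backends: Vec::new() } }
    fn register(&mut self, b: Box<dyn Backend>) { self.backends.push(b); }
    fn get(&self, name: &str) -> Option<&dyn Backend> {
        self.backends.iter().find(|b| b.name() == name).map(|b| b.as_ref())
    }
}

fn main() {
    let mut registry = Registry::new();
    registry.register(Box::new(RayonCpu)); // a CUDA crate would do the same
    let cpu = registry.get("rayon-cpu").unwrap();
    // 2x2 identity times [[1, 2], [3, 4]] returns the second matrix unchanged
    let out = cpu.matmul(&[1.0, 0.0, 0.0, 1.0], &[1.0, 2.0, 3.0, 4.0], 2);
    assert_eq!(out, vec![1.0, 2.0, 3.0, 4.0]);
}
```

Keeping backends behind a trait object is what lets the runtime stay lightweight: you only link the backend crates you actually enable.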
| Feature | RusTorch | PyTorch | TensorFlow |
|---|---|---|---|
| Memory Safety | 🛡️ Guaranteed | ❌ (C++) | ❌ (C++) |
| GIL-Free | 🚀 Yes | ❌ No | ❌ No |
| WebGPU Support | 🌐 Native | 🚧 Experimental | 🚧 Experimental |
| Browser Inference | ✅ WASM + WebGPU | ❌ Heavy | ⚠️ TFLite |
| API Style | 🔥 Pythonic | 🔥 Pythonic | 📜 Verbose |
| Deployment | 📦 Single Binary | 🐍 Python Env | 🐍 Python Env |
RusTorch isn't just a library; it's a universal tensor compiler.
```mermaid
graph TD
  %% Styling
  classDef core fill:#e85d04,stroke:#333,stroke-width:2px,color:white;
  classDef backend fill:#8338ec,stroke:#333,stroke-width:2px,color:white;
  classDef interop fill:#3a86ff,stroke:#333,stroke-width:2px,color:white;
  classDef user fill:#fb5607,stroke:#333,stroke-width:2px,color:white;

  User["👤 User Application"]:::user --> API["🔥 RusTorch API"]:::core
  API --> Core["🧠 rustorch-core"]:::core

  subgraph Compute_Backends ["⚙️ Compute Backends"]
    direction TB
    Core -.-> CPU["🖥️ Rayon CPU"]:::backend
    Core -.-> CUDA["🚀 CUDA (NVIDIA)"]:::backend
    Core -.-> WGPU["🌐 WebGPU (Browser)"]:::backend
    Core -.-> Vulkan["🎮 Vulkan (Cross-Platform)"]:::backend
  end

  subgraph Interoperability ["🔗 Interoperability"]
    direction TB
    PyTorch["🔥 PyTorch Ecosystem"]:::interop <-->|rustorch-pytorch| Core
    Model["💾 .pth Models"]:::interop <-->|Load/Save| Core
  end
```
Seamlessly switch between RusTorch and PyTorch. No more rewriting models from scratch.
- 🔄 Zero-Copy Conversion: Convert `rustorch::Tensor` <-> `torch::Tensor` instantly.
- 💾 Model Loading: Load pre-trained `.pth` weights directly into RusTorch models.
- 🛡️ Operator Fallback: Fall back to PyTorch's battle-tested operators when a RusTorch implementation is missing.
```rust
use rustorch_pytorch::PyTorchAdapter;

// Load a PyTorch model checkpoint
let weights = PyTorchAdapter::load_state_dict("resnet18.pth")?;

// Run inference in RusTorch
let input = Tensor::randn(&[1, 3, 224, 224]);
// let output = model.forward(&input);
```

Unlock the power of your GPU, anywhere.
- WebGPU Backend: Run large language models directly in the browser with near-native performance.
- Vulkan Backend: Cross-vendor GPU support (AMD, Intel, NVIDIA, Mobile) with low-level control.
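With several GPU backends available, a common pattern is a preference-ordered fallback that degrades gracefully to CPU. The sketch below uses a hypothetical `Device` enum and `select_device` function, not the actual RusTorch device API:

```rust
// Hypothetical device-selection sketch (not the RusTorch API):
// pick the first available backend in preference order, falling
// back to CPU when no GPU backend is present.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Device { Cuda, Vulkan, WebGpu, Cpu }

fn select_device(available: &[Device]) -> Device {
    // Preference order: CUDA > Vulkan > WebGPU > CPU.
    for preferred in [Device::Cuda, Device::Vulkan, Device::WebGpu] {
        if available.contains(&preferred) {
            return preferred;
        }
    }
    Device::Cpu // the CPU backend is always available
}

fn main() {
    assert_eq!(select_device(&[Device::Vulkan, Device::WebGpu]), Device::Vulkan);
    assert_eq!(select_device(&[]), Device::Cpu);
    println!("selected: {:?}", select_device(&[Device::WebGpu])); // selected: WebGpu
}
```

The same model code then runs unchanged whether it lands on CUDA in a cluster, Vulkan on a desktop GPU, or WebGPU in a browser tab.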
Add RusTorch to your Cargo.toml:
```toml
[dependencies]
rus-torch = "0.1.2"
```

```mermaid
sequenceDiagram
  autonumber
  participant Data as 💿 Dataset
  participant Model as 🧠 Model
  participant Loss as 📉 Loss Fn
  participant Optim as ⚙️ Optimizer

  loop Training Epochs
    Data->>Model: Forward(Batch)
    Model->>Loss: Compute Loss(Pred, Target)
    Loss-->>Model: Backward() (Compute Gradients)
    Optim->>Model: Step() (Update Weights)
    Optim->>Model: ZeroGrad()
  end
```
```rust
use rus_torch::core::Tensor;
use rus_torch::nn::{Linear, Module, CrossEntropyLoss, SGD};

fn main() {
    // 1. Define a simple model
    let fc = Linear::new(10, 2); // Input: 10 features, Output: 2 classes

    // 2. Set up the loss function & optimizer
    let criterion = CrossEntropyLoss::new();
    let mut optimizer = SGD::new(fc.parameters(), 0.01);

    // 3. Dummy data (batch size: 1, features: 10)
    let input = Tensor::new(&[0.5; 10], &[1, 10]).set_requires_grad(true);
    let target = Tensor::new(&[1.0], &[1]); // Target class 1

    // 4. Training step
    optimizer.zero_grad();
    let output = fc.forward(&input);
    let loss = criterion.forward(&output, &target);
    loss.backward();
    optimizer.step();

    println!("🎉 Training step complete! Loss: {:?}", loss);
}
```

- Zero to Hero Tutorial: The best place to start for beginners.
- Architecture Guide: Deep dive into RusTorch's internals.
- Examples: Real-world examples including CNNs, RNNs, and JIT usage.
We are building the future of AI in Rust, and we need YOU! Whether it's adding new operators, fixing bugs, or improving docs, all contributions are welcome.
Check out CONTRIBUTING.md to get started.
RusTorch is open-source software licensed under the MIT or Apache-2.0 license.