
RusTorch 🦀🔥


PyTorch's API. Rust's Safety. The Next-Gen AI Infrastructure.

RusTorch is a production-grade deep learning framework re-imagined in Rust. It combines the usability you love from PyTorch with the performance, safety, and concurrency guarantees of Rust. Say goodbye to GIL locks, GC pauses, and runtime errors. Say hello to RusTorch.


🎬 Interactive Demo

Open in GitHub Codespaces CI

  • Click the Codespaces badge above to launch the interactive RusTorch vs PyTorch demo directly from GitHub.
  • The demo server auto-starts in Codespaces and exposes http://127.0.0.1:3003/.
  • The dashboard includes real-time training curves, a speed-ratio timeline, pipeline stats, and a one-click PROMO mode.

🚀 Why RusTorch?

  • ⚡ Blazing Fast: Powered by Rayon for parallel CPU execution and optimized CUDA kernels (coming soon) for GPU. Zero-cost abstractions mean you pay only for what you use.
  • 🛡️ Memory Safe: Leveraging Rust's ownership model, RusTorch ensures memory safety without the overhead of a garbage collector. No more segfaults in production.
  • 🧠 PyTorch-like API: If you know PyTorch, you already know RusTorch. We've meticulously mirrored the API design so you can switch instantly.
  • 🔮 JIT Graph Optimization: Built-in XLA-style compiler that traces your code, fuses operators (e.g., Conv2d + ReLU), and eliminates dead code for maximum efficiency.
  • 🌐 Distributed Ready: Native DistributedDataParallel support designed for modern multi-GPU, multi-node training clusters.

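To make the operator-fusion bullet concrete, here is a small dependency-free sketch (plain Rust, not the RusTorch API) of what fusing a convolution with a ReLU buys you: the fused version clamps each element as it is produced, skipping the intermediate buffer the two-pass version must allocate and re-read.

```rust
/// Naive two-pass version: 1-D convolution into a temporary buffer,
/// then ReLU applied in a second sweep.
fn conv1d_then_relu(signal: &[f32], kernel: &[f32]) -> Vec<f32> {
    let n = signal.len() - kernel.len() + 1;
    let conv: Vec<f32> = (0..n)
        .map(|i| kernel.iter().zip(&signal[i..]).map(|(k, s)| k * s).sum())
        .collect();
    conv.into_iter().map(|x: f32| x.max(0.0)).collect()
}

/// Fused version: ReLU applied inline, no intermediate buffer.
fn conv1d_relu_fused(signal: &[f32], kernel: &[f32]) -> Vec<f32> {
    let n = signal.len() - kernel.len() + 1;
    (0..n)
        .map(|i| {
            let acc: f32 = kernel.iter().zip(&signal[i..]).map(|(k, s)| k * s).sum();
            acc.max(0.0) // clamp as soon as the value exists
        })
        .collect()
}

fn main() {
    let signal = [1.0, -2.0, 3.0, -4.0, 5.0];
    let kernel = [1.0, 0.5];
    // Both versions agree; the fused one touches memory half as often.
    assert_eq!(conv1d_then_relu(&signal, &kernel), conv1d_relu_fused(&signal, &kernel));
    println!("{:?}", conv1d_relu_fused(&signal, &kernel)); // [0.0, 0.0, 1.0, 0.0]
}
```

A JIT compiler performs this rewrite automatically on the traced graph; the hand-fused function above just shows the shape of the transformation.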
📦 Ecosystem & Architecture

RusTorch is a modular workspace designed for scalability. We adopt a "Core + Plugins" architecture to ensure a lightweight runtime and maximum extensibility.

🧩 Project Structure

mindmap
  root((RusTorch))
    Core(rustorch-core)
      Tensor Engine
      Autograd
      JIT Compiler
    NN(rustorch-nn)
      Layers
      Optimizers
      Loss Functions
    Backends
      CUDA(rustorch-cuda)
      WGPU(rustorch-wgpu)
      Vulkan(rustorch-vulkan)
      Metal(rustorch-metal)
    Ecosystem
      Vision(rustorch-vision)
      Text(rustorch-text)
      Audio(rustorch-audio)
    Interop
      PyTorch(rustorch-pytorch)
      ONNX(rustorch-onnx)
      WASM(rustorch-wasm)
  • rustorch-core: The heart. N-dimensional Tensors, Autograd engine, and JIT compiler.
  • rustorch-nn: Neural network building blocks (Conv2d, LSTM, Transformer), Loss functions, and Optimizers.
  • rustorch-vision: Computer vision datasets (MNIST, CIFAR) and transforms.
  • rustorch-text: NLP primitives, Tokenizers, and Vocab.
  • rustorch-cuda: High-performance CUDA kernels.
  • rustorch-wasm: Run your models directly in the browser.
  • rustorch-pytorch: 🌉 NEW! Bridge to the PyTorch ecosystem. Load .pth files and interop with LibTorch.
  • rustorch-wgpu: 🌐 NEW! WebGPU backend for browser and cross-platform GPU acceleration.
  • rustorch-vulkan: 🎮 NEW! Vulkan compute backend for high-performance graphics hardware.

✅ Feature Matrix

| Feature           | RusTorch          | PyTorch          | TensorFlow       |
| ----------------- | ----------------- | ---------------- | ---------------- |
| Memory Safety     | 🛡️ Guaranteed     | ❌ (C++)         | ❌ (C++)         |
| GIL-Free          | 🚀 Yes            | ❌ No            | ❌ No            |
| WebGPU Support    | 🌐 Native         | 🚧 Experimental  | 🚧 Experimental  |
| Browser Inference | ✅ WASM + WebGPU  | ❌ Heavy         | ✅ TFLite        |
| API Style         | 🔥 Pythonic       | 🔥 Pythonic      | 📉 Verbose       |
| Deployment        | 📦 Single Binary  | 🐍 Python Env    | 🐍 Python Env    |
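The GIL-free row deserves a concrete illustration. In Python, the interpreter lock serializes CPU-bound threads; in Rust, threads genuinely run in parallel with compile-time data-race safety. The sketch below uses only std::thread (RusTorch itself uses Rayon, for which this is a stand-in) to sum disjoint chunks of one buffer on several cores at once.

```rust
use std::thread;

/// Sum `data` by splitting it into `chunks` pieces, one scoped thread each.
/// Scoped threads may borrow `data` directly; the compiler proves the
/// borrows cannot outlive it -- no GC, no lock, no data race.
fn parallel_sum(data: &[f64], chunks: usize) -> f64 {
    let chunk_len = ((data.len() + chunks - 1) / chunks).max(1);
    thread::scope(|s| {
        data.chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<f64>()))
            .collect::<Vec<_>>() // spawn every thread before joining any
            .into_iter()
            .map(|h| h.join().unwrap())
            .sum()
    })
}

fn main() {
    let data: Vec<f64> = (1..=1000).map(|x| x as f64).collect();
    assert_eq!(parallel_sum(&data, 4), 500500.0);
    println!("sum = {}", parallel_sum(&data, 4)); // sum = 500500
}
```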

๐ŸŒ Universal Architecture

RusTorch isn't just a library; it's a universal tensor compiler.

graph TD
    %% Styling
    classDef core fill:#e85d04,stroke:#333,stroke-width:2px,color:white;
    classDef backend fill:#8338ec,stroke:#333,stroke-width:2px,color:white;
    classDef interop fill:#3a86ff,stroke:#333,stroke-width:2px,color:white;
    classDef user fill:#fb5607,stroke:#333,stroke-width:2px,color:white;

    User["👤 User Application"]:::user --> API["🔥 RusTorch API"]:::core
    API --> Core["🧠 rustorch-core"]:::core

    subgraph Compute_Backends ["⚙️ Compute Backends"]
        direction TB
        Core -.-> CPU["🖥️ Rayon CPU"]:::backend
        Core -.-> CUDA["🚀 CUDA (NVIDIA)"]:::backend
        Core -.-> WGPU["🌐 WebGPU (Browser)"]:::backend
        Core -.-> Vulkan["🎮 Vulkan (Cross-Platform)"]:::backend
    end

    subgraph Interoperability ["🔌 Interoperability"]
        direction TB
        PyTorch["🔥 PyTorch Ecosystem"]:::interop <-->|rustorch-pytorch| Core
        Model["💾 .pth Models"]:::interop <-->|Load/Save| Core
    end

🌉 PyTorch Bridge (rustorch-pytorch)

Seamlessly switch between RusTorch and PyTorch. No more rewriting models from scratch.

  • 🔄 Zero-Copy Conversion: Convert rustorch::Tensor <-> torch::Tensor instantly.
  • 💾 Model Loading: Load pre-trained .pth weights directly into RusTorch models.
  • 🛡️ Operator Fallback: Use PyTorch's battle-tested operators when a RusTorch implementation is missing.
use rustorch_pytorch::PyTorchAdapter;
use rus_torch::core::Tensor;

// Load a PyTorch model checkpoint
let weights = PyTorchAdapter::load_state_dict("resnet18.pth")?;

// Run inference in RusTorch (model construction from `weights` omitted for brevity)
let input = Tensor::randn(&[1, 3, 224, 224]);
// let output = model.forward(&input);

🎮 Graphics-Ready Compute (rustorch-wgpu & rustorch-vulkan)

Unlock the power of your GPU, anywhere.

  • WebGPU Backend: Run large language models directly in the browser with near-native performance.
  • Vulkan Backend: Cross-vendor GPU support (AMD, Intel, NVIDIA, Mobile) with low-level control.

🛠️ Quick Start

Add RusTorch to your Cargo.toml:

[dependencies]
rus-torch = "0.1.2"

🔥 Train a Model in 30 Seconds

sequenceDiagram
    autonumber
    participant Data as 💿 Dataset
    participant Model as 🧠 Model
    participant Loss as 📉 Loss Fn
    participant Optim as ⚙️ Optimizer

    loop Training Epochs
        Data->>Model: Forward(Batch)
        Model->>Loss: Compute Loss(Pred, Target)
        Loss-->>Model: Backward() (Compute Gradients)
        Optim->>Model: Step() (Update Weights)
        Optim->>Model: ZeroGrad()
    end

use rus_torch::core::Tensor;
use rus_torch::nn::{Linear, Module, CrossEntropyLoss, SGD};

fn main() {
    // 1. Define a simple model
    let fc = Linear::new(10, 2); // Input: 10, Output: 2 classes
    
    // 2. Setup Loss & Optimizer
    let criterion = CrossEntropyLoss::new();
    let mut optimizer = SGD::new(fc.parameters(), 0.01);

    // 3. Dummy Data (Batch Size: 1, Features: 10)
    let input = Tensor::new(&[0.5; 10], &[1, 10]).set_requires_grad(true);
    let target = Tensor::new(&[1.0], &[1]); // Target Class 1

    // 4. Training Step
    optimizer.zero_grad();
    let output = fc.forward(&input);
    let loss = criterion.forward(&output, &target);
    loss.backward();
    optimizer.step();

    println!("🎉 Training step complete! Loss: {:?}", loss);
}
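Under the hood, `optimizer.step()` in the snippet above is a handful of arithmetic. Here is a dependency-free sketch of vanilla SGD, which updates each parameter in place as w ← w − lr · grad (names here are illustrative, not the rustorch API):

```rust
/// One vanilla SGD step: w <- w - lr * grad, applied element-wise in place.
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

fn main() {
    // Values chosen to be exactly representable in f32.
    let mut weights = vec![1.0_f32, -0.5];
    let grads = vec![0.5, -1.0];
    sgd_step(&mut weights, &grads, 0.25);
    // 1.0 - 0.25*0.5 = 0.875;  -0.5 - 0.25*(-1.0) = -0.25
    assert_eq!(weights, vec![0.875, -0.25]);
    println!("{:?}", weights); // [0.875, -0.25]
}
```

Real optimizers (momentum SGD, Adam) keep extra per-parameter state, but the update loop has the same in-place, borrow-checked shape.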

🎓 Documentation & Tutorials


๐Ÿค Contributing

We are building the future of AI in Rust, and we need YOU! Whether it's adding new operators, fixing bugs, or improving docs, all contributions are welcome.

Check out CONTRIBUTING.md to get started.


📜 License

RusTorch is open-source software licensed under the MIT or Apache-2.0 license.

Built with โค๏ธ by the Rust AI Community
