Stop Writing Slow Rust: 20 Rust Tricks That Changed Everything



This content originally appeared on DEV Community and was authored by Leapcell

Leapcell: The Best of Serverless Web Hosting

20 Practical Tips for Rust Performance Optimization

Rust, as a performance-focused systems programming language, has demonstrated excellent performance in many scenarios. However, to fully unleash Rust’s potential and write efficient code, it’s necessary to master some performance optimization techniques. This article will introduce 20 practical tips for Rust performance optimization, with specific code examples to aid understanding.

1. Choose the Right Data Structure
Different data structures suit different scenarios, and choosing the right one can significantly improve performance. For example, if you frequently insert and remove elements at the front of a collection, VecDeque is more appropriate than Vec; if you need fast keyed lookups, HashMap (hashed, unordered) or BTreeMap (sorted, ordered iteration) are better choices.
// Using VecDeque as a queue
use std::collections::VecDeque;
let mut queue = VecDeque::new();
queue.push_back(1);
queue.push_back(2);
let item = queue.pop_front();

// Using HashMap for fast lookups
use std::collections::HashMap;
let mut scores = HashMap::new();
scores.insert("Alice", 100);
let score = scores.get("Alice");
2. Leverage Iterators and Closures
Rust's iterators and closures provide an efficient, concise way to process collections. Adapters are lazy and compile down to plain loops, so chaining them lets the compiler fuse operations and avoid unnecessary intermediate allocations.
let numbers = vec![1, 2, 3, 4, 5];
let doubled: Vec<i32> = numbers.iter().map(|x| x * 2).collect();
let sum: i32 = doubled.iter().sum();
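When the intermediate `doubled` vector isn't needed elsewhere, the adapters can be fused into a single chain so no intermediate Vec is allocated at all (a minimal sketch):

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4, 5];
    // map and sum fuse into one pass; no intermediate Vec is allocated
    let sum: i32 = numbers.iter().map(|x| x * 2).sum();
    assert_eq!(sum, 30);
}
```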
3. Reduce Unnecessary Memory Allocations
Prefer stack allocation over heap allocation where possible, since stack allocation is essentially free. For fixed-size data, use arrays instead of Vec; when a Vec's final size is known up front, preallocate with Vec::with_capacity.
// Using a stack-allocated array
let arr: [i32; 5] = [1, 2, 3, 4, 5];

// Preallocate capacity so the Vec never has to reallocate as it grows
let mut vec = Vec::with_capacity(100);
for i in 1..=100 {
    vec.push(i);
}
4. Use &str Instead of String
When working with strings, take &str if you don't need to modify or own the string: &str is a borrowed, read-only slice with no extra heap allocation, while String owns heap-allocated data.
fn process(s: &str) {
    println!("Processing string: {}", s);
}

fn main() {
    let s1 = "Hello, Rust!"; // &str
    let s2 = String::from("Hello, Rust!"); // String
    process(s1);
    process(&s2); // &String coerces to &str via deref
}
5. Avoid Unnecessary Cloning and Copying
Cloning and copying introduce overhead, especially for large data structures. Pass data by reference instead of cloning or copying when possible.
fn print_numbers(numbers: &[i32]) {
    for num in numbers {
        println!("{}", num);
    }
}

fn main() {
    let numbers = vec![1, 2, 3, 4, 5];
    print_numbers(&numbers); // Pass by reference to avoid cloning
}
6. Optimize Loops
Move loop-invariant calculations outside the loop. (There is no need to replace for with while: Rust's range-based for loops compile to the same efficient code.) The compiler often hoists trivial invariants itself, so explicit hoisting matters most when computing the invariant is expensive.
// Before optimization
let mut result = 0;
for i in 1..=100 {
    let factor = 2 * i;
    result += factor;
}

// After optimization
let factor = 2;
let mut result = 0;
for i in 1..=100 {
    result += factor * i;
}
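Hoisting pays off most when the invariant is genuinely costly. A sketch (the `normalize` function is illustrative): the divisor is computed once rather than on every iteration:

```rust
fn normalize(values: &[f64]) -> Vec<f64> {
    // Hoisted invariant: the total is computed once, not per element
    let total: f64 = values.iter().sum();
    values.iter().map(|v| v / total).collect()
}

fn main() {
    assert_eq!(normalize(&[1.0, 1.0, 2.0]), vec![0.25, 0.25, 0.5]);
}
```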
7. Simplify Conditionals with if let and while let
if let and while let replace verbose match expressions when only one pattern matters. They compile to the same code as the equivalent match, so this is a readability win rather than a performance one.
// Simplify Option handling with if let
let value: Option<i32> = Some(42);
if let Some(num) = value {
    println!("The value is: {}", num);
}

// Draining an iterator with while let (a for loop is equivalent here)
let mut numbers = vec![1, 2, 3, 4, 5].into_iter();
while let Some(num) = numbers.next() {
    println!("{}", num);
}
8. Utilize const and static
const defines compile-time constants that are inlined at each use site, so they occupy no dedicated runtime storage. static defines a single value whose lifetime spans the entire program. Use them for values that never change or must be shared globally.
const PI: f64 = 3.141592653589793;

fn calculate_area(radius: f64) -> f64 {
    PI * radius * radius
}

static COUNTER: std::sync::atomic::AtomicUsize = std::sync::atomic::AtomicUsize::new(0);

fn increment_counter() {
    COUNTER.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
}
9. Enable Compiler Optimizations
In Cargo.toml, opt-level controls optimization: 0 (the dev default) prioritizes compile time, 1 through 3 apply increasingly aggressive optimizations, and "s"/"z" optimize for binary size. Release builds already default to 3, so setting it explicitly mainly documents intent.
[profile.release]
opt-level = 3
10. Use Link-Time Optimization (LTO)
LTO lets the compiler optimize across crate boundaries during linking, further improving performance at the cost of longer link times. Enable it in Cargo.toml (lto = true means "fat" LTO; lto = "thin" is a faster-compiling variant):
[profile.release]
lto = true
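A fuller release profile often combines LTO with a single codegen unit, trading longer compile times for better optimization (a sketch; tune per project):

```toml
[profile.release]
opt-level = 3
lto = "fat"        # whole-program LTO; lto = "thin" compiles faster
codegen-units = 1  # one codegen unit gives the optimizer full visibility
```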
11. Reduce Dynamic Dispatch
Dynamic dispatch (calling methods through trait objects such as &dyn Trait) incurs runtime overhead: the call goes through a vtable and cannot be inlined. In performance-critical code, prefer static dispatch via generics, which is monomorphized and fully inlinable.
// Dynamic dispatch
trait Animal {
    fn speak(&self);
}

struct Dog;
impl Animal for Dog {
    fn speak(&self) {
        println!("Woof!");
    }
}

fn make_sound(animal: &dyn Animal) {
    animal.speak();
}

// Static dispatch
fn make_sound_static<T: Animal>(animal: &T) {
    animal.speak();
}
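As a usage note, `&impl Trait` is shorthand for the generic form and also compiles to static dispatch. A self-contained sketch comparing the two call styles (the `Cat` type is illustrative):

```rust
trait Animal {
    fn name(&self) -> &'static str;
}

struct Cat;
impl Animal for Cat {
    fn name(&self) -> &'static str { "cat" }
}

// Static dispatch: `&impl Animal` is sugar for a generic parameter,
// so a specialized copy is compiled per concrete type
fn describe_static(animal: &impl Animal) -> String {
    format!("a {}", animal.name())
}

// Dynamic dispatch: one compiled function; the call goes through a vtable
fn describe_dyn(animal: &dyn Animal) -> String {
    format!("a {}", animal.name())
}

fn main() {
    let cat = Cat;
    assert_eq!(describe_static(&cat), describe_dyn(&cat));
}
```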
12. Optimize Function Calls
For small, hot functions, the #[inline] attribute hints that the compiler may inline them, including across crate boundaries where it otherwise lacks the function body. Within a single crate the compiler usually makes good inlining decisions on its own.
#[inline]
fn add(a: i32, b: i32) -> i32 {
    a + b
}
13. Use unsafe Code for Critical Paths
In profiled hot paths, unsafe code can bypass Rust's runtime checks (such as bounds checks), but you then take on the burden of upholding the safety invariants yourself, and the gain is often smaller than expected. Measure before and after.
// Safe but slower implementation
fn sum_safe(numbers: &[i32]) -> i32 {
    let mut sum = 0;
    for &num in numbers {
        sum += num;
    }
    sum
}

// Pointer-based unsafe implementation (not necessarily faster; benchmark it)
fn sum_unsafe(numbers: &[i32]) -> i32 {
    let len = numbers.len();
    let ptr = numbers.as_ptr();
    let mut sum = 0;
    for i in 0..len {
        sum += unsafe { *ptr.add(i) };
    }
    sum
}
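Before reaching for unsafe, note that the safe iterator form typically compiles to the same bounds-check-free loop, so benchmark before assuming the pointer version wins:

```rust
// Safe and typically just as fast: iterator-based sums elide bounds
// checks and are eligible for auto-vectorization
fn sum_iter(numbers: &[i32]) -> i32 {
    numbers.iter().sum()
}

fn main() {
    assert_eq!(sum_iter(&[1, 2, 3, 4, 5]), 15);
}
```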
14. Leverage Parallel Computing
Rust offers parallel computing libraries like rayon, whose parallel iterators spread work across CPU cores. Parallelism pays off for large workloads; for tiny inputs like the one below, coordination overhead dominates.
use rayon::prelude::*;

let numbers = vec![1, 2, 3, 4, 5];
let doubled: Vec<i32> = numbers.par_iter().map(|x| x * 2).collect();
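rayon is an external crate, so it must be declared in Cargo.toml first (the version here is an assumption; check crates.io for the current release):

```toml
[dependencies]
rayon = "1"
```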
15. Optimize Data Layout
A layout that matches your access pattern improves CPU cache hit rates. If fields are used together, store them together in contiguous memory (array of structs); if you sweep one field at a time, separate arrays (struct of arrays) can be the better choice.
// Array of structs: good when x and y are accessed together
#[derive(Copy, Clone)]
struct Point {
    x: i32,
    y: i32,
}

let points: Vec<Point> = vec![Point { x: 1, y: 2 }, Point { x: 3, y: 4 }];

// Struct of arrays: preferable only when fields are processed separately
struct SeparateData {
    x_values: Vec<i32>,
    y_values: Vec<i32>,
}
16. Avoid Premature Optimization
Prioritize correctness and readability first. Premature optimization complicates code and may yield minimal gains. Use profiling tools to identify real bottlenecks before optimizing.
// Simple but potentially suboptimal implementation
fn find_max(numbers: &[i32]) -> Option<i32> {
    let mut max = None;
    for &num in numbers {
        if max.is_none() || num > max.unwrap() {
            max = Some(num);
        }
    }
    max
}
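For this particular task, the standard library already provides the optimized, idiomatic form, which also reads better:

```rust
fn find_max(numbers: &[i32]) -> Option<i32> {
    // Iterator::max returns None for an empty slice, Some(max) otherwise
    numbers.iter().copied().max()
}

fn main() {
    assert_eq!(find_max(&[3, 1, 4, 1, 5]), Some(5));
    assert_eq!(find_max(&[]), None);
}
```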
17. Utilize SIMD Instructions
Single Instruction, Multiple Data (SIMD) instructions operate on several data elements at once, boosting numerical throughput. Rust's portable SIMD API in std::simd is nightly-only at the time of writing (behind #![feature(portable_simd)]); on stable, rely on the compiler's auto-vectorization or the std::arch intrinsics.
// Nightly only: requires #![feature(portable_simd)] at the crate root
use std::simd::i32x4;

let a = i32x4::from_array([1, 2, 3, 4]);
let b = i32x4::from_array([5, 6, 7, 8]);
let result = a + b; // lanes [6, 8, 10, 12]
18. Optimize Error Handling
Efficient error handling avoids needless allocation. When a Result's error carries no dynamic data, prefer a cheap error type: below, &'static str replaces a heap-allocated String.
// Before optimization
fn divide(a: i32, b: i32) -> Result<i32, String> {
    if b == 0 {
        return Err(String::from("Division by zero"));
    }
    Ok(a / b)
}

// After optimization
fn divide(a: i32, b: i32) -> Result<i32, &'static str> {
    if b == 0 {
        return Err("Division by zero");
    }
    Ok(a / b)
}
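Going one step further, a fieldless error enum is allocation-free like &'static str but also lets callers match on the failure instead of comparing strings (a sketch; the `MathError` name is illustrative):

```rust
#[derive(Debug, PartialEq)]
enum MathError {
    DivisionByZero,
}

// A fieldless enum is tiny and allocation-free, and callers can
// match on the variant instead of parsing an error string
fn divide(a: i32, b: i32) -> Result<i32, MathError> {
    if b == 0 {
        return Err(MathError::DivisionByZero);
    }
    Ok(a / b)
}

fn main() {
    assert_eq!(divide(10, 2), Ok(5));
    assert_eq!(divide(1, 0), Err(MathError::DivisionByZero));
}
```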
19. Cache Frequently Used Results
Cache the results of expensive functions with identical inputs to avoid redundant computation (memoization).
use std::collections::HashMap;

fn expensive_computation(x: i32) -> i32 {
    // Simulate expensive computation
    std::thread::sleep(std::time::Duration::from_secs(1));
    x * x
}

// A fn item cannot capture surrounding locals, so the cache is passed in
fn cached_computation(cache: &mut HashMap<i32, i32>, x: i32) -> i32 {
    if let Some(&result) = cache.get(&x) {
        result
    } else {
        let result = expensive_computation(x);
        cache.insert(x, result);
        result
    }
}

let mut cache = HashMap::new();
let first = cached_computation(&mut cache, 4);  // computed (slow)
let second = cached_computation(&mut cache, 4); // served from the cache
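One way to package this pattern so the cache travels with the function is a small memoizer struct (a minimal sketch; the `Memo` name is illustrative):

```rust
use std::collections::HashMap;

// A minimal memoizer: caches results of a pure function keyed by input
struct Memo<F: Fn(i32) -> i32> {
    cache: HashMap<i32, i32>,
    f: F,
}

impl<F: Fn(i32) -> i32> Memo<F> {
    fn new(f: F) -> Self {
        Memo { cache: HashMap::new(), f }
    }

    // Returns the cached value, computing and storing it on first use
    fn get(&mut self, x: i32) -> i32 {
        if let Some(&v) = self.cache.get(&x) {
            return v;
        }
        let v = (self.f)(x);
        self.cache.insert(x, v);
        v
    }
}

fn main() {
    let mut square = Memo::new(|x| x * x);
    assert_eq!(square.get(7), 49); // computed
    assert_eq!(square.get(7), 49); // served from the cache
}
```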
20. Use Performance Profiling Tools
The Rust ecosystem offers cargo bench for benchmarking and perf (on Linux) or cargo flamegraph for profiling. These identify bottlenecks so optimization effort goes where it matters.
// Benchmark with cargo bench (the built-in #[bench] harness is nightly-only)
#![feature(test)]
extern crate test;

#[cfg(test)]
mod tests {
    use test::Bencher;

    #[bench]
    fn bench_function(b: &mut Bencher) {
        b.iter(|| {
            // Code to benchmark
        });
    }
}
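Because the built-in #[bench] harness requires nightly, the de facto standard on stable Rust is the criterion crate (the version implied below is an assumption; check crates.io). A sketch of a bench target in benches/sum.rs:

```rust
// benches/sum.rs — requires `criterion` under [dev-dependencies] and a
// `[[bench]]` entry with `harness = false` in Cargo.toml
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_sum(c: &mut Criterion) {
    let numbers: Vec<i32> = (1..=1000).collect();
    // Times the closure over many iterations and reports statistics
    c.bench_function("sum", |b| b.iter(|| numbers.iter().sum::<i32>()));
}

criterion_group!(benches, bench_sum);
criterion_main!(benches);
```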

By applying these 20 tips, you can effectively optimize Rust code, harnessing the language’s performance advantages to build efficient and reliable applications.


Finally, we recommend the best platform for deploying Rust services: Leapcell

🚀 Build with Your Favorite Language

Develop effortlessly in JavaScript, Python, Go, or Rust.

🌍 Deploy Unlimited Projects for Free

Only pay for what you use—no requests, no charges.

⚡ Pay-as-You-Go, No Hidden Costs

No idle fees, just seamless scalability.

📖 Explore Our Documentation

🔹 Follow us on Twitter: @LeapcellHQ
