Node vs Bun vs Go vs Rust

Published January 23, 2026
Tech Notes

Why not Node.js?

Bad things

  1. Historically not as actively maintained as newer runtimes
  2. Single-threaded by default (worker_threads exist but are rarely used)
  3. Generally slower than compiled languages
  4. No static type checking (TypeScript fixes this, but `any` lets you opt out of safety)

Good things

  1. Easy to understand, low learning curve
  2. Maintenance has improved in recent years
  3. Bun addresses some of Node.js's problems (notably speed)

Why Go?

Benefits

  1. Multithreading
  2. Types enforced

Downsides

  1. Debatable syntax choices (error-first return values instead of exceptions)
  2. Not fully memory safe: data races between goroutines can corrupt state, see - https://github.com/hkirat/devops-assignment/tree/main/docker-go

More benefits

  1. Easy syntax compared to other low-level languages (C, Rust)
  2. Provides primitives (channels, the sync package) for safe concurrent memory access
  3. Multithreaded
  4. Goroutines: lightweight, runtime-scheduled threads
  5. No async/await keywords; the Go runtime scheduler detects when a goroutine is blocked (e.g. on I/O) and switches to running a different goroutine
  6. Fast

Why Rust?

Downsides

  1. Confusing memory model (ownership, borrow checker)
  2. Confusing syntax
  3. Very strict compiler, harder to get code to compile
  4. Extremely verbose syntax (comparable to Java)
  5. Rust-specific concepts (lifetimes)

Upsides

  1. Memory safety
  2. Multithreaded
  3. Fast

Time comparison

Let's write code that computes the sum of the integers from 1 to 10^8 in each of these languages and compare run times.

Repo - https://github.com/100xdevs-cohort-3/language-benchmarks/

1. JS with Node - 100ms

console.time('Node.js Sum');
let sum = 0;
const n = 100_000_000;

for (let i = 1; i <= n; i++) {
    sum += i;
}

console.log(`Sum: ${sum}`);
console.timeEnd('Node.js Sum');

Run with: node sum_node.js

2. JS with Bun - 97ms

console.time('Bun Sum');
let sum = 0;
const n = 100_000_000;

for (let i = 1; i <= n; i++) {
    sum += i;
}

console.log(`Sum: ${sum}`);
console.timeEnd('Bun Sum');

Run with: bun sum_bun.js

3. Go w/o multithreading - 31ms

package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now()

    var sum int64 = 0
    const n = 100_000_000

    for i := int64(1); i <= n; i++ {
        sum += i
    }

    fmt.Printf("Sum: %d\n", sum)
    fmt.Printf("Time: %v\n", time.Since(start))
}

Run with: go run sum_single.go

4. Go with multithreading - 36ms (what could’ve gone wrong?)

package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func main() {
    start := time.Now()

    const n = 100_000_000
    numWorkers := runtime.NumCPU()
    chunkSize := n / numWorkers

    var wg sync.WaitGroup
    results := make([]int64, numWorkers)

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func(workerID int) {
            defer wg.Done()

            start := int64(workerID*chunkSize + 1)
            end := int64((workerID+1) * chunkSize)
            if workerID == numWorkers-1 {
                end = n // Handle remainder
            }

            var localSum int64 = 0
            for j := start; j <= end; j++ {
                localSum += j
            }
            results[workerID] = localSum
        }(i)
    }

    wg.Wait()

    var totalSum int64 = 0
    for _, result := range results {
        totalSum += result
    }

    fmt.Printf("Sum: %d\n", totalSum)
    fmt.Printf("Time: %v\n", time.Since(start))
}

Run with: go run sum_multi.go

Benchmarks using Go on my Mac:

[screenshot: Go benchmark results]

5. Rust without multithreading - 63ms

use std::time::Instant;

fn main() {
    let start = Instant::now();

    let n: i64 = 100_000_000;
    let mut sum: i64 = 0;

    for i in 1..=n {
        sum += i;
    }

    println!("Sum: {}", sum);
    println!("Time: {:?}", start.elapsed());
}

Compile and run with: rustc -O sum_single.rs && ./sum_single

6. Rust with multithreading - 38ms

use std::sync::Arc;
use std::sync::atomic::{AtomicI64, Ordering};
use std::thread;
use std::time::Instant;

fn main() {
    let start = Instant::now();

    let n: i64 = 100_000_000;
    let num_threads = num_cpus::get();
    let chunk_size = n / num_threads as i64;

    let sum = Arc::new(AtomicI64::new(0));
    let mut handles = vec![];

    for i in 0..num_threads {
        let sum_clone = Arc::clone(&sum);
        let handle = thread::spawn(move || {
            let start_val = (i as i64) * chunk_size + 1;
            let end_val = if i == num_threads - 1 {
                n // Handle remainder
            } else {
                (i as i64 + 1) * chunk_size
            };

            let mut local_sum = 0i64;
            for j in start_val..=end_val {
                local_sum += j;
            }

            sum_clone.fetch_add(local_sum, Ordering::Relaxed);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Sum: {}", sum.load(Ordering::Relaxed));
    println!("Time: {:?}", start.elapsed());
}

Add to Cargo.toml:

[dependencies]
num_cpus = "1.0"

Compile and run with: cargo run --release

Alternative Rust implementation (channel-based; also requires the num_cpus dependency)

// sum_multi_channels.rs
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

fn main() {
    let start = Instant::now();

    let n: i64 = 100_000_000;
    let num_threads = num_cpus::get();
    let chunk_size = n / num_threads as i64;

    let (tx, rx) = mpsc::channel();

    for i in 0..num_threads {
        let tx_clone = tx.clone();
        thread::spawn(move || {
            let start_val = (i as i64) * chunk_size + 1;
            let end_val = if i == num_threads - 1 {
                n // Handle remainder
            } else {
                (i as i64 + 1) * chunk_size
            };

            let mut local_sum = 0i64;
            for j in start_val..=end_val {
                local_sum += j;
            }

            tx_clone.send(local_sum).unwrap();
        });
    }

    drop(tx); // Close the sender

    let total_sum: i64 = rx.iter().sum();

    println!("Sum: {}", total_sum);
    println!("Time: {:?}", start.elapsed());
}

Expected Results

The mathematical answer should be: 5,000,000,050,000,000

Performance expectations (approximate):

  • Bun: usually the fastest JS runtime
  • Node.js: slower than Bun but still decent
  • Rust (optimized with -O or --release): typically among the fastest, though in this run Go's single-threaded loop was quicker
  • Go: good performance; the multithreaded version should scale well
  • Multithreaded versions: not always faster, since thread/goroutine overhead can dominate for a loop this cheap

Note: For this simple summation, the closed-form formula n*(n+1)/2 would be far faster than any loop, but this benchmark intentionally measures raw loop performance.

Conclusion

What should you choose? It depends on the use case.

Most full-stack projects (CRMs, SaaS, enterprise tools) can be built with Node.js or Go.

Lower-level applications and finance systems should use a language like Rust, C, or Zig, especially if latency matters a lot.

Everyone should learn at least one high-level language (Python, JavaScript) and one low-level language (Go, Rust, C, Zig).