Rust for Systems Programming: When the Borrow Checker Earns Its Keep

Rust gets discussed like a competitor to Go or to C++. That framing misses the point. Rust is a deliberate trade: more friction up front, in exchange for compiler-enforced guarantees that make whole categories of bugs structurally impossible. Whether that trade is worth it depends entirely on what you are building.

This is a practitioner’s view of when Rust pays its keep, when it does not, and what to expect when you commit to it.

[Figure: Rust's strongest-fit areas (network data planes, parsers and protocol code, security-sensitive services) versus areas usually better served by productivity languages (short-lived prototypes, plain CRUD services, exploratory ML notebooks). Tagline: "Rust is a sharp tool. Aim it at sharp problems. Pay the ownership tax only where the receipt is useful."]
Rust is most compelling when correctness and latency are part of the product, not just engineering preferences.

The one-paragraph summary of what makes Rust different

Rust enforces three rules at compile time:

  1. Every value has exactly one owner.
  2. You can have many immutable borrows OR one mutable borrow, never both.
  3. References must always point to live data.
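
Rule 2 is the one that bites in practice. A minimal illustration (names are arbitrary):

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // Rule 2 in action: any number of immutable borrows may coexist,
    // but a mutable borrow cannot overlap them.
    let first = &scores[0];
    // scores.push(40); // ERROR if uncommented: mutable borrow while `first` is live
    println!("first = {first}"); // `first`'s last use: the immutable borrow ends here

    scores.push(40); // fine now: no borrows are outstanding
    assert_eq!(scores.len(), 4);
}
```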

Together, these eliminate use-after-free, double-free, data races, and entire classes of memory unsafety β€” without a garbage collector and without runtime overhead. Everything else about Rust β€” the trait system, async/await, the macro hygiene β€” is downstream of getting these three rules right.

The cost: the compiler will reject programs that would work, because it cannot prove they will. You will fight the borrow checker. The fights get shorter with experience, but they never quite reach zero.

Where Rust earns its place

1. Long-running services where memory safety is non-negotiable

Web servers, databases, kernels, browser engines, language runtimes, blockchain nodes. Anywhere a memory bug is a security incident, not a stack trace. The cost of a memory-corruption CVE in a network-facing service is enormous; Rust eliminates the category. This is why AWS, Microsoft, and Google have all publicly committed to Rust for new infrastructure code.

2. High-throughput data plane code

Protocol parsers, packet processing, codecs, compression, hash tables on the hot path. Rust is competitive with C++ on raw performance and significantly easier to make correct under concurrency. Discord’s switch from Go to Rust for one of their state services is the canonical case study β€” they specifically cited GC pause latency.
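
To make the zero-copy point concrete, here is a sketch of a tiny length-prefixed header parser; the wire format is invented for illustration. The payload is borrowed straight from the input buffer, so the hot path allocates nothing:

```rust
/// Parse a made-up header: 1-byte version, 2-byte big-endian length, payload.
/// The returned payload slice borrows from `buf` -- no allocation, no copy.
fn parse_header(buf: &[u8]) -> Option<(u8, &[u8])> {
    let (&version, rest) = buf.split_first()?;
    let len_bytes: [u8; 2] = rest.get(..2)?.try_into().ok()?;
    let len = u16::from_be_bytes(len_bytes) as usize;
    let payload = rest.get(2..2 + len)?; // bounds-checked: truncated input yields None
    Some((version, payload))
}

fn main() {
    let wire = [1u8, 0, 2, 0xAB, 0xCD];
    assert_eq!(parse_header(&wire), Some((1, &[0xAB, 0xCD][..])));
    assert_eq!(parse_header(&[1, 0, 9]), None); // declared length exceeds buffer
}
```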

3. Systems where concurrency is fundamental

Rust’s Send and Sync traits make data-race freedom a property the compiler enforces. You cannot share a non-thread-safe type across threads — the compiler refuses to compile. For systems with complex internal concurrency (databases, schedulers, async runtimes), this turns “test extensively and hope” into “if it compiles, the data races are gone.” Deadlocks and logic races remain yours to find, but eliminating data races at compile time is not a small upgrade.
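
A sketch of what that enforcement looks like: `Rc<T>` is not thread-safe and is rejected at the thread boundary, while `Arc<T>` is Send + Sync and sails through:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // `Rc<T>` is !Send, so this version is a compile error, not a runtime bug:
    //   let n = std::rc::Rc::new(41);
    //   thread::spawn(move || *n + 1); // ERROR: `Rc<i32>` cannot be sent between threads
    //
    // `Arc<T>` is Send + Sync, so the same shape compiles and runs.
    let n = Arc::new(41);
    let handle = thread::spawn(move || *n + 1);
    assert_eq!(handle.join().unwrap(), 42);
}
```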

4. Embedded and resource-constrained environments

no_std Rust runs on bare metal, in microcontrollers, in WASM with kilobytes of memory. The same language, ownership rules, and tooling that you use for a server backend works on a 32 KB MCU. C and C++ are still the incumbents, but Rust is the most credible challenger they have ever had.

5. WebAssembly

If you are compiling to WASM, Rust is the most mature toolchain. The output is small, the bindings are clean, and the tooling (wasm-pack, wasm-bindgen) is excellent. For client-side compute (image processing, crypto, parsing) that wants to run in a browser, Rust + WASM is the path.

Where Rust is the wrong tool

1. Most CRUD web applications

If your service is mostly “accept HTTP request, talk to a database, return JSON” with moderate throughput, you are paying Rust’s cost for none of its benefits. Go, TypeScript, or Python will get you there in half the time and the runtime difference is not what is bottlenecking your business. Use Rust for the 5% of services where it matters; use the productivity language for the other 95%.

2. Quick prototypes and exploratory work

The borrow checker punishes shape changes. Refactoring a Rust prototype is more expensive than refactoring a TypeScript or Python one, because every reshape revisits ownership. Prototype in something flexible; rewrite the parts that matter in Rust.

3. Heavy use of self-referential data structures

Doubly-linked lists, graph nodes that point at each other, observer patterns where every node holds references to its siblings — all of these fight the borrow checker hard. They can be done (with Rc<RefCell<T>>, Pin, raw pointers, or arena allocators), but it is unergonomic. If your domain is graph-shaped and nodes need to mutate each other freely, you will spend a lot of time on architecture you could skip in another language.
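
One common escape hatch is the arena-and-indices pattern: nodes live in a Vec and refer to each other by index, so no node ever holds a reference into another. A minimal sketch (types invented for illustration):

```rust
struct Node {
    value: u32,
    neighbors: Vec<usize>, // indices into the arena, not references
}

#[derive(Default)]
struct Graph {
    nodes: Vec<Node>, // the arena owns every node
}

impl Graph {
    fn add(&mut self, value: u32) -> usize {
        self.nodes.push(Node { value, neighbors: Vec::new() });
        self.nodes.len() - 1
    }
    fn connect(&mut self, a: usize, b: usize) {
        // Mutating two nodes needs only one `&mut Graph` -- no aliasing fight.
        self.nodes[a].neighbors.push(b);
        self.nodes[b].neighbors.push(a);
    }
}

fn main() {
    let mut g = Graph::default();
    let (a, b) = (g.add(1), g.add(2));
    g.connect(a, b);
    assert_eq!(g.nodes[g.nodes[a].neighbors[0]].value, 2);
}
```

The trade is that index validity becomes your invariant rather than the compiler's, which is why crates exist for generational indices; the basic shape above is often enough.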

4. ML/numerical research

Python’s ML ecosystem is a moat. PyTorch, JAX, NumPy, Transformers, the entire CUDA stack β€” none of these have full Rust equivalents. Rust has Polars, candle, and a growing ecosystem, but for research-velocity ML work, Python is still the right answer. Rust comes in when you operationalize: serving, preprocessing pipelines, custom CUDA kernels via crates like cudarc.

Async Rust: the elephant in the room

Asynchronous Rust is the most controversial part of the language. It is also where most production server work happens. A few honest observations:

  • Tokio is the default runtime. Pick it. The other runtimes (async-std, smol) exist for specific cases, but tokio has the ecosystem.
  • async fn in traits is stable as of Rust 1.75. For static dispatch, the era of hand-rolling boxed futures from trait methods is over; trait objects (dyn Trait) still need the boxing workaround, e.g. via the async-trait crate.
  • Pin is the price. You will encounter it. Understanding Pin is the difference between “I write async Rust” and “I copy-paste async Rust.”
  • Function coloring is real. Async functions can only be called from async contexts. This propagates through your codebase. Plan accordingly: most of your service is probably async; the parts that are not should stay small.
  • Cancellation is cooperative. Dropping an async future cancels it at the next .await. This is powerful but requires care: any state that must be cleaned up on cancellation needs Drop impls or scope guards.
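
The cancellation point deserves a concrete shape. Because dropping a future drops everything it owns, Drop-based guards are the standard cleanup mechanism — and the pattern itself is plain Rust, so a minimal sketch needs no async runtime to demonstrate:

```rust
// A minimal scope guard: the closure runs when the guard is dropped, whether
// the enclosing scope exits normally, unwinds on panic, or -- in async code --
// the owning future is dropped mid-execution at an .await point.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let mut cleaned_up = false;
    {
        let _guard = Guard(|| cleaned_up = true);
        // ... work that might be cut short ...
    } // guard dropped here; cleanup runs unconditionally
    assert!(cleaned_up);
}
```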

If you are building a network service in Rust, you will write tokio-based async code. Embrace the ecosystem; don’t fight it.

Compared to C++

Rust is what a modern C++ replacement looks like:

                               C++                                 Rust
  Memory safety                Manual, error-prone                 Compile-time enforced
  Build system                 CMake/Bazel/etc.                    Cargo: single tool, works
  Package management           vcpkg/Conan/system                  Cargo + crates.io
  Compile errors               Famous for being incomprehensible   Famously good error messages
  Mature ecosystem             40+ years                           10 years, growing fast
  Ergonomics for big projects  Require strong conventions          Enforced by the language

Where C++ still wins: existing legacy codebases, niche embedded toolchains, raw performance for very specific kernels with hand-tuned SIMD, AAA game engines (Unreal, etc.). For new systems work in 2025-2026, Rust is often the more defensible starting point when memory safety and concurrency are core requirements.

Compared to Go

A common framing: “is Rust replacing Go?” Mostly no. The languages target different points:

  • Go: simple, fast to learn, GC, focused on services. Best for: business-logic services, CLIs, build tooling. The win is engineering velocity at acceptable performance.
  • Rust: complex, slow to learn, no GC, focused on correctness and performance. Best for: data-plane code, systems infrastructure, libraries that other services depend on. The win is correctness and performance with no GC overhead.

Go and Rust co-exist well in a polyglot architecture. The control plane in Go, the data plane in Rust, both speaking gRPC or whatever, is a perfectly reasonable shape.

[Figure: Go and Rust in one architecture. A Go control plane (APIs, reconciliation, admin workflows, business logic) layered over a Rust data plane. Tagline: "Polyglot is not failure. It is placement."]
The boringly effective split: write coordination where velocity matters, write the hot boundary where guarantees matter.

What to expect when you commit to Rust

The first six weeks are hard. The borrow checker will reject programs you are sure are correct. Embrace it. The mental model is real and you will internalize it.

Compile times will frustrate you. Cold Rust builds are slow. Incremental builds are fine. Use sccache and cargo-watch to make iteration tolerable; the parallel compiler front-end (still nightly-only as of this writing) should help further once it stabilizes.

unsafe is fine, in moderation. It is not failure to use it. It is failure to use it without isolating it behind a safe API and writing the proof of why it is sound.
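
The standard library's split_at_mut is the textbook shape for this: one audited unsafe block behind a safe signature, with the safety argument written down next to it. A simplified version:

```rust
/// Split a mutable slice into two non-overlapping mutable halves.
/// Safe API, one `unsafe` block, and the proof obligation stated inline.
fn split_at_mut_demo(slice: &mut [u32], mid: usize) -> (&mut [u32], &mut [u32]) {
    let len = slice.len();
    assert!(mid <= len); // the invariant the unsafe block relies on
    let ptr = slice.as_mut_ptr();
    // SAFETY: the ranges [0, mid) and [mid, len) do not overlap, and the
    // assert above guarantees both are in bounds of the original slice,
    // so handing out two disjoint &mut slices cannot alias.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1u32, 2, 3, 4];
    let (a, b) = split_at_mut_demo(&mut data, 2);
    a[0] = 10;
    b[0] = 30;
    assert_eq!(data, [10, 2, 30, 4]);
}
```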

Read other people’s Rust. Reading idiomatic Rust is the fastest way to absorb the patterns. The standard library source, tokio internals, ripgrep, and the Rust compiler itself are all worth time.

Use clippy religiously. Most of the lints catch real bugs or anti-patterns. Treat clippy warnings as errors in CI.
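
In CI this is one line; `-D warnings` promotes every lint to a hard error (a typical invocation using current cargo/clippy flags):

```shell
# Fail the build on any clippy lint, across all targets and features.
cargo clippy --all-targets --all-features -- -D warnings
```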

A realistic adoption path

If your team is Rust-curious, here is a path that has worked for teams I know:

  1. Tooling first. Write a CLI tool, a build script, or an internal utility in Rust. Low stakes, real production use, learn cargo and the standard library.
  2. A library second. Pull a hot path out of an existing service, rewrite it in Rust, expose it via FFI (or rewrite as a sidecar). You learn ownership without the async-runtime complexity yet.
  3. A service third. When you have a use case where Rust’s strengths matter β€” a high-throughput parser, a cache, a network proxy β€” write a real service. By now you have absorbed the language.

Skipping straight to “rewrite our entire backend in Rust” is the path that produces six-month projects with no shipped value. Don’t do that.

A practical decision rule

Before choosing Rust, ask three questions:

  1. Will a memory-safety bug here become a security incident or customer-visible outage?
  2. Will tail latency or resource predictability materially affect cost or user experience?
  3. Will this component become shared infrastructure that other teams depend on?

If the answer is yes to two or three, Rust deserves serious consideration. If the answer is no to all three, pick the language that lets the team ship and operate the system with the least ceremony. Engineering taste is nice. Production fit is better.

The honest verdict

Rust is the right answer for a real but bounded set of problems. When it is the right answer, it is decisively the right answer β€” the alternative is C++ and a thousand-page memory-safety rulebook nobody actually follows. When it is not the right answer, using it anyway is a tax on your velocity that you pay forever.

Pick deliberately. The borrow checker is a tool, not a religion.

Sources and receipts