When Rust hurts

  2023-02-16

Functional programming deals with values; imperative programming deals with objects.

Alexander Stepanov, Elements of Programming, p. 5

Introduction

Rust is in a sweet spot in the language design space. It allows us to build efficient and memory-safe programs with concise, portable, and sometimes even pretty code.

However, it is not all roses and sunshine. Memory management details often get in your way and make the code uglier or more repetitive than it would be in a higher-level programming language, such as Haskell or OCaml. In almost all cases, these issues are not defects of the compiler but direct consequences of the Rust team’s design choices.

This article details how frustrating Rust can be if you approach it with a functional programming mindset and why Rust has no choice but to frustrate you.

Objects, values, and references

Values and objects play complementary roles. Values are unchanging and independent of any particular implementation in the computer. Objects are changeable and have computer-specific implementations.

Alexander Stepanov, Elements of Programming, p. 5

Understanding the difference between objects, values, and references is helpful before diving deeper into Rust.

In the context of this article, values are entities with distinct identities, such as numbers and strings. An object is a representation of a value in the computer memory. A reference is the address of an object that we can use to access the object or its parts.

A visualization of values, objects, and references on an example of an integer in a 16-bit computer. The value is number five, which has no inherent type. The object is a 16-bit integer stored at address 0x0300 (little-endian). The memory contains a reference to the number, represented as a pointer to address 0x0300.

System programming languages, such as C++ and Rust, force the programmer to deal with the distinction between objects and references. This distinction allows us to write blazingly fast code, but it comes with a high price: it is a never-ending source of bugs. It is almost always a bug to modify the contents of an object if some other part of the program references that object. There are multiple ways to address this issue.
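Rust’s answer is the borrow checker, which turns this class of bug into a compile error. A minimal sketch (the names here are illustrative):

```rust
// A minimal sketch of the hazard: `push` may reallocate the vector's
// storage, which would invalidate any outstanding reference into it.
fn demo() -> usize {
    let mut v: Vec<usize> = vec![1, 2, 3];
    let first = &v[0];
    let observed = *first; // last use of the shared borrow
    v.push(4); // fine: the borrow of `first` has already ended
    // Swapping the two lines above fails to compile with
    // error[E0502]: cannot borrow `v` as mutable because it is
    // also borrowed as immutable.
    observed + v.len()
}
```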

The distinction between objects and references is also a source of accidental complexity and choice explosion. A language with immutable objects and automatic memory management allows us to stay ignorant of this distinction and treat everything as a value (at least in pure code). A unified storage model frees up a programmer’s mental resources and enables her to write more expressive and elegant code. However, what we gain in convenience, we lose in efficiency: pure functional programs often require more memory, can become unresponsive, and are harder to optimize (your mileage may vary).

When abstraction hurts

Manual memory management and the ownership-aware type system interfere with our ability to break down the code into smaller pieces.

Common expression elimination

Extracting a common expression into a variable can pose unexpected challenges. Let us start with the following snippet of code.

let x = f("a very long string".to_string());
let y = g("a very long string".to_string());
// …

Look, "a very long string".to_string() appears twice! Our first instinct is to assign a name to the expression and use it twice:

let s = "a very long string".to_string();
let x = f(s);
let y = g(s);

However, this naive version will not compile because the String type does not implement the Copy trait. We must write the following instead:

let s = "a very long string".to_string();
f(s.clone());
g(s);

We can see the extra verbosity in a positive light if we care about extra memory allocations: copying memory becomes explicit. But it can be quite annoying in practice, especially when you add h(s) two months later.

let s = "a very long string".to_string();
f(s.clone());
g(s);

// fifty lines of code...

h(s); // ← won't compile, you need to scroll up and update g(s).
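One way to sidestep the clones entirely, when it is an option, is to let the callees borrow the string instead of consuming it. The functions f and g below are hypothetical stand-ins for the ones in the text:

```rust
// A sketch: if the callees only need to inspect the string, accepting
// &str means one allocation serves every call site, no clones required.
fn f(s: &str) -> usize { s.len() }
fn g(s: &str) -> usize { s.chars().count() }

fn demo() -> (usize, usize) {
    let s = "a very long string".to_string();
    (f(&s), g(&s)) // adding h(&s) later requires no other changes
}
```

Of course, this only works when the callees do not need ownership, which is precisely the decision-making burden the article describes.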

Monomorphism restriction

In Rust, let x = y; does not always mean that x is the same thing as y. One example of when this natural property breaks is when y is an overloaded function.

For example, let us define a short name for an overloaded function.

// Do we have to type "MyType::from" every time?
// How about introducing an alias?
let x = MyType::from(b"bytes");
let y = MyType::from("string");

// Nope, Rust won't let us.
let f = MyType::from;
let x = f(b"bytes");
let y = f("string");
//      - ^^^^^^^^ expected slice `[u8]`, found `str`
//      |
//      arguments to this function are incorrect

The snippet does not compile because the compiler will bind f to a particular instance of MyType::from, not to a polymorphic function. We have to make f polymorphic explicitly.

// Compiles fine, but is longer than the original.
fn f<T: Into<MyType>>(t: T) -> MyType { t.into() }

let x = f(b"bytes");
let y = f("string");

Haskell programmers might find this problem familiar: it looks suspiciously similar to the dreaded monomorphism restriction! Unfortunately, rustc does not have the NoMonomorphismRestriction pragma.
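To make the workaround concrete, here is a self-contained version of the generic wrapper, with a hypothetical MyType that can be built from byte arrays and string slices:

```rust
#[derive(Debug, PartialEq)]
struct MyType(Vec<u8>);

impl From<&str> for MyType {
    fn from(s: &str) -> Self { MyType(s.as_bytes().to_vec()) }
}

impl<const N: usize> From<&[u8; N]> for MyType {
    fn from(b: &[u8; N]) -> Self { MyType(b.to_vec()) }
}

// The explicitly polymorphic alias: unlike `let f = MyType::from;`,
// this stays generic at every call site.
fn f<T: Into<MyType>>(t: T) -> MyType { t.into() }

fn demo() -> (MyType, MyType) {
    (f(b"bytes"), f("string"))
}
```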

Functional abstraction

Factoring code into a function might be harder than you expect because the compiler cannot reason about aliasing across function boundaries. Let’s say we have the following code.

impl State {
  fn tick(&mut self) {
    self.state = match self.state {
      Ping(s) => { self.x += 1; Pong(s) }
      Pong(s) => { self.x += 1; Ping(s) }
    }
  }
}

The self.x += 1 statement appears multiple times. Why not extract it into a method…

impl State {
  fn tick(&mut self) {
    self.state = match self.state {
      Ping(s) => { self.inc(); Pong(s) } // ← compile error
      Pong(s) => { self.inc(); Ping(s) } // ← compile error
    }
  }

  fn inc(&mut self) {
    self.x += 1;
  }
}

Rust will bark at us because the method attempts to re-borrow self exclusively while the surrounding context still holds a mutable reference to self.state.
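A common workaround is to pass the helper only the field it needs, so the exclusive borrow of self.state stays undisturbed. The following sketch uses illustrative definitions of Phase and State (the article's snippet leaves them out):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Phase { Ping(u8), Pong(u8) }

struct State { state: Phase, x: u64 }

impl State {
    fn tick(&mut self) {
        Self::inc(&mut self.x); // borrows only `x`, not the whole `self`
        self.state = match self.state {
            Phase::Ping(s) => Phase::Pong(s),
            Phase::Pong(s) => Phase::Ping(s),
        };
    }

    // An associated function taking the field directly compiles,
    // where a method taking `&mut self` did not.
    fn inc(x: &mut u64) { *x += 1; }
}
```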

Rust 2021 edition implemented disjoint capture to address a similar issue with closures. Before Rust 2021, code that looked like x.f.m(|| x.y) might not compile, but manually inlining m and the closure would resolve the error. For example, imagine we have a struct that owns a map and a default value for map entries.

struct S { map: HashMap<i64, String>, def: String }

impl S {
  fn ensure_has_entry(&mut self, key: i64) {
    // Doesn't compile with Rust 2018:
    self.map.entry(key).or_insert_with(|| self.def.clone());
// |         ------            -------------- ^^ ---- second borrow occurs...
// |         |                 |              |
// |         |                 |              immutable borrow occurs here
// |         |                 mutable borrow later used by call
// |         mutable borrow occurs here
  }
}

However, if we inline the definition of or_insert_with and the lambda function, the compiler can finally see that the borrowing rules hold.

struct S { map: HashMap<i64, String>, def: String }

impl S {
  fn ensure_has_entry(&mut self, key: i64) {
    use std::collections::hash_map::Entry::*;
    // This version is more verbose, but it works with Rust 2018.
    match self.map.entry(key) {
      Occupied(mut e) => e.get_mut(),
      Vacant(mut e) => e.insert(self.def.clone()),
    };
  }
}

When someone asks you, “What tricks can Rust closures do that named functions cannot?” you will know the answer: they can capture only the fields they use.

Newtype abstraction

The new type idiom in Rust (folks in the C++ land call this idiom strong typedef) allows the programmer to give a new identity to an existing type. The idiom’s name comes from Haskell’s newtype keyword.

One of the common uses of this idiom is to work around the orphan rules and define trait implementation for the aliased type. For example, the following code defines a new type that displays byte vectors in hex.

struct Hex(Vec<u8>);

impl std::fmt::Display for Hex {
  fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
    self.0.iter().try_for_each(|b| write!(f, "{:02x}", b))
  }
}

println!("{}", Hex((0..32).collect()));
// => 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f

The new type idiom is efficient: the representation of the Hex type in the machine’s memory is identical to that of Vec<u8>. However, despite the identical representation, the compiler does not treat our new type as a strong alias for Vec<u8>. For example, we cannot safely transform Vec<Hex> to Vec<Vec<u8>> and back without reallocating the outer vector. Also, without copying the bytes, we cannot safely coerce &Vec<u8> to &Hex.

fn complex_function(bytes: &Vec<u8>) {
  // … a lot of code …

  println!("{}", &Hex(bytes));        // That does not work.
  println!("{}", Hex(bytes.clone())); // That works but is slow.

  // … a lot of code …
}
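One partial workaround (a sketch, not a full fix) is to define the newtype over a borrowed slice, so call sites like complex_function pay no allocation. The HexSlice name is illustrative:

```rust
// A borrowing variant of the newtype: the wrapper holds a slice,
// so formatting borrowed bytes requires no clone.
struct HexSlice<'a>(&'a [u8]);

impl std::fmt::Display for HexSlice<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        self.0.iter().try_for_each(|b| write!(f, "{:02x}", b))
    }
}

fn demo() -> String {
    let bytes: Vec<u8> = (0..4).collect();
    format!("{}", HexSlice(&bytes)) // no clone of `bytes`
}
```

The price is a lifetime parameter, which brings us back to the view-versus-bundle decision the next section describes.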

Overall, the newtype idiom is a leaky abstraction because it is a convention, not a first-class language feature. If you wonder how Haskell solved this problem, I recommend watching the Safe, Zero-Cost Coercions in Haskell talk by Simon Peyton Jones.

Views and bundles

Each time the programmer describes a struct field or passes an argument to a function, she must decide whether the field/argument should be an object or a reference. Or maybe the best option is to decide at runtime? That is a lot of decision-making! Unfortunately, sometimes there is no optimal choice. On such occasions, we grit our teeth and define several versions of the same type with slightly different field types.

Most functions in Rust take arguments by reference and return results as a self-contained object (there are plenty of exceptions, of course: sometimes we pass arguments by value if making a copy is cheap or the function can efficiently reuse its input to produce the result, and some functions return references to one of their arguments). This pattern is so common that it might be helpful to define new terms. I call input types with lifetime parameters views because they are optimal for inspecting data. I call regular output types bundles because they are self-contained.

The following snippet comes from the (sunset) Lucet WebAssembly runtime.

/// A WebAssembly global along with its export specification.
/// The lifetime parameter exists to support zero-copy deserialization
/// for the `&str` fields at the leaves of the structure.
/// For a variant with owned types at the leaves, see `OwnedGlobalSpec`.
pub struct GlobalSpec<'a> {
    global: Global<'a>,
    export_names: Vec<&'a str>,
}

…

/// A variant of `GlobalSpec` with owned strings throughout.
/// This type is useful when directly building up a value to be serialized.
pub struct OwnedGlobalSpec {
    global: OwnedGlobal,
    export_names: Vec<String>,
}

The authors duplicated the GlobalSpec data structure to support two use cases: the GlobalSpec<'a> view enables zero-copy deserialization, while the OwnedGlobalSpec bundle is convenient for building up values to serialize.

In a language with automatic memory management, we can combine the efficiency of GlobalSpec<'a> with the versatility of OwnedGlobalSpec in a single type declaration.
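Within Rust, std::borrow::Cow can sometimes merge the two declarations into one, at the cost of a lifetime parameter and some ergonomic friction: Cow::Borrowed plays the view role, Cow::Owned the bundle role. The sketch below mirrors the export_names field above with simplified, illustrative types:

```rust
use std::borrow::Cow;

// One declaration serves both roles.
struct Spec<'a> {
    export_names: Vec<Cow<'a, str>>,
}

// View role: borrow the strings, zero copies.
fn view_of(names: &[&'static str]) -> Spec<'static> {
    Spec { export_names: names.iter().map(|s| Cow::Borrowed(*s)).collect() }
}

// Bundle role: own the strings outright.
fn bundle() -> Spec<'static> {
    Spec { export_names: vec![Cow::Owned("g".to_string())] }
}
```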

When composition hurts

Building a working program from smaller pieces can be frustrating in Rust.

Object composition

When programmers have two distinct objects, they often want to combine them into a single structure. Sounds easy? Not in Rust.

Assume we have an object Db that has a method giving you another object, Snapshot<'a>. The lifetime of the snapshot depends on the lifetime of the database.

struct Db { /* … */ }

struct Snapshot<'a> { /* … */ }

impl Db { fn snapshot<'a>(&'a self) -> Snapshot<'a> { /* … */ } }

We might want to bundle the database with its snapshot (if you wonder why we have this strange desire, you can read the comments in the rocksdb_iterator.rs module), but Rust will not let us.

// There is no way to define the following struct without
// contaminating it with lifetimes.
struct DbSnapshot {
  snapshot: Snapshot<'a>, // what should ’a be?
  db: Arc<Db>,
}

Rust folks call this arrangement sibling pointers. Rust forbids sibling pointers in safe code because they undermine Rust’s safety model.

As discussed in the Objects, values, and references section, modifying a referenced object is usually a bug. In our case, the snapshot object might depend on the physical location of the db object. If we move the DbSnapshot as a whole, the physical location of the db field will change, corrupting references in the snapshot object. We know that moving Arc<Db> will not change the location of the Db object, but there is no way to communicate this information to rustc.

Another issue with DbSnapshot is that the order of its field destruction matters. If Rust allowed sibling pointers, changing the field order could introduce undefined behavior: the snapshot’s destructor could try to access fields of a destroyed db object.
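One safe arrangement (a sketch; all types here are illustrative stand-ins) is to store only the Arc<Db> and derive short-lived snapshots on demand, so no snapshot ever lives next to its sibling pointer:

```rust
use std::sync::Arc;

struct Db { data: Vec<u8> }

struct Snapshot<'a> { view: &'a [u8] }

impl Db {
    fn snapshot(&self) -> Snapshot<'_> {
        Snapshot { view: &self.data }
    }
}

struct DbHandle { db: Arc<Db> }

impl DbHandle {
    // The snapshot never outlives this call, so no sibling
    // pointers arise and no lifetime contaminates DbHandle.
    fn with_snapshot<R>(&self, f: impl FnOnce(&Snapshot) -> R) -> R {
        f(&self.db.snapshot())
    }
}

fn demo() -> usize {
    let handle = DbHandle { db: Arc::new(Db { data: vec![1, 2, 3] }) };
    handle.with_snapshot(|s| s.view.len())
}
```

The trade-off is that every use of the snapshot must be funneled through a closure, which is exactly the kind of ceremony the rocksdb_iterator.rs comments complain about.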

Pattern matching cannot see through boxes

In Rust, we cannot pattern-match on boxed types such as Box, Arc, String, and Vec. This restriction is often a deal-breaker because we cannot avoid boxing when we define recursive data types.

For example, let us try to match a vector of strings.

let x = vec!["a".to_string(), "b".to_string()];
match x {
//    - help: consider slicing here: `x[..]`
    ["a", "b"] => println!("OK"),
//  ^^^^^^^^^^ pattern cannot match with input type `Vec<String>`
    _ => (),
}

First, we can’t match on a vector, only on a slice. Luckily, the compiler suggests an easy fix: we must replace x with x[..] in the match expression. Let us give it a try.

let x = vec!["a".to_string(), "b".to_string()];
match x[..] {
//    ----- this expression has type `[String]`
    ["a", "b"] => println!("OK"),
//   ^^^ expected struct `String`, found `&str`
    _ => (),
}

As you can see, removing one layer of boxes is not enough to make the compiler happy. We also need to unbox the strings inside of the vector, which is not possible without allocating a new vector:

let x = vec!["a".to_string(), "b".to_string()];
// We have to allocate new storage.
let x_for_match: Vec<_> = x.iter().map(|s| s.as_str()).collect();
match &x_for_match[..] {
    ["a", "b"] => println!("OK"), // this compiles
    _ => (),
}

Forget about balancing Red-Black trees in five lines of code in Rust.

Orphan rules

Rust uses orphan rules to decide whether a type can implement a trait. For non-generic types, these rules forbid implementing a trait for a type outside of crates defining the trait or the type. In other words, the package defining the trait must depend on the package defining the type or vice versa.

Orphan rules in Rust demand that a trait implementation resides in the crate defining the trait or the crate defining the type. Boxes represent separate crates, arrows—crate dependencies.

These rules make it easy for the compiler to guarantee coherence, which is a smart way to say that all parts of your program see the same trait implementation for a particular type. In exchange, this rule significantly complicates integrating traits and types from unrelated libraries.

One example is traits we want to use only in tests, such as Arbitrary from the proptest package. We can save a lot of typing if the compiler derives implementations for types from our package, but we want our production code to be independent of the proptest package. In the perfect setup, all the Arbitrary implementations would go into a separate test-only package. Unfortunately, orphan rules oppose this arrangement, forcing us to bite the bullet and write proptest strategies manually instead (there are workarounds for this issue, such as using cargo features and conditional compilation, but they complicate the build setup so much that writing boilerplate is usually a better option).

Type conversion traits, such as From and Into, are also problematic under orphan rules. I often see xxx-types packages that start small but end up as bottlenecks in the compilation chain. Splitting such packages into smaller pieces is often daunting because of the intricate webs of type conversions connecting distant types together. Orphan rules do not allow us to cut these packages on module boundaries and move all conversions into a separate package without doing a lot of tedious work.

Do not get me wrong: orphan rules are a great default. Haskell allows you to define orphan instances, but programmers frown upon this practice. It is the inability to escape orphan rules that makes me sad. In large codebases, decomposing large packages into smaller pieces and maintaining shallow dependency graphs are the only path to acceptable compilation speed. Orphan rules often stand in the way of trimming dependency graphs.

Fearless concurrency is a lie

The Rust team coined the term Fearless Concurrency to indicate that Rust helps you avoid common pitfalls associated with parallel and concurrent programming. Despite these claims, my cortisol level goes up every time I introduce concurrency to my Rust programs.

Deadlocks

So it’s perfectly fine for a Safe Rust program to get deadlocked or do something nonsensical with incorrect synchronization. Obviously such a program isn’t very good, but Rust can only hold your hand so far.

The Rustonomicon, Data Races and Race Conditions

Safe Rust prevents a specific type of concurrency bug called data race. Concurrent Rust programs have plenty of other ways to behave incorrectly.

One class of concurrency bugs that I experienced firsthand is deadlock. A typical explanation of this class of bugs involves two locks and two processes trying to acquire the locks in opposite orders. However, if the locks you use are not re-entrant (and Rust’s locks are not), having a single lock is enough to cause a deadlock.

For example, the following code is buggy because it attempts to acquire the same lock twice. The bug might be hard to spot if do_something and helper_function are large and live far apart in the source file or if we call helper_function on a rare execution path.

impl Service {
  pub fn do_something(&self) {
    let guard = self.lock.read();
    // …
    self.helper_function(); // BUG: will panic or deadlock
    // …
  }

  fn helper_function(&self) {
    let guard = self.lock.read();
    // …
  }
}

The documentation for RwLock::read mentions that the function might panic if the current thread already holds the lock. All I got was a hanging program.
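One way to reduce the risk (a sketch; Service and its fields are illustrative) is to acquire the lock once at the public boundary and pass the guarded data down, so helpers never touch the lock themselves:

```rust
use std::sync::RwLock;

struct Service { lock: RwLock<Vec<u8>> }

impl Service {
    pub fn do_something(&self) -> usize {
        let guard = self.lock.read().unwrap();
        Self::helper_function(&guard) // no second acquisition
    }

    // The signature documents that the caller already holds the data;
    // this function cannot deadlock on `lock` because it never sees it.
    fn helper_function(data: &[u8]) -> usize {
        data.len()
    }
}

fn demo() -> usize {
    let s = Service { lock: RwLock::new(vec![1, 2, 3]) };
    s.do_something()
}
```

This convention must be enforced by discipline, not the compiler, which is precisely the problem.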

Some languages tried to provide a solution to this problem in their concurrency toolkits. The Clang compiler has thread safety annotations enabling a form of static analysis that can detect race conditions and deadlocks. However, the best way to avoid deadlocks is not to have locks. Two technologies that approach the problem fundamentally are Software Transactional Memory (implemented in Haskell, Clojure, and Scala) and the actor model (Erlang was the first language that fully embraced it).

Filesystem is a shared resource

We can view a path as an address. Then a string representing a path is a pointer, and accessing a file through a path is a pointer dereference. Thus, component interference due to file overwriting can be viewed as an address collision problem: two components occupy overlapping parts of the address space.

Rust gives us powerful tools to deal with shared memory. However, once our programs need to interact with the outside world (e.g., use a network interface or a filesystem), we are on our own. Rust is similar to most modern languages in this regard. However, it can give you a false sense of security.

Remember that paths are raw pointers, even in Rust. Most file operations are inherently unsafe and can lead to data races (in a broad sense) if you do not correctly synchronize file access. For example, as of February 2023, I still experience a six-year-old concurrency bug in rustup.

Implicit async runtimes

I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky action at a distance.

Albert Einstein, in a letter to Max Born

The value of Rust that I like the most is its focus on local reasoning. Looking at the function’s type signature often gives you a solid understanding of what the function can do. State mutations are explicit thanks to mutability and lifetime annotations. Error handling is explicit and intuitive thanks to the ubiquitous Result type. When used correctly, these features often lead to the mystical if it compiles—it works effect. Asynchronous programming in Rust is different, however.

Rust supports the async/.await syntax for defining and composing asynchronous functions, but the runtime support is limited. Several libraries (called async runtimes) define asynchronous functions to interact with the operating system. The tokio package is the most popular library.

One common issue with runtimes is that they rely on passing arguments implicitly. For example, the tokio runtime allows you to spawn a concurrent task at any point in your program. For this function to work, the programmer has to construct a runtime object in advance.

fn innocently_looking_function() {
  tokio::spawn(some_async_func());
  // ^ This call panics unless the caller has entered a tokio
  //   runtime context first. Spukhafte Fernwirkung!
}

fn main() {
  let rt = tokio::runtime::Runtime::new().unwrap();
  let _guard = rt.enter(); // remove these two lines and the spawn above panics
  innocently_looking_function();
}

These implicit arguments turn compile-time errors into runtime errors. What should have been a compile error becomes a debugging adventure.

Some might argue that threading ubiquitous arguments through the entire call stack is unergonomic. Yet explicitly passing all arguments is the only approach that scales well.

Functions have colors

At this point, a reasonable person might think the language hates us.

Bob Nystrom, What Color is Your Function?

Rust’s async/.await syntax simplifies the composition of asynchronous algorithms. In return, it brings a fair amount of complexity, painting every function in blue (sync) or red (async) color. There are new rules to follow: red functions can call blue ones directly, but blue functions need a runtime’s block_on (or similar machinery) to call red ones.

As if that were not enough, some sync functions are secretly purple: they can read files, join threads, or thread::sleep on a couch. We do not want to call these purple (blocking) functions from red (async) functions because they will block the runtime and kill the performance benefits that motivated us to step into this asynchrony mess.

Unfortunately, purple functions are secretive: you cannot tell whether a function is purple without inspecting its body and the bodies of all other functions in its call graph. These bodies evolve, so we better keep an eye on them.

The real fun comes when you have a code base with shared ownership where multiple teams sandwich sync and async code. Such packages tend to be bug silos, waiting for enough system load to manifest another purple defect in the sandwich that makes the system unresponsive.

Languages with runtimes built around green threads, such as Haskell and Go, eliminate the proliferation of function colors. Building a concurrent program from independent components is easier and safer in such languages.

Conclusion

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

Bjarne Stroustrup

Rust is a disciplined language that got many important decisions right, such as an uncompromising focus on safety, the trait system design (I’m looking at you, C++ Concepts), the lack of implicit conversions, and a holistic approach to error handling. It allows us to develop robust and memory-safe programs relatively quickly without compromising execution speed.

Yet, I often find myself overwhelmed with accidental complexity, especially when I care little about performance and want to get something working quickly (for example, in test code). Rust can complicate decomposing your program into smaller pieces and composing it back from them. And Rust only partially eliminates concurrency issues. Oh well, no language is perfect for every problem.

You can discuss this article on Reddit.