Tutorial: stable-structures

  2023-02-04

The only purpose of any code is to transform data.

Introduction

Canisters hosted on the Internet Computer (IC) are mutable: canister controllers can upgrade the code to add new features or fix bugs without changing the canister’s identity.

Since the orthogonal persistence feature cannot handle upgrades, the IC allows canisters to use additional storage, called stable memory, to facilitate the data transfer between code versions. The conventional approach to canister state persistence is to serialize the entire state to stable memory in the pre_upgrade hook and decode it back in the post_upgrade hook. This approach is easy to implement and works well for relatively small datasets. Unfortunately, it does not scale well and can render a canister non-upgradable, so I recommend using stable memory as the primary storage when possible.
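
For concreteness, here is a minimal sketch of that conventional approach, assuming the ic-cdk upgrade hooks and ciborium for serialization; the State type is a hypothetical stand-in for the canister's heap state.
use ic_cdk::api::stable::{StableReader, StableWriter};
use ic_cdk_macros::{post_upgrade, pre_upgrade};
use std::cell::RefCell;

// Hypothetical heap state serialized wholesale across upgrades.
#[derive(Default, serde::Serialize, serde::Deserialize)]
struct State { /* … */ }

thread_local! {
  static STATE: RefCell<State> = RefCell::default();
}

#[pre_upgrade]
fn pre_upgrade() {
  // Serialize the entire heap state into stable memory.
  STATE
    .with(|s| ciborium::ser::into_writer(&*s.borrow(), StableWriter::default()))
    .expect("failed to encode the state");
}

#[post_upgrade]
fn post_upgrade() {
  // Decode the state back once the new code version starts.
  let state = ciborium::de::from_reader(StableReader::default())
    .expect("failed to decode the state");
  STATE.with(|s| *s.borrow_mut() = state);
}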

The ic-stable-structures library aims to simplify managing data structures directly in stable memory. This article explains the philosophy behind the library and how to use it effectively. (It describes version 0.5.0 of the library.)

Design principles

The point is that you must decide, in advance, what the coding priorities and quality bars will be; otherwise, the team will have to waste time rewriting misconceived or substandard code.

Steve Maguire, Debugging the Development Process, the groundwork, p. 19

Software designs reflect their creators’ values. The following principles shaped the stable-structures library design.

As a result of these goals, using the library requires planning and understanding your data shape. For example, you might need to set an upper bound on a value size that you cannot adjust without data migration. Or you might have to decide how many memory pages you want to allocate for canister configuration. There are other libraries with similar goals whose authors chose a slightly different set of trade-offs, such as ic-stable-memory.

Abstractions

In solving a problem with or without a computer it is necessary to choose an abstraction of reality, i.e., to define a set of data that is to represent the real situation. The choice must be guided by the problem to be solved.

Niklaus Wirth, Algorithms + Data Structures = Programs, fundamental data structures, p. 1

Memory

The core abstraction of the library is the Memory trait, which models a WebAssembly memory instance. (That design decision is not a coincidence: eventually, canisters will access multiple stable memories, and each data structure might reside in a dedicated memory instance.)

pub trait Memory {
    /// Equivalent to WebAssembly memory.size.
    fn size(&self) -> u64;

    /// Equivalent to WebAssembly memory.grow.
    fn grow(&self, pages: u64) -> i64;

    /// Copies bytes from this memory to the heap (in Wasm, memory 0).
    fn read(&self, offset: u64, dst: &mut [u8]);

    /// Writes bytes from the heap (in Wasm, memory 0) to this memory.
    fn write(&self, offset: u64, src: &[u8]);
}
The Memory trait models the WebAssembly memory instance. It allocates memory in 64 KiB pages: the zero page spans addresses 0x0 to 0xffff, the first page spans 0x10000 to 0x1ffff, and so on.

The library provides several implementations of the Memory trait.

Restricted memory limits the primary memory to a contiguous page range. The example on the diagram demonstrates splitting a 5-page primary memory into two memories: the first memory spans pages from zero to two (exclusive), and the second memory spans pages from two to five (exclusive).

The most powerful and convenient way to create memories is to use the MemoryManager. This utility interleaves up to 255 non-intersecting memories in a single address space, acting similarly to a virtual memory subsystem in modern operating systems. The memory manager uses part of the parent memory to keep a dynamic translation table assigning page ranges to virtual memories.

The memory manager interleaves multiple virtual memories in a single primary memory, using the first few pages to store metadata.

Virtual memories have a non-contiguous representation, so a single write can translate to multiple system calls.
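
Here is a minimal sketch of how these pieces can be combined; the 16-page split and the memory ids are arbitrary choices for illustration.
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, RestrictedMemory};

fn set_up_memories() {
  // The first 16 pages form one contiguous region (e.g., for a small config cell)...
  let _config_memory = RestrictedMemory::new(DefaultMemoryImpl::default(), 0..16);

  // ...and the memory manager interleaves up to 255 virtual memories in the rest.
  let manager = MemoryManager::init(RestrictedMemory::new(
    DefaultMemoryImpl::default(),
    16..u64::MAX,
  ));

  // Each MemoryId identifies an independent virtual memory.
  let _balances_memory: VirtualMemory<_> = manager.get(MemoryId::new(0));
  let _log_memory: VirtualMemory<_> = manager.get(MemoryId::new(1));
}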

Storable types

The library does not impose any serialization format on you and does not provide a default. Depending on your needs, you might prefer Candid, Protocol Buffers, CBOR, Borsh, DER, or something else. The Storable trait abstracts data structures over your choice of serialization format.

The Storable trait abstracts data structures over your choice of serialization format.
pub trait Storable {
    /// Serializes a value of a storable type into bytes.
    fn to_bytes(&self) -> Cow<'_, [u8]>;

    /// Deserializes a value of a storable type from a byte array.
    ///
    /// REQUIREMENT: Self::from_bytes(self.to_bytes().to_vec()) == self
    fn from_bytes(bytes: Cow<[u8]>) -> Self;
}

Some data structures, such as stable vectors, need to know how much memory they must allocate for each instance of your storable type. Such types rely on an extension of this trait, BoundedStorable, which provides the necessary metadata. BoundedStorable types are analogous in function to Sized types, except that the compiler cannot deduce serialized sizes for you.

Some data structures require their values to implement the BoundedStorable trait so they know how much space to allocate for each item.
pub trait BoundedStorable: Storable {
    /// The maximum slice length that to_bytes can return.
    ///
    /// REQUIREMENT: self.to_bytes().len() ≤ Self::MAX_SIZE as usize
    const MAX_SIZE: u32;

    /// Whether all values of this type have the same length (equal to Self::MAX_SIZE)
    /// when serialized. If you are unsure about this flag, set it to false.
    ///
    /// REQUIREMENT: Self::IS_FIXED_SIZE ⇒ self.to_bytes().len() == Self::MAX_SIZE as usize
    const IS_FIXED_SIZE: bool;
}

The library implements these traits for a few basic types, such as integers, fixed-size arrays, and tuples, allowing you to get away without serialization libraries if you store only primitives. It also provides efficient variable-size bounded byte arrays, the Blob<N> type, where N is the maximal number of bytes this type can hold. For example, you can persist an IC principal as a Blob<29>.
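
If you need to persist a type the library does not cover, implementing the two traits by hand is usually a few lines of code. The following sketch (the Tokens type is made up for illustration) uses a fixed-width big-endian encoding.
use ic_stable_structures::{BoundedStorable, Storable};
use std::borrow::Cow;

/// A token amount encoded as a fixed-width big-endian integer.
struct Tokens(u64);

impl Storable for Tokens {
  fn to_bytes(&self) -> Cow<'_, [u8]> {
    Cow::Owned(self.0.to_be_bytes().to_vec())
  }

  fn from_bytes(bytes: Cow<[u8]>) -> Self {
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&bytes);
    Self(u64::from_be_bytes(buf))
  }
}

impl BoundedStorable for Tokens {
  const MAX_SIZE: u32 = 8;
  const IS_FIXED_SIZE: bool = true;
}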

Data structures

One has an intuitive feeling that data precede algorithms: you must have some objects before you can perform operations on them.

Niklaus Wirth, Algorithms + Data Structures = Programs, preface, p. xiii

The heart of the stable-structures library is a collection of data structures, each spanning one or more memories.

Stable structures do not compose: for example, you cannot construct a stable map containing stable vectors as values. (The reason for this restriction is simplicity: most data structures are significantly easier to implement correctly and efficiently if they can span an entire memory.) Many conventional storage systems impose the same restriction. SQL databases, for instance, do not allow tables to hold other tables as values, and Redis does not allow the nesting of its data types.

Using composite keys or defining several data structures linked with foreign keys, you can work around the restriction.

// BAD: stable structures do not support nesting.
type BalanceMap = StableBTreeMap<Principal, StableBTreeMap<Subaccount, Tokens>>;
// GOOD: use a composite key (a tuple) to avoid nesting.
// Use a range scan to find all subaccounts of a principal.
type BalanceMap = StableBTreeMap<(Principal, Subaccount), Tokens>;
// BAD: stable structures do not support nesting.
type TxIndex = StableBTreeMap<Principal, StableVector<Transaction>>;
// GOOD: use a composite key to avoid nesting.
// Use a range scan to find all transactions of a principal.
type TxIndex = StableBTreeMap<(Principal, TxId), Transaction>;

Let us examine the available data structures in detail.

Stable cell

A Cell represents a single value stored in stable memory in serialized form. The cell's contents in stable memory update every time you change the underlying value.

The core interface of the Cell stable structure.
impl<T: Storable, M: Memory> struct Cell<T, M> {
    /// Returns the current cell value.
    /// Complexity: O(1).
    pub fn get(&self) -> Option<&T>;

    /// Updates the cell value.
    /// Complexity: O(value size).
    pub fn set(&mut self, value: T) -> Result<T, ValueError>;
}
A Cell persists a single value in stable memory and caches it on the heap. The serialized value and the value on the heap are always in sync.

The primary use case for cells is storing canister configuration, as in the sketch below.
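
A minimal sketch of a configuration cell, assuming a single fee value that fits into a u64 (the names are illustrative):
use ic_stable_structures::{DefaultMemoryImpl, StableCell};
use std::cell::RefCell;

thread_local! {
  // u64 already implements Storable, so no custom serialization is needed.
  static TRANSFER_FEE: RefCell<StableCell<u64, DefaultMemoryImpl>> = RefCell::new(
    StableCell::init(DefaultMemoryImpl::default(), 10_000)
      .expect("failed to initialize the fee cell"),
  );
}

/// Updates the fee; set() writes the new value to stable memory before returning.
fn set_transfer_fee(fee: u64) {
  TRANSFER_FEE.with(|cell| {
    cell.borrow_mut()
      .set(fee)
      .expect("failed to write the fee to stable memory");
  });
}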

Stable vector

A Vec is a growable mutable array similar to std::vec::Vec. A stable vector stores its items by value, so it must know how much space it needs to allocate for each item, hence the BoundedStorable bound on the item type.

Stable vector takes advantage of the T::IS_FIXED_SIZE attribute of the item type. If the value size is not fixed, the actual value can occupy anything from zero to T::MAX_SIZE bytes, so the vector allocates T::MAX_SIZE and a few extra bytes per entry to store the actual entry size. If all the values are the same size, the vector implementation uses T::MAX_SIZE for the item slot, saving up to 4 bytes per entry. This reduction is primarily helpful for vectors of primitives (e.g., StableVec<u64>).

The core interface of the Vec stable structure.
impl<T: BoundedStorable, M: Memory> struct Vec<T, M> {
    /// Adds a new item at the vector's back.
    /// Complexity: O(T::MAX_SIZE).
    pub fn push(&self, item: &T) -> Result<usize, GrowFailed>;

    /// Removes an item from the vector's back.
    /// Complexity: O(T::MAX_SIZE).
    pub fn pop(&self) -> Option<T>;

    /// Returns the item at the specified index.
    /// Complexity: O(T::MAX_SIZE).
    pub fn get(&self, index: usize) -> Option<T>;

    /// Updates the item at the specified index.
    /// Complexity: O(T::MAX_SIZE).
    pub fn set(&self, index: usize, item: &T);

    /// Returns the number of items in the vector.
    /// Complexity: O(1).
    pub fn len(&self) -> usize;
}
A Vec is a growable mutable array. The data representation depends on the IS_FIXED_SIZE attribute of the item type. If the type's representation is not fixed-size, the vector implementation must record each entry's length.

The stable structure powering the Internet Identity service is a stable vector in disguise, though a less generic one.
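
A short sketch of a stable vector in use, following the interface above (the timestamp vector is a made-up example):
use ic_stable_structures::{DefaultMemoryImpl, StableVec};
use std::cell::RefCell;

thread_local! {
  // u64 is fixed-size, so each slot occupies exactly eight bytes.
  static BLOCK_TIMESTAMPS: RefCell<StableVec<u64, DefaultMemoryImpl>> = RefCell::new(
    StableVec::init(DefaultMemoryImpl::default())
      .expect("failed to initialize the timestamp vector"),
  );
}

/// Appends a timestamp; push() grows the underlying memory if needed.
fn record_block_timestamp(ts: u64) {
  BLOCK_TIMESTAMPS.with(|v| {
    v.borrow()
      .push(&ts)
      .expect("failed to grow the underlying memory");
  });
}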

Stable log

Sometimes called write-ahead logs or commit logs or transaction logs, logs have been around almost as long as computers and are at the heart of many distributed data systems and real-time application architectures.

A Log is an append-only list of arbitrary-sized values, similar to streams in Redis. The log requires two memories: the index storing entry offsets and the data storing raw entry bytes. The number of instructions needed to access old and append new entries does not depend on the number of items in the log, only on the entry size.

The core interface of the Log stable structure.
impl<T, Index, Data> struct Log<T, Index, Data>
where
  T: Storable,
  Index: Memory,
  Data: Memory,
{
    /// Adds a new entry to the log.
    /// Complexity: O(entry size).
    pub fn append(&self, bytes: &T) -> Result<u64, WriteError>;

    /// Returns the entry at the specified index.
    /// Complexity: O(entry size).
    pub fn get(&self, idx: u64) -> Option<T>;

    /// Returns the number of entries in the log.
    /// Complexity: O(1).
    pub fn len(&self) -> u64;
}
A Log is an append-only list of values. Logs need two memories: the index memory storing value offsets and the data memory storing raw entries. The image depicts a log with two values: the first entry is 100 bytes long, and the second entry is 200 bytes long.

The log is a versatile data structure that can be helpful in almost any application. The sketch below shows a typical two-memory setup.
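
A sketch of a log setup, assuming the library implements Storable for Vec<u8>, with both memories obtained from a memory manager (the event payloads are hypothetical):
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableLog};
use std::cell::RefCell;

type VM = VirtualMemory<DefaultMemoryImpl>;

thread_local! {
  static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
    RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

  // A log needs two memories: one for entry offsets (the index)
  // and one for the raw entry bytes (the data).
  static EVENTS: RefCell<StableLog<Vec<u8>, VM, VM>> = MEMORY_MANAGER.with(|mm| {
    RefCell::new(
      StableLog::init(
        mm.borrow().get(MemoryId::new(0)),
        mm.borrow().get(MemoryId::new(1)),
      )
      .expect("failed to initialize the event log"),
    )
  });
}

/// Appends a pre-serialized event and returns its index in the log.
fn record_event(encoded_event: Vec<u8>) -> u64 {
  EVENTS.with(|log| {
    log.borrow()
      .append(&encoded_event)
      .expect("failed to grow the log memories")
  })
}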

Stable B-tree

Deletion of items from a B-tree is fairly straightforward in principle, but it is complicated in the details.

Niklaus Wirth, Algorithms + Data Structures = Programs, dynamic information structures, p. 250

The BTreeMap stable structure is an associative container that can hold any bounded storable type. The map must know the sizes of the keys and values because it allocates nodes from a pool of fixed-size tree nodes. (The tree allocator is a free-list allocator, the simplest allocator capable of freeing memory.)

The interface of stable BTreeMap will look familiar to any seasoned Rust programmer.

The core interface of the BTreeMap stable structure.
impl<K, V, M> struct BTreeMap<K, V, M>
where
  K: BoundedStorable + Ord + Clone,
  V: BoundedStorable,
  M: Memory,
{
    /// Adds a new entry to the map.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn insert(&self, key: K, value: V) -> Option<V>;

    /// Returns the value associated with the specified key.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn get(&self, key: &K) -> Option<V>;

    /// Removes an entry from the map.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn remove(&self, key: &K) -> Option<V>;

    /// Returns an iterator over the entries in the specified key range.
    pub fn range(&self, range: impl RangeBounds<K>) -> impl Iterator<Item = (K, V)>;

    /// Returns the number of entries in the map.
    /// Complexity: O(1).
    pub fn len(&self) -> usize;
}
A BTreeMap is an associative container storing data in fixed-size dynamically-allocated nodes. Each node stores an array of key-value mappings ordered by key. The tree uses the free-list technique for allocating and freeing nodes. Dotted boxes represent the logical tree structure.

If you need a BTreeSet<K>, you can use BTreeMap<K, ()> instead.
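
For example, a hypothetical allow list of principals needs only keys:
// A "set" of principals: the unit value carries no data.
type AllowList = StableBTreeMap<Principal, ()>;

fn is_allowed(allow_list: &AllowList, principal: &Principal) -> bool {
  allow_list.get(principal).is_some()
}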

The range method is handy for selecting relevant data from a large data set.

Two examples of using the StableBTreeMap::range method.
/// Selects all subaccounts of the specified principal.
fn principal_subaccounts(
  balance_map: &StableBTreeMap<(Principal, Subaccount), Tokens>,
  principal: Principal,
) -> impl Iterator<Item = (Subaccount, Tokens)> + '_ {
  balance_map
    .range((principal, Subaccount::default())..)
    .take_while(move |((p, _), _)| *p == principal)
    .map(|((_, s), t)| (s, t))
}

/// Selects a transaction range for the specified principal.
fn principal_tx_range(
  tx_index: &StableBTreeMap<(Principal, TxId), Transaction>,
  principal: Principal,
  start: TxId,
  len: usize,
) -> impl Iterator<Item = Transaction> + '_ {
  tx_index
    .range((principal, start)..)
    .take(len)
    .map(|((_, _), tx)| tx)
}

Stable B-tree maps are ideal for large sets of relatively small objects (hundreds of bytes). For example, the bitcoin canister uses this data structure to store bitcoin block headers and UTXOs.

Constructing stable structures

Make sure that objects are initialized before they’re used.

Scott Meyers, Effective C++, third edition, Item 4, p. 26

We have to declare stable structures before we can use them. Each data structure T in the library declares at least three constructors:

  1. T::new creates a fresh structure in the given memory, potentially overwriting its previous contents.
  2. T::load recovers a structure that a previous code version wrote into the memory.
  3. T::init combines the two: it loads the structure if the memory already contains one and creates a fresh structure otherwise.

In practice, most canisters need only T::init.

The most common way to create a stable structure is by declaring a global variable. This approach reminds me of the way we organize tables in relational databases.

Declaring a stable data structure as a standalone global variable.
use ic_stable_structures::{StableBTreeMap, DefaultMemoryImpl};
use std::cell::RefCell;

thread_local! {
  static USERS: RefCell<StableBTreeMap<UserId, User, DefaultMemoryImpl>> =
    RefCell::new(StableBTreeMap::init(DefaultMemoryImpl::default()));
}

The main benefit of this approach is that the runtime will automatically initialize the stable structure the first time you access it. Ensure that you access all such variables in the post_upgrade hook, as in the sketch below; otherwise, you might not catch a configuration error until well after the upgrade completes.
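
A sketch of such a hook, assuming the ic-cdk post_upgrade macro and the USERS map declared above:
#[ic_cdk_macros::post_upgrade]
fn post_upgrade() {
  // Touch every lazily-initialized stable structure so that a
  // configuration error surfaces during the upgrade, not during a later call.
  USERS.with(|users| {
    let _ = users.borrow().len(); // O(1); forces initialization.
  });
}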

However, you do not have to declare each stable structure in a separate global variable. You can embed stable structures into regular structures if this helps you keep the code better organized. For example, the Bitcoin canister stores UTXOs in multiple stable structures combined into a regular struct.

Tying it all together

This section contains an example of declaring several data structures for a ledger canister compatible with the ICRC-1 specification. We will need three collections:

  1. A Cell to store the ledger metadata.
  2. A StableBTreeMap to map accounts (an account is a pair of a principal and a 32-byte subaccount) to their current balance.
  3. A StableLog for transaction history.

The ledger metadata is relatively small; it should fit into one megabyte, or sixteen WebAssembly memory pages. We will allocate the metadata cell in a restricted memory, partitioning the rest of the stable memory between the accounts map and the log using the memory manager.

Let us put these ideas to code. First, we must import a few type definitions from the library.

use ic_stable_structures::{
  DefaultMemoryImpl as DefMem,
  RestrictedMemory,
  StableBTreeMap,
  StableCell,
  StableLog,
  Storable,
};
use ic_stable_structures::memory_manager::{
  MemoryId,
  MemoryManager as MM,
  VirtualMemory,
};
use ic_stable_structures::storable::Blob;
use std::borrow::Cow;
use std::cell::RefCell;

Next, let us define the types of data we will store in our stable structures. I omit the field definitions because they are not essential for understanding the example. We derive serialization boilerplate for the Metadata and Tx types using the serde framework.

type Account = (Blob<29>, [u8; 32]);
type Amount = u64;

#[derive(serde::Serialize, serde::Deserialize)]
struct Metadata { /* … */ }

#[derive(serde::Serialize, serde::Deserialize)]
enum Tx {
  Mint { /* … */ },
  Burn { /* … */ },
  Transfer { /* … */ },
}

The next step is to decide on the data serialization format. I use the Concise Binary Object Representation (CBOR) in the example because this encoding served me well in production. Instead of implementing the Storable trait for Metadata and Tx, I define a generic wrapper type Cbor<T> that I use for all types I want to encode as CBOR and implement Storable only for the wrapper. I also implement std::ops::Deref to improve the ergonomics of the wrapper type.

/// A helper type implementing Storable for all
/// serde-serializable types using the CBOR encoding.
#[derive(Default)]
struct Cbor<T>(pub T)
where T: serde::Serialize + serde::de::DeserializeOwned;

impl<T> std::ops::Deref for Cbor<T>
where T: serde::Serialize + serde::de::DeserializeOwned
{
  type Target = T;

  fn deref(&self) -> &Self::Target { &self.0 }
}

impl<T> Storable for Cbor<T>
where T: serde::Serialize + serde::de::DeserializeOwned
{
  fn to_bytes(&self) -> Cow<[u8]> {
    let mut buf = vec![];
    ciborium::ser::into_writer(&self.0, &mut buf).unwrap();
    Cow::Owned(buf)
  }

  fn from_bytes(bytes: Cow<[u8]>) -> Self {
    Self(ciborium::de::from_reader(bytes.as_ref()).unwrap())
  }
}

The final and most important part is defining stable structures as global canister variables. Note the use of RestrictedMemory to split the canister memory into two non-intersecting regions and MemoryManager (abbreviated as MM) to interleave multiple data structures in the second region.

// NOTE: ensure that all memory ids are unique and
// do not change across upgrades!
const BALANCES_MEM_ID: MemoryId = MemoryId::new(0);
const LOG_INDX_MEM_ID: MemoryId = MemoryId::new(1);
const LOG_DATA_MEM_ID: MemoryId = MemoryId::new(2);

// NOTE: we allocate the first 16 pages (1 MiB) of the
// canister memory for the metadata.
const METADATA_PAGES: u64 = 16;

type RM = RestrictedMemory<DefMem>;
type VM = VirtualMemory<RM>;

thread_local! {
  static METADATA: RefCell<StableCell<Cbor<Option<Metadata>>, RM>> =
    RefCell::new(StableCell::init(
        RM::new(DefMem::default(), 0..METADATA_PAGES),
        Cbor::default(),
      ).expect("failed to initialize the metadata cell")
    );

  static MEMORY_MANAGER: RefCell<MM<RM>> = RefCell::new(
    MM::init(RM::new(DefMem::default(), METADATA_PAGES..u64::MAX))
  );

  static BALANCES: RefCell<StableBTreeMap<Account, Amount, VM>> =
    MEMORY_MANAGER.with(|mm| {
      RefCell::new(StableBTreeMap::init(mm.borrow().get(BALANCES_MEM_ID)))
    });

  static TX_LOG: RefCell<StableLog<Cbor<Tx>, VM, VM>> =
    MEMORY_MANAGER.with(|mm| {
      RefCell::new(StableLog::init(
        mm.borrow().get(LOG_INDX_MEM_ID),
        mm.borrow().get(LOG_DATA_MEM_ID),
      ).expect("failed to initialize the tx log"))
    });
}
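
As a quick illustration (this helper is not part of the ICRC-1 specification), an update method could adjust a balance and append the transaction to the log like this:
/// Credits `amount` tokens to `to` and records the transaction.
/// Returns the index of the new log entry.
fn record_mint(to: Account, amount: Amount, tx: Tx) -> u64 {
  BALANCES.with(|b| {
    let mut balances = b.borrow_mut();
    let old_balance = balances.get(&to).unwrap_or(0);
    balances.insert(to, old_balance + amount);
  });
  TX_LOG.with(|log| {
    log.borrow()
      .append(&Cbor(tx))
      .expect("failed to append to the transaction log")
  })
}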

We are all set to start working on the ledger! I leave implementing the rest of the specification as an exercise for the attentive reader.

Where to go next
