Advent of Code 2025

  2025-12-27

A screenshot of the 2025 event page.

Advent of Code 2025 lasted only 12 days. I liked the shorter format since keeping up with puzzles for over three weeks in previous years was challenging.

This year, I solved the puzzles in Rust since I didn’t have the capacity to practice a new language. Rust is an excellent fit for Advent of Code: the tooling is mature, and the standard library is rich, so I didn’t need external dependencies (I used z3 to solve part two of day 10, which was painful, but then I found a way to solve it without external dependencies).

This article explains solutions to the puzzles I liked the most.

Day 3: Lobby

This puzzle asks us to find a string subsequence of length k that has the largest numeric value. For example, if k=2 and the input is 818181911112111, the output is 92.

In the first part, k=2 and the string length is n=100, so a brute-force O(n²) algorithm is fast enough. The second part increases k to 12, so a brute-force algorithm needs to check C(100, 12) = 1,050,421,051,106,700 combinations. Even if it processes a combination in 100 nanoseconds, it still requires over three years to solve a single instance.

The shape of this combinatorial explosion is also a clue to the solution. Binomial coefficients obey a recurrence relation that can be computed using dynamic programming.

C(n, k) = C(n-1, k) + C(n-1, k-1)

Similarly, if we define B to be the input string and M[k, i] to be the maximum numeric value of a length-k subsequence we can draw from B[0..i], we get the following recurrence:

M[k, i] = max(M[k, i-1], 10 × M[k-1, i-1] + B[i-1])

Together with boundary conditions M[k,0]=0 and M[0,i]=0, this translates to the following O(n×k) solution:

fn max_numeric_subsequence(b: &[u8], max_k: usize) -> u64 {
    let n = b.len();
    // m[k][i] is the largest number we can form by picking at most k digits from b[0..i].
    let mut m = vec![vec![0; n + 1]; max_k + 1];
    for k in 1..=max_k {
        for i in 1..=n {
            m[k][i] = m[k][i-1].max(10 * m[k-1][i-1] + b[i-1] as u64);
        }
    }
    m[max_k][n]
}
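
A quick sanity check on the example above (this snippet assumes max_numeric_subsequence expects digit values rather than ASCII bytes):

// k = 2, input 818181911112111: the best two-digit subsequence is 92.
let digits: Vec<u8> = "818181911112111".bytes().map(|b| b - b'0').collect();
assert_eq!(max_numeric_subsequence(&digits, 2), 92);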

Day 7: Laboratories

This puzzle asks us to simulate beam propagation. The beam starts at the top and splits into two beams each time it reaches a splitter. The first part asks us to count how many splitters the beam reaches; the second part asks how many paths the beam can take through the maze.

My programming career started with physics simulations, which remain close to my heart. I also like how the second part connects to day 11.

My solution computes answers to both parts in one grid traversal. It treats the grid as a directed acyclic graph in which each space is a parent of the space below it, and a space above a splitter is a parent of the two spaces on the splitter’s sides. The number of paths to a space is the sum of paths to its parents.

fn count_splits_and_timelines(g: &[Vec<char>]) -> (usize, usize) {
    let n = g.len();
    let m = g[0].len();
    // ts[i][j] counts the timelines (paths) that reach cell (i, j).
    let mut ts = vec![vec![0; m]; n];
    let s = g[0]
        .iter()
        .position(|c| *c == 'S')
        .expect("the first row should have the starting position");
    ts[0][s] = 1;

    let mut splits = 0;
    for i in 1..n {
        for (j, c) in g[i].iter().enumerate() {
            let t = ts[i - 1][j];
            if *c == '^' && t > 0 {
                splits += 1;
                if j > 0 {
                    ts[i][j - 1] += t;
                }
                if j < m - 1 {
                    ts[i][j + 1] += t;
                }
            } else {
                ts[i][j] += t;
            }
        }
    }
    let timelines = ts[n - 1].iter().sum::<usize>();
    (splits, timelines)
}
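
A quick sanity check on a made-up three-by-three grid with a single splitter directly under the start:

let grid: Vec<Vec<char>> = [".S.", ".^.", "..."]
    .iter()
    .map(|row| row.chars().collect())
    .collect();
// One splitter is hit, and the beam ends up in two cells of the last row.
assert_eq!(count_splits_and_timelines(&grid), (1, 2));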

Day 8: Playground

This puzzle gives us a set of 3d points and asks us to repeatedly add an edge between the pair of closest unconnected points. In the first part, we must multiply the sizes of the three largest connected components after adding a thousand edges. In the second part, we must stop at the first edge that makes the graph fully connected.

I like this puzzle because it’s a perfect use case for one of my favorite data structures: the union-find forest (my best-loved data structure is the Fenwick tree). This data structure maintains disjoint set partitions and allows merging them. In this problem, the set is the collection of points, the partitions are connected components, and merging joins components by adding an edge.

Here is my simple implementation of union-find:

struct UnionFind {
    parent: Vec<usize>,
    size: Vec<usize>,
    trees: usize,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        Self {
            parent: (0..n).collect(),
            size: vec![1; n],
            trees: n,
        }
    }

    /// Returns the root node of node X.
    /// Nodes that belong to the same partition share the root.
    fn find(&self, mut x: usize) -> usize {
        while self.parent[x] != x {
            x = self.parent[x];
        }
        x
    }

    /// Joins partitions to which nodes X and Y belong.
    /// Returns the root of the resulting partition.
    fn union(&mut self, x: usize, y: usize) -> usize {
        let root_x = self.find(x);
        let root_y = self.find(y);
        if root_x != root_y {
            let (p, c) = if self.size[root_x] >= self.size[root_y] {
                (root_x, root_y)
            } else {
                (root_y, root_x)
            };
            self.trees -= 1;
            self.parent[c] = p;
            self.size[p] += self.size[c];
            p
        } else {
            root_x
        }
    }

    /// Returns the size of the partition to which X belongs. 
    fn size(&self, x: usize) -> usize {
        self.size[self.find(x)]
    }

    /// Returns the partition count.
    fn tree_count(&self) -> usize {
        self.trees
    }
}
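
A short usage sketch on a toy example, showing how the partition count and sizes evolve as edges come in:

let mut uf = UnionFind::new(4);
uf.union(0, 1); // components: {0, 1}, {2}, {3}
uf.union(2, 3); // components: {0, 1}, {2, 3}
assert_eq!(uf.tree_count(), 2);
uf.union(1, 2); // a single component remains
assert_eq!(uf.tree_count(), 1);
assert_eq!(uf.size(3), 4);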

Both parts require computing all edges between points and sorting them by distance.

type Point = (i64, i64, i64);
type Dist = i64;
type Node = usize;

/// Returns the squared distance between two points.
fn dist2(a: &Point, b: &Point) -> Dist {
    let dx = a.0 - b.0;
    let dy = a.1 - b.1;
    let dz = a.2 - b.2;
    dx * dx + dy * dy + dz * dz
}

/// Returns the graph edges sorted by distance.
fn sorted_edges(boxes: &[Point]) -> Vec<(Dist, Node, Node)> {
    let n = boxes.len();
    let mut edges = vec![];
    for i in 0..n {
        for j in i + 1..n {
            edges.push((dist2(&boxes[i], &boxes[j]), i, j))
        }
    }
    edges.sort_by_key(|e| e.0);
    edges
}

In the first part, we must multiply the sizes of the three largest connected components after adding the top_k shortest edges. We don’t have to traverse the graph because union-find already tracks all the information we need.

fn part1(boxes: &[Point], top_k: usize) -> usize {
    let edges = sorted_edges(boxes);
    let n = boxes.len();
    let mut uf = UnionFind::new(n);

    for (_, i, j) in &edges[0..top_k] {
        uf.union(*i, *j);
    }

    let mut sizes: Vec<_> = (0..n)
        .filter_map(|i| (i == uf.find(i)).then_some(uf.size(i)))
        .collect();
    sizes.sort_by(|a, b| b.cmp(a));

    sizes[0..3].iter().product::<usize>()
}

In the second part, we must multiply the x-coordinates of the two points connected by the first edge that makes the graph connected. Union-find makes this trivial.

fn part2(boxes: &[Point]) -> i64 {
    let mut uf = UnionFind::new(boxes.len());
    let edges = sorted_edges(boxes);
    for (_, i, j) in &edges {
        uf.union(*i, *j);
        if uf.tree_count() == 1 {
            return boxes[*i].0 * boxes[*j].0;
        }
    }
    unreachable!()
}

Day 9: Movie theater

In this puzzle, we work with a 2d polygon whose edges are either vertical or horizontal. In the first part, we must find the largest rectangle whose opposite corners coincide with polygon corners. In the second part, we must restrict our search to rectangles fully contained within the polygon.

My computational geometry skills are weak, so I struggled with the second part, but I also learned a lot in the process.

I used the same driver function for both parts. It enumerates all candidate rectangles, checks that they satisfy a predicate passed as an argument, and selects the one with the largest area. For the first part, the predicate always returns true. For the second part, the predicate checks whether the polygon contains the rectangle.

type Point = (i64, i64);

fn largest_rectangle(
    coords: &[Point],
    p: impl Fn(usize, usize, &[Point]) -> bool,
) -> i64 {
    let n = coords.len();
    let mut max_area = 0;
    for i in 0..n {
        for j in (i + 1)..n {
            if !p(i, j, coords) {
                continue;
            }
            let (x1, y1) = coords[i];
            let (x2, y2) = coords[j];
            let area = ((x2 - x1).abs() + 1) * ((y2 - y1).abs() + 1);
            max_area = max_area.max(area);
        }
    }
    max_area
}
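
The two parts then reduce to two calls (a sketch assuming coords holds the polygon corners; rect_in_polygon is defined below):

let part1 = largest_rectangle(&coords, |_, _, _| true);
let part2 = largest_rectangle(&coords, rect_in_polygon);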

Now the juicy part: how do we check whether a polygon contains a rectangle? We must verify two conditions:

  1. The polygon contains all the rectangle’s corners. Since two corners come from the polygon, we only need to check the other two.
  2. None of the rectangle’s edges cross any polygon edge.

fn rect_in_polygon(a: usize, b: usize, coords: &[Point]) -> bool {
    let (x1, y1) = coords[a];
    let (x2, y2) = coords[b];
    let c = (x1, y2);
    let d = (x2, y1);
    point_in_polygon(c, coords)
        && point_in_polygon(d, coords)
        && !line_crosses_polygon(coords[a], c, coords)
        && !line_crosses_polygon(coords[a], d, coords)
        && !line_crosses_polygon(coords[b], c, coords)
        && !line_crosses_polygon(coords[b], d, coords)
}

To check whether a polygon contains a point, I used a simplified version of the ray casting algorithm. It casts a horizontal ray from the point and counts the number of polygon edges it crosses. If this number is odd, the point lies within the polygon.

A horizontal ray from point (x,y) intersecting a vertical polygon edge.

/// Checks whether a point lies in a polygon.
fn point_in_polygon(p: Point, vertices: &[Point]) -> bool {
    let mut inside = false;
    let n = vertices.len();
    let (x, y) = p;
    for i in 0..n {
        let (x1, y1) = vertices[i];
        let (_,  y2) = vertices[(i + 1) % n];
        if (y < y1) != (y < y2) && x1 >= x {
            inside = !inside;
        }
    }
    inside
}

The (y < y1) != (y < y2) condition is subtle and somewhat cryptic. It’s equivalent to the more verbose y1.min(y2) <= y && y < y1.max(y2). The asymmetric inequality (only one bound is inclusive) ensures that the edge is vertical (y1 != y2) and that we don’t double-count intersections at polygon corners.
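
As a tiny check of the ray-casting logic (a made-up example, not puzzle data), consider a 10×10 square:

let square: [Point; 4] = [(0, 0), (10, 0), (10, 10), (0, 10)];
assert!(point_in_polygon((5, 5), &square));   // the ray crosses only the right edge (x = 10)
assert!(!point_in_polygon((15, 5), &square)); // no polygon edge lies to the right of this point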

To test whether two lines cross, we check that the x-coordinate of the vertical line lies between the x-coordinates of the horizontal line’s endpoints, and that the y-coordinate of the horizontal line lies between the y-coordinates of the vertical line’s endpoints. Note that the lines must cross, not just touch (hence the strict inequalities in lines_cross).

Horizontal line a-b crossing vertical line c-d.

/// Checks whether line a-b crosses line c-d.
/// Both lines must be parallel to one of the axes.
fn lines_cross(a: Point, b: Point, c: Point, d: Point) -> bool {
    match (a.0 == b.0, c.0 == d.0) {
        (true, true) => false, // both vertical
        (true, false) => {
            // a-b is vertical, c-d is horizontal
            assert_eq!(c.1, d.1);
            let xmin = c.0.min(d.0);
            let xmax = c.0.max(d.0);
            let ymin = a.1.min(b.1);
            let ymax = a.1.max(b.1);
            xmin < a.0 && a.0 < xmax && ymin < c.1 && c.1 < ymax
        }
        (false, true) => lines_cross(c, d, a, b),
        (false, false) => false, // both horizontal
    }
}

/// Checks whether line a-b crosses any of the polygon edges.
fn line_crosses_polygon(a: Point, b: Point, vertices: &[Point]) -> bool {
    let n = vertices.len();
    for i in 0..n {
        let c = vertices[i];
        let d = vertices[(i + 1) % n];
        if lines_cross(a, b, c, d) {
            return true;
        }
    }
    false
}

Day 10: Factory

This puzzle is the most elaborate of the entire event. I think of it in terms of vectors. We start with a set of n m-dimensional bit vectors B = {b_1, …, b_n} (B stands for buttons).

b_i = (x_1, …, x_m), where x_j ∈ {0, 1}

The first part of the puzzle gives us a target binary vector t and asks us to find integer coefficients k_i such that:

k_1 b_1 ⊕ ⋯ ⊕ k_n b_n = t

Furthermore, we must minimize the sum k_1 + ⋯ + k_n. In other words, we must find the smallest subset of vectors b_i that yields t when xored.

My first instinct was to treat this as a shortest path problem in a state graph. The vertices correspond to current xor sums, and the edges correspond to vector additions (xoring in some b_i). The search starts at the zero vector and must reach t. Since the number of dimensions is small (up to ten), I represent vectors as 64-bit integers.

fn lights_min_presses(buttons: &[u64], target: u64) -> usize {
    if target == 0 {
        return 0;
    }
    let mut visited: HashSet<u64> = HashSet::new();
    let mut q = VecDeque::new();
    let start = 0;
    visited.insert(start);
    q.push_back((start, 0));
    while let Some((v, dist)) = q.pop_front() {
        for b in buttons {
            let s = v ^ *b;
            if visited.contains(&s) {
                continue;
            }
            if s == target {
                return dist + 1;
            }
            visited.insert(s);
            q.push_back((s, dist + 1));
        }
    }
    panic!("Target state is unreachable: {buttons:?} {target}");
}
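
A small made-up instance: with buttons 011, 101, and 111, lighting up 110 requires pressing the first two buttons.

let buttons: [u64; 3] = [0b011, 0b101, 0b111];
assert_eq!(lights_min_presses(&buttons, 0b110), 2); // 011 ^ 101 = 110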

The second part has the same structure, but uses normal addition instead of xor, and the target vector can contain arbitrary non-negative integers. The objective remains minimizing the sum of coefficients k_i.

k_1 b_1 + ⋯ + k_n b_n = t

I tried adapting the graph-search solution and exploring dynamic programming techniques, but nothing worked. The values in the target vector were too large (over 50 on average in my input) for my programs to finish in a reasonable time.

Like many solvers, I resorted to the SMT solver z3. Rust has nice bindings, and my code worked on the first try.

fn joltage_min_presses_z3(
    ndims: usize,
    buttons: &[u64],
    target: &[u16],
) -> usize {
    use z3::{ast::Int, Optimize, SatResult};
    let opt = Optimize::new();

    // Define the k_i variables.
    let mut vars = vec![];
    for i in 0..buttons.len() {
        let k_i = Int::fresh_const(format!("k_{}", i).as_str());
        opt.assert(&k_i.ge(0));
        vars.push(k_i);
    }

    // Define a constraint for every dimension.
    for d in 0..ndims {
        let mut vals = vec![];
        for (i, b) in buttons.iter().enumerate() {
            if *b & (1 << d) > 0 {
                vals.push(vars[i].clone());
            }
        }
        opt.assert(&Int::add(&vals).eq(target[d]));
    }

    // Minimize the sum of k_i.
    opt.minimize(&Int::add(&vars));

    let mut result = 0;
    match opt.check(&[]) {
        SatResult::Sat => {
            let model = opt.get_model().unwrap();
            for v in vars.iter() {
                let val = model
                    .get_const_interp(v)
                    .unwrap()
                    .as_i64()
                    .unwrap();
                result += val as usize;
            }
        }
        _ => {
            panic!("No solution found.");
        }
    }
    result
}

Divide and conquer

The z3 solution feels unsatisfactory: I didn’t become any wiser (except for learning that z3 is awesome, of course), and my compile times exploded. Luckily, Reddit user u/tenthmascot posted an ingenious solution that was fast and didn’t require any dependencies.

The solution relies on this observation: if all k_i are even (k_i = 2k_i′), then each component in the target vector must also be even. We can halve both the coefficients and the target to get a smaller instance of the same problem, reducing the search space.

(2k_1 b_1 + ⋯ + 2k_n b_n = 2t) ⟺ (k_1 b_1 + ⋯ + k_n b_n = t)

We can’t be sure all k_i are even, but we can make them even by subtracting at most one. Which k_i should we decrement? We don’t know, so we try all 2^n combinations and pick the best result!

Unfortunately, testing all combinations every time is still too slow. Naive approaches must evaluate around 50^n combinations (where 50 is a typical value from the target vector). Our clever way probes… (2^n)^(log₂ 50) = 50^n combinations.

Luckily, we can do better: restrict the search to combinations matching the target’s parity. If an equation holds under normal arithmetic, it must also hold modulo two.

(k_1 b_1 + ⋯ + k_n b_n = t) ⟹ ((k_1 mod 2) b_1 ⊕ ⋯ ⊕ (k_n mod 2) b_n = t mod 2)

For example, if the target vector is (32, 93, 66, 72, 61), its parity vector is (0, 1, 0, 0, 1), and we need to consider only those combinations of b_i that yield the same parity when xored. Now we deal with Boolean variables just like we did in the first part, but this time we must look at all solutions, not only the optimal one.

If combinations of input vectors b_i cover the parity space uniformly, we only evaluate about 2^n / 2^m = 2^(n−m) candidates at each step. All instances in my input exhibit near-perfect distribution (I wrote a program that generates random vectors and prints their parity group size distributions; uniform coverage seems to be the norm). To exploit this reduction, we precompute parity groups and consult them at every recursion level.

Here is my implementation of u/tenthmascot’s idea:


type ParityGroups = HashMap<u64, Vec<u64>>;

fn parity_groups(ndims: usize, buttons: &[u64]) -> ParityGroups {
    let n = buttons.len();
    assert!(n <= 64);
    let mut groups = ParityGroups::new();
    // Use a 64-bit integer to represent subsets of buttons.
    for mask in 0..(1u64 << n) {
        let mut parity_bits = 0;
        for i in 0..n {
            if mask & (1 << i) == 0 {
                continue;
            }
            for d in 0..ndims {
                if buttons[i] & (1 << d) > 0 {
                    parity_bits ^= 1 << d;
                }
            }
        }
        groups.entry(parity_bits).or_default().push(mask);
    }
    groups
}

fn joltage_min_presses(
    groups: &ParityGroups,
    buttons: &[u64],
    target: &[u16],
) -> Option<usize> {
    if target.iter().sum::<u16>() == 0 {
        return Some(0);
    }

    // Compute parity vector of the target.
    let mut parity = 0u64;
    for (d, t_d) in target.iter().enumerate() {
        if t_d & 1 > 0 {
            parity |= 1u64 << d;
        }
    }

    let mut best: Option<usize> = None;

    // Loop over subsets that give the desired parity.
    'outer: for subset in groups.get(&parity)? {
        // Compute the arguments for a smaller problem.
        let mut new_target = target.to_vec();
        let mut presses = 0;
        for i in 0..buttons.len() {
            if subset & (1 << i) == 0 {
                continue;
            }
            presses += 1;
            for d in 0..target.len() {
                if buttons[i] & (1 << d) > 0 {
                    if new_target[d] == 0 {
                        continue 'outer;
                    }
                    new_target[d] -= 1;
                }
            }
        }
        for v in new_target.iter_mut() {
            assert_eq!(*v % 2, 0);
            *v /= 2;
        }

        // Solve the smaller problem recursively.
        if let Some(x) = joltage_min_presses(groups, buttons, &new_target) {
            let total = presses + 2 * x;
            best = Some(best.map_or(total, |v| v.min(total)));
        }
    }
    best
}

Parity groups also yield a simple algorithm for part one: pick the smallest subset in the target’s parity group.

fn lights_min_presses_parity(groups: &ParityGroups, target: u64) -> usize {
    groups
        .get(&target)
        .expect("no solution found")
        .iter()
        .map(|mask| mask.count_ones() as usize)
        .min()
        .unwrap()
}
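
On the made-up instance from the breadth-first search example, the only subset with parity 110 consists of the first two buttons, so both approaches agree:

let buttons: [u64; 3] = [0b011, 0b101, 0b111];
let groups = parity_groups(3, &buttons);
assert_eq!(lights_min_presses_parity(&groups, 0b110), 2);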

Day 11: Reactor

This puzzle asks us to count paths between two vertices in a directed acyclic graph. That’s the same problem as day seven, part two, but this time the graph is general, not grid-embedded. On day seven, the grid structure provided a natural topological order. Here, we must compute it ourselves.

type Graph = Vec<Vec<usize>>;

/// Returns the vertices of a graph in a topological order.
fn toposort(g: &Graph, start: usize) -> Vec<usize> {
    fn go(
        g: &Graph,
        v: usize,
        visited: &mut Vec<bool>,
        out: &mut Vec<usize>,
    ) {
        if visited[v] {
            return;
        }
        visited[v] = true;
        for w in &g[v] {
            go(g, *w, visited, out);
        }
        out.push(v);
    }

    let mut out = vec![];
    let mut visited = vec![false; g.len()];
    go(g, start, &mut visited, &mut out);
    out.reverse();
    out
}

/// Computes the number of paths between two vertices.
fn count_paths(g: &Graph, start: usize, end: usize) -> usize {
    let mut d = vec![0; g.len()];
    d[start] = 1;
    for p in toposort(g, start) {
        for c in &g[p] {
            d[*c] += d[p];
        }
    }
    d[end]
}
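
A minimal example: a diamond-shaped graph (my own toy input) has exactly two paths from the source to the sink.

// Vertex 0 branches to 1 and 2, which both lead to 3.
let g: Graph = vec![vec![1, 2], vec![3], vec![3], vec![]];
assert_eq!(count_paths(&g, 0, 3), 2);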

In the second part, we must count paths from svr to out that pass through both fft and dac (in any order). The straightforward solution multiplies path counts for segments svr -> fft -> dac -> out and svr -> dac -> fft -> out.

let fft_dac =
    count_paths(&g, svr, fft) *
    count_paths(&g, fft, dac) *
    count_paths(&g, dac, out);
let dac_fft =
    count_paths(&g, svr, dac) *
    count_paths(&g, dac, fft) *
    count_paths(&g, fft, out);
println!("{}", fft_dac + dac_fft);

Of course, only one of count_paths(&g, dac, fft) and count_paths(&g, fft, dac) can be non-zero.

Day 12: Christmas tree farm

This puzzle asks whether a rectangle of given dimensions can fit a set of irregular tiles. The problem seemed overwhelmingly complex, and the input sizes were large, so I suspected a trick. After all, the final day is often cheesy.

One smoke test for whether a rectangle can fit a set of tiles is checking whether its area is large enough. If the rectangle’s area is 360 square units and the tiles occupy 365, the instance is clearly unsolvable.

I printed the area ratios for all instances in my input. They fell into two groups: ratios below 80% and ratios above 100%. I assumed that whenever the tiles could fit by area, they did fit. Luckily, this guess was correct.

My solution boils down to the following humble heuristic:

let tiles_per_shape: Vec<usize> =
    shapes.iter().map(|g| grid_count(g, &'#')).collect();

let mut total = 0usize;
for instance in instances {
    let tile_count = tiles_per_shape
        .iter()
        .zip(instance.shape_counts.iter())
        .map(|(x, y)| x * y)
        .sum::<usize>();

    if tile_count <= instance.w * instance.h {
        total += 1;
    }
}
println!("{total}");

If you want to dig deeper and solve the problem exactly, Knuth’s dancing links algorithm seems to be a promising direction to explore.

Happy holidays!
