Noordzij's Cube

Jialin Lu · luxxxlucy.github.io · November 14, 2025

The Cube

Figure: Gerrit Noordzij's cube, from his seminal work The Stroke: Theory of Writing.

Gerrit Noordzij's cube, from The Stroke: Theory of Writing, is perhaps the world's first design space—the conceptual foundation for parametric design and variable fonts. It defines letterforms through three fundamental axes: translation, expansion, and rotation. These correspond to the physical properties of a writing instrument moving across a surface.

The cube represents a complete typographic parameter space. Each point within it corresponds to a unique letterform style.

Can We Learn It?

A small variational autoencoder is trained to see if a neural network can internalize this design space. The network compresses each letterform into a latent vector and reconstructs it, attempting to capture typographic features like contrast, weight, and stroke style.
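The training code is not included in this post; below is a minimal sketch of a VAE of this kind in PyTorch. The class name GlyphVAE, the 64×64 raster size, the layer widths, and the latent dimension are illustrative assumptions, not the actual architecture.

```python
# Minimal VAE sketch (all sizes are illustrative, not the actual architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlyphVAE(nn.Module):
    def __init__(self, img_size=64, latent_dim=8):
        super().__init__()
        d = img_size * img_size
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, d), nn.Sigmoid(),
        )
        self.img_size = img_size

    def encode(self, x):
        h = self.enc(x)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        out = self.dec(z)
        return out.view(-1, 1, self.img_size, self.img_size)

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterization trick
        return self.decode(z), mu, logvar

def vae_loss(x, recon, mu, logvar, beta=1.0):
    # Bernoulli reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```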

Figure: Train/test split in the Noordzij cube. Training data (blue) spans most of the cube; test samples (orange) are held out for validation. The cube is defined from $(0,0,0)$ to $(1,1,1)$; the corner simplex $(1-x) + (1-y) + (1-z) \leq 1$, with $0 \leq x, y, z \leq 1$, is held out for testing.
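To make the split concrete, here is a small helper (a sketch based directly on the formula in the caption; the function name is mine) that decides whether a sampled cube coordinate belongs to the held-out test simplex.

```python
# Train/test split in the Noordzij cube: the corner simplex near (1, 1, 1),
# defined by (1 - x) + (1 - y) + (1 - z) <= 1, is held out for testing.
def is_test_sample(x: float, y: float, z: float) -> bool:
    assert 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 and 0.0 <= z <= 1.0
    return (1 - x) + (1 - y) + (1 - z) <= 1.0

# Example: the (1, 1, 1) corner is a test point, the origin is a training point.
assert is_test_sample(1.0, 1.0, 1.0)
assert not is_test_sample(0.0, 0.0, 0.0)
```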

The network does learn something—it can interpolate between unseen letterforms, suggesting it has captured some continuous structure of the design space. But the interpolation isn't smooth. With more data and a larger model, it would likely improve. Still, this small experiment reveals fundamental issues.

Figure: Letter interpolation. Interpolating between two unseen 'e' forms; the transitions work but lack smoothness.
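The interpolation in the figure amounts to encoding the two endpoint glyphs and decoding evenly spaced points along the straight line between their latent codes. A sketch, reusing the hypothetical GlyphVAE from above:

```python
import torch

@torch.no_grad()
def interpolate(model, glyph_a, glyph_b, steps=8):
    """Decode evenly spaced points on the straight line between two latent codes."""
    mu_a, _ = model.encode(glyph_a)
    mu_b, _ = model.encode(glyph_b)
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * mu_a + t * mu_b        # plain linear interpolation
        frames.append(model.decode(z))
    return torch.cat(frames, dim=0)
```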

Why Is This Hard?

The interpolation problem. Linear paths in latent space rarely follow the true data manifold. Andrew Gordon Wilson and colleagues offer a useful insight (Garipov et al., "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs," NeurIPS 2018): optima in loss surfaces connect via curved trajectories, not straight lines, a concept called "mode connectivity." The same phenomenon rhymes here. Instead of relying on latent vector arithmetic that we assume works but often does not, we need interpolation schemes that respect the underlying data manifold, such as the Riemannian metrics derived from energy-based models in Béthune et al., "Follow the Energy, Find the Path: Riemannian Metrics from Energy-Based Models" (arXiv:2505.18230, 2025).
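The energy-based geodesics of Béthune et al. are beyond a blog-sized sketch, but even a simple substitute illustrates the point: spherical linear interpolation (slerp) keeps intermediate codes at a norm typical of samples from the Gaussian prior and often decodes more cleanly than the straight line. This is a common heuristic, not the method from the paper; the sketch assumes 1-D latent vectors.

```python
import torch

def slerp(z_a, z_b, t, eps=1e-7):
    """Spherical linear interpolation between two 1-D latent vectors.

    Unlike the straight line, slerp keeps intermediate points at a norm
    typical of draws from the Gaussian prior. A simple heuristic, not a
    true manifold geodesic.
    """
    a = z_a / (z_a.norm() + eps)
    b = z_b / (z_b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a, b), -1.0, 1.0))
    if omega.abs() < eps:                      # nearly parallel: fall back to lerp
        return (1 - t) * z_a + t * z_b
    return (torch.sin((1 - t) * omega) * z_a + torch.sin(t * omega) * z_b) / torch.sin(omega)
```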

The representation problem. Pixel representations limit resolution and editability. Implicit neural fields encode images as continuous functions, offering resolution independence. Signed distance functions (SDFs) give crisp vector-like rendering but struggle with artifacts at sharp corners—a fundamental limitation. Multi-channel SDFs (Chlumský, "Shape Decomposition for Multi-channel Distance Fields," Master's thesis, Czech Technical University in Prague, 2015) encode additional geometry to help. Multi-implicit approaches (Reddy et al., "A Multi-Implicit Neural Representation for Fonts," NeurIPS 2021) superimpose multiple functions to preserve fine features like serifs.
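As a toy illustration of the resolution independence that implicit fields offer, the sketch below rasterizes an analytic SDF (a circle standing in for a learned glyph field) at any target resolution; a learned field would simply replace the sdf argument with a network evaluation. The function names are mine.

```python
import numpy as np

def circle_sdf(x, y, cx=0.0, cy=0.0, r=0.6):
    """Signed distance to a circle: negative inside, positive outside."""
    return np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - r

def rasterize(sdf, resolution):
    """Sample an SDF on a grid and threshold the sign to get a binary image.

    Because the field is continuous, the same function can be sampled at
    any resolution; a learned glyph field would stand in for `sdf` here.
    """
    xs = np.linspace(-1.0, 1.0, resolution)
    x, y = np.meshgrid(xs, xs)
    return (sdf(x, y) <= 0.0).astype(np.float32)

low = rasterize(circle_sdf, 32)     # coarse preview
high = rasterize(circle_sdf, 1024)  # same shape, arbitrarily fine
```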

The editability problem. Direct deep SVG generation is hard: discrete Bézier control points and SVG commands turn out to be very difficult to train on and generally do not perform well. And even if we could train a network to generate SVG commands, designers still couldn't work with them easily. Simon Cozens noted at the first Fonts & AI conference (Trust, October 2025, trust.ilovetypography.com/fonts-and-ai/) that SVG commands suit computers, not designers. Foundry design work needs concrete representations like curve-in/curve-out handles that are interpretable and editable.

A Potential Path Forward

I have a rough idea: train an implicit field for expressiveness, then add a projection step that converts the output into parametrically defined shapes. This makes autotracing differentiable—fitting curves while managing gradient flow through implicit gradients. Related work on optimization layers (Agrawal et al., "Differentiable Convex Optimization Layers," NeurIPS 2019; Cho et al., "Differentiable Spline Approximations," arXiv:2110.01532, 2021) provides relevant techniques, but the challenge is quality: the curve fitting must handle sharp edges and corners properly, with special care for discontinuities.
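As a sketch of what that projection step could look like, assume we already have ordered boundary points sampled from the implicit field's zero level set. Fitting a single cubic Bézier segment by least squares is then a linear solve, which is differentiable, so gradients on the fitted control points flow back to the samples (and, in a full pipeline, into the field). The segmentation and corner handling that the previous paragraph flags as the hard part are omitted; the function name is mine.

```python
import torch

def fit_cubic_bezier(points):
    """Least-squares fit of one cubic Bezier segment to ordered boundary points.

    `points` is an (N, 2) tensor sampled from the implicit field's zero level
    set. The fit reduces to a linear solve, so it is differentiable: gradients
    on the control points flow back into the sampled points.
    """
    # Chord-length parameterization in [0, 1].
    d = torch.norm(points[1:] - points[:-1], dim=1)
    t = torch.cat([torch.zeros(1), torch.cumsum(d, 0)])
    t = t / t[-1]

    # Bernstein basis matrix, shape (N, 4).
    B = torch.stack([
        (1 - t) ** 3,
        3 * t * (1 - t) ** 2,
        3 * t ** 2 * (1 - t),
        t ** 3,
    ], dim=1)

    # Normal equations: (B^T B) C = B^T P, solved with a differentiable solve.
    lhs = B.T @ B
    rhs = B.T @ points
    control_points = torch.linalg.solve(lhs, rhs)   # shape (4, 2)
    return control_points
```

A real pipeline would precede this with a differentiable extraction of the zero level set and a curvature- or corner-aware segmentation that decides where one Bézier segment ends and the next begins.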

The core idea isn't new, but if done right, it could bridge neural expressiveness with parametric editability: a stable neural model with good shape bias that produces editable results designers can actually work with.

References

Noordzij Cube

In Memoriam Gerrit Noordzij 1931-2022

Axis Praxis: Noordzij Cube

Multidimensional Axis Visualizer

Related Technical Work

Ha, David. "Generating Large Images from Latent Vectors." blog.otoro.net, 2016.

Reddy, Pradyumna, et al. "A Multi-Implicit Neural Representation for Fonts." NeurIPS 2021.

Chlumský, Viktor. "Shape Decomposition for Multi-channel Distance Fields." Master's thesis, Czech Technical University in Prague, 2015.

Béthune, Louis, et al. "Follow the Energy, Find the Path: Riemannian Metrics from Energy-Based Models." arXiv:2505.18230, 2025.

Garipov, Timur, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs." NeurIPS 2018.

Cho, Minsu, Jihoon Chung, and Seok-Hee Hong. "Differentiable Spline Approximations." arXiv:2110.01532, 2021.

Agrawal, Akshay, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J. Zico Kolter. "Differentiable Convex Optimization Layers." NeurIPS 2019.