NeuRaLaTeX: A machine learning library written in pure LaTeX

University of York

Abstract

NeuRaLaTeX is a scalar-valued autograd library, similar to MicroGrad, but written entirely in LaTeX! As part of your LaTeX document, you can specify the architecture of a neural network and its loss functions, define how to generate or load training data, and set training hyperparameters and experiments. When the document is compiled, the LaTeX compiler will generate or load the training data, train the network, run the experiments, and generate figures. Training debug output can be written to the LaTeX compiler log or included as part of the paper itself.

Why?

  • Ideal Language: LaTeX offers macros instead of variables, comma-separated strings instead of arrays, and flexible loop options like \pgfplotsforeachungrouped (see the loop sketch after this list).
  • Self-Contained Paper: Solves reproducibility by including all code, data, and experiments in the LaTeX source, executable during compilation.
  • Overleaf as Compute: Turns Overleaf into a free cloud compute service for training models.
  • Small-Scale Focus: arXiv’s 50MB limit encourages small datasets and models, leveling the playing field.
  • Unified Workflow: Combines paper writing and coding in one LaTeX environment, reducing context switching.
  • No Git Needed: Links to arXiv source files replace GitHub, eliminating pesky version control hassles.
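
To see why that first point about loops matters, here is a minimal sketch of ours (not taken from the NeuRaLaTeX source) using pgfplots' \pgfplotsforeachungrouped. Unlike TikZ's \foreach, each iteration runs without a TeX group, so macro assignments survive into the next iteration, which is what makes stateful loops possible.

\documentclass{article}
\usepackage{pgfplots} % provides \pgfplotsforeachungrouped
\begin{document}
% TikZ's \foreach runs each iteration inside a TeX group, so macro
% assignments are lost; the ungrouped variant keeps them, which is
% what makes stateful loops (e.g. accumulating a loss) possible.
\def\total{0}
\pgfplotsforeachungrouped \i in {1,...,5}{%
  \pgfmathparse{\total+\i}%
  \edef\total{\pgfmathresult}%
}
Sum of 1 to 5: \total % typesets 15.0
\end{document}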

Implementation

Our implementation is heavily based on MicroGrad, though with a better choice of implementation language! Like MicroGrad, NeuRaLaTeX implements backpropagation (reverse-mode autodiff) over a dynamically constructed DAG, which can represent arbitrarily complex neural networks. Unlike MicroGrad (which comprises around 150 lines of Python), our autograd engine requires nearly 700 lines of pure LaTeX, and the neural network library around 400 more! We estimate that this makes NeuRaLaTeX around 700% better. NeuRaLaTeX is object-oriented, using the oo module of TikZ/PGF.
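
Since PGF's oo module may be unfamiliar, here is a minimal, hypothetical sketch of a class declared with \pgfooclass. The class, attribute, and method names below are illustrative only; NeuRaLaTeX's actual Value class is considerably more involved.

\documentclass{article}
\usepackage{tikz}
\usepgfmodule{oo} % PGF's object-oriented programming module
\begin{document}
% A toy scalar-holding class (illustrative; not NeuRaLaTeX's Value).
\pgfooclass{Scalar}{
  \attribute data;   % the stored scalar
  \attribute grad=0; % accumulated gradient, initially zero

  % The constructor is a method sharing the class's name.
  \method Scalar(#1) {
    \pgfooset{data}{#1}
  }

  % Typeset the current state of this object.
  \method show() {
    Scalar(data: \pgfoovalueof{data}, grad: \pgfoovalueof{grad})\par
  }
}

% Usage: instantiate an object and call a method on it.
\pgfoonew \s=new Scalar(2.5)
\s.show() % prints: Scalar(data: 2.5, grad: 0)
\end{document}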

Getting the gradients of a computation is as simple as calling .backward()!


\input{nn.tex}

% Create two leaf Value objects; the arguments are (data, prev, op, isparam)
\pgfoonew \x=new Value(2.5,{},'',0)
\x.show()

\pgfoonew \y=new Value(0.3,{},'',0)
\y.show()

% Multiply \x by \y, creating a new Value object \z
\x.multiply(\y,z)
\z.show()

% Backpropagate gradients from \z through the graph
\z.backward()

% The inputs now hold their gradients
\x.show()
\y.show()
            

This prints:


Value(self: 1, data: 2.5, grad: 0.0, prev: , next: , op: '', isparam: 0, GC: 0.0)
Value(self: 2, data: 0.3, grad: 0.0, prev: , next: , op: '', isparam: 0, GC: 0.0)
Value(self: 3, data: 0.75, grad: 0.0, prev: 1,2, next: , op: *, isparam: 0, GC: 0.0)
Value(self: 1, data: 2.5, grad: 0.3, prev: , next: 3, op: '', isparam: 0, GC: 1.0)
Value(self: 2, data: 0.3, grad: 2.5, prev: , next: 3, op: '', isparam: 0, GC: 1.0)
            
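The printed gradients are just the product rule at work: since z = x·y, ∂z/∂x = y = 0.3 and ∂z/∂y = x = 2.5, which is exactly what the grad fields report after \z.backward(). (The GC field appears to flag whether a node's gradient has been computed.)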

And defining and calling an MLP is easy too!


\input{engine.tex}
\input{nn.tex}      

% Create two Value objects to store input values
\pgfoonew \x=new Value(1.0,{},'',0)
\pgfoonew \y=new Value(-1.0,{},'',0)

% Store the object IDs of the input Values in a list
\x.get id(\inputIDx)
\y.get id(\inputIDy)
\edef\templist{\inputIDx,\inputIDy}

% Define the MLP
\pgfoonew \mlp=new MLP(2,{4,4,1})

% Forward pass through MLP
\mlp.forward(\templist,output)
            
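Here MLP(2,{4,4,1}) mirrors MicroGrad's MLP(nin, nouts) constructor: two inputs feed two hidden layers of four neurons each, followed by a single output neuron. Note the comma-separated string of object IDs standing in for an array, exactly as promised above: forward walks that list as the network inputs and, mirroring how multiply returned \z earlier, writes the result into a new object named output.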

Demo

To demonstrate the power of our library, we trained a small MLP to classify the two classes of a nonlinear 2D spiral dataset. Training for 35 epochs on a dataset of 100 2D points (i.e. compiling the LaTeX document) took only 48 hours on a 2.4GHz Quad-Core MacBook Pro! The document was compiled using TeXShop, and the MacBook got very hot. We could have trained for more epochs, but it was obviously going to converge to zero loss and perfect test set performance, so we didn't feel the need.
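
As an illustration of compile-time data generation, here is a minimal sketch of ours using plain pgfmath (not NeuRaLaTeX's actual data-loading code) that accumulates points along one spiral arm into a macro:

\documentclass{article}
\usepackage{pgfplots} % provides \pgfplotsforeachungrouped and pgfmath
\begin{document}
% Accumulate points on one spiral arm into a space-separated list.
% The radius r grows linearly with the angle t (pgfmath trig uses degrees).
\def\points{}
\pgfplotsforeachungrouped \t in {0,36,...,324}{%
  \pgfmathsetmacro{\r}{\t/360}%
  \pgfmathsetmacro{\px}{\r*cos(\t)}%
  \pgfmathsetmacro{\py}{\r*sin(\t)}%
  \edef\points{\points (\px,\py)}%
}
Spiral points: \points
\end{document}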

[Figure: Training dataset (two-class 2D spiral)]

Results

[Figure: Eval dataset results (86% accuracy)]

Additionally, we propose two new metrics. The Written In LaTeX (WIL) metric is the proportion of a machine learning library's source code written in LaTeX, and the Source Code Of Method In Source Code Of Paper (SCOMISCOP) metric is the proportion of a method's source code contained within the source code of the paper describing it. See how NeuRaLaTeX is state-of-the-art on both metrics!
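
Measured in lines of code (LOC), the definitions can be written out as follows (our notation, not the paper's):

    WIL = LaTeX LOC / total LOC
    SCOMISCOP = method LOC appearing in the paper source / total method LOC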

ML Library       WIL
NeuRaLaTeX       1.0
PyTorch [1]      0.0
TensorFlow [2]   0.0
MATLAB [3]       0.0

Deep learning paper                                SCOMISCOP
NeuRaLaTeX                                         1.0
Attention Is All You Need [4]                      0.0
Deep Residual Learning for Image Recognition [5]   0.0

Now What?

This is just the beginning! We are working on adding more features to NeuRaLaTeX, including:

  • Accelerators: Support for arbitrary accelerators: GPUs, TPUs, Hadron Colliders! Let's get that compile time down!
  • In silicon: NeuRaLaTeX is not just the world's best and only fully reproducible machine learning library, but also the future substrate of all computation!
  • NeuRaLaTeX LLMs: A perfect marriage of document editing, model training, and ASI!

UnicorNeuRaLaTeX

We are seeking investment to accelerate the development and commercialisation of NeuRaLaTeX. We have decided to skip the angel and seed investment rounds and jump straight to unicorn status. Investors who would like a slice of the NeuRaLaTeX pie should post us a cheque for $10M in return for a 1% equity stake.

BibTeX

@misc{gardner2025neuralatexmachinelearninglibrary,
    title={NeuRaLaTeX: A machine learning library written in pure LaTeX}, 
    author={James A. D. Gardner and Will Rowan and William A. P. Smith},
    year={2025},
    eprint={2503.24187},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2503.24187},
}