Our implementation is heavily based on MicroGrad, although with a better choice of implementation language!
Like MicroGrad, Neuralatex implements backpropagation (reverse-mode autodiff) over a dynamically constructed DAG, allowing it to express arbitrarily complex neural networks.
Unlike MicroGrad (which comprises around 150 lines of Python), our autograd engine requires nearly 700 lines of pure LaTeX, and the neural network library around 400 more!
We estimate that this means Neuralatex is around 700% better. Neuralatex is object oriented, built on the oo module of TikZ/PGF.
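To give a flavour of the oo module (this is only an illustrative sketch; the Scalar class, its attributes and its methods are hypothetical and not taken from the Neuralatex source), a class is declared with \pgfooclass and instantiated with \pgfoonew:
\usepgfmodule{oo} % requires pgf/TikZ to be loaded
\pgfooclass{Scalar}{
  \attribute data;
  \attribute grad=0;
  % Constructor: store the initial value
  \method Scalar(#1) {
    \pgfooset{data}{#1}
  }
  % Typeset the stored value and gradient
  \method show() {
    Scalar(data: \pgfoovalueof{data}, grad: \pgfoovalueof{grad})\par
  }
}
\pgfoonew \s=new Scalar(1.5)
\s.show()
Each instance carries its own copy of the attributes, which is what the Value nodes of the autograd graph rely on.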
Getting the gradients of a computation is as simple as calling .backward()!
\input{engine.tex}
% Create two Value objects holding the scalar inputs
\pgfoonew \x=new Value(2.5,{},'',0)
\x.show()
\pgfoonew \y=new Value(0.3,{},'',0)
\y.show()
% Multiply them, storing the result in a new Value \z
\x.multiply(\y,z)
\z.show()
% Backpropagate from \z to populate the gradients of \x and \y
\z.backward()
\x.show()
\y.show()
This prints:
Value(self: 1, data: 2.5, grad: 0.0, prev: , next: , op: '', isparam: 0, GC: 0.0)
Value(self: 2, data: 0.3, grad: 0.0, prev: , next: , op: '', isparam: 0, GC: 0.0)
Value(self: 3, data: 0.75, grad: 0.0, prev: 1,2, next: , op: *, isparam: 0, GC: 0.0)
Value(self: 1, data: 2.5, grad: 0.3, prev: , next: 3, op: '', isparam: 0, GC: 1.0)
Value(self: 2, data: 0.3, grad: 2.5, prev: , next: 3, op: '', isparam: 0, GC: 1.0)
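These gradients are exactly what the chain rule gives for a product: for $z = x \cdot y$ we have $\partial z / \partial x = y = 0.3$ and $\partial z / \partial y = x = 2.5$, which is what appears in the grad fields of the two leaf Values after the call to .backward().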
And defining and calling an MLP is easy too!
\input{engine.tex}
\input{nn.tex}
% Create two Value objects to store input values
\pgfoonew \x=new Value(1.0,{},'',0)
\pgfoonew \y=new Value(-1.0,{},'',0)
% Store the object IDs of the input Values in a list
\x.get id(\inputIDx)
\y.get id(\inputIDy)
\edef\templist{\inputIDx,\inputIDy}
% Define the MLP
\pgfoonew \mlp=new MLP(2,{4,4,1})
% Forward pass through MLP
\mlp.forward(\templist,output)
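From here, one would inspect the result and backpropagate through the network just as in the scalar example. A minimal sketch, assuming (by analogy with \x.multiply(\y,z) creating \z) that the forward pass exposes its result through a handle named \output; this name is our assumption, not confirmed by the listing above:
% Hypothetical handle: assumes forward(...,output) created \output
\output.show()
% Backpropagate; the grad field of every MLP parameter is then populated
\output.backward()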