An analog AI processor drawing just 36µW of power compared to a 110mW Arduino baseline — performing the same inference task at a fraction of the energy through asynchronous analog computing.
Modern AI inference is bottlenecked by digital hardware. GPUs and TPUs burn enormous amounts of energy shuttling data between memory and compute units. The fundamental limitation isn't algorithmic; it's architectural. Digital computing forces repeated conversion between continuous physical signals and discrete representations, wasting energy at every step.
This project explores analog computing as a native substrate for neural network inference. By performing matrix multiplications directly in the analog domain using physical properties of circuits, we can eliminate the digital bottleneck entirely.
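To make the idea concrete, here is a minimal sketch, assuming weights are programmed as conductances in a resistive crossbar so that Ohm's law performs the multiplications and Kirchhoff's current law performs the sums. The `AnalogCrossbar` class, its differential-pair weight mapping, and the `g_max` value are illustrative assumptions, not this project's actual circuit or API:

```python
import numpy as np

class AnalogCrossbar:
    """Idealized resistive crossbar: weights live as conductances.

    Each input voltage v_j drives a current G[i, j] * v_j through its
    cell (Ohm's law); the currents on each output line add up
    (Kirchhoff's current law), so the vector of output currents is a
    matrix-vector product computed by the physics itself.
    """

    def __init__(self, weights, g_max=1e-6):
        # Physical conductances cannot be negative, so signed weights
        # are mapped onto a differential pair of conductance arrays.
        w = np.asarray(weights, dtype=float)
        self.scale = g_max / np.abs(w).max()      # weights -> siemens
        self.g_pos = np.clip(w, 0.0, None) * self.scale
        self.g_neg = np.clip(-w, 0.0, None) * self.scale

    def matvec(self, voltages):
        # Differential output currents, rescaled back to weight units.
        i_out = (self.g_pos - self.g_neg) @ np.asarray(voltages, dtype=float)
        return i_out / self.scale

xbar = AnalogCrossbar([[0.5, -1.2], [2.0, 0.3]])
print(xbar.matvec([1.0, 0.5]))  # ≈ W @ v = [-0.1, 2.15]
```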
The system includes both a hardware prototype and a full software simulator that models the analog computation pipeline, allowing rapid iteration on circuit designs before fabrication.
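The simulator's internals aren't detailed in this section, but a minimal sketch of one step of the pipeline it models might look like the following, assuming Gaussian read noise on each cell, a uniform conductance-drift term, and a tanh-style saturating output stage; the function name and the `noise_std`, `drift`, and `sat` parameters are all illustrative assumptions rather than the project's real interface:

```python
import numpy as np

def noisy_matvec(weights, x, noise_std=0.01, drift=0.0, sat=3.0, rng=None):
    """One analog matrix-vector product with simple non-ideality models.

    noise_std: relative Gaussian read noise per cell (illustrative).
    drift:     fraction of conductance lost since programming.
    sat:       output-stage saturation level, modeled with tanh.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w * (1.0 - drift)                # slow conductance drift toward zero
    w = w + rng.normal(0.0, noise_std, w.shape) * np.abs(w)  # read noise
    y = w @ np.asarray(x, dtype=float)   # ideal analog current summation
    return sat * np.tanh(y / sat)        # saturating output stage
```

Sweeping `noise_std` and `drift` against a clean digital reference is one way such a simulator supports the rapid iteration described above: a circuit design only moves toward fabrication once inference accuracy holds up under realistic non-idealities.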
- No global clock signal: computations complete at the speed of physics rather than waiting for synchronization.
- Weights are stored as physical circuit properties, eliminating the memory-compute bottleneck.
- The simulator models analog noise, drift, and nonlinearities with fine-grained timing for realistic pre-fabrication testing.
- 36µW vs 110mW Arduino baseline: over 3000x more energy efficient for the same inference task.
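As a sanity check, the headline ratio is straight arithmetic: 110mW / 36µW = 110,000µW / 36µW ≈ 3,056, so "over 3000x" is consistent with the two power figures, assuming both devices spend roughly the same wall-clock time per inference (a power ratio equals an energy ratio only when time is held equal).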