Hardware AI

Tarski AI Accelerator

An analog AI processor drawing just 36µW of power compared to a 110mW Arduino baseline — performing the same inference task at a fraction of the energy through asynchronous analog computing.


The Problem

Modern AI inference is bottlenecked by digital hardware. GPUs and TPUs burn enormous amounts of energy shuttling data between memory and compute units. The fundamental limitation isn't algorithmic; it's architectural. Digital computing forces constant conversion between analog signals and discrete representations, wasting energy at every step.


The Approach

This project explores analog computing as a native substrate for neural network inference. By performing matrix multiplications directly in the analog domain using physical properties of circuits, we can eliminate the digital bottleneck entirely.
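The core idea can be sketched numerically. In a resistive crossbar (a common substrate for analog matrix multiply, used here purely as an illustration, not a statement of this project's actual circuit topology), Ohm's law performs each multiplication and Kirchhoff's current law performs each sum:

```python
import numpy as np

# Illustrative sketch: each weight is stored as a conductance G[i][j].
# Driving input voltages V onto the rows makes each column collect a
# current I[j] = sum_i G[i][j] * V[i] -- a matrix-vector multiply done
# by circuit physics rather than by an ALU.
G = np.array([[1.0e-6, 2.0e-6],   # conductances in siemens (the weights)
              [0.5e-6, 1.5e-6]])
V = np.array([0.3, 0.8])          # input voltages

I = V @ G                         # column currents = analog dot products
```

The multiply-accumulate happens "for free" in the physics; the digital equivalent would spend energy on every fetch, multiply, and add.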

The system includes both a hardware prototype and a full software simulator that models the analog computation pipeline, allowing rapid iteration on circuit designs before fabrication.

Asynchronous Design

There is no global clock signal; computations complete at the speed of the underlying physics rather than waiting on synchronisation.
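One way to see what "completing at the speed of physics" means: a clockless analog stage is done when its output has settled to within tolerance of its final value. For a simple RC-limited node (the component values below are assumptions for illustration, not measurements from this design), settling time follows directly from the exponential decay:

```python
import math

# Assumed example values -- not taken from the actual Tarski circuit.
R = 10e3      # 10 kOhm effective resistance
C = 1e-12     # 1 pF node capacitance
tol = 1e-3    # settle to within 0.1% of the final value

# An RC node approaches its final value as exp(-t / RC), so the time to
# reach the tolerance band is t = RC * ln(1 / tol).
t_settle = R * C * math.log(1.0 / tol)   # ~69 ns for these values
```

The stage finishes when the circuit settles, with no clock edge to wait for; a synchronous design would have to budget a full clock period for the worst case.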

In-Memory Compute

Weights are stored as physical circuit properties, eliminating the memory-compute bottleneck.

Software Simulator

The simulator models analog noise, drift, and nonlinearities for realistic pre-fabrication testing.
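A minimal sketch of what such non-ideality modeling can look like. The function name, noise model, and magnitudes below are hypothetical, chosen only to illustrate the idea of perturbing an ideal analog multiply, and are not the project's actual simulator API:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_analog_matvec(G, V, read_noise=0.01, drift=0.001):
    """Hypothetical non-ideal crossbar read: perturb the stored
    conductances (drift) and the measured currents (read noise)."""
    # Multiplicative drift on the stored weights.
    G_eff = G * (1.0 + drift * rng.standard_normal(G.shape))
    I = V @ G_eff                    # ideal analog multiply-accumulate
    # Signal-proportional read noise on the output currents.
    I += read_noise * np.abs(I) * rng.standard_normal(I.shape)
    return I

# Example: a small crossbar with uniform 1 uS conductances.
G = 1e-6 * np.ones((4, 3))
V = np.array([0.2, 0.5, 0.1, 0.7])
I = noisy_analog_matvec(G, V)
```

Sweeping parameters like these against a reference network lets the accuracy impact of each non-ideality be measured before committing a design to fabrication.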

Energy Efficiency

36µW vs 110mW Arduino baseline — over 3000x more energy efficient for the same inference task.
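The headline ratio checks out arithmetically, assuming both systems take comparable time per inference so that the power ratio stands in for the energy ratio:

```python
# Quick check of the headline figure: 110 mW / 36 uW.
baseline_w = 110e-3   # Arduino baseline power draw
analog_w = 36e-6      # analog prototype power draw

ratio = baseline_w / analog_w   # ~3055, i.e. "over 3000x"
```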


Tech Stack

Circuit Design · Python · Rust · SPICE Simulation · Analog Electronics