PROJECT TARSKI

Analog Neuromorphic Computing for Ultra-Low-Power AI

Sub-watt, instant AI experiences. Analog hardware delivering 100× power reduction for edge intelligence.

The Problem

AI's power crisis — visualized

Training vs. daily inference (GWh)

  • GPT-4 training (2022–2023): 50 GWh
  • ChatGPT daily load: 45 GWh/day

GPT-4's training consumed enough energy to power a city for days, and nearly that much is now spent every single day just running these models.

Market demand

Edge AI locked by watts

Edge AI market trajectory: $24.9B (2025) → $41.2B (2027) → $66.5B (2030)

$24.9B → $66.5B market growth throttled by digital efficiency limits.

Wearables die fast

Batteries drained in hours instead of days.

IoT sensors stall

Recharge trucks, not data streams.

Robots idle

Charge cycles dominate duty cycles.

Implants risk safety

Heat + maintenance windows limit adoption.

Solution

True analog computing

Digital AI (today)

  • Binary switching, energy burned every clock.
  • 100W+ per inference workload.
  • Heat + battery budgets kill edge deployments.

Analog neuromorphic (Project Tarski)

  • Continuous-value circuits — energy only on state change.
  • <1W for the same task.
  • 100× reduction by leaning on physics, not optimization.

Why it works

Your brain runs on 20 watts and still outperforms supercomputers on perception tasks. We build the same way: circuits that compute in analog, tolerate noise, and only sip power when information changes.

PCB-first prototyping

10× faster iteration vs. custom silicon loops.

Rust simulator

200× faster than SPICE with analog physics baked in.

Programmable architecture

Reconfigure analog fabric for new AI tasks.

Path to silicon

Prove on boards, then shrink to ASIC.

Flow

Simulation → Training → PCB → Silicon

Simulation

Rust physics engine runs 200× faster than SPICE with analog noise baked in.

Training

Hardware-aware learning tunes weights that tolerate drift, offsets, and stochasticity.

PCB

PCB-first architecture lets us validate in weeks before taping out silicon.

Silicon

Proven analog blocks transition to ASIC form factors for deployment.

Traction

Real progress

Current progress

  • Single-neuron prototype

    Under construction on breadboard with analog integrate-and-fire core.

  • SPICE validation

    Noise + temperature sweeps confirm circuit stability across tolerances.

  • KiCAD layout

    Multi-neuron PCB routing in progress for lab bring-up.

  • Rust simulator

    Framework operational, streaming hardware params into training loop.

Next milestones

  • Dec 2025

    Full network simulator

  • Mar 2026

    Neural network demonstrator (MNIST digit recognition)

  • 2027

    Silicon prototype + pilot partnerships

Previous work

GridAI

Optimized Ireland's energy grid using high-performance Rust (2.5M sim-years/hr).

Tyndall Institute

Laser power transmission research (26% efficiency achieved).

Patch

Co-founded concussion sensor startup with WiFi mesh (6,000 samples/s).

Applications

Unlocking impossible use cases

Target industries: $2-5B serviceable market in power-critical edge AI.

Robotics

All-day autonomous operation without recharging

Wearables

Week-long battery life with always-on AI

IoT Sensors

Decade-long deployments on coin cells

Medical Implants

Safe, ultra-low-power diagnostics

Drones

10× flight time extension

How it works

Analog integrate-and-fire neurons

Input

Weighted analog voltages — no ADC/quantization overhead.

Integration

Op-amp + capacitor accumulates charge like a dendrite.

Activation

Comparator spikes once thresholds are met.

Output

Continuous-valued signal cascades forward (<1 mW per neuron).

Network power

1,000 analog neurons draw ≈1 W, versus ~100 W for an equivalent digital implementation.
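The four stages above can be sketched as a behavioral software model of a leaky integrate-and-fire neuron. This is a toy in Python, not the board's actual circuit: the time constant, threshold, and reset values are illustrative assumptions standing in for the op-amp/capacitor/comparator stage.

```python
def simulate_lif_neuron(inputs, weights, dt=1e-4, tau=1e-3,
                        v_threshold=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: weighted analog inputs drive
    an RC integrator; a comparator fires once the accumulated voltage
    crosses threshold, then the capacitor is reset."""
    v = v_reset
    spikes = []
    for sample in inputs:                       # one analog sample per step
        drive = sum(w * x for w, x in zip(weights, sample))
        v += (dt / tau) * (drive - v)           # forward-Euler RC step
        if v >= v_threshold:                    # comparator stage
            spikes.append(True)
            v = v_reset                         # discharge after the spike
        else:
            spikes.append(False)
    return spikes

# A strong constant drive fires repeatedly; a weak one never crosses threshold.
strong = simulate_lif_neuron([[1.0, 1.0]] * 50, weights=[1.5, 1.5])
weak = simulate_lif_neuron([[0.1, 0.1]] * 50, weights=[1.5, 1.5])
```

Because the leak term pulls the voltage toward the drive level, a sub-threshold input settles without ever spiking, which is why weak signals cost almost nothing in the analog fabric.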

The challenge

Analog hardware is noisy

Problem

Component tolerances, temperature drift, and manufacturing variation make analog circuits unpredictable.

Hardware-aware training

ADC telemetry + finetuning

Embedded ADCs stream live voltage and current readings into our Rust stack, so each deployed board gets hardware-specific finetuning.

Component-aware programming

We program against measured tolerances, automatically compensating for drift, mismatch, and variation across neurons.

Result: neural networks behave like biological brains — noisy neurons, reliable systems.
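The hardware-aware idea above can be illustrated as noise injection during training: the same mismatch and drift the board will exhibit is applied inside every forward pass, so the learned weights carry margin against it. A minimal sketch, with assumed (not measured) imperfection magnitudes and a toy linear task:

```python
import random

random.seed(0)

# Assumed imperfection magnitudes (illustrative, not measured tolerances):
TOLERANCE = 0.05   # ±5% per-component mismatch on each weight
DRIFT_STD = 0.02   # slow thermal drift, modeled as a Gaussian offset

def noisy_forward(weights, x):
    """'Analog' dot product: every weight is perturbed the way a
    resistor network on a real board would be."""
    acc = 0.0
    for w, xi in zip(weights, x):
        w_hw = w * (1 + random.uniform(-TOLERANCE, TOLERANCE))
        w_hw += random.gauss(0.0, DRIFT_STD)
        acc += w_hw * xi
    return acc

def train(data, steps=2000, lr=0.05):
    """Plain SGD on a linear separator, with hardware noise injected
    into every forward pass so the learned weights tolerate it."""
    w = [0.0, 0.0, 0.0]          # two inputs plus a bias term
    for _ in range(steps):
        x, target = random.choice(data)
        err = target - noisy_forward(w, x)
        for i in range(len(w)):
            w[i] += lr * err * x[i]
    return w

# Toy AND-like task; the trailing 1.0 is the bias input.
data = [([1.0, 1.0, 1.0], 1.0), ([1.0, 0.0, 1.0], -1.0),
        ([0.0, 1.0, 1.0], -1.0), ([0.0, 0.0, 1.0], -1.0)]
w = train(data)
correct = sum((noisy_forward(w, x) > 0) == (t > 0) for x, t in data * 25)
```

Because the noise is present during learning, the optimizer settles on weights with enough decision margin that per-board variation no longer flips outputs.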

Technology stack

Prototype fast, transition to silicon

Hardware

  • Design: KiCAD (schematic + PCB)
  • Validation: ngspice (SPICE simulation)
  • Fabrication: Standard PCB assembly (JLCPCB)
  • Components: Off-the-shelf op-amps, comparators, passives

Software

  • Simulator: Custom-built in Rust (200× speedup)
  • Training: Python interface to Rust core
  • Models: Hardware noise, variation, nonlinearity
  • Algorithm: Hardware-aware surrogate gradient descent + time-to-event loss
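Surrogate gradient descent, listed above, works around the fact that a spike (a hard threshold) has zero gradient almost everywhere: the forward pass keeps the hard comparator, while the backward pass substitutes a smooth surrogate derivative. A minimal single-neuron sketch; the fast-sigmoid surrogate and its constants are illustrative choices, not the project's actual training loss:

```python
def spike(v, threshold=1.0):
    """Hard comparator: 1.0 if the membrane voltage crosses threshold."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Fast-sigmoid surrogate for d(spike)/dv, used only in the backward
    pass (the true derivative is zero almost everywhere)."""
    z = beta * abs(v - threshold)
    return beta / (1.0 + z) ** 2

def train_step(w, x, target, lr=0.5):
    """One gradient step on the error between spike output and target,
    with the comparator's gradient replaced by the surrogate."""
    v = sum(wi * xi for wi, xi in zip(w, x))   # membrane drive
    err = spike(v) - target
    # Chain rule with the surrogate standing in for the hard threshold:
    # dL/dw_i = (out - target) * surrogate(v) * x_i
    g = err * surrogate_grad(v)
    return [wi - lr * g * xi for wi, xi in zip(w, x)]

# Teach a two-input unit to spike for the pattern (1, 1) only.
w = [0.2, 0.2]
for _ in range(200):
    for x, t in [([1.0, 1.0], 1.0), ([1.0, 0.0], 0.0), ([0.0, 1.0], 0.0)]:
        w = train_step(w, x, t)
```

The same trick scales to layered spiking networks: hard thresholds in the forward pass keep the hardware semantics, while the surrogate keeps gradients flowing during training.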

Most neuromorphic projects use exotic custom fabrication (6-12 month cycles, €50K+ per iteration). We prototype on PCB in weeks for <€5K, then transition proven designs to silicon.

Competitive landscape

Analog-first advantage

Digital neuromorphic

  • Intel Loihi 2: 1,152-chip Hala Point system, 1,000× efficiency vs GPUs on niche tasks, but research-only and still digital.
  • BrainChip Akida: 500× lower energy yet fully digital, so still limited by switching power.

Analog neuromorphic

  • Mostly university research with slow, expensive custom fabrication.
  • Limited programmability and commercial focus.

Our advantage:

True analog + PCB-first iteration speed + programmable architecture + commercial focus.