Neuromorphic processors for edge inference.
We design spiking neural network processors from scratch in Verilog and validate them on real FPGA hardware. Event-driven cores that compute only when input spikes arrive, learn on-chip through programmable plasticity rules, and run at a fraction of the power of conventional accelerators. Four generations designed. Two open-sourced.
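The event-driven principle can be sketched in a few lines: a leaky integrate-and-fire neuron that does work only when a spike arrives, applying the accumulated leak lazily. This is an illustrative model, not the actual hardware datapath; the class name, threshold, and decay constant are all invented for the sketch.

```python
# Minimal event-driven LIF neuron sketch. All names and constants are
# illustrative, not the real core's datapath.

class LIFNeuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.v = 0.0             # membrane potential
        self.threshold = threshold
        self.decay = decay       # leak factor per elapsed timestep
        self.last_t = 0

    def on_spike(self, t, weight):
        """Process one input event; return True if the neuron fires."""
        # Apply the leak only for the timesteps that actually elapsed --
        # idle neurons cost nothing, which is where the power saving lives.
        self.v *= self.decay ** (t - self.last_t)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0         # reset on fire
            return True
        return False
```

Idle silicon stays idle: between events there is no computation at all, which is the core contrast with a clocked MAC-array accelerator.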
Loihi 1 feature parity. 14-opcode microcode learning engine, barrier-synchronised mesh network-on-chip with multi-chip serial links, triple RV32IMF RISC-V cluster. Validated on AWS F2 (VU47P) and Kria K26.
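A microcoded learning engine means the plasticity rule is a short program over per-synapse state, not fixed logic. The sketch below shows the idea with three invented opcodes; it does not reproduce the real engine's 14-opcode ISA, and every name here is illustrative.

```python
# Illustrative microcode plasticity interpreter. Opcode names and
# semantics are invented; the real 14-opcode ISA is not reproduced here.

def run_rule(program, regs):
    """Execute a tiny three-address microcode program on a register file."""
    for op, dst, a, b in program:
        if op == "ADD":
            regs[dst] = regs[a] + regs[b]
        elif op == "MUL":
            regs[dst] = regs[a] * regs[b]
        elif op == "SAT":                 # clamp dst into [a, b]; a, b immediates
            regs[dst] = max(a, min(b, regs[dst]))
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

# A Hebbian-style rule, w += pre_trace * post_trace, then saturate:
stdp_like = [
    ("MUL", "tmp", "pre", "post"),
    ("ADD", "w", "w", "tmp"),
    ("SAT", "w", 0.0, 1.0),
]
```

Swapping the program swaps the learning rule without touching the datapath, which is the point of microcoding plasticity.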
Programmable microcode neurons with Loihi 2 parity. Graded spike transmission, eligibility traces, reward-modulated plasticity. 28/28 hardware tests on AWS F2. SDK with CPU, GPU, UART, and PCIe backends.
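Eligibility traces plus reward modulation form a three-factor rule: coincident pre/post activity marks a synapse as eligible, and the weight only moves when a reward signal arrives later. A minimal sketch, with illustrative learning rate and trace decay:

```python
# Three-factor plasticity sketch: eligibility trace gated by reward.
# lr and tau are illustrative constants, not hardware parameters.

def step(w, elig, pre, post, reward, lr=0.1, tau=0.8):
    """One timestep of reward-modulated plasticity."""
    elig = tau * elig + pre * post   # decaying eligibility trace
    w += lr * reward * elig          # third factor gates the weight change
    return w, elig
```

With reward held at zero the weight never changes, no matter how correlated the activity is; the trace just decays while it waits for credit assignment.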
Time-division multiplexing, a hybrid async NoC with adaptive routing, 4 parallel learning threads, hardware short-term plasticity, and homeostatic scaling. NeurOS virtualisation scheduling 680+ concurrent networks. 19/19 hardware tests on AWS F2.
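Homeostatic scaling keeps a neuron's firing rate near a setpoint by multiplicatively rescaling its input weights, preserving their relative strengths. A sketch under assumed gain and setpoint values, not the hardware's actual update:

```python
# Homeostatic synaptic scaling sketch. The gain constant and the
# linear control law are illustrative choices.

def homeostatic_scale(weights, observed_rate, target_rate, gain=0.1):
    """Scale incoming weights up when firing too little, down when
    firing too much. Multiplicative, so relative strengths survive."""
    factor = 1.0 + gain * (target_rate - observed_rate) / target_rate
    return [w * factor for w in weights]
```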
Chiplet architecture with 2 Neural Compute Chiplets, 32 tiles each. Spike Tensor Core with 16×16 MAC array. 8 synapse formats including KAN B-spline. 4-way multi-threading. 8 hardware learning rules. AES-256-GCM with post-quantum key exchange.
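A KAN-style B-spline synapse stores control points and evaluates a learned one-dimensional function of its input, instead of applying a single scalar weight. The sketch below uses a linear (order-2) B-spline, i.e. piecewise-linear interpolation over a uniform grid on [0, 1]; the grid size and function shape are illustrative, and the hardware format is not reproduced here.

```python
# KAN B-spline synapse sketch: order-2 (piecewise linear) basis on a
# uniform grid over [0, 1]. Grid and control points are illustrative.

def bspline_synapse(x, ctrl):
    """Evaluate the synapse's learned function at input x in [0, 1]."""
    n = len(ctrl) - 1                  # number of grid intervals
    x = min(max(x, 0.0), 1.0)
    i = min(int(x * n), n - 1)         # which interval x falls in
    t = x * n - i                      # position within that interval
    return (1 - t) * ctrl[i] + t * ctrl[i + 1]
```

Higher-order splines add smoothness at the cost of wider control-point support; the linear case already shows why this is a synapse *format* rather than a weight.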
3,229 simulation tests passing. N4-Edge variant runs at 2.6% LUT utilisation and 0.378 W total on Kria K26.
All figures from FPGA-validated builds. Trained on GPU, quantised to 16-bit, deployed on hardware.
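The quantisation step of that flow can be sketched as a float-to-fixed-point mapping. Q1.15 is an illustrative choice of 16-bit format, not necessarily the one the hardware uses:

```python
# 16-bit quantisation sketch: floats in [-1, 1) mapped to signed Q1.15
# fixed point, with saturation at the int16 limits. Format is illustrative.

def quantise_q15(w):
    """Map a float weight to a signed 16-bit Q1.15 integer."""
    q = int(round(w * 32768))
    return max(-32768, min(32767, q))   # saturate to the int16 range

def dequantise_q15(q):
    """Recover the approximate float value from a Q1.15 integer."""
    return q / 32768.0
```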
Open to research collaboration, FPGA contract work, and partnerships.