128 cores. 524K neurons at 24-bit precision, 1M at 8-bit, up to 8.4M virtual with TDM. Hybrid ANN/SNN. Hardware virtualization. On-chip continual learning.
Hybrid Computing
Each core can switch between spiking neural network mode and INT8 multiply-accumulate mode. Run classical deep learning layers alongside spiking layers on the same silicon. No other neuromorphic chip does this.
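A conceptual sketch of what a dual-mode core computes (function names, the mode flag, and the toy leak/threshold constants are invented for illustration, not the N3 RTL): the same core either performs a saturating INT8 multiply-accumulate in ANN mode or a leaky integrate-and-fire update in SNN mode.

```python
# Illustrative dual-mode core: INT8 MAC (ANN) vs. LIF update (SNN),
# selected by a per-core mode flag. Constants are toy values.
def core_step(mode, weights, inputs, state=0.0):
    if mode == "ann_int8":
        # Saturating INT8 multiply-accumulate: clamp to [-128, 127]
        acc = sum(w * x for w, x in zip(weights, inputs))
        return max(-128, min(127, acc))
    elif mode == "snn_lif":
        # Leaky integrate-and-fire: leak, integrate, fire on threshold
        v = 0.9 * state + sum(w * x for w, x in zip(weights, inputs))
        return 1 if v >= 1.0 else 0
    raise ValueError(f"unknown mode: {mode}")

print(core_step("ann_int8", [3, -2, 5], [10, 4, 20]))   # 3*10 - 2*4 + 5*20 = 122
print(core_step("snn_lif", [0.6, 0.6], [1, 1]))         # v = 1.2 >= 1.0 -> spike
```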
4-Level Memory
L1 SRAM per core (96 KB), L2 shared tile cache (1 MB), L3 external DRAM (500M+ synapses), L4 CXL fabric. Loihi 2 has two levels. N3 has four, with hardware-managed caching and LRU eviction.
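The hardware-managed LRU behavior can be pictured with a minimal software model (cache capacity, block size, and class names are invented; this is a sketch of the caching policy, not the N3 memory controller): an L2-style cache holds a few synapse blocks, a miss fetches from backing DRAM, and the least recently used block is evicted.

```python
from collections import OrderedDict

# Minimal model of LRU-managed synapse block caching. Capacity and
# block contents are toy values for illustration only.
class SynapseCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block_id -> cached data
        self.misses = 0

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)    # mark most recently used
        else:
            self.misses += 1                     # fetch from "DRAM"
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block_id] = f"weights[{block_id}]"
        return self.blocks[block_id]

cache = SynapseCache(capacity=2)
for b in [0, 1, 0, 2, 1]:    # access pattern: block 1 is evicted by 2,
    cache.get(b)             # so the final get(1) misses again
print(cache.misses)          # 4
```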
NeurOS Virtualization
Hardware-scheduled time-division multiplexing with dirty-page tracking and compressed DMA context switching. Each physical neuron handles 8 virtual time slots. No other neuromorphic chip offers hardware virtualization.
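The 8-slot TDM idea can be sketched in a few lines of Python (sizes, the LIF constants, and the context layout are illustrative assumptions, not the NeurOS scheduler): one bank of physical neuron state serves eight virtual slots per timestep, with each slot's membrane state restored before its update and saved after.

```python
# Toy time-division multiplexing: N_PHYSICAL neurons host N_SLOTS
# virtual networks by swapping membrane-state contexts each timestep.
N_PHYSICAL = 4
N_SLOTS = 8

# One saved context (list of membrane potentials) per virtual slot
contexts = [[0.0] * N_PHYSICAL for _ in range(N_SLOTS)]

def lif_step(state, inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire update; returns (new_state, spikes)."""
    new_state, spikes = [], []
    for v, i in zip(state, inputs):
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset on spike
        else:
            spikes.append(0)
        new_state.append(v)
    return new_state, spikes

def timestep(slot_inputs):
    """Run all virtual slots on the same physical neurons."""
    all_spikes = []
    for slot in range(N_SLOTS):
        state = contexts[slot]                       # restore context
        state, spikes = lif_step(state, slot_inputs[slot])
        contexts[slot] = state                       # save context
        all_spikes.append(spikes)
    return all_spikes

spikes = timestep([[0.5] * N_PHYSICAL] * N_SLOTS)
print(N_SLOTS * N_PHYSICAL, "virtual neurons on", N_PHYSICAL, "physical")
```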
Learning Accelerators
One per tile, each with a 28-opcode ISA and 80 registers. Loihi 2 has one global learning engine; N3 has 16 running in parallel. No chip-wide bottleneck. Learning at wire speed.
Precision-Adaptive
Configure neuron precision per core. At 8-bit, neuron density doubles to over 1 million total. FACTOR low-rank synapse compression saves 2–8× memory. Same network, multiple precision targets.
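The memory arithmetic behind low-rank compression is easy to check (the matrix size and target rank are invented; this measures parameter counts, not the FACTOR on-chip format): factor a dense weight matrix W into A·B via SVD and compare parameter counts.

```python
import numpy as np

# Illustrative low-rank synapse compression: approximate a 512x512
# weight matrix as A @ B with rank r, and measure the memory ratio.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

r = 64                                  # target rank (toy choice)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                    # 512 x r factor
B = Vt[:r, :]                           # r x 512 factor

ratio = W.size / (A.size + B.size)      # dense params vs. factored params
print(f"compression {ratio:.1f}x")      # 512*512 / (2 * 512*64) = 4.0x
```

At rank 64 the factored form stores 4× fewer parameters; shrinking or growing the rank moves the ratio through the 2–8× range quoted above, at the cost of approximation error.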
Continual Learning
Hardware metaplasticity (3-bit synaptic consolidation), homeostatic plasticity (firing rate tracking with synaptic scaling), and synaptic fatigue. On-chip mechanisms for networks that learn continuously without catastrophic forgetting.
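A toy model of 3-bit consolidation (the update rule, learning rate, and scaling are assumptions for illustration, not the N3 circuit): each synapse carries a saturating 3-bit counter, repeated potentiation increments it, and the effective learning rate shrinks as the counter grows, so well-used synapses resist being overwritten.

```python
# Toy 3-bit metaplasticity: the consolidation counter meta (0-7)
# halves the effective learning rate at each level.
def apply_update(weight, meta, delta, base_lr=0.1):
    """Scale the weight update by 1/2^meta; consolidate on potentiation."""
    lr = base_lr / (2 ** meta)      # consolidated synapses move less
    weight += lr * delta
    if delta > 0 and meta < 7:      # saturating 3-bit counter
        meta += 1
    return weight, meta

w, m = 0.5, 0
for _ in range(4):                  # four potentiation events
    w, m = apply_update(w, m, +1.0)
print(round(w, 4), m)               # 0.6875 4
```

Each successive update moves the weight half as far as the last, which is the qualitative behavior that protects consolidated memories from being erased by new learning.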
Full Die
128 neuromorphic cores across 16 tiles, 4 RISC-V management CPUs, async hybrid network-on-chip, and 36 MB on-chip SRAM. Every architectural feature implemented in RTL and validated on FPGA.
Per-core toggle between spiking and classical multiply-accumulate. Deploy hybrid ANN/SNN networks natively.
Hardware-scheduled TDM for 680+ virtual networks. Context switching with dirty tracking and compressed DMA.
16 independent learning accelerators. 28-opcode ISA, 80 registers each. 16× the throughput of a single global engine.
L1 per-core, L2 tile cache, L3 DRAM-backed, L4 CXL fabric. 500M+ addressable synapses with hardware LRU management.
32 shared parameter sets per core. 4,096 neurons in 96 KB L1 instead of 1,024. 4× density increase.
3-bit consolidation state per synapse. Automatic meta-learning in hardware. No microcode overhead.
Low-rank synapse format using SVD decomposition. Hardware STORE_A/STORE_B opcodes. 2–8× memory savings.
Two-pass hardware WTA with configurable groups and k-winners. Competitive coding for sparse representations.
DELTA, BURST, and ADAPTIVE encoding on inter-chip links. 2–8× effective bandwidth for multi-chip systems.
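A two-pass k-winners-take-all selection, as described in the competitive-coding item above, can be sketched as follows (group layout and k are illustrative; this is a software model, not the hardware WTA unit): pass 1 finds the k-th largest activation in each group, pass 2 keeps only neurons at or above that threshold.

```python
# Two-pass k-WTA: pass 1 computes a per-group threshold, pass 2
# selects up to k winners per group (ties broken by index order).
def k_wta(activations, groups, k):
    winners = [0] * len(activations)
    for group in groups:
        vals = [activations[i] for i in group]
        thresh = sorted(vals, reverse=True)[k - 1]   # pass 1: threshold
        kept = 0
        for i in group:                              # pass 2: select
            if activations[i] >= thresh and kept < k:
                winners[i] = 1
                kept += 1
    return winners

acts = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8]
print(k_wta(acts, groups=[[0, 1, 2], [3, 4, 5]], k=2))   # [0, 1, 1, 1, 0, 1]
```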
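The intuition behind DELTA-style spike compression can be shown with a toy encoder (the wire format here is assumed, not the N3 link protocol): instead of sending absolute neuron addresses for a sorted spike list, send the first address plus small deltas, which pack into far fewer bits when spiking neurons are clustered.

```python
# Toy delta encoding of a sorted spike-address list. Small deltas
# compress well when active neurons are clustered in address space.
def delta_encode(addresses):
    out = [addresses[0]]
    for prev, cur in zip(addresses, addresses[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

spikes = [1000, 1003, 1004, 1010, 1012]
enc = delta_encode(spikes)
print(enc)                           # [1000, 3, 1, 6, 2]
assert delta_decode(enc) == spikes   # lossless round trip
```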
The neurocore SDK targets N3 with the same Python API you already know. Hardware-accurate simulation on CPU, GPU, and FPGA.
```shell
pip install catalyst-cloud
```

```python
import catalyst_cloud as cc

# Target N3 hardware
net = cc.Network(chip="n3")
inp = net.add_population("input", 784)

# N3: hybrid ANN layer (INT8 MAC)
ann = net.add_population("encoder", 256, neuron_model="ann_int8")

# N3: spiking recurrent layer
exc = net.add_population("hidden", 512, neuron_model="adaptive_lif")
out = net.add_population("output", 10)

net.connect(inp, ann, weight=0.3)
net.connect(ann, exc, learning="stdp")
net.connect(exc, out)

result = cc.run(net, timesteps=1000)
print(result.spike_trains)
```
| | Catalyst N3 | Intel Loihi 2 | BrainChip Akida 2 | SpiNNaker 2 |
|---|---|---|---|---|
| Cores | 128 (16 tiles) | 128 | 8 NPUs | 152 (ARM) |
| Neurons (24-bit)* | 524,288 | ~1,000,000 | 2,048 | Up to 16M |
| Neurons (8-bit) | 1,048,576 | — | — | — |
| Virtual neurons (TDM) | 4.2M (24-bit) / 8.4M (8-bit) | — | — | — |
| Neuron models | 8 (7 + custom ISA) | 3+ | 1 (LIF) | Software |
| Weight precision | 1–16-bit | 1–8-bit | 1/2/4/8-bit | Software |
| ANN mode | INT8 MAC | — | Yes | — |
| On-chip learning | 16 accelerators | 1 global | Limited | ARM cores |
| Learning ISA | 28 opcodes, 80 reg | Microcode | Fixed | Software |
| Memory levels | 4 (L1–L4) | 2 | 2 | External |
| Virtualization | NeurOS (680+) | — | — | — |
| Spike compression | DELTA/BURST | — | — | — |
| Metaplasticity | Hardware (3-bit) | — | — | — |
| Hardware validated | FPGA validated | ASIC (Intel 4) | ASIC (28nm) | ASIC (22nm) |
| Open design | Yes (BSL 1.1) | No | No | Partial |
* Neuron counts are not directly comparable across platforms. Loihi 2 neurons are 1-bit compartments; N3 neurons are 24-bit with full state (potential, current, traces). N3 at 8-bit precision supports 1,048,576 physical neurons, or up to 8.4M virtual with TDM.
Full architecture specification, benchmark results, and FPGA validation.
Research partnerships, early access, and integration enquiries.