
ADMM-Based Training for Spiking Neural Networks

Training spiking neural networks (SNNs) remains a longstanding challenge due to their non-differentiable spike activation functions. Most existing approaches adapt methods designed for traditional artificial neural networks, such as backpropagation with surrogate gradients. While widely used, these surrogate techniques introduce approximation errors, limit the precise tracking of spike timings, and struggle to scale as SNN architectures deepen. This paper presents a fundamentally new training framework for SNNs based on the Alternating Direction Method of Multipliers (ADMM). Instead of relying on gradient-based updates, the method formulates SNN training as a model-based optimization problem, where weights, membrane potentials, and spike variables are jointly optimized under the constraints of neuronal dynamics. Closed-form iterative updates are derived for each variable, and a dedicated subroutine is proposed to handle the non-differentiability of the Heaviside step function without approximations.
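The paper's exact update rules are not reproduced here, but the minimal sketch below illustrates the general idea of ADMM-style variable splitting for a single discrete-time leaky integrate-and-fire layer: the neuronal dynamics become a constraint, weights are updated in closed form by least squares, the membrane potentials and binary spikes are updated together, and dual variables enforce the dynamics. All names, sizes, the spike-loss term, and the per-neuron enumeration used to handle the Heaviside step exactly are our own simplifying assumptions, not the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem sizes (hypothetical; not taken from the paper).
    T, n_in, n_out = 50, 20, 5           # time steps, input size, output neurons
    beta, theta, rho = 0.9, 1.0, 1.0     # leak factor, firing threshold, ADMM penalty

    x = rng.random((T, n_in))                                  # input currents
    s_target = (rng.random((T, n_out)) < 0.2).astype(float)    # illustrative target spikes

    W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
    u = np.zeros((T, n_out))                        # membrane potentials
    s = np.zeros((T, n_out))                        # binary spike variables
    lam = np.zeros((T, n_out))                      # duals for the dynamics constraint

    def dynamics_rhs(t):
        """Right-hand side of the constraint u_t = beta*u_{t-1} + W x_t - theta*s_{t-1}."""
        u_prev = u[t - 1] if t > 0 else np.zeros(n_out)
        s_prev = s[t - 1] if t > 0 else np.zeros(n_out)
        return beta * u_prev + W @ x[t] - theta * s_prev

    for it in range(30):
        # 1) W-update: closed-form least squares so that W x_t matches the dynamics target.
        targets = np.stack([
            u[t]
            - beta * (u[t - 1] if t > 0 else np.zeros(n_out))
            + theta * (s[t - 1] if t > 0 else np.zeros(n_out))
            + lam[t] / rho
            for t in range(T)
        ])
        B, *_ = np.linalg.lstsq(x, targets, rcond=None)   # solves x @ B ~= targets
        W = B.T

        # 2) (u, s)-update, one time step at a time (Gauss-Seidel style). The binary
        #    spike is chosen by enumeration, so the Heaviside step needs no surrogate.
        for t in range(T):
            v = dynamics_rhs(t) - lam[t] / rho            # value u_t "wants" to take
            for j in range(n_out):
                best = None
                for spike in (0.0, 1.0):
                    # candidate potential consistent with spike = H(u - theta)
                    u_cand = max(v[j], theta) if spike else min(v[j], theta - 1e-6)
                    cost = 0.5 * rho * (u_cand - v[j]) ** 2 + (spike - s_target[t, j]) ** 2
                    if best is None or cost < best[0]:
                        best = (cost, u_cand, spike)
                u[t, j], s[t, j] = best[1], best[2]

        # 3) Dual ascent on the violated dynamics constraint.
        for t in range(T):
            lam[t] += rho * (u[t] - dynamics_rhs(t))

        residual = np.mean([(u[t] - dynamics_rhs(t)) ** 2 for t in range(T)])
        print(f"iter {it:02d}  mean dynamics residual {residual:.4f}")

Because the spike variable is binary, enumerating its two values and picking the cheaper consistent membrane potential treats the Heaviside step exactly, which is the kind of non-differentiability handling the paper argues for; the paper's dedicated subroutine is of course more general than this per-neuron enumeration.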

A key contribution is that this approach tracks spike activation times exactly, avoiding the inaccuracies inherent in surrogate gradient learning. The method also allows the optimization of both network parameters and internal neuron states, offering a structured alternative to stochastic gradient descent–based training. Numerical experiments on the N-MNIST dataset demonstrate the feasibility of the approach and highlight its potential, particularly for shallow SNN architectures. The paper concludes by outlining promising research directions, including improving convergence speed, extending the optimizer to additional SNN layer types, and enhancing scalability to deeper and more complex networks. The authors also note that the ADMM structure naturally supports distributed and federated learning, given its separability across data samples.
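The separability the authors point to is the standard consensus form of ADMM: the training objective decomposes into per-sample (or per-shard) terms that are coupled only through agreement on the shared weights. In notation that is ours rather than the paper's, with N data shards this reads roughly as

    \min_{\{W_i,\,u_i,\,s_i\},\; W} \;\; \sum_{i=1}^{N} \mathcal{L}(s_i;\, y_i)
    \quad \text{s.t.} \quad (u_i, s_i) \text{ follow the neuron dynamics under } W_i,
    \qquad W_i = W \;\; \forall i,

so each worker can carry out its local (W_i, u_i, s_i) updates on its own data, and only the consensus weights W and the corresponding dual variables need to be exchanged between rounds.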
