Dendritic Neural Networks with Equilibrium Propagation
Mirrored from arXiv — Machine Learning.
arXiv:2605.08135v1 Announce Type: new
Abstract: Equilibrium propagation (EP) is a biologically plausible alternative to backpropagation (BP), but its effectiveness can degrade in deeper and more challenging learning settings. In parallel, dendritic neural networks have demonstrated improved performance and generalization when trained with BP, suggesting that structured, biologically inspired architectures may enhance learning. In this work, we investigate the integration of dendritic neural networks with equilibrium propagation using an advanced EP framework. We evaluate the proposed dendritic EP model on MNIST, Kuzushiji-MNIST (KMNIST), and Fashion-MNIST (FMNIST), considering both shallow and deeper architectures. Our results show that dendritic EP achieves performance comparable to standard EP on simple tasks, while providing consistent improvements on more challenging datasets and deeper models. In particular, dendritic EP significantly outperforms standard EP on KMNIST and FMNIST, and approaches the performance of dendritic networks trained with backpropagation through time. To further understand these improvements, we analyze the evolution of hidden states during the free phase. We observe that dendritic EP exhibits higher activation magnitudes and more distributed hidden-state activity than standard EP, indicating that dendritic structure alters the internal network dynamics. These findings suggest that incorporating dendritic structure can enhance the effectiveness of biologically plausible learning algorithms, especially in regimes where standard EP struggles. Our work highlights the importance of architectural design for improving biologically inspired training methods.
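The two-phase training procedure that EP relies on can be sketched in a few lines. The following is a minimal single-example illustration of standard (non-dendritic) EP in the style of Scellier and Bengio's energy-based formulation, not the paper's dendritic model: the network sizes, learning rate, relaxation schedule, and nudging strength `beta` are all illustrative assumptions. The network first relaxes freely to a fixed point, then relaxes again with the output weakly nudged toward the target, and the weight update is a local contrastive rule scaled by `1/beta`.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation, a common choice in EP implementations.
    return np.clip(s, 0.0, 1.0)

def rho_prime(s):
    # Its derivative: 1 inside [0, 1], 0 outside.
    return ((s >= 0.0) & (s <= 1.0)).astype(float)

# Hypothetical toy sizes: 4 inputs, 8 hidden units, 2 outputs.
n_x, n_h, n_y = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_x, n_h))   # input  -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_h, n_y))   # hidden -> output weights

def relax(x, y, beta, steps=60, dt=0.2):
    """Settle the state (h, o) by gradient descent on the total energy
    F = E + beta * C, with cost C = 0.5 * ||o - y||^2.
    beta = 0 gives the free phase; beta > 0 the weakly clamped phase."""
    h, o = np.zeros(n_h), np.zeros(n_y)
    for _ in range(steps):
        dh = rho_prime(h) * (x @ W1 + rho(o) @ W2.T) - h
        do = rho_prime(o) * (rho(h) @ W2) - o + beta * (y - o)
        h, o = h + dt * dh, o + dt * do
    return h, o

def ep_step(x, y, beta=0.5, lr=0.05):
    """One EP update: run both phases, then apply the local contrastive
    rule (1/beta) * (nudged Hebbian product - free Hebbian product)."""
    global W1, W2
    h0, o0 = relax(x, y, beta=0.0)   # free phase
    hb, ob = relax(x, y, beta=beta)  # nudged phase
    W1 += (lr / beta) * (np.outer(x, rho(hb)) - np.outer(x, rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))
    return 0.5 * np.sum((o0 - y) ** 2)  # free-phase squared error

# Train on one fixed toy association; the free-phase error should shrink.
x = rng.uniform(0.0, 1.0, n_x)
y = np.array([1.0, 0.0])
losses = [ep_step(x, y) for _ in range(200)]
```

Note that both phases use only local, Hebbian-style quantities, which is the source of EP's biological plausibility; the dendritic variant studied in the paper changes the architecture over which this relaxation runs, not the two-phase scheme itself.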