Scaling Up Liquid-Resistance Liquid-Capacitance Networks for Efficient Sequence Modeling

M贸nika Farsang, Ramin Hasani, Daniela Rus, Radu Grosu

Published: 2025/5/27

Abstract

We present LrcSSM, a $\textit{non-linear}$ recurrent model that processes long sequences as fast as today's linear state-space layers. By constraining the Jacobian matrix to be diagonal, LrcSSM can solve the full sequence in parallel, giving $\mathcal{O}(TD)$ time and memory and only $\mathcal{O}(\log T)$ sequential depth, for input-sequence length $T$ and state dimension $D$. Moreover, LrcSSM offers a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Importantly, the diagonal Jacobian structure of our model incurs no performance loss compared to the original model with a dense Jacobian, and the approach generalizes to other non-linear recurrent models, demonstrating broader applicability. On a suite of long-range forecasting tasks, we demonstrate that LrcSSM outperforms Transformers, LRU, S5, and Mamba.
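To make the core mechanism concrete, the following is a minimal sketch (not the authors' code) of how a recurrence with a diagonal Jacobian can be solved in parallel. Once each step's diagonal Jacobian $a_t$ and offset $b_t$ are computed independently per time step, the element-wise linear recurrence $x_t = a_t \odot x_{t-1} + b_t$ can be evaluated with an associative scan, yielding $\mathcal{O}(TD)$ work and $\mathcal{O}(\log T)$ sequential depth. The function and variable names here are illustrative, and JAX is assumed only for convenience:

```python
# Hedged sketch: solving x_t = a_t * x_{t-1} + b_t (element-wise) with a
# parallel associative scan. A diagonal Jacobian means each of the D state
# channels is an independent scalar recurrence, so the combine step is a
# simple element-wise composition of affine maps.
import jax
import jax.numpy as jnp

def diag_recurrence_scan(a, b):
    """a, b: arrays of shape (T, D) holding the per-step diagonal Jacobians
    a_t and input-dependent offsets b_t (assumed precomputed in parallel)."""
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # Composing x -> a_r * (a_l * x + b_l) + b_r gives another affine map.
        return a_r * a_l, a_r * b_l + b_r
    # associative_scan needs only O(log T) sequential steps.
    _, x = jax.lax.associative_scan(combine, (a, b), axis=0)
    return x  # x[t] equals the recurrence unrolled from x_{-1} = 0

# Toy usage: T = 8 steps, D = 4 state channels
key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.uniform(key_a, (8, 4))   # stable if entries lie in [0, 1)
b = jax.random.normal(key_b, (8, 4))
states = diag_recurrence_scan(a, b)
print(states.shape)  # (8, 4)
```

The same scan pattern underlies linear state-space layers; the point emphasized in the abstract is that the diagonal-Jacobian constraint lets a non-linear, input-varying recurrence reuse it without a loss in accuracy.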
