Balancing Sparse RNNs with Hyperparameterization Benefiting Meta-Learning

Quincy Hershey, Randy Paffenroth

Published: September 18, 2025

Abstract

This paper develops alternative hyperparameters for specifying sparse Recurrent Neural Networks (RNNs). These hyperparameters allow sparsity to vary across the model's trainable weight matrices while improving overall performance. The architecture also enables the definition of a novel metric, hidden proportion, which seeks to balance the distribution of unknowns within the model and provides significant explanatory power for model performance. Together, the varied-sparsity RNN architecture and the hidden proportion metric generate significant performance gains while improving a priori performance expectations. This combined approach provides a path toward generalized meta-learning applications and model optimization based on intrinsic characteristics of the data set, including its input and output dimensions.
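To make the idea concrete, below is a minimal sketch (not the authors' implementation) of an RNN cell whose input, hidden, and output weight matrices each carry an independent sparsity level, together with a hypothetical hidden-proportion metric. The definition used here, the share of the model's trainable (unmasked) weights that lie in the hidden-to-hidden matrix, is an illustrative assumption rather than the paper's formula; the class and function names are likewise invented for this example.

```python
# Sketch of a varied-sparsity RNN cell; the hidden_proportion definition
# below is an assumption for illustration, not the paper's exact metric.
import numpy as np

rng = np.random.default_rng(0)

def sparse_mask(shape, density, rng):
    """Binary mask keeping roughly `density` of the entries trainable."""
    return (rng.random(shape) < density).astype(np.float64)

class VariedSparsityRNNCell:
    def __init__(self, n_in, n_hidden, n_out,
                 density_in=0.5, density_hidden=0.2, density_out=0.5, rng=rng):
        # Each weight matrix gets its own sparsity (density of nonzeros),
        # so sparsity can vary across the model as the abstract describes.
        self.W_in = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
        self.W_out = rng.standard_normal((n_out, n_hidden)) * 0.1
        self.M_in = sparse_mask(self.W_in.shape, density_in, rng)
        self.M_hh = sparse_mask(self.W_hh.shape, density_hidden, rng)
        self.M_out = sparse_mask(self.W_out.shape, density_out, rng)

    def step(self, x, h):
        # Masks are applied multiplicatively, so pruned weights stay inactive.
        h_new = np.tanh((self.W_in * self.M_in) @ x
                        + (self.W_hh * self.M_hh) @ h)
        y = (self.W_out * self.M_out) @ h_new
        return y, h_new

    def hidden_proportion(self):
        # Assumed metric: fraction of all trainable (unmasked) weights
        # that sit in the hidden-to-hidden matrix.
        hidden = self.M_hh.sum()
        total = self.M_in.sum() + self.M_hh.sum() + self.M_out.sum()
        return hidden / total

cell = VariedSparsityRNNCell(n_in=8, n_hidden=32, n_out=4)
h = np.zeros(32)
y, h = cell.step(rng.standard_normal(8), h)
print(f"hidden proportion: {cell.hidden_proportion():.3f}")
```

Under this reading, the metric depends only on the chosen sparsity levels and the input, hidden, and output dimensions, which is consistent with the abstract's claim that performance expectations can be formed a priori from intrinsic characteristics of the data set.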
