Developing Training Procedures for Piecewise-linear Spline Activation Functions in Neural Networks

William H Patty

Published: 2025/9/17

Abstract

Activation functions in neural networks are typically selected from a small set of empirically validated static functions such as ReLU, tanh, or sigmoid. However, by optimizing the shapes of a network's activation functions, we can train models that are more parameter-efficient and accurate, assigning each neuron an activation better suited to its role. In this paper, I present and compare nine training methodologies to explore the dual-optimization dynamics of neural networks with parameterized linear B-spline activation functions. The experiments achieve up to 94% lower end-model error rates in FNNs and 51% lower rates in CNNs compared to traditional ReLU-based models. These gains come at the cost of additional development and training complexity, as well as increased end-model latency.
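To make the core idea concrete, here is a minimal sketch in PyTorch (the abstract does not specify a framework) of a piecewise-linear activation whose knot heights are trained jointly with the network weights. The class name, fixed evenly spaced knot grid, and ReLU-shaped initialization are illustrative assumptions, not the paper's actual parameterization or training procedure.

```python
import torch
import torch.nn as nn


class PiecewiseLinearActivation(nn.Module):
    """Hypothetical learnable piecewise-linear activation.

    Knot locations are fixed and evenly spaced on [x_min, x_max]; only the
    knot heights are learned, so the optimizer shapes the activation while
    it trains the rest of the network.
    """

    def __init__(self, num_knots: int = 8, x_min: float = -3.0, x_max: float = 3.0):
        super().__init__()
        self.x_min, self.x_max = x_min, x_max
        self.register_buffer("knots", torch.linspace(x_min, x_max, num_knots))
        # Initialize heights to ReLU evaluated at the knots, so training
        # starts from a familiar activation shape.
        self.heights = nn.Parameter(torch.relu(self.knots.clone()))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp inputs into the knot range, then linearly interpolate
        # between the two neighboring learned knot heights.
        x = x.clamp(self.x_min, self.x_max)
        step = self.knots[1] - self.knots[0]
        idx = ((x - self.knots[0]) / step).floor().long()
        idx = idx.clamp(0, self.knots.numel() - 2)
        frac = (x - self.knots[idx]) / step
        return (1 - frac) * self.heights[idx] + frac * self.heights[idx + 1]


# Usage: drop in wherever a fixed activation like nn.ReLU() would go.
act = PiecewiseLinearActivation()
y = act(torch.randn(4, 16))
```

Because the heights are ordinary `nn.Parameter`s, they receive gradients alongside the layer weights, which is one simple way to realize the dual optimization the abstract describes.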