Enhancing Low-Rank Adaptation with Structured Nonlinear Transformations

Guanzhi Deng, Mingyang Liu, Dapeng Wu, Yinqiao Li, Linqi Song

Published: 2025/9/26

Abstract

Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning method for large language models. However, its purely linear update limits expressiveness. We propose LoRAN, a nonlinear extension of LoRA that applies lightweight transformations to the low-rank updates. We further introduce Sinter, a sine-based activation that adds structured perturbations without increasing the parameter count. Experiments on summarization and classification tasks show that LoRAN consistently improves over QLoRA. Ablation studies reveal that Sinter outperforms standard activations such as Sigmoid, ReLU, and Tanh, highlighting the importance of activation design in low-rank tuning.
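To make the idea concrete, below is a minimal sketch of a LoRAN-style layer in PyTorch. The abstract does not specify the exact form of the nonlinearity or where it is inserted, so the `sinter` function (identity plus a bounded sine perturbation, with hypothetical constants `alpha` and `beta`), the `LoRANLinear` class, and the placement of the activation between the down- and up-projections are all assumptions for illustration, not the authors' exact method.

```python
# Sketch of a LoRAN-style adapter layer. Assumptions: the sine-based "Sinter"
# activation is modeled as x + alpha * sin(beta * x), which perturbs the
# low-rank path without adding trainable parameters; its placement between
# the A and B projections is hypothetical.
import torch
import torch.nn as nn


def sinter(x: torch.Tensor, alpha: float = 0.1, beta: float = 1.0) -> torch.Tensor:
    """Hypothetical sine-based activation: identity plus a structured sine perturbation."""
    return x + alpha * torch.sin(beta * x)


class LoRANLinear(nn.Module):
    """Frozen pretrained linear layer plus a nonlinear low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze pretrained weights, as in LoRA/QLoRA
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))        # up-projection, zero-init
        self.scaling = scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard LoRA adds x @ A^T @ B^T to the frozen output; LoRAN applies
        # a lightweight nonlinear transformation on the low-rank path.
        h = x @ self.A.t()   # project into the rank-r subspace
        h = sinter(h)        # structured nonlinear transformation (assumed placement)
        return self.base(x) + self.scaling * (h @ self.B.t())
```

A usage example under these assumptions: wrap an existing `nn.Linear` from a transformer block, e.g. `LoRANLinear(model.q_proj, rank=8)`, and train only the adapter parameters `A` and `B` while the base weights stay frozen.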