Tiny-QMoE
Jack Cashman, Jiaqi Nie
Published: 2025/9/26
Abstract
The QMoE framework provides a practical approach for compressing massive Mixture-of-Experts (MoE) models. It addresses memory requirements that often reach terabyte scale, and it benefits from the high sparsity of MoE models, which lends itself naturally to compression techniques. QMoE is limited, however, to MoE architectures and does not evaluate its use with non-MoE systems. Moreover, this prior work targets large servers equipped with the latest NVIDIA hardware, such as the H100 and V100 with up to 80 GB of HBM (High Bandwidth Memory); it does not consider significantly more constrained environments such as mobile devices, where an iPhone, for example, has only 4 to 8 GB of unified memory that must also be shared with the operating system and other processes. Although edge devices such as phones and laptops are becoming increasingly powerful, they remain far from the capabilities of server machines built around NVIDIA data-center GPUs. An additional constraint we must consider is latency: the round-trip time of sending a request to an LLM server and receiving the response is additional waiting time that can be eliminated. We may also want to use LLM technology in environments without a reliable network connection.