Color2Struct: efficient and accurate deep-learning inverse design of structural color with controllable inference

Sichao Shan, Han Ye, Zhengmei Yang, Junpeng Hou, Zhitong Li

Published: 2025/10/1

Abstract

Deep learning (DL) has revolutionized many fields, such as materials design and protein folding. Recent studies have demonstrated the advantages of DL in the inverse design of structural colors: it effectively learns the complex nonlinear relations between structural parameters and optical responses dictated by the physical laws of light. While several models, such as tandem neural networks and generative adversarial networks, have been proposed, these methods can be biased and are difficult to scale up to complex structures. Moreover, the difficulty of incorporating physical constraints at inference time hinders the controllability of the model-predicted spectra. In this work, we propose Color2Struct, a universal framework for efficient and accurate inverse design of structural colors with controllable predictions. By utilizing sampling bias correction, adaptive loss weighting, and physics-guided inference, Color2Struct improves the predictions of tandem networks by 65% (color difference) and 48% (short-wave near-infrared reflectivity) when designing RGB primary colors. These improvements make Color2Struct highly promising for applications in high-end display technologies and solar thermal energy harvesting. In experiments, nanostructure samples are fabricated using a standard thin-film deposition method, and their reflectance spectra are measured to validate the designs. Our work provides an efficient and highly optimized method for controllable inverse design, benefiting future explorations of more intricate structures. The proposed framework can also be generalized to a wide range of fields beyond nanophotonics.
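
The abstract names a tandem-network backbone trained with adaptive loss weighting. As a rough illustration of that idea only (not the authors' implementation), the PyTorch sketch below pairs a frozen forward surrogate with an inverse designer and balances a visible-band color loss against a short-wave near-infrared reflectivity loss using a simple magnitude-based weighting rule; the layer sizes, wavelength bands, and weighting heuristic are all assumptions made for exposition.

```python
# Illustrative sketch of a tandem network with adaptive loss weighting.
# Not the authors' released code; architecture and loss terms are assumed.
import torch
import torch.nn as nn

class ForwardSurrogate(nn.Module):
    """Maps structure parameters (e.g., layer thicknesses) to a reflectance spectrum."""
    def __init__(self, n_params=4, n_wavelengths=301):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_wavelengths), nn.Sigmoid(),  # reflectance in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

class InverseDesigner(nn.Module):
    """Maps a target reflectance spectrum back to structure parameters."""
    def __init__(self, n_wavelengths=301, n_params=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_wavelengths, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_params), nn.Sigmoid(),  # normalized parameters
        )
    def forward(self, spectrum):
        return self.net(spectrum)

def adaptive_weights(loss_terms):
    """Weight each loss term inversely to its current magnitude so no single
    objective dominates training (one simple form of adaptive weighting)."""
    with torch.no_grad():
        mags = torch.stack([t.detach() for t in loss_terms])
        w = 1.0 / (mags + 1e-8)
        w = w / w.sum()
    return [wi * ti for wi, ti in zip(w, loss_terms)]

# Tandem training: the forward surrogate is pretrained and frozen; only the
# inverse designer is updated so its outputs reproduce the target spectra.
forward_model = ForwardSurrogate()
inverse_model = InverseDesigner()
for p in forward_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
visible = slice(0, 200)   # assumed index range for the visible band
nir = slice(200, 301)     # assumed index range for the short-wave NIR band

target = torch.rand(32, 301)  # placeholder target spectra for demonstration
for step in range(100):
    params = inverse_model(target)
    predicted = forward_model(params)
    color_loss = nn.functional.mse_loss(predicted[:, visible], target[:, visible])
    nir_loss = nn.functional.mse_loss(predicted[:, nir], target[:, nir])
    loss = sum(adaptive_weights([color_loss, nir_loss]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this toy setup, freezing the surrogate is what makes the tandem scheme work: gradients flow through the fixed forward model into the inverse designer, so the designer is penalized for structures whose simulated spectra miss the target, while the inverse-to-forward composition avoids the one-to-many ambiguity of direct inverse regression.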