Title: PerceiverS: A Multi-Scale Perceiver with Effective Segmentation for Long-Term Expressive Symbolic Music Generation
Authors: Yi, Yungang; Li, Weihua; Kuo, Matthew; Bai, Quan
Dates: 2025-09-22; 2025-09-22; 2025-09-18
Journal: IEEE Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers (IEEE), pp. 1-13
ISSN: 2998-4173 (Print); 2998-4173 (Online)
DOI: 10.1109/taslpro.2025.3611836
Handle: http://hdl.handle.net/10292/19836
Type: Journal Article
Access: OpenAccess
Subjects: 0801 Artificial Intelligence and Image Processing; 0906 Electrical and Electronic Engineering; Speech-Language Pathology & Audiology

Abstract: AI-based music generation has made significant progress in recent years. However, generating symbolic music that is both structured over long time spans and expressive remains a major challenge. In this paper, we propose PerceiverS (Segmentation and Scale), a novel architecture designed to address this issue by leveraging both Effective Segmentation and Multi-Scale attention mechanisms. Our approach enhances symbolic music generation by simultaneously learning long-term structural dependencies and short-term expressive details. By combining cross-attention and self-attention in a Multi-Scale setting, PerceiverS captures long-range musical structure while preserving performance nuances. The proposed model has been evaluated on the Maestro dataset and demonstrates improvements in generating coherent and diverse music, characterized by both structural consistency and expressive variation. Project demos and generated music samples are available at https://perceivers.github.io.

Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
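To make the abstract's description of "combining cross-attention and self-attention" concrete, the following is a minimal sketch of a generic Perceiver-style block in PyTorch: a small set of learned latents cross-attends to a long sequence of embedded symbolic-music events, then refines itself with self-attention. All module names, shapes, and hyperparameters here are illustrative assumptions; this is not the authors' PerceiverS implementation, and it omits the paper's Effective Segmentation and Multi-Scale components.

```python
import torch
import torch.nn as nn


class PerceiverStyleBlock(nn.Module):
    """Illustrative sketch: latents cross-attend to a long input, then self-attend."""

    def __init__(self, dim: int = 256, num_latents: int = 64, num_heads: int = 4):
        super().__init__()
        # Learned latent array: a short summary sequence, much shorter than the input.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        # Cross-attention: latents (queries) attend to the full input (keys/values).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention: latents attend to each other to model global structure.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim), e.g. embedded symbolic-music events.
        batch = tokens.size(0)
        latents = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Latents gather long-range context from the (possibly very long) sequence.
        attended, _ = self.cross_attn(latents, tokens, tokens)
        latents = self.norm1(latents + attended)
        # Latents exchange information among themselves.
        refined, _ = self.self_attn(latents, latents, latents)
        latents = self.norm2(latents + refined)
        return latents + self.ff(latents)


if __name__ == "__main__":
    block = PerceiverStyleBlock()
    events = torch.randn(2, 4096, 256)  # a long (hypothetical) event sequence
    summary = block(events)
    print(summary.shape)  # torch.Size([2, 64, 256])
```

The design point this sketch illustrates is that cross-attention cost grows with the input length only linearly (queries are the fixed-size latents), which is what makes attending over long musical contexts tractable; how PerceiverS segments the input and mixes attention scales is described in the paper itself.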