The Application of LSTM in the AI-Based Enhancement of Classical Compositions


Dzikri Rahadian Fudholi
Delfia Nur Anrianti Putri
Raden Bagus Muhammad AdryanPutra Adhy Wijaya
Jonathan Edmund Kusnadi
Jovinca Claudia Amarissa

Abstract

Music enhancement through deep learning offers an innovative approach to refining and augmenting classical compositions. Leveraging a comprehensive dataset of classical piano MIDI files, this study employs Long Short-Term Memory (LSTM) networks with attention mechanisms for music refinement. The model, trained on a diverse set of compositions, captures tempo nuances well but struggles to reproduce varied pitch patterns. Assessments by 28 listeners indicate a positive reception, particularly for melody integration, which scored 8 out of 10. Generated bass lines, while praised for their cohesion, received slightly lower scores, suggesting room to improve their originality and impact. These findings underscore the LSTM model's ability to generate harmonious melodies and highlight areas for refinement, particularly in crafting more inventive bass lines within classical compositions. This study contributes to automated music refinement and can guide further developments in LSTM-based music generation techniques.
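
The abstract describes an LSTM with an attention mechanism trained on classical piano MIDI data. The paper's exact architecture is not reproduced here, so the following is only a minimal sketch of one plausible form in Keras, assuming a next-note prediction setup over MIDI pitch tokens; the sequence length, layer widths, and pooling choice are illustrative assumptions rather than the authors' settings.

# Minimal sketch (not the authors' implementation) of an LSTM-with-attention
# model for next-note prediction on MIDI pitch sequences. All hyperparameters
# below are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN = 64        # length of the input note window (assumed)
VOCAB_SIZE = 128    # MIDI pitch range 0-127

notes_in = layers.Input(shape=(SEQ_LEN,), dtype="int32", name="note_ids")
x = layers.Embedding(VOCAB_SIZE, 96)(notes_in)                 # embed note tokens
h = layers.LSTM(256, return_sequences=True)(x)                 # per-step hidden states
context = layers.Attention()([h, h])                           # simple self-attention over states
pooled = layers.GlobalAveragePooling1D()(context)              # summarize attended states
out = layers.Dense(VOCAB_SIZE, activation="softmax")(pooled)   # distribution over the next pitch

model = Model(notes_in, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy usage: random note windows stand in for sequences extracted from MIDI files.
X = np.random.randint(0, VOCAB_SIZE, size=(32, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(32,))
model.fit(X, y, epochs=1, verbose=0)

In such a setup, enhancement would proceed by conditioning on a window of notes from an existing composition and sampling continuations or substitute voices (e.g. a bass line) from the predicted pitch distribution.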


How to Cite
Fudholi, D., Putri, D., Wijaya, R., Kusnadi, J., & Amarissa, J. (2024). The Application of LSTM in the AI-Based Enhancement of Classical Compositions. Journal of Informatics Information System Software Engineering and Applications (INISTA), 7(1), 107-117. https://doi.org/10.20895/inista.v7i1.1628
