Classification of Marine Fish Based on Image Using Convolutional Neural Network Algorithm and VGG16


Dimas Adira Wibisono
Auliya Buhanuddin

Abstract

Marine fish are among the natural resources most frequently consumed by people. However, several types of marine fish are prohibited from consumption because they are nearly extinct. Additionally, some fish species contain high levels of mercury that can be harmful to humans if consumed. Because of the large number of marine fish species, identifying them is challenging without knowledge of fisheries. Meanwhile, computers have become highly capable devices that facilitate various human activities, making it possible to build systems that process information from images, a task known as image classification. Numerous methods can be employed to design an image classification system, one of which is transfer learning. This study aimed to design an image classification system using the transfer learning method with a pre-trained VGG16 model and a Convolutional Neural Network algorithm. The results showed a training data accuracy of 100% and a validation data accuracy of 99.3%, with an overall accuracy of 84% and a loss value of 0.6591.
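
As a rough illustration of the transfer-learning approach described in the abstract, the sketch below loads a VGG16 model pre-trained on ImageNet, freezes its convolutional layers, and attaches a small classification head. It assumes a Keras/TensorFlow implementation; the class count, input size, and head layers are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of VGG16 transfer learning for fish-image classification (Keras/TensorFlow).
# NUM_CLASSES, IMG_SIZE, and the classifier head are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 8          # assumed number of fish species
IMG_SIZE = (224, 224)    # VGG16's standard input resolution

# Load VGG16 pre-trained on ImageNet, without its fully connected top layers.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze convolutional features for transfer learning

# Attach a small classification head on top of the frozen feature extractor.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training and validation would then use image datasets, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Freezing the pre-trained convolutional base and training only the new head is the standard way to reuse VGG16 features on a small domain-specific dataset such as a collection of marine-fish photographs.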

Article Details

How to Cite
Wibisono, D., & Buhanuddin, A. (2024). Classification of Marine Fish Based on Image Using Convolutional Neural Network Algorithm and VGG16. Journal of Informatics Information System Software Engineering and Applications (INISTA), 6(2), 116-123. https://doi.org/10.20895/inista.v6i2.1466
