PERBANDINGAN ALGORITMA BOOSTING UNTUK KLASIFIKASI LINGKUNGAN TEKTONIK GEOKIMIA VULKANIK

Hans Santoso
Sabrina Phalosa Phai
Sarah Barbara
Maryanto

Abstract

This study aims to determine the most effective machine learning model for classifying tectonic settings based on geochemical composition. Using a dataset from the GEOROC database, three gradient boosting algorithms (XGBoost, LightGBM, and CatBoost) were evaluated under several scenarios, including 70:30 and 80:20 train-test splits. The best-performing model was then optimized with Grid Search Cross-Validation (GridSearchCV). The results show that the LightGBM model, after hyperparameter tuning under the 80:20 scenario, achieved the highest performance with an accuracy of 80.87%. A subsequent feature importance analysis identified Al₂O₃ (aluminium oxide), Na₂O (sodium oxide), and FeOT (total iron oxide) as the three most significant predictors. The study demonstrates that LightGBM is a superior and reliable approach for automated geochemical classification tasks.
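
As a minimal sketch of the workflow described above (an 80:20 train-test split, GridSearchCV tuning of LightGBM, and a feature importance ranking), the following Python snippet uses scikit-learn and the lightgbm package. The file name georoc_volcanic_geochemistry.csv, the tectonic_setting label column, and the parameter grid are illustrative assumptions, not the authors' actual code or settings.

# Minimal sketch (assumed workflow, not the authors' code): 80:20 split,
# GridSearchCV-tuned LightGBM, and feature importance ranking.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score

# Hypothetical GEOROC-derived table: oxide columns plus a tectonic setting label.
df = pd.read_csv("georoc_volcanic_geochemistry.csv")
X = df.drop(columns=["tectonic_setting"])   # e.g. SiO2, Al2O3, Na2O, FeOT, ...
y = df["tectonic_setting"]

# 80:20 scenario, stratified so each tectonic setting keeps its class proportion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Hyperparameter tuning with Grid Search Cross-Validation (illustrative grid).
param_grid = {
    "num_leaves": [31, 63],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [200, 500],
}
grid = GridSearchCV(
    LGBMClassifier(random_state=42), param_grid, cv=5, scoring="accuracy"
)
grid.fit(X_train, y_train)

# Held-out accuracy and the top predictors by feature importance.
best = grid.best_estimator_
print("Test accuracy:", accuracy_score(y_test, best.predict(X_test)))
importances = pd.Series(best.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(3))

Swapping the estimator for XGBClassifier or CatBoostClassifier, or changing test_size to 0.3, would reproduce the other comparison scenarios mentioned in the abstract.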

How to Cite

PERBANDINGAN ALGORITMA BOOSTING UNTUK KLASIFIKASI LINGKUNGAN TEKTONIK GEOKIMIA VULKANIK. (2026). Jurnal Ilmu Komputer Dan Sistem Informasi, 14(1). https://doi.org/10.24912/vqd3yd53
