DISTANCE AND ACCURACY IN OBJECT DETECTION BASED ON YOLOV8 COMPUTER VISION ALGORITHM
Abstract
Artificial intelligence is on the rise and has undergone massive growth in industry, especially in computer vision. The emergence of computer vision applications such as autonomous cars, robotics, and surveillance has challenged the confidence accuracy of artificial intelligence in detecting objects. Many artificial intelligence algorithms are used in industry; one of them is You Only Look Once version 8 (YOLOv8), a deep learning model for object detection. YOLOv8, developed by Ultralytics as a successor to the original YOLO introduced by Joseph Redmon and Ali Farhadi, is a powerful method for detecting objects in real time because it can process high-resolution images at high speed. This research examines the accuracy of YOLOv8 in detecting an object at distances ranging from far to very close. The dataset used to train the YOLOv8 model is collected by photographing an object under constant lighting at different distances. The aim of this research is to determine the distance at which the YOLOv8 computer vision model detects objects most effectively. The hypothesis is that there is a connection between distance and the detection accuracy of YOLOv8: as the distance increases, detection accuracy decreases, while a closer object yields higher detection accuracy. Based on the results, it can be concluded that a YOLOv8 model achieves its highest accuracy at a certain distance.
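As a rough illustration of the measurement described above, the following is a minimal Python sketch using the public Ultralytics YOLOv8 API (yolov8n.pt is the pretrained nano model; the image filenames and distances are hypothetical placeholders, not the paper's actual dataset):

from ultralytics import YOLO

# Load a YOLOv8 model; a custom-trained weight file could be substituted here.
model = YOLO("yolov8n.pt")

# Hypothetical photos of the same object under constant lighting at several distances.
images = {1: "dist_1m.jpg", 3: "dist_3m.jpg", 5: "dist_5m.jpg", 10: "dist_10m.jpg"}

for distance_m, path in images.items():
    # Run detection; conf=0.25 is the minimum confidence threshold for reported boxes.
    for result in model.predict(path, conf=0.25, verbose=False):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            print(f"{distance_m} m: {label} detected with confidence {float(box.conf):.2f}")

Plotting the recorded confidences against distance would then reveal the range at which the model detects the object most reliably.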
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
References
Alnujaidi, K., Alhabib, G., & Alodhieb, A. (2023). Spot-the-Camel: Computer Vision for
Safer Roads. https://arxiv.org/abs/2304.00757v1
Humayun, M., Ashfaq, F., Jhanjhi, N., & Alsadun, M. (2022). Traffic management:
Multi-scale vehicle detection in varying weather conditions using YOLOv4 and spatial
pyramid pooling network. Electronics, 11(17), 2748.
https://www.mdpi.com/2079-9292/11/17/2748
Schwarting, W., Alonso-Mora, J., & Rus, D. (2018). Planning and decision-making for
autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems, 1,
187–210. https://doi.org/10.1146/annurev-control-060117-105157
Khan, A. A., Laghari, A. A., & Awan, S. A. (2021). Machine Learning in Computer
Vision: A Review. EAI Endorsed Transactions on Scalable Information Systems, 8(32),
e4–e4. https://doi.org/10.4108/EAI.21-4-2021.169418
Terven, J. R., & Cordova-Esparza, D. M. (2023). A Comprehensive Review of YOLO:
From YOLOv1 to YOLOv8 and Beyond. https://arxiv.org/abs/2304.00501v1
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified,
real-time object detection. Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 779–788.
https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html
Ju, R., & Cai, W. (2023). Fracture Detection in Pediatric Wrist Trauma X-ray Images
Using YOLOv8 Algorithm. https://arxiv.org/abs/2304.05071v1
Wang, C.-Y., Bochkovskiy, A., & Liao, H.-Y. M. (2022). YOLOv7: Trainable bag-of-freebies
sets new state-of-the-art for real-time object detectors. https://arxiv.org/abs/2207.02696
Ruiz-Ponce, P., Ortiz-Perez, D., Garcia-Rodriguez, J., & Kiefer, B. (2023). POSEIDON:
A Data Augmentation Tool for Small Object Detection Datasets in Maritime
Environments. Sensors, 23(7). https://doi.org/10.3390/s23073691
Ghahremannezhad, H., Shi, H., & Liu, C. (2023). Object Detection in Traffic Videos: A
Survey. IEEE Transactions on Intelligent Transportation Systems, 1–20.
https://doi.org/10.1109/TITS.2023.3258683
Bradski, G. (2000). The OpenCV Library. Dr. Dobb’s Journal of Software Tools.
Kanan, C., & Cottrell, G. W. (2012). Color-to-Grayscale: Does the Method Matter in
Image Recognition? PLOS ONE, 7(1), e29740.
https://doi.org/10.1371/journal.pone.0029740
Triantafillou, E., Larochelle, H., Zemel, R., & Dumoulin, V. (2021). Learning a
universal template for few-shot dataset generalization. International Conference on
Machine Learning, 10424–10433.